\section{Introduction} \vspace{-7pt} Autonomous navigation is a core requirement in building intelligent embodied agents. Consider an autonomous agent being asked to navigate to a `dining table' in an unseen environment as shown in Figure~\ref{fig:teaser}. In terms of semantic understanding, this task involves not only object detection, i.e. knowing what a `dining table' looks like, but also scene understanding of where `dining tables' are more likely to be found. The latter requires a long-term episodic memory as well as learning semantic priors on the relative arrangement of objects in a scene. Long-term episodic memory allows the agent to keep track of explored and unexplored areas. Learning semantic priors allows the agent to use the episodic memory to decide which region to explore next in order to find the target object in the least amount of time. How do we design a computational model for building an episodic memory and using it effectively based on semantic priors for efficient navigation in unseen environments? One popular approach is to use end-to-end reinforcement or imitation learning with recurrent neural networks to build episodic memory and learn semantic priors implicitly~\cite{mirowski2016learning, gupta2017cognitive, zhu2016target, mousavian2019visual}. However, end-to-end learning-based methods suffer from large sample complexity and poor generalization as they memorize object locations and appearance in training environments. Recently,~\Mycite{chaplot2020learning} introduced a modular learning-based system called `Active Neural SLAM' which builds explicit obstacle maps to maintain episodic memory. Explicit maps also allow analytical path planning and thus lead to significantly better exploration and sample complexity. However, Active Neural SLAM, designed for maximizing exploration coverage, does not encode semantics in the episodic memory and thus does not learn semantic priors. 
In this paper, we extend the Active Neural SLAM system to build explicit semantic maps and learn semantic priors using a semantically-aware long-term policy. \begin{figure*}[t!] \centering \includegraphics[width=0.99\linewidth,height=\textheight,keepaspectratio]{images/teaser.pdf} \vspace{-5pt} \caption{\small \textbf{Semantic Skills required for Object Goal navigation.} Efficient Object Goal navigation not only requires passive skills such as object detection, but also active skills such as building an episodic memory and using it effectively to learn semantic priors about the relative arrangement of objects in a scene.} \label{fig:teaser} \vspace{-10pt} \end{figure*} The proposed method, called `Goal-Oriented Semantic Exploration{}' (SemExp{}), makes two improvements over~\cite{chaplot2020learning} to tackle semantic navigation tasks. First, it builds top-down metric maps similar to~\cite{chaplot2020learning} but adds extra channels to encode semantic categories explicitly. Instead of predicting the top-down maps directly from the first-person image as in~\cite{chaplot2020learning}, we use first-person predictions followed by differentiable geometric projections. This allows us to leverage existing pretrained object detection and semantic segmentation models to build semantic maps instead of learning from scratch. Second, instead of using a coverage maximizing goal-agnostic exploration policy based only on obstacle maps, we train a goal-oriented semantic exploration policy which learns semantic priors for efficient navigation. These improvements allow us to tackle a challenging object goal navigation task. Our experiments in visually realistic simulation environments show that SemExp{} outperforms prior methods by a significant margin. The proposed model also won the CVPR 2020 Habitat ObjectNav Challenge\footnote{ \url{https://aihabitat.org/challenge/2020/}}. 
We also demonstrate that SemExp{} achieves similar performance in the real world when transferred to a mobile robot platform. \vspace{-3pt} \section{Related Work} \vspace{-7pt} We briefly discuss related work on semantic mapping and navigation below. \textbf{Semantic Mapping.} There is a large body of work on building obstacle maps both in 2D and 3D using structure from motion and Simultaneous Localization and Mapping (SLAM)~\cite{henry2014rgb, izadiUIST11, snavely2008modeling}. We refer interested readers to the survey by~\Mycite{slam-survey:2015} on SLAM. Some of the more relevant works incorporate semantics in the map using probabilistic graphical models~\cite{bowman2017probabilistic} or using recent learning-based computer vision models~\cite{zhang2018semantic, ma2017multi}. In contrast to these works, we use differentiable projection operations to learn semantic mapping with supervision in the map space. This limits large errors in the map due to small errors in first-person semantic predictions. \textbf{Navigation.} Classical navigation approaches use explicit geometric maps to compute paths to goal locations via path planning~\cite{kavraki1996probabilistic, lavalle2000rapidly, canny1988complexity, sethian1996fast}. The goals are selected based on heuristics such as the Frontier-based Exploration algorithm~\cite{yamauchi1997frontier}. In contrast, we use a learning-based policy that uses semantic priors for selecting exploration goals based on the object goal category. Recent learning-based approaches use end-to-end reinforcement or imitation learning for training navigation policies. 
These include methods which use recurrent neural networks~\cite{mirowski2016learning, lample2016playing, chaplot2017arnold, savva2017minos, hermann2017grounded, chaplot2017gated, savva2019habitat, wijmans2019decentralized}, structured spatial representations~\cite{gupta2017cognitive, parisotto2017neural, chaplot2018active, henriques2018mapnet, gordon2018iqa} and topological representations~\cite{savinov2018semi, savinov2018episodic}. Recent works tackling object goal navigation include~\cite{wu2018learning, yang2018visual, wortsman2019learning, mousavian2019visual}. \Mycite{wu2018learning} exploit structural similarities across environments by building a probabilistic graphical model over semantic information such as room types. Similarly,~\Mycite{yang2018visual} propose to incorporate semantic priors into a deep reinforcement learning framework by using Graph Convolutional Networks.~\Mycite{wortsman2019learning} propose a meta-reinforcement learning approach where an agent learns a self-supervised interaction loss that encourages effective navigation and allows the agent to keep learning even in a test environment.~\Mycite{mousavian2019visual} use semantic segmentation and detection masks obtained by running state-of-the-art computer vision algorithms on the input observation and use a deep network to learn the navigation policy based on them. In all the above methods, the learnt representations are implicit and the models need to learn obstacle avoidance, episodic memory, planning as well as semantic priors implicitly from the goal-driven reward. Explicit map representations have been shown to improve performance as well as sample efficiency over end-to-end learning-based methods for different navigation tasks~\cite{chaplot2020learning, chaplot2020neural}; however, these methods learn semantics implicitly. In this work, we use an explicit structured semantic map representation, which allows us to learn semantically-aware exploration policies and tackle the object-goal navigation task. 
Concurrent work studies the use of similar semantic maps in learning exploration policies for improving object detection systems~\cite{chaplot2020semantic}. \begin{figure*}[t!] \centering \includegraphics[width=0.99\linewidth,height=\textheight,keepaspectratio]{images/overview.pdf} \caption{\small \textbf{Goal-Oriented Semantic Exploration{} Model Overview.} The proposed model consists of two modules, Semantic Mapping and Goal-Oriented Semantic Policy. The Semantic Mapping model builds a semantic map over time and the Goal-Oriented Semantic Policy selects a long-term goal based on the semantic map to reach the given object goal efficiently. A deterministic local policy based on analytical planners is used to take low-level navigation actions to reach the long-term goal.} \label{fig:overview} \vspace{-10pt} \end{figure*} \vspace{-3pt} \section{Method} \vspace{-7pt} \textbf{Object Goal Task Definition.} In the Object Goal task~\cite{savva2017minos, anderson2018evaluation}, the objective is to navigate to an instance of the given object category such as `chair' or `bed'. The agent is initialized at a random location in the environment and receives the goal object category ($G$) as input. At each time step $t$, the agent receives visual observations ($s_t$) and sensor pose readings $x_t$ and takes navigation actions $a_t$. The visual observations consist of first-person RGB and depth images. The action space $\mathcal{A}$ consists of four actions: \texttt{move\_forward, turn\_left, turn\_right, stop}. The agent needs to take the `stop' action when it believes it is close to the goal object. If the distance to the goal object is less than some threshold, $d_s (=1m)$, when the agent takes the stop action, the episode is considered successful. The episode terminates after a fixed maximum number of timesteps $(=500)$. 
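The success criterion above can be sketched as a simple check (a minimal illustration; `episode_success` and its argument names are hypothetical, not from the released code):

```python
import math

def episode_success(agent_pos, goal_pos, took_stop, d_s=1.0):
    """An episode succeeds only if the agent called `stop`
    within d_s (= 1m) of the goal object."""
    dist = math.dist(agent_pos, goal_pos)  # Euclidean distance in meters
    return took_stop and dist < d_s

# Stopping 0.8m from the goal: success.
assert episode_success((0.0, 0.0), (0.8, 0.0), took_stop=True)
# Stopping too far away, or never calling `stop`, fails.
assert not episode_success((0.0, 0.0), (1.5, 0.0), took_stop=True)
assert not episode_success((0.0, 0.0), (0.8, 0.0), took_stop=False)
```

Note that reaching the goal without taking the `stop' action does not count as success, which is what makes the stopping decision part of the learning problem.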
\textbf{Overview.} We propose a modular model called `Goal-Oriented Semantic Exploration{}' (SemExp{}) to tackle the Object Goal navigation task (see Figure~\ref{fig:overview} for an overview). It consists of two learnable modules, `Semantic Mapping' and `Goal-Oriented Semantic Policy'. The Semantic Mapping module builds a semantic map over time and the Goal-Oriented Semantic Policy selects a long-term goal based on the semantic map to reach the given object goal efficiently. A deterministic local policy based on analytical planners is used to take low-level navigation actions to reach the long-term goal. We first describe the semantic map representation used by our model and then describe the modules. \textbf{Semantic Map Representation.} The SemExp{} model internally maintains a semantic metric map, $m_t$, and the pose of the agent, $x_t$. The spatial map, $m_t$, is a ${K \times M \times M}$ matrix where $M \times M$ denotes the map size and each element in this spatial map corresponds to a cell of size $25cm^2$ ($5cm \times 5cm$) in the physical world. $K = C + 2$ is the number of channels in the semantic map, where $C$ is the total number of semantic categories. The first two channels represent obstacles and explored area and the rest of the channels each represent an object category. Each element in a channel represents whether the corresponding location is an obstacle, explored, or contains an object of the corresponding category. The map is initialized with all zeros at the beginning of an episode, $m_0 = [0]^{K \times M \times M}$. The pose $x_t \in \mathbb{R}^3$ denotes the $x$ and $y$ coordinates of the agent and the orientation of the agent at time $t$. The agent always starts at the center of the map facing east at the beginning of the episode, $x_0 = (M/2, M/2, 0.0)$. \textbf{Semantic Mapping.} In order to build a semantic map, we need to predict semantic categories and segmentation of the objects seen in visual observations. 
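The map layout described above can be sketched directly (a minimal illustration; the map side $M$ used here is illustrative, not taken from the paper):

```python
import numpy as np

C = 15        # semantic object categories in the map (as used in the paper)
K = C + 2     # channels: [obstacle, explored, category_1, ..., category_C]
M = 480       # map side in cells; illustrative value: 480 cells x 5cm = 24m

# m_0 = [0]^{K x M x M}: everything unknown at the start of an episode
m = np.zeros((K, M, M), dtype=np.float32)

# x_0 = (M/2, M/2, 0.0): agent at the map center, facing east
x = (M / 2.0, M / 2.0, 0.0)
```

Each cell thus holds a binary vector over obstacle/explored/category flags, which is what makes channel-wise max pooling a natural aggregation rule over time.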
It is desirable to use existing object detection and semantic segmentation models instead of learning from scratch. The Active Neural SLAM model predicts the top-down map directly from RGB observations and thus, does not have any mechanism for incorporating pretrained object detection or semantic segmentation systems. Instead, we predict semantic segmentation in the first-person view and use differentiable projection to transform first-person predictions to top-down maps. This allows us to use existing pretrained models for first-person semantic segmentation. However, small errors in first-person semantic segmentation can lead to large errors in the map after projection. We overcome this limitation by imposing a loss in the map space in addition to the first-person space. Figure~\ref{fig:semantic_mapping} shows an overview of the Semantic Mapping module. The depth observation is used to compute a point cloud. Each point in the point cloud is associated with the predicted semantic categories. The semantic categories are predicted using a pretrained Mask RCNN~\cite{mask_rcnn} on the RGB observation. Each point in the point cloud is then projected in 3D space using differentiable geometric computations to get the voxel representation. The voxel representation is then converted to the semantic map. Summing over the height dimension of the voxel representation for all obstacles, all cells, and each category gives different channels of the projected semantic map. The projected semantic map is then passed through a denoising neural network to get the final semantic map prediction. The map is aggregated over time using spatial transformations and channel-wise pooling as described in~\cite{chaplot2020learning}. The Semantic Mapping module is trained using supervised learning with cross-entropy loss on the semantic segmentation as well as semantic map prediction. 
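The projection step can be sketched as follows. This is a simplified NumPy version (hence not differentiable, unlike the module in the paper); the function name, the camera intrinsics arguments, and the map size are illustrative assumptions, and the obstacle channel (which the paper derives from point heights) is omitted for brevity:

```python
import numpy as np

def project_to_map(depth, seg, fx, fy, cx, cy, C=15, M=240, cell=0.05):
    """Back-project each pixel into a point cloud using pinhole intrinsics,
    tag it with its predicted semantic category, and bin it into 5cm cells
    of an egocentric top-down map with C+2 channels."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    z = depth                       # depth along the camera axis (meters)
    x = (u - cx) * z / fx           # lateral offset (meters)
    m = np.zeros((C + 2, M, M), dtype=np.float32)
    col = np.clip((x / cell).astype(int) + M // 2, 0, M - 1)
    row = np.clip((z / cell).astype(int), 0, M - 1)
    m[1, row, col] = 1.0            # channel 1: explored area
    for cat in range(C):            # channels 2..C+1: one per category
        mask = seg == cat
        m[2 + cat, row[mask], col[mask]] = 1.0
    return m
```

In the actual module these operations are implemented with differentiable tensor ops so that the map-space loss can be backpropagated through the projection.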
The geometric projection is implemented using differentiable operations such that the loss on the semantic map prediction can be backpropagated through the entire module if desired. \begin{figure*}[t!] \centering \includegraphics[width=0.99\linewidth,height=\textheight,keepaspectratio]{images/semantic_mapping.pdf} \caption{\small \textbf{Semantic Mapping.} The Semantic Mapping module takes in a sequence of RGB ($I_t$) and Depth ($D_t$) images and produces a top-down Semantic Map.} \label{fig:semantic_mapping} \vspace{-10pt} \end{figure*} \textbf{Goal-Oriented Semantic Policy.} The Goal-Oriented Semantic Policy decides a long-term goal based on the current semantic map to reach the given object goal ($G$). If the channel corresponding to category $G$ has a non-zero element, meaning that the object goal is observed, it simply selects all non-zero elements as the long-term goal. If the object goal is not observed, the Goal-Oriented Semantic Policy needs to select a long-term goal where a goal category object is most likely to be found. This requires learning semantic priors on the relative arrangement of objects and areas. We use a neural network to learn these semantic priors. It takes the semantic map, the agent's current and past locations, and the object goal as input and predicts a long-term goal in the top-down map space. The Goal-Oriented Semantic Policy is trained using reinforcement learning with the decrease in distance to the nearest goal object as the reward. We sample the long-term goal at a coarse time-scale, once every $u = 25$ steps, similar to the goal-agnostic Global Policy in~\cite{chaplot2020learning}. This reduces the time-horizon for exploration in RL exponentially and consequently reduces the sample complexity. \textbf{Deterministic Local Policy.} The local policy uses the Fast Marching Method~\cite{sethian1996fast} to plan a path to the long-term goal from the current location based on the obstacle channel of the semantic map. 
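The goal-selection rule above can be sketched as follows (a minimal illustration; the function name and the `policy` stub are hypothetical, and the learned network itself is not shown):

```python
import numpy as np

def select_long_term_goal(sem_map, goal_cat, policy):
    """If the goal category already appears on the semantic map, all of its
    non-zero cells become the long-term goal; otherwise defer to the learned
    Goal-Oriented Semantic Policy (stubbed here) to pick a promising cell."""
    goal_channel = sem_map[2 + goal_cat]      # channels 0, 1 are obstacle/explored
    if goal_channel.any():
        return np.argwhere(goal_channel > 0)  # every observed goal cell
    return policy(sem_map, goal_cat)          # learned semantic prior (not shown)
```

For example, with a single `bed' cell already on the map, the long-term goal is that cell; on an empty map, the stub policy decides where to explore next.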
It simply takes deterministic actions along the path to reach the long-term goal. We use a deterministic local policy as compared to the trained local policy in~\cite{chaplot2020learning} as both led to similar performance in our experiments. Note that although the above Semantic Policy acts at a coarse time scale, the Local Policy acts at a fine time scale. At each time step, we update the map and replan the path to the long-term goal. \vspace{-3pt} \section{Experimental Setup} \vspace{-7pt} We use the Gibson~\citep{xiazamirhe2018gibsonenv} and Matterport3D (MP3D)~\citep{chang2017matterport3d} datasets in the Habitat simulator~\citep{savva2019habitat} for our experiments. Both Gibson and MP3D consist of scenes which are 3D reconstructions of real-world environments. For the Gibson dataset, we use the train and val splits of the Gibson tiny set for training and testing respectively, as the test set is held-out for the online evaluation server. We do not use the validation set for hyper-parameter tuning. The semantic annotations for the Gibson tiny set are available from~\Mycite{armeni20193d}. For the MP3D dataset, we use the standard train and test splits. Our training and test set consists of a total of 86 scenes (25 Gibson tiny and 61 MP3D) and 16 scenes (5 Gibson tiny and 11 MP3D), respectively. The observation space consists of RGBD images of size $4 \times 640 \times 480$, base odometry sensor readings of size $3 \times 1$ denoting the change in agent's x-y coordinates and orientation, and goal object category represented as an integer. The action space consists of four actions: \texttt{move\_forward, turn\_left, turn\_right, stop}. The success threshold $d_s$ is set to $1m$. The maximum episode length is 500 steps. 
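The local planning step can be illustrated with a coarse stand-in for the Fast Marching Method: a BFS wavefront from the goal over free cells, followed by greedy descent of the resulting distance field. This is a simplification (FMM solves the continuous Eikonal equation; the wavefront below gives only 4-connected grid distances), and all names are illustrative:

```python
from collections import deque
import numpy as np

def wavefront_distance(obstacles, goal):
    """Distance-to-goal field over free cells (BFS stand-in for FMM)."""
    M, N = obstacles.shape
    dist = np.full((M, N), np.inf)
    dist[goal] = 0.0
    q = deque([goal])
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < M and 0 <= nc < N and not obstacles[nr, nc] \
                    and dist[nr, nc] == np.inf:
                dist[nr, nc] = dist[r, c] + 1
                q.append((nr, nc))
    return dist

def next_step(dist, pos):
    """Greedy descent: move to the neighboring cell nearest the goal."""
    r, c = pos
    nbrs = [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
    valid = [n for n in nbrs
             if 0 <= n[0] < dist.shape[0] and 0 <= n[1] < dist.shape[1]]
    return min(valid, key=lambda n: dist[n])
```

Because the map and the distance field are recomputed at every step, newly observed obstacles are immediately reflected in the replanned path.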
We note that depth and pose are perfect in simulation; handling noisy depth and pose is orthogonal to the focus of this paper, and prior works have shown that both can be estimated effectively from RGB images and noisy sensor pose readings~\cite{godard2017unsupervised, chaplot2020learning}. These design choices are identical to the CVPR 2020 Object Goal Navigation Challenge. For the object goal, we use object categories which are common between the Gibson, MP3D, and MS-COCO~\cite{lin2014microsoft} datasets. This leads to a set of 6 object goal categories: `chair', `couch', `potted plant', `bed', `toilet' and `tv'. We use a Mask-RCNN~\cite{mask_rcnn} with Feature Pyramid Networks~\cite{lin2017feature} and a ResNet50~\cite{he2016deep} backbone pretrained on MS-COCO for object detection and instance segmentation. Although we use 6 categories for object goals, we build a semantic map with 15 categories (shown on the right in Figure~\ref{fig:habitat_episode}) to encode more information for learning semantic priors. \begin{figure*}[t!] \centering \includegraphics[width=0.99\linewidth,height=\textheight,keepaspectratio]{images/example.pdf} \caption{\textbf{Example Trajectory.} Figure showing an example trajectory of the SemExp{} model in a scene from the Gibson test set. Sample images seen by the agent are shown on the top and the predicted semantic map is shown below. The goal object is `bed'. The long-term goal selected by the Goal-driven Semantic Policy is shown in blue. The ground-truth map (not visible to the agent) with the agent trajectory is shown on the right for reference.} \label{fig:habitat_episode} \end{figure*} \textbf{Architecture and Hyperparameter details.} We use PyTorch~\citep{paszke2017automatic} for implementing and training our model. The denoising network in the Semantic Mapping module is a 5-layer fully convolutional network. 
We freeze the Mask RCNN weights in the Semantic Mapping module (except for results on the Habitat Challenge in Section~\ref{sec:challenge}) as Matterport does not contain labels for all 15 categories in our semantic map. We train the denoising network with the map-based loss on all 15 categories for Gibson frames and only 6 categories on MP3D frames. The Goal-driven Semantic Policy is a 5-layer convolutional network followed by 3 fully connected layers. In addition to the semantic map, we also pass the agent orientation and goal object category index as separate inputs to this Policy. They are processed by separate Embedding layers and added as an input to the fully-connected layers. We train both the modules with 86 parallel threads, with each thread using one scene in the training set. We maintain a FIFO memory of size 500000 for training the Semantic Mapping module. After one step in each thread, we perform 10 updates to the Semantic Mapping module with a batch size of 64. We use the Adam optimizer with a learning rate of $0.0001$. We use binary cross-entropy loss for semantic map prediction. The Goal-driven Policy samples a new goal every $u=25$ timesteps. For training this policy, we use Proximal Policy Optimization (PPO)~\citep{schulman2017proximal} with a time horizon of 20 steps, 36 mini-batches, and 4 epochs in each PPO update. Our PPO implementation is based on~\cite{pytorchrl}. The reward for the policy is the decrease in distance to the nearest goal object. We use the Adam optimizer with a learning rate of $0.000025$, a discount factor of $\gamma = 0.99$, an entropy coefficient of $0.001$, and a value loss coefficient of $0.5$ for training the Goal-driven Policy. \textbf{Metrics.} We use 3 metrics for comparing all the methods: \textbf{Success.} Ratio of episodes where the method was successful. \textbf{SPL.} Success weighted by Path Length as proposed by~\cite{anderson2018evaluation}. 
This metric measures the efficiency of reaching the goal in addition to the success rate. \textbf{DTS:} Distance to Success. This is the distance of the agent from the success threshold boundary when the episode ends. It is computed as follows: $$ DTS = \max(\|x_T - G\|_2 - d_s, 0) $$ where $\|x_T - G\|_2$ is the L2 distance of the agent from the goal location at the end of the episode and $d_s$ is the success threshold. \vspace{-6pt} \subsection{Baselines.} \vspace{-8pt} We use two end-to-end Reinforcement Learning (RL) methods as baselines:\\\vspace{-14pt} \textbf{RGBD + RL:} A vanilla recurrent RL Policy initialized with a ResNet18~\citep{he2016deep} backbone followed by a GRU, adapted from~\Mycite{savva2019habitat}. Agent pose and goal object category are passed through an embedding layer and appended to the recurrent layer input.\\\vspace{-14pt} \textbf{RGBD + Semantics + RL~\cite{mousavian2019visual}:} This baseline is adapted from~\Mycite{mousavian2019visual} who pass semantic segmentation and object detection predictions along with RGBD input to a recurrent RL policy. We use a pretrained Mask RCNN identical to the one used in the proposed model for semantic segmentation and object detection in this baseline. RGBD observations are encoded with a ResNet18 backbone visual encoder, and agent pose and goal object are encoded using embedding layers as described above. \\\vspace{-14pt} Both the RL based baselines are trained with Proximal Policy Optimization~\cite{schulman2017proximal} using a dense reward based on the decrease in distance to the nearest goal object. We design two more baselines based on goal-agnostic exploration methods combined with a heuristic-based goal-driven local policy.\\\vspace{-14pt} \textbf{Classical Mapping + FBE~\cite{yamauchi1997frontier}:} This baseline uses a classical robotics pipeline for mapping followed by the classical frontier-based exploration (FBE)~\cite{yamauchi1997frontier} algorithm. 
We use a heuristic-based local policy using a pretrained Mask-RCNN. Whenever the Mask RCNN detects the goal object category, the local policy tries to go towards the object using an analytical planner. \\\vspace{-15pt} \textbf{Active Neural SLAM~\cite{chaplot2020learning}:} In this baseline, we use an exploration policy trained to maximize coverage from~\cite{chaplot2020learning}, followed by the heuristic-based local policy as described above. \setlength{\tabcolsep}{5.6pt} \begin{table}[] \caption{\small \textbf{Results.} Performance of SemExp{} as compared to the baselines on the Gibson and MP3D datasets.} \vspace{-5pt} \label{tab:results} \centering \begin{tabular}{@{}llccccccc@{}} \toprule & & \multicolumn{3}{c}{Gibson} & & \multicolumn{3}{c}{MP3D} \\ Method & & SPL & Success & DTS (m) & & SPL & Success & DTS (m) \\ \midrule Random & & 0.004 & 0.004 & 3.893 & & 0.005 & 0.005 & 8.048 \\ RGBD + RL~\cite{savva2019habitat} & & 0.027 & 0.082 & 3.310 & & 0.017 & 0.037 & 7.654 \\ RGBD + Semantics + RL~\cite{mousavian2019visual} & & 0.049 & 0.159 & 3.203 & & 0.015 & 0.031 & 7.612 \\ Classical Map + FBE~\cite{yamauchi1997frontier} & & 0.124 & 0.403 & 2.432 & & 0.117 & 0.311 & 7.102 \\ Active Neural SLAM~\cite{chaplot2020learning} & & 0.145 & 0.446 & 2.275 & & 0.119 & 0.321 & 7.056 \\ SemExp{} & & \textbf{0.199} & \textbf{0.544} & \textbf{1.723} & & \textbf{0.144} & \textbf{0.360} & \textbf{6.733} \\ \bottomrule \end{tabular} \vspace{-10pt} \end{table} \vspace{-3pt} \section{Results} \vspace{-7pt} We train all the baselines and the proposed model for 10 million frames and evaluate them on the Gibson and MP3D scenes in our test set separately. We run 200 evaluation episodes per scene, leading to a total of 1000 episodes in Gibson (5 scenes) and 2000 episodes in MP3D (10 scenes, 1 scene did not contain any object of the 6 possible categories). 
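The metrics defined in the experimental setup can be computed as follows (a minimal illustration; function and argument names are our own):

```python
import math

def spl(successes, shortest_lengths, path_lengths):
    """Success weighted by Path Length (Anderson et al., 2018):
    mean over episodes of S_i * l_i / max(p_i, l_i), where l_i is the
    shortest-path length and p_i the agent's actual path length."""
    return sum(s * l / max(p, l) for s, l, p in
               zip(successes, shortest_lengths, path_lengths)) / len(successes)

def dts(agent_pos, goal_pos, d_s=1.0):
    """Distance to Success: how far outside the success boundary
    (d_s = 1m around the goal) the agent ended the episode."""
    return max(math.dist(agent_pos, goal_pos) - d_s, 0.0)
```

For instance, a successful episode whose path is twice the shortest path contributes an SPL of 0.5, and an agent stopping 3m from the goal has a DTS of 2m.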
Figure~\ref{fig:habitat_episode} visualizes an example trajectory using the proposed SemExp{} showing the agent observations and predicted semantic map\footnote{See demo videos at \url{https://devendrachaplot.github.io/projects/semantic-exploration}}. The quantitative results are shown in Table~\ref{tab:results}. SemExp{} outperforms all the baselines by a considerable margin consistently across both datasets (achieving a success rate of 54.4\%/36.0\% on Gibson/MP3D vs 44.6\%/32.1\% for the Active Neural SLAM baseline). The absolute numbers are higher on the Gibson set, as the scenes are comparatively smaller. The Distance to Success (DTS) threshold for Random in Table~\ref{tab:results} indicates the difficulty of the dataset. Interestingly, the baseline combining classical exploration with pretrained object detectors outperforms the end-to-end RL baselines. We observed that the training performance of the RL-based baselines was much higher, indicating that they memorize the object locations and appearance in the training scenes and generalize poorly. The increase in performance of SemExp{} over the Active Neural SLAM baseline shows the importance of incorporating semantics and the goal object in exploration. \setlength{\tabcolsep}{22pt} \begin{table}[] \caption{\small \textbf{Ablations and Error Analysis.} Table showing comparison of the proposed model, SemExp{}, with 2 ablations and with Ground Truth semantic segmentation on the Gibson dataset.} \vspace{-5pt} \label{tab:ablations} \centering \begin{tabular}{@{}lllll@{}} \toprule Method & & SPL & Success & DTS (m) \\ \midrule SemExp{} w.o. Semantic Map & & 0.165 & 0.488 & 2.084 \\ SemExp{} w.o. Goal Policy & & 0.148 & 0.450 & 2.315 \\ SemExp{} & & 0.199 & 0.544 & 1.723 \\ SemExp{} w. 
GT SemSeg & & 0.457 & 0.731 & 1.089 \\ \bottomrule \end{tabular} \end{table} \begin{figure*}[t] \centering \includegraphics[width=0.97\linewidth,height=\textheight,keepaspectratio]{images/ablation.pdf} \vspace{-6pt} \caption{Figure showing an example comparing the proposed model with (\textbf{top}) and without (\textbf{bottom}) Goal-Oriented Semantic Policy. Starting at the same location with the same goal object of `toilet', the proposed model with Goal-Oriented Policy can find the target object much faster than without Goal-Oriented Exploration.} \label{fig:ablation} \vspace{-7pt} \end{figure*} \subsection{Ablations and Error Analysis} \vspace{-7pt} To understand the importance of both the modules in SemExp{}, we consider two ablations:\\ \textbf{SemExp{} w.o. Semantic Map.} We replace the Semantic Map with the Obstacle-only Map. As opposed to the Active Neural SLAM baseline, the Goal-oriented Policy is still trained with the decrease in distance to the nearest goal object as the reward.\\ \textbf{SemExp{} w.o. Goal Policy.} We replace the Goal-driven policy with a goal-agnostic policy trained to maximize exploration coverage as in~\cite{chaplot2020learning}, but still trained with the semantic map as input.\\ The results in the top part of Table~\ref{tab:ablations} show the performance of these ablations. The performance of SemExp{} without the Goal-oriented policy is comparable to the Active Neural SLAM baseline, indicating that the Goal-oriented policy learns semantic priors for better exploration leading to more efficient navigation. Figure~\ref{fig:ablation} shows a qualitative example indicating the importance of the Goal-oriented Policy. The performance of SemExp{} without the Semantic Map also drops, but it is higher than the ablation without the Goal Policy. This indicates that it is possible to learn some semantic priors with just the obstacle map without semantic channels. The performance of the proposed model is still far from perfect. 
We would like to understand the error modes for future improvements. We observed two main sources of errors, semantic segmentation inaccuracies and inability to find the goal object. In order to quantify the effect of both the error modes, we evaluate the proposed model with ground truth semantic segmentation (see \textbf{SemExp{} w. GT SemSeg} in Table~\ref{tab:ablations}) using the `Semantic' sensor in the Habitat simulator. This leads to a success rate of 73.1\% vs 54.4\%, which means the success rate can be improved by around 19\% with better semantic segmentation. The remaining $\sim$27\% of episodes are mostly cases where the goal object is not found, which can be improved with better semantic exploration. \subsection{Result on Habitat Challenge} \label{sec:challenge} \vspace{-7pt} SemExp{} also won the CVPR 2020 Habitat ObjectNav Challenge. The challenge task setup is identical to ours except for the goal object categories. The challenge uses 21 object categories for the goal object. As these object categories do not overlap with the COCO categories, we use the DeepLabv3~\cite{chen2017rethinking} model for semantic segmentation. We predict the semantic segmentation and the semantic map with all 40 categories in the MP3D dataset including the 21 goal categories. We fine-tune the DeepLabv3 segmentation model by retraining the final layer to predict semantic segmentation for all 40 categories. This segmentation model is also trained with the map-based loss in addition to the first-person segmentation loss. The performance of the top 5 entries to the challenge is shown in Table~\ref{tab:challenge}. The proposed approach outperforms all the other entries with a success rate of 25.3\% as compared to 18.8\% for the second place entry. 
\iffalse \begin{wrapfigure}{r}{0.45\textwidth} \centering \vspace{-15pt} \includegraphics[width=0.45\textwidth]{images/habitat_leaderboard.pdf} \vspace{-15pt} \caption{\small \textbf{Results on CVPR 2020 Habitat Object Goal Navigation Challenge.} Our submission based on the Goal-Oriented Semantic Exploration{} model code-named `Arnold' is at top of the leaderboard at the time of submission.} \label{fig:challenge} \end{wrapfigure} \fi \iffalse \setlength{\tabcolsep}{6.5pt} \begin{table}[] \caption{\small \textbf{Results on CVPR 2020 Habitat Object Goal Navigation Challenge.} Table showing the performance of top 5 entries on the challenge leaderboard. Our submission based on the SemExp{} model is at top of the leaderboard at the time of submission.} \vspace{-5pt} \label{tab:challenge} \centering \begin{tabular}{@{}llccccccc@{}} \toprule & & \multicolumn{3}{c}{Test-standard} & & \multicolumn{3}{c}{Minival} \\ Method & & SPL & Success & Dist To Goal & & SPL & Success & Dist To Goal \\ \midrule SemExp & & \textbf{0.071} & \textbf{0.179} & \textbf{8.818} & & \textbf{0.246} & \textbf{0.467} & \textbf{3.334} \\ Active Exploration & & 0.041 & 0.089 & 9.461 & & 0.108 & 0.167 & 5.079 \\ DD-PPO~\cite{wijmans2019decentralized} & & 0.021 & 0.062 & 9.316 & & - & - & - \\ Blue Ox & & 0.017 & 0.060 & 8.903 & & 0.083 & 0.133 & 4.254 \\ SRCB-robot-sudoer & & 0.002 & 0.004 & 10.276 & & 0.124 & 0.233 & 4.848 \\ PPO RGBD~\cite{savva2019habitat} & & - & - & - & & 0.000 & 0.000 & 6.055 \\ Random & & 0.000 & 0.000 & 10.330 & & 0.000 & 0.000 & 6.379 \\ \bottomrule \end{tabular} \end{table} \fi \setlength{\tabcolsep}{18pt} \begin{table}[] \caption{\small \textbf{Results on CVPR 2020 Habitat ObjectNav Challenge.} Table showing the performance of top 5 entries on the Test-Challenge dataset. 
Our submission based on the SemExp{} model won the challenge.} \vspace{-5pt} \label{tab:challenge} \centering \begin{tabular}{@{}llccc@{}} \toprule Team Name & & SPL & Success & Dist To Goal (m) \\ \midrule \textbf{SemExp} & & \textbf{0.102} & \textbf{0.253} & \textbf{6.328} \\ SRCB-robot-sudoer & & 0.099 & 0.188 & 6.908 \\ Active Exploration (Pre-explore) & & 0.046 & 0.126 & 7.336 \\ Black Sheep & & 0.028 & 0.101 & 7.033 \\ Blue Ox & & 0.021 & 0.069 & 7.233 \\ \bottomrule \end{tabular} \vspace{-7pt} \end{table} \begin{figure*}[t!] \centering \includegraphics[width=0.99\linewidth,height=\textheight,keepaspectratio]{images/real_world.pdf} \vspace{-7pt} \caption{\small \textbf{Real-world Transfer.} Figure showing an example trajectory of the SemExp{} model transferred to the real-world for the object goal `potted plant'. Sample images seen by the robot are shown on the top and the predicted semantic map is shown below. The long-term goal selected by the Goal-driven Policy is shown in blue.} \label{fig:real_world_living} \vspace{-7pt} \end{figure*} \vspace{-4pt} \subsection{Real World Transfer} \vspace{-7pt} We used the Locobot hardware platform and the PyRobot API~\citep{pyrobot2019} to deploy the trained policy in the real world. In Figure~\ref{fig:real_world_living}, we show an episode of the robot when it was provided `potted plant' as the object goal. The long-term goals sampled by the Goal-driven policy (shown by blue circles on the map) are often towards spaces where a potted plant is likely to be found. This indicates that it is learning to exploit the structure in the semantic map. Out of 20 trials in the real world, our method succeeded in 13 episodes, leading to a success rate of 65\%. End-to-end learning-based policies failed consistently in the real world due to the visual domain gap between simulation environments and the real world. 
Our model performs well in the real world because (1) it is able to leverage Mask R-CNN, which is trained on real-world data, and (2) the other learned modules (map denoising and the goal-driven policy) operate on top-down maps, which are domain-agnostic. Our real-world trials also indicate that perfect pose and depth are not critical to the success of our model, as it transfers successfully to the real world, where pose and depth are noisy. \section{Conclusion} \vspace{-7pt} In this paper, we presented a semantically-aware exploration model to tackle the object goal navigation task in large realistic environments. The proposed model makes two major improvements over prior methods: incorporating semantics in explicit episodic memory and learning goal-oriented semantic exploration policies. Our method achieves state-of-the-art performance on the object goal navigation task and won the CVPR 2020 Habitat ObjectNav challenge. Ablation studies show that the proposed model learns semantic priors which lead to more efficient goal-driven navigation. The domain-agnostic module design led to a successful transfer of our model to the real world. We also analyze the error modes of our model and quantify the scope for improvement along two important dimensions (semantic mapping and goal-oriented exploration) for future work. The proposed model can also be extended to tackle a sequence of object goals by utilizing the episodic map for more efficient navigation to subsequent goals. \section*{Acknowledgements} \vspace{-7pt} This work was supported in part by the US Army W911NF1920104, IARPA D17PC00340, ONR Grant N000141812861, DARPA MCS and ONR Young Investigator. We would also like to acknowledge NVIDIA’s GPU support. \textbf{Licenses for referenced datasets:}\\ Gibson: {\footnotesize \url{http://svl.stanford.edu/gibson2/assets/GDS_agreement.pdf}}\\ Matterport3D: {\footnotesize \url{http://kaldir.vc.in.tum.de/matterport/MP_TOS.pdf}} \\
\section{Introduction} \input{010_introduction} \section{Two new conservations} \label{sec:conserved} \input{020_conserved} \section{Conserved sums of cotangents} \label{sec:cotangents} \input{025_cotangents} \section{Harmonics and homothetics} \label{sec:homothetics} \input{040_homothetics} \section{Isocurves of Brocard angle} \label{sec:isobrocs} \input{050_isobrocs} \section{Videos} \label{sec:videos} \input{090_videos} \section*{Acknowledgements} \input{120_ack} \subsection*{Main Results} Using a simulation-based approach (mostly with Mathematica \cite{mathematica_v10}), we detected the following phenomena manifested by harmonic polygons which, to the best of our knowledge, had not yet been described. \subsection{New conservations} The following conservations are proved in \cref{sec:conserved}: \begin{itemize} \item The sum of inverse squared sidelengths; \item The sum of inverse squared radii of {\em Apollonius' circles}, which are generalizations of the same-named circles of triangles \cite{mw}; \item The sum of powers of internal angle cotangents, as well as all elementary symmetric functions thereof (except for one). \end{itemize} In \cref{tab:conserve} the above conservations are compared side-by-side with others manifested by other Poncelet families studied in \cite{akopyan2020-invariants,bialy2020-invariants,caliz2020-area-product,galkin2021-affine,reznik2021-fifty-invariants,roitman2021-bicentric}. \subsection{Relationship to the Poncelet Homothetic family} In \cref{sec:homothetics} we show that a certain polar image of the harmonic family is the so-called ``homothetic family'', i.e., a Poncelet family of $N$-gons interscribed between two homothetic ellipses. Therefore, and as shown in \cref{fig:homot-polars}, two ``lateral'' harmonic families can be obtained from the homothetic one: these are polar images of the latter with respect to the left (resp. right) focus of their inner ellipse.
We show that the harmonic mean of their areas is invariant for $N=3,5$ and conjecture that this holds for all odd $N$. \begin{figure} \centering \includegraphics[width=\textwidth]{pics/0050_homot_polars.pdf} \caption{The Poncelet homothetic family (blue), interscribed between two homothetic ellipses (black and brown), is the polar image of a harmonic family (left, magenta) with respect to its symmedian point $K$, which coincides with the (left) internal focus $f_1'$ of the homothetic family. The polar image of the latter with respect to its right internal focus $f_2'$ is a mirrored, out-of-phase image of the left one. If $N$ is odd, the harmonic mean of the areas of the two shown lateral harmonic polygons (magenta) is experimentally invariant (\cref{conj:inv-area-sum}).} \label{fig:homot-polars} \end{figure} \subsection{Isocurves of Brocard angle} Based on experimental evidence, in \cref{sec:isobrocs} we conjecture that a result by Johnson \cite{johnson17-schoute} for $N=3$ remains valid for all $N$. Namely, that the isocurves of inversion centers for constant Brocard angle are circles in a special pencil known as the Schoute pencil, containing the circumcircle and Brocard circle of the family (defined in \cref{app:harm}). \subsection*{Related Work} Original results concerning harmonic polygons can be found in \cite{casey1888,simmons1886,tarry1887}. In \cite{sharp45}, the harmonic family is defined as a generic projection of a regular polygon, but in this case metric properties are lost. In \cite[Section 4.6, p. 129]{akopyan12}, the harmonic porism is studied in the Klein model of the hyperbolic plane (where $K$ becomes the center of the ideal circle). A recent study of harmonic quadrilaterals is \cite{pamfilos2014-harmonics}. The more elementary Brocard porism of triangles is studied in \cite{bradley2011-brocard,bradley2007-brocard,johnson1960,simmons1888-recent}.
In \cite{garcia2020-brocard-loci}, loci of triangle centers over the Brocard porism are studied, while in \cite{reznik2022-brocard-converging} a converging sequence of such porisms is analyzed. Quantities conserved by Poncelet $N$-gons have appeared in recent studies, including: (i) the confocal pair \cite{garcia2020-new-properties,reznik2020-eighty,reznik2021-fifty-invariants}, (ii) the homothetic pair \cite{galkin2021-affine}, (iii) the bicentric family \cite{roitman2021-bicentric}, and (iv) other special families \cite{bellio2021-parabola-inscribed,garcia2022-steiner-soddy}. In \cite{bernhart59} a certain polynomial is proposed which suggests that a collection of expressions is invariant over the harmonic family (this is related to our results in \cref{sec:conserved}). \subsection*{Article organization} In the next section we review the basics of the harmonic polygon family. Two new conserved quantities are proved in \cref{sec:conserved}; conservations based on the sum of powers of cotangents are proved in \cref{sec:cotangents}; the relationship between the harmonic family and Poncelet homothetics is derived in \cref{sec:homothetics}. A conjecture regarding isocurves of constant Brocard angle appears in \cref{sec:isobrocs}. Videos of some experiments appear in \cref{sec:videos}. In \cref{app:review} we review the basic construction and geometry of harmonic polygons. To facilitate further exploration, \cref{app:harm} provides explicit formulas for vertices and objects associated with the harmonic family. \subsection{Inverse squared sidelengths} Let $s_i$ denote the $i$-th sidelength of a harmonic polygon, $i=1,\ldots,N$.
\begin{proposition} Over $\mathcal{P}$ the sum of inverse squared sidelengths is invariant and given by: \[\sum_{k=1}^N \frac{1}{s_k^2}= {N} \frac{ d^2 \cos^2\alpha+(d^{4}+1)/4 }{ (1-d^2)^2 \sin^2\alpha} \] \end{proposition} \begin{proof} By the geometric condition that defines a vertex $w_{k}$ of $P$ in terms of a vertex $z_{k}$ of $R$, we have: \[w_{k}=\frac{d\bar{z}_{k}-1}{\bar{z}_{k}-d}\] Using this expression for $w_{k}$ and the corresponding one for $w_{k-1}$, a simple computation yields: \[w_{k}-w_{k-1}=\frac{(\bar{z}_{k}-\bar{z}_{k-1})(1-d^2)}{(\bar{z}_{k}-d)(\bar{z}_{k-1}-d)}\] From the fact that $\left|\bar{z}_{k}-\bar{z}_{k-1}\right|=2\sin \alpha$, since it is the length of a side of $R$, we may conclude that: \[s_{k}^{-2}=\left|w_{k}-w_{k-1}\right|^{-2}=\frac{\left| z_{k}-d \right|^{2}\left| z_{k-1}-d \right|^{2}}{4(1-d^2)^2\sin^2\alpha}\] By the law of cosines, it follows that: \begin{align*} \left| z_{k}-d \right|^{2}&=1+d^2-2d\cos(\nu_{k}), \\ \left| z_{k-1}-d \right|^{2}&=1+d^2-2 d\cos(\nu_{k-1}) \end{align*} where $\nu_{k}=2\alpha k+t$ and $\nu_{k-1}=2\alpha (k-1)+t$. So: {\small \[ \left| z_{k}-d \right|^{2}\left| z_{k-1}-d \right|^{2}=(1+d^2)^2-2d(1+d^2)(\cos(\nu_{k})+\cos(\nu_{k-1}))+4d^2\cos(\nu_{k})\cos(\nu_{k-1})\] } When we sum over $k$, it is clear that the sums of $\cos(\nu_{k})$ and of $\cos(\nu_{k-1})$ are both zero, so that the only non-trivial sum to evaluate is: \[ \sum_{k=1}^N \cos(\nu_{k})\cos(\nu_{k-1}) \] Since: \[ \cos(\nu_{k})\cos(\nu_{k-1})=\frac{1}{2}\left(\cos(\nu_{k}+\nu_{k-1})+\cos(\nu_{k}-\nu_{k-1})\right) \] we may write the above sum as: \[ \frac{1}{2}\sum_{k=1}^N{\cos\left(2\alpha (2k-1)+2t\right)}+\frac{N}{2}\cos\left(2\alpha \right) \] It is well known that the first sum above is equal to zero, see for example \cite{knapp2009}. A short computation then yields the desired expression for $\sum_{k=1}^N (1/{s_k^2})$.
\end{proof} \subsection{Apollonius' radii} \begin{definition}[Apollonius' Circles] Given a triangle, one of the three circles passing through a vertex and both isodynamic points $S$ and $S'$ \cite[Isodynamic Points]{mw}. \end{definition} Referring to \cref{fig:apoll}, for each vertex $w_{k}$ in a harmonic polygon, consider the ``generalized'' Apollonius circle $C_{k}$ passing through the points $w_{k}$, $d$ and $d^{-1}$ (the latter two are the limiting points of the generalized Schoute pencil \cite{johnson17-schoute}). Let $r_{k}$ be the radius of $C_{k}$. We will prove that: \begin{figure} \centering \includegraphics[width=\textwidth]{pics/0070_apoll.pdf} \caption{A harmonic polygon $\P$ (blue) and its Apollonius' circles (orange), each of which passes through one vertex of $\P$ and the two limiting points $\ell_1,\ell_2$ of the Schoute pencil. Also shown is the Lemoine axis (dashed green) of the pencil.} \label{fig:apoll} \end{figure} \begin{proposition} Over $\mathcal{P}$, the sum of inverse squared Apollonius' radii is invariant and given by: \[ \sum_{k=1}^N \frac{1}{r_k^2}=\frac{2N}{(d^{-1}-d)^2} \] \end{proposition} \begin{proof} Let $\gamma_{k}=\angle d w_{k}d^{-1}$; then, by the law of sines, we have \[ \frac{2\sin{\gamma_k}}{d^{-1}-d}=\frac{1}{r_k} \] A straightforward computation, using for instance the complex cross ratio, shows that the points $0,w_{k},d^{-1}$ and $z_{k}$ are concyclic, and from this we conclude that $\gamma_{k}=2\alpha k+t \pmod{2\pi}$. Therefore: \[ \sum_{k=1}^{N}\frac{1}{r_{k}^2}=\frac{4}{(d^{-1}-d)^2}\sum_{k=1}^{N}\sin^2{\gamma_k} \] Using the identity $\sin^2{x}=\left(1-\cos(2x)\right)/2$ and the fact that: \[ \sum_{k=1}^{N}\cos{2\gamma_k}=0 \] we conclude that \[ \sum_{k=1}^{N}\sin^2{\gamma_k}=\frac{N}{2} \] and therefore: \[ \sum_{k=1}^{N}\frac{1}{r_{k}^2}=\frac{2N}{(d^{-1}-d)^2} \] which is precisely the claim.
\end{proof} \subsection{Symmetric invariants} To discuss a set of invariant quantities involving the elementary symmetric functions of the cotangents of the internal angles of harmonic polygons, we will use the following notation for such functions: Let $X=(X_{1},X_{2},...,X_{N})$ and let $e_{k}(X)$ denote the elementary symmetric functions in the variables $X_{j}$ ($j=1,\ldots,N$), that is, $e_{0}(X)=1$, $e_{1}(X)=\sum_{j=1}^{N} X_{j}$, $e_{2}(X)=\sum_{1\leq j<i\leq N} X_{i}X_{j},\ldots,e_{N}(X)=X_{1}X_{2} \ldots X_{N}$. Our next result is a generalization of the invariance of the sum of cotangents of the internal angles $\theta_i$ of harmonic polygons. \begin{theorem} Let $P$ be a harmonic $N$-sided polygon and let $\lambda=(\cot{\theta_1},\cot{\theta_2},\ldots,\cot{\theta_N})$; then the polynomials $e_{1}(\lambda), e_{2}(\lambda),...,e_{N-1}(\lambda)$ are invariant, that is, they do not depend on $t$. \label{thm:symmetric} \end{theorem} \begin{proof} By \cref{lem:chasing}, $e_{k}(\lambda)$ is a linear combination (with constant coefficients) of the elementary symmetric functions of the variables $$c_{j}=\cos{(2\alpha j+t)},$$ for $j=0,...,k$. Therefore, it suffices to prove that $e_{1}(c), e_{2}(c),...,e_{N-1}(c)$, where $$c=(c_{1},c_{2},...,c_{N}),$$ are invariant. Since $e_{k}(c)$ is a sum of products of cosines, the trigonometric identity \[ \prod_{i=1}^{m}\cos{\theta_{i}}=\frac{1}{2^{m-1}}\sum_{p_{n}\in p}{\cos{(p_n)}} \] where $p$ is the set of $2^{m-1}$ numbers having the form $\theta_{1} \pm \theta_{2} \pm...\pm \theta_{m}$, allows one to express $e_{k}(c)$ as a linear combination of cosines. The general term of this combination has the form \[ A_m\cos {(mt+\varphi_{m})} \] where $m$ varies from $0$ to $k$, and $A_m$ and $\varphi_{m}$ are constants. This general term can be rewritten as: \[ a_m\cos {(mt)}+b_m\sin {(mt)} \] Except for $m=0$, such terms are periodic functions with period $2\pi/{m}$.
But notice that $e_{k}(c)$ is a periodic function of $t$ with period $2\alpha=2\pi/N$, while $m \leq k < N$. Therefore, from the well-known orthogonality of trigonometric functions, it follows that $a_{m}=b_{m}=0$ for all $m\neq 0$. In other words, $e_{k}(c)$ must be constant. \end{proof} \subsection{Higher cotangent powers} As shown in \cref{tab:cot-higher}, the sum of $k$-th powers of cotangents for $k$ higher than $2$ will also be invariant when $N>k$. This can be regarded as a corollary to \cref{thm:symmetric}. \begin{table} \begin{tabular}{|c|cccccc|} \hline k & N=3 & N=4 & N=5 & N=6 & N=7 & N=8 \\ \hline 1 & $\checkmark$ & 0 & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ 2 & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ 3 & & 0 & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ 4 & & & $\checkmark$ & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ 5 & & 0 & & $\checkmark$ & $\checkmark$ & $\checkmark$ \\ 6 & & & & & $\checkmark$ & $\checkmark$ \\ 7 & & 0 & & & & $\checkmark$ \\ \hline \end{tabular} \caption{A $\checkmark$ (resp. $0$) indicates that for a given $N$, $\sum\cot^k(\theta_i)$ is invariant. Note that we get invariance if (i) $N>k$ or (ii) $N=4$ and $k$ odd, in which case the sum is zero.} \label{tab:cot-higher} \end{table} Since for $N=4$ opposite angles are supplementary: \begin{corollary} If $N=4$, $\sum\cot^k(\theta_i)=0$ for all odd $k$. \end{corollary} \subsection{Comparing conservations across Poncelet families} \cref{tab:conserve} shows the conservations proved here side-by-side with those manifested by other Poncelet families, described and/or proved in \cite{akopyan2020-invariants,bialy2020-invariants,caliz2020-area-product,galkin2021-affine,reznik2021-fifty-invariants,roitman2021-bicentric}.
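Before comparing with other families, the two closed-form conservations above can be corroborated numerically. The following Python sketch (an illustration only, not part of the proofs; the parameter values are arbitrary) builds the vertices $w_k=(d\bar{z}_k-1)/(\bar{z}_k-d)$ used in the proofs and checks both sums for several phases $t$:

```python
import cmath
import math

def harmonic_vertices(N, d, t):
    # vertices w_k = (d*conj(z_k) - 1)/(conj(z_k) - d), z_k on the unit circle
    alpha = math.pi / N
    zs = [cmath.exp(1j * (2 * alpha * k + t)) for k in range(1, N + 1)]
    return [(d * z.conjugate() - 1) / (z.conjugate() - d) for z in zs]

def sum_inv_sq_sides(N, d, t):
    w = harmonic_vertices(N, d, t)
    return sum(1 / abs(w[k] - w[k - 1]) ** 2 for k in range(N))

def circumradius(a, b, c):
    # circumradius of the triangle with complex vertices a, b, c
    area = abs(((b - a) * (c - a).conjugate()).imag) / 2
    return abs(a - b) * abs(b - c) * abs(c - a) / (4 * area)

def sum_inv_sq_apollonius(N, d, t):
    # the Apollonius circle C_k passes through w_k and the limiting points d, 1/d
    return sum(1 / circumradius(w, d, 1 / d) ** 2
               for w in harmonic_vertices(N, d, t))

N, d = 5, 0.3
alpha = math.pi / N
sides_closed = N * (d**2 * math.cos(alpha)**2 + (d**4 + 1) / 4) \
    / ((1 - d**2)**2 * math.sin(alpha)**2)
radii_closed = 2 * N / (1 / d - d) ** 2
for t in (0.3, 0.8, 1.7, 2.9):
    assert abs(sum_inv_sq_sides(N, d, t) - sides_closed) < 1e-9
    assert abs(sum_inv_sq_apollonius(N, d, t) - radii_closed) < 1e-9
```

Both sums agree with the closed forms to machine precision, independently of $t$.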
\begin{table} \setlength{\tabcolsep}{2pt} {\small \begin{tabular}{|c||c|c|c|c|c|} \hline invariant & Confocal & Bicentric & Inversive & Homothetic & Harmonic \\ \hline $L$ & \checkmark$^o$ & & \checkmark \cite{roitman2021-bicentric} & & \\ \hline $A$ & & & & \checkmark$^o$ & \\ \hline $L/A$ & & \checkmark$^o$ & & & \\ \hline $\sum{s_i}^2$ & & & & \checkmark \cite{galkin2021-affine} & \\ \hline $\sum{s_i}^2/A$ & & & & \checkmark \cite{galkin2021-affine} & \checkmark$^o$ \\ \hline $\sum{s_i^{-2}}$ & & & & & \checkmark \\ \hline $\sum{r_i^{-2}}$ & & & & & \checkmark \\ \hline $\sum{\cos}$ & \checkmark \cite{akopyan2020-invariants,bialy2020-invariants,caliz2020-area-product} & \checkmark \cite{roitman2021-bicentric} & \checkmark$^\dagger$ \cite{roitman2021-bicentric} & & \\ \hline $\sum{\cot}$ & & & & \checkmark \cite{galkin2021-affine} & \checkmark \\ \hline $\sum{\cot^2}$ & & & & \checkmark$^\dagger$ \cite{galkin2021-affine} & \checkmark \\ \hline $\sum{(\sin \cos)}/L$ & & $\checkmark$ & & & \\ \hline $\sum{(\sin \cos)}/A$ & & $\checkmark$ & & & $\checkmark$ \\ \hline $A_1 A_2$ & & \checkmark$^*$ & \checkmark$^*$ & & \\ \hline {\scriptsize $A_1^{-1}+A_2^{-1}$} & & & & & \checkmark$^*$ \\ \hline \hline polar of & Bicentric & Confocal & -- & Harmonic & Homothetic \\ \hline {\scriptsize \makecell[cc]{Inversion\\Center}} & $\ell_1$ & $f_1,f_2$ & -- & $K$ & $f_1',f_2'$ \\ \hline \end{tabular} } \caption{Quantities conserved by various Poncelet families and their polar-derived families. An $^o$ after a $\checkmark$ indicates the quantity is well-known. References are provided to extant proofs. The last two lines (Polar) indicate how to obtain the current family as the polar image of some other family with respect to a circle centered on the indicated inversion center. For the case of side areas ($A_1,A_2$) both foci are needed. Notes: $\dagger$: $N{\neq}4$. 
$*$: odd $N$.} \label{tab:conserve} \end{table} \subsection{From harmonics to homothetics} Referring to \cref{fig:homot-polars}: \begin{proposition} The polar image of $\P$ with respect to a unit circle centered on the symmedian point $K$ of $\P$ is a new Poncelet family $\H$ of polygons interscribed between two homothetic, concentric ellipses $\E_H$ (external) and $\E_h$ (internal) given by: \begin{align*} \E_{H}:& \frac{(x-x_H)^2}{a_H^2}+\frac{y^2}{b_H^2}-1=0,\;\;\;\E_h: \frac{(x-x_h)^2}{a_h^2}+\frac{y^2}{b_h^2}-1=0, \\ a_h& =\frac{(x_0^2 + 1)^2}{|1-x_0^2| },\;\; b_h=x_0^2+1,\;\; x_h=\frac{x_0(3x_0^4 + 3x_0^2 + 2)}{x_0^4-1}\\ a_H &=a_h/\cos\alpha,\;\; b_H =b_h/\cos\alpha,\;\; x_H=x_h \end{align*} \end{proposition} \subsection{From homothetics back to harmonics} Let $\H$ be a family of Poncelet $N$-gons interscribed between two concentric, homothetic ellipses $\E_H=(a_H,b_H)$ and $\E_h=(a_h,b_h)$ with common centers at $(0,0)$. Let $f_h=(-c_h,0)$ be a focus of $\E_h$, where $c_h^2=a_h^2-b_h^2$. \begin{proposition} The polar image of $\H$ with respect to a unit circle centered on $f_h$ is a harmonic family inscribed in a circle $\C_1=(O_1,R_1)$ and circumscribing an ellipse $\E_1$ with semiaxes $(a_1,b_1)$ and centered on $(x_1,0)$ where: \begin{align*} O_1=&\left[-c_h\frac{(1+b_h^2)}{b_h^2},0\right],\;\;\;R_1=\frac{a_h}{b_h^2}\\ x_1&=-c_h \left(a_h^2+(1 -c_h^{2}) \cos^2\alpha \right)/k^2\\ a_1&= a_h\cos\alpha /k^2,\;\;\;b_1= {a_h\cos\alpha}/(b_h k) \end{align*} where $k^2= a_h^2-c_h^{2} \cos^2\alpha$. Furthermore, the symmedian $K_1$ of the harmonic family coincides with $f_h$. \end{proposition} \begin{corollary} Let $\delta=|K_1-O_1|$. \[ \left(\frac{\delta}{R_1}\right)^2 = 1-\left(\frac{b_h}{a_h}\right)^2 \] \end{corollary} \subsection*{Lateral harmonic areas} Let $\H$ be a Poncelet family of $N$-gons interscribed between two homothetic, concentric ellipses $\E_H,\E_h$. Let $f_{h,1},f_{h,2}$ denote the foci of $\E_h$. Let $A_1$ (resp. 
$A_2$) denote the area of the harmonic polygon which is a polar image of $\H$ with respect to a circle centered on $f_{h,1}$ (resp. $f_{h,2}$). Note that if $N$ is even, a polygon in the homothetic family is centrally symmetric. Therefore, $A_1=A_2$, with each area variable. When $N$ is odd, these areas are in general distinct. \begin{proposition} For $N=3$ and $N=5$, $1/A_1+1/A_2$ is invariant and given by: \begin{align*} N=3:&\;\; \frac{\sqrt{3}}{18}\frac{b}{a} (a^2+3b^2)\\ N=5:& \;\;\frac{b}{40\sin(2\pi/5)a} {\frac { \left( {a}^{4}+10\,{b}^{2}{a}^{2}+5\,{b}^{4} \right) \left( \sqrt {5}(a^2+3\,{b}^{2}) +5\,{a}^{2}+7\,{b}^{2} \right) }{ 5\,{a}^{4}+10\,{b}^{2}{a}^{2}+{b}^{4} }}\\ \end{align*} \end{proposition} Experimentally, the following holds: \begin{conjecture} For any odd $N$, $1/A_1+1/A_2$ is invariant. \label{conj:inv-area-sum} \end{conjecture} If \cref{conj:inv-area-sum} holds, then: \begin{corollary} $\frac{1}{\sum{s_{i,1}^2}}+\frac{1}{\sum{s_{i,2}^2}}$ is invariant. \end{corollary} This stems from the fact that for any harmonic polygon $\cot\omega=\sum{s_i^2}/(4A)$ \cite[§16, pp. 298]{simmons1886}, where $s_i$ is the $i$-th sidelength of a harmonic polygon, and both polar images (by symmetry of the foci with respect to the center of the homothetic family) have the same $\omega$. \begin{conjecture} $\frac{\sum{\sin(2\theta_i)}}{A}$ is invariant. Equivalently, $\frac{\sum{\sin(2\theta_i)}}{\sum{s_i^2}}$ is invariant. \end{conjecture} If \cref{conj:inv-area-sum} holds, then: \begin{corollary} $\frac{1}{\sum{\sin(2\theta_{i,1})}}+\frac{1}{\sum{\sin(2\theta_{i,2})}}$ is invariant. \end{corollary} \subsection{Closing the loop} Let $\mathcal{R}$, $\P$, $\H$ be as above. Referring to \cref{fig:3-fams}, below we specify transformations which interchange the families in the triad. Let $\P(N,\omega)$ denote a family of harmonic $N$-gons with Brocard angle $\omega$.
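The identity $\cot\omega=\sum{s_i^2}/(4A)$ and the conjectured invariance of $\sum{\sin(2\theta_i)}/A$ can be probed numerically. A minimal Python sketch (an illustration under the vertex parametrization $w_k=(d\bar{z}_k-1)/(\bar{z}_k-d)$ of \cref{sec:conserved}; the parameter values are arbitrary):

```python
import cmath
import math

def harmonic_vertices(N, d, t):
    alpha = math.pi / N
    zs = [cmath.exp(1j * (2 * alpha * k + t)) for k in range(1, N + 1)]
    return [(d * z.conjugate() - 1) / (z.conjugate() - d) for z in zs]

def invariants(N, d, t):
    # returns (sum s_i^2 / A, sum sin(2 theta_i) / A) for one polygon
    w = harmonic_vertices(N, d, t)
    s2 = sum(abs(w[k] - w[k - 1]) ** 2 for k in range(N))
    # shoelace formula for the area of the (convex) polygon
    area = abs(sum((w[k - 1].conjugate() * w[k]).imag for k in range(N))) / 2
    angles = []
    for k in range(N):
        u = w[(k - 1) % N] - w[k]
        v = w[(k + 1) % N] - w[k]
        angles.append(abs(cmath.phase(v / u)))  # interior angle at w_k
    sin2 = sum(math.sin(2 * th) for th in angles)
    return s2 / area, sin2 / area

N, d = 5, 0.25
ref = invariants(N, d, 0.2)
for t in (0.9, 1.7, 3.0):
    val = invariants(N, d, t)
    assert abs(val[0] - ref[0]) < 1e-9
    assert abs(val[1] - ref[1]) < 1e-9
```

Both ratios are constant over the family to machine precision, consistent with the constancy of the Brocard angle and with the conjecture above.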
\begin{figure} \centering \includegraphics[trim=75 0 125 0,clip,width=\textwidth,frame]{pics/0100_gen_harmonic_triad.pdf} \caption{The three families mentioned in this article are inversive, affine, or polar images of each other. Note that the inversive relationship between regular and harmonic is equivalent to the projection of \cref{fig:harm-proj}.} \label{fig:3-fams} \end{figure} \begin{proposition} The inversive image of $\mathcal{R}$ with respect to a unit circle centered on $(x_0,0)$ is $\P(N,\omega)$ if $x_0= \sqrt{\frac{1-\tan\alpha \tan\omega}{1+\tan\alpha \tan\omega}}$. \end{proposition} Let $\H$ be a family of $N$-gons which is an affine image of $\mathcal{R}$, where $(x,y) \to (k x, y)$. Clearly, $\H$ is bounded by two homothetic, concentric ellipses $\E,\E'$ where $\E=(k,1)$ and, using the geometry of regular polygons, $\E'=(k \cos\alpha,\cos\alpha)$. \begin{proposition} The polar image of $\H$ with respect to a focus of $\E'$ will be $\P(N,\omega)$ if $k = \cot\alpha \cot\omega$. \end{proposition} Let $a,b$ denote the semiaxes of the inner ellipse in a Poncelet homothetic family $\H$. \begin{proposition} The polar image of $\H$ with respect to an internal focus will be identical to the inversive image of $\mathcal{R}$ with respect to a unit circle centered on $(x_0,0)$ if $x_0 = \sqrt{\frac{a+b}{a-b}}$. \end{proposition} \subsection{Harmonic Vertices} \begin{align*} [x_i,y_i]&=\left[-\frac{ \left( 1- 2\,x_0^{2} \right) \cos( 2\alpha i+t) + x_0^{3} } { 2\,x_0\, \cos(2\alpha i+t) -1-x_0^{2} }, -\frac{\sin(2\alpha i+t)}{2x_0\cos(2\alpha i+t) - 1-x_0^2 }\right] \end{align*} \subsection{Circumcircle \torp{$\C=[O,R]$}{C}} \[O=\left[{\frac { \left( x_0^{2}-2 \right) x_0}{ x_0^{2}-1}},0\right],\;\;\;R= \frac{1}{|x_0^{2}-1|} \] \subsection{Brocard points \torp{$\Omega_1$,$\Omega_2$}{omega1,omega2}} \[ \Omega_{1,2}= \frac{1}{k}\left[ (2 x_0^2 - 1)\cos(2\alpha) - x_0^4 + x_0^2 - 1, \pm \sin(2\alpha)\right] \] where $k=2 x_0 \cos(2\alpha) - x_0^3 - 1/x_0$.
\subsection{Brocard inellipse \torp{$\E$}{E}} \begin{align*} \E:\;& \frac{(x-x_c)^2}{a^2}+\frac{y^2}{b^2}=1\\ a&= \frac{(1-x_0^2)\cos\alpha}{k'},\;\;\;b=\frac{\cos\alpha}{ \sqrt{k'}} \end{align*} where $x_c$ is the x-coordinate of $\Omega_1$ and $k'= (x_0^2 + 1)^2-(2 x_0 \cos\alpha)^2$. The eccentricity $\varepsilon$ of $\E$ is given by: \[\varepsilon=\frac{c}{b}= \frac{2|x_0|\sin\alpha}{\sqrt{(x_0^2 + 1)^2-4{x_0^2}\cos^2{\alpha}}} \] \subsection{Symmedian point \torp{$K$}{K}} \[K= \left[\frac{x_0^3}{ x_0^2 + 1}, 0\right]\] Let $\delta=|K-O|$. It can be shown that: \[ x_0 =\frac{ 1\pm\sqrt{ 1-(\delta/R)^2} }{ \delta/R } \] Note that the product of the two possible $x_0$ is unity. \subsection{Brocard circle \torp{$\C'=[O',r]$}{C'}} \[ O'=\left[\frac{x_0 (x_0^4 - x_0^2 - 1 )}{x_0^4 - 1},0\right],\;\;\;r = \left|\frac{x_0}{x_0^4 - 1}\right| \] \subsection{Limiting points \torp{$\ell_{1,2}$}{l1,2} of \torp{$\C$}{C} and \torp{$\C'$}{C'}} \[\ell_1=[x_0,0],\;\; \ell_2=\left[\frac{x_0^2 - 1}{x_0},0\right] \] \subsection{Brocard angle \torp{$\omega$}{w}} Casey gives the relation \cite[Prop. 3, pp. 209]{casey1888}: \[\tan\omega =\sqrt{1-(\delta/R)^2}\cot\alpha\] where $\delta = |K-O| = 2 r$. This can also be expressed as: \[\tan\omega = \frac{ |1-x_0^2|}{ 1+x_0^2 } \cot\alpha \] This implies that: \[ x_0 = \pm \sqrt{\frac{\cot\alpha-\tan\omega}{\cot\alpha+\tan\omega}} \]
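The last two displayed formulas are mutually consistent: for $0<x_0<1$ one has $(\cot\alpha-\tan\omega)/(\cot\alpha+\tan\omega)=x_0^2$, so substituting $\tan\omega$ into the expression for $x_0$ recovers $x_0$. A quick numerical confirmation:

```python
import math

# Round-trip check: x0 -> tan(omega) = |1 - x0^2|/(1 + x0^2) * cot(alpha)
# -> x0 = sqrt((cot(alpha) - tan(omega))/(cot(alpha) + tan(omega))).
def recover_x0(x0, N):
    alpha = math.pi / N
    cot_a = 1 / math.tan(alpha)
    tan_w = abs(1 - x0 ** 2) / (1 + x0 ** 2) * cot_a
    return math.sqrt((cot_a - tan_w) / (cot_a + tan_w))

for N in (3, 5, 7):
    for x0 in (0.2, 0.5, 0.8):
        assert abs(recover_x0(x0, N) - x0) < 1e-12
```

The same algebra shows the expression here agrees with the earlier condition $x_0=\sqrt{(1-\tan\alpha\tan\omega)/(1+\tan\alpha\tan\omega)}$ after dividing numerator and denominator by $\cot\alpha$.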
\section{Introduction} Let \(E\) be a vector bundle over the manifold \(M\) with fiber \(F\), endowed with a connection \(\nabla\); that is, in a local trivialization \(U \times F \subset E\), \(\nabla = d + \omega\), where for each point \(p \in U\), \(\omega_p \in \operatorname{F} (T_pM \to \mathfrak{gl} (F))\) is the connection form, \(\operatorname{F} (T_p M \to \mathfrak{gl} (F))\) being the space of linear forms from the tangent space \(T_p M\) to the Lie algebra \(\mathfrak{gl} (F)\). Any map \(\gamma \in C^1 ([0, 1], M)\) defines a parallel transport \(\operatorname{Pt}_\gamma\) from \(E_{\gamma (0)}\) to \(E_{\gamma (1)}\) as the solution to the problem \(\operatorname{Pt}_\gamma' (t) + \omega_{\gamma (t)}[\gamma' (t)] \operatorname{Pt}_{\gamma} (t) = 0\) and \(\operatorname{Pt}_\gamma (0) = \operatorname{id}\). If \(\gamma (0) = \gamma (1) = p \in M\), \(\operatorname{Pt}_{\gamma} (1) \in GL (E_p)\) is the holonomy of the connection along \(\gamma\). The holonomy group of the connection \(\nabla\) at \(p\) is the group generated by all the holonomies. A fundamental question is the relationship between the holonomy at a point and the curvature form of the connection, which is represented in a local trivialization as \begin{equation*} \Omega = d \omega + \omega \wedge \omega \, . \end{equation*} Algebraically, this is settled by the Ambrose--Singer theorem \cite{Ambrose_Singer_1953}, originating from \'E.
Cartan's work \cite{Cartan_1926}*{p.~4} (see also \citelist{\cite{Kobayashi_Nomizu_1963}*{theorem II.8.1}\cite{Reckziegel_Wilhelmus_2006}*{theorem 2}\cite{Sternberg_1964}*{theorem 1.2}\cite{Nijenhuis}*{Theorem 1}}), which states that the identity component of the holonomy group at \(p \in M\) coincides with the group of holonomies along null-homotopic loops and that the corresponding Lie algebra is generated by the images of the curvature form at the points of the connected component of \(p\) in \(M\), parallel-transported to the point \(p\). In particular, the Lie algebra corresponding to the normal closure of the holonomy group is generated by the values of the curvature form in all local trivializations. \bigbreak We consider here the corresponding quantitative question of how the holonomy can be controlled by the curvature. More precisely, we assume that the structure group \(G \subset GL (F)\) is endowed with a bi-invariant metric and we define the holonomy amplitude of a curve \(\gamma \in C^1 ([0, 1], M)\) by \begin{multline} \ampl{\gamma} = \inf \, \biggl\{ \int_0^1 \abs{g'} \;:\; g \in C^1 ([0, 1], G), g (0) = \operatorname{id}, g (1) = \operatorname{Pt}_{\gamma} (1)\\[-1em] \text{ and \(g\) and \(\operatorname{Pt}_\gamma\) are homotopic relatively to \(\{0, 1\}\)} \biggr\} \, . \end{multline} The amplitude depends on the connection, the structure group \(G\) and the metric on \(G\), and is invariant under changes of gauge. If \(G\) is simply connected (which can in fact always be assumed by replacing the group \(G\) by its universal covering), the amplitude corresponds to the geodesic distance between the identity \(\operatorname{id}\) and \(\operatorname{Pt}_{\gamma} (1)\).
In the case where \(G\) is an abelian group, the holonomy amplitude can be computed by the integral formula \begin{equation*} \ampl{\gamma} = \biggabs{\int_0^1 \gamma^\ast \omega}, \end{equation*} where \(\gamma^\ast \omega\) is the pullback of the differential form \(\omega\), defined for each \(t \in [0, 1]\) by \(\gamma^\ast \omega (t) \triangleq \omega (\gamma (t))[\gamma' (t)]\, \mathrm{d} t\). If \(\sigma \in C^1 (\mathbb{B}^2, M)\) and if \(\gamma : [0, 1] \to M\) is defined for \(t \in [0, 1]\) by \(\gamma (t) \triangleq \sigma (\cos 2 \pi t, \sin 2 \pi t)\), we have by the Stokes--Cartan formula \begin{equation} \label{eq_iizahK3hai} \ampl{\gamma} = \biggabs{\int_{\mathbb{B}^2} \sigma^\ast d \omega\,} = \biggabs{\int_{\mathbb{B}^2} \sigma^\ast \Omega \,} \,, \end{equation} since the group \(G\) is abelian and thus \(\Omega = d \omega\). This implies the estimate \begin{equation} \ampl{\gamma} \le \int_{\sigma (\mathbb{B}^2)} \abs{\Omega} \,\mathrm{d} \mathcal{H}^2 \,, \end{equation} where the two-dimensional Hausdorff measure \(\mathcal{H}^2\) is taken with respect to a Riemannian metric on the manifold \(M\), and the norm with respect to the same Riemannian metric and to the metric on the Lie algebra \(\mathfrak{g}\). If \(M = \mathbb{R}^m\), the isoperimetric inequality \cite{Almgren_1986} then implies that for every closed curve \(\gamma : [0, 1] \to \mathbb{R}^m\) \begin{equation} \label{eq_fieHiefai0} \ampl{\gamma} \le \frac{\operatorname{length} (\gamma)^2}{4 \pi} \sup_M \abs{\Omega} \,. \end{equation} When \(G = U (1) \simeq SO (2)\), the connections are related to electro-magnetic gauge theories and the curvature \(\Omega\) of the connection corresponds to the magnetic field. Such connections appear in the definition of \emph{magnetic Sobolev spaces} \citelist{\cite{Esteban_Lions_1989}\cite{Lieb_Loss_2001}*{7.19--7.22}\cite{Kato_1972}*{(2.1)}}.
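As a concrete abelian illustration (a hypothetical example, not taken from the text): for a constant magnetic field \(B\) on \(\mathbb{R}^2\) with potential \(A = (-By/2, Bx/2)\), the phase \(\int_0^1 \gamma^\ast \omega\) along a circle of radius \(R\) equals the enclosed flux \(B\pi R^2\), in accordance with \eqref{eq_iizahK3hai}:

```python
import math

# omega = i A . dx with A = (-B*y/2, B*x/2), so Omega = i*B dx^dy.
# The phase along gamma(t) = R e^{2 pi i t} equals the enclosed flux B*pi*R^2.
def circle_phase(B, R, n=2000):
    # midpoint-rule approximation of int_0^1 A(gamma(t)) . gamma'(t) dt
    total = 0.0
    for j in range(n):
        t = (j + 0.5) / n
        x = R * math.cos(2 * math.pi * t)
        y = R * math.sin(2 * math.pi * t)
        dx = -2 * math.pi * R * math.sin(2 * math.pi * t) / n
        dy = 2 * math.pi * R * math.cos(2 * math.pi * t) / n
        total += (-B * y / 2) * dx + (B * x / 2) * dy
    return abs(total)

B, R = 1.7, 0.9
assert abs(circle_phase(B, R) - B * math.pi * R ** 2) < 1e-9
```

For this particular gauge the integrand \(A(\gamma(t))\cdot\gamma'(t)\) is constant along the circle, so the quadrature is exact up to rounding.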
The analysis of magnetic Sobolev spaces should be invariant under gauge transformation, that is, it should not depend on a particular choice of a local trivialization. In a recent work, Nguyen Hoai-Minh and the second author Jean Van Schaftingen have studied the problem of traces of magnetic Sobolev functions with constructions and estimates that depend only on the curvature of the connection \cite{Nguyen_VanSchaftingen}; a key point in this work was the estimate \eqref{eq_fieHiefai0} for \(U (1)\)--bundles. A nonabelian gauge-invariant extension of the theory of magnetic Sobolev spaces thus requires new estimates on the holonomy amplitude. We obtain the following non-abelian version of \eqref{eq_iizahK3hai}. \begin{theorem} \label{mainTheorem} If \(\sigma \in C^1 (\mathbb{B}^2, M)\) and \(\gamma : [0, 1] \to M\) is defined for \(t \in [0, 1]\) by \(\gamma (t) \triangleq \sigma (\cos 2 \pi t, \sin 2 \pi t)\), then \begin{equation*} \ampl{\gamma} \le \int_{\mathbb{B}^2} \abs{\sigma^\ast \Omega} \,. \end{equation*} \end{theorem} Here, \(\sigma^\ast \Omega\) is a \(\mathfrak{g}\)--valued \(2\)--form and \(\abs{\sigma^\ast \Omega}\) is the associated \emph{density} \citelist{\cite{Loomis_Sternberg}*{\S 10.3}\cite{Nicolaescu_2007}*{\S 3.4.1}\cite{Folland_1999}*{\S 11.4}}. \begin{corollary} \label{corollaryEuclidean} If \(M = \mathbb{R}^m\), if \(\gamma \in C^1([0, 1], M)\) and if \(\gamma (0) = \gamma (1)\), then \begin{equation*} \ampl{\gamma} \le \frac{\operatorname{length} (\gamma)^2 \, \sup_M \abs{\Omega}}{4 \pi} \,. \end{equation*} \end{corollary} \Cref{corollaryEuclidean} follows from \cref{mainTheorem} and from the observation that any closed curve \(\gamma\) bounds some minimal surface of area at most \(\frac{1}{4 \pi}(\operatorname{length} (\gamma))^2\) \cite{Almgren_1986}.
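The bound of \cref{corollaryEuclidean} can likewise be illustrated in the abelian setting (a hypothetical example): for a constant field \(B\), the amplitude along an ellipse is \(B\pi ab\), which is strictly below \(\operatorname{length}(\gamma)^2 B/(4\pi)\) unless the ellipse is a circle:

```python
import math

# Isoperimetric bound check: amplitude along an ellipse is B*pi*a*b;
# the corollary's bound is length(gamma)^2/(4*pi) * B.
def ellipse_length(a, b, n=5000):
    # midpoint rule for the perimeter of x = a cos(2 pi t), y = b sin(2 pi t)
    total = 0.0
    for j in range(n):
        t = (j + 0.5) / n
        dx = -2 * math.pi * a * math.sin(2 * math.pi * t)
        dy = 2 * math.pi * b * math.cos(2 * math.pi * t)
        total += math.hypot(dx, dy) / n
    return total

B, a, b = 1.3, 1.0, 0.6
amplitude = B * math.pi * a * b
bound = ellipse_length(a, b) ** 2 / (4 * math.pi) * B
assert amplitude < bound                                    # strict when a != b
assert abs(ellipse_length(1.0, 1.0) - 2 * math.pi) < 1e-9   # circle: equality case
```

For the circle the bound is attained, matching the equality case of the isoperimetric inequality.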
The proofs of \cref{mainTheorem} and \cref{corollaryEuclidean} are performed for the curvature in the classical sense, that is, when the connection form \(\omega\) is continuously differentiable. One could naturally ask whether the conclusion of \cref{mainTheorem} holds when \(\Omega\) is merely defined in a weak sense \cite{Uhlenbeck_1982} but still continuous, or whether \cref{corollaryEuclidean} holds when the weak curvature is bounded. If \(\sigma\) in \cref{mainTheorem} is a regular parametrization of a surface, one can also ask which traces of the curvature on the surface make the formula valid. \section{Preliminaries} \subsection{Properties of the amplitude of holonomy along paths} We state here some useful properties of the amplitude of holonomies along paths. \begin{proposition}[Amplitude of concatenated holonomies] If the metric on \(G\) is right-invariant, then for every \(\gamma \in C^1([0, 1], M)\) and \(\eta \in C^1([0, 1], M)\) such that \(\gamma (1) = \eta (0)\), \begin{equation*} \ampl{\gamma \cdot \eta} \le \ampl{\gamma} + \ampl{\eta} \,. \end{equation*} \end{proposition} \begin{proof} We have by definition of the concatenation \begin{equation*} \gamma \cdot \eta (t) = \begin{cases} \gamma (2t) &\text{if \(t \in [0, \frac{1}{2}]\)},\\ \eta (2 t - 1) & \text{if \(t \in [\frac{1}{2}, 1]\)}. \end{cases} \end{equation*} We next observe that \(\operatorname{Pt}_{\gamma \cdot \eta} (t) = \operatorname{Pt}_{\gamma} (2t)\) if \(t \in [0, \frac{1}{2}]\) and \(\operatorname{Pt}_{\gamma \cdot \eta} (t) = \operatorname{Pt}_{\eta} (2t - 1) \,\circ\, \operatorname{Pt}_{\gamma} (1)\) if \(t \in [\frac{1}{2}, 1]\).
It follows that if \(g \in C^1 ([0, 1], G)\) and \(h \in C^1([0, 1], G)\) are homotopic to \(\operatorname{Pt}_\gamma\) and \(\operatorname{Pt}_\eta\) relatively to \(\{0, 1\}\), then the map \(f \in C^1 ([0, 1], G)\) defined by \begin{equation*} f (t) \triangleq \begin{cases} g (2t) &\text{if \(t \in [0, \frac{1}{2}]\)},\\ h (2 t - 1)\, g (1) & \text{if \(t \in [\frac{1}{2}, 1]\)}, \end{cases} \end{equation*} is homotopic to \(\operatorname{Pt}_{\gamma \cdot \eta}\) and the conclusion thus follows by right-invariance of the metric on \(G\). \end{proof} \begin{proposition}[Amplitude of conjugate holonomy] If the metric on \(G\) is right-invariant, then for every \(\gamma \in C^1 ([0, 1], M)\) and every \(\eta \in C^1 ([0, 1], M)\) such that \(\eta (0) = \gamma (0) = \gamma (1)\), one has \begin{equation*} \ampl{\Bar{\eta} \cdot \gamma \cdot \eta} = \ampl{\gamma} \,. \end{equation*} \end{proposition} \begin{proof} Assume that \(g \in C^1 ([0, 1], G)\) is homotopic to \(\operatorname{Pt}_\gamma\) relatively to \(\{0, 1\}\) and that \(h \in C^1 ([0, 1], G)\) is homotopic to \(\operatorname{Pt}_\eta\) relatively to \(\{0, 1\}\). We construct the map \(H : [0, 1] \times [0, 1] \to G\) by setting for \((s, t) \in [0, 1] \times [0, 1]\), \begin{equation*} H (s, t) \triangleq \begin{cases} h (1 - 3s - t) \,h (1 - t)^{-1} &\text{if \(0 \le s \le \frac{1 - t}{3}\)},\\ g (\frac{3 s + t - 1}{1 + 2t}) \,h (1) \,h (1 - t)^{-1} &\text{if \(\frac{1 -t}{3} \le s \le \frac{2 + t}{3}\)},\\ h (3s - 2 - t)\, g (1)\, h (0)\, h (1 - t)^{-1} & \text{if \(\frac{2 + t}{3} \le s \le 1\)}. \end{cases} \end{equation*} We thus conclude that \(g\) is homotopic to \(\operatorname{Pt}_{\Bar{\eta} \cdot \gamma \cdot \eta}\), and the conclusion follows.
\end{proof} \subsection{Axial gauge} Our analysis will be facilitated by working with a trivialization that corresponds to the \emph{axial gauge}, also known as the \emph{Arnowitt--Fickler gauge} \citelist{\cite{Itzykson_Zuber_1980}*{12-1-1}\cite{Arnowitt_Fickler_1962}}. \begin{proposition} \label{proposition_axial_gauge} For every point \(p \in M\) and every \(v \in \mathbb{R}^m\), there exists a local trivialization \(U \times F\) such that \(p \in U\) and \(v \intprod \omega= 0\) everywhere in \(U\) in this local trivialization. \end{proposition} Here \(v \intprod \omega\) denotes the \emph{interior multiplication} (or \emph{contraction}) of the form \(\omega\) by the vector \(v\): \( v \intprod \omega (x) \triangleq \omega (x)[v] \in \mathfrak{g} \), which is also denoted by \(i_v \omega\). When \(M = \mathbb{R}^m\), \(F = \mathbb{C}\), \(G = U (1)\) and \(v \in \mathbb{R}^m\) is a fixed vector, then the connection form \(\omega\) can be described by setting \(\omega (w) = i A \cdot w\) for some vector field \(A : \mathbb{R}^m \to \mathbb{R}^m\) and for every \(w \in \mathbb{R}^m\), and then the axial gauge prescribes that the component \(A \cdot v\) of the vector field \(A\) vanishes everywhere. The axial gauge does not constrain the connection form in directions transversal to \(v\). \begin{proof}[Proof of \cref{proposition_axial_gauge}] Let \(\Tilde{\Phi} : U \times F \to B\) be a local trivialization of the bundle \(B\), where \(U\) is a ball; that is, \(\Tilde{\Phi}\) is a diffeomorphism which is linear on each fiber. Let \(\Tilde{\omega}\) be the connection form on \(U\). We now define a function \(g : U \to G\) by the condition that \(v \intprod (dg + \Tilde{\omega} g) = (dg + \Tilde{\omega} g)[v] = 0\). This can be done by parallel transport on every straight line parallel to the vector \(v\). We conclude by considering the map \(\Phi \triangleq \Tilde{\Phi} \,\circ\, g\).
\end{proof} \section{Derivative of the holonomy} We define, for \(r > 0\), the path \(\gamma_r : [0, 1] \to \mathbb{R}^2\) for each \(t \in [0, 1]\) by \(\gamma_r (t) \triangleq (r\cos 2 \pi t, r \sin 2 \pi t)\). We compute the holonomy on a circle of radius \(r > 0\) by finding a function \(g_r \in C^1 ([0,1], G)\) that satisfies the equation \begin{equation} \label{eq_rae2Yeengo} \left\{ \begin{aligned} g'_r (t) + 2 \pi \, r \omega (r e^{2 \pi i t})[i e^{2 \pi i t}] g_r (t) &= 0 & & \text{for \(t \in [0, 1]\)},\\ g_r (0) & = \operatorname{id}, \end{aligned} \right. \end{equation} where the plane \(\mathbb{R}^2\) is identified with the field of complex numbers, so that \(e^{2 \pi i t} = (\cos 2 \pi t, \sin 2 \pi t)\) and \(i e^{2 \pi i t} = (-\sin 2 \pi t, \cos 2 \pi t)\). The holonomy at \((r, 0)\) is then given by \(g_r (1)\). The core of the proof of \cref{mainTheorem} lies in the following derivative formula. \begin{lemma} If \(B_R \subset \mathbb{R}^2\) and if \(B_R \times F\) is a vector bundle, then for each \(r \in (0, R)\) one has \begin{equation*} \frac{\mathrm{d}}{\mathrm{d}r} g_r (1) - \omega (r)[1] \,g_r (1) + g_r (1)\, \omega (r) [1] = 2 \pi \, r \int_0^{1} g_r (1) \,g_r (t)^{-1} \, \Omega (r e^{2 \pi i t})[e^{2 \pi i t}, i e^{2 \pi i t}] \,g_r (t)\,\mathrm{d} t \, . \end{equation*} \end{lemma} \begin{proof} We define \(h_r (s)\triangleq \frac{\partial}{\partial r} g_r (s)\). In view of the holonomy equation \eqref{eq_rae2Yeengo}, the function \(h_r\) satisfies the system \begin{equation*} \left\{ \begin{aligned} \frac{h_r' (t)}{2 \pi} + r \omega (r e^{2 \pi i t})[i e^{2 \pi i t}] \, h_r (t) &= - ( \omega (r e^{2 \pi i t}) + r \partial_r \omega (r e^{2 \pi i t}))[i e^{2 \pi i t}] \,g_r (t) & & \text{for \(t \in [0, 1]\)},\\ h_r (0) & = 0 \,. \end{aligned} \right.
\end{equation*} By variation of parameters for solutions of differential equations (see for example \cite{Hartmam_1964}*{Corollary 2.1}), we have for each \(r \in (0, R)\), \begin{equation*} h_r (1) = - 2 \pi \int_0^{1} g_r (1) \,g_r (t)^{-1} \, \biggl(\omega (r e^{2 \pi i t}) + r \, \frac{\mathrm{d}}{\mathrm{d}r} \omega (r e^{2 \pi i t})\biggr)[i e^{2 \pi i t}] \, g_r (t) \,\mathrm{d} t \,. \end{equation*} We note that \begin{multline*} 2 \pi \int_0^{1}g_r (1) \, g_r (t)^{-1} \, \omega (r e^{2 \pi i t})[i e^{2 \pi i t}] \, g_r (t) \,\mathrm{d} t\\ = \int_0^{1} g_r (1) \, g_r (t)^{-1} \, \omega (r e^{2 \pi i t})\biggl[\frac{\mathrm{d}}{\mathrm{d}t} \bigl(e^{2 \pi i t}\bigr)\biggr] \, g_r (t) \,\mathrm{d} t. \end{multline*} Integrating by parts the term on the right-hand side we have, \begin{equation*} \begin{split} 2 \pi \int_0^{1}g_r (1) \, g_r (t)^{-1} &\, \omega (r e^{2 \pi i t})[i e^{2 \pi i t}] \, g_r (t) \,\mathrm{d} t \\ = &\,\omega (r)[1] \, g_r (1) - g_r (1) \, \omega (r) [1]\\ &-2 \pi \int_0^{1} g_r (1)\, g_r (t)^{-1}\, \omega (r e^{2 \pi i t})[r i e^{2 \pi i t}] \, \omega (r e^{2 \pi i t})[e^{2 \pi i t}] \, g_r (t) \,\mathrm{d} t\\ &- \int_0^{1} g_r (1) \, g_r (t)^{-1} \, \biggl(\frac{\mathrm{d}}{\mathrm{d}t} \omega (r e^{2 \pi i t})\biggr)[e^{2 \pi i t}] \, g_r (t) \,\mathrm{d} t\\ & + 2 \pi \int_0^{1} g_r (1) \, g_r (t)^{-1} \, \omega (r e^{2 \pi i t})[e^{2 \pi i t}]\, \omega (r e^{2 \pi i t})[r i e^{2 \pi i t}] \, g_r (t) \,\mathrm{d} t. \end{split} \end{equation*} We conclude that \vspace{1ex} \begin{equation*} \begin{split} h_r &(1)\\[-1ex] & = - 2 \pi \, r \int_0^{1}g_r (1) \, g_r (t)^{-1} \, \bigl(d \omega (r e^{2 \pi i t}) + \omega (r e^{2 \pi i t}) \wedge \omega (r e^{2 \pi i t})\bigr) [e^{2 \pi i t}, i e^{2 \pi i t}] \, g_r (t) \,\mathrm{d} t \\ &= -2 \pi \, r \int_0^{1} g_r (1) \, g_r (t)^{-1}\, \Omega (r e^{2 \pi i t})[e^{2 \pi i t}, i e^{2 \pi i t}] \, g_r (t)\,\mathrm{d} t \,.
\qedhere \end{split} \end{equation*} \end{proof} By placing ourselves, in view of \cref{proposition_axial_gauge}, in an axial gauge with respect to the vector \(v = (1, 0) \in \mathbb{R}^2 \), we obtain the formula \begin{equation*} \frac{\mathrm{d}}{\mathrm{d}r} g_r (1) = 2 \pi \, r \int_0^{1} g_r (1) \, g_r (t)^{-1}\, \Omega (r e^{2 \pi i t})[e^{2 \pi i t}, i e^{2 \pi i t}] g_r (t)\,\mathrm{d} t \, , \end{equation*} and, since the metric is bi-invariant, we deduce the following. \begin{proposition} \label{proposition_amplitude_derivative} If \(B_R \subset \mathbb{R}^2\) and if \(B_R \times F\) is a vector bundle, then for every \(r \in (0, R)\), \begin{equation*} \limsup_{s \to r} \frac{\abs{\ampl{\gamma_r}- \ampl{\gamma_s}}}{\abs{r - s}} \le r \int_{\mathbb{S}^1} \abs{\Omega (r e^{i\theta})} \,\mathrm{d} \theta. \end{equation*} \end{proposition} We then obtain the following as a consequence. \begin{proposition} \label{proposition_amplitude_integral} If \(B_R \subset \mathbb{R}^2\) and if \(B_R \times F\) is a vector bundle, then \begin{equation*} \ampl{\gamma_R} \le \int_{B_R} \abs{\Omega}. \end{equation*} \end{proposition} \Cref{mainTheorem} then follows by pulling back the curvature by \(\sigma\). In the framework of weak connections, a natural generalization of \cref{proposition_amplitude_integral} would be the case where \(\omega \in W^{1, 4} (B_R)\) so that \(\Omega \in L^1 (B_R)\) \cite{Uhlenbeck_1982}*{Lemma 1.1}. \begin{bibdiv} \begin{biblist} \bib{Almgren_1986}{article}{ author={Almgren, F.}, title={Optimal isoperimetric inequalities}, journal={Indiana Univ. Math. J.}, volume={35}, date={1986}, number={3}, pages={451--547}, issn={0022-2518}, doi={10.1512/iumj.1986.35.35028}, } \bib{Ambrose_Singer_1953}{article}{ author={Ambrose, W.}, author={Singer, I. M.}, title={A theorem on holonomy}, journal={Trans. Amer. Math. Soc.}, volume={75}, date={1953}, pages={428--443}, issn={0002-9947}, doi={10.2307/1990721}, } \bib{Arnowitt_Fickler_1962}{article}{ author={Arnowitt, R. L.}, author={Fickler, S.
I.}, title={Quantization of the Yang--Mills field}, journal={Phys. Rev. (2)}, volume={127}, date={1962}, pages={1821--1829}, } \bib{Cartan_1926}{article}{ title={Les groupes d'holonomie des espaces généralisés}, author={Cartan, \'Elie}, journal={Acta Math.}, volume={48}, number={1--2}, date={1926}, pages={1--42}, doi={10.1007/BF02629755}, } \bib{Esteban_Lions_1989}{article}{ author={Esteban, Maria J.}, author={Lions, Pierre-Louis}, title={Stationary solutions of nonlinear Schr\"{o}dinger equations with an external magnetic field}, conference={ title={Partial differential equations and the calculus of variations, Vol. I}, }, book={ series={Progr. Nonlinear Differential Equations Appl.}, volume={1}, publisher={Birkh\"{a}user}, address={Boston, Mass.}, }, date={1989}, pages={401--449}, } \bib{Folland_1999}{book}{ author={Folland, Gerald B.}, title={Real analysis}, series={Pure and Applied Mathematics}, edition={2}, subtitle={Modern techniques and their applications}, publisher={John Wiley \& Sons}, address={New York}, date={1999}, pages={xvi+386}, isbn={0-471-31716-0}, } \bib{Hartmam_1964}{book}{ author={Hartman, Philip}, title={Ordinary differential equations}, publisher={John Wiley \& Sons, Inc., New York-London-Sydney}, date={1964}, pages={xiv+612}, } \bib{Itzykson_Zuber_1980}{book}{ author={Itzykson, Claude}, author={Zuber, Jean-Bernard}, title={Quantum field theory}, note={International Series in Pure and Applied Physics}, publisher={McGraw-Hill}, address={New York}, date={1980}, pages={xxii+705}, isbn={0-07-032071-3}, } \bib{Kato_1972}{article}{ author={Kato, Tosio}, title={Schr\"{o}dinger operators with singular potentials}, booktitle={Proceedings of the International Symposium on Partial Differential Equations and the Geometry of Normed Linear Spaces (Jerusalem, 1972)}, journal={Israel J. 
Math.}, volume={13}, date={1972}, pages={135--148 (1973)}, issn={0021-2172}, doi={10.1007/BF02760233}, } \bib{Kobayashi_Nomizu_1963}{book}{ author={Kobayashi, Shoshichi}, author={Nomizu, Katsumi}, title={Foundations of differential geometry}, volume={I}, publisher={Interscience}, address={New York-London}, date={1963}, pages={xi+329}, } \bib{Lieb_Loss_2001}{book}{ author={Lieb, Elliott H.}, author={Loss, Michael}, title={Analysis}, series={Graduate Studies in Mathematics}, volume={14}, edition={2}, publisher={American Mathematical Society}, address={Providence, R.I.}, date={2001}, pages={xxii+346}, isbn={0-8218-2783-9}, doi={10.1090/gsm/014}, } \bib{Loomis_Sternberg}{book}{ author={Loomis, Lynn H.}, author={Sternberg, Shlomo}, title={Advanced calculus}, publisher={Jones and Bartlett}, address={Boston, Mass.}, date={1990}, pages={xii+580}, isbn={0-86720-122-3}, } \bib{Nguyen_VanSchaftingen}{article}{ author={Nguyen, Hoai-Minh}*{inverted={yes}}, author={Van Schaftingen, Jean}, title={Characterization of the traces on the boundary of functions in magnetic Sobolev spaces}, eprint={https://arxiv.org/abs/1905.01188}, } \bib{Nicolaescu_2007}{book}{ author={Nicolaescu, Liviu I.}, title={Lectures on the geometry of manifolds}, edition={2}, publisher={World Scientific Publishing}, address={Hackensack, N.J.}, date={2007}, pages={xviii+589}, isbn={978-981-277-862-8}, isbn={981-277-862-4}, doi={10.1142/9789812770295}, } \bib{Nijenhuis}{article}{ author={Nijenhuis, Albert}, title={On the holonomy groups of linear connections}, partial={ part={Ia}, subtitle={General properties of affine connections}, journal={Indagationes Math.}, volume={15}, date={1953}, pages={233--240}, }, partial={ part={Ib}, subtitle={General properties of affine connections}, journal={Indagationes Math.}, volume={15}, date={1953}, pages={241--249}, }, partial={ part={II}, subtitle={Properties of general linear connections}, journal={Indagationes Math.}, volume={16}, date={1954}, pages={17--25}, } } 
\bib{Reckziegel_Wilhelmus_2006}{article}{ author={Reckziegel, Helmut}, author={Wilhelmus, Eva}, title={How the curvature generates the holonomy of a connection in an arbitrary fibre bundle}, journal={Results Math.}, volume={49}, date={2006}, number={3-4}, pages={339--359}, issn={1422-6383}, doi={10.1007/s00025-006-0228-y}, } \bib{Sternberg_1964}{book}{ author={Sternberg, Shlomo}, title={Lectures on differential geometry}, publisher={Prentice-Hall}, address={Englewood Cliffs, N.J.}, date={1964}, pages={xv+390}, } \bib{Uhlenbeck_1982}{article}{ author={Uhlenbeck, Karen K.}, title={Connections with \(L^{p}\) bounds on curvature}, journal={Comm. Math. Phys.}, volume={83}, date={1982}, number={1}, pages={31--42}, issn={0010-3616}, } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} {}A phenomenologically interesting string model must contain the standard model of strong and electroweak interactions in its low energy effective field theory limit. Many such realizations are constructed as orbifolds, both symmetric \cite{DHVW} and asymmetric \cite{NSV}, of the $4$-dimensional heterotic string. Recently, $3$-family grand unified string models were constructed via asymmetric orbifolds \cite{kt}. Analyses of these models would require the determination of the couplings of states in their spectra. The main goal of the present work is to provide a prescription for calculating their correlation functions and scattering amplitudes. In the process, the quantum numbers of the massless states are found and the selection rules of their couplings are determined. {}The prescription for calculating any correlation function in orbifold conformal field theory is given in Ref \cite{DFMS}. However, the actual calculations can be quite non-trivial \cite{LMN} when the couplings of twisted string states are involved. The problem becomes even more difficult in asymmetric orbifold models, where one typically encounters some ambiguities which are not easily resolved. {}In contrast to symmetric orbifolds, where the original lattice of the $6$ compactified dimensions ({\em e.g.}, the compactification radii) can be arbitrary, consistency of asymmetric orbifolds imposes strong constraints on the allowed lattices, typically with enhanced (discrete or local) symmetry. This enhanced symmetry allows us to treat twists as shifts in the momentum lattices, in the so-called bosonic supercurrent (or covariant lattice) formalism \cite{bsc,sw}. Twisted states in an asymmetric orbifold now become ordinary momentum states in this bosonic supercurrent formalism; so their quantum numbers are straightforward to identify and calculating their correlation functions becomes relatively easy.
{}In this paper, we shall first review this bosonic supercurrent formalism, and discuss briefly its rules for model-building. This formalism is a generalization of the free fermionic string model construction \cite{KLT}. So their rules for model-building are quite similar. Next we discuss the prescription for calculating the correlation functions, in particular for the twist states. All the couplings can be determined from the correlation functions, or the scattering amplitudes. For clarification, we shall apply this approach to the original ${\bf Z}_3$ symmetric (at special radii) and asymmetric models \cite{DHVW,NSV}. Then we shall discuss the construction of higher-level string models. Our goal is to apply this formalism to determine the couplings in the $3$-family grand unified models constructed recently. After we explain the prescription for calculating the couplings, the form of the superpotential for a number of these models will be presented. Their phenomenological implications will be discussed elsewhere. {}Within the framework of conformal (free) field theory and orbifolds, we find only one $3$-family $E_6$ model with a non-abelian hidden sector \cite{kt}. Here, we give its construction in the bosonic supercurrent formalism, where we see that its discrete symmetry and the charge assignments in the spectrum are quite non-trivial. There are two orbifold constructions of this model, where some of the twisted states in one orbifold appear as untwisted states in the other orbifold. The supercurrents in these two constructions are rather different in the bosonic supercurrent formalism; however, the form of the superpotential (at least at tree-level) turns out to be the same. The same procedure to obtain the forms of superpotentials is also applied to the two interesting $3$-family $SU(6)$ models, one with an $SU(3)$, and the other with an $SU(2) \otimes SU(2)$, asymptotically-free hidden sector. 
{}Without much work, we can obtain the superpotential for the $3$-family grand unified models (or standard-like models) that can be obtained by giving the $E_6$ adjoint Higgs an appropriate vacuum expectation value. We illustrate this with the $3$-family $SO(10)$ model that can be obtained this way, and which was constructed as an orbifold before. {}The plan is as follows: some preliminary discussions are given in Section II, where the basic idea, the advantage, and the issues of the bosonic supercurrent formalism are reviewed. The orbifold rules for level-$1$ models are discussed in Section III and those for higher-level models are discussed in Section IV. Here, the prescription for calculating correlation functions is also discussed. Since the rules for model-building are given in the light-cone gauge, we explain how to calculate the scatterings in the covariant gauge. Section V contains a discussion of the original ${\bf Z}_3$ symmetric (at special radii) and asymmetric models. The construction of the $3$-family $E_6$ grand unified model in this formalism is discussed in Section VI, where some terms in its superpotential are also presented. The $SO(10)$ model is discussed in Section VII. Section VIII discusses the $SU(6)$ models. The quantum numbers of the massless spectra of these models and the form of the relevant supercurrents are given in the tables. Section IX contains some concluding remarks. \section{Preliminaries} {}To be specific, let us consider $4$-dimensional heterotic string models within the conformal field theory (CFT) framework, where free fields are used. Consider such a model with a Lorentzian lattice $\Gamma^{6,22}$. An orbifold is realized via modding out this lattice by a point group $P$, {\em i.e.}, a group of discrete rotations, or twists. We shall restrict ourselves to Abelian twists only. Let $X ({\overline z})$ be one of the right-moving complex chiral bosons in the $6$ compactified dimensions in $\Gamma^{6,22}$.
In terms of 2 real bosons, $X=(X_1 +iX_2)/{\sqrt 2}$. For a ${\bf Z}_N$ twist, in the neighborhood of a twist field located at the origin, $X ({\overline z})$ undergoes a phase rotation \begin{eqnarray}\label{monod} \partial X ({\overline z}e^{-2\pi i})=\exp(-2\pi i k/N) \partial X ({\overline z})~, \end{eqnarray} which is called the monodromy of $X$. (Note that $k$ is an integer.) The basic twist field $\sigma ({\overline z})$ has conformal weight $h=k(1-k/N)/(2N)$. It twists $X$ by $\exp(-2\pi i k/N)$ and its complex conjugate ${\overline X}$ by $\exp(2\pi i k/N)$, {\em i.e.}, their operator product expansions (OPEs) are \cite{DFMS} \begin{eqnarray}\label{tau} i \partial X ({\overline z}) \sigma ({\overline w}) &=& ({\overline z} - {\overline w})^{-(1-k/N)} \tau({\overline w}) + ... ~,\nonumber \\ i \partial {\overline X} ({\overline z}) \sigma ({\overline w}) &=& ({\overline z} -{\overline w})^{-k/N} \tau ' ({\overline w}) + ... ~, \end{eqnarray} where $\tau $ and $\tau '$ are excited twist fields. {}Suppose we can rewrite $i \partial X$ as exponentials of a pair of boson fields $\phi _1$ and $\phi _2$, \begin{equation} i \partial X ({\overline z}) = \exp (i e \cdot \phi ({\overline z})) + ...~. \end{equation} Here $e$ is a $2$-dimensional vector and the conformal weight $h=1$ condition requires $e^2 =2$. Then, the phase rotation of $\partial X$ in Eq. (\ref{monod}) becomes a shift in $\phi$ \begin{equation} \phi ({\overline z}e^{-2\pi i})= \phi ({\overline z}) - 2\pi u ~, \end{equation} where $e\cdot u=k/N$; that is, a twist on $\partial X$ becomes a shift in $\phi$. To recover the correct OPEs of $\partial X$ and $\partial {\overline X}$, it turns out that $\partial X$ must be written as a linear combination of such exponential terms. The proper monodromy condition of $i \partial X$ then implies that $\phi$ must be compactified in some lattice.
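The equivalence between the twist and the shift can be checked directly on a single exponential term; under the shift above, with $e\cdot u = k/N$,

```latex
% A twist of \partial X is reproduced by a shift of \phi:
\begin{eqnarray}
e^{ie\cdot \phi ({\overline z}e^{-2\pi i})}
  = e^{ie\cdot (\phi ({\overline z}) - 2\pi u)}
  = e^{-2\pi i \, e\cdot u}\, e^{ie\cdot \phi ({\overline z})}
  = e^{-2\pi i k/N}\, e^{ie\cdot \phi ({\overline z})}~, \nonumber
\end{eqnarray}
```

which is precisely the monodromy in Eq. (\ref{monod}).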
$\phi$ provides a particularly useful basis if the supercurrent, as well as the twist fields such as $\sigma$ and $\tau$, can also be written in terms of ordinary momentum states. This is the basic idea of the bosonic supercurrent formalism \cite{bsc}. {}Let us see more explicitly how this conversion of twists to shifts can be realized, and the advantages of this approach. In this paper, we are mostly interested in ${\bf Z}_2$ and ${\bf Z}_3$ twists. For $N=3$ in Eq. (\ref{monod}), the lattice must have a ${\bf Z}_3$ symmetry, so the $U(1)^2$ symmetry enhances to an $SU(3)$ symmetry. Here, $\partial X_1$ and $\partial X_2$ are the Cartan generators and the six root generators are given by $e^{{\pm}ie_\alpha \cdot X({\overline z})}c({\pm} e_\alpha)$, $\alpha =1,2,3$, where the 2-dimensional vectors $e_1$ and $e_2$ are the simple roots of $SU(3)$ and we define $e_3 =-e_1 -e_2$. Note that $e_\alpha \cdot e_\beta = 2$ for $\alpha = \beta$, and $-1$ otherwise. For convenience, we shall not always explicitly display the cocycle operator $c({\pm} e_\alpha )$: its presence is understood. {}In the standard orbifold formalism, the supercurrent for the right-movers can be written as \begin{equation} T_F={i\over 2}\psi \partial X + {\mbox {h.c.}} + ...~, \end{equation} where $\psi$ is a world-sheet complex fermion. Since each $e^{{\pm}ie_\alpha \cdot X}$ has conformal weight 1, the same as $i \partial X$, we may choose to rewrite the supercurrent in terms of the $SU(3)$ roots. The choice is constrained by the condition that they must obey the same OPEs as $i \partial X$ and $i \partial {\overline X}$. For any Lie algebra, one can always find such a subalgebra by rotating the Cartan generators to the root system of the algebra. The new $U(1)$ generators will always be a linear combination of the root generators.
For $SU(3)$, the choice is unique, \begin{equation}\label{simple3} i \partial X = {1 \over {\sqrt 3}} \sum_\alpha e^{-ie_\alpha \cdot \phi}~,~~~ i \partial {\overline X} = {1 \over {\sqrt 3}} \sum_\alpha e^{ie_\alpha \cdot \phi}~. \end{equation} Now, there are three twist fields $\sigma _\alpha$ in the ${\bf Z}_3$ twisted sector. In this basis, it is easy to check that \begin{eqnarray} i \partial X ({\overline z}) \sigma _\alpha ({\overline w}) = ({\overline z} - {\overline w})^{-2/3} \tau_\alpha ({\overline w}) + ... ~,\\ \nonumber i \partial {\overline X} ({\overline z}) \sigma _\alpha ({\overline w}) = ({\overline z} -{\overline w})^{-1/3} \tau_\alpha ' ({\overline w}) + ... ~, \end{eqnarray} and these OPEs agree with Eq. (\ref{tau}) for $N=3$, where \begin{eqnarray} \sigma_\alpha = e^{ie_\alpha \cdot \phi /3}~, ~~~ \tau_\alpha = {1 \over {\sqrt 3}} e^{-2ie_\alpha \cdot \phi /3} ~,~~~ \tau_\alpha' = {1\over \sqrt{3}} \sum_{\beta \not=\alpha} e^{i(e_\alpha/3+e_\beta ) \cdot \phi }~. \end{eqnarray} Note that $\tau_\alpha'$ is a linear combination of the corresponding vertex operators for {\em two} states with the same highest weight. {}The usefulness of the bosonic supercurrent formalism is now clear. The untwisted states lie in the original lattice while the twist states lie in the shifted lattice. Since they are all expressed in terms of ordinary momentum states, their normalizations and degeneracies, as well as their OPEs with each other, are easy to calculate. {}For a ${\bf Z}_2$ twist, it is sometimes convenient to decompose $X$ into 2 real bosons, $X_1$ and $X_2$. If the compactification radii of the bosons are 1, we can use the fermion-boson equivalence in CFT to rewrite each $X_i$ as a complex fermion. So, for orbifolds with only ${\bf Z}_2$ twists, the internal parts of the supercurrent can be rewritten entirely into world-sheet fermions. This is the free fermionic string model construction \cite{KLT}.
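As a quick consistency check on the twist-field identifications above, one can verify numerically that the momentum-state weights $h = p^2/2$ on the $SU(3)$ root lattice reproduce the orbifold values $h(\sigma) = k(1-k/N)/(2N)$, $h(\tau) = h(\sigma) + k/N$ and $h(\tau') = h(\sigma) + 1 - k/N$ for $N=3$, $k=1$. The sketch below uses an explicit (illustrative) choice of simple-root coordinates with $e_\alpha^2 = 2$ and $e_1\cdot e_2 = -1$:

```python
import numpy as np

# Conformal weights of Z_3 twist fields realized as momentum states on the
# SU(3) root lattice: a vertex operator exp(i p . phi) has weight h = p^2 / 2.
# The explicit root coordinates are an illustrative choice satisfying
# e^2 = 2 and e1 . e2 = -1.
e1 = np.array([np.sqrt(2.0), 0.0])
e2 = np.array([-1.0 / np.sqrt(2.0), np.sqrt(1.5)])
assert abs(e1 @ e1 - 2.0) < 1e-12 and abs(e2 @ e2 - 2.0) < 1e-12
assert abs(e1 @ e2 + 1.0) < 1e-12

def h(p):
    """Conformal weight of the momentum state exp(i p . phi)."""
    return (p @ p) / 2.0

N, k = 3, 1
h_sigma = h(e1 / 3)            # sigma_alpha = exp(i e_alpha . phi / 3)
h_tau = h(-2 * e1 / 3)         # tau_alpha   = exp(-2 i e_alpha . phi / 3)
h_tau_p = h(e1 / 3 + e2)       # tau'_alpha  = exp(i (e_alpha/3 + e_beta) . phi)

print(h_sigma)  # k (1 - k/N) / (2N) = 1/9
print(h_tau)    # h_sigma + k/N      = 4/9
print(h_tau_p)  # h_sigma + 1 - k/N  = 7/9
```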
In the next section, we shall give the rules for model-building using this bosonic supercurrent formalism. Since some of the $3$-family grand unified models can be constructed with the bosonic supercurrent formalism, it is natural to use this formalism to calculate the couplings in these models. {}In the examples with only level-$1$ current algebras, we shall use the above supercurrent with (\ref{simple3}) involving the $SU(3)$ lattice. For the $3$-family grand unified models, the supercurrents are more complicated. Consistency with the transformation of specific twists to shifts in a given orbifold model essentially fixes the supercurrent. As mentioned earlier, there are two equivalent constructions of the $3$-family $E_6$ model. In one construction, namely, the $E1$ model, the supercurrent involves an $E_6$ lattice and its $SU(3)^2$ sub-lattice; while, in the other construction of the same model, namely, the $E2$ model, the supercurrent involves the appropriate $SU(2)^4$, $SU(6)$ and $SU(3)^2$ sub-lattices of $E_6$. \section{Model-Building Rules} {}In this section we give the rules for constructing Abelian asymmetric orbifolds using the bosonic supercurrent formalism. The rules that we present here are less general than the ones in Ref \cite{kt} because we confine our attention to a more limited class of orbifolds. Nevertheless, some of the three-family grand unified models found recently \cite{kt} can be constructed within the present framework. Moreover, this formalism is particularly useful in computing couplings and identifying quantum numbers (both local and discrete) of the physical states. Throughout the paper, we consider only heterotic strings compactified to four space-time dimensions. \subsection{Framework} {}In this subsection we set up the framework for the remainder of this section.
In the light-cone gauge, which we adopt, we have the following world-sheet degrees of freedom: one right-moving complex boson $X^0$ (which along with its left-moving counterpart corresponds to two transverse space-time coordinates); three right-moving complex bosons $X^{1,2,3}$ (corresponding to six internal coordinates); four right-moving complex fermions $\psi^a$, $a=0,1,2,3$ ($\psi^0$ is the world-sheet superpartner of $(X^0)^\dagger$, whereas $\psi^{1,2,3}$ are the world-sheet superpartners of $(X^{1,2,3})^\dagger$); two left-moving real bosons ${\cal X}^\mu_L$, $\mu=1,2$ (these are the left-moving counterparts of the two real bosons ${\cal X}^\mu_R$ corresponding to $X^0$ via $X^0=({\cal X}^1_R+i{\cal X}^2_R)/\sqrt{2}$); $22$ left-moving real bosons $\varphi^A$, $A=1,...,22$ (corresponding to twenty-two internal coordinates). Before orbifolding, the corresponding string model has $N=4$ space-time supersymmetry and the internal momenta span an even self-dual Lorentzian lattice $\Gamma^{6,22}$. The underlying conformal field theory of the internal degrees of freedom is given by $G_L \otimes G_R$ where $G_L$ and $G_R$ are the left- and right-moving level-$1$ Kac-Moody algebras with central charges $c_L=22$ and $c_R=9$. The right-moving Kac-Moody algebra consists of two factors, {\em i.e.}, $G_R={\cal G}_R \otimes SO(6)_1$ where ${\cal G}_R$ is a level $1$ Kac-Moody algebra (with central charge $6$) corresponding to the right-moving part of the lattice $\Gamma^{6,22}$ and $SO(6)_1$ (with central charge $3$) comes from the right-moving fermions. After orbifolding, the underlying conformal field theory becomes $(G^{\prime}_L \otimes {\cal C}_L)\otimes (G^{\prime}_R \otimes {\cal C}_R)$. Here $G^{\prime}_L$ and $G^{\prime}_R$ are left- and right-moving Kac-Moody algebras, and ${\cal C}_L$ and ${\cal C}_R$ are certain cosets that arise in the breakings $G_L\equiv {\cal G}_L \supset {\cal G}^{\prime}_L$ and ${\cal G}_R \supset {\cal G}^{\prime}_R$, respectively.
(Note that since we are considering Abelian orbifolds, the $SO(6)$ subgroup of $G_R$ breaks to a level-1 subgroup, and the corresponding coset is trivial.) In this section, we restrict ourselves to the cases where after orbifolding, the left- and right-moving Kac-Moody algebras are realized at level $1$ and the cosets are trivial. The rules for higher-level models will be discussed in the next section. {}It is convenient to organize the string states into sectors labeled by the monodromies of the string degrees of freedom. Consider the right-movers \begin{eqnarray} \psi^a ({\overline z}e^{-2\pi i}) &=& \exp(-2\pi i V^a_i) \psi^a (\overline z)~,\nonumber\\ \partial X^a ({\overline z}e^{-2\pi i}) &=& \exp(-2\pi i T^a_i) \partial X^a ({\overline z})~. \end{eqnarray} Here we note that $T^0_i$ must always be zero as ${\cal X}^\mu (ze^{2\pi i}, {\overline z}e^{-2\pi i})={\cal X}^\mu (z,{\overline z})$ since ${\cal X}^\mu (z,{\overline z})={\cal X}^\mu_L (z)+{\cal X}^\mu_R ({\overline z})$ correspond to space-time coordinates. Let us define $s_i \equiv V^0_i$. The sectors with $s_i \in {\bf Z}$ give rise to the space-time bosons, whereas the sectors with $s_i \in {\bf Z}+1/2$ give rise to space-time fermions. The monodromy of the supercurrent \begin{equation} T_F={i\over 2}\sum_{a=0}^{3} \psi^a \partial X^a +{\mbox {h.c.}}~ \end{equation} is given by $s_i$, {\em i.e.}, $T_F ({\overline z}e^{-2\pi i})=\exp(-2\pi is_i) T_F({\overline z})$. This, in particular, implies the {\em triplet} constraint on the supercurrent: \begin{equation}\label{super} V^a_i+T^a_i=s_i~({\mbox{mod}}~1)~,~~~a=0,1,2,3~. \end{equation} {}The twists on $\psi^{a}$ can be written as shifts if we bosonize the complex fermions: \begin{eqnarray} \psi^{a}&=& \exp(i \rho^{a}) ~, \nonumber \\ \psi^{a \dagger}&=& \exp(-i \rho^{a}) ~. \end{eqnarray} This latter form will be useful when we rewrite the {\em triplet} constraint for the bosonic supercurrent.
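The triplet constraint (\ref{super}) is just arithmetic modulo 1: once the boson monodromies $T^a_i$ and the sector label $s_i$ are fixed, the fermion monodromies are determined. A minimal sketch with illustrative ${\bf Z}_3$-style monodromies (these particular values are not taken from the models of this paper):

```python
from fractions import Fraction as F

# Triplet constraint V^a + T^a = s (mod 1): given the boson monodromies T^a
# (with T^0 = 0, since X^0 corresponds to space-time coordinates) and the
# sector label s, solve for the fermion monodromies V^a modulo 1.
# The Z_3-style values below are illustrative, not a complete model.
T = [F(0), F(1, 3), F(1, 3), F(1, 3)]

def fermion_monodromies(T, s):
    """Return V^a = s - T^a (mod 1) for each a."""
    return [(s - Ta) % 1 for Ta in T]

V_boson = fermion_monodromies(T, F(0))       # space-time bosons: s integer
V_fermion = fermion_monodromies(T, F(1, 2))  # space-time fermions: s in Z + 1/2

print(V_boson)    # [Fraction(0, 1), Fraction(2, 3), Fraction(2, 3), Fraction(2, 3)]
print(V_fermion)  # [Fraction(1, 2), Fraction(1, 6), Fraction(1, 6), Fraction(1, 6)]

# the constraint indeed holds modulo 1 in both sectors
assert all((V + Ta) % 1 == F(0) % 1 for V, Ta in zip(V_boson, T))
assert all((V + Ta) % 1 == F(1, 2) % 1 for V, Ta in zip(V_fermion, T))
```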
{}Because of the world-sheet $N=2$ superconformal field theory (SCFT) (which is necessary for $N=1$ space-time supersymmetry), there is a conserved right-moving $U(1)$ current \begin{equation} Y (\overline{z}) = i \sum_{a=1}^{3} \partial \rho^{a}(\overline{z})~. \end{equation} Therefore the supercurrent can be divided into two pieces $T_{F}(\overline{z})=T_{F}^{+}(\overline{z})+T_{F}^{-}(\overline{z})$ with $U(1)$ charges $\pm 1$. The energy momentum tensor $T$, the supercurrent $T_{F}^{\pm}$ together with $Y$ form a global $N=2$ superconformal algebra \begin{eqnarray}\label{SCFT} T({\overline z}) T(0) &\sim& {{{3\over 4}\hat{c}}\over {{\overline z}^4}}+{{2T(0)}\over {{\overline z}^2}}+ {{\partial T(0)}\over {{\overline z}}}+ \cdots ~,\nonumber\\ T({\overline z}) T^{\pm}_F (0) &\sim& {{{3\over 2}T^{\pm}_F (0)}\over {{\overline z}^2}} + {{\partial T^{\pm}_F (0)}\over {{\overline z}}} + \cdots ~,\nonumber\\ T^+_F({\overline z}) T^-_F (0) &\sim& {{{1\over 8}\hat{c}}\over {{\overline z}^3}}+ {{{1\over 4} Y(0)}\over {{\overline z}^2}}+ {{{1\over 4} T(0)}\over {{\overline z}}}+ {{1\over 8} {\partial Y(0)}\over {{\overline z}}}+ \cdots ~,\\ Y({\overline z}) T^{\pm}_F (0) &\sim& {{\pm T^{\pm}_F (0)}\over {{\overline z}}} + \cdots ~,\nonumber\\ Y({\overline z}) Y(0) &\sim& {{{1\over 2}\hat{c}}\over {{\overline z}^2}}+ \cdots ~,\nonumber \end{eqnarray} where only the singular terms are shown. In our case, $\hat{c}={2 \over 3} c=8$. (The space-time part of the supercurrent carries $\hat{c}=2$ and the internal part has $\hat{c}=6$.) {}Let us consider the cases where the Kac-Moody algebra ${\cal G}^{\prime}_{R}$ corresponding to the right-moving part of the lattice is realized at level $1$ and has central charge 6. Then there exist six right-moving real bosons $\phi^I$ such that $i\partial \phi^I$ are the vertex operators for the Cartan generators of ${\cal G}^{\prime}_R$.
As discussed in the previous section, in order to rewrite the twists in $\partial X^{a}$ as shifts in $\phi^{I}$, $\partial X^{a}$ must take the following form: \begin{equation}\label{X^a} i\partial X^a =\sum_{{Q}^{2}=2} {\xi}^a ({Q}) J_{Q}~,~~~a=1,2,3~, \end{equation} where \begin{equation} J_{Q} ({\overline z})= \exp (i{Q} \cdot {\bf \phi}({\overline z}))~ c({Q}) ~. \end{equation} We have introduced six-dimensional real vectors ${Q}=({Q}^1,...,{Q}^6)$ which are root vectors of ${\cal G}_R$ with length squared 2. This ensures that $i\partial X^a$ has conformal dimension $1$. The $c({Q})$ are cocycle operators necessary in the Kac-Moody algebra. Note that $J_{Q}$ are Kac-Moody currents for root generators. The supercurrent is therefore a linear combination of terms with different $H$ and $Q$ charges. {}Here ${\xi}^a ({Q})$ are numerical coefficients constrained by the OPEs: \begin{eqnarray}\label{XOPE} \partial X^{a} (\overline{z}) \partial X^{b} (0) &\sim& {\mbox{regular}} \nonumber ~,\\ \partial X^{a} (\overline{z}) \partial X^{b \dagger} (0) &\sim& - \overline{z}^{-2} \delta^{ab} + {\mbox{regular}} ~. \end{eqnarray} {}The monodromies $(V^a_i,T^a_i)$ for $\psi^{a}$ and $\partial X^{a}$ can now be translated into monodromies $(V^a_i, U^I_i)$ of $\rho^{a}$ and $\phi^{I}$: \begin{eqnarray} \rho_a ({\overline z}e^{-2\pi i}) &=& \rho_a ({\overline z})-2\pi V^a_i ~, \nonumber\\ \phi^I ({\overline z}e^{-2\pi i}) &=& \phi^I ({\overline z})-2\pi U^I_i~. \end{eqnarray} {}In this basis, the {\em triplet} constraint on the supercurrent becomes \begin{equation}\label{triplet} \xi^a (Q)=0~{\mbox{unless}}~V^a_i+ U_i \cdot Q= s_i~({\mbox{mod}}~1)~. \end{equation} {}In terms of the chiral bosons $\rho_a$ and $\phi^I$, the energy momentum tensor $T$ is given by: \begin{equation} T({\overline z}) = -{1\over 2} \sum_{a} (\partial \rho_a)^2 -{1\over 2} \sum_{I} (\partial \phi^I)^2 -{1\over 2} \sum_{\mu} ( \partial X^{\mu}_R)^2 ~. 
\end{equation} The above form of $T$ together with the bosonic supercurrent $T_F^{\pm}$ and $Y$ satisfy the $N=2$ SCFT (\ref{SCFT}). {}Having discussed the right-moving degrees of freedom, let us turn to the left-movers. Since ${\cal G}^{\prime}_L$ is realized at level $1$ and has central charge $22$, we have the following monodromy conditions on the fields $\varphi^A$: \begin{equation}\label{varphi} \varphi^A (ze^{2\pi i})=\varphi^A (z)+2\pi U^A_i ~. \end{equation} {}The monodromies $V^a_i, U^I_i, U^A_i$ can be conveniently combined into $(10,22)$ dimensional Lorentzian vectors (with metric $((-)^{10},(+)^{22})$): \begin{equation}\label{MONO} V_i=(V^a_i\vert U^I_i \vert\vert U^A_i)~. \end{equation} The monodromies $V_i$ can be viewed as fields $\Phi$ (where $\Phi$ is a collective notation for $\rho_a$, $\phi^I$ and $\varphi^A$) being periodic $\Phi (ze^{2\pi i} ,{\overline z}e^{-2\pi i})=\Phi(z,{\overline z})$ up to the identification $\Phi \sim g(V_i) \Phi g^{-1}(V_i)$, where $g(V_i)$ is an element of the orbifold group $G$. For $G$ to be a finite discrete group, the element $g(V_i)$ must have a finite order $m_i \in {\bf N}$, {\em i.e.}, $g^{m_i} (V_i)=1$. This implies that the vector $V_i$ must be a rational multiple of a vector in $\Delta^4 \otimes \Gamma^{6,22}$, that is, $m_i V_i \in \Delta^4 \otimes \Gamma^{6,22}$. Here $\Delta^4$ is the odd self-dual Euclidean lattice spanned by the four-dimensional vectors $p$ that correspond to the vector ${\bf v}$ and spinor ${\bf s}$ irreps of $SO(8)_1$. Thus, this lattice can be described as being spanned by vectors of the following form: $p=(p^0,p^1,p^2,p^3)$, where either $p^a \in {\bf Z}$ and $\sum_a p^a \in 2{\bf Z}+1$ (the momenta corresponding to the ${\bf v}$ irrep), or $p^a \in {\bf Z}+{1\over 2}$ and $\sum_a p^a \in 2{\bf Z}$ (the momenta corresponding to the ${\bf s}$ irrep).
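For instance, representative momenta in the two classes are

```latex
p_{\bf v}=(1,0,0,0)~,~~p_{\bf v}^2=1~;\qquad
p_{\bf s}=({\textstyle{1\over 2}},{\textstyle{1\over 2}},
{\textstyle{1\over 2}},{\textstyle{1\over 2}})~,~~p_{\bf s}^2=1~,
```

both of odd length squared, which illustrates why $\Delta^4$ is an odd lattice.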
Here we note that we could have chosen $\Delta^4$ to be spanned by the four-dimensional vectors $p$ that correspond to the vector ${\bf v}$ and conjugate ${\bf c}$ irreps of $SO(8)_1$. This freedom in choosing the lattice $\Delta^4$ is immaterial, and is related to the choice of the structure constant $k_{00}$ to be introduced below. {}To describe all the elements of the group $G$, it is convenient to introduce the set of generating vectors $\{V_i\}$ such that $\alpha V={\bf 0}$ if and only if $\alpha_i \equiv 0$. Here ${\bf 0}$ is the null vector, {\em i.e.}, the vector of the form (\ref{MONO}) with all its entries being null: \begin{equation} {\bf 0}=(0^4\vert 0^6\vert\vert 0^{22})~. \end{equation} Also, $\alpha V \equiv \sum_i \alpha_i V_i$ (the summation is defined as, say, $(V_i+V_j)^a=V^a_i+V^a_j$), $\alpha_i$ being integers that take values from $0$ to $m_i -1$. The elements of the group $G$ are then in one-to-one correspondence with the vectors $\alpha V$ and will be denoted by $g(\alpha V)$. It is precisely the Abelian nature of $G$ that allows this correspondence (by simply taking all the possible linear combinations of the generating vectors $V_i$). {}Now we can identify the sectors of the model. They are labeled by the vectors $\alpha V$, and in a given sector $\alpha V$ the monodromies of the string degrees of freedom are given by $\Phi (ze^{2\pi i}, {\overline z}e^{-2\pi i})=g(\alpha V) \Phi (z, {\overline z}) g^{-1} (\alpha V)$. It is clear that the sectors with $\alpha s \in {\bf Z}$ give rise to the space-time bosons, whereas those with $\alpha s\in{\bf Z}+1/2$ give rise to the space-time fermions. {}Note that a sector described by the vector \begin{equation} V_0=((-{1\over 2})^4 \vert 0^6 \vert\vert 0^{22}) \end{equation} is always present in any orbifold model. This is related to the $N=4$ SUSY of the original Narain model \cite{narain}.
In other words, the $V_0$ sector is the Ramond sector of the Narain model, whereas the ${\bf 0}$ sector is the Neveu-Schwarz sector. It is convenient to include this sector $V_0$ into the set of generating vectors $\{V_i\}$ (then $i=0,1,2,...$). Since we have included $V_0$ into the set $\{V_i\}$, without loss of generality we can set $s_i=0$ for $i \not= 0$. Then the space-time bosons come from the sectors $\alpha V$ with $\alpha_0=0$, and the space-time fermions come from the sectors $\alpha V$ with $\alpha_0=1$. (Note that $m_0=2$.) With this definition, we can relax the constraint on $m_i V_i$, and require that $m_i V_i \in {\bf Z}^4 \otimes \Gamma^{6,22}$. The odd self-dual lattice $\Delta^4$ for the Narain model will then emerge as a consequence of the spectrum generating formula (see below). {}We use the light-cone gauge in deriving the rules for computing the spectrum of a string model, since it is manifestly ghost-free and so all the states constructed are physical. However, it is more convenient to discuss scattering in the covariant gauge. Some of the issues, such as picture changing, are clearer in the covariant approach. The translation from the light-cone gauge to the covariant gauge is straightforward. Given a vertex operator in the light-cone gauge, we can construct a vertex operator in the covariant gauge by simply covariantizing the light-cone coordinates and putting in the ghosts.
{}In the light-cone gauge of the $4$-dimensional heterotic string with $N=1$ space-time supersymmetry (SUSY), massless physical states are created by vertex operators of the form $V(z,{\overline z})=V(z){\overline V}({\overline z})$, where $V(z)$ is a left-moving vertex operator of conformal dimension $1$ (the vacuum energy in the left-moving sector is $-1$ in the untwisted sector), and ${\overline V}({\overline z})$ is a right-moving vertex operator of conformal dimension $1/2$ (the vacuum energy in the right-moving sector, which has $N=1$ local world-sheet supersymmetry and $N=2$ global SUSY, is $-1/2$ for bosons in the untwisted sector). {}In the covariant gauge, we have in addition to the light-cone degrees of freedom: two longitudinal space-time coordinates, two longitudinal components of the right-moving fermions, reparametrization ghosts $b$ and $c$, and superconformal ghosts $\beta$ and $\gamma$ \cite{FMS}. It is most convenient to bosonize the $\beta,\gamma$ ghosts: \begin{equation} \beta = \partial \xi e^{-\phi}, ~~~ \gamma = \eta e^{ \phi}~, \end{equation} where $\xi$ and $\eta$ are auxiliary fermions and $\phi$ is a bosonic ghost field obeying the OPE $\phi(\overline{z}) \phi(\overline{w}) \sim -{\mbox{log}} ( \overline{z} - \overline{w})$. The conformal dimension of $e^{q \phi}$ is $-{1\over 2} q (q+2)$. {}In the covariant gauge, vertex operators are of the form $V(z,{\overline z})=V(z){\overline V}({\overline z})$ where $V(z)$ and ${\overline V}({\overline z})$ are both dimension $1$ operators constructed from the conformal fields. These include the longitudinal components as well as the ghosts. The vertex operators for space-time bosons carry integral ghost charges ($q \in {\bf Z}$) whereas for space-time fermions the ghost charges are half-integral ($q \in {\bf Z} + {1\over 2}$). Here, $q$ specifies the picture. The canonical choice is $q=-1$ for space-time bosons and $q=-{1\over 2}$ for space-time fermions.
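These canonical values can be checked against the dimension formula for $e^{q\phi}$:

```latex
h\big(e^{-\phi}\big)=-{\textstyle{1\over 2}}(-1)(-1+2)={\textstyle{1\over 2}}~,\qquad
h\big(e^{-\phi/2}\big)=-{\textstyle{1\over 2}}(-{\textstyle{1\over 2}})
({\textstyle{3\over 2}})={\textstyle{3\over 8}}~,
```

so that, for example, $e^{-\phi/2}$ combines with a Ramond spin field of weight ${5\over 8}$ to give total right-moving dimension $1$, as required in the covariant gauge.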
We will denote the corresponding vertex operators by $V_{-1} (z, \overline{z})$ and $V_{-{1\over 2}}(z, \overline{z})$ respectively. Vertex operators in the $q=0$ picture (with zero ghost charge) are obtained by {\em picture-changing}: \begin{equation} V_{0}(z,{\overline z})= \lim_{{\overline w}\rightarrow {\overline z}}{ e^{\phi({\overline w})} T_F ({\overline w}) V_{-1}(z,{\overline z})}~. \end{equation} Because the supercurrents in the $3$-family grand unified models have many terms, $V_{0}$ can be somewhat involved. {}Having constructed the vertex operators for the massless states, one can in principle compute the scattering amplitudes, or the corresponding couplings in the superpotential. The coupling of $M$ chiral superfields in the superpotential is given by the scattering amplitude of the component fields in the limit when all the external momenta are zero. Due to holomorphicity, one needs to consider only the scatterings of left-handed space-time fermions, with vertices $V_{-1/2}(z,{\overline z})$, and their space-time superpartners. Since the total $\phi$ ghost charge in any tree-level correlation function is $-2$, it is convenient to choose two of the vertex operators in the $-1/2$-picture, one in the $-1$-picture, and the rest in the $0$-picture. Using the $SL(2,{\bf C})$ invariance, the scattering amplitude is therefore \begin{equation}\label{scattem} {\cal A}_{M} = g^{M-2}_{\mbox{st}}\int dz_{4} d \overline{z}_{4} \cdots dz_{M} d \overline{z}_{M} \langle V_{-{1\over 2}}(0,0)V_{-{1\over 2}}(1,1) V_{-1}(\infty,\infty) V_{0}(z_4,\overline{z}_{4}) \cdots V_{0}(z_M,\overline{z}_{M}) \rangle ~, \end{equation} where we have normalized the $c$ ghost part of the correlation function $\langle c(0,0) c(1,1) c(\infty,\infty) \rangle$ to $1$. \subsection{Orbifold Rules for Level-1 Models} {}We are now ready to give the rules for constructing consistent orbifold models with multiple twists using the bosonic supercurrent formalism.
We will not give the derivation of these rules as they can be deduced from Ref.~\cite{kt} which contains more generic (and, therefore, more complicated) rules. Instead, we just list all the requirements that a consistent model must satisfy. In this subsection we will concentrate on models that have no non-trivial left-moving coset ${\cal C}_L$. Models with left-moving twists (such as outer automorphisms) will be considered in the subsequent sections. {}For a string model to be consistent, it must satisfy the following constraints:\\ (1) {\em Modular invariance}. The one-loop partition function must be invariant under $S$ and $T$ modular transformations.\\ (2) {\em Physically sensible projection}. The physical states should appear in the partition function with proper weights and space-time statistics. The space-time bosons contribute $+1$ to the one-loop vacuum amplitude while space-time fermions contribute $-1$.\\ (3) {\em Worldsheet supersymmetry}. This is essential for space-time Lorentz invariance. In order for the total supercurrent to have well-defined boundary conditions, the space-time supercurrent and the internal supercurrent defined above should have the same monodromies. This is guaranteed by the triplet constraint (\ref{triplet}). Furthermore, the OPEs in (\ref{XOPE}) impose extra conditions on the coefficients ${\xi}^{a}({Q})$ in $T_{F}$. This ensures that the $N=2$ superconformal algebra is satisfied. {}We start from a Narain model with the momenta of the internal bosons spanning an even self-dual Lorentzian lattice $\Gamma^{6,22}$. We introduce a set of generating vectors $\{V_i\}$ that includes the vector $V_0$. Next, we find the structure constants $k_{ij}$ that satisfy the following constraints: \begin{eqnarray} k_{ij}+k_{ji} &=& V_i \cdot V_j ~({\mbox{mod}}~1)~,\\ k_{ii}+k_{i0}+s_i -{1\over 2} V_i \cdot V_i &=& 0~({\mbox{mod}}~1)~,\\ k_{ij} m_j &=& 0~({\mbox{mod}}~1)~.
\end{eqnarray} Note that there is no summation over the repeated indices here. The dot product of two vectors is defined with Lorentzian signature $((-)^{10},(+)^{22})$, {\em i.e.}, \begin{eqnarray} V_i \cdot V_j &=& -\sum_a V^a_i V^a_j +{\vec U}_i \cdot {\vec U}_j \nonumber\\ &=& -\sum_a V^a_i V^a_j -\sum_I U^I_i U^I_j +\sum_A U^A_i U^A_j~. \end{eqnarray} Here we have combined the components $U^I_i$ and $U^A_i$ into a single $(6,22)$ dimensional vector ${\vec U_i}$. Note that this vector is a rational multiple of a vector in $\Gamma^{6,22}$, and, in particular, $m_i {\vec U}_i \in \Gamma^{6,22}$. The dot product of two vectors ${\vec U}_i$ and ${\vec U}_j$ is then defined in the same way as for two vectors in $\Gamma^{6,22}$. {}The above rules are not sufficient for the model to be consistent. There must exist a supercurrent which satisfies the {\em triplet} constraint (\ref{triplet}) for all generating vectors $V_i$. Furthermore, the OPEs in (\ref{XOPE}) impose extra conditions on the coefficients $\xi^{a}({Q})$ in $T_{F}$. Thus, the rules that constrain the set $\{V_i ,k_{ij}\}$ together with the existence of the required supercurrent give the necessary and sufficient conditions for building a consistent string model. {}The sectors of the model are labeled by $\alpha V$. In a given sector $\alpha V$ the states are nothing but the momentum states of ${\cal X}^\mu$, $\rho_a$, $\phi^I$ and $\varphi^A$.
This means that the vertex operator for a given state has the form (in the covariant formalism where ${\cal X}^\mu$, $\mu=0,1,2,3$, are four space-time coordinates) \begin{equation}\label{vertex} V(z,{\overline z}) ={\tilde V}(z,{\overline z}) \exp(i\sum_a H_a \rho_a ({\overline z}) +i\sum_I Q^I \phi^I ({\overline z}) + i\sum_A Q^A \varphi^A (z)) \exp(ik_\mu {\cal X}^\mu (z,{\overline z}))~, \end{equation} where ${\tilde V}(z,{\overline z})$ is a combination of ghost fields, derivatives of ${\cal X}^\mu$, $\rho_a$, $\phi^I$ and $\varphi^A$ ({\em i.e.}, this is the corresponding oscillator excitation contribution), and an appropriate cocycle operator. The normal ordering is implicit here. Let us combine the $H$-charges $H_a$, $Q$-charges $Q^I$, and the gauge charges $Q^A$ into a $(10,22)$ dimensional momentum vector $P_{\alpha V}$. Then the physical states are those that satisfy the following spectrum generating formula: \begin{equation} V_i \cdot P_{\alpha V}=s_i +\sum_j k_{ij} \alpha_j ~({\mbox{mod}}~1)~, \end{equation} and the momenta $P_{\alpha V}\in {\bf Z}^4 \otimes \Gamma^{6,22}+\alpha V$. Thus, for example, $H_a \in {\bf Z}+ (\alpha V)^a$, and ${\vec {\cal Q}} \in \Gamma^{6,22} +\alpha {\vec U}$, where we have combined the $Q$-charges $Q^I$ and the gauge charges $Q^A$ into a single $(6,22)$ dimensional vector ${\vec {\cal Q}}$. {}The above spectrum generating formula gives both on- and off-shell states. The on-shell states must satisfy the additional constraint that the left- and right-moving energies be equal. 
In the $\alpha V$ sector they are given by \begin{eqnarray} E^L_{\alpha V}&=&-1 +\sum_{q=1}^{\infty} q(\sum_\mu m^{\mu}_q +\sum_A n^A_q) + {1\over 2}(P^L_{\alpha V})^2~,\\ E^R_{\alpha V}&=&-{1\over 2} +\sum_{q=1}^{\infty} q(\sum_\mu {\tilde m}^{\mu}_q +\sum_I n^I_q+\sum_a k^a_q) +{1\over 2}(P^R_{\alpha V})^2~, \end{eqnarray} where $m^{\mu}_q$, ${\tilde m}^{\mu}_q$, $n^A_q$, $n^I_q$ and $k^a_q$ are the oscillator occupation numbers for the real bosons ${\cal X}^\mu_L$, ${\cal X}^\mu_R$, $\varphi^A$, $\phi^I$ and $\rho_a$, respectively. Also, we note that $(P^L_{\alpha V})^2 =({\vec {\cal Q}}^L)^2$, and $(P^R_{\alpha V})^2 =({\vec {\cal Q}}^R)^2+\sum_a H^2_a$, where ${\vec {\cal Q}}^L$ and ${\vec {\cal Q}}^R$ are the left- and right-moving parts of the vector ${\vec {\cal Q}}$. (Here we note that for massless states that do not belong to the $N=1$ supergravity multiplet all the occupation numbers are zero and ${\tilde V}(z,{\overline z})=1$ up to a factor that involves ghosts and a cocycle.) {}As a simple illustration of these rules let us consider the model generated by a single vector $V_0$. This is nothing but a Narain model. There is only one structure constant, namely, $k_{00}$. For $k_{00}=0$ we find that $\Delta^4$ is spanned by ${\bf v}$ and ${\bf c}$ momenta of $SO(8)_1$, whereas for the other choice $k_{00}=1/2$ we find that $\Delta^4$ is spanned by ${\bf v}$ and ${\bf s}$ momenta of $SO(8)_1$. {}The above generating formula along with the constraints on the set $\{V_i, k_{ij} \}$ is all we need to construct the spectrum of any given model. For supersymmetric models, obtaining the spectrum is simplified further by space-time supersymmetry. For definiteness, let us concentrate on $N=1$ SUSY models. (Our discussion easily generalizes to models with larger SUSY.) In this case we have two SUSY generators $Q_L ({\overline z})$ and $Q_R ({\overline z})$, where the subscript indicates the space-time helicity. 
The vertex operators for these generators have the same form as in (\ref{vertex}), but with ${\tilde V}(z,{\overline z})=1$ (up to ghosts and a cocycle) and $Q^I = Q^A =k_\mu =0$, {\em i.e.}, only the $H$-charges are non-zero. Let us combine all of these charges into $(10,22)$ dimensional momenta ${\cal P}$. Then these momenta for the SUSY generators are determined by solving the following constraint: \begin{equation}\label{SUSYgen} V_i \cdot {\cal P}=-\sum_a V^a_i H_a ({\cal P}) =k_{i0}~({\mbox{mod}}~1)~, \end{equation} where $H_a ({\cal P})=\pm {1\over 2}$ are the corresponding $H$-charges for the SUSY generators. We will use the convention that the massless fermion states with $H_0=-1/2$ are left-handed, whereas those with $H_0=+1/2$ are right-handed. Then the solution of Eq. (\ref{SUSYgen}) with $H_0 ({\cal P})=-1/2$, which we will refer to as ${\cal P}_L$, corresponds to the left-handed SUSY generator $Q_L ({\overline z})$, whereas the solution with $H_0 ({\cal P})=+1/2$, which we will refer to as ${\cal P}_R$, corresponds to the right-handed SUSY generator $Q_R ({\overline z})$. (Note that ${\cal P}_L=-{\cal P}_R$.) These generators are useful in this case in the following way. Instead of working out the entire spectrum, we can confine our attention to left-handed fermion states in the $-1/2$-picture. These have $H_0=-1/2$ according to the above convention; let the corresponding momenta be $P_{\alpha V}$. (Their CPT-conjugate states are right-handed with $H_0=+1/2$.) Their superpartners, which are boson states in the $-1$-picture, can be obtained by noting that their corresponding momenta would simply be $P_{\alpha^{\prime} V}={\cal P}_R +P_{\alpha V}$. Similarly, $P_{\alpha V}={\cal P}_L +P_{\alpha^{\prime} V}$. Note that $\alpha^{\prime}_i=\alpha_i$ for $i\not=0$, and $\alpha_0=1$, whereas $\alpha^{\prime}_0=0$. Thus, all we need to work out in this case is the spectrum for the left-handed fermion states.
So by a vertex operator in the $-1/2$-picture we will always mean vertex operators for such states, whereas by those in the $-1$-picture we will mean vertex operators of the corresponding bosonic superpartners. Here we comment that the number of supersymmetries in a general case is given by half the number of solutions of Eq. (\ref{SUSYgen}). \section{Orbifold Rules for Higher Level Models} {}In this section we generalize the rules for the orbifold construction presented in section III. This generalization will allow us to construct models with reduced rank, in particular, models that contain gauge groups realized via higher level Kac-Moody algebras. The necessity of such a generalization can be seen from the fact that gauge symmetry in $N=1$ heterotic string models arises from the left-moving sector of the theory. On the other hand, the monodromies (\ref{varphi}) for the left-moving bosons $\varphi^A$ are such that they cannot project out Cartan generators of the original Kac-Moody algebra ${\cal G}_L$ of the Narain model that we orbifold. Thus, the final gauge group ${\cal G}^{\prime}_L$ is always realized via a level-$1$ Kac-Moody algebra and has rank $22$. To obtain models with reduced rank, we must therefore project out some of the original Cartan generators, that is, we have to twist some of the $22$ real left-moving bosons $\varphi^A$. Here we note that such twisting does not guarantee rank reduction. Sometimes it can happen that the model with such twists still has ${\cal G}^{\prime}_L$ with rank $22$. In this case it is always possible to rewrite the model so that all the left-moving bosons are shifted but not twisted. \subsection{Framework} {}In this subsection we will set up the framework for the remainder of this section. We will borrow the notation from section III. The right-moving degrees of freedom are the same. The left-movers come in two varieties.
There are $22-2d$ real bosons $\varphi^A$, $A=1,...,22-2d$, and also $d$ complex bosons $\Phi^r$, $r=1,...,d$. (These can be viewed as complexifications of the original $2d$ real bosons $\varphi^{23-2d},...,\varphi^{22}$ via $\Phi^r =(\varphi^{21-2d+2r} + i \varphi^{22-2d+2r})/\sqrt{2}$.) The string sectors are labeled by the monodromies of the string degrees of freedom: \begin{eqnarray} \rho_a ({\overline z}e^{-2\pi i})&=&\rho_a ({\overline z})-2\pi V^a_i ~, \nonumber\\ \phi^I ({\overline z}e^{-2\pi i})&=& \phi^I ({\overline z})-2\pi U^I_i~, \nonumber\\ \varphi^A (ze^{2\pi i})&=&\varphi^A (z) +2\pi U^A_i~,\\ \partial \Phi^r (ze^{2\pi i})&=&\exp (-2\pi i T^r_i) \partial \Phi^r (z)~. \nonumber \end{eqnarray} These monodromies can be combined into a single vector \begin{equation} V_i=(V^a_i \vert U^I_i \vert\vert U^A_i \vert T^r_i )~. \end{equation} Without loss of generality we can restrict the values of $T^r_i$ as follows: $0\leq T^r_i <1$. This restriction is actually necessary for correctly identifying the sectors of the orbifold model in what follows. {}The monodromies $V_i$ can be viewed as fields $\Phi$ (where $\Phi$ is a collective notation for $\rho_a$, $\phi^I$, $\varphi^A$ and $\Phi^r$) being periodic $\Phi (ze^{2\pi i} ,{\overline z}e^{-2\pi i})=\Phi(z,{\overline z})$ up to the identification $\Phi \sim g(V_i) \Phi g^{-1}(V_i)$, where $g(V_i)$ is an element of the orbifold group $G$. (Since we are considering Abelian orbifolds, we have excluded shifts from the monodromies of the bosons $\Phi^r$.) For $G$ to be a finite discrete group, the element $g(V_i)$ must have a finite order $m_i \in {\bf N}$, {\em i.e.}, $g^{m_i} (V_i)=1$. This implies that the vector $V_i$ must be a rational multiple of a vector in ${\bf Z}^4 \otimes \Gamma^{6,22} \otimes {\bf N}^d$, that is, $m_i V_i \in {\bf Z}^4 \otimes \Gamma^{6,22} \otimes {\bf N}^d$. In the component form we have: $m_i V^a_i \in {\bf Z}$, $m_i {\vec U}_i \in \Gamma^{6,22}$, and $m_i T^r_i \in {\bf N}$. 
Here we have combined the components $U^I_i$, $I=1,...,6$, and $U^A_i$, $A=1,...,22-2d$, along with $2d$ null entries into a single $(6,22)$ dimensional vector ${\vec U}_i =(U^I_i \vert\vert U^A_i \vert 0^{2d})$. Later we will use the dot product of such vectors defined in the same way as for two vectors in $\Gamma^{6,22}$. {}To describe all the elements of the orbifold group $G$, it is convenient to introduce the set of generating vectors $\{V_i\}$ such that ${\overline {\alpha V}}={\bf 0}$ if and only if $\alpha_i \equiv 0$. Here ${\bf 0}$ is the null vector \begin{equation} {\bf 0}=(0^4 \vert 0^6 \vert\vert 0^{22-2d} \vert 0^d )~. \end{equation} Also, ${\alpha V}\equiv\sum_i \alpha_i V_i$ (the summation is defined as, say, $(V_i+V_j)^r=T^r_i+T^r_j$), $\alpha_i$ being integers that take values from $0$ to $m_i-1$. The overbar notation is defined as follows: $({\overline {\alpha V}})^{r}=(\alpha V)^{r}~({\mbox{mod}}~1)$ and $0\leq ({\overline {\alpha V}})^{r} <1$. {}Now we can identify the sectors of the model. They are labeled by the vectors ${\overline{\alpha V}}$, and in a given sector ${\overline {\alpha V}}$ the monodromies of the string degrees of freedom are given by $\Phi (ze^{2\pi i}, {\overline z}e^{-2\pi i})=g({\overline {\alpha V}}) \Phi (z, {\overline z}) g^{-1} ({\overline {\alpha V}})$. It is clear from the supercurrent constraints (\ref{super}) and (\ref{triplet}) that the sectors with $\alpha s \in {\bf Z}$ give rise to the space-time bosons, whereas those with $\alpha s\in{\bf Z}+1/2$ give rise to the space-time fermions. {}Note that a sector described by the vector \begin{equation} V_0=((-{1\over 2})^4 \vert 0^6 \vert\vert 0^{22-2d}\vert 0^d) \end{equation} is always present in any orbifold model. As before, the $V_0$ sector is the Ramond sector of the Narain model, whereas the ${\bf 0}$ sector is the Neveu-Schwarz sector. Without loss of generality we can set $s_i=0$ for $i\not=0$. (Recall that $s_i \equiv V^0_i$.)
Then the space-time bosons come from the sectors ${\overline{\alpha V}}$ with $\alpha_0=0$, and the space-time fermions come from the sectors ${\overline {\alpha V}}$ with $\alpha_0=1$. {}We finish this subsection by noting that we have not modified the action of the orbifold on the right-moving degrees of freedom. All the rules concerning the supercurrent construction, in particular the constraints (\ref{XOPE}) and (\ref{triplet}), therefore remain unchanged: they must be satisfied by a consistent orbifold model with left-moving twists just as in the case of models without them. The spectrum generating formula and the left-moving energy are different in the case of left-moving twists, and we turn to these issues next. \subsection{Orbifold Rules} {}We will not attempt to give the most general rules in this subsection as the rules for constructing a rather large class of orbifolds can be found in Ref.~\cite{kt}. Rather, we will confine our attention to the case where we have only one generating vector, which we choose to be $V_1$, with $T^r_1\not=0$. For all the other vectors $V_i$, $i\not=1$, we require that $T^r_i\equiv 0$. (Thus, without loss of generality, we can assume that $T^r_1\not=0$ for all values of $r$.) Moreover, we only consider the cases where $m_1$ is a prime number. {}Next, we start from a Narain model with the momenta of the internal bosons spanning an even self-dual Lorentzian lattice ${\Gamma}^{6,22}$. We introduce a set of generating vectors $\{V_i\}$ with the above properties. This set includes the $V_0$ vector, and also the $V_1$ vector with a left-moving twist. Here we assume that the lattice ${\Gamma}^{6,22}$ possesses ${\bf Z}_{m_1}$ symmetry generated by the twist part of the orbifold group element $g(V_1)$.
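A minimal hypothetical example of such a generating vector is a ${\bf Z}_2$ twist ($m_1=2$) acting on all $d$ complex left-movers:

```latex
V_1=(V^a_1\vert U^I_1\vert\vert U^A_1\vert ({\textstyle{1\over 2}})^d)~,
```

whose twist part reflects the $2d$ real bosons $\varphi^{23-2d},\dots,\varphi^{22}$; the corresponding reflection must then be a symmetry of $\Gamma^{6,22}$.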
Next, we find the structure constants $k_{ij}$ that satisfy the following constraints: \begin{eqnarray} k_{ij}+k_{ji} &=& V_i \cdot V_j ~({\mbox{mod}}~1)~,~~~i \not=j~, \\ k_{ii}+k_{i0}+s_i -t_i -{1\over 2} V_i \cdot V_i &=& 0~({\mbox{mod}}~1)~,\\ k_{ij} m_j &=& 0~({\mbox{mod}}~1)~. \end{eqnarray} Note that there is no summation over the repeated indices here. The dot product of two vectors is defined as \begin{eqnarray} V_i \cdot V_j &=& -\sum_a V_i^a V_j^a + \vec{U}_i \cdot \vec{U}_{j} \nonumber \\ &=& -\sum_a V^a_i V^a_j -\sum_I U^I_i U^I_j +\sum_A U^A_i U^A_j ~. \end{eqnarray} Also, we have introduced the following notation: \begin{equation} t_i \equiv {1\over 2} \sum_r T^r_i (1-T^r_i)~. \end{equation} Note that $t_i=0$ for $i\not=1$. {}Let $I(V_1)$ be the sublattice of $\Gamma^{6,22}$ invariant under the twist part of the orbifold group element $g(V_1)$, and let ${\tilde I}(V_1)$ be the lattice dual to $I(V_1)$. Then for the model to be consistent (in particular, for level-matching dictated by modular invariance) we must have \begin{equation} m_1 {\vec P}^2 \in {\bf Z}~{\mbox{for all}}~{\vec P}\in {\tilde I}(V_1)~. \end{equation} Furthermore, for the sake of simplicity we will confine our attention to models with only one fixed point, so that we will require the following constraint to be satisfied: \begin{equation}\label{fixed} \prod_r [2\sin (\pi T^r_1)] =\sqrt{{\mbox{Vol}}(I(V_1))}~. \end{equation} Here ${\mbox{Vol}}(I(V_1))$ is the volume (or, equivalently, the determinant of the metric) of the lattice $I(V_1)$. (Note that Eq. (\ref{fixed}) can be relaxed so that the r.h.s. also contains a factor which is some integer power of $m_1$. Then this factor is nothing but the number of fixed points for the twist given by $(T^r_1)$. As already mentioned, here we only consider the models with one fixed point.) {}Now we turn to describing the sectors of the theory. They are labeled by ${\overline {\alpha V}}$.
Let us start with the sectors with $\alpha_1=0$. To describe the vertex operators of states in these sectors, we will need to distinguish two different cases. They arise as follows. Note that there are two types of momenta $P\in {\Gamma}^{6,22}$: those that belong to the invariant sublattice $I(V_1)$, and those that do not. Let us consider the latter type first. Thus consider a set of momentum vectors $N(V_1) \subset \Gamma^{6,22}$ such that if $P\in N(V_1)$, then $P\not\in I(V_1)$. Let $\vert P\rangle$ be the corresponding momentum states. It is clear that we can always decompose $P$ into $P=P^\perp +P^\parallel$, where $P^\parallel \in I(V_1)$, and $P^\perp \cdot P^\prime =0$ for all $P^\prime \in I(V_1)$. Then $\vert P\rangle =\vert P^\parallel\rangle \otimes \vert P^\perp \rangle$. Let $N^*(V_1)\subset N(V_1)$ be a set of momenta spanned by all $P^\perp$, and let ${\cal H}$ be the corresponding Hilbert space, {\em i.e.}, ${\cal H}$ is spanned by states $\vert P^\perp \rangle$, $P^\perp \in N^*(V_1)$. This space can be represented as ${\cal H}=\oplus_{\ell=0}^{m_1-1} {\cal H}_\ell$, where ${\cal H}_\ell$ is spanned by the states of the form \begin{equation} \vert P^\perp; \ell\rangle = {1\over \sqrt{m_1}}\sum_{k=0}^{m_1-1} \exp (-2\pi ik\ell /m_1) g^k \vert P^\perp \rangle~, \end{equation} where $g=\exp(-2\pi i\sum_r T^r_1 J^r)$, and $J^r$ is the angular momentum operator (that acts on the momenta) for the complex boson $\Phi^r$ (or, equivalently, for the real bosons $\varphi^{21-2d+2r}$ and $\varphi^{22-2d+2r}$, and in this language $J^r$ is the generator of $SO(2)$ rotations in the plane of these bosons). Note that $g \vert P^\perp; \ell\rangle=\exp(2\pi i \ell/ m_1) \vert P^\perp; \ell\rangle$. Later, we will identify $\ell$ as part of a discrete charge ($D$-charge). {}In the sectors with $\alpha_1=0$ there are two kinds of vertex operators.
The first kind are momentum states (in the covariant formalism): \begin{equation}\label{vertex1} V (z,{\overline z}) ={\tilde V}(z,{\overline z}) \exp(i\sum_a H_a \rho_a ({\overline z}) +i\sum_I Q^I \phi^I ({\overline z}) + i\sum_A Q^A \varphi^A (z)) \exp(ik_\mu {\cal X}^\mu (z,{\overline z}))~, \end{equation} with ${\vec {\cal Q}}=(Q^I\vert\vert Q^A)\in I(V_1)$. (Note that ${\vec {\cal Q}}$ is a $(6,22-2d)$ dimensional vector.) Here ${\tilde V}(z,{\overline z})$ is a combination of derivatives of ${\cal X}^\mu$, $\rho_a$, $\phi^I$, $\varphi^A$ and $\Phi^r$ ({\em i.e.}, this is the corresponding oscillator excitation contribution), ghosts, certain cocycles, and the normal ordering is implicit here. Let us combine the $H$-charges $H_a$, $Q$-charges $Q^I$, and the gauge charges $Q^A$ into a $(10,22-2d)$ dimensional momentum vector $P_{\overline {\alpha V}}$. Then the physical states are those that satisfy the following spectrum generating formula: \begin{equation}\label{SGFU1} V_i \cdot P_{\overline {\alpha V}} + {\delta_{i1} \over m_1} D \equiv V_i \cdot P_{\overline {\alpha V}} +\sum_r T^r_i N^r=s_i +\sum_j k_{ij} \alpha_j~({\mbox{mod}}~1)~, \end{equation} and the momenta $P_{\overline {\alpha V}}\in {\bf Z}^4\otimes I(V_1) + {{\alpha V}}$. Thus, for example, $H_a \in {\bf Z}+ ({\overline {\alpha V}})^a$, and ${\vec {\cal Q} }\in I(V_1) + {\alpha U}$. Here we have introduced the boson number operators $N^r$ for the bosons $\Phi^r$. These can be expressed in terms of the occupation number operators $s^r$ and ${\tilde s}^r$ for these bosons: $N^r_{\overline {\alpha V}}=\sum_{q=1}^{\infty} (s^r_q-{\tilde s}^r_q)$. The discrete $D$-charge is defined by the above equation. The origin of this $D$-charge (defined modulo $m_1$) is the ${\bf Z}_{m_1}$ twist acting on the $d$ complex bosons $\partial \Phi^r$. This, however, is not the most general form of $D$ as it has contributions only from the oscillators. We will give the general form of $D$ in a moment.
{}The vertex operators (in the covariant formalism) for the second kind of states have the same form as (\ref{vertex1}), but now, in addition to the derivatives of ${\cal X}^\mu$, $\rho_a$, $\phi^I$, $\varphi^A$ and $\Phi^r$, ghosts and cocycles, ${\tilde V} (z,{\overline z})$ contains also the vertex operator for a state $\vert P^\perp_{\overline {\alpha V}}; \ell\rangle$. The latter can be written as \begin{equation} {1\over \sqrt{m_1}}\sum_{k=0}^{m_1-1} \exp (-2\pi ik\ell /m_1) g^k \exp(i \sum_r [q^r (\Phi^r)^\dagger (z)+(q^r)^* \Phi^r (z)])~, \end{equation} where $q^r$ is the (complex) momentum of the $\Phi^r$ boson (and the $2d$ real components of all $d$ momenta $q^r$ give the momentum vector $P^\perp_{\overline {\alpha V}}$). The physical states are those that satisfy the following spectrum generating formula: \begin{equation}\label{SGFU2} V_i \cdot P^\parallel_{\overline {\alpha V}} +{\delta_{i1} \over m_1}D \equiv V_i \cdot P^\parallel_{\overline {\alpha V}} +\sum_r T^r_i N^r_{\overline {\alpha V}} + {\delta_{i1} \over m_1} \ell =s_i +\sum_j k_{ij} \alpha_j~({\mbox{mod}}~1)~. \end{equation} Here we have combined the charges $H_a$, $Q^I$ and $Q^A$ into a single vector $P^\parallel_{\overline {\alpha V}}$. The lattice momenta in this sector are given by $P_{\overline {\alpha V}}=P^\parallel_{\overline {\alpha V}}+ P^\perp_{\overline {\alpha V}}$. {}The general form of the $D$-charge is given by: \begin{equation} D \equiv \ell + m_1 \sum_r T^r_1 N^r_{\overline{\alpha V}} \pmod{m_1} \end{equation} which has contributions from both the lattice momentum and the oscillators. We recover the previous case if we set $\ell = 0$ ({\em i.e.} $P_{\overline{\alpha V}} \in I(V_1)$). 
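The modular arithmetic behind the general $D$-charge can be sketched with exact rationals; the twist value $T^1_1 = 1/3$ below is an illustrative choice of ours (a single complex boson twisted under ${\bf Z}_3$), not a datum of the text:

```python
from fractions import Fraction as F

# Sketch: D = l + m1 * sum_r T_1^r N^r (mod m1), for an assumed example
# with m1 = 3 and one complex boson with T_1^1 = 1/3.
m1 = 3
T1 = [F(1, 3)]

def D(ell, N):
    # ell is the lattice-momentum contribution, N the list of boson
    # number operator eigenvalues N^r = sum_q (s^r_q - stilde^r_q)
    return (ell + m1 * sum(t * n for t, n in zip(T1, N))) % m1

assert D(0, [0]) == 0      # the l = 0, unexcited case of the text
assert D(2, [2]) == 1      # 2 + 3*(1/3)*2 = 4 -> 1 (mod 3)
assert D(1, [-1]) == 0     # oscillator contribution can cancel l (mod 3)
```

Since $m_1 T^r_1$ is an integer for each $r$, $D$ is always well defined modulo $m_1$.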
{}Finally, the vertex operators for states in the sectors ${\overline {\alpha V}}$ with $\alpha_1\not=0$ read: \begin{eqnarray}\label{vertex3} V (z,{\overline z}) =&& {\tilde V}(z,{\overline z}) \sigma_{\overline {\alpha V}} (z) \times\nonumber\\ &&\exp(i\sum_a H_a \rho_a ({\overline z}) +i\sum_I Q^I \phi^I ({\overline z}) + i\sum_A Q^A \varphi^A (z)) \exp(ik_\mu {\cal X}^\mu (z,{\overline z}))~, \end{eqnarray} where ${\tilde V}(z,{\overline z})$ is as defined in (\ref{vertex1}) and $\sigma_{\overline {\alpha V}} (z)$ is a vertex operator for the twisted ground state with conformal dimension $\sum_r {1\over 2} ({\overline {\alpha V}})^r (1-({\overline {\alpha V}})^r)$. The physical states are those that satisfy the following spectrum generating formula: \begin{equation}\label{SGFT} V_i \cdot P_{\overline {\alpha V}} =s_i +\sum_j k_{ij} \alpha_j~({\mbox{mod}}~1)~,~~~i\not=1~. \end{equation} Here $P_{\overline {\alpha V}}\in {\bf Z}^4 \otimes {\tilde I}(V_1)+ {\overline {\alpha V}}$. (Here we just take the momentum part of ${\overline {\alpha V}}$.) Note that here we are not imposing the constraint with respect to the vector $V_1$ as the latter is automatically satisfied in these sectors. {}The above spectrum generating formulas give both on- and off-shell states. The on-shell states must satisfy the additional constraint that the left- and right-moving energies be equal. 
In the ${\overline {\alpha V}}$ sector they are given by \begin{eqnarray} E^L_{\overline {\alpha V}}=&& -1 +\sum_{q=1}^{\infty} (q\sum_\mu m^{\mu}_q +q\sum_A n^A_q \nonumber\\ &&+ \sum_r [(q+({\overline {\alpha V}})^r-1)s^r_q + (q-({\overline {\alpha V}})^r){\tilde s}^r_q])+ {1\over 2}(P^L_{\overline {\alpha V}})^2 ~,\\ E^R_{\overline {\alpha V}}=&& -{1\over 2} +\sum_{q=1}^{\infty} q(\sum_\mu {\tilde m}^{\mu}_q +\sum_I n^I_q+\sum_a k^a_q) +{1\over 2}(P^R_{\overline {\alpha V}})^2~, \end{eqnarray} where $m^{\mu}_q$, ${\tilde m}^{\mu}_q$, $n^A_q$, $n^I_q$ and $k^a_q$ are the oscillator occupation numbers for the real bosons ${\cal X}^\mu_L$, ${\cal X}^\mu_R$, $\varphi^A$, $\phi^I$ and $\rho_a$, respectively. Also, we note that $(P^L_{\overline {\alpha V}})^2 =({\vec {\cal Q}}^L)^2$, and $(P^R_{\overline {\alpha V}})^2 =({\vec {\cal Q}}^R)^2+\sum_a H^2_a$, where ${\vec {\cal Q}}^L$ and ${\vec {\cal Q}}^R$ are the left- and right-moving parts of the vector ${\vec {\cal Q}}$. \subsection{Scattering Amplitudes} {}From the scattering amplitudes ${\cal A}_{M}$ (\ref{scattem}), where the external space-time momenta are set to zero, we can read off the terms in the superpotential. For a coupling to be present, the corresponding scattering amplitude must be non-zero. Once the ghost factors are taken care of, this means that all the gauge and discrete symmetries must be respected. In particular, a necessary condition is that the sum of all the lattice momenta must be zero in ${\cal A}_{M}$ (\ref{scattem}). These selection rules impose very tight constraints on the possible terms that can appear in the superpotential of any model. \noindent (1) {\em Gauge Invariance}. This local symmetry must be conserved. In $4$-dimensional $N=1$ heterotic string models, the gauge symmetries come from the left-movers only. We will refer to these as $G$-charges. Note that picture-changing does not touch the $G$-charges, so each state carries well-defined gauge quantum numbers.
\noindent (2) {\em $H$- and $Q$-Charge Conservation}. They must be conserved in the scattering amplitude. Note that the supercurrent carries terms with different $H$- and $Q$-charges. Because of picture changing, $H$- and $Q$- charges are not global charges even though they must be conserved exactly in ${\cal A}_{M}$. This is consistent with the fact that string theory has no global continuous symmetries \cite{global}. Point group and space group selection rules follow from these conservation laws. \noindent (3) {\em Invariance under Discrete Symmetries}. In higher level models, there is a discrete gauge charge (or quantum number) associated with the twist field. We shall call this a $D$-charge. As we shall see, in the models we are interested in, the selection rule coming from this discrete symmetry is subsumed in the other selection rules. In principle, we can calculate the couplings in the superpotential explicitly, since we know all the vertex operators. However, in this paper, we shall consider only the selection rules coming from the conservation of the above $G$-, $Q$-, $H$- and $D$-charges. Note that space-time superpartners have identical $G$-, $Q$- and $D$-charges, but different $H$-charges. \section{Simple Level-1 Examples} {}In this section we use some simple examples to illustrate our rules for constructing level-1 models and calculating scattering amplitudes. After a detailed discussion of the familiar asymmetric ${\bf Z}_3$ orbifold, we will briefly discuss the original symmetric $Z$-orbifold. {}Consider the Narain model with $\Gamma^{6,22} =\Gamma^{2,2} \otimes \Gamma^{2,2} \otimes \Gamma^{2,2} \otimes \Gamma^8 \otimes \Gamma^8$, where $\Gamma^8$ is the $E_8$ root lattice, whereas $\Gamma^{2,2}=\{(p_R \vert\vert p_L)\}$ with $p_L,p_R \in{\tilde {\Gamma}}^2$ ($SU(3)$ weight lattice) and $p_L-p_R \in{\Gamma}^2$ ($SU(3)$ root lattice). 
This model has $N=4$ SUSY and $SU(3) \otimes SU(3) \otimes SU(3)\otimes E_8 \otimes E_8$ gauge symmetry (counting only the gauge bosons coming from the left-moving sector of the string; the right-moving sector contributes $6$ $U(1)$ vector bosons that are part of the $N=4$ supergravity multiplet). Let us write the corresponding $V_0$ vector as \begin{equation} V_0 =((-{1\over 2})^4 \vert 0^3\vert\vert 0^3\vert 0^8 \vert 0^8)~. \end{equation} Here the first four entries stand for the right-moving complex world-sheet fermions $\psi^a$, $a=0,1,2,3$, and the next three stand for the three right-moving complex bosons $X^a$, $a=1,2,3$ (each corresponding to a factor $\Gamma^{2,2}$). The double vertical line separates the right-movers from the left-movers. The first three left-moving entries correspond to the left-moving counterparts of the $X^a$ bosons. The next $8+8$ entries correspond to the $E_8 \otimes E_8$ lattice, which we will describe using $16$ real bosons, and we will use the ${\mbox{Spin}}(16)/{\bf Z}_2$ basis for each of the $E_8$ factors ({\em i.e.}, the $E_8$ roots are described as those of $SO(16)$ plus $128$ additional roots in the corresponding irrep of $SO(16)$; thus, for example, $(+1, 0, -1, 0,0,0,0,0)$ is a root of $SO(16)$, and the roots in the $128$ irrep are those with all eight entries equal to $+1/2$ or $-1/2$ and an even total number of positive signs). \medskip \noindent {\em (1) An asymmetric $Z$-orbifold model.} {}Consider the following asymmetric ${\bf Z}_3$ orbifold of the $SU(3)^3 \otimes (E_8)^2$ Narain model described above: \begin{equation} V_1 =(0(-{1\over 3})^3 \vert (e_1/3)^3\vert\vert 0^3\vert {1\over 3}{1\over 3}{2\over 3} 0^5 \vert 0^8)~. \end{equation} This is the original asymmetric $Z$-orbifold model of Ref \cite{NSV}. It has $N=1$ SUSY and $SU(3)\otimes E_6 \otimes E_8 \otimes SU(3)^3$ gauge symmetry. Here $e_1$ is a simple root of $SU(3)$. We denote the other simple root by $e_2$, and define $e_3 \equiv -e_1 -e_2$.
Note that $e_\alpha \cdot e_\beta =2$ for $\alpha=\beta$ and $-1$ for $\alpha\not=\beta$ ($\alpha,\beta=1,2,3$). Let $\phi^a$, $a=1,2,3$, be the two-component real bosonic fields corresponding to each of the three $SU(3)$ lattices in the model with $e_1/3$ shifts. The $i \partial X ^a$ in the supercurrent $T_{F}$ is given by \begin{equation}\label{SU3} i \partial X ^a = {1 \over {\sqrt 3}} \sum_\alpha e^{-ie_\alpha \phi ^a} c(-e_\alpha) ~. \end{equation} It is easy to see that the {\em triplet} constraint is satisfied: \begin{equation} V^a_1 + U_1 \cdot Q = -{1 \over 3} + {e_{1} \over 3} \cdot (-e_{\alpha}) = 0 \pmod{1} \end{equation} {}We can now compute the massless spectrum of the model. First, let us choose $k_{00}=0$ (note that there is a freedom in choosing $k_{00}$ to be $0$ or $1/2$, which merely flips the chirality of the states). The other structure constants are fixed: $k_{10}=1/2$, $k_{01}=0$ and $k_{11}=1/3$. It is convenient to define \begin{equation} P_{\alpha V}=(H_{0},\cdots,H_{3},{\bf Q}^{R}_{1},{\bf Q}^{R}_{2}, {\bf Q}^{R}_{3} \vert {\bf Q}^{L}_{1}, {\bf Q}^{L}_{2}, {\bf Q}^{L}_{3}, Q_{4}, \cdots, Q_{19}) \end{equation} where ${\bf Q}^{L,R}$ are charges under the $U(1)^{2}$ of each $SU(3)$ and all the other $Q$'s are $U(1)$ charges. {}First, consider the untwisted sector; the spectrum generating formulae read: \begin{eqnarray} {1\over 2} (H_0 + H_1 + H_2 + H_3) &=& {1\over 2} \pmod{1} \\ {1\over 3} (Q_4 + Q_5 + 2 Q_6) + {1\over 3} (H_1+ H_2 + H_3) -{e_{1} \over 3} \cdot ({\bf Q}^{R}_{1} + {\bf Q}^{R}_{2} +{\bf Q}^{R}_{3}) &=& 0 \pmod{1} \end{eqnarray} The first spectrum generating formula requires that one of the $H$-charges must be equal to $1$. It then follows from the massless condition that ${\bf Q}^R=0$, which implies that ${\bf Q}^{L} \in \Gamma^{2}$ ($SU(3)$ root lattice). The choice $a=0$ gives rise to the gauge bosons of $SU(3) \otimes E_{6} \otimes E_{8} \otimes SU(3)^{3}$.
For $a=1,2, {\mbox{or}}~3$, the second spectrum generating formula gives $Q_4=-1$ or $Q_5=-1$ or $Q_6=1$. Therefore, ${\bf Q}^{L}=0$ or else $({\bf Q}^{L})^2 \geq 2$ and the state is massive. Note that even though the orbifold group does not act on ${\bf Q}^{L}$, constraints such as ${\bf Q}^{L}-{\bf Q}^{R} \in \Gamma^{2}$ and the massless condition restrict the possible values of ${\bf Q}^{L}$. The fields that survive the projection are $\chi_{a}$, which transform in the $({\bf 3}, {\bf 27}, {\bf 1})$ irrep of $SU(3)\otimes E_6 \otimes E_8$, and are neutral under the other $SU(3)^3$. The index $a$ labels the choice of $H$-charges: $H=(1,0,0)$ for $a=1$, $H=(0,1,0)$ for $a=2$ and $H=(0,0,1)$ for $a=3$. {}We now turn to the twisted sectors. Note that in the $-1$ picture, $H_{a} \in {\bf Z} - {1\over 3}$ in the $V_1$ sector, while $H_{a} \in {\bf Z} + {1\over 3}$ in the $2V_1$ sector. The possible choices of $H_{a}$ which do not give rise to massive states are $H_{a}=(-{1\over 3},-{1\over 3},-{1\over 3})$ in the $V_1$ sector and $H_{a}=({1\over 3},{1\over 3},{1\over 3})$ in the $2V_1$ sector. The left-handed chiral supermultiplets all come from the $2V_{1}$ and $V_{0}+ 2 V_{1}$ sectors. In the $2V_1$ sector, the spectrum generating formulae give: \begin{eqnarray} {1\over 2} (H_0 + H_1 + H_2 + H_3) &=& {1\over 2} \pmod{1} \\ {1\over 3} (Q_4 + Q_5 + 2 Q_6) + {1\over 3} (H_1+ H_2 + H_3) -{e_{1} \over 3} \cdot ({\bf Q}^{R}_{1} + {\bf Q}^{R}_{2} +{\bf Q}^{R}_{3}) &=& {2\over 3} \pmod{1} \end{eqnarray} where $Q_4,Q_5 \in {\bf Z}-{1\over 3}$, $Q_6 \in {\bf Z}+ {1\over 3}$ and ${\bf Q}^{R} \in \tilde{\Gamma}^{2} - e_{1}/3$. For massless states, $\sum_{j} ({\bf Q}_{j}^{R})^{2}=2/3$, and therefore there are $27$ choices of ${\bf Q}^{R}$: \begin{equation} {\bf Q}^{R}= \left( -{e_{\alpha} \over 3},-{e_{\beta} \over 3}, -{e_{\gamma} \over 3} \right) ~~~{\mbox{for}}~\alpha,\beta,\gamma=1,2,3 \end{equation} The $Q$-charges ${\bf Q}^{L}$ and ${\bf Q}^{R}$ are correlated by $p_{L}-p_{R} \in \Gamma^{2}$.
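The root algebra, the triplet constraint, and the count of massless ${\bf Q}^R$ choices can all be verified with exact rational arithmetic; the root-basis coordinates $e_1=(1,0)$, $e_2=(0,1)$, $e_3=-e_1-e_2$ with Gram matrix $G$ are our own illustrative choice:

```python
from fractions import Fraction as F
from itertools import product

# Assumed root-basis coordinates with the SU(3) Gram matrix G,
# so that e_a . e_b = 2 for a = b and -1 otherwise.
G = [[2, -1], [-1, 2]]
roots = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}   # e3 = -e1 - e2

def dot(a, b):
    return sum(a[i] * G[i][j] * b[j] for i in range(2) for j in range(2))

assert all(dot(roots[a], roots[b]) == (2 if a == b else -1)
           for a in roots for b in roots)

# triplet constraint: -1/3 + (e1/3).(-e_a) = 0 (mod 1) for each root e_a
assert all((F(-1, 3) - F(dot(roots[1], roots[a]), 3)) % 1 == 0 for a in roots)

# massless condition: each entry -e_a/3 of Q^R contributes (e_a . e_a)/9 = 2/9,
# so all 3^3 = 27 combinations satisfy sum_j (Q_j^R)^2 = 2/3
choices = [t for t in product((1, 2, 3), repeat=3)
           if sum(F(dot(roots[a], roots[a]), 9) for a in t) == F(2, 3)]
assert len(choices) == 27
```

Any other faithful coordinatization of the $SU(3)$ root lattice would give the same inner products and the same count.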
For simplicity, let us consider for the moment only the first $SU(3)$. We have \begin{equation} \begin{array}{llllc} & & & & \underline{{\mbox{irrep of }}~SU(3)} \\ \alpha=1 & \phantom{6} &{\bf Q}^{L}_{1}=0 & \phantom{6} &{\bf 1} \\ \alpha=2 & &{\bf Q}^{L}_{1}=\tilde{e}_{2},-\tilde{e}_{1}, \tilde{e}_{1}- \tilde{e}_{2} & & {\bf 3} \\ \alpha=3 & &{\bf Q}^{L}_{1}=-\tilde{e}_{2},\tilde{e}_{1}, \tilde{e}_{2} - \tilde{e}_{1} & & {\bf {\overline{3}}} \\ \end{array} \end{equation} where $\tilde{e}_{1}={2 \over 3} e_{1} + {1 \over 3} e_{2}$ and $\tilde{e}_{2}={1 \over 3} e_{1} + {2 \over 3} e_{2}$ are $SU(3)$ weights such that $\tilde{e}_{i} \cdot e_{j} = \delta_{ij}$. The conformal dimension of the ${\bf 3}$ (and ${\bf {\overline{3}}}$) of $SU(3)$ is ${1 \over 2} ({\bf Q}^{L}_{1})^2= {1\over 3}$, as expected. {}Massless states are created by conformal fields with total left-moving conformal dimension $1$. Therefore, they must transform non-trivially under $SU(3)$ as well as under another gauge group, such as $E_6$. For instance, a field that transforms as the ${\bf 3}$ of $SU(3)$ and the $\overline{\bf 27}$ of $E_6$ (which has conformal dimension $2/3$) has the right conformal dimension. It remains to check whether the spectrum generating formulae are satisfied. To summarize, we have the following left-handed chiral supermultiplets from the twisted sectors:\\ $\bullet$ The twisted sector field ${\overline \chi}$.\\ This field transforms in the $({\bf 3}, {\overline {\bf 27}}, {\bf 1}, {\bf 1},{\bf 1},{\bf 1})$ irrep of $SU(3) \otimes E_6 \otimes E_8 \otimes SU(3)^3$.
The $Q$-charges for this field read $(-e_1/3,-e_1/3,-e_1/3)$.\\ $\bullet$ The twisted sector fields ${\chi}_{A\pm}$.\\ The field ${\chi}_{A+}$ transforms in the $({\bf 1}, {{\bf 27}},{\bf 1}, {\bf x}, {\bf y},{\bf z})$ irrep of $SU(3) \otimes E_6 \otimes E_8 \otimes SU(3)^3$ with ${\bf x}={\bf 3}$, ${\bf y}={\bf z}={\bf 1}$ for $A=1$, ${\bf y}={\bf 3}$, ${\bf x}={\bf z}={\bf 1}$ for $A=2$, and ${\bf z}={\bf 3}$, ${\bf x}={\bf y}={\bf 1}$ for $A=3$. Similarly, the field ${\chi}_{A-}$ transforms in the $({\bf 1}, {{\bf 27}},{\bf 1}, {\bf x}, {\bf y},{\bf z})$ irrep of $SU(3) \otimes E_6 \otimes E_8 \otimes SU(3)^3$ with ${\bf x}={\overline {\bf 3}}$, ${\bf y}={\bf z}={\bf 1}$ for $A=1$, ${\bf y}={\overline {\bf 3}}$, ${\bf x}={\bf z}={\bf 1}$ for $A=2$, and ${\bf z}={\overline {\bf 3}}$, ${\bf x}={\bf y}={\bf 1}$ for $A=3$. The $Q$-charges for the fields ${\chi}_{A+}$ are given by $(-e_2/3,-e_1/3, -e_1/3)$ for $A=1$, $(-e_1/3,-e_2/3, -e_1/3)$ for $A=2$, and $(-e_1/3,-e_1/3, -e_2/3)$ for $A=3$. Similarly, the $Q$-charges for the fields ${\chi}_{A-}$ are given by $(-e_3/3,-e_1/3, -e_1/3)$ for $A=1$, $(-e_1/3,-e_3/3, -e_1/3)$ for $A=2$, and $(-e_1/3,-e_1/3, -e_3/3)$ for $A=3$.\\ $\bullet$ The twisted sector fields ${T}_{{\bf x}{\bf y}{\bf z}}$.\\ The field ${T}_{{\bf x}{\bf y}{\bf z}}$ transforms in the $({\overline {\bf 3}}, {{\bf 1}},{\bf 1}, {\bf x}, {\bf y},{\bf z})$ irrep of $SU(3) \otimes E_6 \otimes E_8 \otimes SU(3)^3$ where ${\bf x},{\bf y},{\bf z}$ are irreps of $SU(3)$ such that only one of them is a singlet and the others can be either ${\bf 3}$ or ${\bf {\overline{3}}}$. The $Q$-charges are correlated with the $SU(3)$ irrep as described above. For example, if ${\bf x}={\bf 1}, {\bf y}={\bf z}={3}$, the $Q$-charges are given by $(-e_{1}/3,-e_{2}/3,-e_{2}/3)$. {}The gauge quantum numbers as well as the $Q$- and $H$-charges of the massless spectrum are summarized in Table I. 
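The weight algebra quoted above ($\tilde e_i \cdot e_j = \delta_{ij}$ and the conformal dimension $1/3$ of the ${\bf 3}$) can be double-checked in the same root-basis coordinates (our illustrative choice, $e_1=(1,0)$, $e_2=(0,1)$):

```python
from fractions import Fraction as F

# Assumed coordinates: SU(3) Gram matrix G in the root basis.
G = [[2, -1], [-1, 2]]

def dot(a, b):
    return sum(a[i] * G[i][j] * b[j] for i in range(2) for j in range(2))

e1, e2 = (1, 0), (0, 1)
t1 = (F(2, 3), F(1, 3))   # tilde e1 = (2 e1 + e2)/3
t2 = (F(1, 3), F(2, 3))   # tilde e2 = (e1 + 2 e2)/3

# duality relations tilde e_i . e_j = delta_ij
assert dot(t1, e1) == 1 and dot(t1, e2) == 0
assert dot(t2, e1) == 0 and dot(t2, e2) == 1

# conformal dimension of the 3 of SU(3): (Q^L)^2 / 2 = 1/3
assert dot(t2, t2) / 2 == F(1, 3)
```

The same computation with $\tilde e_1$, or with any other weight in the table, gives the same dimension $1/3$.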
{}Having described the bosonic supercurrent and the massless spectrum of the model, we are ready to calculate scattering amplitudes. Let us start with the three-point Yukawa interactions of the chiral families in the ${\bf 27}$ of $E_6$. The three-point Yukawa coupling $\chi_a \chi_b \chi_c$ is non-zero only for $a\not=b\not=c\not=a$, as only in this case are the $H$-charges conserved. For illustrative purposes let us consider this case in more detail. If $a=1$, $b=2$ and $c=3$, then in the $-1$-picture the $H$-charges are $(+1,0,0)$, $(0,+1,0)$ and $(0,0,+1)$, respectively. However, two of the fields must be in the $-1/2$-picture, while the third one must be in the $-1$-picture. Let the $\chi_a$ and $\chi_b$ fields be in the $-1/2$-picture. Their $H$-charges are then given by $(+1/2,-1/2,-1/2)$ and $(-1/2,+1/2,-1/2)$. Then we see that the total $H$-charge is $(+1/2,-1/2,-1/2)+(-1/2,+1/2,-1/2)+(0,0,+1)=(0,0,0)$, and the process is allowed by $H$-charge conservation. All the other quantum numbers are also conserved in this case. It is easy to see that in all the other cases the $\chi_a \chi_b \chi_c$ Yukawa coupling vanishes. {}Now consider the three-point couplings of the twisted sector fields. Naively, just from the conservation of gauge charges, one might expect that, say, the coupling $\chi_{A+} \chi_{B+} \chi_{C+}$ is non-zero for $A=B=C$. This is, however, not the case. Indeed, to have a singlet of the corresponding $SU(3)$ (note that each of these three fields transforms in the irrep ${\bf 3}$ of the corresponding $SU(3)$), we must completely antisymmetrize these fields. This means that we must take the completely antisymmetric combination of the three ${{\bf 27}}$s (note that each of these three fields transforms in the irrep ${ {\bf 27}}$ of $E_6$). The latter antisymmetric product does not contain a singlet of $E_6$, so that the trace of this product vanishes. The same conclusion can be drawn from the $Q$-charge non-conservation.
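The $H$-charge bookkeeping for the $\chi_a \chi_b \chi_c$ coupling can be exhausted by brute force; the encoding below (our own sketch) mirrors the charge assignments quoted in the text:

```python
from fractions import Fraction as F
from itertools import product

# H-charges (H1, H2, H3) of chi_a: +1 on slot a in the -1-picture;
# +1/2 on slot a and -1/2 elsewhere in the -1/2-picture.
def h_half(a):
    return tuple(F(1, 2) if i == a else F(-1, 2) for i in (1, 2, 3))

def h_one(a):
    return tuple(F(1) if i == a else F(0) for i in (1, 2, 3))

def allowed(a, b, c):
    # chi_a, chi_b in the -1/2-picture, chi_c in the -1-picture
    total = tuple(x + y + z
                  for x, y, z in zip(h_half(a), h_half(b), h_one(c)))
    return total == (0, 0, 0)

# the coupling is H-charge allowed exactly when a, b, c are all distinct
for a, b, c in product((1, 2, 3), repeat=3):
    assert allowed(a, b, c) == (len({a, b, c}) == 3)
```

For $(a,b,c)=(1,2,3)$ this reproduces the sum $(+1/2,-1/2,-1/2)+(-1/2,+1/2,-1/2)+(0,0,+1)=(0,0,0)$ worked out above.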
Note that the $Q$-charges in this case are not conserved. (Recall that for the three-point couplings there are no picture-changing insertions of the supercurrent.) In fact, it is easy to see that for the same reason all three-point couplings involving at least one twisted sector field vanish. Some higher point couplings, however, are non-zero. For example, consider the following six-point couplings: ${\overline \chi}{\overline \chi}{\overline \chi}\chi_a \chi_b \chi_c$. It can be easily checked that $H$-, $Q$- and $G$-charge conservation in this case implies that this coupling is non-zero if and only if $a\not=b\not=c\not=a$. This is a typical situation for asymmetric orbifolds: lower point couplings typically vanish because of the $Q$-charge non-conservation, whereas there are (usually) somewhat higher point couplings that are non-zero. This observation will play an important role in the superpotentials of the three-family grand unified string theories which we will discuss in the subsequent sections. {}For illustrative purposes, we give the lowest order non-vanishing terms in the superpotential of this asymmetric ${\bf Z}_3$ orbifold: \begin{equation} W= (\lambda_1 + \lambda_2 \overline{\chi}^3) \sum_{a\not=b\not=c\not=a} \chi_a \chi_b \chi_c + \dots ~. \end{equation} {}In passing, we remark that the above model was originally constructed in the twist basis: \begin{equation} V_1 =(0(-{1\over 3})^3 \vert \theta^3\vert\vert 0^3\vert {1\over 3}{1\over 3} {2\over 3} 0^5 \vert 0^8)~. \end{equation} Here $\theta$ denotes a $2 \pi/3$ rotation in the corresponding $SU(3)$ lattice. The above generating vector defines the same asymmetric ${\bf Z}_{3}$ orbifold model. The relation between the complex bosons $i \partial X^a$ in the twist formalism and the real bosons $\phi^a$ in the shift formalism is given in Eq.~(\ref{SU3}). \medskip \noindent {\em (2) A symmetric ${Z}$-orbifold model}.
{}Consider the following ${\bf Z}_3$ orbifold of the above Narain model: \begin{equation} V_1 =(0(-{1\over 3})^3 \vert (e_1/3)^3\vert\vert (e_1/3)^3\vert {1\over 3}{1\over 3}{2\over 3} 0^5 \vert 0^8)~. \end{equation} This model has $N=1$ SUSY, $U(1)^6 \otimes SU(3) \otimes E_6 \otimes E_8$ gauge group, and $36$ chiral families of fermions in the ${\bf 27}$ of $E_6$. Nine of these come from the untwisted sector, and the other $27$ from the twisted sector. This orbifold model is nothing but the original $Z$-orbifold model of Ref \cite{DHVW} at the special values of the moduli of the compactification torus. This is precisely the reason for the enhanced $U(1)^6$ gauge symmetry. Since the group actions are identical on the right-movers for these symmetric and asymmetric orbifolds, we can use the same supercurrent $T_{F}$. {}Let us fix the structure constants of the model. They are the same as in the asymmetric case except for $k_{11}=-1/3$. We have the following left-handed chiral supermultiplets:\\ $\bullet$ The untwisted sector fields $S_{aA\alpha}$.\\ The fields $S_{aA\alpha}$ are $SU(3)\otimes E_6 \otimes E_8$ singlets that are charged under $U(1)^6$. Thus, for $A=1$ the $U(1)$ charges are given by $(e_\alpha,0,0)$, for $A=2$ the $U(1)$ charges are given by $(0, e_\alpha,0)$, and for $A=3$ the $U(1)$ charges are given by $(0,0,e_\alpha)$. There is no correlation between the gauge quantum numbers and the index $a$. The latter is related to the $H$-charges (here we only give $H_{1,2,3}$ charges in the $-1$ picture; the corresponding $H$-charges for $Q_L$ left-handed SUSY generator are given by $(-1/2,-1/2,-1/2)$). For $a=1$ they are $(+1,0,0)$, for $a=2$ they are $(0,+1,0)$, and for $a=3$ they are $(0,0,+1)$.\\ $\bullet$ The untwisted sector fields $\chi_{a}$.\\ These are the same as in the asymmetric case. 
\\ $\bullet$ The twisted sector fields $\chi_{\alpha \beta \gamma}$.\\ The $27$ fields $\chi_{\alpha \beta \gamma}$ transform in the irrep $({\bf 1}, {\bf 27}, {\bf 1})$ of $SU(3)\otimes E_6 \otimes E_8$. Their $U(1)^6$ charges are given by $(-e_\alpha/3, -e_\beta/3, -e_\gamma/3)$. Their $Q$-charges are the same as their $U(1)^6$ charges in the spectrum picture. For illustrative purposes we give the part of the vertex operator for the field $\chi_{\alpha \beta \gamma}$ that corresponds to the right-moving internal bosons: \begin{equation} \exp (-i[e_\alpha \cdot \phi^1 ({\overline z}) +e_\beta \cdot \phi^2 ({\overline z}) +e_\gamma \cdot \phi^3 ({\overline z})]/3)~c((e_\alpha , e_\beta , e_\gamma ))~. \end{equation} $\bullet$ The twisted sector fields $T_{A\alpha \beta \gamma}$.\\ The $81$ fields $T_{A\alpha \beta \gamma}$ transform in the $({\overline {\bf 3}}, {\bf 1}, {\bf 1})$ irrep of $SU(3)\otimes E_6 \otimes E_8$. Their $Q$-charges are given by $(-e_\alpha/3, -e_\beta/3, -e_\gamma/3)$. Their $U(1)^6$ charges are given by $(+2e_\alpha/3, -e_\beta/3, -e_\gamma/3)$ for $A=1$, $(-e_\alpha/3, +2e_\beta/3, -e_\gamma/3)$ for $A=2$, and $(-e_\alpha/3, -e_\beta/3, +2e_\gamma/3)$ for $A=3$. Note that in the twisted sector all the left-handed fermions have $H$-charges $(-1/6,-1/6,-1/6)$, whereas their bosonic superpartners have $H$-charges $(+1/3,+1/3,+1/3)$. {}We are now ready to calculate scattering amplitudes in this model. Let us start with the three-point Yukawa interactions of the chiral families in ${\bf 27}$ of $E_6$. Note that terms like $\chi_a \chi_{\alpha \beta \gamma} \chi_{\alpha^{\prime} \beta^{\prime} \gamma^{\prime}}$ and $\chi_a \chi_b \chi_{\alpha \beta \gamma}$ are not allowed by $H$-charge and also $G$-charge conservation. The three-point Yukawa coupling $\chi_a \chi_b \chi_c$ is the same as in the asymmetric case and is non-zero only for $a\not=b\not=c\not=a$. 
{}Next, we turn to the scattering $\chi_{\alpha \beta \gamma} \chi_{\alpha^{\prime} \beta^{\prime} \gamma^{\prime}} \chi_{\alpha^{\prime\prime} \beta^{\prime\prime} \gamma^{\prime\prime}}$. The $G$-charge and $Q$-charge conservation (which in this case give the same selection rules, since the $Q$-charges for these fields are the same as the $U(1)^6$ $G$-charges) give the following selection rules for the scattering: $\alpha\not=\alpha^{\prime}\not=\alpha^{\prime\prime}\not=\alpha$, and similarly for the $\beta$ and $\gamma$ indices. {}We will not give all the couplings for this model. We will finish our discussion here by considering the couplings of the type $S_{aA\delta} \chi_{\alpha \beta \gamma} \chi_{\alpha^{\prime} \beta^{\prime} \gamma^{\prime}} \chi_{\alpha^{\prime\prime} \beta^{\prime\prime} \gamma^{\prime\prime}}$. Since all the other couplings are similar, for definiteness let us consider the case $a=1$ and $A=1$. Let us take $S_{aA\delta}$ to be in the $0$-picture, two other fields in the $-1/2$-picture, and the last one in the $-1$-picture. Note that for the $H$-charges to be conserved, we must have $H_a=0$ for the field $S_{aA\delta}$ in the $0$-picture. This means that the term in the OPE $T_F S_{aA\delta}$ that can possibly contribute to this scattering must be of the form $\sim \exp(-i\rho_1)\sum_\alpha \exp (ie_\alpha\cdot \phi^1)$. Then the $Q$-charge and $G$-charge conservation together tell us that the above four-point coupling is non-zero if and only if $\alpha=\alpha^{\prime}=\alpha^{\prime\prime}=\delta$, $\beta\not=\beta^{\prime}\not=\beta^{\prime\prime}\not=\beta$, and $\gamma\not=\gamma^{\prime}\not=\gamma^{\prime\prime}\not=\gamma$. {}Let us compare our results with those of Ref \cite{DFMS}. To do so, let us first note that this $Z$-orbifold is a factorized orbifold, and it is convenient to carry out the discussion on the example of a ${\bf Z}_3$ orbifold of one complex boson.
Then the $27$ families of chiral fermions in the twisted sector come from $3 \times 3\times 3$ fixed points. Thus, within the first set of three, the fixed points are labeled in our notation by $\alpha$. So we will discuss the couplings only for one index $\alpha$, and just point out that the other two indices $\beta$ and $\gamma$ obey similar selection rules. We see that the Yukawa couplings in our case are non-zero only for $\alpha\not=\alpha^{\prime}\not=\alpha^{\prime\prime}\not=\alpha$. All the other Yukawa couplings are zero. In Ref \cite{DFMS} there are also additional non-zero Yukawa couplings with $\alpha=\alpha^{\prime}=\alpha^{\prime\prime}$ (although all the others, with, say, $\alpha=\alpha^{\prime}\not=\alpha^{\prime\prime}$, are zero). The reason why we do not have the latter couplings is that we are considering a special point in the moduli space with enhanced gauge symmetry, and the $G$-charge conservation forbids these couplings. Note that if we move away from this point of enhanced gauge symmetry, and consider generic points as in Ref \cite{DFMS}, we will recover the additional non-vanishing couplings. This can be seen from the four-point couplings $S_{aA\delta} \chi_{\alpha \beta \gamma} \chi_{\alpha^{\prime} \beta^{\prime} \gamma^{\prime}} \chi_{\alpha^{\prime\prime} \beta^{\prime\prime} \gamma^{\prime\prime}}$. As we move away from the special point in the moduli space we break the $U(1)^2$ (for one complex boson) gauge symmetry. In terms of effective field theory this corresponds to giving vevs to the fields $S_{aA\delta}$. Then we effectively generate three-point Yukawa couplings for $\alpha=\alpha^{\prime}=\alpha^{\prime\prime}$, but the couplings with, say, $\alpha=\alpha^{\prime}\not=\alpha^{\prime\prime}$, remain zero, because there were no corresponding higher point couplings in the superpotential to begin with. The latter was due to the $Q$-charge non-conservation.
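The pattern of which index triples couple follows from $e_1+e_2+e_3=0$; a short exhaustive check (our sketch, in the root-basis coordinates $e_1=(1,0)$, $e_2=(0,1)$, $e_3=-e_1-e_2$, with the overall factor of $1/3$ dropped):

```python
from itertools import product

# Root-basis coordinates (our choice); note e1 + e2 + e3 = 0.
roots = {1: (1, 0), 2: (0, 1), 3: (-1, -1)}

def total_charge(triple):
    # total Q-charge of three twisted fields, -(e_a + e_a' + e_a''),
    # up to the common factor 1/3
    return tuple(-sum(roots[a][i] for a in triple) for i in range(2))

# the charge sum vanishes exactly when the three indices are all distinct
for triple in product((1, 2, 3), repeat=3):
    assert (total_charge(triple) == (0, 0)) == (len(set(triple)) == 3)
```

In particular, triples with $\alpha=\alpha^{\prime}=\alpha^{\prime\prime}$ or $\alpha=\alpha^{\prime}\not=\alpha^{\prime\prime}$ leave a non-zero charge, matching the selection rule above.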
On the other hand, note that the absence of the couplings of the $\alpha=\alpha^{\prime}\not=\alpha^{\prime\prime}$ type in the orbifold language is due to the orbifold space-group selection rules. Thus, the $Q$-charge conservation has this space-group discrete symmetry encoded in it, just as $H$-charge conservation guarantees that the orbifold point-group selection rules are satisfied. {}Again, the above model can be written in the twist basis. The generating vector \begin{equation} V_1 =(0(-{1\over 3})^3 \vert \theta^3\vert\vert \theta^3\vert {1\over 3}{1\over 3}{2\over 3} 0^5 \vert 0^8)~ \end{equation} produces the same symmetric ${\bf Z}_3$ orbifold as above. \section{Three-Family $E_6$ Model} {}In this and the following sections, we describe the construction of some of the three-family grand unified string theories previously presented in Refs \cite{kt}. In particular, we will rewrite these models in bases where the supercurrent is in the bosonized form. As explained previously, scattering amplitudes are most easily calculated in such a representation. This section describes the $E_6$ model. It is organized as follows. In subsection A we set up the notation and describe the construction of the Narain model to be orbifolded. In subsection B we construct the unique three-family $E_6$ model by ${\bf Z}_6$ orbifolding this Narain model. This $E_6$ model can be realized as two different ${\bf Z}_6$ orbifolds, which we refer to as the $E1$ and $E2$ models. In subsection C we present the shift construction for the $E1$ model and compute the superpotential using the bosonic supercurrent. The shift construction and the superpotential for the $E2$ model will be given in subsection D. In section VII, we will discuss a three-family $SO(10)$ model. We use this example to illustrate how one can calculate the superpotential for models that do not admit a bosonic supercurrent.
In section VIII, we will discuss two three-family $SU(6)$ models which are obtained by adding a ${\bf Z}_3$ Wilson line to the $E_6$ model. Finally, we will briefly comment on other three-family grand unified string models. \subsection{Narain Model} {}Consider the Narain model, which we will refer to as $N(1,1)$, with $\Gamma^{6,22}=\Gamma^{6,6} \otimes \Gamma^{16}$, where $\Gamma^{16}$ is the ${\mbox{Spin}}(32)/{\bf Z}_2$ lattice, and ${\Gamma}^{6,6}$ is the lattice spanned by the vectors $(p_R \vert\vert p_L)$, where $p_L,p_R \in {\tilde \Gamma}^6$ ($E_6$ weight lattice), and $p_L-p_R\in \Gamma^6$ ($E_6$ root lattice). Recall that, under $E_6\supset SU(3)^3$, \begin{eqnarray} {\bf 27} &=& ({\bf 3},{\bf 3},{\bf 1})+ ({\overline {\bf 3}},{\bf 1}, {\bf 3}) + ({\bf 1}, {\overline {\bf 3}}, {\overline {\bf 3}}) ~,\\ {\bf 78} &=& ({\bf 8},{\bf 1},{\bf 1}) +({\bf 1},{\bf 8},{\bf 1}) + ({\bf 1},{\bf 1},{\bf 8})+ ({\bf 3},{\overline {\bf 3}},{\bf 3})+ ({\overline {\bf 3}},{\bf 3}, {\overline {\bf 3}}) ~, \end{eqnarray} so we can write ${\bf p} \in {\tilde \Gamma}^6$ as \begin{equation} \label{psu3} {\bf p}=({\bf q}_1+ \lambda {\bf w}_1,{\bf q}_2+ \lambda {\overline {\bf w}}_2, {\bf q}_3+ \lambda {\bf w}_3) ~, \end{equation} where ${\bf q}_i \in \Gamma^2$ ($SU(3)$ root lattice), ${\bf w}_i$ (${\overline {\bf w}}_i$) is a weight of the ${\bf 3}$ (${\overline {\bf 3}}$) of $SU(3)$, and $ \lambda=0,~1,~2$. {}Next, consider the model, which we will refer to as $N1(1,1)$, generated from the $N(1,1)$ model by adding the following Wilson lines: \begin{eqnarray} &&V_1 =(0^4\vert 0^3 \vert\vert e_1/2~0~0 \vert {\bf s}~{\bf 0}~{\bf 0} \vert {\overline S}) ~, \nonumber\\ \label{Wilson} &&V_2 =(0^4\vert 0^3 \vert\vert e_2/2~0~0 \vert {\bf 0}~{\bf s}~{\bf 0} \vert {\overline S}) ~.
\end{eqnarray} Here the first four entries correspond to the right-moving world-sheet fermions, the next three right-moving entries stand for the three right-moving complex bosons $X^a$, $a=1,2,3$ (each corresponding to one of the three $SU(3)$s). The double vertical line separates the right-movers from the left-movers. The first three left-moving entries correspond to the left-moving counterparts of the $X^a$ bosons. The remaining 16 left-moving world-sheet bosons generate the ${\mbox{Spin}}(32)/{\bf Z}_2$ lattice. The $SO(32)$ shifts are given in the $SO(10)^3 \otimes SO(2)$ basis. In this basis, ${\bf 0}$ stands for the null vector, ${\bf v}$($V$) is the vector weight, whereas ${\bf s}$($S$) and ${\overline {\bf s}}$(${\overline S}$) are the spinor and anti-spinor weights of $SO(10)$($SO(2)$). (For $SO(2)$, $V=1$, $S=1/2$ and ${\overline S}=-1/2$). The untwisted sector provides gauge bosons of $SU(3)^{2} \otimes U(1)^{2} \otimes SO(10)^{3} \otimes SO(2)$. There are additional gauge bosons from the new sectors. Recall that under $E_6 \supset SO(10) \otimes U(1)$, \begin{equation} {\bf 78}={\bf 1}(0)+{\bf 45}(0)+{\bf 16}(3)+\overline{\bf 16}(-3)~. \end{equation} It is easy to see that the $V_1$, $V_2$ and $V_1+V_2$ sectors provide the necessary ${\bf 16}(3)$ and $\overline{\bf 16}(-3)$ gauge bosons to the three $SO(10)$'s respectively. The resulting Narain $N1(1,1)$ model has $N=4$ SUSY and gauge group $SU(3)^2 \otimes (E_6)^3$ provided that we set $k_{10}=k_{20}=0$. The permutation symmetry of the three $E_6$ factors should be clear from the above construction. Since there is only one $N=4$ SUSY $SU(3)^2 \otimes (E_6)^3$ model in $4$-dimensional heterotic string theory, this permutation symmetry may also be explicitly checked by looking at its one-loop modular invariant partition function. 
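The $E_6$ branchings quoted above can be sanity-checked by counting dimensions (a trivial arithmetic sketch of ours):

```python
# Dimension counts for the branchings quoted above:
#   E6 > SU(3)^3:        27 = (3,3,1) + (3bar,1,3) + (1,3bar,3bar)
#                        78 = 3 x (8,1,1) + (3,3bar,3) + (3bar,3,3bar)
#   E6 > SO(10) x U(1):  78 = 1(0) + 45(0) + 16(3) + 16bar(-3)
assert 3 * 3 * 1 + 3 * 1 * 3 + 1 * 3 * 3 == 27
assert 8 + 8 + 8 + 3 * 3 * 3 + 3 * 3 * 3 == 78
assert 1 + 45 + 16 + 16 == 78
```

The last line is the count relevant to the ${\bf 16}(3)$ and $\overline{\bf 16}(-3)$ gauge bosons supplied by the $V_1$, $V_2$ and $V_1+V_2$ sectors.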
\subsection{$E_6$ Model: Twist Construction} {}Before we describe the ${\bf Z}_6$ asymmetric orbifold that leads to the three-family $E_6$ model, we will introduce some notation. By $\theta$ we will denote a $2\pi/3$ rotation of the corresponding complex (or, equivalently, two real) chiral world-sheet boson(s). Thus, $\theta$ is a ${\bf Z}_3$ twist. Similarly, by $\sigma$ we will denote a $\pi$ rotation of the corresponding complex chiral world-sheet boson. Thus, $\sigma$ is a ${\bf Z}_2$ twist. By ${\cal P}$ we will denote the outer-automorphism of the three $SO(10)$s that arise in the breaking $SO(32)\supset SO(10)^3 \otimes SO(2)$. Note that ${\cal P}$ is a ${\bf Z}_3$ twist. Finally, by $(p_1,p_2)$ we will denote the outer-automorphism of the corresponding two complex chiral world-sheet bosons. Note that $(p_1,p_2)$ is a ${\bf Z}_2$ twist. {}The $E_6$ model can be constructed by performing the following asymmetric ${\bf Z}_6$ orbifolds on the $N=4$ SUSY $SU(3)^2 \otimes (E_6)^3$ model ({\em i.e.}, $N1(1,1)$ model): \\ $\bullet$ The $E1$ model. Start from the $N1(1,1)$ model and perform the following twists: \begin{eqnarray}\label{E1twist} &&T_3 =(0 (-1/3)^3\vert \theta,\theta,\theta\vert\vert \theta,e_1/3,0 \vert {\cal P} \vert 2/3)~,\nonumber\\ &&T_2=(0~(-1/2)^2~0\vert\sigma, p_1,p_2\vert\vert 0,e_1/2,e_1/2 \vert 0^{15} \vert 0)~. \end{eqnarray} This model has $SU(2)_1 \otimes (E_6)_3 \otimes U(1)^3$ gauge symmetry. The massless spectrum of the $E1$ model is given in Table \ref{E6spectra}. They are grouped according to where they come from, namely, the untwisted sector U, the ${\bf Z}_3$ twisted ({\em i.e.}, $T_3$ and $2T_3$) sector T3, the ${\bf Z}_6$ twisted ({\em i.e.}, $T_3+T_2$ and $2T_3+T_2$) sector T6, and ${\bf Z}_2$ twisted ({\em i.e.}, $T_2$) sector T2. Note that all particles have integer $U(1)$ charges. The normalization, or compactification radius $r$, of each left-moving world-sheet boson is given at the bottom of the table. 
The $U(1)$ charge of a particle with charge $n$ contributes $n^2 r^2/2$ to its conformal highest weight. That is, the corresponding part of the vertex operator has momentum $nr$.\\ $\bullet$ The $E2$ model. Start from the $N1(1,1)$ model and perform the following twists: \begin{eqnarray}\label{E2twist} &&T_3 =(0~0~(-1/3)^2\vert 0,\theta,\theta\vert\vert \theta,e_1/3,0 \vert {\cal P} \vert 2/3)~,\nonumber\\ &&T_2=(0~(-1/2)^2~0\vert \sigma, p_1,p_2\vert\vert 0,e_1/2,e_1/2\vert 0^{15} \vert 0)~. \end{eqnarray} This model has $SU(2)_1 \otimes (E_6)_3 \otimes U(1)^3$ gauge symmetry. The massless spectrum of the $E2$ model is given in Table \ref{E6spectra}. {}Note that the spin structures of the world-sheet fermions in the right-moving sector are fixed by the world-sheet supersymmetry consistency. The string consistency conditions impose tight constraints on the allowed twists. Using the approach given in Ref \cite{kt} one can check that both sets of twists presented above are consistent provided that the appropriate choices of the structure constants $k_{ij}$ are made. It is then straightforward, but somewhat tedious, to work out the massless spectrum of the model. (Again, more details can be found in Ref \cite{kt}). We will give an alternative way of constructing this model in the following subsections, and working out the massless spectrum there is somewhat easier. {}The models $E1$ and $E2$ have the same tree-level massless spectra. We will show that interactions on the two orbifolds are the same even though naively the fields in the two models seem to have different origins (in particular, the same states come from different twisted sectors of the two orbifolds). They are possibly a $T$-dual pair. {}In the following subsections, we will give the shift representation for the twists given in models $E1$ and $E2$. 
To be able to do so it is necessary and sufficient, as we already discussed in the previous sections, that the corresponding Kac-Moody algebras ${\cal G}^\prime_R$ are realized at level $1$ and have central charge $6$. This is indeed the case for the $E1$ and $E2$ models. Let us first consider the $E1$ model. The original Kac-Moody algebra ${\cal G}_R$ before orbifolding was $(E_6)_1$. The ${\bf Z}_3$ twist reduces it to $(SU(3)_1)^3$, whereas the ${\bf Z}_2$ twist breaks it to $SU(6)_1\otimes SU(2)_1$. The combined action of the ${\bf Z}_3$ and ${\bf Z}_2$ twists, {\em i.e.}, the corresponding ${\bf Z}_6$ twist breaks $(E_6)_1$ to ${\cal G}^\prime_R= [SU(2)_1 \otimes U(1)]^3$, which is realized at level one. Similarly, in the $E2$ model we have ${\cal G}^\prime_R= (SU(2)_1)^4 \otimes U(1)^2$. Even though the right-moving Kac-Moody algebra is not $SU(3)^3$ in either case, we can always write the Kac-Moody charges in this basis. The translation of the charges between the different bases is given in Table \ref{convert}. \subsection{$E1$ Model: Shift Construction} {}Let us present the generating vectors in the shift formalism which produce the $E1$ model. Here $V_1$ and $V_2$ correspond to the $T_3$ and $T_2$ twists acting on the $N1(1,1)$ model: \begin{eqnarray} &&V_1 =(0 (-1/3)^3\vert e_1/3,e_1/3,e_1/3\vert\vert \theta,e_1/3,0 \vert {\cal P} \vert 2/3)~,\nonumber\\ \label{ME1} &&V_2=(0~(-1/2)^2~0\vert e_1/2,0,0\vert\vert 0,e_1/2,e_1/2 \vert 0^{15} \vert 0)~. \end{eqnarray} {}We may rewrite the vector $V_1$ in the following way. Consider the branching of $SO(32)$ to $SO(10)^3 \otimes SO(2)$. The three $SO(10)$s are permuted by the action of the ${\bf Z}_3$ outer-automorphism twist ${\cal P}$: $\phi^I_1 \rightarrow \phi^I_2 \rightarrow \phi^I_3 \rightarrow \phi^I_1$, where the real bosons $\phi^I_p$, $I=1,...,5$, correspond to the $p^{\mbox{th}}$ $SO(10)$ subgroup, $p=1,2,3$.
We can define new bosons $\varphi^I \equiv {1\over \sqrt{3}}(\phi^I_1 +\phi^I_2 +\phi^I_3)$; the other ten real bosons are complexified via the linear combinations $\Phi^I \equiv {1\over\sqrt{3}}(\phi^I_1 +\omega\phi^I_2 +\omega^2 \phi^I_3)$ and $(\Phi^I)^\dagger \equiv {1\over \sqrt{3}}(\phi^I_1 +\omega^2\phi^I_2 +\omega \phi^I_3)$, where $\omega =\exp(2\pi i /3)$. Under ${\cal P}$, $\varphi^I$ is invariant, while $\Phi^I$ ($(\Phi^I)^\dagger$) are eigenstates with eigenvalue $\omega^2$ ($\omega$), i.e., ${\cal P}$ is equivalent to a ${\bf Z}_3$ twist $\theta$ on each $\Phi^I$. Finally, string consistency requires the inclusion of the $2/3$ shift in the $SO(2)$ lattice of the boson $\eta$. This simply changes the radius of this left-moving world-sheet boson. So, in the $\Phi^I$, $\varphi^I$ and $\eta$ basis, $V_1$ becomes \begin{eqnarray} V_1 =(0 (-1/3)^3\vert e_1/3,e_1/3,e_1/3\vert\vert \theta, e_1/3,0 \vert \theta ^5 0_r^5 \vert (2/3)_r)~ \end{eqnarray} where the subscript $r$ indicates real bosons. {}Including the $V_0$ vector, the generating vectors for $E1$ model can then be written as: \begin{eqnarray} V_0 &=& ( (-1/2)^4 \vert 0^6 \vert \vert 0^{22} ) \nonumber \\ V_1 &=& (0 (-1/3)^3\vert e_1/3,e_1/3,e_1/3\vert\vert (1/3)_t, e_1/3,0 \vert (1/3)^5_t 0_r^5 \vert (2/3)_r)~, \nonumber \\ V_2 &=& (0~(-1/2)^2~0\vert e_1/2,0,0\vert\vert 0,e_1/2,e_1/2 \vert 0^{15} \vert 0)~. \end{eqnarray} where the subscript $t$ indicates that the corresponding complex boson is twisted. {}The dot products $V_i \cdot V_j$ and the choice of the structure constants $k_{ij}$ are given by: \begin{equation} V_i \cdot V_j = \left( \begin{array}{ccc} -1 & -1/2 & -1/2 \\ -1/2 & -1/3 & -1/3 \\ -1/2 & -1/3 & 0 \end{array} \right) ~, \quad \quad k_{ij}= \left( \begin{array}{ccc} 0 & 0 & 0 \\ 1/2 & 0 & 0 \\ 1/2 & 2/3 & 1/2 \end{array} \right) ~. \end{equation} This defines a consistent string model, provided that a supercurrent satisfying (\ref{XOPE}) and (\ref{triplet}) can be found. 
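The entries of this dot-product matrix can be reproduced numerically under a particular reading of the conventions, which we infer rather than quote: right-moving entries contribute to the Lorentzian product with a minus sign, entries proportional to $e_1$ use $e_1^2=2$, a real-boson entry $v$ contributes $v^2$, and the twisted entries drop out of $V_i \cdot V_j$. A Python sketch under these assumptions:

```python
from fractions import Fraction as F

E1SQ = 2  # (e_1)^2 = 2 for an SU(3) root

# Each vector: (right fermions, right bosons as e1-coefficients,
#               left bosons as e1-coefficients, real left bosons).
# Twisted entries are set to 0: we assume they drop out of Vi.Vj.
V0 = ([F(-1, 2)]*4,             [0]*3,          [0]*3,               [0])
V1 = ([0] + [F(-1, 3)]*3,       [F(1, 3)]*3,    [0, F(1, 3), 0],     [F(2, 3)])
V2 = ([0, F(-1, 2), F(-1, 2), 0], [F(1, 2), 0, 0], [0, F(1, 2), F(1, 2)], [0])

def dot(u, v):
    fr = sum(a*b for a, b in zip(u[0], v[0]))        # right-moving fermions
    rb = E1SQ*sum(a*b for a, b in zip(u[1], v[1]))   # right-moving bosons
    lb = E1SQ*sum(a*b for a, b in zip(u[2], v[2]))   # left-moving bosons
    re = sum(a*b for a, b in zip(u[3], v[3]))        # real left-moving bosons
    return -(fr + rb) + lb + re                      # right-movers: minus sign

M = [[dot(u, v) for v in (V0, V1, V2)] for u in (V0, V1, V2)]
assert M == [[-1,       F(-1, 2), F(-1, 2)],
             [F(-1, 2), F(-1, 3), F(-1, 3)],
             [F(-1, 2), F(-1, 3), 0]]
```

The sketch reproduces all six independent entries of the matrix above, which supports (but does not prove) this reading of the conventions.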
\subsubsection{Bosonic Supercurrent} {}Before we compute the spectrum and calculate the superpotential, it is important to construct the supercurrent. First of all, the supercurrent satisfying the constraints (\ref{XOPE}), (\ref{triplet}) must exist in order for the model to be consistent. Furthermore, in calculating higher point couplings, picture-changing requires insertions of the supercurrent. In the following, we will construct the supercurrent for the $E1$ model in detail. {}The currents $\partial X^{a}$ can be written as linear combinations of vertex operators for the root generators of the original ${\cal G}_{R}= (E_{6})_1$ Kac-Moody algebra. It is convenient to express the roots of $E_6$ in $SU(3)^3$ basis (Eq.(\ref{psu3})). As before, we define $\pm e_1,\pm e_2$ and $\pm e_3=\mp (e_1+e_2)$ as the roots of $SU(3)$. The weight vectors $\tilde{e}^1,\tilde{e}^2$ are defined by $\tilde{e}^i e_j = \delta^i_j$ and $\tilde{e}^0=\tilde{e}^2-\tilde{e}^1$. Therefore, the weight ${\bf 3}$ of $SU(3)$ is represented by $\tilde{e}^2,-\tilde{e}^1,-\tilde{e}^0$. The triplet constraints (Eq.(\ref{triplet})) from the $V_1$ and $V_2$ vectors restrict the possible terms in $\partial X^{a}$. The currents $\partial X^{a}$ and $\partial X^{b \dagger}$ must also obey the OPEs (Eq.(\ref{XOPE})). A solution satisfying all the constraints is given by: \begin{eqnarray}\label{E1T_F} i \partial X^{1} &=& {1 \over \sqrt{12}} \left( J_1 + \sqrt{2} J_2 + \sqrt{3} J_3 + \sqrt{2} J_4 + J_5 + \sqrt{2} J_6 + J_7 \right) ~, \nonumber \\ i \partial X^{2} &=& {1 \over \sqrt{12}} \left( L_1 - \sqrt{2} L_2 + \sqrt{3} L_3 - \sqrt{2} L_4 - L_5 - \sqrt{2} L_6 + L_7 \right) ~, \\ i \partial X^{3} &=& {1 \over \sqrt{6}} \left( K_1 - K_2 - K_3 +K_1^{'} - K_{2}^{'} + K_{3}^{'} \right) ~, \nonumber \end{eqnarray} where $J_{i}= \exp(-i a_i \phi) c(-a_i)$, $L_{i}= \exp(-i l_i \phi) c(-l_i)$, $K_i = \exp(-i k_i \phi) c(-k_i)$ and $K_{i}^{'}= \exp(-i k_{i}^{'} \phi) c(-k_{i}^{'})$. 
Here $a_i$, $l_i$, $k_i$ and $k_i^{'}$ are roots of $E_6$ defined in Table \ref{E1super}. We will define the cocycle operators $c(K)$ in a moment. {}Notice that out of the $72$ roots of $E_6$, only $7$ of them contribute to $\partial X^{1}$. It is easy to see that $a_1,\cdots,a_6$ form a set of simple roots of $E_6$ and $-a_7=a_1+2a_2+3a_3+2a_4+a_5+2a_6$ is the highest root. (Note that the coefficients are simply the co-marks of $E_6$ simple roots). Similarly, $l_1,\cdots,l_6$ which contribute to $\partial X^{2}$ form another set of simple roots of $E_6$ with $-l_7$ being the highest root. Since the ${\bf Z}_2$ action ($V_2$ vector) does not act on $\partial X^{3}$ and $\psi^{3}$, $\partial X^{3}$ can be expressed in terms of the root generators of $SU(3)^{2}$. We see that $k_1$ and $k_2$ form a set of simple roots of $SU(3)$ and $-k_3$ is the highest root. Similarly for the $k_{i}^{'}$. The co-marks of the simple roots of $E_6$ and $SU(3)$ are also listed in Table \ref{E1super}. Other choices of the supercurrent are equivalent to this one (involving only a change of basis). In this sense, the supercurrent consistent with the twists-shifts of this model is unique. {}Any weight vector $K$ of $E_6$ can then be expressed in terms of the basis vectors $a_1,\cdots,a_6$, {\em i.e.}, $K=\sum_{j} n_j a_j$. We can define an ordered product of weight vectors $K$ and $K^{'}$, \begin{equation} K \ast K^{'} = \sum_{i>j} n_{i} n_{j}^{'} a_{i} \cdot a_{j} \end{equation} The cocycle operator and the cocycle structure constant can then be given by \cite{cocycle}: \begin{eqnarray} c(K)&=&(-1)^{{\bf p} \ast K} \\ \epsilon(K,K^{'})&=&(-1)^{K \ast K^{'}} \end{eqnarray} where ${\bf p}$ is the momentum operator, i.e., ${\bf p} \vert P \rangle = P \vert P \rangle$. The cocycles defined above depend crucially on our choice of the basis vectors. 
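The ordered product and cocycle signs above can be made concrete with the $E_6$ Cartan matrix in the basis $a_1,\dots,a_6$. The node labeling below (a chain $a_1$-$a_2$-$a_3$-$a_4$-$a_5$ with $a_6$ attached to $a_3$) is inferred from the co-mark pattern $1,2,3,2,1,2$ quoted in the text, and weights are represented by their integer coefficients $n_j$. The sketch also checks the familiar two-cocycle relation $\epsilon(K,K')\,\epsilon(K',K)=(-1)^{K\cdot K'}$:

```python
# E6 Cartan matrix in the basis a_1..a_6 of the text: a1-a2-a3-a4-a5 is a
# chain and a6 attaches to a3, so that the highest root is
# -a7 = a1 + 2a2 + 3a3 + 2a4 + a5 + 2a6 (co-marks 1,2,3,2,1,2).
A = [[ 2, -1,  0,  0,  0,  0],
     [-1,  2, -1,  0,  0,  0],
     [ 0, -1,  2, -1,  0, -1],
     [ 0,  0, -1,  2, -1,  0],
     [ 0,  0,  0, -1,  2,  0],
     [ 0,  0, -1,  0,  0,  2]]

def ip(n, m):
    """Inner product K.K' for K = sum_j n_j a_j, K' = sum_j m_j a_j."""
    return sum(n[i]*m[j]*A[i][j] for i in range(6) for j in range(6))

def star(n, m):
    """Ordered product K * K' = sum_{i>j} n_i m_j a_i.a_j."""
    return sum(n[i]*m[j]*A[i][j] for i in range(6) for j in range(6) if i > j)

def eps(n, m):
    """Cocycle structure constant eps(K,K') = (-1)^{K*K'}."""
    return (-1)**(star(n, m) % 2)

# The highest root has length squared 2, and the co-marks (plus the affine
# mark 1) sum to the Coxeter number h = 12 of E6.
theta = [1, 2, 3, 2, 1, 2]
assert ip(theta, theta) == 2
assert 1 + sum(theta) == 12

# Two-cocycle property: eps(K,K') eps(K',K) = (-1)^{K.K'}
K, Kp = [1, 0, 0, 0, 0, 0], [0, 1, 0, 0, 0, 0]
assert eps(K, Kp) * eps(Kp, K) == (-1)**(ip(K, Kp) % 2)
```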
{}The coefficients of the terms in $\partial X^{a}$ ({\em i.e.\/}, $\sqrt{m_i/h}$, where $m_i$ is the co-mark and $h=1+\sum_{i} m_{i}$ is the Coxeter number) are determined up to phases by the OPE of $\partial X^{1}$ with $\partial X^{1 \dagger}$. The phases we have chosen in (\ref{E1T_F}) ensure that the OPEs of $\partial X^{a}$ and $\partial X^{b}$ for $a \not= b$ are non-singular. {}The supercurrent $T_F$ (the internal part) is therefore a linear combination of $40$ terms, each with different right-moving quantum numbers (given in Table \ref{E1super}): \begin{eqnarray} T_{F} &=& {i \over 2} \sum_{a=1}^{3} \psi^a \partial X^a +{\mbox{h.c.}} \nonumber \\ &=& {1 \over 2} \left( {1\over \sqrt{12}} e^{i H_1 \rho_1} \sum_{i=1}^{7} \sqrt{m_i} e^{-ia_i \phi} c(-a_i) + {1\over \sqrt{12}} e^{i H_2 \rho_2} \sum_{i=1}^{7} \sqrt{m_i} e^{-il_i \phi} c(-l_i) \right. \nonumber \\ &&+\left. {1\over \sqrt{6}} e^{i H_3 \rho_3} \sum_{i=1}^{3} ( e^{-ik_i \phi} c(-k_i) +e^{-ik^{\prime}_i \phi} c(-k^{\prime}_i)) \right) +{\mbox{h.c.}}~. \end{eqnarray} where $m_i$ are the co-marks of the $E_6$ simple roots and $h=12$ for $E_6$. \subsubsection{Spectrum} {}We are now ready to compute the massless spectrum. Let us recall what the left- and the right-moving quantum numbers are. For the right-movers, we have the $H$-charges $H_0,\cdots,H_3$, and the $Q^R$ charges $({\bf Q}^R_1,{\bf Q}^R_2,{\bf Q}^R_3)$ in the $SU(3)^3$ basis. The left-moving charges are: $SU(3)$ charges ${\bf Q}^L_2,{\bf Q}^L_3$, an $SO(10)$ charge ${\bf q}$, a $U(1)$ charge $Q$ and a discrete charge $D$ coming from the ${\bf Z}_3$ twist. Here, the charge $Q$ is the $U(1)$ charge of the real boson which is shifted by $2/3$. {}Let us first consider the untwisted sector.
In the ${\bf 0}$ sector, the spectrum generating formulae (\ref{SGFU1}),(\ref{SGFU2}) read: \begin{eqnarray} V_{0} \cdot N &=& {1\over 2} (H_0 + H_1 +H_2 + H_3) = {1\over 2} \pmod{1} \\ V_{1} \cdot N &=& {1\over 3} (H_1 + H_2 + H_3) - {e_{1} \over 3} \cdot ({\bf Q}^R_1 + {\bf Q}^R_2 + {\bf Q}^R_3) \nonumber \\ &&+ {e_{1} \over 3} \cdot {\bf Q}^L_2 + {1 \over 3} D + {2\over 3} Q = 0 \pmod{1} \\ V_{2} \cdot N &=& {1\over 2} (H_1+H_2) - {e_1 \over 2} \cdot {\bf Q}^R_1 + {e_1 \over 2} \cdot ( {\bf Q}^L_2 + {\bf Q}^L_3 ) = 0 \pmod{1} \end{eqnarray} Here, the quantum number $D$ is the eigenvalue of the twist operator on the $6$ twisted complex bosons. Since the twist is ${\bf Z}_3$, $D=0,1,2 \pmod{3}$. For gauge bosons, $H_0=1$. The right-moving energy given by \begin{equation} E_R=-{1\over 2} + {1\over 2} \sum_{j=0}^3 H_j^2 + {1 \over 2} \sum_{j=1}^{3} ({\bf Q}^R_j)^2 + {\mbox{oscillators}} \end{equation} implies that ${\bf Q}^R=(0,0,0)$. The original Narain model has gauge symmetry $SU(3)^2 \otimes (E_6)^3$. It is easy to see from the spectrum generating formulae that all the $SU(3)^2$ roots are projected out except ${\bf Q}^L_3=\pm e_1$. As a result, $SU(3)^2$ is broken to $SU(2) \otimes U(1)^3$. On the other hand, $(E_6)^3$ is broken to the diagonal $(E_6)_3$ by the ${\bf Z}_3$ outer-automorphism. The resulting gauge group is therefore $SU(2)_1 \otimes (E_6)_3 \otimes U(1)^3$. {}There are other massless states in the ${\bf 0}$ sector with their fermionic partners in the $V_0$ sector. To determine the chiralities of the fields, let us consider the $V_0$ sector. We choose the convention that $H_0=-1/2$ for left-handed fermions.
The spectrum generating formulae (\ref{SGFU1}),(\ref{SGFU2}) read: \begin{eqnarray} V_{0} \cdot N &=& {1\over 2} (H_0 + H_1 +H_2 + H_3) = {1\over 2} \pmod{1} \\ V_{1} \cdot N &=& {1\over 3} (H_1 + H_2 + H_3) - {e_{1} \over 3} \cdot ({\bf Q}^R_1 + {\bf Q}^R_2 + {\bf Q}^R_3) \nonumber \\ &&+ {e_{1} \over 3} \cdot {\bf Q}^L_2 + {1 \over 3} D + {2\over 3} Q = {1\over 2} \pmod{1} \\ V_{2} \cdot N &=& {1\over 2} (H_1+H_2) - {e_1 \over 2} \cdot {\bf Q}^R_1 + {e_1 \over 2} \cdot ( {\bf Q}^L_2 + {\bf Q}^L_3 ) = {1\over 2} \pmod{1} \end{eqnarray} Again, massless states have ${\bf Q}^R=(0,0,0)$. First, consider states that are charged only under ${\bf Q}^L_2$ and ${\bf Q}^L_3$ and hence do not carry $D$ and $Q$ quantum numbers. The left-moving energy is zero only if $({\bf Q}^L_2)^2+({\bf Q}^L_3)^2=2$. The spectrum generating formulae further restrict the choices to $({\bf Q}^L_2,{\bf Q}^L_3)=(e_1,0)$ for the field $U_0$, $(e_2,0)$ for $U_{\pm +}$, and $(e_3,0)$ for $U_{\pm -}$. The definition of the fields as well as their $H$-charges are given in Table \ref{E1charges}. {}Now we turn to states that have non-trivial $D$ quantum numbers. The adjoint scalar $\Phi$ has ${\bf Q}^L_2={\bf Q}^L_3=0$. The spectrum generating formulae give $(H_1,H_2,H_3)=(+1/2,+1/2,+1/2)$ and \begin{equation}\label{D} D= 2 (1-Q) \pmod{3} \end{equation} {}Upon decomposition of ${\bf 78}$ of $E_6$ into $SO(10) \otimes U(1)$ representations: \begin{equation} {\bf 78} = {\bf 1}(0) + {\bf 45}(0) + {\bf 16}(+3) + {\overline{\bf 16}}(-3)~, \end{equation} we notice that the component fields of ${\bf 78}$ carry different $U(1)$ charges ($Q=0,0,1/2,-1/2$, respectively). It then follows from (\ref{D}) that the component fields of an $E_6$ multiplet can carry different discrete ${\bf Z}_3$ charges. We will return to this point when we discuss the superpotential. {}Now, let us turn to the twisted sectors.
In the $\overline{V_0+V_1}$ sector, \begin{equation} \overline{V_0+V_1}=(-1/2~(1/6)^3 \vert (e_1/3)^3 \vert \vert (1/3)_t (e_1/3) 0 \vert (1/3)^5 0_r^5 \vert (2/3)_r) ~, \end{equation} the spectrum generating formulae (\ref{SGFT}) give: \begin{eqnarray} V_{0} \cdot N &=& {1\over 2} (H_0 + H_1 +H_2 + H_3) = {1\over 2} \pmod{1} \\ V_{1} \cdot N &=& {1\over 3} (H_1 + H_2 + H_3) - {e_{1} \over 3} \cdot ({\bf Q}^R_1 + {\bf Q}^R_2 + {\bf Q}^R_3) \nonumber \\ &&+ {e_{1} \over 3} \cdot {\bf Q}^L_2 + {1 \over 3} D + {2\over 3} Q = {1\over 2} \pmod{1} \\ V_{2} \cdot N &=& {1\over 2} (H_1+H_2) - {e_1 \over 2} \cdot {\bf Q}^R_1 + {e_1 \over 2} \cdot ( {\bf Q}^L_2 + {\bf Q}^L_3 ) = +{1\over 6} \pmod{1} \end{eqnarray} Since $H_a \in {\bf Z}+ {1\over 6}$ for $a=1,2,3$, for massless states, $(H_1,H_2,H_3)=(1/6,1/6,1/6)$ and hence $H_0=1/2$. Therefore, this sector provides antiparticle states whereas the $\overline{V_0+2V_1}$ sector provides the corresponding particles. The $H$-charges (and also $Q$-charges) of particle and antiparticle states have opposite signs and so it suffices to consider only the $\overline{V_0+V_1}$ sector. In this sector, ${\bf Q}^R_j$ for $j=1,2,3$ and ${\bf Q}^L_{2}$ are shifted by $e_1/3$. The simplest choice ${\bf Q}^R=(e_1/3,e_1/3,e_1/3)$ and $({\bf Q}^L_2,{\bf Q}^L_3)=(e_1/3,0)$ has $E_R=0$ and \begin{eqnarray} E_L &=& -1+ {1\over 2} \sum_{r=1}^{6} T^r(1-T^r) + {1\over 2} \left( ({\bf Q}^L_2)^2 +({\bf Q}^L_3)^2 \right) + {1\over 2} ({{\bf q} \over \sqrt{3}})^2 + {1 \over 2}({2\over 3}+Q)^2 +\dots \nonumber\\ &=& -{2\over 9} + {1\over 6} {\bf q}^2 + {1 \over 2}({2\over 3}+Q)^2 + \dots \end{eqnarray} The only possible massless states come from $({\bf q},Q)=({\bf 0},0)$, $({\bf v},-1)$ and $({\bf c},-1/2)$. Here ${\bf 0}$, ${\bf v}$ and ${\bf c}$ are the singlet, vector and spinor representations of $SO(10)$, respectively, and together they form ${\overline{\bf 27}}$ of $E_6$. It is easy to see that the spectrum generating formulae are satisfied.
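The masslessness of these three states can also be checked by exact arithmetic in $E_L$; a minimal sketch, using the standard weight lengths ${\bf q}^2=0,\,1,\,5/4$ for the singlet, vector and spinor of $SO(10)$:

```python
from fractions import Fraction as F

def E_L(q_sq, Q):
    """Left-moving energy in this sector:
       E_L = -2/9 + q^2/6 + (2/3 + Q)^2 / 2  (oscillator terms dropped)."""
    return F(-2, 9) + F(q_sq)/6 + (F(2, 3) + F(Q))**2 / 2

# (q, Q) = (0, 0), (v, -1), (c, -1/2): q^2 = 0, 1, 5/4 for the singlet,
# vector and (anti)spinor weights of SO(10), respectively.
states = [(F(0), F(0)), (F(1), F(-1)), (F(5, 4), F(-1, 2))]
assert all(E_L(q_sq, Q) == 0 for q_sq, Q in states)
```

A state with, say, $({\bf q},Q)=({\bf v},0)$ gives $E_L=1/6\neq 0$ and is indeed massive.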
{}This, however, is not the only choice of $({\bf Q}^R,{\bf Q}^L)$ which gives rise to massless states. The appearance of the other massless states requires a careful explanation. Recall that the outer-automorphism $\theta -{\cal P}$ in (\ref{ME1}) does not commute with the Wilson lines (\ref{Wilson}) and the combined action can be viewed as a non-Abelian orbifold corresponding to modding out the original Spin(32)/${\bf Z}_2$ Narain model (which we refer to as $N(1,1)$ model) by the tetrahedral group ${\cal T}$ \cite{KST}. This group is generated by three elements $\Theta, R_1, R_2$, where $\Theta^3=1$, $(R_1)^2=(R_2)^2=1$, and $R_2=\Theta R_1$. In our case, the Wilson lines $V_1$ and $V_2$ in (\ref{Wilson}) correspond to the elements $R_1$ and $R_2$, respectively, whereas the ${\bf Z}_3$ twist in (\ref{ME1}) corresponds to the element $\Theta$. The ${\bf Z}_2$ twist in (\ref{ME1}) commutes with all the elements $\Theta, R_1, R_2$ and will not be important for this discussion. The resulting $E1$ model is therefore a ${\cal T} \times {\bf Z}_2$ orbifold. The spectrum generating formulae are written in the basis in which the Wilson lines (\ref{Wilson}) are diagonal. Therefore, the ${\bf Z}_3$ action $\theta$ in (\ref{ME1}) is represented as a twist but not a shift. However, in determining the ${\bf Q}^L$ charges, we can always go to the basis where the $\theta$ twist is replaced by a shift ${\bf Q}^L_1 \rightarrow {\bf Q}^L_1 + e_1/3$. (The Wilson lines in this basis are not diagonal). Notice that the conformal dimension of the momentum state $h={1\over 2}(e_1/3)^2=1/9$ is the same as that of a ${\bf Z}_3$ twist field. We therefore have ${\bf Q}^R=p_R + (e_1/3,e_1/3,e_1/3)$ and ${\bf Q}^L=p_L + (e_1/3,e_1/3,0)$ where $p_R,p_L \in \tilde{\Gamma}^6$ ($E_6$ weight lattice), and $p_L-p_R \in \Gamma^6$ ($E_6$ root lattice). 
We can add weight vectors $(p_R,p_L)$ to the above $({\bf Q}^R,{\bf Q}^L)$ provided that the lengths $({\bf Q}^R)^2$ and $({\bf Q}^L_2)^2+({\bf Q}^L_3)^2$ are preserved (hence $E_R=E_L=0$) and the spectrum generating formulae are satisfied. It turns out there are four choices: \begin{equation} \begin{array}{ccccc} \chi_{++}: \quad &{\bf Q}^R= & (e_3/3,e_3/3,e_1/3) \quad &{\bf Q}^L= &(e_3/3,e_3/3,0) \\ \chi_{-+}: \quad & &(e_2/3,e_1/3,e_3/3) \quad & &(e_3/3,e_3/3,0) \\ \chi_{+-}: \quad & &(e_3/3,e_1/3,e_2/3) \quad & &(e_2/3,e_2/3,0) \\ \chi_{--}: \quad & &(e_2/3,e_2/3,e_1/3) \quad & &(e_2/3,e_2/3,0) \end{array} \end{equation} For example, the $({\bf Q}^R,{\bf Q}^L)$ charges of $\chi_{++}$ are obtained by taking $p_R=p_L=(-\tilde{e}^1,-\tilde{e}^1,0)$. {}The other twisted sector fields coming from the T6 and T2 sectors can be worked out in a similar way. The $Q$- and $H$-charges for the $E1$ model are summarized in Table \ref{E1charges}. The discrete $D$-charges are given in Table \ref{discrete}. \subsubsection{Superpotential} {}Now we are ready to calculate scattering amplitudes and deduce the non-vanishing terms in the superpotential for the $E1$ model. Here we are only interested in whether a given term is vanishing or not, not in the actual numerical values of these couplings. That is, we are only concerned with the selection rules for the scattering. Since the coefficients $\xi^a ({Q})$ in the supercurrent are completely determined, calculating the actual numerical values of the couplings is rather straightforward, although certain couplings might be tedious to work out. We will return to such a calculation in future publications if it becomes necessary for phenomenological purposes. {}Let us recall our selection rules. First, all the terms in the superpotential must be gauge singlets. This is dictated by gauge invariance ($G$-charge conservation). Next, the $H$-charges must be conserved.
Here one should be careful as the $H$-charges are altered by picture changing. Similarly, the $Q$-charges also must be conserved. These are also altered by picture changing, and Table \ref{E1super} provides the quantum numbers of terms that are relevant when doing the picture changing by inserting the supercurrent $T_F$. Finally, the discrete charges ($D$-charges) coming from the left-moving coset ${\cal C}_L$ (recall that this is a level-$3$ model with reduced rank) must also be conserved. As we will see, $D$-charge conservation is guaranteed by $G$-, $H$-, and $Q$-charge conservation. We will see this explicitly in an $SO(10)$ model in the next section. {}Let us first focus on gauge symmetries which are common for the $E1$ and $E2$ models. Gauge invariance implies that some of the fields cannot enter into the superpotential by themselves. For instance, $D_{+}$ must couple with $D_{-}$ due to the $SU(2)$ symmetry. Similarly, the only fields that are charged under the first $U(1)$ are $\tilde{U}_{\pm}$ and $\tilde{\chi}_{\pm}$. Therefore, they must enter into the superpotential in the invariant combinations $\tilde{U}_{+}\tilde{U}_{-}$, $\tilde{\chi}_{+}\tilde{\chi}_{-}$ and $\tilde{U}_{\pm} \tilde{\chi}_{\mp}^3$. In the same fashion, $U_{i\pm}$ and $\chi_{i\pm}$, $i=+,-$, are bound to form the invariant combinations $U_{i+}U_{j-}$, $\chi_{i+}\chi_{j-}$ and $U_{i\pm}\chi_{j_1 \mp} \chi_{j_2 \mp} \chi_{j_3 \mp}$ as a consequence of invariance under the first $U(1)$. {}Naively, the three-point couplings $\chi_{0} \chi_{i\pm}\chi_{j\mp}$ are allowed by $G$-charge conservation. However, the $Q$-charges are not conserved. Therefore, there is no three-point coupling in the superpotential. Nevertheless, conservation of $Q$- and $H$-charges allows a limited set of four-point couplings: $\chi_{0} \chi_{++}\chi_{--} \Phi$ and $\chi_{0} \chi_{+-} \chi_{-+} \Phi$. 
To see this, let us consider the $H$- and $Q$-charges of $\chi_{0}$, $\chi_{++}$ and $\chi_{--}$ in the $-1$, $-1/2$ and $-1/2$ pictures, respectively. The total $H$-charge is then $(1/3,1/3,1/3)+(-1/6,-1/6,-1/6) +(-1/6,-1/6,-1/6)=(0,0,0)$, whereas the total $Q$-charge is $(-e_1/3,-e_1/3,-e_1/3)+(-e_3/3,-e_3/3,-e_1/3) +(-e_2/3,-e_2/3,-e_1/3)=(0,0,-e_1)$. The adjoint $\Phi$ has $H=(0,0,1)$ and $Q=(0,0,0)$ in the $-1$ picture. Using the supercurrent we have constructed in the previous section, it is easy to see that $\Phi$ in the $0$-picture contains a term with $H=(0,0,0)$ and $Q=(0,0,e_1)$. Hence, the four-point coupling $\chi_{0} \chi_{++}\chi_{--} \Phi$ is allowed. Similarly, one can show that $\chi_{0} \chi_{+-} \chi_{-+} \Phi$ is allowed but other four-point couplings such as $\chi_{0} \chi_{++} \chi_{+-} \Phi$ and $\chi_{0} \chi_{-+} \chi_{--} \Phi$ are forbidden. This analysis has been performed in the $SU(3)^3$ basis, but we could equally well have used, say, the $SU(2)^3 \otimes U(1)^3$ basis for the $E1$ model, and obtained the same result. Notice that $\Phi$ carries $H$-charges in the $-1$ picture and so $\Phi^{n}$ by itself for any integer $n$ is not allowed in the superpotential due to $H$-charge conservation. Hence, the adjoint $\Phi$ is a modulus. However, in the $0$-picture, $\Phi$ contains terms with $H=(0,0,0)$ and $Q=(0,e_{\alpha},0)$ or $(0,0,e_{\alpha})$. Therefore, $\Phi^{3}$ contains terms with no $H$- and $Q$-charges when all the $\Phi$ are in the $0$-picture. An immediate consequence is that if $w$ is an allowable $N$-point coupling ($N \geq 3$) in the superpotential, then $w \Phi^{3n}$ is also allowable. {}Let us now turn to the $D$-charge. As mentioned before, in the branching $E_6 \supset SO(10) \otimes U(1)$, the component fields ({\em i.e.}, different representations of $SO(10)$) of an $E_6$ multiplet carry different $D$-charges. They are summarized in Table \ref{discrete}.
This $D$-charge is a ${\bf Z}_3$ symmetry charge and so it must be conserved modulo $3$. The origin of this $D$-charge is the $S_3$ permutation symmetry of the three $SO(10) \otimes U(1)$ factors in $(E_6)^3 \supset [SO(10) \otimes U(1)]^3$. A priori, $D$-charge conservation imposes an extra constraint on the superpotential. However, it turns out that this $D$-charge conservation is subsumed in the other selection rules. To see this, consider the example of the $\chi_{0} \chi_{++} \chi_{--} \Phi$ coupling. In terms of $SO(10) \otimes U(1)$ representations, \begin{eqnarray} \chi_{0} \chi_{++} \chi_{--} \Phi = & & [ Q_{0}Q_{++}H_{--}+Q_{0}Q_{--}H_{++} + Q_{++}Q_{--}H_{0} \nonumber \\ &+&H_{0}H_{++}S_{--}+H_{0}H_{--}S_{++} + H_{++}H_{--}S_{0} ] (\Phi + \phi) \nonumber \\ &+& [Q_{0}H_{++}S_{--}+Q_{0}H_{--}S_{++}+ Q_{++}H_{0}S_{--} +Q_{++}H_{--}S_{0} \nonumber \\ &+& Q_{--}H_{0}S_{++}+ Q_{--}H_{++}S_{0} + Q_{0}Q_{++}Q_{--} ] Q \nonumber \\ &+& [ Q_{0}H_{++}H_{--} + Q_{++}H_{0}H_{--} + Q_{--}H_{0}H_{++} ] \overline{Q} \end{eqnarray} From Table \ref{discrete}, one can easily check that the $D$-charge is conserved for every term on the right-hand side. The same is true for other terms in the superpotential. {}It is, therefore, useful to organize the fields and the invariant combinations according to their $U(1)^3$ $G$-charges, $Q$-charges and $H$-charges when deducing the selection rules.
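The selection-rule arithmetic of the preceding paragraphs is mechanical enough to automate. A minimal sketch (in a hypothetical root-basis encoding of our own, with the picture assignments as in the text) verifying $Q$- and $H$-charge conservation for the $\chi_{0}\chi_{++}\chi_{--}\Phi$ coupling:

```python
from fractions import Fraction as F

# SU(3) root-basis coefficients: e1 = (1,0), e2 = (0,1), e3 = -(e1+e2).
e1, e2, e3 = (F(1), F(0)), (F(0), F(1)), (F(-1), F(-1))

def scale(c, v):  # multiply a root vector by a rational number
    return tuple(c*x for x in v)

def add(*vs):     # componentwise sum of root vectors
    return tuple(sum(c) for c in zip(*vs))

# Q-charges (triples of SU(3) vectors) in the pictures used in the text:
# chi_0 in the -1 picture, chi_++ and chi_-- in the -1/2 picture, and the
# 0-picture term of Phi carrying Q = (0, 0, e1).
Q_chi0  = [scale(F(-1, 3), e1)]*3
Q_chipp = [scale(F(-1, 3), e3), scale(F(-1, 3), e3), scale(F(-1, 3), e1)]
Q_chimm = [scale(F(-1, 3), e2), scale(F(-1, 3), e2), scale(F(-1, 3), e1)]
Q_Phi   = [(F(0), F(0)), (F(0), F(0)), e1]

total_Q = [add(a, b, c, d)
           for a, b, c, d in zip(Q_chi0, Q_chipp, Q_chimm, Q_Phi)]
assert total_Q == [(0, 0), (0, 0), (0, 0)]   # Q-charge conserved

# H-charges: (1/3,1/3,1/3) + 2 x (-1/6,-1/6,-1/6) + (0,0,0) = (0,0,0)
H_total = [F(1, 3) + 2*F(-1, 6) + 0 for _ in range(3)]
assert H_total == [0, 0, 0]                  # H-charge conserved
```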
Instead of trying to write down the most general form of the superpotential, for illustrative purposes we will give the lowest order non-vanishing couplings here: \begin{eqnarray} W=&\phantom{+}&\lambda_1(\Phi^3,D_{+}D_{-}) \chi_{0} (\chi_{++}\chi_{--} + \chi_{+-}\chi_{-+}) \Phi \nonumber\\ &+& \lambda_2 (\Phi^3,D_{+}D_{-})\tilde{\chi}_{+}\tilde{\chi}_{-} (\chi_{++}\chi_{--} + \chi_{+-}\chi_{-+}) \nonumber\\ &+& \lambda_{3} (\Phi^3,D_{+}D_{-}) U_0 ( U_{++}U_{--} + U_{+-}U_{-+}) \nonumber\\ &+& \lambda_{4} (\Phi^3,D_{+}D_{-}) [ \chi_{++}^3 U_{--} + \chi_{+-}^3 U_{-+} + \chi_{-+}^3 U_{+-} + \chi_{--}^3 U_{++} ] \Phi^2 \nonumber\\ &+& \lambda_{5} (\Phi^3,D_{+}D_{-}) (\chi_{0})^3 U_{0} D_{+}D_{-} \Phi^2 \nonumber\\ &+& \lambda_{6} (\Phi^3,D_{+}D_{-}) [ (\tilde{\chi}_{+})^3 \tilde{U}_{-} + \tilde{\chi}_{-}^3 \tilde{U}_{+} ] D_{+}D_{-} \Phi +...~, \end{eqnarray} where traces over the irreps of the gauge group are implicit. Here, $\lambda_{k}$ are certain polynomials of their respective arguments such that $\lambda_{k} (0)\not=0$, {\em i.e.}, \begin{equation} \lambda_{k}(\Phi^{3},D_{+}D_{-})= \sum_{m,n} \lambda_{kmn} \Phi^{3m} (D_{+}D_{-})^{n} ~. \end{equation} It is clear that all discrete (local) symmetries impose stringent constraints on the couplings. This is an important property of string theory. \subsection{$E2$ Model: Shift Construction} {}After a detailed discussion of the $E1$ model, our treatment of the $E2$ model will be brief. The generating vectors in the shift formalism which produce the $E2$ model are: \begin{eqnarray} &&V_1 =(0~0~(-1/3)^2\vert 0,e_1/3,e_1/3\vert\vert \theta,e_1/3,0 \vert {\cal P} \vert 2/3)~,\nonumber\\ \label{ME2} &&V_2=(0~(-1/2)^2~0\vert e_1/2,0,0\vert\vert 0,e_1/2,e_1/2\vert 0^{15} \vert 0)~.
\end{eqnarray} As in the $E1$ model, we can rewrite the generating vectors as follows: \begin{eqnarray} V_1 &=& (0~0~(-1/3)^2\vert 0,e_1/3,e_1/3\vert\vert (1/3)_t, e_1/3,0 \vert (1/3)^5_t 0_r^5 \vert (2/3)_r)~, \nonumber \\ V_2 &=& (0~(-1/2)^2~0\vert e_1/2,0,0\vert\vert 0,e_1/2,e_1/2 \vert 0^{15} \vert 0)~. \end{eqnarray} The structure constants can then be determined: $k_{00}=k_{10}=k_{02}=0$, $k_{12}=k_{20}=k_{22}=1/2$, and $k_{01}=k_{11}=k_{21}=2/3$. {}The currents $i \partial X^a$ that satisfy the triplet constraint (\ref{triplet}) and the OPEs (\ref{XOPE}) are given by \begin{eqnarray} i \partial X^1 &=& {1\over 2} \left( J_1 + J_2 + J_3 - J_4 \right) ~, \nonumber \\ i \partial X^2 &=& {1\over \sqrt{6}} \left( L_1+L_2+L_3+L_4+L_5+L_6 \right)~,\\ i \partial X^3 &=& {1\over \sqrt{6}} \left( K_1+K_2+K_3-K^{\prime}_1+K^{\prime}_2 + K^{\prime}_3 \right) ~, \nonumber \end{eqnarray} where $J_i=\exp(-i \alpha_i \phi) c(-\alpha_i)$, $L_i=\exp(-i \beta_i \phi) c(-\beta_i)$, $K_i=\exp(-ik_i \phi) c(-k_i)$ and $K^{\prime}_i=\exp(-ik^{\prime}_i \phi) c(-k^{\prime}_i)$. Here, $\alpha_i$, $\beta_i$, $k_i$ and $k^{\prime}_i$ are the roots of $E_6$ defined in Table \ref{E2super}. It is easy to see that $\alpha_1,\dots,\alpha_4$ are simple roots of $SU(2)^4$ whereas $\beta_1,\dots,\beta_5$ and $-\beta_6$ form the simple roots and the highest root of $SU(6)$. The roots of $E_6$ that enter into $i \partial X^3$ are the same as in the $E1$ model because again the ${\bf Z}_2$ action does not act on $\partial X^3$ and $\psi^3$ and so $i \partial X^3$ can be expressed in terms of the root generators of $SU(3)^2$. {}The massless spectrum of the $E2$ model can be obtained in a similar way. The $G$-, $Q$- and $H$-charges for the $E2$ model are given in Table \ref{E2charges}. Notice that the $E1$ and $E2$ models have the same massless spectrum with some of the twisted and untwisted sector fields interchanged.
By examining some of the lowest order non-vanishing couplings, one sees that the $E1$ and $E2$ models have the same tree level superpotential. \section{$SO(10)$ Model} {}We have seen in the previous section that for models which admit a bosonic supercurrent, scattering amplitudes become straightforward to compute. However, there are other three-family grand unified models classified in \cite{kt} which do not admit a basis where the supercurrent can be bosonized. It is, nonetheless, possible to deduce the superpotential in an indirect way. We will use the $T1(1,1)$ and $T2(1,1)$ models of Ref \cite{kt} to illustrate how this can be done. {}Let us begin with the construction of the models. Start from $N(1,1)$, {\em i.e.}, the $N=4$ SUSY Spin(32)/${\bf Z}_2$ model, and add the following Wilson lines: \begin{eqnarray} &&V_1 =(0^4\vert 0~e_1/2~e_1/2 \vert\vert e_1/2~0~0 \vert {\bf s}~{\bf 0}~{\bf 0} \vert {\overline S}) ~,\\ &&V_2 =(0^4\vert 0~e_2/2~e_2/2 \vert\vert e_2/2~0~0 \vert {\bf 0}~{\bf s}~{\bf 0} \vert {\overline S}) ~. \end{eqnarray} The resulting model, which we will refer to as $N2(1,1)$, is an $N=4$ SUSY model with gauge symmetry $SU(3)^2\otimes SO(10)^3\otimes U(1)^3$ provided that we set $k_{10}=k_{20}=0$.\\ $\bullet$ The $T1(1,1)$ model. Start from the $N2(1,1)$ model and perform the same twists $T_3$ and $T_2$ as in the $E1$ model (\ref{E1twist}). This model has $SU(2)_1 \otimes SO(10)_3 \otimes U(1)^4$ gauge symmetry. The massless spectrum of the $T1(1,1)$ model is given in Table \ref{SO(10)spectra}.\\ $\bullet$ The $T2(1,1)$ model. Start from the $N2(1,1)$ Narain model and perform the same twists $T_3$ and $T_2$ as in the $E2$ model (\ref{E2twist}). This model has $SU(2)_1 \otimes SO(10)_3 \otimes U(1)^4$ gauge symmetry. The massless spectrum of the $T2(1,1)$ model is given in Table \ref{SO(10)spectra}. {}The models $T1(1,1)$ and $T2(1,1)$ are possibly a $T$-dual pair just as $E1$ and $E2$ are.
We will compute the superpotentials for these models, and show that they are the same. {}Let us briefly discuss the underlying conformal field theory, or the Kac-Moody algebra ${\cal G}^\prime_R$, for the $T1(1,1)$ and $T2(1,1)$ models. Without going into details, we simply state the results: ${\cal G}^\prime_R=SU(2)_3 \otimes U(1)$ for the $T1(1,1)$ model, and ${\cal G}^\prime_R=SU(2)_3 \otimes SU(2)_1$ for the $T2(1,1)$ model. Thus, in passing from $E1$ and $E2$ to $T1(1,1)$ and $T2(1,1)$, respectively, the first three $SU(2)_1$s in both models are broken down to their diagonal subgroup $SU(2)_3$. The last two $U(1)$s are completely broken. The first $U(1)$ in the $E1$ model and the last $SU(2)_1$ in the $E2$ model are untouched. In the process of this breaking, one encounters additional cosets ${\cal C}_R$, which give rise to certain discrete symmetries. {}Therefore, the $SO(10)$ models $T1(1,1)$ and $T2(1,1)$ do not admit bosonization of the supercurrent because the corresponding right-moving Kac-Moody algebras ${\cal G}^\prime_R$ have reduced rank, less than $6$. Their superpotentials can nonetheless be deduced, because these models are connected by (classically) flat directions to the $E_6$ model. {}Comparing the massless spectrum of the $SO(10)$ model to that of the $E_6$ model, given in Table \ref{E6spectra}, we see that the two spectra are very similar. In particular, the spectrum of the $SO(10)$ model is the same as that of the $E_6$ model with a non-zero vev of the adjoint $\Phi$ of $(E_6)_3$ such that $(E_6)_3$ is broken down to $SO(10)_3 \otimes U(1)$ (the last $U(1)$ in the first and second columns of Table \ref{SO(10)spectra}). Therefore, the ${\bf 27}$ of $(E_6)_3$ branches into ${\bf 16}(-1)+{\bf 10}(+2)+ {\bf 1}(-4)$ of $SO(10)_3 \otimes U(1)$. The adjoint ${\bf 78}$ branches into ${\bf 45}(0)+{\bf 1}(0)+{\bf 16}(+3)+{\overline {\bf 16}}(-3)$.
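As a simple consistency check of these branchings, the dimensions add up and the $U(1)$ generator is traceless in each case: \begin{eqnarray} && 16+10+1=27~, \qquad 16\cdot (-1)+10\cdot (+2)+1\cdot (-4)=0~, \nonumber \\ && 45+1+16+16=78~, \qquad 16\cdot (+3)+16\cdot (-3)=0~. \nonumber \end{eqnarray}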
Note that while the ${\bf 45}(0)$ and ${\bf 1}(0)$ are present in the $SO(10)$ model, the ${\bf 16}(+3)$ and ${\overline {\bf 16}}(-3)$ are missing. This is because the latter have been eaten by the corresponding gauge bosons of $(E_6)_3$ in the super-Higgs mechanism. Thus, in the effective field theory language, the $SO(10)$ model is the same as the $E_6$ model with the adjoint vev turned on. There is a subtlety here. The two models are equivalent only if we correspondingly also turn on the vev of the singlet $\phi$ in the $SO(10)$ model. This can be seen from the fact that once the adjoint of $(E_6)_3$ acquires a vev, there is an effective three-point Yukawa coupling in the $SO(10)$ model which has the form $H_0 Q_+ Q_-$ (here we define $Q_{\pm}$ and $Q^\prime_{\pm}$ in the same way as $\chi_{\pm}$ and $\chi^\prime_{\pm}$). In the $SO(10)$ model without the vev of $\phi$ turned on, there are only four-point couplings $H_0 Q_+ Q_- \phi$ and $H_0 Q_+ Q_- \Phi$. Thus, to get the three-point coupling, we have to turn on the vev of $\phi$ (whereas turning on the vev of $\Phi$ in the $SO(10)$ model would break the $SO(10)$ symmetry further, say, to $SU(5)\otimes U(1)$). From this discussion it is clear how to get the superpotential for the $SO(10)$ model from that of the $E_6$ model. One simply starts from the latter and turns on the vev of the adjoint of $(E_6)_3$. In practice, just to get the selection rules for the $SO(10)$ model, simply replace $\chi$s by $(Q+H+S)$s (and similarly for ${\tilde \chi}$s), and $\Phi$ by $\Phi +\phi$. Let us write down the first few lowest order terms in the superpotential: \begin{eqnarray} W=& & \lambda_1((\Phi + \phi^{\prime} )^3,D_{+}D_{-}) \left[ Q_{0} (Q_{++}H_{--} + Q_{+-}H_{-+} + Q_{-+}H_{+-} + Q_{--}H_{++}) \nonumber \right. \\ &+& H_{0} (Q_{++}Q_{--}+Q_{+-}Q_{-+}) + H_{0} (H_{++}S_{--}+H_{+-}S_{-+}+H_{-+}S_{+-}+H_{--}S_{++}) \nonumber \\ &+&\left.
S_{0} (H_{++}H_{--}+H_{+-}H_{-+}) \right] (\Phi + \phi^{\prime}) \nonumber\\ &+&\lambda_2 ((\Phi + \phi^{\prime} )^3,D_{+}D_{-}) \left[ \tilde{Q}_{+}\tilde{Q}_{-} (Q_{++}Q_{--}+Q_{+-}Q_{-+}) \right. \nonumber \\ &+& \tilde{Q}_{+}\tilde{Q}_{-} ( H_{++}S_{--}+H_{+-}S_{-+} + H_{-+}S_{+-}+H_{--}S_{++} ) \nonumber \\ &+&(\tilde{Q}_{+}\tilde{S}_{-}+\tilde{Q}_{-}\tilde{S}_{+}) (Q_{++}S_{--}+Q_{+-}S_{-+} + Q_{-+}S_{+-} + Q_{--}S_{++}) \nonumber\\ &+&(\tilde{H}_{+} \tilde{S}_{-} +\tilde{H}_{-} \tilde{S}_{+}) (H_{++}S_{--}+H_{+-}S_{-+}+H_{-+}S_{+-}+H_{--}S_{++}) \nonumber \\ &+&(\tilde{H}_{+} \tilde{S}_{-} +\tilde{H}_{-} \tilde{S}_{+}) (Q_{++}Q_{--}+Q_{+-}Q_{-+}) \nonumber \\ &+&(\tilde{Q}_{+}\tilde{H}_{-}+\tilde{Q}_{-}\tilde{H}_{+}) (Q_{++}H_{--} + Q_{+-}H_{-+} + Q_{-+}H_{+-} + Q_{--}H_{++}) \nonumber \\ &+& \left. \tilde{H}_+ \tilde{H}_- (H_{++}H_{--}+H_{+-}H_{-+}) + \tilde{S}_+ \tilde{S}_- (S_{++}S_{--}+S_{+-}S_{-+}) \right] \nonumber\\ &+& \lambda_{3} ((\Phi + \phi^{\prime} )^3,D_{+}D_{-}) U_0 ( U_{++}U_{--} + U_{+-}U_{-+}) \nonumber\\ &+& \lambda_{4} ((\Phi + \phi^{\prime} )^3,D_{+}D_{-}) \left[ (Q_{++}^2 H_{++} + H_{++}^2 S_{++}) U_{--} + (Q_{+-}^2 H_{+-} + H_{+-}^2 S_{+-}) U_{-+} \right. \nonumber \\ &+& \left. (Q_{-+}^2 H_{-+} + H_{-+}^2 S_{-+}) U_{+-} +(Q_{--}^2 H_{--} + H_{--}^2 S_{--}) U_{++} \right] (\Phi + \phi^{\prime} )^2 +...~, \end{eqnarray} where traces over the irreps of the gauge group are implicit and the Clebsch-Gordan coefficients are of order $1$. The field $\phi^{\prime}$ is defined to be \begin{equation} \phi^{\prime}= \phi + \langle \phi \rangle \end{equation} Here, $\lambda_{k}$ are certain polynomials of their respective arguments such that $\lambda_{k} (0)\not=0$, {\em i.e.}, \begin{equation} \lambda_{k}((\Phi + \phi^{\prime} )^3,D_{+}D_{-}) = \sum_{m,n} \lambda_{kmn} (\Phi + \phi^{\prime})^{3m} (D_{+}D_{-})^{n} \end{equation} \section{$SU(6)$ Models} {}In this section we give the construction of two three-family $SU(6)$ models. 
Both of them can be obtained from the $E_6$ models discussed in section VI by adding the following Wilson line: \begin{equation} V_3 =(0^4\vert 0^3\vert\vert 0,{\tilde e}^2,{\tilde e}^2\vert ({1\over 3}{1\over 3} {1\over 3} {1\over 3} {2\over 3})^3 \vert 0)~. \end{equation} Since the Wilson line $V_3$ does not act on the right-movers, the supercurrent is the same as that of the original $E_6$ model.\\ $\bullet$ The $S1$ model. Add the $V_3$ Wilson line to the set (\ref{ME1}) and set $k_{13}=0$ or $k_{13}=1/3$ (both choices give the same model). This model has $SU(3)_1 \otimes SU(6)_3 \otimes U(1)^3$ gauge symmetry. The $Q$- and $H$-charges of the massless spectrum are given in Table \ref{S1charges}.\\ $\bullet$ The $S2$ model. Add the $V_3$ Wilson line to the set (\ref{ME1}) and set $k_{13}=2/3$. This model has $SU(2)_1 \otimes SU(2)_1 \otimes SU(6)_3 \otimes U(1)^3$ gauge symmetry. The $Q$- and $H$-charges of the massless spectrum are given in Table \ref{S2charges}.\\ $\bullet$ The $S3$ model. Add the $V_3$ Wilson line to the set (\ref{ME2}) and set $k_{13}=0$ or $k_{13}=1/3$ (both choices give the same model). This model has $SU(3)_1 \otimes SU(6)_3 \otimes U(1)^3$ gauge symmetry. The $Q$- and $H$-charges of the massless spectrum are given in Table \ref{S3charges}.\\ $\bullet$ The $S4$ model. Add the $V_3$ Wilson line to the set (\ref{ME2}) and set $k_{13}=2/3$. This model has $SU(2)_1 \otimes SU(2)_1 \otimes SU(6)_3 \otimes U(1)^3$ gauge symmetry. The $Q$- and $H$-charges of the massless spectrum are given in Table \ref{S4charges}. {}The models $S1$ and $S3$ have the same tree-level massless spectra. We will show that the interactions are also the same. These two models are possibly $T$-dual to each other. Similarly, $S2$ and $S4$ are a possible $T$-dual pair. \subsection{Superpotential For $SU(6)$ Model $S1=S3$} {}Next we give the superpotential for the $S1=S3$ model.
We will be brief here as the techniques involved in deducing the selection rules should be clear by now after we have given the example of the $E_6$ model. {}The superpotential for the $SU(3)_1 \otimes SU(6)_3 \otimes U(1)^3$ model ($S1=S3$) reads: \begin{eqnarray} W = &\phantom{+}& \lambda_{1} (\Phi,\phi) U_0 ( U_{++}U_{--} +U_{+-}U_{-+} ) + \lambda_{2} (\Phi,\phi) T_0 ( {\tilde T}_{+} U_{-+} +{\tilde T}_{-} U_{++}) \nonumber\\ &+& \lambda_{3} (\Phi,\phi) {\tilde T}_0 T_- U_0 + \lambda_{4} (\Phi,\phi) {\tilde T}_0 T_0 {\tilde U}_- \nonumber\\ &+& \lambda_{5} (\Phi,\phi) \sum_{A,B,C} y_{ABC} [ U_{++} ( F^A_{+}F^B_{+}F^C_{-} + {1 \over 3} F^A_{-}F^B_{-}F^C_{-} ) + U_{-+} ( F^A_{+}F^B_{-}F^C_{-} + {1 \over 3} F^A_{+}F^B_{+}F^C_{+} )] \nonumber\\ &+& [\lambda_{6} (\Phi,\phi) \Phi^2 +\lambda_{7} (\Phi,\phi) \Phi \phi +\lambda_{8} (\Phi,\phi) \phi^2] \nonumber \\ && ~~~~~~\times \sum_A \{ U_{++} [(F^A_{-})^3 + F^A_{-}(F^A_{+})^2] +U_{-+} [(F^A_{+})^3 + F^A_{+}(F^A_{-})^2] \} \nonumber\\ &+& \lambda_{9} (\Phi,\phi) \sum_A [F^A_{+} {\tilde S}^A_{-} + F^A_{-} {\tilde S}^A_{+}] {\tilde S}^A_0 \nonumber\\ &+& [\lambda_{10} (\Phi,\phi) \Phi + \lambda_{11} (\Phi,\phi) \phi ] \sum_{A,B,C} y_{ABC} (F^A_{+} {\tilde S}^B_{-}+F^A_{-} {\tilde S}^B_{+}) {\tilde S}^C_0 \nonumber\\ &+& \lambda_{12} (\Phi,\phi) {\tilde U}_+ T_+ {\tilde T}_0 \sum_{A,B,C} y_{ABC} {\tilde F}^A {\tilde F}^B {\tilde F}^C \nonumber\\ &+& [\lambda_{13} (\Phi,\phi)\Phi + \lambda_{14} (\Phi,\phi)\phi ] {\tilde U}_+ T_+ {\tilde T}_0 \sum_{A} ({\tilde F}^A)^3 \nonumber\\ &+& \lambda_{15}(\Phi,\phi) \sum_{A,B} z_{AB} {\tilde F}^A S^B ( F^A_+ {\tilde S}^B_{-} + F^A_- {\tilde S}^B_{+} +F^B_+ {\tilde S}^A_{-} + F^B_- {\tilde S}^A_{+} ) \nonumber\\ &+& [\lambda_{16}(\Phi,\phi) \Phi + \lambda_{17}(\Phi,\phi)\phi] \sum_{A,B,C} y_{ABC} {\tilde F}^A S^A (F^B_{+} {\tilde S}^C_{-}+ F^B_{-} {\tilde S}^C_{+}) \nonumber\\ &+& [\lambda_{18}(\Phi,\phi) \Phi^2 + \lambda_{19}(\Phi,\phi) \Phi \phi+ \lambda_{20}(\Phi,\phi) \phi^2] 
\sum_{A,B,C} y_{ABC} [F^A_{+} {\tilde S}^A_{-} +F^A_{-} {\tilde S}^A_{+}] {\tilde F}^B S^C \nonumber\\ &+& [\lambda_{21}(\Phi,\phi) \Phi^3 + \lambda_{22}(\Phi,\phi) \Phi^2 \phi+ \lambda_{23}(\Phi,\phi) \Phi \phi^2+\lambda_{24}(\Phi,\phi) \phi^3] \nonumber\\ &&~~~~~~\times \sum_{A}[F^A_{+} {\tilde S}^A_{-}+F^A_{-}{\tilde S}^A_{+}] {\tilde F}^A S^A \nonumber\\ &+& \lambda_{25}(\Phi,\phi) \sum_{A,B,C} y_{ABC} [ (F^{A}_{+})^2+(F^{A}_{-})^2 ] S^B S^C ( U_{++}U_{+-} + U_{-+}U_{--} ) \nonumber\\ &+& \lambda_{26}(\Phi,\phi) \sum_{A,B,C} y_{ABC} F^A_{+}F^A_{-}S^B S^C ( U_{++}U_{--}+U_{+-}U_{-+} ) \nonumber\\ &+& [ \lambda_{27}(\Phi,\phi) \Phi + \lambda_{28}(\Phi,\phi) \phi ] \sum_{A,B} z_{AB} F^A_{+}F^B_{+}S^A S^B ( U_{++}U_{+-} + U_{-+}U_{--} ) \nonumber\\ &+& [ \lambda_{29}(\Phi,\phi) \Phi^2 +\lambda_{30}(\Phi,\phi) \Phi \phi +\lambda_{31}(\Phi,\phi) \phi^2 ] \nonumber \\ &&~~~~~~\times \sum_{A,B,C} y_{ABC} F^A_{+}F^B_{+} (S^C)^2 ( U_{++}U_{+-} + U_{-+}U_{--} ) \nonumber\\ &+& [ \lambda_{32}(\Phi,\phi) \Phi + \lambda_{33}(\Phi,\phi) \phi ] \sum_{A,B} z_{AB} F^A_{+}F^B_{-}S^A S^B ( U_{++}U_{--}+U_{+-}U_{-+} ) \nonumber\\ &+& [ \lambda_{34}(\Phi,\phi) \Phi^2 + \lambda_{35}(\Phi,\phi) \Phi \phi +\lambda_{36}(\Phi,\phi) \phi^2 ] \nonumber\\ &&~~~~~~\times \sum_{A,B,C} y_{ABC} F^A_{+}F^B_{-}(S^C)^2 ( U_{++}U_{--}+U_{+-}U_{-+} ) \nonumber\\ &+&\lambda_{37}(\Phi,\phi) \sum_{A,B,C} y_{ABC} \tilde{S}^A_{+} \tilde{S}^B_{-} (\tilde{F}^C)^2 U_{0} \nonumber\\ &+& [ \lambda_{38}(\Phi,\phi) \Phi + \lambda_{39}(\Phi,\phi) \phi ] \sum_{A,B,C} y_{ABC} \tilde{S}^A_{+} \tilde{S}^A_{-} \tilde{F}^B \tilde{F}^C U_{0} \nonumber\\ &+& [ \lambda_{40}(\Phi,\phi) \Phi^2 + \lambda_{41}(\Phi,\phi) \Phi \phi + \lambda_{42}(\Phi,\phi) \phi^2 ] \sum_{A} \tilde{S}^A_{+} \tilde{S}^A_{-} (\tilde{F}^A)^2 U_{0} \nonumber\\ &+& [ \lambda_{43}(\Phi,\phi) \Phi^2 + \lambda_{44}(\Phi,\phi) \Phi \phi + \lambda_{45}(\Phi,\phi) \phi^2 ] \sum_{A,B} z_{AB} \tilde{S}^A_{+} \tilde{S}^B_{-} \tilde{F}^A \tilde{F}^B U_{0} 
+...~, \end{eqnarray} where $\lambda_{k}$, $k=1,...,45$, are certain polynomials of their respective arguments, which combine into terms of the form $\Phi^{3n-m} \phi^m$, $n,m\in {\bf N}$, such that $\lambda_k (0,\phi)\not\equiv 0$, and $\lambda_k (\Phi,0)\not\equiv 0$, and traces over the irreps of the gauge group are implicit here. The coefficients $y_{ABC}$ and $z_{AB}$ are defined as follows: $y_{ABC}=\epsilon_{ABC}$, and $z_{AB}=1-\delta_{AB}$. \subsection{Superpotential For $SU(6)$ Model $S2=S4$} {}The superpotential for the $SU(2)_1^2 \otimes SU(6)_3 \otimes U(1)^3$ model ($S2=S4$) reads: \begin{eqnarray} W= &\phantom{+}& \lambda_{1} (\Phi,\phi) U_0 ( d_{++}d_{--} +d_{+-}d_{-+}) +\lambda_{2} (\Phi,\phi) (D_+ {\tilde d}_- \Delta_- + D_- {\tilde d}_+ \Delta_+) \nonumber\\ &+&\lambda_{3} (\Phi,\phi) U_0 \Delta_+ \Delta_- \sum_{A,B,C} y_{ABC} F^A F^B F^C \nonumber\\ &+&[\lambda_{4} (\Phi,\phi) \Phi^2 +\lambda_{5} (\Phi,\phi)\Phi \phi + \lambda_{6} (\Phi,\phi) \phi^2 ] U_0 \Delta_+ \Delta_- \sum_{A} (F^A)^3 \nonumber\\ &+& \lambda_{7} (\Phi,\phi) \sum_A F^A [ {\tilde S}^A_{++}{\tilde S}^A_{--} +{\tilde S}^A_{+-}{\tilde S}^A_{-+} ] \nonumber\\ &+& [\lambda_{8} (\Phi,\phi)\Phi + \lambda_{9} (\Phi,\phi)\phi ] \sum_{A,B,C} y_{ABC} F^A ({\tilde S}^B_{++}{\tilde S}^C_{--} +{\tilde S}^B_{-+}{\tilde S}^C_{+-}) \nonumber\\ &+& \lambda_{10} (\Phi,\phi) \sum_{A,B} z_{AB} S^A_{+}S^B_{-} ({\tilde S}^A_{++} {\tilde S}^B_{--}+{\tilde S}^A_{-+} {\tilde S}^B_{+-} +{\tilde S}^A_{+-} {\tilde S}^B_{-+}+{\tilde S}^A_{--} {\tilde S}^B_{++}) \nonumber \\ &+& [\lambda_{11}(\Phi,\phi) \Phi^3 + \lambda_{12}(\Phi,\phi) \Phi^2 \phi+ \lambda_{13}(\Phi,\phi) \Phi \phi^2+\lambda_{14}(\Phi,\phi) \phi^3] \nonumber\\ &&~~~~~~\times \sum_{A} S^A_+ S^A_- ({\tilde S}^A_{++} {\tilde S}^A_{--} + {\tilde S}^A_{+-}{\tilde S}^A_{-+}) \nonumber\\ &+& [\lambda_{15}(\Phi,\phi) \Phi + \lambda_{16}(\Phi,\phi) \phi] \sum_{A,B,C} y_{ABC} S^A_+ S^A_- ({\tilde S}^B_{++}{\tilde S}^C_{--}+{\tilde S}^B_{-+}{\tilde S}^C_{+-}) \nonumber\\ &+&[\lambda_{17}(\Phi,\phi) \Phi^2 + \lambda_{18}(\Phi,\phi) \Phi \phi+ \lambda_{19}(\Phi,\phi) \phi^2 ] \nonumber\\ &&~~~~~~\times \sum_{A,B,C} y_{ABC} S^B_{+}S^C_{-} ({\tilde S}^A_{++}{\tilde S}^A_{--}+{\tilde S}^A_{+-}{\tilde S}^A_{-+}) +...~. \end{eqnarray} The couplings $\lambda_k$ and the coefficients $y_{ABC}$ and $z_{AB}$ are defined in the same way as in the $S1=S3$ model. \section{Remarks} {}Using the bosonic supercurrent formalism, we give a prescription for calculating the correlation functions in the $3$-family grand unification models. We use this approach to determine the quantum numbers of the massless spectra of some of these models. This gives us selection rules for the allowed couplings in the respective superpotentials. Many couplings that are allowed by gauge symmetries are forbidden by stringy discrete symmetries. The explicit values of the couplings are not hard to determine; however, we will leave their determination for the future. Even without their explicit determination, there are still plenty of phenomenological issues one can address. This will also be discussed elsewhere. One question that might need clarification is whether there are additional discrete quantum numbers associated with the coset (${\cal C}_L$) coming from the rank reduction. {}In this paper, we discuss explicitly some $3$-family grand unified string models: the unique $E_6$ model, an $SO(10)$ model and two $SU(6)$ models. There are other $SO(10)$, $SU(5)$ and $SU(6)$ three-family grand unified string models classified in Refs \cite{kt}. Most of these models are connected to the unique $E_6$ model by classically flat moduli. Some $SU(5)$ models are connected to the two $SU(6)$ models. Finally, there are a few $SU(5)$ models and one unique $SO(10)$ model that are isolated from the $E_6$ and two $SU(6)$ models we have described here.
The superpotentials of the models that are connected to the above models via flat moduli can be easily obtained simply by giving the flat moduli appropriate vevs. These vevs can be of the order of the string scale. {}As we have already mentioned, some of these models do not admit bosonization of the supercurrent because their corresponding right-moving Kac-Moody algebras ${\cal G}^\prime_R$ have reduced rank, less than $6$. Thus, deducing the discrete symmetries is complicated by the presence of the right-moving cosets ${\cal C}_R$. There is, however, a way to deduce their superpotentials when they are connected by (classically) flat directions to the above three models. We have illustrated how this can be done with the $SO(10)$ model $T1(1,1)=T2(1,1)$. {}We note that all the couplings (in string units) $\lambda_k$ are of order one for vanishing values of the $\Phi$ and $\phi$ vevs. The latter fields are (classically) flat moduli in these models. In the $E_6$ and the two $SU(6)$ models we have studied in this paper, there are no other completely flat moduli of a geometric origin, but such flat directions are present in other three-family grand unified string models classified in Refs \cite{kt}. Their stabilization can only be achieved via non-perturbative dynamics. \acknowledgments \bigskip {}We would like to thank Damiano Anselmi, Ignatios Antoniadis, Costas Bachas, Alexander Bais, Riccardo Barbieri, Zurab Berezhiani, Michael Bershadsky, Keith Dienes, Gia Dvali, Alon Faraggi, Matthias Gaberdiel, Luis Ib{\'a}{\~n}ez, Andrei Johansen, Elias Kiritsis, Costas Kounnas, Pran Nath, Peter Nilles, Michal Spalinski, Zurab Tavartkiladze, Tom Taylor, Angel Uranga, Cumrun Vafa, Erik Verlinde, Herman Verlinde, Yan Vtorov-Karevsky, Mikhail Vysotsky and Barton Zwiebach for discussions. The research of G.S. and S.-H.H.T. was partially supported by the National Science Foundation. G.S. would also like to thank the Joyce M. Kuok Foundation for financial support. The work of Z.K.
was supported in part by the grant NSF PHY-96-02074, and the DOE 1994 OJI award. Z.K. would like to thank the High Energy Theory Group at Cornell University for their kind hospitality while parts of this work were completed. Z.K. would also like to thank Mr. Albert Yu and Mrs. Ribena Yu for financial support. \begin{table}[t] \begin{tabular}{|c|l|l|l|l|l|} & Field & $SU(3) \otimes E_6 \otimes E_8 \otimes SU(3)^{3}$ & $Q^R$-charges in $SU(3)^3$ & $(H_1,H_2,H_3)_{-1}$ & $(H_1,H_2,H_3)_{-1/2}$ \\ \hline & & & & & \\ & $\chi_{1}$ &$({\bf 3},{\bf 27},{\bf 1},{\bf 1},{\bf 1},{\bf 1})$ & $(0,0,0)$ &$(+1,0,0)$ & $(+{1\over 2},-{1\over 2},-{1\over 2})$ \\ $U$ & $\chi_{2}$ &$({\bf 3},{\bf 27},{\bf 1},{\bf 1},{\bf 1},{\bf 1})$ & $(0,0,0)$ &$(0,+1,0)$ & $(-{1\over 2},+{1\over 2},-{1\over 2})$ \\ & $\chi_{3}$ &$({\bf 3},{\bf 27},{\bf 1},{\bf 1},{\bf 1},{\bf 1})$ & $(0,0,0)$ &$(0,0,+1)$ & $(-{1\over 2},-{1\over 2},+{1\over 2})$ \\ & & & & & \\ \hline & & & & & \\ & $\tilde{\chi}$ & $({\bf 3},\overline{\bf 27},{\bf 1},{\bf 1},{\bf 1},{\bf 1})$ & $(-e_{1}/3,-e_{1}/3,-e_{1}/3)$ & $({1\over 3},{1\over 3},{1\over 3})$ & $(-{1\over 6},-{1\over 6},-{1\over 6})$ \\ & $\chi_{1+}$ & $({\bf 1},{\bf 27},{\bf 1},{\bf 3},{\bf 1},{\bf 1})$ & $(-e_{2}/3,-e_{1}/3,-e_{1}/3)$ & $({1\over 3},{1\over 3},{1\over 3})$ & $(-{1\over 6},-{1\over 6},-{1\over 6})$ \\ & $\chi_{1-}$ & $({\bf 1},{\bf 27},{\bf 1},\overline{\bf 3},{\bf 1},{\bf 1})$ & $(-e_{3}/3,-e_{1}/3,-e_{1}/3)$ & $({1\over 3},{1\over 3},{1\over 3})$ & $(-{1\over 6},-{1\over 6},-{1\over 6})$ \\ $T3$ & $\chi_{2+}$ & $({\bf 1},{\bf 27},{\bf 1},{\bf 1},{\bf 3},{\bf 1})$ & $(-e_{1}/3,-e_{2}/3,-e_{1}/3)$ & $({1\over 3},{1\over 3},{1\over 3})$ & $(-{1\over 6},-{1\over 6},-{1\over 6})$ \\ & $\chi_{2-}$ & $({\bf 1},{\bf 27},{\bf 1},{\bf 1},\overline{\bf 3},{\bf 1})$ & $(-e_{1}/3,-e_{3}/3,-e_{1}/3)$ & $({1\over 3},{1\over 3},{1\over 3})$ & $(-{1\over 6},-{1\over 6},-{1\over 6})$ \\ & $\chi_{3+}$ & $({\bf 1},{\bf 27},{\bf 1},{\bf 1},{\bf 1},{\bf 3})$ & 
$(-e_{1}/3,-e_{1}/3,-e_{2}/3)$ & $({1\over 3},{1\over 3},{1\over 3})$ & $(-{1\over 6},-{1\over 6},-{1\over 6})$ \\ & $\chi_{3-}$ & $({\bf 1},{\bf 27},{\bf 1},{\bf 1},{\bf 1},\overline{\bf 3})$ & $(-e_{1}/3,-e_{1}/3,-e_{3}/3)$ & $({1\over 3},{1\over 3},{1\over 3})$ & $(-{1\over 6},-{1\over 6},-{1\over 6})$ \\ & $T_{{\bf x}{\bf y}{\bf z}}$ & $(\overline{\bf 3},{\bf 1},{\bf 1},{\bf x},{\bf y},{\bf z})$ & $(Q({\bf x}),Q({\bf y}),Q({\bf z}))$ & $({1\over 3},{1\over 3},{1\over 3})$ & $(-{1\over 6},-{1\over 6},-{1\over 6})$ \\ & & & & & \\ \end{tabular} \caption{The $G$-, $Q^R$- and $H$-charges (in the $-1$ and $-1/2$ pictures) of the massless fields in the asymmetric ${\bf Z}_3$ orbifold model. Here, ${\bf x}$, ${\bf y}$ and ${\bf z}$ are irreps of $SU(3)$ such that only one of them is a singlet and the others can be either ${\bf 3}$ or $\overline{\bf 3}$. The charges $Q$ as a function of irrep is given by $Q({\bf 1})=-e_{1}/3$, $Q({\bf 3})=-e_{2}/3$ and $Q(\overline{\bf 3})=-e_{3}/3$.} \end{table} \begin{table}[t] \begin{tabular}{|c|l|l||l|l|} &$E1$ & & $E2$ &\\ M & $SU(2) \otimes E_6 \otimes U(1)^3$ & Field & $SU(2) \otimes E_6 \otimes U(1)^3$ & Field \\ \hline & & & &\\ & $ ({\bf 1},{\bf 78})(0,0,0)_L$ & $\Phi$ & $ ({\bf 1},{\bf 78})(0,0,0)_L$ & $\Phi$ \\ $U$ & $ ({\bf 1},{\bf 1})(0,+6,0)_L$ & $U_{0}$ & $ ({\bf 1},{\bf 1})(0,+6,0)_L$ & $U_{0}$\\ & $ 2 ({\bf 1},{\bf 1})(0,-{3},\pm 3)_L$ & $U_{+ \pm}, U_{- \pm}$ & $({\bf 2}, {\bf 1})(0,0,\pm 3)_L$ & $D_{\pm}$\\ & & & $({\bf 1},{\bf 1})(\pm 3,+{3},0)_L$ & $\tilde{U}_{\pm}$\\ & & & &\\ \hline & & & & \\ $T3$ & $ ({\bf 1},{\bf 27})(0,-{2},0)_L$ & $\chi_{0}$ & $ ({\bf 1},{\bf 27})(0,-{2},0)_L$ & $\chi_{0}$\\ & $2({\bf 1}, {\bf 27})(0,+1,\pm1)_L$ & $\chi_{+ \pm}, \chi_{- \pm}$ & $({\bf 1},{\overline {\bf 27}}) (\pm 1,-1,0)_L$ &$\tilde{\chi}_{\pm}$\\ & & & & \\ \hline & & & & \\ T6 & $ ({\bf 1},{\overline {\bf 27}}) (\pm 1,-1,0)_L$ & $ \tilde{\chi}_{\pm}$ & $2({\bf 1}, {\bf 27})(0,+1,\pm1)_L$ &$\chi_{+ \pm}, \chi_{- \pm}$\\ & 
& & & \\ \hline & & & & \\ $T2$ & $({\bf 2},{\bf 1})(0,0,{\pm 3})_L$ & $D_{\pm}$ & $2 ({\bf 1},{\bf 1})(0,-{3},\pm 3)_L$ & $U_{+ \pm},U_{- \pm}$\\ & $ ({\bf 1},{\bf 1})(\pm {3},+{3},0)_L$ & $\tilde{U}_{\pm}$ & &\\ & & & & \\ \hline & & & & \\ $U(1)$ & $(1/ \sqrt{6}, ~1/{3\sqrt{2}}, ~1/\sqrt{6})$ & & $(1/ \sqrt{6}, ~1/{3\sqrt{2}}, ~1/\sqrt{6})$ &\\ \end{tabular} \caption{The massless spectra of the $T$-dual pair of $E_6$ models $E1$ and $E2$ both with gauge symmetry $SU(2)_1 \otimes (E_6)_3\otimes U(1)^3$. The $U(1)$ normalization radii are given at the bottom of the Table. The gravity, dilaton and gauge supermultiplets are not shown.} \label{E6spectra} \end{table} \begin{table}[t] \begin{tabular}{|c|l|l|l|l|l|} $E1$ & Field & $SU(2) \otimes E_6 \otimes U(1)^{3}$ & $Q^R$-charges in $SU(3)^3$ & $(H_1,H_2,H_3)_{-1}$ & $(H_1,H_2,H_3)_{-1/2}$ \\ \hline & & & & & \\ & $\Phi$ & $({\bf 1},{\bf 78})(0,0,0)_L$ &$(0,0,0)$ & $(0,0,+1)$ &$(-{1\over 2}, -{1\over 2}, +{1\over 2})$ \\ $U$ & $U_0$ & $({\bf 1},{\bf 1})(0,+6,0)_L$ &$(0,0,0)$ & $(0,0,+1)$ &$(-{1\over 2}, -{1\over 2}, +{1\over 2})$ \\ & $U_{+ \pm}$ & $({\bf 1},{\bf 1})(0,-{3},\pm 3)_L$ &$(0,0,0)$ & $(+1,0,0)$ & $(+{1\over 2}, -{1\over 2}, -{1\over 2})$ \\ & $U_{- \pm}$ & $({\bf 1},{\bf 1})(0,-{3},\pm 3)_L$ &$(0,0,0)$ & $(0,+1,0)$ & $(-{1\over 2}, +{1\over 2}, -{1\over 2})$ \\ & & & & & \\ \hline & & & & & \\ & $\chi_0$ & $({\bf 1},{\bf 27})(0,-{2},0)_L$ & $-(e_1/3,e_1/3,e_1/3)$ & $(+{1\over 3}, +{1\over 3},+{1\over 3})$ & $(-{1\over 6}, -{1\over 6},-{1\over 6})$\\ & $\chi_{++}$ & $({\bf 1}, {\bf 27})(0,+1,+1)_L$ & $-(e_3/3,e_3/3,e_1/3)$ & $(+{1\over 3}, +{1\over 3},+{1\over 3})$ & $(-{1\over 6}, -{1\over 6},-{1\over 6})$ \\ $T3$ & $\chi_{-+}$ & $({\bf 1}, {\bf 27})(0,+1,+1)_L$ & $-(e_2/3,e_1/3,e_3/3)$ & $(+{1\over 3}, +{1\over 3},+{1\over 3})$ & $(-{1\over 6}, -{1\over 6},-{1\over 6})$ \\ & $\chi_{+-}$ & $({\bf 1}, {\bf 27})(0,+1,-1)_L$ & $-(e_3/3,e_1/3,e_2/3)$ & $(+{1\over 3}, +{1\over 3},+{1\over 3})$ & $(-{1\over 
6}, -{1\over 6},-{1\over 6})$\\ & $\chi_{--}$ & $({\bf 1}, {\bf 27})(0,+1,-1)_L$ & $-(e_2/3,e_2/3,e_1/3)$ & $(+{1\over 3}, +{1\over 3},+{1\over 3})$ & $(-{1\over 6}, -{1\over 6},-{1\over 6})$\\ & & & & & \\ \hline & & & & & \\ $T6$ & $\tilde{\chi}_{+}$ & $({\bf 1},{\overline {\bf 27}}) (+1,-1,0)_L$ & $(-e_1/6,e_3/3,e_3/3)$ & $(+{1\over 6},+{1\over 6},+{2\over 3})$ &$(-{1\over 3},-{1\over 3},+{1\over 6})$ \\ & $\tilde{\chi}_{-}$ & $({\bf 1},{\overline {\bf 27}}) (-1,-1,0)_L$ & $(-e_1/6,e_2/3,e_2/3)$ & $(+{1\over 6},+{1\over 6},+{2\over 3})$ & $(-{1\over 3},-{1\over 3},+{1\over 6})$\\ & & & & &\\ \hline & & & & &\\ $T2$ & $D_{\pm}$ &$({\bf 2},{\bf 1})(0,0,{\pm 3})_L$ & $(e_1/2,0,0)$ & $(+{1\over 2},+{1\over 2},0)$ & $(0,0,-{1\over 2})$ \\ & $\tilde{U}_{\pm}$ &$ ({\bf 1},{\bf 1})(\pm {3},+{3},0)_L$ & $-(e_1/2,0,0)$ & $(+{1\over 2},+{1\over 2},0)$ & $(0,0,-{1\over 2})$ \\ & & & & &\\ \hline & & & & &\\ $U(1)$ & & $(1/ \sqrt{6}, ~1/{3\sqrt{2}}, ~1/\sqrt{6})$ & &$(1,1,1)$ & $(1,1,1)$ \end{tabular} \caption{The $G$-, $Q^R$- and $H$-charges (in the $-1$ and $-1/2$ pictures) of the massless fields in the $E1$ model. The $U(1)$ normalization radii for the $G$- and $H$-charges are given at the bottom of the Table. 
The $Q^R$-charges (in the $-1$ picture which are the same as in the $-1/2$ picture) are written in $SU(3)^3$ basis.} \label{E1charges} \end{table} \begin{table}[t] \begin{tabular}{|c|l|c|l|} $E1$ & $Q^R$-charges in $SU(3)^3$ basis &co-marks &$(H_1,H_2,H_3)$ \\ \hline & & & \\ & $a_1=(-\tilde{e}^{1},\tilde{e}^{0},-\tilde{e}^{0})$ & $1$ &\\ & $a_2=(-\tilde{e}^{0},\tilde{e}^{1},\tilde{e}^{2})$ & $2$ &\\ & $a_3=(e_2,0,0)$ & $3$ &\\ $i\partial X^{1}$ & $a_4=(-\tilde{e}^{0},\tilde{e}^{0},-\tilde{e}^{1})$ &$2$ &$(-1,0,0)$\\ & $a_5=(-\tilde{e}^{1},-\tilde{e}^{2},\tilde{e}^{2})$ & $1$ &\\ & $a_6=(-\tilde{e}^{0},-\tilde{e}^{2},-\tilde{e}^{0})$ & $2$ &\\ & $a_7=(-\tilde{e}^{1},\tilde{e}^{1},-\tilde{e}^{1})$ & &\\ & $~~~=-a_1-2a_2-3a_3-2a_4-a_5-2a_6$ & $1$ &\\ & & & \\ \hline & & & \\ & $l_1=(\tilde{e}^{0},-\tilde{e}^{0},\tilde{e}^{0})=-a_1-a_2-a_3-a_4-a_6$ &$1$ &\\ & $l_2=(\tilde{e}^{1},\tilde{e}^{2},\tilde{e}^{1})=a_1+2a_2+2a_3+a_4+a_6$ & $2$ & \\ & $l_3=(e_3,0,0)=-a_2-2a_3-a_4-a_6$ & $3$ &\\ $i\partial X^{2}$ & $l_4=(\tilde{e}^{1},-\tilde{e}^{1},\tilde{e}^{0})=a_2+2a_3+2a_4+a_5+a_6$ & $2$ &$(0,-1,0)$ \\ & $l_5=(\tilde{e}^{0},\tilde{e}^{2},-\tilde{e}^{2})=-a_2-a_3-a_4-a_5-a_6$ & $1$ & \\ & $l_6=(\tilde{e}^{1},-\tilde{e}^{0},-\tilde{e}^{2})=-a_1-a_2-a_3-a_4-a_5$ & $2$ &\\ & $l_7=(\tilde{e}^{0},-\tilde{e}^{1},\tilde{e}^{1})= -l_1-2l_2-3l_3-2l_4-l_5-2l_6$ & &\\ & $~~~=a_1+a_2+2a_3+a_4+a_5+a_6$ & $1$ &\\ & & &\\ \hline & & &\\ & $k_1=(0,e_{1},0)=-a_1-a_2-2a_3-2a_4-a_5-a_6$ &$1$ &\\ & $k_2=(0,e_{2},0)=a_1+a_2+a_3+a_4$ &$1$ &\\ & $k_3=(0,e_{3},0)=a_3+a_4+a_5+a_6$ & $1$ &\\ $i\partial X^{3}$ & $k_1^{'}=(0,0,e_1)=a_1+a_2+a_3+a_6$ &$1$ & $(0,0,-1)$ \\ & $k_{2}^{'}=(0,0,e_2)=a_2+a_3+a_4+a_5$ &$1$ &\\ & $k_{3}^{'}=(0,0,e_3)=-a_1-2a_2-2a_3-a_4-a_5-a_6$ &$1$ & \\ & & & \\ \end{tabular} \caption{The terms that enter the expressions for the currents $i\partial X^{a}$ for $E1$ model. 
In the first column they are given in terms of their quantum numbers under the Kac-Moody algebra ${\cal G}_{R}=E_6$ in the $SU(3)^3$ basis. The weight vectors $\tilde{e}^1$, $\tilde{e}^2$ are defined by $\tilde{e}^{i} e_{j} = \delta^{i}_{j}$, and $\tilde{e}^{0}=\tilde{e}^{2}-\tilde{e}^{1}$. The co-marks of the roots are given in the second column. The corresponding $H$-charges carried by the supercurrent are given in the last column.} \label{E1super} \end{table} \begin{table}[t] \begin{tabular}{|c|l|l|l|l|l|} $E2$ & Field & $SU(2) \otimes E_6 \otimes U(1)^{3}$ & $Q^R$-charges in $SU(3)^3$ & $(H_1,H_2,H_3)_{-1}$ & $(H_1,H_2,H_3)_{-1/2}$ \\ \hline & & & & & \\ & $\Phi$ & $ ({\bf 1},{\bf 78})(0,0,0)_L$ &$(0,0,0)$ & $(0,0,+1)$ & $(+{1\over 2}, +{1\over 2}, +{1\over 2})$ \\ $U$ & $U_0$ & $ ({\bf 1},{\bf 1})(0,+6,0)_L$ &$(0,0,0)$ & $(0,0,+1)$ &$(+{1\over 2}, +{1\over 2}, +{1\over 2})$ \\ & $D_{\pm}$ & $({\bf 2}, {\bf 1})(0,0,\pm 3)_L$ &$(0,0,0)$ & $(-1,0,0)$ & $(-{1\over 2}, +{1\over 2}, -{1\over 2})$ \\ & ${\tilde U}_{\pm}$ & $({\bf 1},{\bf 1})(\pm 3,+{3},0)_L$ &$(0,0,0)$ & $(0,-1,0)$ & $(+{1\over 2}, -{1\over 2}, -{1\over 2})$ \\ & & & & & \\ \hline & & & & & \\ & $\chi_0$ & $ ({\bf 1},{\bf 27})(0,-{2},0)_L$ & $ (0,-e_1/3,-e_1/3)$ & $(0, -{2\over 3},+{1\over 3})$ & $(+{1\over 2}, -{1\over 6},-{1\over 6})$\\ $T3$ & ${\tilde \chi}_{+}$ & $({\bf 1},{\overline {\bf 27}}) (+1,-1,0)_L$ & $(0,e_3/3,e_3/3)$ & $(0, -{1\over 3},+{2\over 3})$ & $(+{1\over 2}, +{1\over 6},+{1\over 6})$ \\ & ${\tilde \chi}_{-}$ & $({\bf 1},{\overline {\bf 27}}) (-1,-1,0)_L$ & $(0,e_2/3,e_2/3)$ & $(0, -{1\over 3},+{2\over 3})$ & $(+{1\over 2}, +{1\over 6},+{1\over 6})$ \\ & & & & & \\ \hline & & & & &\\ $T6$ & ${\chi}_{++}$ &$({\bf 1}, {\bf 27})(0,+1,+1)_L$ & $({\tilde e}^2/2,-e_3/3,-e_1/3)$ & $(-{1\over 2},-{1\over 6},+{1\over 3})$ & $(0,+{1\over 3},-{1\over 6})$\\ & ${\chi}_{-+}$ &$({\bf 1}, {\bf 27})(0,+1,+1)_L$ & $(-{\tilde e}^2/2,-e_1/3,-e_3/3)$ & $(-{1\over 2},-{1\over 6},+{1\over 3})$ & 
$(0,+{1\over 3},-{1\over 6})$\\ & ${\chi}_{+-}$ &$({\bf 1}, {\bf 27})(0,+1,-1)_L$ & $({\tilde e}^2/2,-e_1/3,-e_2/3)$ & $(-{1\over 2},-{1\over 6},+{1\over 3})$ & $(0,+{1\over 3},-{1\over 6})$\\ & ${\chi}_{--}$ &$({\bf 1}, {\bf 27})(0,+1,-1)_L$ & $(-{\tilde e}^2/2,-e_2/3,-e_1/3)$ & $(-{1\over 2},-{1\over 6},+{1\over 3})$ & $(0,+{1\over 3},-{1\over 6})$\\ & & & & & \\ \hline & & & & & \\ $T2$ & ${U}_{+ \pm}$ &$({\bf 1},{\bf 1})(0,-{3},\pm 3)_L$ & $(e_1/2,0,0)$ & $(-{1\over 2},-{1\over 2},0)$ & $(0,0,-{1\over 2})$ \\ & $U_{- \pm}$ &$({\bf 1},{\bf 1})(0,-{3},\pm 3)_L$ & $(-e_1/2,0,0)$ & $(-{1\over 2},-{1\over 2},0)$ & $(0,0,-{1\over 2})$ \\ & & & & & \\ \hline & & & & & \\ $U(1)$ & & $(1/ \sqrt{6}, ~1/{3\sqrt{2}}, ~1/\sqrt{6})$ & & $(1,1,1)$ & $(1,1,1)$ \end{tabular} \caption{The $G$-, $Q^R$- and $H$-charges (in the $-1$ and $-1/2$ pictures) of the massless fields in the $E2$ model. The $U(1)$ normalization radii for the $G$- and $H$-charges are given at the bottom of the Table. The $Q^R$-charges (in the $-1$ picture which are the same as in the $-1/2$ picture) are written in $SU(3)^3$ basis.} \label{E2charges} \end{table} \begin{table}[t] \begin{tabular}{|c|l|l|c|} $E2$ & $Q^R$-charges in $SU(3)^3$ basis &co-marks &$(H_1,H_2,H_3)$ \\ \hline & & & \\ & $\alpha_1=-(e_2,0,0)=-a_3$ & $1$ & \\ & $\alpha_2=(-\tilde{e}^1,\tilde{e}^0,-\tilde{e}^0)=a_1$ &$1$ & \\ $i \partial X^1$ & $\alpha_3=(-\tilde{e}^1,-\tilde{e}^2,\tilde{e}^2)=a_5$ & $1$ & $(-1,0,0)$ \\ & $\alpha_4=(-\tilde{e}^1,\tilde{e}^1,-\tilde{e}^1) =-a_1-2a_2-3a_3-2a_4-a_5-2a_6$ & $1$ &\\ & & & \\ \hline & & &\\ & $\beta_1=(\tilde{e}^{0},-\tilde{e}^{1},-\tilde{e}^{2})=-a_2$ & $1$ & \\ & $\beta_2=(-\tilde{e}^{0},\tilde{e}^{0},\tilde{e}^{2}) =a_1+2a_2+2a_3+2a_4+a_5+a_6$ & $1$ & \\ & $\beta_3=(\tilde{e}^{0},-\tilde{e}^{0},\tilde{e}^{1})=-a_4$ & $1$ & \\ $i\partial X^{2}$ & $\beta_4=(-\tilde{e}^{0},-\tilde{e}^{2},-\tilde{e}^{1})=-a_1-a_2-a_3$ & $1$ &$(0,-1,0)$ \\ & 
$\beta_5=(\tilde{e}^{0},\tilde{e}^{2},\tilde{e}^{0})=-a_6$ & $1$ & \\ & $\beta_6=(-\tilde{e}^{0},\tilde{e}^{1},-\tilde{e}^{0})=-a_3-a_4-a_5$ & $1$ & \\ & & &\\ \hline & & &\\ & $k_1=(0,e_{1},0)=-a_1-a_2-2a_3-2a_4-a_5-a_6$ & $1$ &\\ & $k_2=(0,e_{2},0)=a_1+a_2+a_3+a_4$ & $1$ &\\ & $k_3=(0,e_{3},0)=a_3+a_4+a_5+a_6$ & $1$ &\\ $i\partial X^{3}$ & $k_1^{'}=(0,0,e_1)=a_1+a_2+a_3+a_6$ & $1$ & $(0,0,-1)$ \\ & $k_{2}^{'}=(0,0,e_2)=a_2+a_3+a_4+a_5$ & $1$ &\\ & $k_{3}^{'}=(0,0,e_3)=-a_1-2a_2-2a_3-a_4-a_5-a_6$ & $1$ &\\ & & &\\ \end{tabular} \caption{The terms that enter the expressions for the currents $i\partial X^{a}$ for $E2$ model. In the first column they are given in terms of their quantum numbers under the Kac-Moody algebra ${\cal G}_{R}=E_6$ in the $SU(3)^3$ basis. The weight vectors $\tilde{e}^1$, $\tilde{e}^2$ are defined by $\tilde{e}^{i} e_{j} = \delta^{i}_{j}$, and $\tilde{e}^{0}=\tilde{e}^{2}-\tilde{e}^{1}$. The co-marks of the roots are given in the second column. The corresponding $H$-charges carried by the supercurrent are given in the last column.} \label{E2super} \end{table} \begin{table}[t] \begin{tabular}{|c|l|l|l|} ${\cal G}^{\prime}_R$ & $SU(2)^3 \otimes U(1)^3$ & $SU(2)^4 \otimes U(1)^2$ \hspace{2cm} & $SU(3)^3$ \hspace{2cm} \\ \hline & & & \\ & $({\bf 1},{\bf 1},{\bf 1})(+3,0,0)$ & $({\bf 1},{\bf 1},{\bf 1},{\bf 2}_+)(0,0,0)$ &$(e_1/2,0,0)$ \\ &$({\bf 1},{\bf 1},{\bf 1})(0,+12,0)$ &$({\bf 1},{\bf 1},{\bf 1},{\bf 1})(+12,0)$ &$(0,e_1,e_1)$ \\ &$({\bf 1},{\bf 1},{\bf 1})(0,0,+4)$ &$({\bf 1},{\bf 1},{\bf 1},{\bf 1})(0,+4)$ &$(0,-\tilde{e}^2,-\tilde{e}^2)$ \\ &$({\bf 1},{\bf 1},{\bf 2}_+)(0,0,0)$ &$({\bf 1},{\bf 1},{\bf 2}_+,{\bf 1})(0,0,0)$ &$(\tilde{e}^2/2,\tilde{e}^0/2,-\tilde{e}^0/2)$ \\ &$({\bf 1},{\bf 2}_+,{\bf 1})(0,0,0)$ &$({\bf 1},{\bf 2}_+,{\bf 1},{\bf 1})(0,0,0)$ &$(\tilde{e}^2/2,\tilde{e}^1/2,-\tilde{e}^1/2)$ \\ &$({\bf 2}_+,{\bf 1},{\bf 1})(0,0,0)$ &$({\bf 2}_+,{\bf 1},{\bf 1},{\bf 1})(0,0,0)$ &$(\tilde{e}^2/2,-\tilde{e}^2/2,\tilde{e}^2/2)$ \\ 
& & & \\ \hline & & & \\ $U(1)$ &$(1/3 \sqrt{2},1/6,1/2 \sqrt{3})$ & $(1/6,1/2 \sqrt{3})$ & \\ \end{tabular} \caption{The quantum numbers under the Kac-Moody algebra ${\cal G}^{\prime}_R$ in the $SU(3)^3$ basis. For $E1$ model, ${\cal G}^{\prime}_R=SU(2)^3 \otimes U(1)^3$. For $E2$ model, ${\cal G}^{\prime}_R=SU(2)^4 \otimes U(1)^2$. The $U(1)$ normalization radii are given at the bottom of the Table. Here ${\bf 2}_+$ and ${\bf 2}_-$ stand for the upper and lower components of an $SU(2)$ doublet. In the $SU(2)\supset U(1)$ basis we have ${\bf 2}_\pm = (\pm 1)$.} \label{convert} \end{table} \begin{table}[t] \begin{tabular}{|c|l|c|l|c|} Field & $SU(2) \otimes E_6 \otimes U(1)^3$ & Field & $SU(2) \otimes SO(10)\otimes U(1)^4$ & $D$-charge \\ \hline & & & & \\ $\Phi$ & $ ({\bf 1},{\bf 78})(0,0,0)_L$ & $\Phi$ & $ ({\bf 1},{\bf 45})(0,0,0,0)_L$ & $2$ \\ & & $\phi$ & $ ({\bf 1},{\bf 1})(0,0,0,0)_L$ & $2$ \\ & & $Q$ & $({\bf 1},{\bf 16})(0,0,0,3)_L$ & $1$ \\ & & $\overline{Q}$ & $({\bf 1},{\overline{\bf 16}})(0,0,0,-3)_L$ & $0$ \\ & & & & \\ \hline & & & & \\ $\chi_0$ & $ ({\bf 1},{\bf 27})(0,-{2},0)_L$ &$Q_0$ & $ ({\bf 1},{\bf 16})(0,-{2},0,-1)_L$ & $1$ \\ & & $H_0$ & $ ({\bf 1},{\bf 10})(0,-{2},0,+2)_L$ & $0$ \\ & & $S_0$ & $ ({\bf 1},{\bf 1})(0,-{2},0,-4)_L$ & $2$ \\ & & & & \\ \hline & & & & \\ $\chi_{+\pm}$ & $({\bf 1}, {\bf 27})(0,+1,\pm1)_L$ & $Q_{+ \pm}$ & $({\bf 1}, {\bf 16})(0,+1,\pm1,-1)_L$ & $2$ \\ & & $H_{+ \pm}$ & $({\bf 1}, {\bf 10})(0,+1,\pm1,+2)_L$ & $1$ \\ & & $S_{+ \pm}$ & $({\bf 1}, {\bf 1})(0,+1,\pm1,-4)_L$ & $0$ \\ & & & & \\ \hline & & & & \\ $\chi_{-\pm}$ & $({\bf 1}, {\bf 27})(0,+1,\pm1)_L$ & $Q_{- \pm}$ & $({\bf 1}, {\bf 16})(0,+1,\pm1,-1)_L$ & $2$ \\ & & $H_{- \pm}$ & $({\bf 1}, {\bf 10})(0,+1,\pm1,+2)_L$ & $1$ \\ & & $S_{- \pm}$ & $({\bf 1}, {\bf 1})(0,+1,\pm1,-4)_L$ & $0$ \\ & & & & \\ \hline & & & & \\ $\tilde{\chi}_{\pm}$ & $ ({\bf 1},{\overline {\bf 27}}) (\pm 1,-1,0)_L$ & $\tilde{Q}_{\pm}$ & $ ({\bf 1},{\overline {\bf 16}}) (\pm 
1,-1,0,+1)_L$ & $1$ \\ & & $\tilde{H}_{\pm}$ & $ ({\bf 1},{{\bf 10}}) (\pm 1,-1,0,-2)_L$ & $2$ \\ & & $\tilde{S}_{\pm}$ & $ ({\bf 1},{{\bf 1}}) (\pm 1,-1,0,+4)_L$ & $0$ \\ & & & & \\ \end{tabular} \caption{The discrete $D$-charges for the $E1$ model. The second column gives the gauge quantum numbers of the fields in the $E1$ model. The fourth column gives the gauge quantum numbers of the fields in the branching $E_6 \supset SO(10) \otimes U(1)$. The last column gives the discrete $D$-charges. This $D$-charge comes from a ${\bf Z}_3$ symmetry and must be conserved modulo $3$ in the scattering amplitude. Fields in the $E1$ model that have $D=0$ for the entire $E_6$ multiplet are not shown.} \label{discrete} \end{table} \begin{table}[t] \begin{tabular}{|c|l|l||l|l|} &$T1(1,1)$ & &$T2(1,1)$ & \\ M & $SU(2) \otimes SO(10)\otimes U(1)^4$ & Field& $SU(2) \otimes SO(10) \otimes U(1)^4$ & Field \\ \hline & & & & \\ & $ ({\bf 1},{\bf 45})(0,0,0,0)_L$ & $\Phi$ & $ ({\bf 1},{\bf 45})(0,0,0,0)_L$ & $\Phi$\\ & $ ({\bf 1},{\bf 1})(0,0,0,0)_L$ & $\phi$ & $ ({\bf 1},{\bf 1})(0,0,0,0)_L$ & $\phi$\\ $U$ & $2 ({\bf 1},{\bf 1})(0,-{3},\pm 3,0)_L$ & $U_{+\pm},U_{-\pm}$& $ ({\bf 1},{\bf 1})(\pm 3,+{3},0,0)_L$ & ${\tilde U}_\pm$\\ & $ ({\bf 1},{\bf 1})(0,+6,0,0)_L$ & $U_0$ & $ ({\bf 1},{\bf 1})(0,+6,0,0)_L$ & $U_0$\\ & & & $({\bf 2}, {\bf 1})(0,0,\pm 3,0)_L$ &$D_{\pm}$\\ & & & & \\ \hline & & & & \\ & $ ({\bf 1},{\bf 16})(0,-{2},0,-1)_L$ & $Q_0$ & $ ({\bf 1},{\bf 16})(0,-{2},0, -1)_L$ &$Q_0$\\ & $ ({\bf 1},{\bf 10})(0,-{2},0,+2)_L$ & $H_0$ & $ ({\bf 1},{\bf 10})(0,-{2},0,+2)_L$ & $H_0$\\ & $ ({\bf 1},{\bf 1})(0,-{2},0,-4)_L$ & $S_0$ & $ ({\bf 1},{\bf 1})(0,-{2},0,-4)_L$ &$S_0$ \\ $T3$ & $2({\bf 1}, {\bf 16})(0,+1,\pm1,-1)_L$ & $Q_{+\pm},Q_{-\pm}$ & $ ({\bf 1},{\overline {\bf 16}}) (\pm 1,-1,0,+1)_L$ &${\tilde Q}_\pm$\\ & $2({\bf 1}, {\bf 10})(0,+1,\pm1,+2)_L$ & $H_{+\pm},H_{-\pm}$& $ ({\bf 1},{ {\bf 10}}) (\pm 1,-1,0,-2)_L$ &${\tilde H}_\pm$\\ & $2({\bf 1}, {\bf 1})(0,+1,\pm1,-4)_L$ &
$S_{+\pm},S_{-\pm}$& $ ({\bf 1},{ {\bf 1}}) (\pm 1,-1,0,+4)_L$ &${\tilde S}_\pm$\\ & & & & \\ \hline & & & &\\ $T6$ & $ ({\bf 1},{\overline {\bf 16}}) (\pm 1,-1,0,+1)_L$ & ${\tilde Q}_\pm$& $2({\bf 1}, {\bf 16})(0,+1,\pm1,-1)_L$ & $Q_{+\pm},Q_{-\pm}$\\ & $ ({\bf 1},{{\bf 10}}) (\pm 1,-1,0,-2)_L$ &${\tilde H}_\pm$ & $2({\bf 1}, {\bf 10})(0,+1,\pm1,+2)_L$ & $H_{+\pm},H_{-\pm}$\\ & $ ({\bf 1},{{\bf 1}}) (\pm 1,-1,0,+4)_L$ &${\tilde S}_\pm$& $2({\bf 1}, {\bf 1})(0,+1,\pm1,-4)_L$ & $S_{+\pm},S_{-\pm}$ \\ & & & &\\ \hline & & & & \\ $T2$ & $({\bf 2},{\bf 1})(0,0,{\pm 3},0)_L$ & $D_\pm$ & $2 ({\bf 1},{\bf 1})(0,-{3},\pm 3,0)_L$ & $U_{+\pm},U_{-\pm}$ \\ & $ ({\bf 1},{\bf 1})(\pm {3},+{3},0,0)_L$ & ${\tilde U}_\pm$& &\\ & & & &\\ \hline & & &&\\ $U(1)$ & $(1/ \sqrt{6}, ~1/3\sqrt{2}, ~1/\sqrt{6},~1/6)$ & &$(1/ \sqrt{6}, ~1/{3\sqrt{2}}, ~1/\sqrt{6},~1/6)$ &\\ \end{tabular} \caption{The massless spectra of the two $SO(10)$ models $T1(1,1)$ and $T2(1,1)$, both with gauge symmetry $SU(2)_1 \otimes SO(10)_3\otimes U(1)^4$. The $U(1)$ normalization radii are given at the bottom of the Table.
The gravity, dilaton and gauge supermultiplets are not shown.} \label{SO(10)spectra} \end{table} \begin{table}[t] \begin{tabular}{|c|l|l|l|l|l|} $S1$ & Field & $SU(3) \otimes SU(6) \otimes U(1)^3$ & $Q^R$-charges in $SU(3)^3$ & $(H_1,H_2,H_3)_{-1}$ & $(H_1,H_2,H_3)_{-1/2}$ \\ \hline & & & & &\\ & $\Phi$ & $({\bf 1},{\bf 35})(0,0,0)_L$ &$(0,0,0)$ & $(0,0,+1)$ & $(-{1\over 2}, -{1\over 2}, +{1\over 2})$ \\ & $\phi$ & $({\bf 1},{\bf 1})(0,0,0)_L$ &$(0,0,0)$ & $(0,0,+1)$ & $(-{1\over 2}, -{1\over 2}, +{1\over 2})$ \\ & $U_0$ & $ ({\bf 1},{\bf 1}) (+6,0,0)_L$ &$(0,0,0)$ & $(0,0,+1)$ & $(-{1\over 2}, -{1\over 2}, +{1\over 2})$ \\ $U$ & $T_0$ & $({\bf 3},{\bf 1})(0,0,-4)_L$ &$(0,0,0)$ & $(0,0,+1)$ & $(-{1\over 2}, -{1\over 2}, +{1\over 2})$ \\ & $U_{+ \pm}$ & $({\bf 1},{\bf 1})(-3, {\pm 3},{\pm 3})_L$ & $(0,0,0)$ & $(+1,0,0)$ & $(+{1\over 2}, -{1\over 2}, -{1\over 2})$ \\ & ${\tilde T}_{+}$ & $({\overline {\bf 3}},{\bf 1})(+3,-3,+1)_L$ &$(0,0,0)$ & $(+1,0,0)$ & $(+{1\over 2}, -{1\over 2}, -{1\over 2})$ \\ & $U_{- \pm}$ &$({\bf 1},{\bf 1})(-3, {\pm 3},{\pm 3})_L$ &$(0,0,0)$ & $(0,+1,0)$ & $(-{1\over 2}, +{1\over 2}, -{1\over 2})$ \\ & ${\tilde T}_{-}$ & $({\overline {\bf 3}},{\bf 1})(+3,-3,+1)_L$ &$(0,0,0)$ & $(0,+1,0)$ & $(-{1\over 2}, +{1\over 2}, -{1\over 2})$ \\ & & & & &\\ \hline & & & & &\\ & ${\tilde S}^1_0$ & $({\bf 1},{\overline {\bf 6}})(+1,0,+2)_L$ & $ -{1\over 3}(e_1,e_1,e_1)$ & $(+{1\over 3}, +{1\over 3},+{1\over 3})$ & $(-{1\over 6}, -{1\over 6},-{1\over 6})$\\ & ${\tilde S}^2_0$ & $({\bf 1},{\overline {\bf 6}})(+1,0,+2)_L$ & $ -{1\over 3}(e_1,e_2,e_2)$ & $(+{1\over 3}, +{1\over 3},+{1\over 3})$ & $(-{1\over 6}, -{1\over 6},-{1\over 6})$\\ & ${\tilde S}^3_0$ & $({\bf 1},{\overline {\bf 6}})(+1,0,+2)_L$ & $ -{1\over 3}(e_1,e_3,e_3)$ & $(+{1\over 3}, +{1\over 3},+{1\over 3})$ & $(-{1\over 6}, -{1\over 6},-{1\over 6})$\\ & $F^1_\pm$ & $({\bf 1},{\bf 15})(+1,-1,-{1})_L$ & $-{1\over 3}(e_3,e_2,e_3),-{1\over 3}(e_2,e_3,e_2)$ & $(+{1\over 3}, +{1\over 3},+{1\over 
3})$ & $(-{1\over 6}, -{1\over 6},-{1\over 6})$\\ $T3$ & ${\tilde S}^1_\pm$ & $({\bf 1},{\overline {\bf 6}})(-2,+1,-{1})_L$ & $-{1\over 3}(e_3,e_2,e_3),-{1\over 3}(e_2,e_3,e_2)$ & $(+{1\over 3}, +{1\over 3},+{1\over 3})$ & $(-{1\over 6}, -{1\over 6},-{1\over 6})$\\ & $F^2_\pm$ & $({\bf 1},{\bf 15})(+1,-1,-{1})_L$ & $-{1\over 3}(e_3,e_3,e_1),-{1\over 3}(e_2,e_1,e_3)$ & $(+{1\over 3}, +{1\over 3},+{1\over 3})$ & $(-{1\over 6}, -{1\over 6},-{1\over 6})$\\ & ${\tilde S}^2_\pm$ & $({\bf 1},{\overline {\bf 6}})(-2,+1,-{1})_L$ & $-{1\over 3}(e_3,e_3,e_1),-{1\over 3}(e_2,e_1,e_3)$ & $(+{1\over 3}, +{1\over 3},+{1\over 3})$ & $(-{1\over 6}, -{1\over 6},-{1\over 6})$\\ & $F^3_\pm$ & $({\bf 1},{\bf 15})(+1,-1,-{1})_L$ & $-{1\over 3}(e_3,e_1,e_2),-{1\over 3}(e_2,e_2,e_1)$ & $(+{1\over 3}, +{1\over 3},+{1\over 3})$ & $(-{1\over 6}, -{1\over 6},-{1\over 6})$\\ & ${\tilde S}^3_\pm$ & $({\bf 1},{\overline {\bf 6}})(-2,+1,-{1})_L$ & $-{1\over 3}(e_3,e_1,e_2),-{1\over 3}(e_2,e_2,e_1)$ & $(+{1\over 3}, +{1\over 3},+{1\over 3})$ & $(-{1\over 6}, -{1\over 6},-{1\over 6})$\\ & & & & &\\ \hline & & & & &\\ & $S^1$ & $({\bf 1},{\bf 6})(+2,+1,+1)_L$ & $(-e_1/6,e_1/3,e_1/3)$ & $(+{1\over 6},+{1\over 6},+{2\over 3})$ & $(-{1\over 3},-{1\over 3},+{1\over 6})$\\ & ${\tilde F}^1$ & $({\bf 1},{\overline {\bf 15}}) ( -1,-{1},+{1})_L$ & $(-e_1/6,e_1/3,e_1/3)$ & $(+{1\over 6},+{1\over 6},+{2\over 3})$ & $(-{1\over 3},-{1\over 3},+{1\over 6})$\\ & $S^2$ & $({\bf 1},{\bf 6})(+2,+1,+1)_L$ & $(-e_1/6,e_2/3,e_2/3)$ & $(+{1\over 6},+{1\over 6},+{2\over 3})$ & $(-{1\over 3},-{1\over 3},+{1\over 6})$\\ $T6$ & ${\tilde F}^2$ & $({\bf 1},{\overline {\bf 15}}) ( -1,-{1},+{1})_L$ & $(-e_1/6,e_2/3,e_2/3)$ & $(+{1\over 6},+{1\over 6},+{2\over 3})$ & $(-{1\over 3},-{1\over 3},+{1\over 6})$\\ & $S^3$ & $({\bf 1},{\bf 6})(+2,+1,+1)_L$ & $(-e_1/6,e_3/3,e_3/3)$ & $(+{1\over 6},+{1\over 6},+{2\over 3})$ & $(-{1\over 3},-{1\over 3},+{1\over 6})$\\ & ${\tilde F}^3$ & $({\bf 1},{\overline {\bf 15}}) ( -1,-{1},+{1})_L$ & 
$(-e_1/6,e_3/3,e_3/3)$ & $(+{1\over 6},+{1\over 6},+{2\over 3})$ & $(-{1\over 3},-{1\over 3},+{1\over 6})$\\ & & & & &\\ \hline & & & & &\\ & $T_{+}$ & $({\bf 3},{\bf 1})(+3,-3,-1)_L$ & $(e_1/2,0,0)$ & $(+{1\over 2},+{1\over 2},0)$ & $(0,0,-{1\over 2})$ \\ $T2$ & ${\tilde T}_0$ & $({\overline {\bf 3}},{\bf 1})(-3,+3,+1)_L$ & $(e_1/2,0,0)$ & $(+{1\over 2},+{1\over 2},0)$ & $(0,0,-{1\over 2})$ \\ & $T_{-}$ & $({\bf 3},{\bf 1})(-3,-3,-1)_L$ & $(-e_1/2,0,0)$ & $(+{1\over 2},+{1\over 2},0)$ & $(0,0,-{1\over 2})$ \\ & $\tilde{U}_{\pm}$ &$ ({\bf 1},{\bf 1})(+3,\pm 3,\mp 3)_L$ & $(-e_1/2,0,0)$ & $(+{1\over 2},+{1\over 2},0)$ & $(0,0,-{1\over 2})$ \\ & & & & &\\ \hline & & & & &\\ $U(1)$ & & $({1\over{3\sqrt{2}}},~{1\over{2\sqrt{3}}},~{1\over {2\sqrt{3}}})$ & & $(1,1,1)$ & $(1,1,1)$ \end{tabular} \caption{The $G$-, $Q^R$- and $H$-charges (in the $-1$ and $-1/2$ pictures) for the massless fields of the $S1$ model. The $U(1)$ normalization radii for the $G$- and $H$-charges are given at the bottom of the Table.} \label{S1charges} \end{table} \begin{table}[t] \begin{tabular}{|c|l|l|l|l|l|} $S2$ & Field & $SU(2)^{2} \otimes SU(6) \otimes U(1)^3$ & $Q^R$-charges in $SU(3)^3$ & $(H_1,H_2,H_3)_{-1}$ & $(H_1,H_2,H_3)_{-1/2}$ \\ \hline & & & & &\\ & $\Phi$ & $ ({\bf 1},{\bf 1},{\bf 35})(0,0,0)_L$ &$(0,0,0)$ & $(0,0,+1)$ & $(-{1\over 2}, -{1\over 2}, +{1\over 2})$ \\ & $\phi$ & $ ({\bf 1},{\bf 1},{\bf 1})(0,0,0)_L$ &$(0,0,0)$ & $(0,0,+1)$ & $(-{1\over 2}, -{1\over 2}, +{1\over 2})$ \\ $U$ & $U_0$ & $ ({\bf 1},{\bf 1},{\bf 1}) (0,0,-6)_L$ &$(0,0,0)$ & $(0,0,+1)$ & $(-{1\over 2}, -{1\over 2}, +{1\over 2})$ \\ & $D_{\pm}$ & $ ({\bf 2},{\bf 1}, {\bf 1})({\pm 2},0,+3)_L$ &$(0,0,0)$ & $(0,0,+1)$ & $(-{1\over 2}, -{1\over 2}, +{1\over 2})$ \\ & $d_{+ \pm}$ & $({\bf 1},{\bf 2},{\bf 1})({\pm 1},{\mp 3},+3)_L$ &$(0,0,0)$ & $(+1,0,0)$ & $(+{1\over 2}, -{1\over 2}, -{1\over 2})$ \\ & $d_{- \pm}$ & $({\bf 1},{\bf 2},{\bf 1})({\pm 1},{\mp 3},+3)_L$ & $(0,0,0)$ & $(0,+1,0)$ & $(-{1\over 2}, 
+{1\over 2}, -{1\over 2})$ \\ & & & & &\\ \hline & & & & &\\ & ${F}^1$ & $({\bf 1},{\bf 1},{\bf 15})(0,0,+{2})_L$ & $-(e_1/3,e_1/3,e_1/3)$ & $(+{1\over 3}, +{1\over 3},+{1\over 3})$ & $(-{1\over 6}, -{1\over 6},-{1\over 6})$\\ & ${F}^2$ & $({\bf 1},{\bf 1},{\bf 15})(0,0,+{2})_L$ & $-(e_1/3,e_2/3,e_2/3)$ & $(+{1\over 3}, +{1\over 3},+{1\over 3})$ & $(-{1\over 6}, -{1\over 6},-{1\over 6})$\\ & ${F}^3$ & $({\bf 1},{\bf 1},{\bf 15})(0,0,+{2})_L$ & $-(e_1/3,e_3/3,e_3/3)$ & $(+{1\over 3}, +{1\over 3},+{1\over 3})$ & $(-{1\over 6}, -{1\over 6},-{1\over 6})$\\ & $\tilde{S}^1_{+ \pm}$ & $({\bf 1},{\bf 1},{\overline {\bf 6}})(\pm 1,\mp {1},-1)_L$ & $-(e_3/3,e_2/3,e_3/3)$ & $(+{1\over 3}, +{1\over 3},+{1\over 3})$ & $(-{1\over 6}, -{1\over 6},-{1\over 6})$\\ $T3$ & $\tilde{S}^1_{- \pm}$ & $({\bf 1},{\bf 1},{\overline {\bf 6}})(\pm 1,\mp {1},-1)_L$ & $-(e_2/3,e_3/3,e_2/3)$ & $(+{1\over 3}, +{1\over 3},+{1\over 3})$ & $(-{1\over 6}, -{1\over 6},-{1\over 6})$\\ & $\tilde{S}^2_{+ \pm}$ & $({\bf 1},{\bf 1},{\overline {\bf 6}})(\pm 1,\mp {1},-1)_L$ & $-(e_3/3,e_3/3,e_1/3)$ & $(+{1\over 3}, +{1\over 3},+{1\over 3})$ & $(-{1\over 6}, -{1\over 6},-{1\over 6})$\\ & $\tilde{S}^2_{- \pm}$ & $({\bf 1},{\bf 1},{\overline {\bf 6}})(\pm 1,\mp {1},-1)_L$ & $-(e_2/3,e_1/3,e_3/3)$ & $(+{1\over 3}, +{1\over 3},+{1\over 3})$ & $(-{1\over 6}, -{1\over 6},-{1\over 6})$\\ & $\tilde{S}^3_{+ \pm}$ & $({\bf 1},{\bf 1},{\overline {\bf 6}})(\pm 1,\mp {1},-1)_L$ & $-(e_3/3,e_1/3,e_2/3)$ & $(+{1\over 3}, +{1\over 3},+{1\over 3})$ & $(-{1\over 6}, -{1\over 6},-{1\over 6})$\\ & $\tilde{S}^3_{- \pm}$ & $({\bf 1},{\bf 1},{\overline {\bf 6}})(\pm 1,\mp {1},-1)_L$ & $-(e_2/3,e_2/3,e_1/3)$ & $(+{1\over 3}, +{1\over 3},+{1\over 3})$ & $(-{1\over 6}, -{1\over 6},-{1\over 6})$\\ & & & & &\\ \hline & & & & &\\ & $S^1_{\pm}$ & $({\bf 1},{\bf 1}, {\bf 6})(\pm 1,\pm {1},+1)_L$ & $(-e_1/6,e_1/3,e_1/3)$ & $(+{1\over 6},+{1\over 6},+{2\over 3})$ & $(-{1\over 3},-{1\over 3},+{1\over 6})$\\ $T6$ & $S^2_{\pm}$ & $({\bf 
1},{\bf 1}, {\bf 6})(\pm 1,\pm {1},+1)_L$ & $(-e_1/6,e_2/3,e_2/3)$ & $(+{1\over 6},+{1\over 6},+{2\over 3})$ & $(-{1\over 3},-{1\over 3},+{1\over 6})$\\ & $S^3_{\pm}$ & $({\bf 1},{\bf 1}, {\bf 6})(\pm 1,\pm {1},+1)_L$ & $(-e_1/6,e_3/3,e_3/3)$ & $(+{1\over 6},+{1\over 6},+{2\over 3})$ & $(-{1\over 3},-{1\over 3},+{1\over 6})$\\ & & & & &\\ \hline & & & & &\\ $T2$ & $\Delta_{\pm}$ & $({\bf 2},{\bf 2},{\bf 1})(\pm 1,\mp 3,0)_L$ & $(e_1/2,0,0)$ & $(+{1\over 2},+{1\over 2},0)$ & $(0,0,-{1\over 2})$ \\ & ${\tilde d}_{\pm}$ & $({\bf 1},{\bf 2},{\bf 1})(\pm 1,\pm 3, -3)_L$ & $(-e_1/2,0,0)$ & $(+{1\over 2},+{1\over 2},0)$ & $(0,0,-{1\over 2})$ \\ & & & & &\\ \hline & & & & &\\ $U(1)$ & & $({1\over 2},~{1\over {2\sqrt{3}}},~{1\over {3\sqrt{2}}})$ & & $(1,1,1)$ & $(1,1,1)$ \end{tabular} \caption{The $G$-, $Q^R$- and $H$-charges (in the $-1$ and $-1/2$ pictures) for the massless fields of the $S2$ model. The $U(1)$ normalization radii for the $G$- and $H$-charges are given at the bottom of the Table.} \label{S2charges} \end{table} \begin{table}[t] \begin{tabular}{|c|l|l|l|l|l|} $S3$ & Field & $SU(3) \otimes SU(6) \otimes U(1)^3$ & $Q^R$-charges in $SU(3)^3$ & $(H_1,H_2,H_3)_{-1}$ & $(H_1,H_2,H_3)_{-1/2}$ \\ \hline & & & & &\\ & $\Phi$ & $({\bf 1},{\bf 35})(0,0,0)_L$ & $(0,0,0)$ & $(0,0,+1)$ & $(+{1\over 2}, +{1\over 2}, +{1\over 2})$ \\ & $\phi$ & $({\bf 1},{\bf 1})(0,0,0)_L$ & $(0,0,0)$ & $(0,0,+1)$ & $(+{1\over 2}, +{1\over 2}, +{1\over 2})$ \\ & $U_0$ & $ ({\bf 1},{\bf 1}) (+6,0,0)_L$ & $(0,0,0)$ & $(0,0,+1)$ & $(+{1\over 2}, +{1\over 2}, +{1\over 2})$ \\ $U$ & $T_0$ & $({\bf 3},{\bf 1})(0,0,-4)_L$ & $(0,0,0)$ & $(0,0,+1)$ & $(+{1\over 2}, +{1\over 2}, +{1\over 2})$ \\ & $T_+$ & $({\bf 3},{\bf 1})(+3,-3,-1)_L$ & $(0,0,0)$ & $(-1,0,0)$ & $(-{1\over 2}, +{1\over 2}, -{1\over 2})$ \\ & ${\tilde T}_0$ & $({\overline {\bf 3}},{\bf 1})(-3,+3,+1)_L$ & $(0,0,0)$ & $(-1,0,0)$ & $(-{1\over 2}, +{1\over 2}, -{1\over 2})$ \\ & $T_-$ & $({\bf 3},{\bf 1})(-3,-3,-1)_L$ &$(0,0,0)$ & 
$(0,-1,0)$ & $(+{1\over 2}, -{1\over 2}, -{1\over 2})$ \\ & ${\tilde U}_{\pm}$ & $ ({\bf 1},{\bf 1})(+3,\pm 3,\mp 3)_L$ &$(0,0,0)$ & $(0,-1,0)$ & $(+{1\over 2}, -{1\over 2}, -{1\over 2})$ \\ & & & & &\\ \hline & & & & &\\ & ${\tilde S}^1_0$ &$({\bf 1},{\overline {\bf 6}})(+1,0,+2)_L$ & $ (0,-e_1/3,-e_1/3)$ & $(0, -{2\over 3},+{1\over 3})$ & $(+{1\over 2}, -{1\over 6},-{1\over 6})$\\ & ${\tilde S}^2_0$ &$({\bf 1},{\overline {\bf 6}})(+1,0,+2)_L$ & $ (0,-e_2/3,-e_2/3)$ & $(0, -{2\over 3},+{1\over 3})$ & $(+{1\over 2}, -{1\over 6},-{1\over 6})$\\ & ${\tilde S}^3_0$ &$({\bf 1},{\overline {\bf 6}})(+1,0,+2)_L$ & $ (0,-e_3/3,-e_3/3)$ & $(0, -{2\over 3},+{1\over 3})$ & $(+{1\over 2}, -{1\over 6},-{1\over 6})$\\ & ${\tilde F}^1$ &$({\bf 1},{\overline {\bf 15}}) ( -1,-{1},+{1})_L$ & $ (0,e_1/3,e_1/3)$ & $(0, -{1\over 3},+{2\over 3})$ & $(+{1\over 2}, +{1\over 6},+{1\over 6})$\\ $T3$ & $S^1$ &$({\bf 1},{\bf 6})(+2,+1,+1)_L$ &$ (0,e_1/3,e_1/3)$ & $(0, -{1\over 3},+{2\over 3})$ & $(+{1\over 2}, +{1\over 6},+{1\over 6})$\\ & ${\tilde F}^2$ &$({\bf 1},{\overline {\bf 15}}) ( -1,-{1},+{1})_L$ & $ (0,e_2/3,e_2/3)$ & $(0, -{1\over 3},+{2\over 3})$ & $(+{1\over 2}, +{1\over 6},+{1\over 6})$\\ & $S^2$ &$({\bf 1},{\bf 6})(+2,+1,+1)_L$ & $ (0,e_2/3,e_2/3)$ & $(0, -{1\over 3},+{2\over 3})$ & $(+{1\over 2}, +{1\over 6},+{1\over 6})$\\ & ${\tilde F}^3$ &$({\bf 1},{\overline {\bf 15}}) ( -1,-{1},+{1})_L$ & $ (0,e_3/3,e_3/3)$ & $(0, -{1\over 3},+{2\over 3})$ & $(+{1\over 2}, +{1\over 6},+{1\over 6})$\\ & $S^3$ &$({\bf 1},{\bf 6})(+2,+1,+1)_L$ & $ (0,e_3/3,e_3/3)$ & $(0, -{1\over 3},+{2\over 3})$ & $(+{1\over 2}, +{1\over 6},+{1\over 6})$\\ & & & & &\\ \hline & & & & &\\ & ${\tilde S}^1_{\pm}$ & $({\bf 1},{\overline {\bf 6}})(-2,+1,-{1})_L$ & $({\tilde{e}^2 \over 2},-{e_2 \over 3},-{e_3 \over 3}), -({\tilde{e}^2 \over 2},{e_3 \over 3},{e_2 \over 3})$ & $(-{1\over 2},-{1\over 6},+{1\over 3})$ & $(0,+{1\over 3},-{1\over 6})$\\ & $F^1_{\pm}$ &$({\bf 1},{\bf 15})(+1,-1,-{1})_L$ & $({\tilde{e}^2 
\over 2},-{e_2 \over 3},-{e_3 \over 3}), -({\tilde{e}^2 \over 2},{e_3 \over 3},{e_2 \over 3})$ & $(-{1\over 2},-{1\over 6},+{1\over 3})$ & $(0,+{1\over 3},-{1\over 6})$\\ $T6$ & ${\tilde S}^2_{\pm}$ & $({\bf 1},{\overline {\bf 6}})(-2,+1,-{1})_L$ & $({\tilde{e}^2 \over 2},-{e_3 \over 3},-{e_1 \over 3}), -({\tilde{e}^2 \over 2},{e_1 \over 3},{e_3 \over 3})$ & $(-{1\over 2},-{1\over 6},+{1\over 3})$ & $(0,+{1\over 3},-{1\over 6})$\\ & $F^2_{\pm}$ &$({\bf 1},{\bf 15})(+1,-1,-{1})_L$ & $({\tilde{e}^2 \over 2},-{e_3 \over 3},-{e_1 \over 3}), -({\tilde{e}^2 \over 2},{e_1 \over 3},{e_3 \over 3})$ & $(-{1\over 2},-{1\over 6},+{1\over 3})$ & $(0,+{1\over 3},-{1\over 6})$\\ & ${\tilde S}^3_{\pm}$ & $({\bf 1},{\overline {\bf 6}})(-2,+1,-{1})_L$ & $({\tilde{e}^2 \over 2},-{e_1 \over 3},-{e_2 \over 3}), -({\tilde{e}^2 \over 2},{e_2 \over 3},{e_1 \over 3})$ & $(-{1\over 2},-{1\over 6},+{1\over 3})$ & $(0,+{1\over 3},-{1\over 6})$\\ & $F^3_{\pm}$ &$({\bf 1},{\bf 15})(+1,-1,-{1})_L$ & $({\tilde{e}^2 \over 2},-{e_1 \over 3},-{e_2 \over 3}), -({\tilde{e}^2 \over 2},{e_2 \over 3},{e_1 \over 3})$ & $(-{1\over 2},-{1\over 6},+{1\over 3})$ & $(0,+{1\over 3},-{1\over 6})$\\ & & & & &\\ \hline & & & & &\\ & ${U}_{\pm+}$ &$({\bf 1},{\bf 1})(-3, +{3},+{3})_L$ & $(e_1/2,0,0),(-e_1/2,0,0)$ & $(-{1\over 2},-{1\over 2},0)$ & $(0,0,-{1\over 2})$ \\ $T2$ & $U_{\pm -}$ &$({\bf 1},{\bf 1})(-3, -{3},-{3})_L$ & $(e_1/2,0,0),(-e_1/2,0,0)$ & $(-{1\over 2},-{1\over 2},0)$ & $(0,0,-{1\over 2})$ \\ & ${\tilde T}_\pm$ & $({\overline {\bf 3}},{\bf 1})(+3,-3,+1)_L$ & $(e_1/2,0,0),(-e_1/2,0,0)$ & $(-{1\over 2},-{1\over 2},0)$ & $(0,0,-{1\over 2})$ \\ & & & & &\\ \hline & & & & &\\ $U(1)$ & & $({1\over{3\sqrt{2}}},~{1\over{2\sqrt{3}}},~{1\over {2\sqrt{3}}})$ & & $(1,1,1)$ & $(1,1,1)$ \end{tabular} \caption{The $G$-, $Q^R$- and $H$-charges (in the $-1$ and $-1/2$ pictures) for the massless fields of the $S3$ model. 
The $U(1)$ normalization radii for the $G$- and $H$-charges are given at the bottom of the Table.} \label{S3charges} \end{table} \begin{table}[t] \begin{tabular}{|c|l|l|l|l|l|} $S4$ & Field & $SU(2)^2 \otimes SU(6) \otimes U(1)^3$ & $Q^R$-charges in $SU(3)^3$ & $(H_1,H_2,H_3)_{-1}$ & $(H_1,H_2,H_3)_{-1/2}$ \\ \hline & & & & &\\ & $\Phi$ & $ ({\bf 1},{\bf 1},{\bf 35})(0,0,0)_L$ & $(0,0,0)$ & $(0,0,+1)$ & $(+{1\over 2}, +{1\over 2}, +{1\over 2})$ \\ & $\phi$ & $ ({\bf 1},{\bf 1},{\bf 1})(0,0,0)_L$ & $(0,0,0)$ & $(0,0,+1)$ & $(+{1\over 2}, +{1\over 2}, +{1\over 2})$ \\ & $U_0$ & $ ({\bf 1},{\bf 1},{\bf 1})(0,0,-6)_L$ & $(0,0,0)$ & $(0,0,+1)$ & $(+{1\over 2}, +{1\over 2}, +{1\over 2})$ \\ $U$ & $D_{\pm}$ &$ ({\bf 2},{\bf 1}, {\bf 1})({\pm 2},0,+3)_L$ & $(0,0,0)$ & $(0,0,+1)$ & $(+{1\over 2}, +{1\over 2}, +{1\over 2})$ \\ & $\Delta_{\pm}$ & $({\bf 2},{\bf 2},{\bf 1})(\pm 1,\mp 3,0)_L$ & $(0,0,0)$ & $(-1,0,0)$ & $(-{1\over 2}, +{1\over 2}, -{1\over 2})$ \\ & $\tilde{d}_{\pm}$ &$({\bf 1},{\bf 2},{\bf 1})(\pm 1,\pm 3, -3)_L$ &$(0,0,0)$ & $(0,-1,0)$ & $(+{1\over 2}, -{1\over 2}, -{1\over 2})$ \\ & & & & &\\ \hline & & & & &\\ & ${F}^1$ & $({\bf 1},{\bf 1},{\bf 15})(0,0,+2)_L$ & $ (0,-e_1/3,-e_1/3)$ & $(0, -{2\over 3},+{1\over 3})$ & $(+{1\over 2}, -{1\over 6},-{1\over 6})$\\ & ${F}^2$ & $({\bf 1},{\bf 1},{\bf 15})(0,0,+2)_L$ & $ (0,-e_2/3,-e_2/3)$ & $(0, -{2\over 3},+{1\over 3})$ & $(+{1\over 2}, -{1\over 6},-{1\over 6})$\\ $T3$ & ${F}^3$ & $({\bf 1},{\bf 1},{\bf 15})(0,0,+2)_L$ & $ (0,-e_3/3,-e_3/3)$ & $(0, -{2\over 3},+{1\over 3})$ & $(+{1\over 2}, -{1\over 6},-{1\over 6})$\\ & ${S}^1_{\pm}$ &$({\bf 1},{\bf 1}, {\bf 6})(\pm 1,\pm {1},+1)_L$ & $ (0,e_1/3,e_1/3)$ & $(0, -{1\over 3},+{2\over 3})$ & $(+{1\over 2}, +{1\over 6},+{1\over 6})$\\ & ${S}^2_{\pm}$ &$({\bf 1},{\bf 1}, {\bf 6})(\pm 1,\pm {1},+1)_L$ & $ (0,e_2/3,e_2/3)$ & $(0, -{1\over 3},+{2\over 3})$ & $(+{1\over 2}, +{1\over 6},+{1\over 6})$\\ & ${S}^3_{\pm}$ &$({\bf 1},{\bf 1}, {\bf 6})(\pm 1,\pm {1},+1)_L$ & $ 
(0,e_3/3,e_3/3)$ & $(0, -{1\over 3},+{2\over 3})$ & $(+{1\over 2}, +{1\over 6},+{1\over 6})$\\ & & & & &\\ \hline & & & & &\\ & ${\tilde S}^{1}_{+ \pm}$ &$({\bf 1},{\bf 1},{\overline {\bf 6}})(\pm 1,\mp {1},-1)_L$ & $(\tilde{e}^2/2,-e_2/3,-e_3/3)$ & $(-{1\over 2},-{1\over 6},+{1\over 3})$ & $(0,+{1\over 3},-{1\over 6})$\\ & ${\tilde S}^{1}_{- \pm}$ &$({\bf 1},{\bf 1},{\overline {\bf 6}})(\pm 1,\mp {1},-1)_L$ & $(-\tilde{e}^2/2,-e_3/3,-e_2/3)$ & $(-{1\over 2},-{1\over 6},+{1\over 3})$ & $(0,+{1\over 3},-{1\over 6})$\\ $T6$ & ${\tilde S}^2_{+ \pm}$ &$({\bf 1},{\bf 1},{\overline {\bf 6}})(\pm 1,\mp {1},-1)_L$ & $(\tilde{e}^2/2,-e_3/3,-e_1/3)$ & $(-{1\over 2},-{1\over 6},+{1\over 3})$ & $(0,+{1\over 3},-{1\over 6})$\\ & ${\tilde S}^{2}_{- \pm}$ &$({\bf 1},{\bf 1},{\overline {\bf 6}})(\pm 1,\mp {1},-1)_L$ & $(-\tilde{e}^2/2,-e_1/3,-e_3/3)$ & $(-{1\over 2},-{1\over 6},+{1\over 3})$ & $(0,+{1\over 3},-{1\over 6})$\\ & ${\tilde S}^3_{+ \pm}$ &$({\bf 1},{\bf 1},{\overline {\bf 6}})(\pm 1,\mp {1},-1)_L$ & $(\tilde{e}^2/2,-e_1/3,-e_2/3)$ & $(-{1\over 2},-{1\over 6},+{1\over 3})$ & $(0,+{1\over 3},-{1\over 6})$\\ & ${\tilde S}^{3}_{- \pm}$ &$({\bf 1},{\bf 1},{\overline {\bf 6}})(\pm 1,\mp {1},-1)_L$ & $(-\tilde{e}^2/2,-e_2/3,-e_1/3)$ & $(-{1\over 2},-{1\over 6},+{1\over 3})$ & $(0,+{1\over 3},-{1\over 6})$\\ & & & & &\\ \hline & & & & &\\ $T2$ & ${d}_{+ \pm}$ &$({\bf 1},{\bf 2},{\bf 1})({\pm 1},{\mp 3},+3)_L$ & $(e_1/2,0,0)$ & $(-{1\over 2},-{1\over 2},0)$ & $(0,0,-{1\over 2})$ \\ & ${d}_{- \pm}$ &$({\bf 1},{\bf 2},{\bf 1})({\pm 1},{\mp 3},+3)_L$ & $(-e_1/2,0,0)$ & $(-{1\over 2},-{1\over 2},0)$ & $(0,0,-{1\over 2})$ \\ & & & & &\\ \hline & & & & &\\ $U(1)$ & & $({1\over 2},~{1\over {2\sqrt{3}}},~{1\over {3\sqrt{2}}})$ & & $(1,1,1)$ & $(1,1,1)$ \end{tabular} \caption{The $G$-, $Q^R$- and $H$-charges (in the $-1$ and $-1/2$ pictures) for the massless fields of the $S4$ model.
The $U(1)$ normalization radii for the $G$- and $H$-charges are given at the bottom of the Table.} \label{S4charges} \end{table}
astro-ph/9704005
\section{Introduction} The nova-like brightening of Sakurai's object in Sagittarius has been attributed to the final helium-shell flash of a central star of a planetary nebula, which returns the star towards the domain of red giants in the Hertzsprung-Russell diagram (Duerbeck \& Benetti 1996). Few stars have been identified with this phase of evolution: examples include FG Sge, V605 Aql, and the planetary nebulae Abell 30, Abell 78 and N66. It is expected that a born-again red giant will consume hydrogen and become starkly hydrogen-deficient, helium- and carbon-rich. Low-resolution spectra led Duerbeck \& Benetti (1996) to suggest that Sakurai's object is hydrogen-poor. The presence of strong lines of neutral carbon and oxygen was also noted. Changes of the surface chemical composition may be rapid for born-again AGB stars, as was observed for FG Sge. Sakurai's object offers the prospect of monitoring such secular changes in another born-again candidate. Observations, as reported here, are surely crucial for an improved understanding of the final He-shell flash. \section{Observations} Spectra covering 3700--10150\,\AA\ at a resolution of about 30,000 were obtained with the 2.7\,m telescope at McDonald Observatory on May 5 and 6 and on October 7, 1996. A spectrum was also obtained with the 2.1\,m telescope: this spectrum from May 9, 1996, covers the region 5720\,\AA\ to 7200\,\AA\ at a resolution of about 60,000. \section{Chemical composition} Our analysis is based on line-blanketed, hydrogen-deficient model atmospheres, similar to those described by Asplund et al. (1997) but with a range of hydrogen abundances.
In estimating the stellar parameters $T_{\rm eff}$, log\,$g$ and the hydrogen abundance, various ionization equilibria (Fe\,{\sc i}/Fe\,{\sc ii}, Mg\,{\sc i}/Mg\,{\sc ii}, Si\,{\sc i}/Si\,{\sc ii}, Cr\,{\sc i}/Cr\,{\sc ii}) and excitation equilibria ([O\,{\sc i}]/O\,{\sc i}, Fe\,{\sc i}, Fe\,{\sc ii}), together with the H$\beta$ and H$\alpha$ line profiles (with line-broadening data following Seaton 1990), have been used. The C/He ratio was determined from the C\,{\sc ii} and He\,{\sc i} lines in the May spectra, which indicate C/He $\simeq 10$\,\%. The same ratio had to be assumed for October, when the lines were too weak to be utilized. The microturbulence parameter was estimated from Ti\,{\sc ii}, Fe\,{\sc i} and Fe\,{\sc ii} lines of different strengths. The May spectra are characterized by $T_{\rm eff} = 7500\pm 300$\,K, log\,$g = 0.0\pm 0.3$, and $\xi_{\rm t} = 8.0\pm 1.0$\,km\,s$^{-1}$, while the star had cooled significantly by October: $T_{\rm eff} = 6900$\,K, log\,$g = 0.5$, and $\xi_{\rm t} = 6.5$\,km\,s$^{-1}$. In fact, the derived parameters are not consistent with a constant stellar luminosity but rather indicate a decrease by a factor of 4, which is not supported by the observed photometry. It could, however, be that hydrostatic equilibrium is inapplicable in May due to an expansion of the star or effects of turbulent pressure: a dynamical atmosphere can be mimicked by an underestimate of log\,$g$ when assuming hydrostatic equilibrium. Indeed, with the May parameters the star is located at the classical Eddington limit (e.g. Asplund \& Gustafsson 1996). The analysis of the C\,{\sc i} lines reveals the same inconsistency between theoretical and observed line strengths as for R\,CrB stars (Gustafsson \& Asplund 1996; Lambert et al., in preparation): weak lines predicted with the input C abundance are a factor of 4 stronger than observed (Figs. \ref{f:spectra} and \ref{f:hbeta}).
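Returning to the luminosity discrepancy: the factor of 4 follows directly from the derived parameters if the stellar mass is assumed constant between the two epochs. Since $L \propto R^2 T_{\rm eff}^4$ and $g \propto M/R^2$, one has $L \propto M T_{\rm eff}^4/g$, so that
\begin{displaymath}
\frac{L_{\rm May}}{L_{\rm Oct}} = \left( \frac{7500}{6900} \right)^{4} \times 10^{\,0.5-0.0} \approx 4.4 .
\end{displaymath}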
It should be noted that no agreement between all $T_{\rm eff}$-log\,$g$ indicators could be achieved using a consistent C abundance for the analysis. Naturally, this C\,{\sc i} problem makes the absolute abundances uncertain, but relative abundances are generally expected to be much less affected (Lambert et al., in preparation). \begin{figure}[t] \centerline{ \psfig{figure=figure1.ps,height=6.cm}} \caption{A selected piece of spectrum in May (solid) and October (dashed) showing the increase of some elements, e.g. Sc, Ti, and Y. The dotted curve is the synthetic spectrum with the stellar parameters of October but with the May abundances. Note also that all predicted C\,{\sc i} lines are too strong } \label{f:spectra} \end{figure} \begin{figure}[t] \centerline{ \psfig{figure=figure2.ps,height=9.8cm}} \caption{{\bf a} H$\beta$ in October (thick solid) compared with predicted line profiles for solar H (dotted) and H-deficient by 3.0\,dex (dashed). {\bf b} C$_2$ (1-0) Swan band for $^{12}$C/$^{13}$C = 2 (dotted), 5 (dashed) and 10 (dash-dotted), together with the observed October spectrum (thick solid). Also shown in both figures are the May spectra (solid) but displaced upwards by 0.2 for clarity } \label{f:hbeta} \end{figure} The derived LTE abundances for May and October are summarized in Table \ref{t:abund}. More details on the analysis and atomic data (lines, gf-values, hfs, etc.) as well as a comparison with V854\,Cen will be given elsewhere. The weak Balmer lines certainly rule out a solar hydrogen abundance (Fig. \ref{f:hbeta}). Note that the absolute abundances of most elements are effectively unchanged from May to October within the uncertainties (typically $\leq 0.3\,$dex). Some elements, however, exhibit a marked change: for example, hydrogen declined, while lithium and the light $s$-process elements increased in abundance by a factor of about 4 (Fig. \ref{f:spectra}). Also Sc, Ti, Cr and Zn seem to have increased during the time span (Fig. \ref{f:spectra}).
The general agreement between the May and October abundances for most elements suggests that the stellar parameters are not seriously in error, which could otherwise have resulted in spurious abundance effects. Besides Li, the abundances of elements showing variations are not very sensitive to the stellar parameters: the required $\Delta T_{\rm eff} \approx 1000$\,K for either May or October to annul the abundance variations would be inconsistent with the $T_{\rm eff}$--log\,$g$ indicators and would introduce other, equally severe, changes (e.g. for Ca) that are less easily explained; a different log\,$g$ cannot simultaneously explain all the changes. It would also only aggravate the luminosity discrepancy. Hence, the few changes seem to be real. Furthermore, they are limited to elements expected to show alterations due to a final flash. \begin{table}[t] \caption{ Chemical compositions of Sakurai's object, the R\,CrB stars and the Sun (normalized to log\,($\Sigma \mu_i \epsilon_i$) = 12.15) \label{t:abund} } \begin{tabular}{lccccc} \hline Element & Sun$^{\rm a}$ & \multicolumn {2} {c} {Sakurai's object} & \multicolumn {2} {c} {R\,CrB$^{\rm b}$} \\ && May & October & majority & minority \\ \hline \\ H & 12.0 & 9.7 & 9.0 & & $<4.1 - 10.8$ \\ He & 11.0 & 11.4$^{\rm c}$ & 11.4$^{\rm c}$ & 11.5$^{\rm c}$ & 11.5$^{\rm c}$ \\ Li & 3.3$^{\rm a}$ & 3.6 & 4.2 & & \\ C & 8.6 & 9.7$^{\rm d}$ & 9.8$^{\rm d}$ & 8.9$^{\rm d}$ & 8.6 -- 9.5$^{\rm d}$ \\ N & 8.0 & 8.9 & 8.9 & 8.6 & 7.6 -- 8.6 \\ O & 8.9 & 9.5 & 9.4 & 8.2 & 7.5 -- 8.8 \\ Ne & 8.1 & 9.3 & & & 7.9 -- 9.6 \\ Na & 6.3 & 6.7 & 6.8 & 6.1 & 5.8 -- 5.9 \\ Mg & 7.6 & 6.6 & 6.5 & & 6.1 -- 7.3 \\ Al & 6.5 & 6.6 & 6.3 & 6.0 & 5.3 -- 5.6 \\ Si & 7.5 & 7.1 & 7.5 & 7.1 & 7.3 -- 8.1 \\ S & 7.3 & 6.6 & 6.9 & 6.9 & 6.7 -- 7.6 \\ K & 5.1 & 4.8 & 5.0 & & \\ Ca & 6.4 & 5.6 & 5.5 & 5.4 & 5.0 -- 5.3 \\ Sc & 3.2 & 3.1 & 3.9 & & \\ Ti & 5.0 & 4.1 & 4.6 & & \\ Cr & 5.7 & 4.5 & 5.1 & & \\ Fe & 7.5 & 6.3 & 6.6 & 6.5 & 5.0 -- 5.8 \\ Ni & 6.2 & 6.1 & 6.2 & 5.9 & 5.2 --
5.8 \\ Cu & 4.2 & 4.9 & 5.0 & & \\ Zn & 4.6 & 4.7 & 5.4 & 4.3 & 3.8 -- 4.1 \\ Rb & 2.6 & $<3.7$ & 4.6 & & \\ Sr & 3.0 & 4.9 & 5.4: & & \\ Y & 2.2 & 3.3 & 4.2 & 2.1 & 0.6 -- 2.8 \\ Zr & 2.6 & 3.0 & 3.5 & & \\ Ba & 2.1 & 1.5 & 1.9 & 1.6 & 0.7 -- 1.3 \\ La & 1.2 & $<1.6$ & 1.5 & & \\ \hline \end{tabular} \begin{list}{}{} \item[$^{\rm a}$] From Grevesse et al. (1996). For Li the meteoritic value is adopted. \item[$^{\rm b}$] From Rao \& Lambert (1996) and Jeffery \& Heber (1993). The majority is an average of 14 stars while the minority consists of V\,CrA, VZ\,Sgr, V3795\,Sgr and DY\,Cen. \item[$^{\rm c}$] Input C/He ratio for model atmospheres: C/He=1\% assumed for R\,CrB stars and 10\% estimated for Sakurai's object from the 1996 May spectra. \item[$^{\rm d}$] Spectroscopically determined C\,{\sc i} abundance, see text. \end{list} \end{table} The metallicity of Sakurai's object is, judging from Fe, slightly below solar by 0.2\,dex in mass fraction (0.9\,dex if the input rather than the spectroscopic C abundance is adopted). The quantities [Si/Fe], [S/Fe], [Ca/Fe], and [Ti/Fe], which are 0.8, 0.6, 0.3 and 0.4 respectively, are, if unchanged from the star's birth, also indicative of a metal-poor star (Edvardsson et al. 1993). An isotopic ratio $1.5\leq^{12}$C/$^{13}$C$ \leq 5$ is determined from the strong C$_2$ (1-0) and (0-1) Swan bands (Fig. \ref{f:hbeta}). The strengthening of the C$_2$ bands due to the change in stellar parameters is clearly illustrated in Fig. \ref{f:hbeta}. It is of considerable interest to compare the compositions of Sakurai's object and FG\,Sge, another born-again candidate which has recently experienced R\,CrB-like visual declines. FG\,Sge resembles Sakurai's object in that it is strongly $s$-element enriched (Langer et al. 1974), as well as carbon-rich and poor in iron-group elements, except for Sc (Kipper \& Kipper 1993). 
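The [X/Fe] ratios quoted above can be cross-checked against Table \ref{t:abund} via $[{\rm X/Fe}] = (\log\epsilon_{\rm X} - \log\epsilon_{\rm Fe})_{*} - (\log\epsilon_{\rm X} - \log\epsilon_{\rm Fe})_{\odot}$. A minimal sketch using the rounded May entries (the quoted values presumably derive from unrounded abundances, so agreement is only to about 0.1\,dex):

```python
# Logarithmic abundances log eps(X) from Table 2 (Sun and 1996 May columns).
sun = {"Si": 7.5, "S": 7.3, "Ca": 6.4, "Ti": 5.0, "Fe": 7.5}
may = {"Si": 7.1, "S": 6.6, "Ca": 5.6, "Ti": 4.1, "Fe": 6.3}

def x_over_fe(elem):
    """[X/Fe] = (log eps_X - log eps_Fe)_star - (log eps_X - log eps_Fe)_Sun."""
    return (may[elem] - may["Fe"]) - (sun[elem] - sun["Fe"])

ratios = {e: round(x_over_fe(e), 1) for e in ("Si", "S", "Ca", "Ti")}
# [Si/Fe] comes out at 0.8, exactly as quoted in the text.
```

The rounded May entries give [Si/Fe] = 0.8, [S/Fe] = 0.5, [Ca/Fe] = 0.4 and [Ti/Fe] = 0.3, close to the quoted 0.8, 0.6, 0.3 and 0.4.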
In FG\,Sge, however, the heavy $s$-elements are as overabundant as the light ones, and it has not yet been shown to be hydrogen-deficient. FG\,Sge may therefore have experienced a late shell flash as a luminous post-AGB star rather than a final flash as a white dwarf (Bl\"ocker \& Sch\"onberner 1996). Two of the outstanding aspects of the chemical composition of Sakurai's object are hallmarks of the R\,CrBs, namely H-deficiency and a high C content, but other similarities in relative abundances also exist (Lambert et al., in preparation; Rao \& Lambert 1996; Lambert \& Rao 1994). Except for the high Y/Fe, the other observed X/Fe ratios are similar to those found in R\,CrB stars. In particular, Sakurai's object resembles the (relatively) H-rich V854\,Cen (Asplund et al., in preparation). If, however, C/He is correctly estimated, it may rather be related to objects such as V605\,Aql, Abell 30 and 78 and the hot R\,CrB star V348\,Sgr, which are also surrounded by planetary nebulae and have been proposed to be final flash candidates (Renzini 1990). Abundance patterns similar to those presented here for Sakurai's object have also been obtained in less detailed analyses by Shetrone \& Keane (1997) and Kipper \& Klochkova (1997). Shetrone \& Keane's finding of a near-normal H abundance is, however, very puzzling. \section{Abundance variations and nucleosynthesis} In broad terms, the composition of Sakurai's object shows evidence of severe contamination by material exposed to hydrogen and helium burning and the associated nuclear reactions. Close examination provides some interesting constraints on the nucleosynthesis experienced by the star. The present atmosphere is not a simple mix of initial unprocessed gas, gas run through the H-burning CNO-cycles, and H-exhausted gas exposed to He-burning; the mixing must have been accompanied by further processing. This is demonstrated by the low observed $^{12}$C/$^{13}$C ratio, which encompasses the equilibrium value of 3.5 for CNO-cycling.
As the equilibrium abundance of $^{13}$C is very low following He-burning, the observed ratio suggests that $^{12}$C from He-burning has been exposed to hot protons. It would seem that C-rich material from He-burning has been mixed with ingested hydrogen such that the proton supply is effectively exhausted in converting $^{12}$C to $^{13}$C, with further conversion to $^{14}$N inhibited (see Renzini 1990). Not all protons are consumed in He-rich regions. Production of lithium is ascribable to the Cameron-Fowler (1971) mechanism. Here $^3$He synthesised in a low-mass main-sequence star is converted to $^7$Li in an envelope that convects $^7$Li to low temperatures where it survives until re-exposed to high temperatures. Production of lithium implies H-burning in regions not previously exposed to H-burning temperatures; $^3$He, which is destroyed in regions that have undergone H-shell or H-core burning, can hardly be resynthesised. The observed Li is not a fossil from an earlier stage as a Li-rich AGB star: the predicted Li/H ratio for AGB stars which have undergone hot-bottom burning is 10$^{-8}$ while the observed ratio is 10$^{-5}$ to 10$^{-6}$, and hydrogen consumption necessarily destroys fossil lithium. The overabundant Na and Al have likely been synthesised through $^{22}$Ne(p,$\gamma$)$^{23}$Na and $^{25}$Mg(p,$\gamma$)$^{26}$Al. As in other H-deficient stars, the Ne abundance is very high, which cannot be explained by $\alpha$-captures on N from the initial CNO, but must be due to products of He-burning and possibly additional CNO-cycling. A remarkable feature of Sakurai's object is the large overabundance of light $s$-process elements and the high ratio of light to heavy $s$-process elements. Probably, $^{13}$C($\alpha$,n)$^{16}$O is the neutron source. The $s$-processing may be characterized by the neutron exposure $\tau$. We find a good fit to the abundances from Ni to La for October with $\tau = 0.2 \pm 0.1$\,mb$^{-1}$ using Malaney's (1987) predictions for a single exposure.
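For reference, the main reaction chains invoked in the discussion above can be summarised as follows (these are the standard textbook chains, not equations reproduced from this paper):

```latex
\begin{align*}
  ^{12}\mathrm{C}(p,\gamma)\,^{13}\mathrm{N}(\beta^+\nu)\,^{13}\mathrm{C}
    &\quad \text{(drives $^{12}$C/$^{13}$C toward the CN-cycle equilibrium value)} \\
  ^{3}\mathrm{He}(\alpha,\gamma)\,^{7}\mathrm{Be}(e^-,\nu)\,^{7}\mathrm{Li}
    &\quad \text{(Cameron--Fowler mechanism for Li production)} \\
  ^{22}\mathrm{Ne}(p,\gamma)\,^{23}\mathrm{Na}, \qquad
  ^{25}\mathrm{Mg}(p,\gamma)\,^{26}\mathrm{Al}
    &\quad \text{(Na and Al enrichment)}
\end{align*}
```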
The Rb abundance indicates a low neutron density of $N_{\rm n} \approx 10^8$\,cm$^{-3}$ (Malaney 1987), while no useful limit could be set on the Tc abundance. An exponential distribution of exposures provides poorer agreement with the observed abundances. For the R\,CrB star U\,Aqr, which also shows light $s$-element enhancements, Bond et al. (1979) obtained $\tau \simeq 0.6$\,mb$^{-1}$. Such exposures imply that about 10 neutrons were captured by each Fe seed nucleus. Given the observed ratio $^{13}$C/Fe $\simeq 10^3$, such an exposure, even in the presence of neutron poisons such as $^{14}$N, seems achievable. The fact that Ni, Cu and Zn are well fit by the predictions indicates that the envelope consists mostly of material exposed to neutrons. This likely also explains the anomalously high ratios of K/Fe and Sc/Fe. The final He-shell flash may occur in a luminous post-AGB star or in the white dwarf that evolves from the post-AGB star. In the latter case, hydrogen may be mixed with deep layers of He and C and consumed. In contrast, the H-burning layer in the post-AGB star prevents deep mixing. About 10\% of all AGB stars may experience their final He-shell flash as a white dwarf and, if H consumption is severe, may convert the born-again AGB star to an R\,CrB star (Renzini 1990; Iben et al. 1996). Iben \& MacDonald (1995) have presented a model in which mixing and nucleosynthesis were followed: their chosen model of a 0.6\,$M_{\odot}$ star ended with an outer layer having the abundance ratios (by number of atoms) H/He $\simeq 10^{-0.8}$, C/He $\simeq 10^{-1.2}$, N/C $\simeq 10^{-0.5}$, and O/C $\simeq 10^{-1.3}$. This resembles the composition of Sakurai's object, apart from the predicted H deficiency not being as severe as observed. The model O/C ratio is lower than observed but might be raised by adjustment of the uncertain rate of the reaction $^{12}$C($\alpha,\gamma$)$^{16}$O.
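For completeness, the neutron exposure quoted above follows the standard definition (with $n_{\rm n}$ the neutron density and $v_T$ the thermal velocity; $\tau$ is in mb$^{-1}$ when cross sections are in mb), and the mean number of neutrons captured per Fe seed follows from the fitted abundance distribution:

```latex
\begin{equation*}
  \tau = \int n_{\rm n}\, v_T \,\mathrm{d}t ,
  \qquad
  n_{\rm c} = \frac{\sum_{A \geq 56} (A-56)\, N_A}{\sum_{A \geq 56} N_A} \approx 10 .
\end{equation*}
```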
In summary, final flash models offer a tantalising prospect of accounting for the observed composition of Sakurai's object. Life as a born-again AGB star is brief: the model by Iben \& MacDonald (1995) brightens by a factor of 10 and cools from $T_{\rm eff} = 40\,000$\,K to 6300\,K in just 17\,yr. Evolution over the narrow temperature range covered by Sakurai's object between May and October is, of course, much faster. The evolutionary timescale seems similar to that of V605\,Aql (Lundmark 1921). The timescale for compositional changes in Sakurai's object is likely even shorter; processed material rising from below will mix very quickly with the atmosphere. The other final flash candidate, FG\,Sge, has also shown rapid abundance alterations, e.g. some $s$-process elements increased by 0.8\,dex in 7 years (Langer et al. 1974). It is now important to extend the few available calculations of the final flash to a wider range of initial conditions, and to include Li-production and $s$-processing. The hints that the surface composition is evolving rapidly must be pursued by continued spectroscopic observations, which may shed further light on its evolutionary status and relation to the R\,CrB stars; we may have witnessed the birth of an R\,CrB star. Furthermore, a determination of the nebular composition would reveal the original composition of Sakurai's object prior to the final flash. Monitoring the star's visual variability in search of R\,CrB-like declines is naturally of importance. \begin{acknowledgements} We are grateful to Craig Wheeler for bringing the star to our attention, to Guillermo Gonzalez for a spectrum of Sakurai's object, and to Jim MacDonald and Icko Iben for helpful comments. Nikolai Piskunov and Sveneric Johansson are thanked for help with the hydrogen profiles and atomic data, and V.P. Arkhipova and V.P. Goranskij for photometric information. Financial support from the Swedish Natural Research Council, NSF (grant AST93-15124) and the Robert A.
Welch Foundation of Houston, Texas is acknowledged. The analysis has made use of the VALD database. The referee Simon Jeffery is thanked for helpful comments. \end{acknowledgements}
\section{Introduction} \label{introduction} \input{1_introduction} \section{Background} \label{sec:background} \input{2_background} \input{3_method} \section{Results} \label{sec:results} \input{4_results} \section{Discussion} \label{sec:discussion} \input{5_discussion} \section{Conclusions \& Future Work} \label{sec:conclusions} \input{6_conclusions} \begin{acks} We thank Jake Stein and Alexander Fanta for helpful comments and Ulrik Lyngs for help with data analysis. Konrad Kollnig was funded by the UK Engineering and Physical Sciences Research Council (EPSRC) under grant number EP/R513295/1. Max Van Kleek has been supported by the PETRAS National Centre of Excellence for IoT Systems Cybersecurity, which has been funded by the UK EPSRC under grant number EP/S035362/1. Max Van Kleek, Reuben Binns, and Nigel Shadbolt have been supported by the Oxford Martin School EWADA Programme. \end{acks} \bibliographystyle{ACM-Reference-Format} \subsection{Related work} \label{sec:related-work} Previous research has studied privacy in mobile apps extensively. Two main methods have emerged in the academic literature: dynamic and static analysis. \emph{Dynamic analysis} observes the run-time behaviour of an app to gather evidence of sensitive data leaving the device. Early research focused on OS instrumentation, i.e. modifying Android~\cite{enck_taintdroid_2010} or iOS~\cite{agarwal_protectmyprivacy_2013}. With the growing complexity of mobile operating systems, recent work has shifted to analysing network traffic~\cite{privacyguard_vpn_2015,nomoads_2018,free_v_paid_2019,reyes_wont_2018,van_kleek_better_2017,ren_recon_2016,shuba_nomoats_2020}. This comes with certain limitations. One problem is limited scalability, since every app is executed individually. Another issue is that not all privacy-relevant parts of apps may be invoked during analysis, potentially leading to incomplete results. \emph{Static analysis} dissects apps without execution.
Usually, apps are decompiled, and the obtained program code is analysed~\cite{han_comparing_2013,pios_2011}. The key benefit of static analysis is that it can analyse apps quickly, allowing it to scale to millions of apps~\cite{china_2018,playdrone_2014,binns_third_2018,chen_following_2016,kollnig_before_2021}. However, static analysis can still involve substantial computational effort and~--~unlike dynamic analysis~--~does not allow the observation of real data flows because apps are never actually run. Programming techniques, such as the use of code obfuscation and native code, can pose further obstacles. This is especially true for iOS apps, which are often harder to analyse and decompile~--~compared to Android~--~and are encrypted by default~\cite{kollnig2021iphones,maps_2019,binns_third_2018}. While this iOS encryption might legitimately protect \textit{paid} apps against piracy, Apple also encrypts all free apps downloaded from the App Store. By contrast, Google only encrypts paid apps (not free ones) when downloaded from its Play Store. The encryption of iOS apps by Apple~--~even of free ones~--~is problematic for research efforts because it drives researchers into legal grey areas of copyright law~\cite{kollnig2021iphones}. Partly because of these difficulties, our recent work~\cite{kollnig2021iphones} was the first large-scale app privacy analysis study on iOS apps since 2013~\cite{agarwal_protectmyprivacy_2013}. We avoided legal problems relating to copyright law by conducting part of the analysis on-device using the popular app instrumentation tool \texttt{Frida}~\cite{frida}. In this paper, we follow the methodology of our previous paper, which used a combination of both dynamic and static analysis, so as to compare the privacy practices of the studied apps before and after the introduction of Apple's new privacy rules. We discuss our methodology for this paper in more detail in Section~\ref{sec:methodology}.
\subsection{Regulation of App Platforms} \label{sec:regulation} The centrality of app platforms~--~i.e. Apple's iOS and Google's Android ecosystem~--~makes them a target for effective privacy regulation; however, such regulation remains limited~\cite{o_fathaigh_european_2019,hoboken2021}. The US Federal Trade Commission (FTC) established some baseline rules for app stores in 2013. It strongly recommended that app platforms require just-in-time consent for sensitive data access, seek privacy policies from app developers, and implement a system-wide opt-out mechanism for data collection~\cite{ftc_app_stores}. Although these recommendations were not law, Google and Apple followed many of them, and there have been no further public recommendations from the FTC since. In the EU and UK, there exists no targeted regulation of app stores. The Regulation on platform-to-business relations (P2BR) contains general provisions for online intermediaries, including app stores, but does little to enact better privacy protections~\cite{o_fathaigh_european_2019}. Data protection laws, such as the GDPR and the ePrivacy Directive, arguably place the primary responsibility for data protection with the app developers, not usually with app platform providers~--~although this is subject to ongoing debate; this lack of data protection obligations within the entire software development process~--~not just deployment~--~has been widely criticised~\cite{bygrave_data_2017,jasmontaite_data_2018}. While no targeted regulation exists, app platforms face increasing scrutiny by courts and regulators. In the case \textit{Epic Games v Apple}, running since 2020, a US District Court judge largely found no monopolistic behaviour by Apple, but did identify some anticompetitive conduct in Apple's business practices. The judge ordered Apple to allow app developers to inform app users of alternative payment methods. Both Apple and Epic Games have appealed the ruling.
In the EU, following a complaint by Spotify against Apple from 2019, the European Commission identified multiple anticompetitive aspects of Apple's ecosystem in a preliminary ruling~--~the case is, however, still ongoing. In January 2022, the Dutch competition authority demanded changes from Apple to its App Store policies; Apple has to date not fulfilled the demands of the regulators in their entirety, and has instead chosen to pay a weekly penalty of €5 million up to a maximum of €50 million~\cite{dutch_authority}. The challenges in keeping up with the regulation of platforms have spurred a recent countermovement by lawmakers. In South Korea, parliament amended the Telecommunication Business Act to force app stores to allow alternative payment methods and reduce commissions~\cite{korea}. In response, Apple lowered the share it takes from App Store revenues of small developers (making less than \$1 million per year) from 30\% to 15\%. In the US, Congress is debating a new Open App Markets Act that aims to address common competition concerns around app stores; it passed the Senate Judiciary Committee with a strong 20--2 bipartisan vote in February 2022. In the EU, lawmakers are seeking to enact two new pieces of legislation that aim to improve the regulation of digital markets, the Digital Markets Act and the Digital Services Act. Any new legal requirement for app platforms will likely have implications worldwide, due to the nature of digital ecosystems. In sum, there currently exist few specific legal obligations for app platforms. Instead, they are encouraged to self-regulate their conduct. The following analysis shall shine a light on how the recent policy changes by Apple, a highly prominent example of this self-regulation, have affected the actual privacy practices of mobile apps.
\section{Methodology} \label{sec:methodology} \begin{figure} \centering \includegraphics[width=0.95\linewidth]{figures/organisation_cropped.pdf} \caption{Overview of our analysis methodology (Section \ref{sec:methodology}): First, (1) we select and download 1,759 apps from before the introduction of the ATT, and 1,759 from after. We also collect apps' Privacy Nutrition Labels. Next, we perform (2) \textbf{Code Analysis} to examine permission and tracking library usage; and (3) \textbf{Network Traffic Analysis} to analyse tracking domains contacted at the first app start and the sharing of personal data. The results of this analysis (Section \ref{sec:results}) are detailed \textbf{App Privacy Footprints} (4) for the downloaded apps.} \label{fig:method_flow} \Description{This figure shows a visualisation of the methodology that is also explained in Section~\ref{sec:methodology}.} \end{figure} In this section, we describe our analysis methodology (depicted in Figure \ref{fig:method_flow}), which follows the one that we previously used for a comparative analysis of iOS and Android apps' privacy practices~\cite{kollnig2021iphones}. Code and data to replicate our results are available at \url{https://www.platformcontrol.org/}. We therefore keep our description of the methodology short and refer the reader to the original paper for details. \subsection{App Selection and Download} This section details our process for selecting and downloading apps from the Apple App Store (step 1 in Figure~\ref{fig:method_flow}). For the selection of apps, we revisited the same 12,000 iOS apps as in our previous study~\cite{kollnig2021iphones}. These apps were selected by first generating a large list of apps available on the Apple App Store between December 2019 and February 2020. We then downloaded a random subset ($n=12,000$) of those apps that were last updated since 2018 so as to focus on apps currently in use.
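The pairing of before/after app versions described in this section can be sketched as follows. This is a minimal illustration, not the authors' actual pipeline: the function name, the metadata fields, and the cutoff date (the ATT enforcement date, iOS 14.5) are our own assumptions.

```python
from datetime import date

def build_pairs(pre_snapshot, post_snapshot, cutoff=date(2021, 4, 26)):
    """Pair each pre-ATT app with its post-ATT version, keeping only apps
    that are still listed and were updated after the ATT cutoff.
    Snapshots map bundle_id -> metadata dict with an 'updated' date."""
    pairs = {}
    for bundle_id, pre_meta in pre_snapshot.items():
        post_meta = post_snapshot.get(bundle_id)
        if post_meta is None:               # app removed from the App Store
            continue
        if post_meta["updated"] <= cutoff:  # not yet updated under the new rules
            continue
        pairs[bundle_id] = (pre_meta, post_meta)
    return pairs

# Toy example: one app updated after ATT, one stale, one removed.
pre = {
    "com.example.a": {"updated": date(2020, 1, 5)},
    "com.example.b": {"updated": date(2020, 3, 1)},
    "com.example.c": {"updated": date(2019, 12, 2)},
}
post = {
    "com.example.a": {"updated": date(2021, 10, 3)},
    "com.example.b": {"updated": date(2021, 2, 1)},  # last update pre-dates ATT
}
pairs = build_pairs(pre, post)
```

Only `com.example.a` survives the filtering, mirroring how the 12,000-app sample shrank to 1,759 pairs.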
For this work, we re-downloaded those apps that were updated to comply with Apple's ATT and privacy label rules, in October 2021. This resulted in a dataset of 1,759 \textit{pairs} of apps, one from before iOS 14 and one from after. This number of apps is comparatively small because many apps had not yet been updated since the introduction of the new rules, while some other apps had been removed from the store (2,713 out of 12,000 apps were not available on the App Store anymore). We additionally scraped the Privacy Nutrition Labels for the newly downloaded apps. \subsection{Code Analysis} To identify the presence of tracking libraries (step 2 in Figure~\ref{fig:method_flow}), we extracted the names of all classes loaded by each app using the tool \texttt{Frida}~\cite{frida} and checked them against a list of known tracker class names from our previous paper~\cite{kollnig2021iphones}. We also examined the app manifest (every iOS app must provide such a file) to determine how certain tracking libraries are configured -- many tracking libraries allow developers to restrict data collection using settings in the manifest file, e.g. to disable the collection of unique identifiers or the automatic SDK initialisation at the first app start. This can help set up tracking libraries in a legally compliant manner. For example, `Data minimisation' is one of the key principles of the GDPR (Article 5.1 (c)), and user opt-in is required prior to app tracking in the EU and UK~\cite{kollnig_2021}. We analysed the privacy settings provided by some of the most prominent tracking libraries: Google AdMob, Facebook, and Google Firebase. Beyond analysing tracking in apps, we also obtained a list of permissions that apps can request. Permissions form an important part of the security model of iOS as they protect sensitive information on the device, such as apps' access to the camera or address book. As such, permissions are different from the new privacy labels, which do not affect the runtime behaviour of apps.
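Extracting permission declarations from an app manifest can be sketched with the standard library's \texttt{plistlib}. iOS permission strings are Info.plist keys of the form \texttt{NS<Name>UsageDescription}; stripping the prefix and suffix yields short names like those used in this paper (e.g. \texttt{CameraUsage}). The synthetic manifest and the exact stripping rule below are illustrative assumptions, not the authors' code.

```python
import plistlib

def extract_permissions(plist_bytes):
    """Return short permission identifiers declared in an app's Info.plist.
    Keys look like NSCameraUsageDescription; we strip 'NS'/'Description'."""
    info = plistlib.loads(plist_bytes)
    return sorted(
        key[len("NS"):-len("Description")]
        for key in info
        if key.startswith("NS") and key.endswith("UsageDescription")
    )

# A minimal synthetic manifest declaring camera, location and tracking access.
manifest = plistlib.dumps({
    "CFBundleIdentifier": "com.example.app",
    "NSCameraUsageDescription": "Scan documents",
    "NSLocationWhenInUseUsageDescription": "Find nearby stores",
    "NSUserTrackingUsageDescription": "Personalised advertising",
})
permissions = extract_permissions(manifest)
```

On this toy manifest the function returns `["CameraUsage", "LocationWhenInUseUsage", "UserTrackingUsage"]`; non-permission keys such as `CFBundleIdentifier` are ignored.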
We extracted apps' permissions by automatically inspecting the manifest file. \subsection{Network Analysis} To analyse apps' network traffic (step 3 in Figure~\ref{fig:method_flow}), we executed every app on a real device~--~one iPhone SE 1st Gen with iOS 14.2, and one with iOS 14.8~--~for 30 seconds without user interaction. We captured network traffic using the tool \texttt{mitmdump}. We disabled certificate validation using \texttt{SSL Kill Switch 2}, after gaining system-level access on both iPhones (known as `jailbreak'). On the iPhone with iOS 14.2, we did not opt out of ad personalisation in the system settings, thereby assuming user opt-in to use the IDFA (reflecting the assumption that many users who would reject tracking do not do so because the option is in the less prominent settings on the OS~\cite{kollnig2021iphones}). On the iPhone with iOS 14.8, we used the system settings to ask all apps not to track. Although in Android privacy research real user behaviour is simulated via various automation tools~\cite{van_kleek_x-ray_2018,ren_recon_2016,okoyomon_ridiculousness_2019,han_price_2020,binns_measuring_2018,reyes_wont_2018,shuba_nomoats_2020}, Apple's restrictions on debugging and instrumentation have hindered the development of such tools for iOS. Tracking libraries are usually initialised at the first app start and without user consent~\cite{kollnig_2021, reyes_wont_2018,nguyen_share_first_consent_2021,kollnig2021iphones}, and they can thus be detected without user interaction in the network traffic, as done in our analysis. \subsection{Tracking Libraries} \label{sec:static_tracking} Apps from both before and after the ATT widely used tracking libraries (see Figure~\ref{fig:apps_trackers}a). The median number of tracking libraries included in an app was 3 in both datasets. The mean before was 3.7, the mean after was 3.6. 4.75\% of apps from before ATT contained more than 10 tracking libraries, compared to 4.75\% after.
86.39\% contained at least one before ATT, and 87.52\% after. The most prominent libraries have not changed since the introduction of ATT. The top one was the SKAdNetwork library (in 78.4\% of apps before, and 81.8\% after). While part of Apple's privacy-preserving advertising attribution system, this library discloses to Apple information about what ads a user clicked on, from which Apple could (theoretically) build user profiles for its own advertising system. When one of the authors followed up with Apple about this potential issue (exercising the GDPR's \textit{right to be informed} under Article 13), Apple did not deny the fact that this data might be used for advertising, but assured us that any targeted ads would only be served to segments of users (of at least 5,000 individuals with similar interests). Google Firebase Analytics ranked second (64.3\% of apps from before ATT, and 67.0\% after), and Google Crashlytics third (43.6\% before, 44.4\% after). Overall, Apple's privacy measures seem not to have affected the integration of tracker libraries into \textit{existing} apps. \subsubsection{Configuration for Data Minimisation} \label{sec:static_tracking_config} Among the apps that used Google AdMob, 2.9\% of apps from before and 4.5\% from after chose to delay data collection. Delaying data collection can help app developers seek consent before enabling tracking and fulfil legal obligations. Among the apps using the Facebook SDK, there was an increase in those which delayed the sending of app events (6.7\% before, and 12.5\% after); an increase in those which delayed the SDK initialisation (1.0\% before ATT, 2.2\% after), and an increase in those which disabled the collection of the IDFA (5.0\% before, 8.6\% after).
Among apps using Google Firebase, 0.6\% permanently deactivated analytics before ATT and 0.8\% after, 0.0\% disabled the collection of the IDFA before and 0.6\% after, and 0.6\% delayed the Firebase data collection before ATT and 1.0\% after. Overall, we found that only a small fraction of apps made use of data-minimising SDK settings in their manifest files. One reason for this observation might be that some developers are not aware of these settings because tracking companies tend to have an interest in less privacy-preserving defaults regarding data collection~\cite{mhaidli_we_2019,kollnig_2021}. This fraction has slightly increased since the introduction of the ATT. \subsection{Data Access and Permissions} \label{sec:data_access} \begin{figure} \centering \includegraphics[width=0.95\linewidth]{figures/top-permissions.pdf} \caption{Top 10 permissions that apps can request.} \label{fig:permissions} \Description{This figure shows a bar chart of the top 10 permissions.} \end{figure} \textbf{Most prevalent permissions.} Figure~\ref{fig:permissions} shows the most prevalent permissions before and after the introduction of the ATT. On average, there was an increase in permission use ($4.3$ permissions before, $4.7$ after~--~excluding the new \textit{Tracking} permission). \texttt{CameraUsage} (for camera access) was the most common permission (62.6\% before ATT, 66.9\% after), closely followed by \texttt{PhotoLibraryUsage} (65.8\% before ATT, 66.9\% after), and \texttt{LocationWhenInUseUsage} (53.8\% before ATT, 58.0\% after). \textbf{Tracking permission and access to IDFA.} As part of ATT, apps that want to access the IDFA or conduct tracking must declare the \texttt{TrackingUsage} permission in their manifest. 24.7\% of apps from our dataset chose to declare this permission, and might ask users for tracking permission. At the same time, the share of apps that contain the \texttt{AdSupport} library, necessary to access the IDFA in the app code, stayed unchanged at 50.8\% of apps.
This means that 50.8\% of apps from after the ATT could access the IDFA on iOS versions earlier than 14.5, but only 24.7\% can on iOS 14.5 or higher. \textbf{Tracking permission and integration of tracking SDKs.} The share of apps that both contained a tracking library and could request tracking varied somewhat with the tracking library used. 69.3\% of the 350 apps that integrated Google AdMob declared the \texttt{TrackingUsage} permission; 78.7\% of the 110 apps that integrated Unity3d Ads; 50.0\% of the 116 apps that integrated Moat; and 77.3\% of the 54 apps that integrated Inmobi. Whether the app is from before or after the ATT, the vast majority of apps (between 97 and 100\%) that integrated any of these tracking libraries also integrated the \texttt{AdSupport} library, and could therefore access the IDFA if running on iOS versions before 14.5. \subsection{Data Sharing} \label{sec:data_sharing} \subsubsection{Before Consent} \label{sec:data_sharing_consent} This section analyses how many tracking domains apps contacted before any user interaction has taken place; the next Section~\ref{sec:data_sharing_pii} then analyses what data was shared with trackers. Since tracking libraries usually start sending data right at the first app start~\cite{kollnig_2021,reyes_wont_2018,nguyen_share_first_consent_2021,kollnig2021iphones}, this approach provides additional evidence as to the nature of tracking in apps~--~and without consent. Our results are shown in Figure~\ref{fig:apps_trackers}b. The average number of tracking domains contacted was somewhat higher for apps from after the introduction of the ATT (4.0 before, 4.7 after). The most popular domains were related to Google's analytics services: \path{firebaseinstallations.googleapis.com} (4.1\% of apps before the ATT, 47.4\% after) and \path{app-measurement.com} (45.2\% before, 47.2\% after).
Since both endpoints are related to Google Firebase, the large increase in \path{firebaseinstallations.googleapis.com} prevalence likely reflects internal restructuring of Firebase following Google's acquisitions of other advertising and analytics companies. For example, Google acquired the crash reporting software Crashlytics from Twitter in January 2017, which is clearly reflected in our data. Google deprecated the old API endpoint (\path{settings.crashlytics.com}) and changed it to \path{firebase-settings.crashlytics.com} in November 2020. This had the direct effect that all Crashlytics users must now also use Google Firebase. The domain \path{settings.crashlytics.com} was contacted by 36.4\% of apps from before the ATT, and \path{firebase-settings.crashlytics.com} by 32.3\% after the ATT. While this might point to a small difference in the adoption of Google Crashlytics, the exact same number of apps (734, 43.6\%) integrated the Crashlytics library into their code, before and after the ATT. Similarly, the exact same number of apps integrated the Facebook SDK (523, 31.1\%); the share of apps that contacted the associated API endpoint \path{graph.facebook.com} at the first start fell from 27.7\% to 23.1\%. The Google Admob SDK, too, was integrated in the same number of apps (350, 20.8\%), and did not see a decline in apps that contacted the associated API endpoint \path{googleads.g.doubleclick.net} (12.1\% before, 12.9\% after). Overall, data sharing with tracker companies before any user interaction remains common, even after the introduction of the ATT. This is in potential violation of applicable data protection and privacy laws in the EU and UK, which require prior consent~\cite{kollnig_2021}. \subsubsection{Exposure of Personal Data} \label{sec:data_sharing_pii} We found that 26.0\% of apps from before the ATT shared the IDFA over the Internet, but none from after the ATT. In this sense, the ATT effectively prevents apps from accessing the IDFA.
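Detecting this kind of IDFA exposure amounts to scanning decoded request bodies for the test device's known identifier. A minimal sketch, with hypothetical app ids and a made-up IDFA value; real traffic additionally requires handling of encodings, hashed identifiers, and non-JSON payloads:

```python
def apps_sharing_idfa(flows, idfa):
    """Return the app ids whose captured requests contain the test device's
    IDFA (a fixed UUID on a given device; all zeros after the opt-out).
    `flows` maps app_id -> list of decoded request bodies."""
    needle = idfa.lower()
    return sorted(
        app for app, bodies in flows.items()
        if any(needle in body.lower() for body in bodies)
    )

DEVICE_IDFA = "A1B2C3D4-0000-0000-0000-000000000001"  # hypothetical test value
flows = {
    "com.example.ads":  ['{"idfa": "a1b2c3d4-0000-0000-0000-000000000001"}'],
    "com.example.news": ['{"locale": "en_GB"}'],
}
leakers = apps_sharing_idfa(flows, DEVICE_IDFA)
```

The case-insensitive match matters in practice because SDKs serialise the UUID in both upper and lower case.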
Despite Apple's promises, closer inspection of the network traffic showed that both Apple and other third parties are still able to engage in user tracking.
\begin{figure} \centering \small
\begin{tabular}{llrr}
\toprule
Information & Example & Before & After \\
\midrule
iPhone Name & MyPhone & 2.5\% & 4.2\% \\
iPhone Model & iPhone8,4$\mid$iPhone SE & 60.2\% & 74.5\% \\
Carrier & Three & 20.2\% & 20.2\% \\
Locale & en\_GB$\mid$en-gb & 85.7\% & 90.1\% \\
CPU Architecture & ARM64$\mid$16777228 & 13.7\% & 16.1\% \\
Board Config & N69uAP & 3.1\% & 4.5\% \\
OS Version & 14.8$\mid$18H17 & 79.9\% & 86.9\% \\
Timezone & Europe/London & 3.9\% & 3.4\% \\
\bottomrule
\end{tabular}
\caption{Proportion of \textit{all} apps that shared device information. This information can potentially be used for fingerprinting or cohort tracking.} \label{tab:pii} \Description{This figure shows a table with types of data that have been shared by apps, as well as the prevalence of such sharing across all studied apps.} \end{figure}
We found that iPhones continued to share a range of information with third parties that can potentially be used for device fingerprinting or cohort tracking, see Table~\ref{tab:pii}. Only \textit{timezone} saw a slight decrease in the number of apps that shared this information. It is not clear why apps need to access or share some of this information, e.g. the carrier name (shared by 20.2\% of apps) or the iPhone name (shared by 3--4\% of apps). Meanwhile, some types of information, particularly the iPhone name, might allow the identification of individuals, especially when combined with other information.
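Per-field prevalence figures of this kind can be computed by tallying, for each device-info key, the fraction of apps whose payloads ever contained it. A minimal sketch over synthetic JSON payloads (field names and data are illustrative, not our measurement pipeline):

```python
import json

FIELDS = ["model", "locale", "os_version", "carrier", "timezone"]

def field_prevalence(app_payloads):
    """Fraction of apps whose JSON payloads contain each device-info key.
    `app_payloads` maps app_id -> list of raw JSON request bodies."""
    counts = {f: 0 for f in FIELDS}
    for payloads in app_payloads.values():
        seen = set()
        for raw in payloads:
            seen.update(k for k in json.loads(raw) if k in FIELDS)
        for f in seen:  # count each app at most once per field
            counts[f] += 1
    n = len(app_payloads)
    return {f: counts[f] / n for f in FIELDS}

apps = {
    "a": ['{"model": "iPhone8,4", "locale": "en_GB"}'],
    "b": ['{"model": "iPhone8,4", "os_version": "14.8"}'],
}
prevalence = field_prevalence(apps)
```

Here both toy apps share the device model (prevalence 1.0) while only one shares the locale (0.5), mirroring how the percentages in Table~\ref{tab:pii} are defined over all studied apps.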
\begin{table*} \centering \footnotesize \begin{tabular}{llrcccc} \toprule Domain & Company & Apps & User ID & Locale & Model & OS Version \\ \midrule \texttt{firebaseinstallations.googleapis.com} & Google & 47.4\% & \checkmark & \checkmark & & \\ \texttt{app-measurement.com} & Google & 47.2\% & \checkmark & \checkmark & & \\ \texttt{firebase-settings.crashlytics.com} & Google & 32.3\% & \checkmark & \checkmark & \checkmark & \checkmark \\ \texttt{device-provisioning.googleapis.com} & Google & 25.8\% & \checkmark & \checkmark & \checkmark & \checkmark \\ \texttt{graph.facebook.com} & Facebook & 23.1\% & \checkmark & \checkmark & \checkmark & \checkmark \\ \texttt{itunes.apple.com} & Apple & 18.3\% & \checkmark & \checkmark & \checkmark & \checkmark \\ \texttt{fbcdn.net} & Facebook & 13.0\% & & \checkmark & & \\ \texttt{googleads.g.doubleclick.net} & Google & 12.9\% & \checkmark & \checkmark & \checkmark & \checkmark \\ \texttt{firebaseremoteconfig.googleapis.com} & Google & 11.8\% & \checkmark & \checkmark & & \\ \texttt{gsp-ssl.ls.apple.com} & Apple & 9.9\% & \checkmark & \checkmark & \checkmark & \checkmark \\ \texttt{tpc.googlesyndication.com} & Google & 8.3\% & & \checkmark & & \checkmark \\ \texttt{www.googletagservices.com} & Google & 8.1\% & & \checkmark & & \checkmark \\ \texttt{clients3.google.com} & Google & 5.3\% & & \checkmark & & \\ \texttt{firebasedynamiclinks.googleapis.com} & Google & 5.2\% & \checkmark & \checkmark & & \checkmark \\ \texttt{in.appcenter.ms} & Microsoft& 4.3\% & \checkmark & \checkmark & \checkmark & \checkmark \\ \texttt{play.googleapis.com} & Google & 4.2\% & \checkmark & \checkmark & \checkmark & \checkmark \\ \texttt{skadsdk.appsflyer.com} & AppsFlyer& 4.0\% & \checkmark & \checkmark & & \\ \texttt{gsp64-ssl.ls.apple.com} & Apple & 3.9\% & & \checkmark & \checkmark & \checkmark \\ \texttt{api.onesignal.com} & OneSignal& 3.7\% & & \checkmark & & \\ \texttt{ca.iadsdk.apple.com} & Apple & 3.7\% & \checkmark & \checkmark & \checkmark 
& \checkmark \\ \bottomrule \end{tabular} \caption{20 most common tracking domains after ATT: sharing of user identifiers with third parties, alongside device information. Empty cells mean that we did not observe the sharing of a certain type of information, although this might still take place.} \label{tab:id_sharing} \end{table*} In our analysis, we found 9 apps that were able to generate a shared user identifier that can be used for cross-app tracking, through the use of server-side code. These 9 apps used an \enquote{AAID} (potentially leaning on the term Android Advertising Identifier) implemented and generated by Umeng, a subsidiary of the Chinese tech company Alibaba. The flow to obtain an AAID is visualised in Figures~\ref{fig:aaid1} and \ref{fig:aaid2} in the Appendix. As expected, the IDFA is only zeros because we used the opt-out provided by iOS 14.8; we observed, however, that the IDFV (ID for Vendors), a non-resettable, vendor-specific identifier, was shared over the Internet, see Figure~\ref{fig:aaid1}. The sharing of device information for purposes of fingerprinting would be in violation of Apple's policies, which do not allow developers to \enquote{derive data from a device for the purpose of uniquely identifying it}~\cite{apple_tracking_definition}. Other experts and researchers have also voiced concerns that tracking might continue~\cite{att_caid1,apple_enforcement1,apple_enforcement2,apple_enforcement3}. We reported our observations to Apple on 17 November 2021; Apple promised to investigate the problem. We conducted a follow-up investigation on 1 February 2022, and re-downloaded and analysed a range of iOS apps. Some of the apps still continued to retrieve a unique identifier from the URL \url{https://aaid.umeng.com/api/postZdata}. Other apps now contacted the URL \url{https://utoken.umeng.com/api/postZdata/v2}, and applied additional encryption (rather than just HTTPS) to the requests and responses.
This encrypted data had roughly the same size as before (\textasciitilde750 bytes for the request, \textasciitilde350 bytes for the response) and the same MIME type (\path{application/json} for the request, \path{application/json;charset=UTF-8} for the response). The issue thus seems to persist, but has now been hidden from public scrutiny through the use of encryption. We tried to reproduce these experiments for a few apps on iOS 15 and higher, but did not observe the same behaviour; there currently exists no public jailbreak for these iOS versions, and investigations similar to ours are therefore not (yet) possible on them. There is a possibility that the issue has been fixed on iOS 15 or higher, or that we did not pick up the same behaviour in our small-scale testing (about 10 apps instead of more than 1000). However, Apple did not provide further details to us. Analysing the top 20 most commonly contacted domains, we could confirm that installation-specific identifiers (see column \enquote{User ID}) are commonly collected alongside further device-specific information, see Table~\ref{tab:id_sharing}. While these installation-specific identifiers are usually randomly generated at the first app start, large tracking companies can likely still use these identifiers to build profiles of an app user's journey across apps, using their server-side code to link different identifiers together (e.g. through the user's IP address, other device information, and first-party data). Companies also receive information about a user's locale (i.e. the display language), the device model, and the OS version. Such information can be used to disambiguate different users connecting from the same IP address (e.g. households sharing the same Wi-Fi router)~--~and even across different IP addresses through the use of additional, first-party data that large tracking companies hold.
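The kind of server-side fingerprinting and linking described above can be sketched as follows. The snippet is purely illustrative (the actual logic of Umeng's or any other tracker's servers is not public and may differ entirely): it derives a stable identifier from device attributes that survive the IDFA opt-out, mirroring the field names of the observed request (cf.\ Figure~\ref{fig:aaid1}), and shows why two apps on the same device end up with the same identifier despite the zeroed IDFA.

```python
import hashlib
import json

def derive_stable_id(device_info: dict) -> str:
    """Illustrative sketch of server-side fingerprinting.

    Derives a stable identifier from device attributes that survive
    the IDFA opt-out. Field names mirror the observed Umeng request;
    the real server-side logic is not public and may differ entirely.
    """
    # Attributes that rarely change and look the same from every app
    # installed on the device (app-specific fields such as bundle_id
    # or the per-vendor IDFV are deliberately excluded).
    stable = ("country", "hostname", "hw_model", "model",
              "os_version", "timezone", "total_storage")
    fingerprint = json.dumps({k: device_info.get(k) for k in stable},
                             sort_keys=True)
    digest = hashlib.sha256(fingerprint.encode()).hexdigest().upper()
    # Format the digest like the UUID-style "aaid" seen in the response
    return "-".join([digest[:8], digest[8:12], digest[12:16],
                     digest[16:20], digest[20:32]])

# Two different apps on the same device report the same attributes,
# so both receive the same identifier -- cross-app tracking without
# the (zeroed-out) IDFA.
device = {"country": "GB", "hostname": "MyPhone", "hw_model": "N69uAP",
          "model": "iPhone8,4", "os_version": "14.8",
          "timezone": "1", "total_storage": "30745123781"}
print(derive_stable_id(device))
```

A real tracker would additionally fold in the IP address and first-party data, as discussed above, to keep the identifier stable even when individual attributes change.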
Table~\ref{tab:id_sharing} does not include all the different kinds of information that we observed being sent to tracking domains because the kinds of information varied between companies. For example, Google assigned an \texttt{android\_id} to an iOS app upon its first contact with the company; this identifier was then used for all subsequent communication with Google's API endpoints. It differed between apps, and did not seem to be used for cross-app tracking on-device (it might be on Google's servers). When contacting the domain \path{googleads.g.doubleclick.net}, Google collected the current system volume and the status of the silencing button. As already described above, \path{ca.iadsdk.apple.com} collected a \texttt{purchaseTimestamp}, which can be used to identify the user and is not accessible to other app developers. The domain \texttt{gsp64-ssl.ls.apple.com}, belonging to Apple's location services, even collected the IP address and port that we used for proxying the network traffic through \texttt{mitmdump} as part of our analysis. We did not observe any other domains that had access to this information, underlining Apple's privileged data access. Crucially, for many of the observed transmissions between apps and servers, we could not even determine what data was sent, due to the use of encryption~\cite{apple_enforcement2} and closed-source communication protocols. \textbf{System-Level Tracking by Apple.} We found that iPhones exchanged a range of unique user identifiers directly with Apple, see Figure~\ref{fig:apple_id_collection} in the Appendix. We observed that network requests, which included various unique user identifiers and other personal data, were issued following the interaction with apps and connected to Apple's App Store and advertising technologies. While this does not allow user-level apps to gain access to these user identifiers, Apple itself can use these identifiers to enrich its own advertising services.
Indeed, Apple claims in its privacy policy that it may use users' interactions with its advertising platform and with the App Store to group users into segments (of at least 5,000 individuals), and show adverts to these groups~\cite{apple_advertising_tracking}. Specifically, we found that the App Store collected the UDID, the serial number of the device, the DSID (an identifier linked to a user's Apple account), and a \texttt{purchaseTimestamp}. All of these identifiers can be used by Apple to single out individual users. Crucially, the UDID has been inaccessible to app developers other than Apple since 2013~\cite{uuid_deprecation}, but Apple continues to have access to this identifier. Moreover, Apple collects the serial number, which cannot be changed and is linked to a user's iPhone. This might be unexpected for some users. These findings are in line with previous reports that both Google and Apple collect detailed information about their users as part of regular device usage~\cite{leith_mobile_2021}. \subsection{Disclosure of Tracking in Privacy Nutrition Labels} \label{sec:nutrition_labels} We now consider whether and to what extent apps (from after the introduction of iOS 14) disclose their tracking activities in their Privacy Nutrition Labels. \begin{figure} \centering \includegraphics[width=0.95\linewidth]{figures/datanotcollected-libraries.pdf} \caption{Top tracking libraries in apps that claim in their Privacy Nutrition Labels not to collect any data.} \label{fig:datanotcollected} \Description{This figure shows a bar chart of the top tracking libraries in analysed apps that claim in their Privacy Nutrition Labels not to collect any data.} \end{figure} Among the studied apps, 22.2\% claimed that they would not collect any data from the user.
This was often not true: as shown in Figure~\ref{fig:datanotcollected}, 80.2\% of these apps actually contained at least one tracking library (compared to 93.1\% for apps that did disclose some data sharing), and 68.6\% sent data to at least one known tracking domain right at the first app start (compared to 91.4\%). On average, apps that claimed not to collect data contained 1.8 tracking libraries (compared to 4.3), and contacted 2.5 tracking companies (compared to 4.2). Among the 22.2\% of apps claiming not to collect data, only 3 were in the App Store charts. As noted above (see Table~\ref{tab:id_sharing}), tracking libraries usually create a unique user identifier. Among the apps that used the SKAdNetwork, 42.0\% disclosed their access to a \enquote{User ID}, as did 42.2\% of apps using Google Firebase Analytics, 48.2\% of apps using Google Crashlytics, and 53.2\% of apps using the Facebook SDK. Of the apps using Google Firebase Analytics, 63.2\% disclosed that they collected data about \enquote{Product Interaction} or \enquote{Other Usage Data}, as did about 70\% of apps using the Facebook SDK, Google Analytics, or Google Tag Manager. Additionally, apps can disclose their use of \enquote{Advertising Data}: 27.5\% of apps with the SKAdNetwork did so, 66.0\% of apps with Google AdMob, 80.9\% of apps with Unity3d Ads, and 45.4\% of apps with AppsFlyer. All of this points to notable discrepancies between apps' disclosed and actual data practices. App developers might be able to address this, but are often not fully aware of all the data that is collected through third-party tracking software~\cite{mhaidli_we_2019,anirudhchi2021}. Conversely, Apple itself might be able to reduce this discrepancy through increased use of automated code analysis, in particular applied to third-party tracking software. \subsection{Limitations} \label{sec:limitations} A few limitations of our study are worth noting.
First, for practical reasons, we were not able to analyse all the apps in the App Store, only a reasonably large subset of free apps in the App Store's UK region. Furthermore, for the purposes of examining the effect of ATT, we only focused on apps that already existed on the App Store before iOS 14~--~newly released apps may adopt different strategies. Regarding our analysis methods, our instruments are also potentially limited in several ways. The results of our static analysis must be interpreted with care, since not all code shipped in an app will necessarily be invoked in practice. We may have overestimated tracking in certain contexts, e.g., if tracking code was included but not used. We performed our network analysis off-device, meaning that all device traffic was analysed in aggregate. The risk here is that we may wrongly attribute to an app communications that were in fact generated by some other app or subsystem on the device. To minimise this risk, we uninstalled all pre-installed apps, and ensured no apps were running in the background. We also used jailbreaking (i.e. gained full system access by exploiting a vulnerability in the iOS operating system) to circumvent certificate validation, which might make some apps alter their behaviour. In all parts of our analysis, we consider all apps equally, regardless of popularity~\cite{binns_measuring_2018} and usage time~\cite{van_kleek_x-ray_2018}, both of which can impact user privacy. Likewise, we treat all tracking domains, libraries and companies equally, though they might pose different risks to users. \section*{Appendix} \begin{figure}[H] \centering \includegraphics[width=0.7\linewidth]{figures/credit_scoring.png} \caption{Apple's definition of tracking: excerpt from the data practices that Apple exempts from requiring user opt-in under ATT, including credit scoring (emphasis added)~\cite{apple_tracking_definition}.
We discuss the limitations of Apple's definition of tracking in Section~\ref{sec:discussion}.} \Description{This figure shows an excerpt from Apple's ATT rules that foresee an exemption for data collection related to credit scoring.} \label{fig:exemptions} \end{figure} \begin{figure}[H] \begin{subfigure}[t]{\linewidth} \lstset{language=json} \begin{lstlisting} { "sdk_version": "1.2.0", "bundle_id": "[Redacted]", "hw_model": "N69uAP", "kid": "[Redacted]", "total_storage": "30745123781", "country": "GB", "zdata": "[Redacted]", "app_version": "[Redacted]", "app_name": "[Redacted]", "sdk_type": "IOS", "storage": "14078912372", "zdata_ver": "1.1.0", "source_id": "umeng", "idfv": "7EBDAFC8-97BB-4FDB-B4D3-E2F4EA040B8C", "timezone": "1", "os_version": "14.8", "model": "iPhone8,4", "hostname": "MyPhone", "appkey": "[Redacted]", "idfa": "00000000-0000-0000-0000-000000000000" } \end{lstlisting} \caption{Request: Sending a range of device information to Umeng at \url{https://aaid.umeng.com/api/postZdata}.} \label{fig:aaid1} \end{subfigure} \begin{subfigure}[t]{\linewidth} \centering \lstset{language=json} \begin{lstlisting} { "aaid": "BAEC362C-49FC-494B-B0A7-175D990B059D", ... } \end{lstlisting} \caption{Response: Umeng returns an identifier that is shared by multiple apps, and can be used for cross-app tracking.} \label{fig:aaid2} \end{subfigure} \caption{Fingerprinting in apps, even after the ATT. This is likely in violation of Apple's new policies and the expectations of many end-users (personal data redacted). We provide more results on the circumvention of the ATT in Section~\ref{sec:data_sharing_pii}.}\label{fig:fingerprinting} \Description{This figure shows the content of a network request and of the subsequent response of a tracking company helping apps agree on a shared unique user identifier.} \end{figure} \begin{figure}[H] \begin{subfigure}[t]{\linewidth} \centering \lstset{language=XML} \begin{lstlisting} <plist version="1.0"> <dict> ...
<key>dsid</key> <string>[Apple ID]</string> <key>guid</key> <string>[UDID]</string> <key>serialNumber</key> <string>[serial number]</string> ... </dict> </plist> \end{lstlisting} \caption{Request of the Apple App Store to \url{https://buy.itunes.apple.com/WebObjects/MZFinance.woa/wa/renewVppReceipt?guid=[UDID]}.} \end{subfigure} \begin{subfigure}[t]{\linewidth} \centering \lstset{language=json} \begin{lstlisting} { "attributionMetadataExistsOnDevice": false, "toroId": "[Redacted]", "purchaseTimestamp": "2021-11-01T15:15:05Z", "adamId": 477718890, "attributionDownloadType": 0, "developmentApp": false, "anonymousDemandId": "[Redacted]", "bundleId": "ru.kinopoisk", "attributionKey": "[Redacted]" } \end{lstlisting} \caption{Request (shortened) of Apple's advertising framework to \url{https://ca.iadsdk.apple.com/adserver/attribution/v2}.} \end{subfigure} \caption{Sharing of unique user identifiers with Apple (personal data redacted). We explain more about the tracking of users by Apple in Section~\ref{sec:data_sharing_pii}.} \label{fig:apple_id_collection} \Description{This figure shows two examples of network traffic in which Apple collects personal data that can be used for user tracking.} \end{figure}
\section{Introduction} Usually, the wavefunction, employed in the non-relativistic quantum mechanics, and the Hamilton principal function, appearing in the Hamilton--Jacobi equation of classical mechanics, behave as scalar fields. For instance, if one starts with the Schr\"odinger equation written in Cartesian coordinates, its expression in any other coordinate system is obtained by replacing the partial derivatives of the wavefunction with respect to the Cartesian coordinates by its derivatives with respect to the new coordinates. However, in the case of certain transformations, such as the Galilean transformations, the wavefunction acquires an extra phase factor and, similarly, the Hamilton principal function requires an additional term. One way of finding the transformation law for a wavefunction under a change of reference frame, applicable to the cases where the transformations of interest form a continuous group, consists in finding first the infinitesimal generators of the action of the group on the wavefunctions; then, with the aid of the exponential map, the elements of the group can be constructed and, making use of the Baker--Campbell--Hausdorff (BCH) formula, the desired transformations can be expressed in a convenient manner (see, e.g., Refs.\ \cite{GY,Pa,JE,Do}). Another approach consists in assuming that the Hamiltonian transforms into some specific operator under the change of frame being considered, and then looking for a transformation of the wavefunction such that a solution of the Schr\"odinger equation in the initial frame is mapped into a solution of the Schr\"odinger equation in the second frame (see, e.g., Refs.\ \cite{EM,vG}). In this approach, it is not necessary to consider a continuous group of transformations, but one has to postulate the form of the new Hamiltonian.
In this paper we apply a simple method to find the operator that represents the effect of a change of frame on the state vectors (or on the wavefunctions), without having to impose from the start some specific form for the transformed Hamiltonian. Furthermore, this method is applicable to transformations that do not belong to a continuous group, and we do not have to deal with ``infinitesimal'' transformations. In Section 2 we show how one can readily obtain the operator that represents a change of frame on the state vectors, presenting several examples. In Section 3 we show that the phase factors appearing in the transformations of the wavefunctions, obtained in Section 2, are given by ${\rm e}^{- {\rm i} F_{1}/\hbar}$, where $- F_{1}$ is the term that has to be added to the Hamilton principal function in the change of frame under consideration. \section{Transformation of the wavefunctions} In the context of the non-relativistic quantum mechanics we consider a transformation given by a {\em unitary}\/ operator, $U$, defined by the conditions \begin{equation} U x_{i} U^{-1} = X_{i}(x_{j}, t), \qquad U p_{i} U^{-1} = P_{i}(p_{j}, t), \label{3.1} \end{equation} where the $x_{i}$ and $p_{i}$ are Hermitian operators representing the Cartesian coordinates and momenta, the $X_{i}$ are given functions of $x_{j}$ and $t$, and the $P_{i}$ are given functions of $p_{j}$ and $t$. For example, for a Galilean transformation \begin{equation} {\bf X} = {\bf x} - {\bf V} t, \qquad {\bf P} = {\bf p} - m {\bf V}, \label{gal} \end{equation} where $m$ is the mass of the particle being considered, and ${\bf V}$ is a constant vector, corresponding to the velocity of the boost. (In order to facilitate the comparison with the results of previous works, we consider {\em active}\/ transformations.) It should be noticed that Eqs.\ (\ref{3.1}) define $U$ up to a phase factor that depends on $t$ only (see the examples below). 
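As a quick consistency check that a unitary $U$ implementing Eqs.\ (\ref{gal}) can exist, note that the transformed operators satisfy the same canonical commutation relations as the original ones,
\[
[X_{i}, P_{j}] = [x_{i} - V_{i} t, \, p_{j} - m V_{j}] = [x_{i}, p_{j}] = {\rm i} \hbar \, \delta_{ij},
\]
because the shifts appearing in Eqs.\ (\ref{gal}) are multiples of the identity.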
The state of the system is transformed according to \begin{equation} | \psi' \rangle = U | \psi \rangle \label{3.2} \end{equation} and a straightforward computation shows that $U$ maps any solution of the Schr\"odinger equation \[ {\rm i} \hbar \frac{{\rm d} | \psi \rangle}{{\rm d} t} = H | \psi \rangle \] into a solution of \[ {\rm i} \hbar \frac{{\rm d} | \psi' \rangle}{{\rm d} t} = K | \psi' \rangle \] if \begin{equation} K = U H U^{-1} + {\rm i} \hbar \frac{{\rm d} U}{{\rm d} t} U^{-1}. \label{3.3} \end{equation} This last equation shows that if $U$ depends explicitly on the time, then the Hamiltonian does not transform following the simple rule $H \mapsto U H U^{-1}$ (cf.\ Ref.\ \cite{GYb}). (Note that we are working in the Schr\"odinger picture.) If the arbitrary phase factor contained in $U$ can be chosen in such a way that $K = H$, then we say that $H$ is invariant under $U$ \cite{JE}. Let $| {\bf x}_{0} \rangle$ and $| {\bf p}_{0} \rangle$ be eigenstates of the position and momentum operators ${\bf x}$ and ${\bf p}$, respectively (with ${\bf x} | {\bf x}_{0} \rangle = {\bf x}_{0} | {\bf x}_{0} \rangle$ and ${\bf p} | {\bf p}_{0} \rangle = {\bf p}_{0} | {\bf p}_{0} \rangle$) then, making use of Eqs.\ (\ref{3.1}) we have \[ {\bf x} U^{-1} | {\bf x}_{0} \rangle = U^{-1} {\bf X}({\bf x}, t) | {\bf x}_{0} \rangle = {\bf X}({\bf x}_{0}, t) \, U^{-1} | {\bf x}_{0} \rangle, \] which means that $U^{-1} | {\bf x}_{0} \rangle$ is an eigenstate of ${\bf x}$ with eigenvalue ${\bf X}({\bf x}_{0}, t)$, thus \begin{equation} U^{-1} | {\bf x}_{0} \rangle = {\rm e}^{{\rm i} \alpha/\hbar} | {\bf X}({\bf x}_{0}, t) \rangle, \label{3.1.2} \end{equation} where $\alpha$ is some real number, which may depend on ${\bf x}_{0}$, $t$, and the parameters contained in $U$. 
This last equation, together with (\ref{3.2}), imply that a wavefunction transforms according to \begin{equation} \psi'({\bf x}_{0}) = \langle {\bf x}_{0} | U | \psi \rangle = {\rm e}^{- {\rm i} \alpha/\hbar} \langle {\bf X}({\bf x}_{0}, t) | \psi \rangle = {\rm e}^{- {\rm i} \alpha/\hbar} \psi\big( {\bf X}({\bf x}_{0}, t) \big). \label{wftr} \end{equation} As we shall see in the examples below, in some cases $\alpha$ is different from zero. In a similar manner, from Eqs.\ (\ref{3.1}) it follows that \begin{equation} U^{-1} | {\bf p}_{0} \rangle = {\rm e}^{{\rm i} \beta/\hbar} | {\bf P}({\bf p}_{0}, t) \rangle, \label{3.1.3} \end{equation} where $\beta$ is some real number, which may depend on ${\bf p}_{0}$, $t$, and the parameters contained in $U$. In order to determine the values of $\alpha$ and $\beta$ we form the scalar product $\langle {\bf x}_{0} | U U^{-1} | {\bf p}_{0} \rangle = \langle {\bf x}_{0} | {\bf p}_{0} \rangle = (2\pi \hbar)^{-3/2} \exp ({\rm i} {\bf p}_{0} \cdot {\bf x}_{0}/\hbar)$, which, by virtue of (\ref{3.1.2}), (\ref{3.1.3}) and the unitarity of $U$, must coincide with \[ (2\pi \hbar)^{-3/2} {\rm e}^{{\rm i} (\beta - \alpha)/\hbar} \langle {\bf X}({\bf x}_{0}, t) | {\bf P}({\bf p}_{0}, t) \rangle = (2\pi \hbar)^{-3/2} \exp \frac{{\rm i}}{\hbar} \big[ \beta - \alpha + {\bf P}({\bf p}_{0}, t) \cdot {\bf X}({\bf x}_{0}, t) \big]. \] Hence, \begin{equation} {\bf p}_{0} \cdot {\bf x}_{0} = \beta - \alpha + {\bf P}({\bf p}_{0}, t) \cdot {\bf X}({\bf x}_{0}, t). \label{bas} \end{equation} In the following subsections we consider several applications of the basic formula (\ref{bas}). \subsection{Spatial translations} A relatively simple and common example of a change of frame corresponds to translations. We include it because it serves to illustrate the method and because some results will be employed below. 
A spatial translation by a constant vector ${\bf a}$ can be defined by Eqs.\ (\ref{3.1}) with \begin{equation} {\bf X} = {\bf x} - {\bf a}, \qquad {\bf P} = {\bf p}. \label{tra} \end{equation} Then, from Eq.\ (\ref{bas}) we obtain ${\bf p}_{0} \cdot {\bf x}_{0} = \beta - \alpha + {\bf p}_{0} \cdot ({\bf x}_{0} - {\bf a})$, i.e., \[ \alpha = \beta - {\bf p}_{0} \cdot {\bf a}. \] Hence, taking into account that $\alpha$ may depend on ${\bf x}_{0}$, and $\beta$ may depend on ${\bf p}_{0}$, we conclude that \begin{equation} \alpha = \chi(t), \qquad \beta = {\bf p}_{0} \cdot {\bf a} + \chi(t), \label{ab2.1} \end{equation} where $\chi(t)$ is a real-valued function of $t$ only. Substituting the expression for $\beta$ into Eq.\ (\ref{3.1.3}), making use of (\ref{tra}), we obtain \[ U^{-1} | {\bf p}_{0} \rangle = {\rm e}^{{\rm i} \chi/\hbar} \, {\rm e}^{{\rm i} {\bf p}_{0} \cdot {\bf a}/\hbar} | {\bf p}_{0} \rangle = {\rm e}^{{\rm i} \chi/\hbar} \, {\rm e}^{{\rm i} {\bf p} \cdot {\bf a}/\hbar} | {\bf p}_{0} \rangle, \] which amounts to \begin{equation} U^{-1} = {\rm e}^{{\rm i} \chi/\hbar} \, {\rm e}^{{\rm i} {\bf p} \cdot {\bf a}/\hbar}. \label{traop} \end{equation} Substituting (\ref{traop}) and the first equation (\ref{tra}) into (\ref{3.1.2}) we obtain the well-known relation \begin{equation} {\rm e}^{{\rm i} {\bf p} \cdot {\bf a}/\hbar} | {\bf x}_{0} \rangle = | {\bf x}_{0} - {\bf a} \rangle \label{trax} \end{equation} (note that the phase factor ${\rm e}^{{\rm i} \chi/\hbar}$ cancels out). 
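As a consistency check, the operator (\ref{traop}) does reproduce the first of Eqs.\ (\ref{tra}): since $[{\bf p} \cdot {\bf a}, x_{i}] = - {\rm i} \hbar a_{i}$ is a multiple of the identity, all higher terms of the BCH expansion vanish and
\[
U x_{i} U^{-1} = {\rm e}^{- {\rm i} {\bf p} \cdot {\bf a}/\hbar} \, x_{i} \, {\rm e}^{{\rm i} {\bf p} \cdot {\bf a}/\hbar} = x_{i} - \frac{{\rm i}}{\hbar} \, [{\bf p} \cdot {\bf a}, x_{i}] = x_{i} - a_{i},
\]
with the phase factor ${\rm e}^{{\rm i} \chi/\hbar}$ again dropping out.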
On the other hand, from Eqs.\ (\ref{3.3}) and (\ref{traop}) we find that, in the present case, \begin{equation} K = U H U^{-1} + \frac{{\rm d} \chi}{{\rm d} t}, \label{h2.1} \end{equation} so that, if, for example, \begin{equation} H = \frac{{\bf p}^{2}}{2m} - {\bf F} \cdot {\bf x}, \label{unif} \end{equation} where ${\bf F}$ is a constant vector, corresponding to a particle subject to a constant force ${\bf F}$, then [see (\ref{3.1}) and (\ref{tra})] \[ K = U \left( \frac{{\bf p}^{2}}{2m} - {\bf F} \cdot {\bf x} \right) U^{-1} + \frac{{\rm d} \chi}{{\rm d} t} = \frac{{\bf p}^{2}}{2m} - {\bf F} \cdot ({\bf x} - {\bf a}) + \frac{{\rm d} \chi}{{\rm d} t} = H + {\bf F} \cdot {\bf a} + \frac{{\rm d} \chi}{{\rm d} t}. \] {\em If}\/ we demand that $K = H$ (which is reasonable, since the particle is in a uniform field of force, and $U$ represents a translation), we have to choose $\chi = - {\bf F} \cdot {\bf a} t$ and, according to (\ref{wftr}), the wavefunctions must transform as \begin{equation} \psi'({\bf x}_{0}) = {\rm e}^{{\rm i} {\bf F} \cdot {\bf a} t/\hbar} \psi({\bf x}_{0} - {\bf a}). \label{2.1.2} \end{equation} (Note that with this choice for $\chi$, according to Eq.\ (\ref{traop}), the operator corresponding to translations is $U = {\rm e}^{- {\rm i} ({\bf p} - {\bf F} t) \cdot {\bf a}/\hbar}$, which involves the conserved operator ${\bf p} - {\bf F} t$ \cite{JE}.) \subsection{Translations in the momentum} Even though it is not a change of frame, we shall consider a ``translation'' in the momentum, defined by ${\bf X} = {\bf x}$, ${\bf P} = {\bf p} - {\bf b}$, where ${\bf b}$ is a constant vector. In this case Eq.\ (\ref{bas}) gives ${\bf p}_{0} \cdot {\bf x}_{0} = \beta - \alpha + ({\bf p}_{0} - {\bf b}) \cdot {\bf x}_{0}$, which leads to \begin{equation} \alpha = - {\bf b} \cdot {\bf x}_{0} + \chi(t), \qquad \beta = \chi(t), \label{ab2.2} \end{equation} where $\chi(t)$ is a real-valued function of $t$ only. 
Then, from Eq.\ (\ref{3.1.2}) we obtain \[ U^{-1} | {\bf x}_{0} \rangle = {\rm e}^{{\rm i} \chi/\hbar} {\rm e}^{- {\rm i} {\bf b} \cdot {\bf x}_{0}/\hbar} | {\bf x}_{0} \rangle = {\rm e}^{{\rm i} \chi/\hbar} {\rm e}^{- {\rm i} {\bf b} \cdot {\bf x}/\hbar} | {\bf x}_{0} \rangle \] which means that \begin{equation} U^{-1} = {\rm e}^{{\rm i} \chi/\hbar} {\rm e}^{- {\rm i} {\bf b} \cdot {\bf x}/\hbar} \label{trm} \end{equation} and from Eq.\ (\ref{3.1.3}) we have $U^{-1} | {\bf p}_{0} \rangle = {\rm e}^{{\rm i} \chi/\hbar} | {\bf p}_{0} - {\bf b} \rangle$, i.e., \begin{equation} {\rm e}^{- {\rm i} {\bf b} \cdot {\bf x}/\hbar} | {\bf p}_{0} \rangle = | {\bf p}_{0} - {\bf b} \rangle. \label{trap} \end{equation} Another useful formula follows from the second equation in (\ref{3.1}): $U {\bf p} U^{-1} = {\bf p} - {\bf b}$ or, equivalently, \begin{equation} {\rm e}^{{\rm i} {\bf b} \cdot {\bf x}/\hbar} {\bf p} {\rm e}^{- {\rm i} {\bf b} \cdot {\bf x}/\hbar} = {\bf p} - {\bf b}. \label{2.2.2} \end{equation} It may be noticed that Eqs.\ (\ref{trap}) and (\ref{2.2.2}) do not contain the function $\chi$. \subsection{Galilean transformations} In the case of the Galilean transformations the functions ${\bf X}({\bf x}, t)$ and ${\bf P}({\bf p}, t)$ are given by ${\bf X}({\bf x}, t) = {\bf x} - {\bf V} t$, ${\bf P}({\bf p}, t) = {\bf p} - m {\bf V}$ [see Eqs.\ (\ref{gal})]. Then, Eq.\ (\ref{bas}) becomes \[ {\bf p}_{0} \cdot {\bf x}_{0} = \beta - \alpha + ({\bf p}_{0} - m {\bf V}) \cdot ({\bf x}_{0} - {\bf V} t), \] that is \[ \alpha + m {\bf V} \cdot {\bf x}_{0} - {\textstyle \frac{1}{2}} m V^{2} t = \beta - {\bf p}_{0} \cdot {\bf V} t + {\textstyle \frac{1}{2}} m V^{2} t, \] which implies that \begin{equation} \alpha = - m {\bf V} \cdot {\bf x}_{0} + {\textstyle \frac{1}{2}} m V^{2} t + \chi(t), \qquad \beta = {\bf p}_{0} \cdot {\bf V} t - {\textstyle \frac{1}{2}} m V^{2} t + \chi(t), \label{ab2.3} \end{equation} where $\chi(t)$ is some real-valued function of $t$ only. 
Hence, according to (\ref{3.1.2}), we have \begin{equation} U^{-1} | {\bf x}_{0} \rangle = \exp ({\rm i}/\hbar) \big[ - m {\bf V} \cdot {\bf x}_{0} + {\textstyle \frac{1}{2}} m V^{2} t + \chi(t) \big] \, | {\bf x}_{0} - {\bf V} t \rangle, \label{2.2.1} \end{equation} which can also be expressed as [see Eq.\ (\ref{trax})] \[ \exp ({\rm i}/\hbar) \big[ - m {\bf V} \cdot {\bf x}_{0} + {\textstyle \frac{1}{2}} m V^{2} t + \chi(t) \big] {\rm e}^{{\rm i} {\bf p} \cdot {\bf V} t/\hbar} \, | {\bf x}_{0} \rangle \] or, equivalently, \[ \exp ({\rm i}/\hbar) \big[ {\textstyle \frac{1}{2}} m V^{2} t + \chi(t) \big] \, {\rm e}^{{\rm i} {\bf p} \cdot {\bf V} t/\hbar} \, {\rm e}^{- {\rm i} m {\bf V} \cdot {\bf x}/\hbar} \, | {\bf x}_{0} \rangle \] and, therefore, \begin{equation} U^{-1} = \exp ({\rm i}/\hbar) \big[ {\textstyle \frac{1}{2}} m V^{2} t + \chi(t) \big] \, {\rm e}^{{\rm i} {\bf p} \cdot {\bf V} t/\hbar} \, {\rm e}^{- {\rm i} m {\bf V} \cdot {\bf x}/\hbar}. \label{2.2.5} \end{equation} As in Section 2.1, we can determine the function $\chi$ if we impose some specific relation between the Hamiltonians $H$ and $K$ [see Eq.\ (\ref{3.3})]. Substituting (\ref{2.2.5}) into (\ref{3.3}), with the aid of (\ref{2.2.2}), we find \begin{equation} K = U H U^{-1} - \frac{1}{2} m V^{2} + {\bf p} \cdot {\bf V} + \frac{{\rm d} \chi}{{\rm d} t} \label{h2.3} \end{equation} (cf.\ Ref.\ \cite{SW}). Thus, if we take $H = {\bf p}^{2}/2m$, then \[ K = \frac{1}{2m} ({\bf p} - m {\bf V})^{2} - \frac{1}{2} m V^{2} + {\bf p} \cdot {\bf V} + \frac{{\rm d} \chi}{{\rm d} t}, \] which coincides with $H$ if $\chi = 0$. Other Hamiltonians are also invariant under the Galilean transformations, with the appropriate choice of $\chi$ \cite{JE}. If one does not allow for the presence of a phase factor ${\rm e}^{{\rm i} \chi/\hbar}$ in $U^{-1}$ one arrives at the wrong conclusion that only the Hamiltonian of a free particle is invariant under the Galilean transformations \cite{JP}. 
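With $\chi = 0$, Eqs.\ (\ref{wftr}) and (\ref{ab2.3}) give the explicit transformation law for a free-particle wavefunction,
\[
\psi'({\bf x}_{0}) = \exp \frac{{\rm i}}{\hbar} \big( m {\bf V} \cdot {\bf x}_{0} - {\textstyle \frac{1}{2}} m V^{2} t \big) \, \psi({\bf x}_{0} - {\bf V} t),
\]
which exhibits the well-known phase factor accompanying a Galilean boost.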
\subsection{Constant acceleration} Now we consider the effect of a constant acceleration, ${\bf a}$, which corresponds to \[ {\bf X} = {\bf x} - {\textstyle \frac{1}{2}} {\bf a} t^{2}, \qquad {\bf P} = {\bf p} - m {\bf a} t. \] Substituting these expressions into Eq.\ (\ref{bas}) we have ${\bf p}_{0} \cdot {\bf x}_{0} = \beta - \alpha + ({\bf p}_{0} - m {\bf a} t) \cdot ({\bf x}_{0} - {\textstyle \frac{1}{2}} {\bf a} t^{2})$, which implies that \begin{equation} \alpha = - m {\bf a} t \cdot {\bf x}_{0} + {\textstyle \frac{1}{6}} m a^{2} t^{3} + \chi(t), \qquad \beta = {\textstyle \frac{1}{2}} {\bf p}_{0} \cdot {\bf a} t^{2} - {\textstyle \frac{1}{3}} m a^{2} t^{3} + \chi(t), \label{ab2.4} \end{equation} where $\chi(t)$ is a function of $t$ only, and we have included the term ${\textstyle \frac{1}{6}} m a^{2} t^{3}$ in $\alpha$ for later convenience. The expression of the operator $U^{-1}$ can be obtained by calculating $U^{-1} | {\bf x}_{0} \rangle$, following the same steps as in Section 2.3. Alternatively, we can start by considering the action of $U^{-1}$ on $| {\bf p}_{0} \rangle$.
From Eqs.\ (\ref{3.1.3}), (\ref{ab2.4}), and (\ref{trap}) we find that \begin{eqnarray*} U^{-1} | {\bf p}_{0} \rangle & = & \exp ({\rm i}/\hbar) \left[ {\textstyle \frac{1}{2}} {\bf p}_{0} \cdot {\bf a} t^{2} - {\textstyle \frac{1}{3}} m a^{2} t^{3} + \chi(t) \right] |{\bf p}_{0} - m {\bf a} t \rangle \\ & = & \exp ({\rm i}/\hbar) \left[ {\textstyle \frac{1}{2}} ({\bf p} + m {\bf a} t) \cdot {\bf a} t^{2} - {\textstyle \frac{1}{3}} m a^{2} t^{3} + \chi(t) \right] |{\bf p}_{0} - m {\bf a} t \rangle \\ & = & \exp ({\rm i}/\hbar) \left[ {\textstyle \frac{1}{6}} m a^{2} t^{3} + \chi(t) \right] {\rm e}^{{\rm i} {\bf p} \cdot {\bf a} t^{2}/2 \hbar} |{\bf p}_{0} - m {\bf a} t \rangle \\ & = & \exp ({\rm i}/\hbar) \left[ {\textstyle \frac{1}{6}} m a^{2} t^{3} + \chi(t) \right] {\rm e}^{{\rm i} {\bf p} \cdot {\bf a} t^{2}/2 \hbar} \, {\rm e}^{- {\rm i} m {\bf a} t \cdot {\bf x}/\hbar} |{\bf p}_{0} \rangle, \end{eqnarray*} hence \[ U^{-1} = \exp ({\rm i}/\hbar) \left[ {\textstyle \frac{1}{6}} m a^{2} t^{3} + \chi(t) \right] {\rm e}^{{\rm i} {\bf p} \cdot {\bf a} t^{2}/2 \hbar} \, {\rm e}^{- {\rm i} m {\bf a} t \cdot {\bf x}/\hbar}. \] Thus, from Eq.\ (\ref{3.3}), making use of (\ref{2.2.2}), we obtain \begin{equation} K = U H U^{-1} + {\bf a} t \cdot {\bf p} - m {\bf a} \cdot {\bf x} - \frac{1}{2} m a^{2} t^{2} + \frac{{\rm d} \chi}{{\rm d} t}. \label{h2.4} \end{equation} If we take $H = {\bf p}^{2}/2m$, corresponding to a free particle, we have \begin{eqnarray*} K & = & \frac{({\bf p} - m {\bf a} t)^{2}}{2m} + {\bf a} t \cdot {\bf p} - m {\bf a} \cdot {\bf x} - \frac{1}{2} m a^{2} t^{2} + \frac{{\rm d} \chi}{{\rm d} t} \\ & = & \frac{{\bf p}^{2}}{2m} - m {\bf a} \cdot {\bf x} + \frac{{\rm d} \chi}{{\rm d} t}. \end{eqnarray*} Choosing $\chi = 0$, the Hamiltonian $K$ corresponds to a particle in a uniform force field of intensity $m {\bf a}$ [cf.\ Eq.\ (\ref{unif})], and Eqs.\ (\ref{wftr}) and (\ref{ab2.4}) reproduce the result of Ref.\ \cite{vG}. 
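Explicitly, with $\chi = 0$, Eqs.\ (\ref{wftr}) and (\ref{ab2.4}) give
\[
\psi'({\bf x}_{0}) = \exp \frac{{\rm i}}{\hbar} \big( m {\bf a} t \cdot {\bf x}_{0} - {\textstyle \frac{1}{6}} m a^{2} t^{3} \big) \, \psi \big( {\bf x}_{0} - {\textstyle \frac{1}{2}} {\bf a} t^{2} \big),
\]
so that a free-particle solution, viewed from the uniformly accelerated frame, acquires the position- and time-dependent phase characteristic of a uniform force field.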
\section{Connection with classical mechanics} In this section we shall show that the function $\alpha$ obtained in the examples of Section 2 coincides with the function $F_{1}$ defined by \begin{equation} P_{i} {\rm d} X_{i} - H {\rm d} t - (p_{i} {\rm d} x_{i} - K {\rm d} t) = {\rm d} F_{1}, \label{ct} \end{equation} where $H = H(X_{i}, P_{i}, t)$ and $K = K(x_{i}, p_{i}, t)$ are the Hamiltonian {\em functions}\/ for the canonical coordinates $(X_{i}, P_{i})$ and $(x_{i}, p_{i})$, respectively. As is well known, the transformation that relates the coordinates $(X_{i}, P_{i}, t)$ and $(x_{i}, p_{i}, t)$ of the extended phase space is canonical if and only if there exists a function $F_{1}$ such that Eq.\ (\ref{ct}) holds. (Very often, the function $F_{1}$ is called a generating function of the transformation, but, as in all the cases considered here, that name is not always adequate; see, e.g., Refs.\ \cite{CT,HM}.) In the case considered in Section 2.1, Eq.\ (\ref{ct}) takes the form \[ {\bf p} \cdot {\rm d} {\bf x} - H {\rm d} t - {\bf p} \cdot {\rm d} {\bf x} + K {\rm d} t = {\rm d} F_{1}, \] i.e., $(K - H) {\rm d} t = {\rm d} F_{1}$, which is equivalent to saying that $K - H$ is some function of $t$ only; hence, $F_{1}$ is some function, $\chi(t)$ [cf.\ Eq.\ (\ref{ab2.1})], and \[ K({\bf x}, {\bf p}, t) - H({\bf X}, {\bf P}, t) = \frac{{\rm d} \chi}{{\rm d} t} \] [cf.\ Eq.\ (\ref{h2.1})]. If $H$ is given by Eq.\ (\ref{unif}), then \begin{eqnarray*} K({\bf x}, {\bf p}, t) & = & \frac{{\bf P}^{2}}{2m} - {\bf F} \cdot {\bf X} + \frac{{\rm d} \chi}{{\rm d} t} \\ & = & \frac{{\bf p}^{2}}{2m} - {\bf F} \cdot ({\bf x} - {\bf a}) + \frac{{\rm d} \chi}{{\rm d} t}, \end{eqnarray*} which reduces to ${\bf p}^{2}/2m - {\bf F} \cdot {\bf x}$ if $\chi = - {\bf F} \cdot {\bf a} t$ (cf.\ Sect.\ 2.1).
In the case of the translations in the momentum (Sect.\ 2.2), Eq.\ (\ref{ct}) yields \[ ({\bf p} - {\bf b}) \cdot {\rm d} {\bf x} - H {\rm d} t - ({\bf p} \cdot {\rm d} {\bf x} - K {\rm d} t) = {\rm d} F_{1} \] or \[ (K - H) {\rm d} t = {\rm d} (F_{1} + {\bf b} \cdot {\bf x}), \] which is equivalent to the existence of a function $\chi(t)$ such that $F_{1} + {\bf b} \cdot {\bf x} = \chi(t)$ and $K - H = {\rm d} \chi/{\rm d} t$. Thus, $F_{1} = - {\bf b} \cdot {\bf x} + \chi(t)$, which coincides with the expression for $\alpha$ given in (\ref{ab2.2}). For the Galilean transformations, considered in Section 2.3, from Eq.\ (\ref{ct}) we have \[ ({\bf p} - m {\bf V}) \cdot {\rm d} ({\bf x} - {\bf V} t) - H {\rm d} t - ({\bf p} \cdot {\rm d} {\bf x} - K {\rm d} t) = {\rm d} F_{1}, \] i.e., \[ - {\bf p} \cdot {\bf V} {\rm d} t - m {\bf V} \cdot {\rm d} {\bf x} + m V^{2} {\rm d} t + (K - H) {\rm d} t = {\rm d} F_{1}, \] which can be written in the form \[ (K - H - {\bf p} \cdot {\bf V} + {\textstyle \frac{1}{2}} m V^{2}) {\rm d} t = {\rm d} (F_{1} - {\textstyle \frac{1}{2}} m V^{2} t + m {\bf V} \cdot {\bf x}). \] Thus, there exists a function $\chi(t)$ such that \[ F_{1} = {\textstyle \frac{1}{2}} m V^{2} t - m {\bf V} \cdot {\bf x} + \chi(t) \] [cf.\ Eq.\ (\ref{ab2.3})] and \[ K = H + {\bf p} \cdot {\bf V} - \frac{1}{2} m V^{2} + \frac{{\rm d} \chi}{{\rm d} t} \] [cf.\ Eq.\ (\ref{h2.3})]. If $H({\bf X}, {\bf P}, t) = {\bf P}^{2}/2m$, then \begin{eqnarray*} K({\bf x}, {\bf p}, t) & = & \frac{({\bf p} - m {\bf V})^{2}}{2m} + {\bf p} \cdot {\bf V} - \frac{1}{2} m V^{2} + \frac{{\rm d} \chi}{{\rm d} t} \\ & = & \frac{{\bf p}^{2}}{2m} + \frac{{\rm d} \chi}{{\rm d} t}, \end{eqnarray*} which coincides with $H({\bf x}, {\bf p}, t)$ if $\chi = 0$. 
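The Galilean case can likewise be checked symbolically. The sketch below (illustrative, not from the paper; one spatial dimension and the variable names are our choices) confirms with sympy that the transformed Hamiltonian of Eq.\ (\ref{h2.3}) with $H = {\bf P}^{2}/2m$ and $\chi = 0$ reduces to the free-particle form in the new variables:

```python
import sympy as sp

# 1-D check of the Galilean case: with H(X, P) = P^2/2m and
# P = p - m V, the transformed Hamiltonian
#   K = H + p V - (1/2) m V^2   (chi = 0)
# coincides with the free-particle Hamiltonian p^2/2m.
p, m, V = sp.symbols('p m V')

K = (p - m*V)**2 / (2*m) + p*V - sp.Rational(1, 2)*m*V**2

assert sp.simplify(K - p**2/(2*m)) == 0
```

The cross term $-{\bf p} \cdot {\bf V}$ and the $+{\textstyle \frac{1}{2}} m V^{2}$ from expanding the shifted kinetic energy cancel against the extra terms, as stated in the text.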
In the case of uniform acceleration considered in Section 2.4, from Eq.\ (\ref{ct}) we have \[ ({\bf p} - m {\bf a} t) \cdot {\rm d} ({\bf x} - {\textstyle \frac{1}{2}} {\bf a} t^{2}) - H {\rm d} t - ({\bf p} \cdot {\rm d} {\bf x} - K {\rm d} t) = {\rm d} F_{1}, \] or, equivalently, \[ (K - H - {\bf p} \cdot {\bf a} t + {\textstyle \frac{1}{2}} m a^{2} t^{2} + m {\bf a} \cdot {\bf x}) {\rm d} t = {\rm d} (F_{1} + m {\bf a} \cdot {\bf x} t - {\textstyle \frac{1}{6}} m a^{2} t^{3}). \] Hence, there exists a function $\chi(t)$ such that \[ F_{1} = - m {\bf a} \cdot {\bf x} t + {\textstyle \frac{1}{6}} m a^{2} t^{3} + \chi(t) \] [cf.\ Eq.\ (\ref{ab2.4})] and \[ K({\bf x}, {\bf p}, t) = H({\bf X}, {\bf P}, t) + {\bf p} \cdot {\bf a} t - \frac{1}{2} m a^{2} t^{2} - m {\bf a} \cdot {\bf x} + \frac{{\rm d} \chi}{{\rm d} t} \] [cf.\ Eq.\ (\ref{h2.4})]. Taking $H({\bf X}, {\bf P}, t) = {\bf P}^{2}/2m$, corresponding to a free particle, setting $\chi = 0$ we obtain $K({\bf x}, {\bf p}, t) = {\bf p}^{2}/2m - m {\bf a} \cdot {\bf x}$, corresponding to a particle in a uniform force field. It may be noticed that, in the derivations presented so far in this section, only the function $\alpha$ appears, without reference to $\beta$. However, we can see that Eq.\ (\ref{bas}) amounts to \[ \beta = \alpha + {\bf p} \cdot {\bf x} - {\bf P} \cdot {\bf X}, \] which shows that $\beta$ is a ``type $F_{4}$ generating function'' (though, in the examples considered here, it is not really a generating function owing to the fact that the variables ${\bf p}$ and ${\bf P}$ are not functionally independent). 
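The identification of $F_{1}$ in the uniform-acceleration case can be tested directly from Eq.\ (\ref{ct}). The following sympy sketch (not from the paper; it works in one dimension along an arbitrary path $x(t)$, $p(t)$, which is an illustrative device) verifies that $P\,\dot X - H - (p\,\dot x - K) = {\rm d} F_{1}/{\rm d} t$ with $F_{1} = -m a x t + \frac{1}{6} m a^{2} t^{3}$ and $\chi = 0$:

```python
import sympy as sp

# 1-D check of Eq. (ct) for the uniform-acceleration case: along an
# arbitrary path (x(t), p(t)), the combination
#   P dX/dt - H - (p dx/dt - K)
# should equal dF1/dt, with F1 = -m a x t + (1/6) m a^2 t^3, chi = 0.
t, m, a = sp.symbols('t m a')
x = sp.Function('x')(t)
p = sp.Function('p')(t)

X = x - sp.Rational(1, 2)*a*t**2
P = p - m*a*t
H = P**2 / (2*m)               # free particle in (X, P)
K = p**2 / (2*m) - m*a*x       # uniform force in (x, p)
F1 = -m*a*x*t + sp.Rational(1, 6)*m*a**2*t**3

lhs = P*sp.diff(X, t) - H - (p*sp.diff(x, t) - K)
assert sp.simplify(lhs - sp.diff(F1, t)) == 0
```

Since $x(t)$ and $p(t)$ are left arbitrary, the vanishing of the difference confirms the one-form identity, not merely its restriction to a particular trajectory.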
As shown in Ref.\ \cite{PP} (see also Ref.\ \cite{HM}), under a canonical transformation relating the coordinates $(X_{i}, P_{i}, t)$ and $(x_{i}, p_{i}, t)$, the principal function transforms according to \begin{equation} S' = S - F_{1}, \end{equation} in the sense that if the function $S$ is a solution of the Hamilton--Jacobi (HJ) equation for $H$, then $S' = S - F_{1}$ is a solution of the HJ equation for the Hamiltonian $K$, with $H$ and $K$ related as in (\ref{ct}). Thus, at least in the examples considered here, the transformation law for the wavefunctions is related in a simple manner to the transformation law for the Hamilton principal function. This behavior is not totally surprising if we take into account the relationship between the solutions of the Schr\"odinger equation and $\exp ({\rm i} S/\hbar)$, where $S$ is a solution of the corresponding HJ equation. In contrast with the assertion in Ref.\ \cite{vG}, we see that the function $\alpha$ (denoted as $- \hbar S$ in Ref.\ \cite{vG}) is not a Hamilton principal function, but the difference between two such functions. In fact, the assertion in Ref.\ \cite{vG} (suggested by one of the referees of that paper) simply makes no sense because there are two Hamiltonians (or Lagrangians) involved, one of them corresponding to a free particle and the other to a particle in a uniform force, while a Hamilton principal function is associated with just one Hamiltonian (or Lagrangian). \section{Concluding remark} The examples presented in this paper explicitly show that the representation on the state vectors of a transformation is not completely specified by its action on the coordinates and momenta. The remaining phase factor in the operator $U$ determines (or is determined by) the difference between the Hamiltonians $H$ and $K$.
physics/0512220
\section{Introduction} The existence of the magnetic monopole has long been one of the open questions of modern physics. In 1931, P. Dirac made the first convincing proposal that the existence of even a single magnetic monopole may explain the quantization of electric charge. According to the quantization condition due to Dirac \cite{d1,d2} for a system of an electron and a magnetic monopole with charge $g$, \begin{equation} eg=n\frac{\hbar c}{2} \end{equation} where $\hbar=\frac{h}{2\pi}$, $c$ is the speed of light and $n$ is an integer. We should modify this quantization condition by a factor of 3 if free quarks are found. Moreover, the introduction of the magnetic monopole would bring a symmetry to the Maxwell equations of electromagnetism. This appealing proposal has stimulated a number of experimental investigations since then. Just as J. D. Jackson has described in his famous book \cite{ja} on Classical Electrodynamics: {\it ``chiefly because of an early, brilliant theoretical argument of Dirac, the search for monopoles is renewed whenever a new energy region is opened up in high energy physics or a new source of matter, such as rocks from the moon, becomes available". } New efforts to search for magnetic monopoles have been considerably motivated since Grand Unified Theories (GUT) of the strong and electroweak interactions predicted magnetic monopoles as well. In 1974, 't Hooft and Polyakov \cite{th, po} pointed out that a unified gauge theory in which electromagnetism is embedded in a semisimple gauge group would predict the existence of the magnetic monopole as a soliton with spontaneous symmetry breaking. To be more specific, a semisimple non-abelian gauge group may break into subgroups including U(1), which essentially describes the magnetic monopole in the framework of GUT. Numerous experimental searches for magnetic monopoles in cosmic radiation and for magnetic monopoles trapped in matter, for example at accelerators, have been carried out \cite{al,rr,eb,ki,je,mac,her,cdf}.
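The size of the minimal Dirac charge can be illustrated numerically. Since $\alpha = e^{2}/\hbar c$ is the fine-structure constant, the $n=1$ condition $eg = \hbar c/2$ gives $g = e/(2\alpha)$; the short sketch below (an illustration, not part of the original text) evaluates this ratio:

```python
# Numerical illustration of the Dirac condition e g = n (hbar c)/2.
# For n = 1 the minimal magnetic charge is g = hbar*c/(2e) = e/(2*alpha),
# where alpha = e^2/(hbar c) is the fine-structure constant, so the
# monopole carries roughly 68.5 elementary charges' worth of magnetic
# charge.
alpha = 1 / 137.035999  # fine-structure constant (dimensionless)

g_over_e = 1 / (2 * alpha)
print(f"g / e = {g_over_e:.1f}")   # ~68.5
assert abs(g_over_e - 68.5) < 0.1
```

This large value of $g/e$ is what makes monopoles heavily ionizing and is the basis of many of the direct-detection techniques cited above.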
Thanks to Dirac's proposal, even though magnetic monopoles have not been observed, the elusive monopole has found applications in a number of different research areas in physics, such as particle physics, condensed matter physics, string theory, astrophysics and cosmology. \section{Detections of magnetic monopoles} Various techniques of detection in the experiments to search for the magnetic monopole have been developed since the 1930s. In this paper, we concentrate on the induction technique exploiting the SQUID (superconducting quantum interference device) magnetometer. As Tassie showed \cite{tass}, when moving through a superconducting loop a magnetic monopole would induce a supercurrent in the loop because of the change in magnetic flux through the loop surface. The passage of a magnetic monopole through a superconducting loop would result in a magnetic flux change of 2$\phi _0$, where $\phi _0=\frac{hc}{2e}=2\times 10^{-7}$ $G\cdot cm^2$ is the flux quantum. To be more specific, let us consider a monopole with charge $g$ passing along the axis of a superconducting loop. The Maxwell equation involving the magnetic monopole current $\vec J_m$ reads \begin{equation} \frac{1}{c}\frac{\partial {\vec B}}{\partial t}+\nabla\times \vec{E}=- \frac{4\pi }{c} {\vec {J_m}}. \end{equation} By integration and Stokes' theorem, we obtain \begin{equation} \frac{1}{c}\frac{\,d}{\,dt}{\int \vec{B}\cdot d \vec {S}} =-\oint \vec{E}\cdot d \vec {l}-\frac{4\pi}{c}\int \vec {J_m}\cdot d \vec{S} \end{equation} where path $l$ is the boundary of area $S$. According to the theory of superconductivity \cite{lo}, the fluxoid $\phi_c$ is \begin{equation} \phi_c=\int \vec{B}\cdot d \vec {S}+c\oint \Lambda\vec{j}\cdot d \vec {l} \end{equation} where the constant parameter $\Lambda$ is related to the penetration depth $\lambda_L$, which is the characteristic length of the superconductor.
\begin{eqnarray} \lambda_L &=& \sqrt \frac{mc^2}{4\pi n' e_s^2} \nonumber \\ &=&\sqrt \frac{\Lambda c^2}{4\pi } \end{eqnarray} where $n'$ is the number of the carriers of the supercurrent per unit volume, and $m$ and $e_s$ are the mass and charge of the carriers of the supercurrent, respectively. According to the BCS theory, the carriers of the supercurrent are Cooper pairs, which are paired electrons. So we obtain \begin{eqnarray} m&=&2m_e \nonumber \\ e_s&=&2e \end{eqnarray} where $m_e$ and $e$ are the mass and electric charge of an electron. The value of the penetration depth depends on the temperature. Following Tassie's assumption that the initial and final conditions are stationary, the change in the fluxoid when a monopole passes through the superconducting loop is \begin{equation} \Delta \phi_c =-4\pi g \end{equation} In addition, the change in the magnetic flux through the superconducting loop is approximately the same as the change in the fluxoid, especially when the superconducting loop is sufficiently thick, so we have \begin{eqnarray} \Delta\phi &\approx & \Delta \phi_c \nonumber \\ &=& -4\pi g \end{eqnarray} where $\Delta \phi $ denotes the change in the magnetic flux. In experiments, a SQUID magnetometer is needed to monitor the small induced supercurrent in the superconducting loop due to a monopole. Shielding from magnetic fields by using superconductors is of great importance for the operation of the SQUID, because the motion of the magnetic flux quanta trapped in the SQUID sensor may produce signals mimicking magnetic monopole events. As a matter of fact, in experiments, both the SQUID and the detector loops should be placed in the space bounded by superconducting shields. Searches for the magnetic monopole should be based upon its properties, which we summarize next.
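As a quick numerical consistency check of the flux relation just derived (a sketch in Gaussian units, not part of the original text; the constant values are standard CODATA-style figures), the flux change $|\Delta\phi| = 4\pi g$ produced by a minimal Dirac monopole $g = \hbar c/2e$ equals exactly two flux quanta, $2\phi_0 = hc/e$:

```python
import math

# Gaussian-units check that the flux change |Delta phi| = 4*pi*g for a
# minimal Dirac monopole (g = hbar*c/2e) equals two flux quanta,
# 2*phi_0 = hc/e, consistent with phi_0 = hc/2e ~ 2e-7 G cm^2 above.
h = 6.62607e-27      # Planck constant, erg s
c = 2.99792458e10    # speed of light, cm/s
e = 4.80320e-10      # elementary charge, esu

phi0 = h*c / (2*e)                     # flux quantum, G cm^2
g = (h/(2*math.pi))*c / (2*e)          # minimal Dirac charge
delta_phi = 4*math.pi*g

print(f"phi_0 = {phi0:.3e} G cm^2")    # ~2.07e-7
assert math.isclose(delta_phi, 2*phi0, rel_tol=1e-12)
```

The identity is exact: $4\pi g = 4\pi \,\hbar c/2e = hc/e = 2\phi_0$, since $h = 2\pi\hbar$.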
The magnetic monopole is an electrically neutral particle, unlike the dyon proposed by Schwinger \cite{sc,sh}, which is both electrically and magnetically charged. It differs from light quanta in not travelling with the speed of light. In addition, the magnetic charge density is a pseudoscalar. When looking at a magnetic monopole from both the right-handed coordinate system and the left-handed coordinate system, we find that the signs of the magnetic charge are opposite in the two coordinate systems. Therefore space inversion would be violated in an interaction involving a magnetic monopole. As J. D. Jackson pointed out in his famous book \cite{ja} on Classical Electrodynamics: ``... it is a necessary consequence of the existence of a particle with both electric and magnetic charges that space inversion and time reversal are no longer valid symmetries of the laws of physics. It is a fact, of course, that these symmetry principles are not exactly valid in the realm of elementary particle physics, but present evidence is that their violation is extremely small and associated somehow with the weak interaction." This behavior of the magnetic monopole is quite similar to the behavior of the neutrino\footnote{In this paper, neutrino means electron neutrino only except when specified.}. As a matter of fact, parity violations \cite{ly,wu,ga} always take place in weak interactions whenever there are neutrinos involved. We suggest that the elusive neutrino has magnetic charge. The flavor change of the neutrino is a direct consequence of this proposal \cite{yy}, and we may also explain the long-standing solar neutrino puzzle easily. Before completing this section, we give a possible experimental test based upon the Faraday \cite{fa} induction method. Place a radioactive source at the center of an enclosed superconducting sphere rather than superconducting loops.
Whenever a $\beta$-decay of the radioactive source happens, an anti-neutrino is released, which would induce supercurrents on the superconducting sphere according to our proposal. A SQUID or a scanning system may be exploited to monitor the supercurrents. To eliminate any unwanted influence of electrically charged particles, an absorbent layer would be introduced between the radioactive source and the enclosed superconducting sphere. Even though the sensitive devices in the experiment are vulnerable to spurious signals \cite{kl}, this is still an ideal way to detect the monopole, since the method is independent of the particle's mass and velocity. The detectors should be placed inside a magnetic shield made of lead or mumetal to protect the detectors from external magnetic fields. \section{Summary and conclusion} The search for the magnetic monopole would be of fundamental significance in modern physics. A great deal of effort has been made to detect the magnetic monopole since Dirac predicted its existence. We have presented a short review of the history of magnetic monopoles. The theoretical work and experimental techniques in the search for magnetic monopoles using SQUIDs are investigated in the present paper. The changes in the fluxoid as well as in the magnetic flux when a magnetic monopole moves through a superconducting loop have been discussed. We have also studied the properties of the magnetic monopole and proposed a possible experimental test based upon the Faraday induction method. This year is the unprecedented World Year of Physics, which marks the hundredth anniversary of the pioneering contributions of Albert Einstein. We dedicate this paper to Albert Einstein.
nlin/0512046
\section{Introduction} There has been much recent interest in the close relation between integrable partial dif\/ferential equations and the dif\/ferential geometry of plane and space curves (see \cite{ChouQu1,ChouQu2,ChouQu3,ChouQu4,SandersWang1} for an overview and many results). The present paper studies f\/lows of curves in Riemannian manifolds $G/SO(N)$ for arbitrary $N \geq 2$, where $G=SO(N+1),SU(N)$. Such symmetric spaces \cite{Helgason} are well-known to exhaust all examples of curved $G$-invariant geometries that are a natural generalization of Euclidean spaces $\Rnum{N} \simeq Euc(N)/SO(N)$ modeled by replacing the Euclidean isometry group with a compact semisimp\-le Lie-group $G \supset SO(N)$. It will be shown that if non-stretching curves are described using a moving parallel frame and an associated frame connection $1$-form in $G/SO(N)$ then the frame structure equations for torsion and curvature encode $O(N-1)$-invariant bi-Hamiltonian operators. These operators will be demonstrated to produce a hierarchy of integrable f\/lows of curves in which the frame components of the principal normal along the curve satisfy $O(N-1)$-invariant vector soliton equations. The hierarchies for both $SO(N+1)/SO(N)$, $SU(N)/SO(N)$ will be seen to possess a scaling symmetry and accordingly will be organized by the scaling weight of the f\/lows. The $0$~f\/low just consists of a convective (traveling wave) equation, while the $+1$ f\/low will be shown to give the two vector generalizations of the mKdV equation known from symmetry-integrability classif\/ications of vector evolution equations in \cite{SokolovWolf}. A recent classif\/ication analysis \cite{AncoWolf} found there are vector hyperbolic equations for which the respective vector mKdV equations are higher symmetries. These two vector hyperbolic equations will be shown to describe a $-1$ f\/low in the respective hierarchies for $SO(N+1)/SO(N)$ and $SU(N)/SO(N)$. 
As further results, the Hamiltonian operators will yield explicit $O(N-1)$-invariant recursion operators for higher symmetries and higher conservation laws of the vector mKdV equations and the vector hyperbolic equations. The associated curve f\/lows produced from these equations will describe geometric nonlinear PDEs, in particular given by wave maps and mKdV analogs of Schr\"odinger maps. Previous fundamental work on vector generalizations of KdV and mKdV equations as well as their Hamiltonian structures and geometric origin appeared in \cite{AthorneFordy,Athorne,SandersWang2,SandersWang3}. In addition, the bi-Hamiltonian structure of both vector mKdV equations was f\/irst written down in \cite{Wang} from a~more algebraic point of view, in a multi-component (non-invariant) notation. Special cases of two component KdV--mKdV integrable systems related to vector mKdV equations have been discussed recently in \cite{Foursov,TsuchidaWolf,SergyeyevDemskoi}. \section[Curve flows, parallel frames, and Riemannian symmetric spaces]{Curve f\/lows, parallel frames,\\ and Riemannian symmetric spaces} Let $\gamma(t,x)$ be a f\/low of a non-stretching curve in some $n$-dimensional Riemannian manifold $(M,g)$. Write $Y=\gamma_{t}$ for the evolution vector of the curve and write $X=\gamma_{x}$ for the tangent vector along the curve normalized by $g(X,X)=1$, which is the condition that $\gamma$ is non-stretching, so thus $x$ represents arclength. In the tangent space $T_\gamma M$ of the two-dimensional surface swept out by $\gamma(t,x)$ we introduce orthonormal frame vectors $\frame{a}{}$ and connection $1$-forms $\conx{}{ab}=\conx{}{[ab]}$ related through the Riemannian covariant derivative operator $\gcovder{}$ in the standard way \cite{KobayashiNomizu}: \begin{gather*} \gcovder{x} \frame{a}{} = (X \lrcorner \conx{a}{b}) \frame{b}{} ,\qquad \gcovder{t} \frame{a}{} = (Y \lrcorner \conx{a}{b}) \frame{b}{} . 
\end{gather*} (Throughout, $a,b=1,\ldots,n$ denote frame indices which get raised and lowered by the Euclidean metric $\delta\downindex{ab}={\rm diag}(+1,\ldots,+1)$). Now choose the frame along the curve to be parallel \cite{Bishop}, so it is adapted to $\gamma$ via \begin{gather*} \frame{a}{} := X\ \ (a=1),\qquad (\frame{a}{})_{\perp}\ \ (a=2,\ldots,n) \end{gather*} where $g(X,(\frame{a}{})_{\perp})=0$, such that the covariant derivative of each of the $n-1$ normal vectors~$(\frame{a}{})_\perp$ in the frame is tangent to $\gamma$, \begin{gather}\label{parallelconxtang} \gcovder{x} (\frame{a}{})_{\perp} =-\v{}{a} X \end{gather} holding for some functions $\v{}{a}$, while the covariant derivative of the tangent vector $X$ in the frame is normal to $\gamma$, \begin{gather}\label{parallelconxperp} \gcovder{x} X = \v{a}{}(\frame{a}{})_{\perp} . \end{gather} Equivalently, along $\gamma$ the connection $1$-forms of the parallel frame are given by the skew matrix $\xconx{}{ab}:=X \lrcorner \conx{}{ab} = 2 \xframe{[a} \v{b]}{}$ where $\xframe{a}:=g(X,\frame{}{a})$ is the row matrix of the frame in the tangent direction. In matrix notation we have \begin{gather}\label{riemannframe} \xframe{a}=(1, \vec{0}) ,\qquad \xconx{a}{b}= \begin{pmatrix} 0 & \v{b}{}\\ -\v{}{a}&\bdsymb{0} \end{pmatrix} , \end{gather} with $\vec{0}$, $\bdsymb{0}$ respectively denoting the $1 \times (n-1)$ zero row-matrix and $(n-1) \times (n-1)$ zero skew-matrix. (Hereafter, upper/lower frame indices will represent row/column matrices.) This matrix description \eqref{riemannframe} of a parallel frame has a purely algebraic characterization: $\xframe{a}$ is a f\/ixed unit vector in $\Rnum{n}$ preserved by a $SO(n-1)$ rotation subgroup of the local frame structure group $SO(n)$, while $\xconx{a}{b}$ belongs to the orthogonal complement of the corresponding rotation subalgebra $\vs{so}(n-1)$ in the Lie algebra $\vs{so}(n)$ of $SO(n)$. 
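The block form of equation \eqref{riemannframe} can be checked mechanically from the antisymmetrized product $\xconx{}{ab} = 2\xframe{[a}\v{b]}{}$. The sketch below (illustrative, not from the paper; the dimension $n=4$ and component names are our choices) builds this matrix with sympy and confirms the stated structure:

```python
import sympy as sp

# Check (for n = 4) that the skew matrix omega^{ab} = 2 e^{[a} v^{b]}
# = e^a v^b - e^b v^a, with e = (1, 0, 0, 0) and v = (0, v2, v3, v4),
# reproduces the block form of Eq. (riemannframe): first row (0, v_b),
# first column (0, -v_a)^T, and a zero (n-1)x(n-1) block elsewhere.
n = 4
v2, v3, v4 = sp.symbols('v2 v3 v4')
e = sp.Matrix([1, 0, 0, 0])
v = sp.Matrix([0, v2, v3, v4])

omega = e*v.T - v*e.T          # 2 e^{[a} v^{b]}

assert omega + omega.T == sp.zeros(n, n)      # skew-symmetric
assert omega[0, 1:] == v[1:, :].T             # first row is v
assert omega[1:, 1:] == sp.zeros(n-1, n-1)    # vanishing normal block
```

The vanishing $(n-1)\times(n-1)$ block is exactly the algebraic statement that $\xconx{a}{b}$ lies in the orthogonal complement of $\vs{so}(n-1)$ in $\vs{so}(n)$.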
The curve f\/low has associated to it the pullback of the Cartan structure equations \cite{KobayashiNomizu} expressing that the covariant derivatives $\gcovder{x}:=X \lrcorner \gcovder{}$ along the curve and $\gcovder{t}:=Y \lrcorner \gcovder{}$ along the f\/low have vanishing torsion \begin{gather}\label{torseq} \gcovder{x} \gamma_{t} - \gcovder{t} \gamma_{x} = [X,Y] = 0 \end{gather} and carry curvature determined from the metric $g$, \begin{gather}\label{curveq} [\gcovder{x},\gcovder{t}]=\curv{}{}(X,Y) \end{gather} given by the Riemann tensor $\curv{}{}(X,Y)$ which is a linear map on $T_x M$ depending bilinearly on~$X$,~$Y$. In frame components the torsion and curvature equations look like \cite{KobayashiNomizu} \begin{gather}\label{frametorseq} 0 = \D{x} \tframe{a} - \D{t} \xframe{a} + \tframe{b} \xconx{b}{a} - \xframe{b} \tconx{b}{a} , \\ \label{framecurveq} \curv{a}{b}(X,Y) = \D{t} \xconx{a}{b} - \D{x} \tconx{a}{b} +\tconx{a}{c} \xconx{c}{b} - \xconx{a}{c} \tconx{c}{b} . \end{gather} Here $\tframe{a}:= g(Y,\frame{}{a})$ and $\tconx{a}{b}:= Y \lrcorner\conx{a}{b} = g(\frame{}{b},\gcovder{t} \frame{a}{})$ are respectively the frame row-matrix and connection skew-matrix in the f\/low direction, and $\curv{a}{b}(X,Y):= g(\frame{}{b}, [\gcovder{x},\gcovder{t}]\frame{a}{})$ is the curvature matrix. As outlined in \cite{Anco,SandersWang2}, these frame equations \eqrefs{frametorseq}{framecurveq} directly encode a bi-Hamiltonian structure based on geometrical variables when the geometry of $M$ is characterized by having its frame curvature matrix $\curv{a}{b}(\frame{c}{},\frame{d}{})$ be constant on $M$. 
In this situation the Hamiltonian variable is given by the principal normal $\v{}{}:=\gcovder{x} X = \v{a}{}(\frame{a}{})_{\perp}$ in the tangent direction of $\gamma$, while the principal normal in the f\/low direction $\w{}{}:= \gcovder{t} X =\w{a}{}(\frame{a}{})_{\perp}$ represents a Hamiltonian covector f\/ield, and the normal part of the f\/low vector $h_\perp:=Y_{\perp}=\h{a}{}(\frame{a}{})_{\perp}$ represents a Hamiltonian vector f\/ield\footnote{See \cite{Dorfman,Olver} and the appendix of \cite{Anco} for a summary of Hamiltonian theory relevant to PDE systems.}. In a parallel frame these variables $\v{a}{}$, $\w{a}{}$, $\h{a}{}$ are encoded respectively in the top row of the connection matrices $\xconx{a}{b}$, $\tconx{a}{b}$, and in the row matrix $(\tframe{a})_{\perp}= \tframe{a} - h_\parallel \xframe{a}$ where $h_\parallel:= g(Y,X)$ is the tangential part of the f\/low vector. A wide class of Riemannian manifolds $(M,g)$ in which the frame curvature matrix $\curv{a}{b}(\frame{c}{},\frame{d}{})$ is constant on $M$ consists of the symmetric spaces $M=G/H$ for compact semisimple Lie groups $G \supset H$ (such that $H$ is invariant under an involutive automorphism of $G$). In such spaces the Riemannian curvature tensor and the metric tensor are covariantly constant and $G$-invariant \cite{KobayashiNomizu}, which implies constancy of the curvature matrix $\curv{a}{b}(\frame{c}{},\frame{d}{})$. The metric tensor $g$ on $M$ is given by the Cartan--Killing inner product $\langle \cdot,\cdot\rangle$ on $T_x G \simeq \vs{g}$ restricted to the Lie algebra quotient space $\vs{p} = \vs{g}/\vs{h}$ with $T_x H \simeq\vs{h}$, where $\vs{g} = \vs{h} \oplus \vs{p}$ decomposes such that $[\vs{h},\vs{p}] \subseteq \vs{p}$ and $[\vs{p},\vs{p}] \subseteq \vs{h}$ (corresponding to the eigenspaces of the adjoint action of the involutive automorphism of $G$ that leaves $H$ invariant). 
A complete classif\/ication of symmetric spaces is given in \cite{Helgason}; their geometric properties are summarized in \cite{KobayashiNomizu}. In these spaces $H$ acts as a gauge group so consequently the bi-Hamiltonian structure encoded in the frame equations will be invariant under the subgroup of $H$ that leaves $X$ f\/ixed\footnote{More details will be given elsewhere \cite{forthcoming}.}. Thus in order to obtain $O(N-1)$-invariant bi-Hamiltonian operators, as sought here, we need the group $O(N-1)$ to be the isotropy subgroup in $H$ leaving $X$ f\/ixed. Hence we restrict attention to the symmetric spaces $M=G/SO(N)$ with $H=SO(N) \supset O(N-1)$. From the classif\/ication in \cite{Helgason} all examples of these spaces are exhausted by $G=SO(N+1),SU(N)$. The example $M=SO(N+1)/SO(N) \simeq S^{N}$ is isometric to the $N$-sphere, which has constant curvature. In this symmetric space, the encoding of bi-Hamiltonian operators in terms of geometric variables has been worked out in \cite{Anco} using just the intrinsic Riemannian geometry of the $N$-sphere, following closely the ideas in \cite{SandersWang1,SandersWang2}. An extrinsic approach based on Klein geometry \cite{Sharpe,AncoWolf} will be used here, as it is applicable to both symmetric spaces $SO(N+1)/SO(N)$ and $SU(N)/SO(N)$. In a Klein geometry the left-invariant $\vs{g}$-valued Maurer--Cartan form on the Lie group $G$ is identif\/ied with a zero-curvature connection $1$-form $\omega_G$ called the Cartan connection \cite{Sharpe}. Thus \begin{gather*} 0 = d\omega_G + \frac{1}{2} [\omega_G,\omega_G], \end{gather*} where $d$ is the total exterior derivative on the group manifold $G$. Through the Lie algebra decomposition $\vs{g}=\vs{so}(N) \oplus\vs{p}$ with $[\vs{p},\vs{p}] \subset \vs{so}(N)$ and $[\vs{so}(N),\vs{p}] \subset \vs{p}$, the Cartan connection determines a Riemannian structure on the quotient space $M = G/SO(N)$ where $G$ is regarded~\cite{Sharpe} as a principal $SO(N)$ bundle over $M$.
Fix any local section of this bundle and pull back $\omega_G$ to give a $\vs{g}$-valued $1$-form ${}^{\vs{g}}\omega{}$ at $x$ in $M$. The ef\/fect of changing the local section is to induce a~$SO(N)$ gauge transformation on ${}^{\vs{g}}\omega{}$. Let $\sigma$ denote an involutive automorphism of $\vs{g}$ such that $\vs{so}(N)$ is the $\sigma=+1$ eigenspace and $\vs{p}$ is the $\sigma=-1$ eigenspace. We consider the corresponding decomposition of ${}^{\vs{g}}\omega{}$: it can be shown \cite{Sharpe} that the symmetric part \begin{gather}\label{kleinconx} \conx{}{}:= \frac{1}{2}({}^{\vs{g}}\omega{} + \sigma({}^{\vs{g}}\omega{})) \end{gather} def\/ines a $\vs{so}(N)$-valued connection $1$-form for the group action of $SO(N)$ on the tangent space $T_x M \simeq \vs{p}$, while the antisymmetric part \begin{gather}\label{kleinframe} \coframe{}:=\frac{1}{2}({}^{\vs{g}}\omega{} - \sigma({}^{\vs{g}}\omega{})) \end{gather} def\/ines a $\vs{p}$-valued coframe for the Cartan--Killing inner product $\langle\cdot,\cdot\rangle_{\vs{p}}$ on $T_x G \simeq \vs{g}$ restricted to $T_x M \simeq \vs{p}$. This inner product $\langle\cdot,\cdot\rangle_\vs{p}$ provides a Riemannian metric \begin{gather*} g=\langle \coframe{} \otimes \coframe{}\rangle_\vs{p} \end{gather*} on $M=G/SO(N)$, such that the squared norm of any vector $X\in T_x M$ is $|X|_g^2 = g(X,X)=\langle X\lrcorner \coframe{},X\lrcorner \coframe{}\rangle_\vs{p}$. Moreover there is a $G$-invariant covariant derivative $\covder{}$ associated to this structure whose restriction to the tangent space $T_\gamma M$ for any curve f\/low $\gamma(t,x)$ in $M=G/SO(N)$ is def\/ined via \begin{gather}\label{ewrelation} \covder{x} \coframe{} = [\coframe{},\gamma_{x}\lrcorner \conx{}{}] \qquad\eqtext{ and }\qquad \covder{t} \coframe{} = [\coframe{},\gamma_{t} \lrcorner\conx{}{}] .
\end{gather} These derivatives $\covder{x}$, $\covder{t}$ obey the Cartan structure equations \eqrefs{torseq}{curveq}, namely they have zero torsion \begin{gather}\label{cartantors} 0 = (\covder{x} \gamma_{t} - \covder{t} \gamma_{x})\lrcorner \coframe{} = \D{x}\tframe{} - \D{t}\xframe{} + [\xconx{}{},\tframe{}] -[\tconx{}{},\xframe{}] \end{gather} and carry $G$-invariant curvature \begin{gather} \label{cartancurv} \curv{}{}(\gamma_{x},\gamma_{t}) \coframe{} = [\covder{x},\covder{t}] \coframe{} = \D{x} \tconx{}{} - \D{t} \xconx{}{} + [\xconx{}{},\tconx{}{}] =-[\xframe{},\tframe{}], \end{gather} where \begin{gather*} \xframe{}:= \gamma_{x} \lrcorner \coframe{} ,\qquad \tframe{}:= \gamma_{t} \lrcorner \coframe{} ,\qquad \xconx{}{}:= \gamma_{x} \lrcorner \conx{}{} ,\qquad \tconx{}{}:= \gamma_{t} \lrcorner \conx{}{} . \end{gather*} The $G$-invariant covariant derivative and curvature on $T_\gamma M$ are thus seen to coincide with the Riemannian ones determined from the metric $g$. More generally, in this manner \cite{Sharpe} the relations~\eqrefs{kleinconx}{kleinframe} canonically solder a Klein geometry onto a Riemannian symmetric-space geometry. Geometrically, $\xframe{}$ and $\xconx{}{}$ represent the tangential part of the coframe and the connection $1$-form along $\gamma$. For a non-stretching curve $\gamma$, where $x$ is the arclength, note $\xframe{}$ has unit norm in the inner product, $\langle \xframe{},\xframe{}\rangle_{\vs{p}}=1$. This implies $\vs{p}$ has a decomposition into tangential and normal subspaces $\vs{p}_{\parallel}$ and $\vs{p}_{\perp}$ with respect to $\xframe{}$ such that $\langle \xframe{},\vs{p}_{\perp}\rangle_{\vs{p}}=0$, with $\vs{p}=\vs{p}_{\perp} \oplus \vs{p}_{\parallel}$ and $\vs{p}_{\parallel} \simeq \Rnum{}$. 
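The zero-curvature property of the pulled-back Maurer--Cartan form, $0 = d\omega_G + \frac{1}{2}[\omega_G,\omega_G]$, which underlies these structure equations, can be verified concretely. The following sympy sketch (not from the paper; the Euler-angle family $g(t,x)$ in $SO(3)$ and the function names are arbitrary illustrative choices) checks that $\omega_x = g^{-1}g_x$, $\omega_t = g^{-1}g_t$ satisfy $\partial_x \omega_t - \partial_t \omega_x + [\omega_x,\omega_t] = 0$ identically:

```python
import sympy as sp

# Check for G = SO(3) that the pulled-back Maurer-Cartan form is flat:
# with omega_x = g^{-1} g_x and omega_t = g^{-1} g_t for a two-parameter
# family g(t, x) in SO(3), the zero-curvature condition
#   D_x omega_t - D_t omega_x + [omega_x, omega_t] = 0
# holds identically.  The angle functions are left arbitrary.
t, x = sp.symbols('t x')
th = sp.Function('theta')(t, x)
ph = sp.Function('phi')(t, x)

Rz = sp.Matrix([[sp.cos(th), -sp.sin(th), 0],
                [sp.sin(th),  sp.cos(th), 0],
                [0, 0, 1]])
Rx = sp.Matrix([[1, 0, 0],
                [0, sp.cos(ph), -sp.sin(ph)],
                [0, sp.sin(ph),  sp.cos(ph)]])
g = Rz * Rx

om_x = g.T * g.diff(x)   # g^{-1} = g^T for SO(3)
om_t = g.T * g.diff(t)

curv = om_t.diff(x) - om_x.diff(t) + om_x*om_t - om_t*om_x
assert sp.simplify(curv) == sp.zeros(3, 3)
```

Since $\theta(t,x)$ and $\varphi(t,x)$ are unspecified functions, the cancellation holds for every local section, mirroring the gauge-covariance of the construction.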
\begin{remark} A main insight now, generalizing the results in \cite{SandersWang2,Anco}, is that the Cartan structure equations on the surface swept out by the curve f\/low $\gamma(t,x)$ in $M=G/SO(N)$ will geometrically encode $O(N-1)$-invariant bi-Hamiltonian operators if the gauge (rotation) freedom of the group action $SO(N)$ on $\coframe{}$ and $\conx{}{}$ is used to f\/ix them to be a parallel coframe and its associated connection $1$-form related by the Riemannian covariant derivative. The groups $G=SO(N+1)$ and $G=SU(N)$ will produce a dif\/ferent encoding except when $N=2$, since in that case $T_x M \simeq \vs{so}(3)/\vs{so}(2) \simeq \vs{su}(2)/\vs{so}(2)$ is the same tangent space for $M=SO(3)/SO(2)$ and $M=SU(2)/SO(2)$ due to the Lie-algebra isomorphism $\vs{so}(3) \simeq \vs{su}(2)$. This will be seen to account for the existence of the two dif\/ferent vector generalizations of the scalar mKdV hierarchy. \end{remark} The algebraic characterization of a parallel frame for curves in Riemannian geometry extends naturally to the setting of Klein geometry, via the property that $\xframe{}$ is preserved by a $SO(N-1)$ rotation subgroup of the local frame structure group $SO(N)$ acting on $\vs{p} \subset \vs{g}$, while $\xconx{}{}$ belongs to the orthogonal complement of the $SO(N-1)$ rotation Lie subalgebra $\vs{so}(N-1)$ contained in the Lie algebra $\vs{so}(N)$ of $SO(N)$. Their geometrical meaning, however, generalizes the Riemannian properties \eqrefs{parallelconxtang}{parallelconxperp}, as follows. Let $\frame{a}{}$ be a frame whose dual coframe is identif\/ied with the $\vs{p}$-valued coframe $\coframe{}$ in a f\/ixed orthonormal basis for $\vs{p}\subset \vs{g}$. Decompose the coframe into parallel/perpendicular parts with respect to $\xframe{}$ in an algebraic sense as def\/ined by the kernel/cokernel of Lie algebra multiplication $[\xframe{},\cdot\ ]_\vs{g}={\rm ad}(\xframe{}{})$. 
Thus we have $\coframe{}=(\coframe{C},\coframe{C^\perp})$ where the $\vs{p}$-valued covectors $\coframe{C}$, $\coframe{C^\perp}$ satisfy $[\xframe{},\coframe{C}]_\vs{g}=0$, and $\coframe{C^\perp}$ is orthogonal to $\coframe{C}$, so $[\xframe{},\coframe{C^\perp}]_\vs{g} \neq0$. Note there is a corresponding algebraic decomposition of the tangent space $T_x M \simeq \vs{p}=\vs{g}/\vs{so}(N)$ given by $\vs{p}=\vs{p}_C \oplus \vs{p}_{C^\perp}$, with $\vs{p}_\parallel \subseteq \vs{p}_C$ and $\vs{p}_{C^\perp} \subseteq \vs{p}_\perp$, where $[\vs{p}_\parallel,\vs{p}_C]=0$ and $\langle \vs{p}_{C^\perp},\vs{p}_C\rangle _\vs{p}=0$, so $[\vs{p}_\parallel,\vs{p}_{C^\perp}]\neq 0$ (namely, $\vs{p}_C$ is the centralizer of $\xframe{}$ in $\vs{p} \subset \vs{g}$). This decomposition is preserved by ${\rm ad}(\xconx{}{})$ which acts as an infinitesimal rotation, ${\rm ad}(\xconx{}{})\vs{p}_C \subseteq \vs{p}_{C^\perp}$, ${\rm ad}(\xconx{}{})\vs{p}_{C^\perp} \subseteq \vs{p}_C$. Hence, from equation \eqref{ewrelation}, the derivative $\covder{x}$ of a covector perpendicular (respectively parallel) to $\xframe{}$ lies parallel (respectively perpendicular) to $\xframe{}$, namely $\covder{x}\coframe{C}$ belongs to $\vs{p}_{C^\perp}$, $\covder{x}\coframe{C^\perp}$ belongs to $\vs{p}_C$. In the Riemannian setting, these properties correspond to $\gcovder{x}(\frame{}{a})_C = \v{a}{\ b}(\frame{}{b})_{C^\perp}$, $\gcovder{x}(\frame{}{a})_{C^\perp} = -\v{\ a}{b}(\frame{}{b})_C$ for some functions $\v{ab}{}=-\v{ba}{}$. Such a frame will be called {\it $SO(N)$-parallel} and def\/ines a strict generalization of a~Riemannian parallel frame whenever $\vs{p}_C$ is larger than $\vs{p}_\parallel$. Existence of a $SO(N)$-parallel frame for curve flows in Klein geometries $G/SO(N)$ is guaranteed by the $SO(N)$ gauge freedom on $\frame{}{}$ and $\conx{}{}$ inherited from the local section of $G$ used to pull back the Maurer-Cartan form to $G/SO(N)$. 
\section[Bi-Hamiltonian operators and vector soliton equations for $SO(N+1)/SO(N) \simeq S^N$]{Bi-Hamiltonian operators and vector soliton equations\\ for $\boldsymbol{SO(N+1)/SO(N) \simeq S^N}$} Recall $\vs{so}(k)$ is a real vector space isomorphic to the Lie algebra of $k\times k$ skew-symmetric matrices. So the tangent space $T_x M= \vs{so}(N+1)/\vs{so}(N)$ of the Riemannian manifold $M=SO(N+1)/SO(N)$ is isomorphic to $\vs{p} \simeq \Rnum{N}$, as described by the following canonical decomposition \begin{gather*} \begin{pmatrix} 0 & p \\ -\trans{p} & \bdsymb{0} \end{pmatrix} \in \vs{p} \subset \vs{so}(N+1)=\vs{g} ,\qquad \bdsymb{0} \in \vs{so}(N) =\vs{h} ,\qquad p \in \Rnum{N} \end{gather*} parameterized by the $N$ component vector $p$. The Cartan--Killing inner product on $\vs{g}$ is given by the trace of the product of an $\vs{so}(N+1)$ matrix and a transpose $\vs{so}(N+1)$ matrix, multiplied by a normalization factor $\frac{1}{2}$. The norm-squared on the quotient space $\vs{p}$ thereby reduces to the ordinary dot product of vectors $p$: \begin{gather*} \left< \begin{pmatrix}0 & p\\ -\trans{p} &\bdsymb{0} \end{pmatrix}, \begin{pmatrix}0 & p\\ -\trans{p} &\bdsymb{0} \end{pmatrix} \right> = \frac{1}{2} {\rm tr}\left( \trans{\begin{pmatrix}0 & p\\ -\trans{p} & \bdsymb{0} \end{pmatrix}} \begin{pmatrix}0 & p\\ -\trans{p} & \bdsymb{0} \end{pmatrix} \right) = p \cdot p . \end{gather*} Note the Cartan--Killing inner product provides a canonical identif\/ication between $\vs{p} \simeq \Rnum{N}$ and its dual $\vs{p}^{*} \simeq \Rnum{N}$. Let $\gamma(t,x)$ be a f\/low of a non-stretching curve in $M =SO(N+1)/SO(N) \simeq S^N$. 
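The reduction of the Cartan--Killing inner product to the ordinary dot product can be confirmed numerically. The following sketch is illustrative only (the helper `embed` and the use of NumPy are not part of the paper); it builds the $\vs{so}(N+1)$ embedding of a vector $p\in\Rnum{N}$ and checks $\frac{1}{2}{\rm tr}(\trans{P}P)=p\cdot p$.

```python
# Illustrative check (not from the paper): the normalized trace form on the
# so(N+1) embedding of p in R^N reduces to the ordinary dot product p . p.
import numpy as np

def embed(p):
    """Embed p in R^N as the so(N+1) matrix [[0, p], [-p^T, 0]]."""
    N = len(p)
    P = np.zeros((N + 1, N + 1))
    P[0, 1:] = p
    P[1:, 0] = -p
    return P

rng = np.random.default_rng(0)
p = rng.standard_normal(5)
P = embed(p)
assert np.allclose(P, -P.T)                        # P is skew-symmetric
assert np.isclose(0.5 * np.trace(P.T @ P), p @ p)  # <P, P> = p . p
```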
We introduce a $SO(N)$-parallel coframe $\coframe{} \in T^*_\gamma M\otimes\vs{p}$ and its associated connection $1$-form $\conx{}{} \in T^*_\gamma M\otimes\vs{so}(N)$ along $\gamma$ by putting\footnote{Note $\conx{}{}$ is related to $\coframe{}$ by the Riemannian covariant derivative \eqref{ewrelation} on the surface swept out by the curve f\/low~$\gamma(t,x)$.} \begin{gather} \xframe{} = \gamma_{x} \lrcorner \coframe{} = \begin{pmatrix} 0 & (1,\vec{0}) \\ -\trans{(1,\vec{0})} & \bdsymb{0} \end{pmatrix} \in \vs{p} ,\qquad (1,\vec{0}) \in \Rnum{N} ,\qquad \vec{0} \in \Rnum{N-1} \end{gather} and \begin{gather*} \xconx{}{} = \gamma_{x} \lrcorner \conx{}{} = \begin{pmatrix} 0 & (0,\vec{0}) \\ \trans{(0,\vec{0})} & \bdsymb{\xconx{}{}} \end{pmatrix} \in \vs{so}(N+1), \end{gather*} where \begin{gather*} \bdsymb{\xconx{}{}} = \begin{pmatrix} 0 & \vec{v} \\ -\trans{\vec{v}} & \bdsymb{0} \end{pmatrix} \in \vs{so}(N) ,\qquad \vec{v} \in \Rnum{N-1} ,\qquad \bdsymb{0} \in \vs{so}(N-1) . \end{gather*} Note the form of $\xframe{}$ indicates the coframe $\coframe{}$ is adapted to $\gamma$, with $(1,\vec{0})$ representing a choice of a constant unit-norm vector in $\vs{p} \simeq \Rnum{N}$, so $\langle \xframe{},\xframe{}\rangle_{\vs{p}} = (1,\vec{0}) \cdot (1,\vec{0}) =1$. All such choices are equivalent through the $SO(N)$ rotation gauge freedom on the coframe and connection $1$-form. Consequently, there is a decomposition of $\vs{so}(N+1)/\vs{so}(N)$ matrices \begin{gather*} \begin{pmatrix} 0 & p \\ -\trans{p} & \bdsymb{0} \end{pmatrix} = \begin{pmatrix} 0 & (p_{\parallel},\vec{0}) \\ -\trans{(p_{\parallel},\vec{0})} & \bdsymb{0} \end{pmatrix} + \begin{pmatrix}0 & (0,\vec{p}_{\perp}) \\ -\trans{(0,\vec{p}_{\perp})} & \bdsymb{0} \end{pmatrix} \in \vs{p} \end{gather*} into tangential and normal parts relative to $\xframe{}$ via a corresponding decomposition of vectors given by \begin{gather*} p = (p_{\parallel},\vec{p}_{\perp}) \in \Rnum{N} \end{gather*} relative to $(1,\vec{0})$. 
Thus $p_{\parallel}$ is identif\/ied with $\vs{p}_\parallel=\vs{p}_C$, and $\vec{p}_{\perp}$ with $\vs{p}_\perp=\vs{p}_{C^\perp}$. In the f\/low direction we put \begin{gather*} \tframe{} = \gamma_{t} \lrcorner \coframe{} = \begin{pmatrix} 0 & (h_\parallel,\vec{h}_{\perp}) \\ -\trans{(h_\parallel,\vec{h}_{\perp})} & \bdsymb{0} \end{pmatrix} \in \vs{p} ,\qquad (h_\parallel,\vec{h}_{\perp}) \in \Rnum{N} ,\qquad \vec{h}_{\perp} \in \Rnum{N-1} \end{gather*} and \begin{gather}\label{soflowconx} \tconx{}{} = \gamma_{t} \lrcorner \conx{}{} = \begin{pmatrix} 0 & (0,\vec{0}) \\ \trans{(0,\vec{0})} & \bdsymb{\tconx{}{}} \end{pmatrix} \in \vs{so}(N+1), \end{gather} where \begin{gather}\label{soflowconxh} \bdsymb{\tconx{}{}} = \begin{pmatrix} 0 & \vec{\varpi} \\ -\trans{\vec{\varpi}} & \bdsymb{\Theta} \end{pmatrix} \in \vs{so}(N) ,\qquad \vec{\varpi} \in \Rnum{N-1} ,\qquad \bdsymb{\Theta} \in \vs{so}(N-1) . \end{gather} The components $h_\parallel$, $\vec{h}_{\perp}$ correspond to decomposing $\tframe{} = g(\gamma_{t},\gamma_{x})\xframe{}+(\gamma_{t})_{\perp} \lrcorner \coframe{\perp}$ into tangential and normal parts relative to $\xframe{}$. We now have \begin{gather*} [\xframe{},\tframe{}] = -\begin{pmatrix} 0 & 0 \\ 0 & \bdsymb{h_{\perp}} \end{pmatrix} \in \vs{so}(N+1) ,\qquad \bdsymb{h_{\perp}} = \begin{pmatrix} 0 & \vec{h}_{\perp} \\ -\trans{\vec{h}_{\perp}} & \bdsymb{0} \end{pmatrix} \in \vs{so}(N ) , \\ [\tconx{}{},\tframe{}] = - \begin{pmatrix} 0 & (0,\vec{\varpi}) \\ -\trans{(0,\vec{\varpi})} & 0 \end{pmatrix} \in \vs{p}_{\perp} , \\ [\xconx{}{},\tframe{}] = -\begin{pmatrix} 0 & (-\vec{v} \cdot \vec{h}_{\perp}, h_\parallel \vec{v}) \\ -\trans{(-\vec{v} \cdot\vec{h}_{\perp}, h_\parallel \vec{v})} & \bdsymb{0} \end{pmatrix} \in \vs{p} . 
\end{gather*} Hence the curvature equation \eqref{cartancurv} reduces to \begin{gather} \D{t} \vec{v} - \D{x} \vec{\varpi} - \vec{v} \lrcorner \bdsymb{\Theta} = -\vec{h}_{\perp} , \label{soveq}\\ -\D{x} \bdsymb{\Theta} + \vec{v} \otimes \vec{\varpi} -\vec{\varpi} \otimes \vec{v} =0 , \label{sothetaeq} \end{gather} while the torsion equation \eqref{cartantors} yields \begin{gather} \label{sohpareq} 0 = \D{x} h_\parallel + \vec{v} \cdot \vec{h}_{\perp} , \\ \label{soweq} 0 = \vec{\varpi} - h_\parallel \vec{v} + \D{x} \vec{h}_{\perp} . \end{gather} Here $\otimes$ denotes the outer product of pairs of vectors ($1 \times N$ row matrices), producing $N \times N$ matrices $\vec{A} \otimes \vec{B} = \trans{\vec{A}} \vec{B}$, and $\lrcorner$ denotes multiplication of $N \times N$ matrices on vectors ($1 \times N$ row matrices), $\vec{A} \lrcorner (\vec{B} \otimes \vec{C}) := (\vec{A} \cdot \vec{B}) \vec{C}$ which is the transpose of the standard matrix product on column vectors, $(\vec{B} \otimes \vec{C}) \vec{A} = (\vec{C} \cdot \vec{A}) \vec{B}$. To proceed we use equations \eqrefs{sothetaeq}{sohpareq} to eliminate \begin{gather}\label{sotheta} \bdsymb{\Theta} = -\Dinv{x}(\vec{\varpi} \otimes \vec{v} -\vec{v} \otimes \vec{\varpi}) ,\qquad h_\parallel = -\Dinv{x} (\vec{v} \cdot \vec{h}_{\perp}) \end{gather} in terms of the variables $\vec{v}$, $\vec{h}_{\perp}$, $\vec{\varpi}$. Then equation \eqref{soveq} gives a f\/low on $\vec{v}$, \begin{gather*} \vec{v}_{t} = \D{x} \vec{\varpi} - \vec{v} \lrcorner \Dinv{x}( \vec{\varpi} \otimes \vec{v} - \vec{v} \otimes \vec{\varpi}) - \chi \vec{h}_{\perp} \end{gather*} with \begin{gather*} \vec{\varpi} = -\Dinv{x} (\vec{v} \cdot \vec{h}_{\perp}) \vec{v} - \D{x}\vec{h}_{\perp} \end{gather*} obtained from equation \eqref{soweq}. Here $\chi=1$ represents the Riemannian scalar curvature of $SO(N+1)/SO(N) \simeq S^{N}$ (see \cite{Anco}). 
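The Lie brackets entering these reductions of the torsion and curvature equations can be spot-checked numerically. The sketch below is an illustration only (the helpers `embed` and `conn`, and the random sampling, are our own); it verifies $[\xframe{},\tframe{}]$, $[\tconx{}{},\xframe{}]$ and $[\xconx{}{},\tframe{}]$ for arbitrary component values.

```python
# Illustrative check (not from the paper) of the so(N+1) bracket relations
# used to reduce the Cartan structure equations for SO(N+1)/SO(N).
import numpy as np

rng = np.random.default_rng(1)
N = 4
v, hperp, w = rng.standard_normal((3, N - 1))   # v, h_perp, varpi components
hpar = rng.standard_normal()
Theta = rng.standard_normal((N - 1, N - 1))
Theta -= Theta.T                                 # Theta in so(N-1)

def embed(p):                     # p in R^N  ->  [[0, p], [-p^T, 0]] in so(N+1)
    P = np.zeros((N + 1, N + 1))
    P[0, 1:] = p; P[1:, 0] = -p
    return P

def conn(vec, block):             # so(N) element [[0, vec], [-vec^T, block]], embedded
    C = np.zeros((N + 1, N + 1))
    C[1, 2:] = vec; C[2:, 1] = -vec; C[2:, 2:] = block
    return C

ex = embed(np.r_[1.0, np.zeros(N - 1)])          # adapted coframe component e_x
et = embed(np.r_[hpar, hperp])                   # e_t
wx = conn(v, np.zeros((N - 1, N - 1)))           # omega_x
wt = conn(w, Theta)                              # omega_t
br = lambda A, B: A @ B - B @ A

# [e_x, e_t] = -(block embedding of bold h_perp)
assert np.allclose(br(ex, et), -conn(hperp, np.zeros((N - 1, N - 1))))
# [omega_t, e_x] = -(0, varpi) embedded in p  (Theta drops out)
assert np.allclose(br(wt, ex), -embed(np.r_[0.0, w]))
# [omega_x, e_t] = (v . h_perp, -hpar v) embedded in p
assert np.allclose(br(wx, et), embed(np.r_[v @ hperp, -hpar * v]))
```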
In these equations we read of\/f the operators \begin{gather*} {\mathcal H} = \D{x} + \vec{v} \lrcorner \Dinv{x}(\vec{v} \wedge\ ) ,\qquad {\mathcal J} = \D{x} + \Dinv{x}(\vec{v} \cdot\ ) \vec{v} , \end{gather*} where $\vec{A} \wedge \vec{B} = \vec{A} \otimes \vec{B} - \vec{B} \otimes \vec{A}$. The results in \cite{SandersWang2} prove the following properties of ${\mathcal H}$, ${\mathcal J}$. \begin{theorem}\label{thm1} ${\mathcal H}$, ${\mathcal J}$ are compatible $O(N-1)$-invariant Hamiltonian cosymplectic and symplectic operators with respect to the Hamiltonian variable $\vec{v}$. Consequently, the flow equation takes the Hamiltonian form \begin{gather*} \vec{v}_{t} = {\mathcal H}(\vec{\varpi}) - \chi \vec{h}_{\perp} = {\mathcal R}(\vec{h}_{\perp}) - \chi \vec{h}_{\perp} ,\qquad \vec{\varpi} = {\mathcal J}(\vec{h}_{\perp}) \end{gather*} where ${\mathcal R} = {\mathcal H} \circ {\mathcal J}$ is a hereditary recursion operator. \end{theorem} On the $x$-jet space of $\vec{v}(t,x)$, the variables $\vec{h}_{\perp}$ and $\vec{\varpi}$ have the respective meaning of a Hamiltonian vector f\/ield $\vec{h}_{\perp}\lrcorner \partial/\partial \vec{v}$ and covector f\/ield $\vec{\varpi}\lrcorner d\vec{v}$ (see the Appendix of \cite{Anco}). Thus the recursion operator\footnote{This $O(N-1)$-invariant form of the recursion operator appeared already in \cite{Anco2}.} \begin{gather} {\mathcal R} = \D{x}(\D{x} + \Dinv{x}(\vec{v} \cdot\ ) \vec{v}) + \vec{v} \lrcorner \Dinv{x}(\vec{v} \wedge \D{x}\ ) \nonumber\\ \phantom{{\mathcal R}}{} = \D{x}^2 +|\vec{v}|^2 +\Dinv{x}(\vec{v}\cdot\ )\vec{v}_x -\vec{v} \lrcorner \Dinv{x}(\vec{v}_x \wedge\ )\label{soRop} \end{gather} generates a hierarchy of commuting Hamiltonian vector f\/ields $\vec{h}^{(k)}_{\perp}$ starting from $\vec{h}^{(0)}_{\perp}=\vec{v}_{x}$ given by the inf\/initesimal generator of $x$-translations in terms of arclength $x$ along the curve. 
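In the scalar reduction $N=2$ the wedge term in ${\mathcal H}$ vanishes, and the operators can be checked symbolically. The following sketch (our own illustration, using SymPy's `integrate` as the formal inverse $\Dinv{x}$ acting on exact densities) confirms that ${\mathcal R}(\vec{v}_x)={\mathcal H}({\mathcal J}(\vec{v}_x))$ reproduces the scalar mKdV flow.

```python
# Illustrative check (not from the paper): scalar (N = 2) reduction of
# H = D_x + v D_x^{-1}(v ^ .) and J = D_x + D_x^{-1}(v . ) v.  The wedge
# term vanishes for one component, so H = D_x, and R(v_x) = H(J(v_x))
# should reproduce the scalar mKdV flow v_3x + (3/2) v^2 v_x.
import sympy as sp

x = sp.symbols('x')
v = sp.Function('v')(x)
Dx = lambda f: sp.diff(f, x)
Dxinv = lambda f: sp.integrate(f, x)     # formal antiderivative D_x^{-1}

J = lambda h: Dx(h) + Dxinv(v * h) * v   # scalar symplectic operator
H = lambda w: Dx(w)                      # scalar cosymplectic operator

flow1 = sp.expand(H(J(Dx(v))))
expected = sp.expand(sp.diff(v, x, 3) + sp.Rational(3, 2) * v**2 * sp.diff(v, x))
assert sp.simplify(flow1 - expected) == 0
```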
The adjoint operator ${\mathcal R}^*$ generates a related hierarchy of involutive Hamiltonian covector f\/ields $\vec{\varpi}^{(k)} = \delta H^{(k)}/\delta \vec{v}$ in terms of Hamiltonians $H =H^{(k)}(\vec{v},\vec{v}_{x},\vec{v}_{2x},\ldots)$ starting from $\vec{\varpi}^{(0)}=\vec{v}$, $H^{(0)}=\frac{1}{2} |\vec{v}|^{2}$. These hierarchies are related by $\vec{h}^{(k)}_{\perp} = {\mathcal H}(\vec{\varpi}^{(k)})$, $\vec{\varpi}^{(k+1)}={\mathcal J}(\vec{h}^{(k)}_{\perp})$, $k=0,1,2,\ldots$. Both hierarchies share the mKdV scaling symmetry $x\rightarrow\lambda x$, $\vec{v}\rightarrow\lambda^{-1}\vec{v}$, under which we see $\vec{h}^{(k)}_{\perp}$ and $H^{(k)}$ have scaling weight $2+2k$, while $\vec{\varpi}^{(k)}$ has scaling weight $1+2k$. \begin{corollary}\label{cor1} Associated to the recursion operator ${\mathcal R}$ there is a corresponding hierarchy of commuting bi-Hamiltonian flows on $\vec{v}$ given by $O(N-1)$-invariant vector evolution equations \begin{gather}\label{floweq} \vec{v}_{t} = \vec{h}^{(k+1)}_{\perp} - \chi \vec{h}^{(k)}_{\perp} ={\mathcal H}(\delta H^{(k,\chi)}/\delta \vec{v}) = {\mathcal J}^{-1}(\delta H^{(k+1,\chi)}/\delta \vec{v}) ,\qquad k = 0,1,2,\ldots \end{gather} with Hamiltonians $H^{(k+1,\chi)} = H^{(k+1)} -\chi H^{(k)}$, where ${\mathcal H},{\mathcal J}^{-1}$ are compatible Hamiltonian opera\-tors. An alternative (explicit) Hamiltonian operator for these flows is ${\mathcal E}:= {\mathcal H}\circ{\mathcal J}\circ{\mathcal H}={\mathcal R}\circ{\mathcal H}$. \end{corollary} \begin{remark}\label{rem1} Using the terminology of \cite{AncoWolf}, $\vec{h}^{(k)}_{\perp}$ will be said to produce a $+(k+1)$ f\/low on $\vec{v}$. This dif\/fers from the terminology in \cite{Anco} which refers to equation \eqref{floweq} as the $+k$ f\/low. 
\end{remark} The $+1$ f\/low given by $\vec{h}_{\perp} = \vec{v}_{x}$ yields \begin{gather}\label{somkdveq} \vec{v}_{t} = \vec{v}_{3x} + \frac{3}{2} |\vec{v}|^{2} \vec{v}_{x} - \chi \vec{v}_{x} \end{gather} which is a vector mKdV equation up to a convective term that can be absorbed by a Galilean transformation $x \rightarrow x - \chi t$, $t \rightarrow t$. The $+(k+1)$ f\/low gives a vector mKdV equation of higher order $3 + 2k$ on $\vec{v}$. There is also a $0$ f\/low $\vec{v}_{t} = \vec{v}_{x}$ arising from $\vec{h}_{\perp}=0$, $h_\parallel=1$, which falls outside the hierarchy generated by ${\mathcal R}$. All these f\/lows correspond to geometrical motions of the curve $\gamma(t,x)$, given by \begin{gather*} \gamma_{t}= f(\gamma_{x},\covder{x}\gamma_{x},\covder{x}^{2}\gamma_{x},\ldots) \end{gather*} subject to the non-stretching condition \begin{gather*} |\gamma_{x}|_{g}=1 . \end{gather*} The equation of motion is obtained from the identif\/ications $\gamma_{t} \leftrightarrow \tframe{}$, $\covder{x}\gamma_{x} \leftrightarrow \covD{x}\xframe{} = [\xconx{}{},\xframe{}]$, and so on, using $\covder{x} \leftrightarrow \D{x} + [\xconx{}{},\cdot] = \covD{x}$. These identif\/ications correspond to $T_x M \leftrightarrow \vs{p}$ as def\/ined via the parallel coframe. Note we have \begin{gather*} [\xconx{}{},\xframe{}] = -\begin{pmatrix} 0 & (0,\vec{v})\\ -\trans{(0,\vec{v})} & \bdsymb{0} \end{pmatrix} , \\ [\xconx{}{},[\xconx{}{},\xframe{}]] = -\begin{pmatrix} 0 & (|\vec{v}|^{2},\vec{0})\\ -\trans{(|\vec{v}|^{2},\vec{0})} & \bdsymb{0} \end{pmatrix} = -|\vec{v}|^{2} \xframe{} , \end{gather*} and so on. 
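The displayed iterated brackets can also be verified numerically. The sketch below (an illustration with random components, not part of the paper) checks $[\xconx{}{},\xframe{}]$ and the identity $[\xconx{}{},[\xconx{}{},\xframe{}]]=-|\vec{v}|^2\xframe{}$.

```python
# Illustrative check (not from the paper): iterated adjoint action of omega_x
# on e_x in so(N+1), used in the identification of the mKdV map equation.
import numpy as np

rng = np.random.default_rng(2)
N = 5
v = rng.standard_normal(N - 1)

ex = np.zeros((N + 1, N + 1)); ex[0, 1] = 1.0; ex[1, 0] = -1.0   # e_x = embed((1, 0))
wx = np.zeros((N + 1, N + 1)); wx[1, 2:] = v; wx[2:, 1] = -v      # omega_x
br = lambda A, B: A @ B - B @ A

ad1 = br(wx, ex)            # [omega_x, e_x]  <->  covariant derivative of gamma_x
ad2 = br(wx, ad1)           # [omega_x, [omega_x, e_x]]
assert np.allclose(ad1[0, 1:], np.r_[0.0, -v])    # = -embed((0, v))
assert np.allclose(ad2, -(v @ v) * ex)            # = -|v|^2 e_x
```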
In particular, for the $+1$ f\/low, \begin{gather*} \vec{h}_{\perp} = \vec{v}_x ,\qquad h_\parallel = -\Dinv{x}(\vec{v} \cdot \vec{v}_{x}) =-\frac{1}{2}|\vec{v}|^{2} , \end{gather*} thus \begin{gather*} \tframe{} = \begin{pmatrix} 0 & (h_\parallel,\vec{h}_{\perp}) \\ -\trans{(h_\parallel,\vec{h}_{\perp})} & \bdsymb{0} \end{pmatrix} = -\frac{1}{2}|\vec{v}|^2 \begin{pmatrix} 0 & (1,\vec{0})\\ -\trans{(1,\vec{0})} & \bdsymb{0} \end{pmatrix} + \begin{pmatrix} 0 & (0,\vec{v}_x)\\ -\trans{(0,\vec{v}_x)} & \bdsymb{0} \end{pmatrix} \\ \nonumber \phantom{\tframe{}}{}= -\D{x}[\xconx{}{},\xframe{}]+\frac{1}{2}[\xconx{}{},[\xconx{}{},\xframe{}]] = -\covD{x}[\xconx{}{},\xframe{}]-\frac{3}{2}|\vec{v}|^2\xframe{} . \nonumber \end{gather*} We identify the f\/irst term as $-\covder{x}(\covder{x}\gamma_x)=-\covder{x}^2\gamma_x$. For the second term we observe $|\vec{v}|^2= \langle [\xconx{}{},\xframe{}],[\xconx{}{},\xframe{}]\rangle_{\vs{p}} \leftrightarrow g(\covder{x}\gamma_x,\covder{x}\gamma_x)=|\covder{x}\gamma_x|_{g}^{2}$ since the Cartan--Killing inner product corresponds to the Riemannian metric $g$. Hence we have $\tframe{} \leftrightarrow -(\covder{x}^2\gamma_x + \frac{3}{2}|\covder{x}\gamma_x|_{g}^{2} \gamma_x)$. This describes a geometric nonlinear PDE for $\gamma(t,x)$, \begin{gather}\label{mkdvmap} -\gamma_t = \covder{x}^2\gamma_x +\frac{3}{2}|\covder{x}\gamma_x|_{g}^{2} \gamma_x \end{gather} which is referred to as the {\it non-stretching mKdV map equation} on the symmetric space $M = SO(N+1)/SO(N) \simeq S^N$. A dif\/ferent derivation using just the Riemannian geometry of $S^N$ was given in \cite{Anco}. Since in the tangent space $T_x S^N\simeq \vs{so}(N+1)/\vs{so}(N)$ the kernel of $[\xframe{},\cdot\ ]$ is spanned by $\xframe{}$, a parallel frame in the setting of the Klein geometry of $SO(N+1)/SO(N)$ is precisely the same as in the Riemannian geometry of $S^N$. 
The convective term $|\covder{x}\gamma_x|_{g}^{2} \gamma_x$ can be written in an alternative form using the Klein geometry of $SO(N+1)/SO(N) \simeq S^N$. Let ${\rm ad}(\cdot)$ denote the standard adjoint representation acting in the Lie algebra $\vs{g}=\vs{p} \oplus \vs{so}(N)$. We f\/irst observe \begin{gather*} {\rm ad}([\xconx{}{},\xframe{}])\xframe{} = \begin{pmatrix} 0 & (0,\vec{0})\\ \trans{(0,\vec{0})} & \bdsymb{v} \end{pmatrix} \in \vs{so}(N+1) , \end{gather*} where \begin{gather*} \bdsymb{v} = -\begin{pmatrix} 0 & \vec{v}\\ -\trans{\vec{v}} & \bdsymb{0} \end{pmatrix} \in \vs{so}(N) . \end{gather*} Applying ${\rm ad}([\xconx{}{},\xframe{}])$ again, we obtain \begin{gather*} {\rm ad}([\xconx{}{},\xframe{}])^2\xframe{} = -|\vec{v}|^2 \begin{pmatrix} 0 & (1,\vec{0})\\ -\trans{(1,\vec{0})} &\bdsymb{0} \end{pmatrix} = -|\vec{v}|^2 \xframe{} . \end{gather*} Hence, $|\vec{v}|^2 \xframe{} \leftrightarrow -\chi^{-1}{\rm ad}(\covder{x}\gamma_x)^2 \gamma_x = |\covder{x} \gamma_x|_{g}^{2} \gamma_x$, and thus the mKdV map equation is equivalent to \begin{gather}\label{symmspmkdvmap} -\gamma_t = \covder{x}^2 \gamma_x - \frac{3}{2}\chi^{-1}{\rm ad}(\covder{x} \gamma_x)^2 \gamma_x . \end{gather} Note here that ${\rm ad}(\covder{x} \gamma_x) = [\covder{x}\gamma_x,\cdot\ ]$ maps $\vs{p} \simeq T_x M$ into $\vs{so}(N)$ and maps $\vs{so}(N)$ into $\vs{p} \simeq T_x M$, so ${\rm ad}(\covder{x}\gamma_x)^2$ is well-def\/ined on the tangent space $T_x M \simeq \vs{p}$ of $M=SO(N+1)/SO(N)$. Higher f\/lows on $\vec{v}$ yield higher-order geometric PDEs. The $0$ f\/low on $\vec{v}$ directly corresponds to \begin{gather}\label{convmap} \gamma_t=\gamma_x \end{gather} which is just a convective (linear traveling wave) map equation. There is a $-1$ f\/low contained in the hierarchy, with the property that $\vec{h}_{\perp}$ is annihilated by the symplectic operator ${\mathcal J}$ and hence gets mapped into ${\mathcal R}(\vec{h}_{\perp})=0$ under the recursion operator. 
Geometrically this f\/low means simply ${\mathcal J}(\vec{h}_{\perp})=\vec{\varpi}=0$ which implies $\tconx{}{}=0$ from equations~\eqref{soflowconx}, \eqref{soflowconxh}, \eqref{sotheta}, and hence $0=[\tconx{}{},\xframe{}]=\covD{t}\xframe{}$ where $\covD{t} = \D{t}+ [\tconx{}{},\cdot]$. The correspondence $\covder{t} \leftrightarrow \covD{t}$, $\gamma_x \leftrightarrow \xframe{}$ immediately leads to the equation of motion \begin{gather}\label{wavemap} 0 = \covder{t}\gamma_x \end{gather} for the curve $\gamma(t,x)$. This nonlinear geometric PDE is precisely a wave map equation on the symmetric space $SO(N+1)/SO(N) \simeq S^N$. The resulting f\/low equation on $\vec{v}$ is \begin{gather}\label{sovsgflow} \vec{v}_t = -\chi \vec{h}_{\perp} ,\qquad \chi=1, \end{gather} where \begin{gather*} 0=\vec{\varpi} = -\D{x}\vec{h}_{\perp} + h_\parallel \vec{v} ,\qquad \D{x}h_\parallel = -\vec{h}_{\perp}\cdot\vec{v} . \end{gather*} Note this f\/low equation possesses the conservation law $0=\D{x}( h_\parallel{}^2+|\vec{h}_{\perp}|^2 )$ with \begin{gather*} h_\parallel{}^2+|\vec{h}_{\perp}|^2 = \langle\tframe{},\tframe{}\rangle_\vs{p} = |\gamma_t|_g^2 \end{gather*} corresponding to \begin{gather}\label{wavemapconslaw} 0=\covder{x}|\gamma_t|_g^2 . \end{gather} Thus a conformal scaling of $t$ can be used to put $|\gamma_t|_g=1$, and so \begin{gather*} 1 = h_\parallel{}^2+|\vec{h}_{\perp}|^2 . \end{gather*} Substitution of $h_\parallel=\sqrt{1-|\vec{h}_{\perp}|^2}$ along with $\vec{h}_{\perp} = -\chi^{-1}\vec{v}_t$ into the equation $\D{x}\vec{h}_{\perp} =h_\parallel \vec{v}$ consequently reduces the wave map equation to a hyperbolic vector equation \begin{gather}\label{sosghyperboliceq} \vec{v}_{tx} = -\sqrt{\chi^2-|\vec{v}_{t}|^2} \vec{v} ,\qquad \chi=1. \end{gather} Equivalently, $\vec{v}$ satisfies a nonlocal evolution equation \begin{gather*} \vec{v}_{t} = -\Dinv{x}\Big(\sqrt{1-|\vec{v}_{t}|^2} \vec{v}\Big) \end{gather*} describing the $-1$ f\/low. 
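In the scalar case $N=2$ the $-1$ flow constraints are solved by the classical sine-Gordon substitution. The following sketch is an illustration only: the potential $u$ is an auxiliary variable introduced here (not a symbol from the paper), with $h_\parallel=\cos u$, $\vec{h}_{\perp}=\sin u$, $\vec{v}=u_x$.

```python
# Illustrative check (not from the paper): the scalar -1 flow constraints are
# solved by h_par = cos(u), h_perp = sin(u), v = u_x, and the flow
# v_t = -h_perp then reads u_xt = -sin(u), the sine-Gordon equation.
import sympy as sp

x, t = sp.symbols('x t')
u = sp.Function('u')(x, t)              # auxiliary potential (our notation)
v, h_par, h_perp = sp.diff(u, x), sp.cos(u), sp.sin(u)

# torsion relations of the -1 flow: varpi = 0 and D_x h_par = -h_perp v
varpi = -sp.diff(h_perp, x) + h_par * v
assert sp.simplify(varpi) == 0
assert sp.simplify(sp.diff(h_par, x) + h_perp * v) == 0
# conserved density: h_par^2 + h_perp^2 = 1
assert sp.simplify(h_par**2 + h_perp**2 - 1) == 0
# v_t coincides with u_xt, so v_t = -h_perp is u_xt = -sin(u)
assert sp.simplify(sp.diff(v, t) - sp.diff(u, x, t)) == 0
```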
It also follows from $\vec{v} = h_\parallel^{-1}\D{x}\vec{h}_\perp$ combined with the flow equation \eqref{sovsgflow} that $\vec{h}_{\perp}$ obeys the vector $SG$ equation \begin{gather}\label{soSGeq} \Big(\sqrt{(1-|\vec{h}_{\perp}|^2)^{-1}} \vec{h}_{\perp x}\Big){}_t = -\vec{h}_{\perp} \end{gather} which has been derived previously in \cite{Bakas,Pohlmeyer,Wang} from a dif\/ferent point of view. These equations~\eqrefs{sosghyperboliceq}{soSGeq} possess the mKdV scaling symmetry $x\rightarrow\lambda x$, $\vec{v}\rightarrow\lambda^{-1}\vec{v}$, where $\vec{h}_{\perp}$ has scaling weight $0$. The hyperbolic vector equation \eqref{sosghyperboliceq} admits the vector mKdV equation \eqref{somkdveq} as a higher symmetry, as shown by the symmetry-integrability classif\/ication results in \cite{AncoWolf}. As a~consequence of Corollary~\ref{cor1}, we see that the recursion operator ${\mathcal R}={\mathcal H}\circ{\mathcal J}$ generates a hierarchy of vector mKdV symmetries \begin{gather} \vec{v}_{t}^{(0)} = \vec{v}_x , \label{so0flow}\\ \vec{v}_{t}^{(1)} = {\mathcal R}(\vec{v}_x) = \vec{v}_{3x} + \frac{3}{2}|\vec{v}|^2 \vec{v}_x , \label{so1flow}\\ \vec{v}_{t}^{(2)} = {\mathcal R}^{2}(\vec{v}_x) = \vec{v}_{5x} +\frac{5}{2}(|\vec{v}|^2\vec{v}_{2x})_x + \frac{5}{2}\left((|\vec{v}|^2)_{xx} -|\vec{v}_x|^2 + \frac{3}{4}|\vec{v}|^4\right)\vec{v}_x , \label{so2flow} \end{gather} and so on, all of which commute with the $-1$ f\/low \begin{gather}\label{so-1flow} \vec{v}_{t}^{(-1)} =\vec{h}_{\perp} \end{gather} associated to the vector SG equation \eqref{soSGeq}. 
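As a consistency check of the hierarchy, the scalar reduction of ${\mathcal R}^2(\vec{v}_x)$ can be computed symbolically; it must reproduce the known fifth-order scalar mKdV flow. The sketch below is our own illustration: the antiderivative needed for $\Dinv{x}$ is supplied explicitly and verified by differentiation, rather than obtained by symbolic integration.

```python
# Illustrative check (not from the paper): scalar (N = 2) reduction of R^2(v_x),
# which must agree with the fifth-order flow of the scalar mKdV hierarchy.
import sympy as sp

x = sp.symbols('x')
v = sp.Function('v')(x)
Dx = lambda f: sp.diff(f, x)
vx, v2x, v3x = Dx(v), sp.diff(v, x, 2), sp.diff(v, x, 3)

# first flow: R(v_x) = v_3x + (3/2) v^2 v_x
h1 = v3x + sp.Rational(3, 2) * v**2 * vx
# antiderivative for D_x^{-1}(v h1), verified by differentiation
I = v * v2x - vx**2 / 2 + sp.Rational(3, 8) * v**4
assert sp.simplify(Dx(I) - v * h1) == 0

# second flow: R(h1) = h1'' + v^2 h1 + D_x^{-1}(v h1) v_x  (wedge term absent)
flow2 = sp.expand(Dx(Dx(h1)) + v**2 * h1 + I * vx)
expected = sp.expand(sp.diff(v, x, 5) + sp.Rational(5, 2) * v**2 * v3x
                     + 10 * v * vx * v2x + sp.Rational(5, 2) * vx**3
                     + sp.Rational(15, 8) * v**4 * vx)
assert sp.simplify(flow2 - expected) == 0
```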
Moreover the adjoint operator ${\mathcal R}^* ={\mathcal J}\circ{\mathcal H}$ generates a hierarchy of mKdV Hamiltonians \begin{gather*} H^{(0)} = \frac{1}{2}|\vec{v}|^2 , \\ H^{(1)} = -\frac{1}{2}|\vec{v}_x|^2+\frac{1}{8}|\vec{v}|^4 , \\ H^{(2)} = \frac{1}{2}|\vec{v}_{2x}|^2-\frac{3}{4}|\vec{v}|^2|\vec{v}_x|^2 -\frac{1}{2}(\vec{v}\cdot \vec{v}_x)^2 + \frac{1}{16}|\vec{v}|^6 , \end{gather*} and so on, all of which are conserved densities for the $-1$ f\/low. It follows that the hyperbolic vector equations \eqrefs{sosghyperboliceq}{soSGeq} admit these respective hierarchies of vector mKdV symmetries and conserved densities. Viewed as f\/lows, the entire hierarchy of vector PDEs \eqref{so-1flow}, \eqsref{so0flow}{so2flow}, etc. possesses the mKdV scaling symmetry $x\rightarrow\lambda x$, $\vec{v}\rightarrow\lambda^{-1}\vec{v}$, with $t\rightarrow\lambda^{1+2k} t$ for $k=-1,0,1,2$, etc. Moreover for $k\geq 0$, all these expressions will be local polynomials in the variables $\vec{v},\vec{v}_x,\vec{v}_{xx},\ldots$ as established by general results in \cite{Wang-thesis,Sergyeyev} concerning nonlocal recursion operators. \begin{theorem}\label{thm2} In the symmetric space $SO(N+1)/SO(N)$ there is a hierarchy of bi-Hamiltonian flows of curves $\gamma(t,x)$ described by geometric map equations. The $0$ flow is a convective (traveling wave) map \eqref{convmap}, while the $+1$ flow is a non-stretching mKdV map \eqref{mkdvmap} and the $+2,\ldots$ flows are higher order analogs. The kernel of the recursion operator \eqref{soRop} in the hierarchy yields the $-1$ flow which is a non-stretching wave map \eqref{wavemap}. \end{theorem} \section[Bi-Hamiltonian operators and vector soliton equations for $SU(N)/SO(N)$]{Bi-Hamiltonian operators and vector soliton equations\\ for $\boldsymbol{SU(N)/SO(N)}$} Recall $\vs{su}(k)$ is a real vector space isomorphic to the Lie algebra of $k\times k$ trace-free skew-hermitian matrices. 
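The relation $\vec{\varpi}^{(k)}=\delta H^{(k)}/\delta\vec{v}$ can be checked symbolically in the scalar reduction. The sketch below (our own illustration; `var_deriv` implements the Euler operator truncated at second-order derivatives, which suffices for these densities) verifies it for $H^{(1)}$ and $H^{(2)}$.

```python
# Illustrative check (not from the paper): scalar variational derivatives of
# H^(1), H^(2) reproduce varpi^(1) = v_2x + v^3/2 and varpi^(2).
import sympy as sp

x = sp.symbols('x')
v = sp.Function('v')(x)
Dx = lambda f: sp.diff(f, x)
vx, v2x = Dx(v), sp.diff(v, x, 2)

def var_deriv(H):
    """Euler operator: dH/dv - D_x dH/dv_x + D_x^2 dH/dv_2x."""
    return sp.diff(H, v) - Dx(sp.diff(H, vx)) + Dx(Dx(sp.diff(H, v2x)))

H1 = -vx**2 / 2 + v**4 / 8
H2 = v2x**2 / 2 - sp.Rational(3, 4) * v**2 * vx**2 - (v * vx)**2 / 2 + v**6 / 16

assert sp.simplify(var_deriv(H1) - (v2x + v**3 / 2)) == 0
varpi2 = (sp.diff(v, x, 4) + sp.Rational(5, 2) * v**2 * v2x
          + sp.Rational(5, 2) * v * vx**2 + sp.Rational(3, 8) * v**5)
assert sp.simplify(var_deriv(H2) - varpi2) == 0
```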
The real and imaginary parts of these matrices respectively belong to the real vector space $\vs{so}(k)$ of skew-symmetric matrices and the real vector space $\vs{s}(k) \simeq \vs{su}(k)/\vs{so}(k)$ def\/ined by $k\times k$ symmetric trace-free matrices. Hence $\vs{g}= \vs{su}(N)$ has the decomposition $\vs{g} =\vs{h} +{\rm i} \vs{p}$ where $\vs{h}=\vs{so}(N)$ and $\vs{p} =\vs{s}(N)$. The Cartan--Killing inner product is given by the trace of the product of an $\vs{su}(N)$ matrix and a hermitian-transpose $\vs{su}(N)$ matrix, multiplied by $1/2$. Note any matrix in $\vs{s}(N)$ can be diagonalized under the action of the group $SO(N)$. Let $\gamma(t,x)$ be a f\/low of a non-stretching curve in $M=SU(N)/SO(N)$ where we identify $T_x M \simeq \vs{p}$ (dropping a factor ${\rm i}$ for simplicity\footnote{Retaining the ${\rm i}$ in this identif\/ication will change only the sign of the scalar curvature factor $\chi$ in the f\/low equation.}). We consider a $SO(N)$-parallel coframe $\coframe{} \in T^*_\gamma M\otimes\vs{p}$ and its associated connection $1$-form $\conx{}{} \in T^*_\gamma M\otimes\vs{so}(N)$ along $\gamma$ given by\footnote{As before, $\conx{}{}$ is related to $\coframe{}$ by the Riemannian covariant derivative \eqref{ewrelation} on the surface swept out by the curve f\/low $\gamma(t,x)$.} \begin{gather} \xframe{} = \gamma_x \lrcorner \coframe{}= \kappa\left( \begin{pmatrix} -1 & \vec{0}\\ \trans{\vec{0}} & \bdsymb{0} \end{pmatrix} + \frac{1}{N} {\mathbb I} \right) =\frac{\kappa}{N} \begin{pmatrix} 1-N & \vec{0}\\ \trans{\vec{0}} & \bdsymb{1} \end{pmatrix} \in \vs{p} ,\label{suframe}\\ \bdsymb{0},{\rm i}\bdsymb{1} \in \vs{u}(N-1) ,\qquad \vec{0} \in \Rnum{N-1}\nonumber \end{gather} up to a normalization factor $\kappa$ which we will f\/ix shortly, and \begin{gather}\label{suconx} \xconx{}{} = \begin{pmatrix} 0 & \vec{v}\\ -\trans{\vec{v}} & \bdsymb{0} \end{pmatrix} \in \vs{so}(N) ,\qquad \vec{v} \in \Rnum{N-1} . 
\end{gather} Since the form of $\xframe{}$ is a constant matrix, it indicates that the coframe is adapted to $\gamma$ provided~$\xframe{}$ has unit norm in the Cartan--Killing inner product. We have \begin{gather}\label{normalize} \langle \xframe{},\xframe{}\rangle_{\vs{p}} = \frac{\kappa^2}{2} {\rm tr} \begin{pmatrix}(N^{-1}-1)^2 & 0\\ 0 & N^{-2}\bdsymb{1} \end{pmatrix} = \kappa^2 (N-1)/(2N)=1 \end{gather} after putting $\kappa^2=2 N(N-1)^{-1}$. As a consequence, all matrices in $\vs{p}=\vs{s}(N)$ will have a canonical decomposition into tangential and normal parts relative to $\xframe{}$, \begin{gather*} \begin{pmatrix} (N^{-1}-1) p_{\parallel} & \vec{p}_{\perp}\\ \trans{\vec{p}_{\perp}} & \bdsymb{p_{\perp}}+N^{-1} p_{\parallel}\bdsymb{1} \end{pmatrix} =\frac{1}{N} \begin{pmatrix} (1-N) p_{\parallel} & \vec{0}\\ \trans{\vec{0}} & p_{\parallel}\bdsymb{1} \end{pmatrix} + \begin{pmatrix} 0 &\vec{p}_{\perp}\\ \trans{\vec{p}_{\perp}} &\bdsymb{p_{\perp}} \end{pmatrix} \end{gather*} parameterized by the $(N-1)\times (N-1)$ matrix $\bdsymb{p_{\perp}} \in \vs{s}(N-1)$ and the $N$ component vector $(p_{\parallel},\vec{p}_{\perp}) \in \Rnum{N}$, corresponding to the decomposition $\vs{s}(N) = \vs{s}(N)_{\parallel} \oplus \vs{s}(N)_{\perp}$ given by $\langle \vs{s}(N)_{\perp},\xframe{}\rangle_{\vs{p}}=0$ and $\langle\vs{s}(N)_{\parallel},\xframe{}\rangle_{\vs{p}}=\kappa^{-1} p_{\parallel}$ under the previous normalization of $\xframe{}$. Here $(p_\parallel,\bdsymb{p_\perp})$ is identif\/ied with $\vs{p}_C \supset \vs{p}_\parallel$, and $\vec{p}_\perp$ with $\vs{p}_{C^\perp}\subset \vs{p}_\perp$. Note $\bdsymb{p_\perp}$ is trivial only if $N=2$, so consequently for $N>2$ the $SO(N)$-parallel frame \eqrefs{suframe}{suconx} is a strict generalization of a~Riemannian parallel frame. 
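The normalization fixing $\kappa$ can be verified numerically; the sketch below (our own illustration) checks that $\xframe{}$ is trace-free and has unit norm with $\kappa^2=2N(N-1)^{-1}$.

```python
# Illustrative check (not from the paper): with kappa^2 = 2N/(N-1), the diagonal
# matrix e_x = (kappa/N) diag(1-N, 1, ..., 1) in s(N) is trace-free and unit-norm
# in the normalized trace form (1/2) tr(A^T A).
import numpy as np

N = 6
kappa = np.sqrt(2 * N / (N - 1))
ex = (kappa / N) * np.diag(np.r_[1 - N, np.ones(N - 1)])
assert np.isclose(np.trace(ex), 0)                 # trace-free
assert np.isclose(0.5 * np.trace(ex.T @ ex), 1.0)  # unit norm
```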
In the f\/low direction we put \begin{gather} \tframe{} = \gamma_{t} \lrcorner \coframe{} = \kappa\left( h_\parallel \begin{pmatrix} N^{-1}-1 & \vec{0} \\ \trans{\vec{0}} & N^{-1}\bdsymb{1} \end{pmatrix} + \begin{pmatrix} 0 &\vec{h}_{\perp} \\ \trans{\vec{h}_{\perp}} & \bdsymb{h}_{\perp} \end{pmatrix} \right) , \nonumber\\ \phantom{\tframe{}}{} = \kappa \begin{pmatrix} (N^{-1}-1)h_\parallel & \vec{h}_{\perp} \\ \trans{\vec{h}_{\perp}} & \bdsymb{h}_{\perp}+N^{-1}h_\parallel\bdsymb{1} \end{pmatrix} \in \vs{p}=\vs{s}(N), \label{suet}\\ (h_\parallel,\vec{h}_{\perp}) \in \Rnum{N} ,\qquad \bdsymb{h}_{\perp} \in \vs{s}(N-1) \nonumber \end{gather} and \begin{gather}\label{suflowconx} \tconx{}{} = \gamma_{t} \lrcorner \conx{}{} = \begin{pmatrix} 0 & \vec{\varpi} \\ -\trans{\vec{\varpi}} & \bdsymb{\Theta} \end{pmatrix} \in \vs{so}(N) ,\qquad \vec{\varpi} \in \Rnum{N-1} ,\qquad \bdsymb{\Theta} \in \vs{so}(N-1) . \end{gather} Note the components $h_\parallel$, $(\vec{h}_{\perp},\bdsymb{h}_{\perp})$ correspond to decomposing $\tframe{} = g(\gamma_{t},\gamma_{x})\xframe{}+(\gamma_{t})_{\perp} \lrcorner \coframe{\perp}$ into tangential and normal parts relative to $\xframe{}$. We thus have \begin{gather*} [\xframe{},\tframe{}] = -\kappa^2 \begin{pmatrix} 0 &\vec{h}_{\perp} \\ -\trans{\vec{h}_{\perp}} & \bdsymb{0} \end{pmatrix} \in \vs{so}(N) , \\ [\xconx{}{},\tframe{}] = \kappa \begin{pmatrix} 2\vec{h}_{\perp} \cdot \vec{v} & \vec{v} \lrcorner \bdsymb{h}_{\perp} + h_\parallel \vec{v} \\ \trans{(\vec{v} \lrcorner \bdsymb{h}_{\perp} + h_\parallel \vec{v})} & -(\vec{v} \otimes \vec{h}_{\perp} + \vec{h}_{\perp} \otimes \vec{v}) \end{pmatrix} \in \vs{s}(N) , \\ [\tconx{}{},\xframe{}] = \kappa \begin{pmatrix}0 & \vec{\varpi} \\ \trans{\vec{\varpi}} & \bdsymb{0} \end{pmatrix} \in \vs{s}(N)_{\perp} . 
\end{gather*} Now the curvature equation \eqref{cartancurv} yields \begin{gather} \D{t}\vec{v} - \D{x}\vec{\varpi} - \vec{v} \lrcorner\bdsymb{\Theta} = \kappa^2\vec{h}_{\perp} , \label{suveq}\\ -\D{x}\bdsymb{\Theta}+\vec{v}\otimes\vec{\varpi}-\vec{\varpi}\otimes\vec{v} = 0 , \label{suthetaeq} \end{gather} which are unchanged from the case $G=SO(N+1)$ up to the factor in front of $\vec{h}_{\perp}$. The torsion equation \eqref{cartantors} reduces to \begin{gather} 0 =2\kappa^{-2} \D{x}h_\parallel - 2\vec{v}\cdot\vec{h}_{\perp} , \label{suhpareq}\\ 0 =\vec{\varpi} - h_\parallel\vec{v} - \D{x}\vec{h}_{\perp} - \vec{v} \lrcorner \bdsymb{h}_{\perp} , \label{suweq} \end{gather} which are similar to those in the case $G=SO(N+1)$, plus \begin{gather}\label{suheq} 0 = -\D{x}(\bdsymb{h}_{\perp}+N^{-1}h_\parallel \bdsymb{1}) +\vec{v}\otimes\vec{h}_{\perp}+\vec{h}_{\perp}\otimes\vec{v} . \end{gather} Proceeding as before, we use equations \eqref{suthetaeq}, \eqref{suhpareq}, \eqref{suheq} to eliminate \begin{gather} \bdsymb{\Theta} = \Dinv{x}(\vec{v} \otimes \vec{\varpi} - \vec{\varpi} \otimes \vec{v}) , \label{sutheta}\\ h_\parallel = \kappa^2 \Dinv{x}(\vec{v}\cdot \vec{h}_{\perp}) , \nonumber\\ \bdsymb{h}_{\perp} = \Dinv{x}(2(1-N)^{-1} \vec{v} \cdot \vec{h}_{\perp} \bdsymb{1}+\vec{v}\otimes\vec{h}_{\perp} + \vec{h}_{\perp}\otimes\vec{v})\nonumber \end{gather} in terms of the variables $\vec{v}$, $\vec{h}_{\perp}$, $\vec{\varpi}$. Then equation \eqref{suveq} gives a f\/low on $\vec{v}$, \begin{gather*} \vec{v}_t= \D{x}\vec{\varpi} + \vec{v}\lrcorner \Dinv{x}(\vec{v} \otimes \vec{\varpi} - \vec{\varpi}\otimes \vec{v})+ \kappa^2\vec{h}_{\perp} \end{gather*} with \begin{gather*} \vec{\varpi} = \D{x}\vec{h}_{\perp}+2\Dinv{x}(\vec{v} \cdot \vec{h}_{\perp})\vec{v} + \vec{v} \lrcorner \Dinv{x}(\vec{v} \otimes \vec{h}_{\perp} + \vec{h}_{\perp} \otimes \vec{v}) \end{gather*} obtained from equation \eqref{suweq} after we combine $h_\parallel\vec{v}$ terms. 
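The $\vs{su}(N)$ bracket relations underlying these reductions can be spot-checked numerically. The following sketch (our own illustration, with random component values; `conn` is an ad hoc helper) verifies $[\xframe{},\tframe{}]$ and the block entries of $[\xconx{}{},\tframe{}]$.

```python
# Illustrative check (not from the paper) of the bracket relations for
# SU(N)/SO(N), in the identification T_x M ~ p = s(N) with the factor i dropped.
import numpy as np

rng = np.random.default_rng(3)
N = 5
kappa = np.sqrt(2 * N / (N - 1))
v, hperp = rng.standard_normal((2, N - 1))
hpar = rng.standard_normal()
B = rng.standard_normal((N - 1, N - 1))
B = B + B.T
B -= np.trace(B) / (N - 1) * np.eye(N - 1)       # bold h_perp in s(N-1)

ex = (kappa / N) * np.diag(np.r_[1 - N, np.ones(N - 1)])   # e_x
et = np.zeros((N, N))                                       # e_t
et[0, 0] = (1 / N - 1) * hpar
et[0, 1:] = hperp; et[1:, 0] = hperp
et[1:, 1:] = B + hpar / N * np.eye(N - 1)
et *= kappa
wx = np.zeros((N, N)); wx[0, 1:] = v; wx[1:, 0] = -v        # omega_x
br = lambda A, M: A @ M - M @ A

# [e_x, e_t] = -kappa^2 [[0, h_perp], [-h_perp^T, 0]]
ref = np.zeros((N, N)); ref[0, 1:] = hperp; ref[1:, 0] = -hperp
assert np.allclose(br(ex, et), -kappa**2 * ref)

# block entries of [omega_x, e_t]
comm = br(wx, et)
assert np.isclose(comm[0, 0], kappa * 2 * (hperp @ v))
assert np.allclose(comm[0, 1:], kappa * (v @ B + hpar * v))
assert np.allclose(comm[1:, 1:], -kappa * (np.outer(v, hperp) + np.outer(hperp, v)))
```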
We thus read of\/f the operators \begin{gather} {\mathcal H} = \D{x}+\vec{v} \lrcorner \Dinv{x}(\vec{v} \wedge\ ) ,\qquad {\mathcal J} = \D{x}+2\Dinv{x}(\vec{v} \cdot\ )\vec{v} + \vec{v} \lrcorner\Dinv{x}(\vec{v} \odot\ ) , \label{suHJop} \end{gather} where $\vec{A} \wedge \vec{B} = \vec{A} \otimes \vec{B} - \vec{B}\otimes\vec{A}$ and $\vec{A} \odot \vec{B} = \vec{A} \otimes\vec{B}+\vec{B}\otimes\vec{A}$. \begin{proposition} The results in Theorem~\ref{thm1} and Corollary~\ref{cor1} carry over verbatim (with the same method of proof used in \cite{SandersWang2}) for the operators ${\mathcal H}$ and ${\mathcal J}$ here, up to a change in the scalar curvature factor \begin{gather*} \chi=-\kappa^2 = 2N/(1-N) \end{gather*} connected with the Riemannian geometry of $SU(N)/SO(N)$.\footnote{ Restoring ${\rm i}$ in the identification $T_x M \simeq {\rm i}\vs{p}$ will change the sign of $\chi$. } \end{proposition} In particular, ${\mathcal R}={\mathcal H} \circ {\mathcal J}$ yields a~hereditary recursion operator \begin{gather} {\mathcal R} = \D{x}(\D{x}+2\Dinv{x}(\vec{v} \cdot\ )\vec{v} + \vec{v} \lrcorner\Dinv{x}(\vec{v} \odot\ )) + \vec{v} \lrcorner \Dinv{x}(\vec{v} \wedge (\D{x}+\vec{v} \lrcorner \Dinv{x}(\vec{v} \odot\ ))) \nonumber\\ \phantom{{\mathcal R}}{}=\D{x}^2 +2(|\vec{v}|^2 +(\vec{v}\cdot\ )\vec{v}) +2\Dinv{x}(\vec{v} \cdot\ )\vec{v}_x + \vec{v}_x \lrcorner\Dinv{x}(\vec{v} \odot\ )\nonumber\\ \phantom{{\mathcal R}=}{} + \vec{v} \lrcorner \Dinv{x}(\vec{v} \wedge (\vec{v} \lrcorner \Dinv{x}(\vec{v} \odot\ )) -\vec{v}_x \wedge\ )\label{suRop} \end{gather} generating a hierarchy of $O(N-1)$-invariant commuting bi-Hamiltonian f\/lows on $\vec{v}$, corresponding to commuting Hamiltonian vector f\/ields $\vec{h}_{\perp}^{(k)}\lrcorner\partial/\partial\vec{v}$ and involutive covector f\/ields $\vec{\varpi}^{(k)}\lrcorner d\vec{v}$, $k=0,1,2,\ldots$ starting from $\vec{h}_{\perp}^{(0)}=\vec{v}_x$, $\vec{\varpi}^{(0)}=\vec{v}$. 
In the terminology of \cite{AncoWolf}, $\vec{h}_{\perp}^{(k)}$ is said to produce the $+(k+1)$ f\/low equation \eqref{floweq} on $\vec{v}$ (cf.\ Remark~\ref{rem1}). Note these f\/lows admit the same mKdV scaling symmetry $x\rightarrow\lambda x$, $\vec{v}\rightarrow\lambda^{-1}\vec{v}$ as in the case $SO(N+1)/SO(N)$. They also have similar recursion relations $\vec{h}^{(k)}_{\perp} = {\mathcal H}(\vec{\varpi}^{(k)})$, $\vec{\varpi}^{(k+1)}={\mathcal J}(\vec{h}^{(k)}_{\perp}) = \delta H^{(k+1)}/\delta \vec{v}$, $k=0,1,2,\ldots$, in terms of Hamiltonians $H =H^{(k)}(\vec{v},\vec{v}_{x},\vec{v}_{2x},\ldots)$. The $+1$ f\/low is given by $\vec{h}_{\perp}=\vec{v}_x$, yielding \begin{gather}\label{sumkdveq} \vec{v}_t = \vec{v}_{3x}+3|\vec{v}|^2\vec{v}_x+3(\vec{v} \cdot\vec{v}_x)\vec{v} - \chi\vec{v}_x . \end{gather} Up to the convective term, which can be absorbed by a Galilean transformation, this is a dif\/ferent vector mKdV equation compared to the one arising in the case $SO(N+1)/SO(N)$ for $N>2$. The $+(k+1)$ f\/low yields a higher order version of this equation \eqref{sumkdveq}. The hierarchy of f\/lows corresponds to geometrical motions of the curve $\gamma(t,x)$ obtained in a similar fashion to the ones in the case $SO(N+1)/SO(N)$ via identifying $\gamma_t \leftrightarrow\tframe{}$, $\gamma_x \leftrightarrow \xframe{}$, $\covder{x}\gamma_x \leftrightarrow[\xconx{}{},\xframe{}]=\covD{x}\xframe{}$, and so on as before, where $\covder{x} \leftrightarrow \covD{x}=\D{x}+[\xconx{}{},\xframe{}]$. Note here we have \begin{gather*} [\xconx{}{},\xframe{}] = \kappa \begin{pmatrix} 0 & \vec{v} \\\trans{\vec{v}} & \bdsymb{0} \end{pmatrix} , \qquad [\xconx{}{},[\xconx{}{},\xframe{}]] = 2\kappa \begin{pmatrix} |\vec{v}|^2 & \vec{0} \\ \vec{0} & -\vec{v} \otimes \vec{v} \end{pmatrix} , \nonumber \end{gather*} and so on. 
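As a quick consistency check, in the scalar case $N=2$ equation \eqref{sumkdveq} reduces to the ordinary mKdV equation: with a single component $v$ one has $|\vec{v}|^2\vec{v}_x=(\vec{v}\cdot\vec{v}_x)\vec{v}=v^2 v_x$, so
\begin{gather*}
v_t = v_{3x} + 6v^2 v_x - \chi v_x ,\qquad \chi = 2N/(1-N) = -4 ,
\end{gather*}
which is the scalar mKdV equation up to the convective term $4v_x$.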
In addition, \begin{gather*} {\rm ad}([\xconx{}{},\xframe{}])\xframe{} = \kappa^2 \begin{pmatrix} 0 & \vec{v} \\ -\trans{\vec{v}} & \bdsymb{0} \end{pmatrix} , \\ \nonumber {\rm ad}([\xconx{}{},\xframe{}])^2\xframe{} = -2\kappa^3 \begin{pmatrix}| \vec{v}|^2 & \vec{0} \\ \vec{0} & -\vec{v} \otimes \vec{v} \end{pmatrix} = \chi [\xconx{}{},[\xconx{}{},\xframe{}]] . \end{gather*} Thus, for the $+1$ f\/low, \begin{gather*} \vec{h}_{\perp}=\vec{v}_x ,\qquad h_\parallel= \frac{1}{2}\kappa^2 |\vec{v}|^2 ,\qquad \bdsymb{h}_{\perp}= \vec{v} \otimes \vec{v} +(1-N)^{-1} |\vec{v}|^2 \bdsymb{1} , \end{gather*} we obtain (through equation \eqref{suet}) \begin{gather*} \tframe{} = \kappa \begin{pmatrix} (N^{-1}-1)h_\parallel & \vec{h}_{\perp} \\ \trans{\vec{h}_{\perp}} & \bdsymb{h}_{\perp}+N^{-1}h_\parallel\bdsymb{1} \end{pmatrix} =\kappa \begin{pmatrix} -|\vec{v}|^2 & \vec{v}_x \\\trans{\vec{v}_x} & \vec{v} \otimes \vec{v} \end{pmatrix} \\ \phantom{\tframe{}}{} = \D{x}[\xconx{}{},\xframe{}] -\frac{1}{2}[\xconx{}{},[\xconx{}{},\xframe{}]] . \nonumber \end{gather*} Then writing these expressions in terms of $\covD{x}$ and ${\rm ad}([\xconx{}{},\xframe{}])$, we get \begin{gather*} \tframe{} = \covD{x}[\xconx{}{},\xframe{}] -\frac{3}{2}\chi^{-1} {\rm ad}([\xconx{}{},\xframe{}])^2\xframe{} \leftrightarrow \covder{x}^2\gamma_x -\frac{3}{2}\chi^{-1}{\rm ad}(\covder{x}\gamma_x)^2\gamma_x . \end{gather*} Thus, up to a sign, $\gamma(t,x)$ satisf\/ies a geometric nonlinear PDE given by the non-stretching mKdV map equation \eqref{symmspmkdvmap} on the symmetric space $SU(N)/SO(N)$. The higher f\/lows on $\vec{v}$ determine higher order map equations for $\gamma$. The $0$ f\/low as before is $\vec{v}_t=\vec{v}_x$ arising from $\vec{h}_{\perp}=0,h_\parallel=1$, which corresponds to the convective (traveling wave) map \eqref{convmap}. 
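As a check of the identity ${\rm ad}([\xconx{}{},\xframe{}])^2\xframe{} = \chi [\xconx{}{},[\xconx{}{},\xframe{}]]$ used above, the factor $\chi$ can be read off directly by comparing the two displayed matrices:
\begin{gather*}
{\rm ad}([\xconx{}{},\xframe{}])^2\xframe{} = -2\kappa^3 \begin{pmatrix} |\vec{v}|^2 & \vec{0} \\ \vec{0} & -\vec{v} \otimes \vec{v} \end{pmatrix} = -\kappa^2 \left( 2\kappa \begin{pmatrix} |\vec{v}|^2 & \vec{0} \\ \vec{0} & -\vec{v} \otimes \vec{v} \end{pmatrix} \right) ,
\end{gather*}
confirming $\chi=-\kappa^2$.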
There is also a $-1$ f\/low contained in the hierarchy, with the property that $\vec{h}_{\perp}$ is annihilated by the symplectic operator ${\mathcal J}$ and hence lies in the kernel ${\mathcal R}(\vec{h}_{\perp})=0$ of the recursion operator. The geometric meaning of this f\/low is simply ${\mathcal J}(\vec{h}_{\perp})=\vec{\varpi}=0$ implying $\tconx{}{}=0$ from equations \eqrefs{suflowconx}{sutheta} so $0=[\tconx{}{},\xframe{}]=\covD{t}\xframe{}$ where $\covD{t} = \D{t}+ [\tconx{}{},\cdot]$. Thus, as in the case $SO(N+1)/SO(N)$, we see from the correspondence $\covder{t} \leftrightarrow \covD{t}$, $\gamma_x\leftrightarrow \xframe{}$ that $\gamma(t,x)$ satisf\/ies a nonlinear geometric PDE given by the wave map equation \eqref{wavemap} on the symmetric space $SU(N)/SO(N)$. The $-1$ f\/low equation produced on $\vec{v}$ is again a nonlocal evolution equation \begin{gather}\label{suvsgflow} \vec{v}_t=-\chi \vec{h}_{\perp} ,\qquad \chi=-\kappa^2 \end{gather} with $\vec{h}_{\perp}$ satisfying \begin{gather}\label{suwsgflow} 0 = \vec{\varpi} =\D{x} \vec{h}_{\perp} + h\vec{v} + \vec{v} \lrcorner \bdsymb{h} \end{gather} where it is convenient to introduce the variables \begin{gather*} \bdsymb{h} = \bdsymb{h}_{\perp}+N^{-1}h_\parallel\bdsymb{1} ,\qquad h=2\kappa^{-2}h_\parallel={\rm tr}\bdsymb{h} \end{gather*} which satisfy \begin{gather} \D{x}h = 2\vec{v} \cdot\vec{h}_{\perp} , \label{suhparsgflow}\\ \D{x} \bdsymb{h} = \vec{v} \otimes\vec{h}_{\perp} + \vec{h}_{\perp} \otimes \vec{v} . \label{suhsgflow} \end{gather} These equations \eqsref{suwsgflow}{suhsgflow} determine the variables $\vec{h}_{\perp}$, $h$, $\bdsymb{h}$ implicitly as nonlocal functions of $\vec{v}$ (and its $x$ derivatives). To proceed, we will seek an inverse local expression for $\vec{v}$ in terms of $\vec{h}_\perp$, analogous to the one that arises in the case $SO(N+1)/SO(N)$. 
However, the presence of the additional variable $\bdsymb{h}$ here leads to a quite dif\/ferent expression for the resulting f\/low on $\vec{v}$. Let \begin{gather}\label{suvsgeq} \vec{v} = \alpha \D{x}\vec{h}_{\perp} +\beta \vec{h}_{\perp} \end{gather} for some expressions $\alpha(h)$, $\beta(h)$. Substitution of $\vec{v}$ into equation \eqref{suhsgflow} yields $\D{x}( \bdsymb{h} -\alpha \vec{h}_{\perp} \otimes\vec{h}_{\perp} ) = (2\beta-\D{x}\alpha) \vec{h}_{\perp} \otimes\vec{h}_{\perp}$ which is satisfied by $\beta=\frac{1}{2}\D{x}\alpha$ and \begin{gather}\label{suhsgeq} \bdsymb{h} = \alpha \vec{h}_{\perp} \otimes\vec{h}_{\perp} +c \bdsymb{1} \end{gather} where $c$ is a constant of integration (and $\bdsymb{1}$ is the only available constant matrix that is $O(N-1)$-invariant). Then, substitution of $\bdsymb{h}$ and $\vec{v}$ into equation \eqref{suwsgflow} gives \begin{gather}\label{sualphaeq} \alpha = -(h+c)^{-1} ,\qquad \beta = \frac{1}{2}(h+c)^{-2}\D{x}h ,\qquad c ={\rm const.} \end{gather} which also satisfies equation \eqref{suhparsgflow}. Next, by taking the trace of $\bdsymb{h}$ from equation \eqref{suhsgeq} and using equation \eqref{sualphaeq}, we obtain \begin{gather}\label{suhperpsgeq} |\vec{h}_{\perp}|^2 = Nc (h+c)-(h+c)^2 \end{gather} which enables $h$ to be expressed in terms of $\vec{h}_{\perp}$ and $c$. To determine $c$ we use the wave map conservation law \eqref{wavemapconslaw} where, now, \begin{gather*} |\gamma_t|_{g}^{2} = \langle\tframe{},\tframe{}\rangle_\vs{p} = \kappa^2( |\vec{h}_{\perp}|^2 +\frac{1}{2}( h^2+|\bdsymb{h}|^2) ) . \end{gather*} This corresponds to a conservation law admitted by equations \eqsref{suwsgflow}{suhsgflow}, \begin{gather*} 0 =\D{x}\left( |\vec{h}_{\perp}|^2 +\frac{1}{2}\big( h^2+|\bdsymb{h}|^2\big) \right) , \end{gather*} and as before, a conformal scaling of $t$ can now be used to put $|\gamma_t|_{g}$ equal to a constant. 
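The trace computation leading to equation \eqref{suhperpsgeq} is short. Taking the trace of equation \eqref{suhsgeq}, with ${\rm tr} \bdsymb{1}=N-1$ for the identity on the $(N-1)$-dimensional perpendicular space, gives $h=\alpha|\vec{h}_{\perp}|^2+(N-1)c$, so from $\alpha=-(h+c)^{-1}$,
\begin{gather*}
|\vec{h}_{\perp}|^2 = -(h+c)\big( h-(N-1)c \big) = Nc (h+c)-(h+c)^2 ,
\end{gather*}
using $h-(N-1)c = (h+c)-Nc$. Similarly, $\beta=\frac{1}{2}\D{x}\alpha = \frac{1}{2}(h+c)^{-2}\D{x}h$ recovers the expression in equation \eqref{sualphaeq}.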
A~convenient value which simplif\/ies subsequent expressions is $|\gamma_t|_{g}=2$, so then \begin{gather*} (2/\kappa)^2 =|\vec{h}_{\perp}|^2 +\frac{1}{2}\big( |\bdsymb{h}|^2+h^2 \big) . \end{gather*} Substitution of equations \eqsref{suhsgeq}{suhperpsgeq} into this expression yields \begin{gather*} c^2 = (2/N)^2 \end{gather*} from which we obtain via equation \eqref{suhperpsgeq} \begin{gather*} h= 2N^{-1} -1 \pm \sqrt{1-|\vec{h}_{\perp}|^2} ,\qquad \alpha =|\vec{h}_{\perp}|^{-2} \Big( 1 \pm \sqrt{1-|\vec{h}_{\perp}|^2} \Big) . \end{gather*} These variables can then be expressed in terms of $\vec{v}$ through the f\/low equation \eqref{suvsgflow}, namely $|\vec{h}_{\perp}|^2 = \chi^{-2}|\vec{v}_t|^2$. Finally, we note equations \eqref{suvsgeq} and \eqref{sualphaeq} yield the explicit relation \begin{gather}\label{suvhsgeq} \vec{v} = \sqrt{\alpha} \D{x}(\sqrt{\alpha}\vec{h}_{\perp}) . \end{gather} Hence the f\/low equation on $\vec{v}$ becomes \begin{gather}\label{sunonlocalvflow} \vec{v}_t=\sqrt{A_{\pm}} \Dinv{x}\big(\sqrt{A_{\pm}}\vec{v}\big) \end{gather} where \begin{gather*} A_{\pm}= 1 \pm \sqrt{1-|\vec{v}_t|^2} =|\vec{v}_t|^2 /A_{\mp} , \end{gather*} with the factor $\chi$ having been absorbed by a scaling of $t$. This nonlocal evolution equation \eqref{sunonlocalvflow} for the $-1$ f\/low is equivalent to the vector SG equation \begin{gather*} \big(\sqrt{A_{\mp}}\vec{v}_t\big)_x=\sqrt{A_{\pm}}\vec{v} \end{gather*} or in hyperbolic form \begin{gather}\label{susghyperboliceq} \vec{v}_{tx}=A_{\pm}\vec{v} - A_{\mp}|\vec{v}_t|^{-2}(\vec{v} \cdot \vec{v}_t)\vec{v}_t . \end{gather} Alternatively, through relations \eqref{suvhsgeq} and \eqref{sualphaeq}, $\vec{h}_{\perp}$ obeys a vector SG equation \begin{gather}\label{suSGeq} (\sqrt{\alpha}(\sqrt{\alpha} \vec{h}_{\perp})_x)_t =\vec{h}_{\perp} .
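The two roots $A_{\pm}$ are related through a short algebraic identity: since
\begin{gather*}
A_{+}A_{-} = \big( 1+\sqrt{1-|\vec{v}_t|^2} \big)\big( 1-\sqrt{1-|\vec{v}_t|^2} \big) = |\vec{v}_t|^2 ,
\end{gather*}
one has $\sqrt{A_{\pm}} = |\vec{v}_t|/\sqrt{A_{\mp}}$, which is the relation used in passing between the evolutionary form \eqref{sunonlocalvflow} and the hyperbolic form \eqref{susghyperboliceq}.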
\end{gather} These vector equations \eqrefs{susghyperboliceq}{suSGeq} possess the mKdV scaling symmetry $x\rightarrow\lambda x$, $\vec{v}\rightarrow\lambda^{-1}\vec{v}$, where~$\vec{h}_{\perp}$ in equation \eqref{suSGeq} has scaling weight $0$. In \cite{AncoWolf} the symmetry-integrability classif\/ication results show that the hyperbolic vector equation \eqref{susghyperboliceq} admits the vector mKdV equation \eqref{sumkdveq} as a higher symmetry. From Corollary~\ref{cor1}, it follows that the recursion operator~\eqref{suRop} generates a hierarchy of vector mKdV symmetries \begin{gather} \vec{v}_{t}^{(0)} = \vec{v}_x , \label{su0flow}\\ \vec{v}_{t}^{(1)} = {\mathcal R}(\vec{v}_x) = \vec{v}_{3x}+3(|\vec{v}|^2\vec{v}_x + (\vec{v} \cdot\vec{v}_x)\vec{v}) , \label{su1flow}\\ \vec{v}_{t}^{(2)} = {\mathcal R}^2(\vec{v}_x) = \vec{v}_{5x}+5( |\vec{v}|^2\vec{v}_{3x}+3(\vec{v} \cdot \vec{v}_x)\vec{v}_{2x} + (2|\vec{v}_x|^2 + 3\vec{v} \cdot \vec{v}_{2x} +2|\vec{v}|^4)\vec{v}_x \nonumber\\ \phantom{\vec{v}_{t}^{(2)} = {\mathcal R}^2(\vec{v}_x) =}{} + (3\vec{v} \cdot \vec{v}_{3x} + 2\vec{v}_x\cdot \vec{v}_{2x} + 4|\vec{v}|^2 \vec{v} \cdot \vec{v}_x)\vec{v} ) , \label{su2flow} \end{gather} and so on, while the adjoint of this operator~\eqref{suRop} generates a hierarchy of mKdV Hamiltonians \begin{gather*} H^{(0)} = \frac{1}{2} |\vec{v}|^2 , \nonumber\\ H^{(1)} = -\frac{1}{2}|\vec{v}_x|^2+\frac{1}{2}|\vec{v}|^4 , \nonumber\\ H^{(2)} = -\frac{1}{2}|\vec{v}_{2x}|^2 -2|\vec{v}|^2|\vec{v}_x|^2 -3(\vec{v} \cdot \vec{v}_x)^2 + |\vec{v}|^6 , \nonumber \end{gather*} and so on. All of these Hamiltonians are conserved densities for the $-1$ flow \begin{gather}\label{su-1flow} \vec{v}_{t}^{(-1)} = \vec{h}_{\perp} \end{gather} associated to the vector SG equation \eqref{suSGeq}, and all of the mKdV symmetries commute with this flow. Hence these hierarchies are admitted symmetries and conserved densities for the hyperbolic vector equation \eqref{susghyperboliceq}. 
Viewed as flows, the vector PDEs \eqsref{su0flow}{su2flow}, etc., including the $-1$ f\/low \eqref{su-1flow}, are seen to possess the mKdV scaling symmetry $x\rightarrow\lambda x$, $\vec{v}\rightarrow\lambda^{-1}\vec{v}$, with $t\rightarrow\lambda^{1+2k} t$ for $k=-1,0,1,2$, etc. Moreover for $k\geq 0$, all these expressions will be local polynomials in the variables $\vec{v},\vec{v}_x,\vec{v}_{xx},\ldots$ as established by results in~\cite{Sergyeyev2} applied to the separate Hamiltonian (cosymplectic and symplectic) operators \eqref{suHJop}\footnote{Due to the doubly nonlocal form of the last term in the recursion operator \eqref{suRop}, the general results in \cite{Wang-thesis,Sergyeyev} are not directly applicable.}. \begin{theorem}\label{thm3} In the symmetric space $SU(N)/SO(N)$ there is a hierarchy of bi-Hamiltonian flows of curves $\gamma(t,x)$ described by geometric map equations. The $0$ flow is a convective (traveling wave) map \eqref{convmap}, while the $+1$ flow is a non-stretching mKdV map \eqref{symmspmkdvmap} and the $+2,\ldots$ flows are higher order analogs. The kernel of the recursion operator \eqref{suRop} in the hierarchy yields the $-1$ flow which is a non-stretching wave map \eqref{wavemap}. \end{theorem} \section{Concluding remarks} In the compact Riemannian symmetric spaces $G/SO(N)$, as exhausted by the Lie groups $G=SO(N+1)$ and $G=SU(N)$, there is a hierarchy of integrable bi-Hamiltonian f\/lows of non-stretching curves $\gamma(t,x)$, where the $+1$ f\/low is described by the mKdV map equation \eqref{symmspmkdvmap} and the $+2,\ldots$ f\/lows are higher-order analogs, while the wave map equation \eqref{wavemap} describes a $-1$ f\/low that is annihilated by the recursion operator of the hierarchy. In a parallel frame the principal normal components along $\gamma$ for these f\/lows respectively satisfy a vector mKdV equation and a vector hyperbolic equation, which are $O(N-1)$-invariant.
The hierarchies for $SO(N+1)/SO(N)$ and $SU(N)/SO(N)$ coincide in the scalar case $N=2$. Moreover the scalar hyperbolic equation in this case is equivalent to the SG equation. These results account for the existence of the two known versions of vector generalizations of the mKdV and SG equations \cite{AncoWolf}. Similar results hold for Hermitian symmetric spaces $G/U(N)$. In particular, there is a~hierarchy of f\/lows of curves in such spaces yielding scalar-vector generalizations of the mKdV equation and the SG equation. A further generalization of such results for all symmetric spaces $G/H$ will be given elsewhere \cite{forthcoming}. \subsection*{Acknowledgments} I am grateful to Thomas Wolf and Jing Ping Wang for stimulating discussions in motivating this research. I also thank the referees for many valuable comments. Tom Farrar is thanked for assistance with typesetting this paper. The author acknowledges support by an N.S.E.R.C. grant.
physics/0512232
\section{Introduction} \PARstart{P}{hysics} analyses at modern collider experiments enter a new dimension of event complexity. At the LHC, for instance, physics events will consist of the final state products of the $\mathrm{O}(20)$ collisions taking place during each readout cycle. In addition, a number of physics questions are studied in channels with complex event topologies and configuration ambiguities occurring during event analysis. \FigIns{ttHFeyn.eps}{a) Associated Higgs production in the channel $t\bar tH$ with $H\decaysto b\bar b$ and $t\bar t \rightarrow WW\,b\bar b \rightarrow qq'\, \mu \bar \nu_\mu\, b\bar b$. ~~b) The visible reconstructed partons of this channel.} One item in the long list of examples is a channel of $t$-quark associated Higgs production, $t\bar tH$ with $H\decaysto b\bar b$ (see \Fig{ttHFeyn.eps}.a). The event topology of four $b$-jets, two light-quark-jets, an isolated muon, missing energy and possible additional jets from initial state radiation (ISR) and final state radiation (FSR) imposes the highest demands on detectors and reconstruction algorithms. In addition, non-trivial ambiguities must be resolved during event analysis. Even if all final state products could be reconstructed perfectly (as illustrated in \Fig{ttHFeyn.eps}.b) and no ISR or FSR effects occurred, at least 24 different configurations would be possible. Finite jet resolutions, limited efficiency and purity of the $b$-tagging as well as the presence of additional jets complicate ambiguity resolution and signal identification. This task can be approached with a likelihood method based on characteristic event variables, where each possible event configuration is developed individually and rated with the likelihood function; the most probable of all interpretations is finally selected.
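The selection step of such a likelihood method can be sketched in a few lines of C++. The structure and names below (\verb+Interpretation+, \verb+likelihood+, \verb+selectBest+) are purely illustrative and belong to no experiment framework: each interpretation is rated by the product of per-variable probabilities, and the most probable one is kept.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative sketch only: one entry per characteristic event variable,
// e.g. P(m_top), P(m_W), P(b-tag), ... for one event configuration.
struct Interpretation {
    std::vector<double> variableProbabilities;
};

// Combined likelihood: product of the individual variable probabilities.
double likelihood(const Interpretation& interp) {
    double L = 1.0;
    for (double p : interp.variableProbabilities) L *= p;
    return L;
}

// Return the index of the most probable interpretation (-1 if none).
int selectBest(const std::vector<Interpretation>& interps) {
    int best = -1;
    double bestL = -1.0;
    for (std::size_t i = 0; i < interps.size(); ++i) {
        double L = likelihood(interps[i]);
        if (L > bestL) { bestL = L; best = static_cast<int>(i); }
    }
    return best;
}
```

In a real analysis each probability would come from a resolution or tag-quality distribution; the argmax structure stays the same.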
Such an approach can be implemented by object-oriented coding and suggests the use of a class collection that provides event containers for the reconstructed objects (muons, jets, missing energy, vertices, collisions, etc.) and handles relations between the individual objects (as, for instance, vertex relations for particle decays). Due to the large number of ambiguities occurring during the reconstruction of $t\bar tH$ events, these classes are required to offer automated copy functionality for containers, objects and corresponding relations. The application of a \emph{generalized event container} comes with a number of desirable side-effects. If used to define an abstraction interface between the output of event generator, simulation or reconstruction software and the physics analysis code, the latter is protected from changes in the underlying software packages to a large extent. This reduces code maintenance and increases code clarity. In addition, unnecessary duplication of the analysis code can be avoided: the influence of detector effects (studied by direct comparison of the results at generator, simulation and real-data level) can be investigated ad hoc, i.e.\ with the same analysis source code. Analysis factories, in which a number of analyses are executed at the same runtime, identifying and distinguishing different physics processes or studying systematic uncertainties, can easily be realized when using common physics objects and a common event container model in each of the analyses. Analysis environments based on a well-defined, generalized event container also provide a basis for efficient teamwork. Collaboration in (and supervision of) groups of students is facilitated, and knowledge transfer between subsequent generations of analysts as well as between different experiments is fostered.
In this article, we present the Physics Analysis eXpert (PAX), a C++ toolkit for particle physics analysis that provides such a generalized event container together with various built-on functionalities. \section{The PAX class structure} The PAX kernel, introduced in the year 2002 \cite{PAX02} and released at the CHEP03 conference in 2003 \cite{PAX03}, is currently available as version 2.00. For the convenience of connecting to existing software packages, PAX is realized in the C++ programming language \cite{CPPSTL}. It provides additional functionality on top of the vector algebra of the widely used libraries CLHEP \cite{CLHEP} or ROOT \cite{ROOT}.\footnote{ At compile-time, the user can choose between the vector algebra packages of CLHEP \cite{CLHEP} (default) or ROOT \cite{ROOT}. Depending on a compiler switch, the two type definitions \CClass{PaxLorentzVector}\ and \CClass{PaxThreeVector}\ are set to \CClass{HepLorentzVector} and \CClass{Hep3Vector} of CLHEP or to \CClass{TLorentzVector} and \CClass{TVector3} of ROOT.} The PAX container model as well as file I/O are based on the C++ Standard Template Library (STL) \cite{CPPSTL}. The PAX toolkit provides three types of generalized physics objects: \begin{itemize} \item{particles (or reconstructed objects), i.e.\ Lorentz-vectors, represented by the class \CClass{PaxFourVector},} \item{vertices, i.e.\ three-vectors, represented by the class \CClass{PaxVertex},} \item{and collisions, represented by the class \CClass{PaxCollision}.} \end{itemize} These objects are able to establish relations, and can be stored and managed in event containers, represented by the \CClass{PaxEventInterpret}\ class.
\subsection{Physics objects} \FigIns{PaxFourVector.eps} {The \CClass{PaxFourVector}\ class extends the basic functionalities of the \CClass{PaxLorentzVector}\ in order to represent particles in HEP decays.} \FigIns{PaxVertex.eps} {The \CClass{PaxVertex}\ class extends the basic functionalities of the \CClass{PaxThreeVector}\ in order to represent vertices in HEP particle decays.} \FigIns{PaxCollision.eps} {The \CClass{PaxCollision}\ class represents collisions in bunch crossings at high luminosity colliders. Besides storage of general properties, the \CClass{PaxCollision}\ allows the user to establish and manage relations to \CClass{PaxVertex}\ and \CClass{PaxFourVector}\ objects.} The \CClass{PaxFourVector}\ class (see \Fig{PaxFourVector.eps}) represents particles or reconstructed objects (such as muons, electrons, missing energy, jets etc.). It inherits its basic Lorentz-vector characteristics from the well-known libraries CLHEP or ROOT. Commonly needed additional properties such as particle-id, status, charge, etc.\ can be stored in data members. Specific information (such as b-tags, jet cone sizes or energy corrections, for instance) can be stored in the so-called user records. User records are collections of string-double pairs, meant to hold object information complementary to data members. All PAX physics objects own user records (instances of the class \CClass{PaxUserRecord}) and provide methods for quick access to individual user record entries. Each instance of a PAX physics object carries a unique integer key (the so-called \CClass{PaxId}) and a string name (the so-called \CClass{PaxName}). An integer workflag facilitates tagging of individual objects. Print methods are provided to allow monitoring of object state and established relations on various verbosity levels. Copy constructors are provided to perform deep copies of PAX physics objects.
The \CClass{PaxVertex}\ class, sketched in \Fig{PaxVertex.eps}, represents the spatial point of decays in particle reactions. Thus, in analogy with the \CClass{PaxFourVector}, it obtains its basic three-vector characteristics from the CLHEP or ROOT package as well. The \CClass{PaxCollision}\ class (see \Fig{PaxCollision.eps}) allows the separation of collisions in multicollision events, as they occur at high-rate hadron colliders. It provides the relation management necessary to associate \CClass{PaxVertex}\ and \CClass{PaxFourVector}\ objects with different collisions in the event. \subsection{Access to primordial C++ classes}\label{sectionExpClassRel} Each PAX physics object can record pointers to an arbitrary number of instances of arbitrary C++ classes. This way, the user can keep track of the data origin within the detector reconstruction software, for instance. Access to the pointers is possible at the same runtime during any later stage of the analysis. A typical use case is the need to re-fit a track, which requires access to the hits in the tracking chamber. The PAX object that represents this track, i.e.\ a \CClass{PaxFourVector}\ instance, provides the two template methods \CClass{addPointer$<$Type$>$(name, ID, pointer)} and \CClass{findPointer$<$Type$>$(name, ID)}. The argument \CClass{name} is supposed to correspond to the C++ class name, e.g.\ \CClass{Type}, the argument \CClass{ID} is a unique integer identifier for the referenced instance of the C++ class \CClass{Type}, and the third argument is a pointer to this instance. The underlying mechanism is sketched in \Fig{PaxExperimentClass.eps}. The class template \CClass{PaxExperiment$<$Type$>$} provides storage, access, and cloning of the pointer of type \CClass{Type}. Its base class \CClass{PaxExperimentClass}\ is used as the interface to the PAX classes which are enabled to store and access the pointer through the C++ \verb+dynamic_cast+ operator.
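This pattern can be illustrated with a simplified, self-contained sketch (not the actual PAX source; only the \CClass{addPointer}/\CClass{findPointer} signatures are taken from the description above): a non-template base class owns the stored entries, a class template holds the typed pointer, and \verb+dynamic_cast+ recovers the concrete type on lookup.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

// Simplified sketch of the PaxExperimentClass/PaxExperiment<Type> design:
// the polymorphic base erases the pointer type; dynamic_cast restores it.
class PointerHolderBase {
public:
    virtual ~PointerHolderBase() {}
    virtual PointerHolderBase* clone() const = 0;  // used when objects are copied
};

template <class Type>
class PointerHolder : public PointerHolderBase {
public:
    explicit PointerHolder(Type* p) : ptr(p) {}
    PointerHolderBase* clone() const override { return new PointerHolder<Type>(ptr); }
    Type* get() const { return ptr; }
private:
    Type* ptr;  // not owned: refers to an object of the reconstruction software
};

class PhysicsObject {
public:
    ~PhysicsObject() { for (auto& e : holders) delete e.second; }
    template <class Type>
    void addPointer(const std::string& name, int id, Type* p) {
        // (overwriting an existing entry would leak here; omitted for brevity)
        holders[std::make_pair(name, id)] = new PointerHolder<Type>(p);
    }
    template <class Type>
    Type* findPointer(const std::string& name, int id) const {
        auto it = holders.find(std::make_pair(name, id));
        if (it == holders.end()) return nullptr;
        // dynamic_cast fails (returns nullptr) if the stored type differs
        auto* h = dynamic_cast<PointerHolder<Type>*>(it->second);
        return h ? h->get() : nullptr;
    }
private:
    std::map<std::pair<std::string, int>, PointerHolderBase*> holders;
};
```

The \verb+clone()+ method is what makes deep copies of physics objects possible without the container knowing the concrete pointer types.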
When copying a PAX physics object, all pointers are copied as well by making use of the \CClass{clone()} method. \FigIns{PaxExperimentClass.eps} {The classes \CClass{PaxExperimentClass}\ and \CClass{PaxExperiment$<$Type$>$}\ provide recording of arbitrary pointers with PAX objects.} \subsection{Relation management} The principal duty of the PAX relation management is handling of decay trees. The manager is based on the Mediator design pattern, described in detail in reference \cite{Mediator}. In this design all relations are kept locally (i.e.\ every object knows about all their directly related objects), so that global relation directories can be avoided. \FigIns{PaxRelMgr.eps} {The PAX classes for relation management inherit from the class \CClass{PaxRelationManager}.} For PAX physics objects this means that each \CClass{PaxCollision}\ object owns relation managers (see \Fig{PaxRelMgr.eps}) that carry pointers to the related \CClass{PaxVertex}\ and \CClass{PaxFourVector}\ objects. At the same time, the \CClass{PaxVertex}\ objects hold pointers to their related \CClass{PaxCollision} s as well as to their incoming and outgoing \CClass{PaxFourVector} s. By the same token, \CClass{PaxFourVector} s know about their related \CClass{PaxCollision} s and about their begin and end \CClass{PaxVertex}\ objects. With this functionality, PAX allows the user to store complete multicollision events from parton to stable particle level, including four-momenta and spatial vertex information. In addition, the PAX relation management is used to record analysis histories: each object, which is copied via copy constructors, keeps pointers to its original instances. This way the user may always go back and ask for original properties of objects which might have changed during the development of the analysis. A powerful feature, implemented by means of the relation management, is the so-called locking mechanism.
It is implemented to enable the user to exclude parts of decay trees from the analysis (e.g.\ excluding a lepton from a jet finding algorithm, etc.). If one particle or vertex is locked, all the objects down the decay tree (and the history) will be locked, too. Locking and unlocking are realized by setting or removing the lock-flag owned by each PAX physics object. \subsection{Maps \& object containers} The PAX kernel provides the base classes \PaxMapKI{key}{item}\ and \PaxMultiMapKI{key}{item}, which inherit from the STL classes \CClass{map$<$key, item$>$} and \CClass{multimap$<$key, item$>$}, respectively. The explicit inheritance has been chosen to allow the use of existing STL objects and methods with these PAX classes. This way, iteration over PAX maps can be performed by using either the PAX iterator classes (\CClass{PaxIterator}, \CClass{PaxMapIterator}, \CClass{PaxMultiMapIterator}) or the commonly known STL iterators. All PAX classes which serve as containers are based on the class \CClass{PaxMap}\ (see \Fig{PaxContainers.eps}). \FigIns{PaxContainers.eps} {The PAX container classes inherit from the class \CClass{PaxMap}.} \subsection{Event container} \FigIns{PaxEventInterpret.eps} {The \CClass{PaxEventInterpret}\ class represents the generalized container for complete HEP events. It stores and handles multiple collisions, vertices and particles as well as event specific information in the user records.} The \CClass{PaxEventInterpret}\ class, illustrated in \Fig{PaxEventInterpret.eps}, is the generalized event container provided by PAX. By incorporating the previously described functionalities, it is capable of holding the complete information of one multicollision event with decay trees, spatial vertex information, four-momenta as well as additional reconstruction data in the user records.
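A minimal caricature of such an owning event container can make the ownership and copy semantics concrete. The sketch below is purely illustrative and far simpler than the actual \CClass{PaxEventInterpret}\ class: objects are created on the heap, indexed by an integer id, deleted together with the container, and deep-copied by the copy constructor.

```cpp
#include <cassert>
#include <cstddef>
#include <map>

// Hypothetical stand-in for a PAX physics object.
struct FourVec { double e, px, py, pz; };

class EventSketch {
public:
    EventSketch() {}
    EventSketch(const EventSketch& other) {  // deep copy of all owned objects
        for (auto& kv : other.objects)
            objects[kv.first] = new FourVec(*kv.second);
    }
    EventSketch& operator=(const EventSketch&) = delete;  // kept minimal
    ~EventSketch() { for (auto& kv : objects) delete kv.second; }

    // Create a new owned object under the given id
    // (re-using an existing id would leak; omitted for brevity).
    FourVec* create(int id) { return objects[id] = new FourVec{0, 0, 0, 0}; }

    FourVec* find(int id) const {
        auto it = objects.find(id);
        return it != objects.end() ? it->second : nullptr;
    }
    std::size_t size() const { return objects.size(); }

private:
    std::map<int, FourVec*> objects;
};
```

The deep-copy constructor is the essential point: modifying a copied event interpretation must leave the original untouched, which is exactly what the automated copy functionality of the PAX containers provides (including, in PAX itself, the redirected relations).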
Physics objects (i.e.\ instances of the classes \CClass{PaxFourVector}, \CClass{PaxVertex}\ and \CClass{PaxCollision}) can be added or created with the \CClass{PaxEventInterpret}\CMethod{add()} and \CClass{PaxEventInterpret}\CMethod{create()} methods. Depending on the object type, a pair of \CClass{PaxId}\ and pointer to the individual object is stored in one of three maps (\CClass{PaxFourVectorMap}, \CClass{PaxVertexMap}\ or \CClass{PaxCollisionMap}). Access to these maps as well as direct access to the physics objects is provided via methods such as \CClass{PaxEventInterpret}\CMethod{getFourVectors()} and \CClass{PaxEventInterpret}\CMethod{findFourVector()}. When a \CClass{PaxEventInterpret}\ instance is deleted, all contained physics objects are deleted as well. The \CClass{PaxEventInterpret}\ class is so named because it is intended to represent a distinct interpretation of an event configuration (e.g.\ connecting particles to the decay tree according to one out of a number of hypotheses, applying different jet energy corrections, etc.). To facilitate the development of numerous parallel or subsequent event interpretations, the \CClass{PaxEventInterpret}\ class features a copy constructor, which provides a deep copy of the event container with all data members, physics objects, and their (redirected) relations. \subsection{PAX file I/O}\label{sectionPaxIoFile} The PAX toolkit offers a file I/O scheme for persistent storage of the event container, based on STL streams. It allows the user to write the contents of \CClass{PaxEventInterpret}\ instances with all contained physics objects\footnote{ For obvious reasons, pointers recorded with PAX physics objects by means of the \CClass{PaxExperimentClass}\ functionality (as described in section \ref{sectionExpClassRel}) are not stored to disk. } as well as their relations to PAX data files.
When restoring the data from file, an empty \CClass{PaxEventInterpret}\ instance is filled with the stored data and objects and all object relations are reproduced. The PAX data file format provides multi-version and multi-platform compatibility. It is built from a hierarchy of binary data chunks: the top level unit is an event, which consists of an arbitrary number of event interpretations. The event interpretation chunk consists of data members, user records as well as chunks for each of the contained physics objects. Each chunk carries header information (one byte for unit type and four bytes for data amount information) and the actual binary data. This allows file structure checks and fast positioning. Therefore, the user can quickly skip arbitrary numbers of events in PAX data files, without having to read and discard them sequentially. PAX also provides the possibility to write event units to strings (and to restore the \CClass{PaxEventInterpret}\ instances from those strings). This way, the user can store PAX objects to any data format supporting strings or binary data fields (like databases or experiment specific data formats). \subsection{Accessories and interfaces} As a complement to the PAX kernel, we released two accessory packages for reading standard event generator file formats. The \CClass{PaxTuple} package transfers decay trees stored in the HEPEVT or ROOT Ntuple data formats into \CClass{PaxEventInterpret}\ containers. Accordingly, the \CClass{PaxHepMC} package gives access to HepMC files. In addition, interfaces developed and posted by PAX users that fill PAX objects with specific data of HEP experiments are available via the PAX web page \cite{PAXWWW}. \subsection{Software development procedure} The PAX kernel and its officially supported accessories are coded and maintained by a core group of currently six developers at CERN and the Aachen and Karlsruhe universities.
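The chunk layout described in the file I/O section above (one type byte plus four length bytes preceding the payload) is what enables the fast skipping; a minimal sketch, not the exact PAX on-disk format, can show the idea:

```cpp
#include <cassert>
#include <cstdint>
#include <iostream>
#include <sstream>
#include <string>

// Illustrative chunk layout: 1 type byte + 4-byte little-endian length
// + payload. A reader can then skip a chunk without parsing its contents.
void writeChunk(std::ostream& os, char type, const std::string& payload) {
    std::uint32_t n = static_cast<std::uint32_t>(payload.size());
    os.put(type);
    for (int i = 0; i < 4; ++i)
        os.put(static_cast<char>((n >> (8 * i)) & 0xff));
    os.write(payload.data(), static_cast<std::streamsize>(payload.size()));
}

// Read the header, jump over the payload, and report the chunk type.
char skipChunk(std::istream& is) {
    char type = static_cast<char>(is.get());
    std::uint32_t n = 0;
    for (int i = 0; i < 4; ++i)
        n |= static_cast<std::uint32_t>(is.get() & 0xff) << (8 * i);
    is.seekg(n, std::ios::cur);  // fast positioning: no payload parsing
    return type;
}
```

Skipping an event then costs one header read and one seek per chunk, independent of the chunk size.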
New developments and code modifications pass a certification procedure and are discussed and adopted in regular video meetings. As a guideline, new developments focus on aspects of performance improvement and on user feedback. New releases are to be backward compatible. Version management of the software project is handled with the Concurrent Versions System (CVS) in a web-browsable repository \cite{CVS}\cite{PAXCVS}. \subsection{Availability, documentation and support} The continuously updated PAX web page \cite{PAXWWW} provides download of the various versions of PAX kernel and accessories (based on the aforementioned web-browsable CVS repository). It also provides the PAX Users Guide \cite{PAXGuide}, a comprehensive text documentation of the PAX toolkit, as well as class reference and fast navigator pages for download or online use. The web page also offers access to mailing lists, in which PAX users are informed about new developments and in which technical issues of PAX analyses can be discussed. \section{How PAX physics analyses can be structured} \FigIns{PAXAnaI.eps} {One possible realization of a physics analysis with PAX; a dedicated, experiment-specific class for filling the PAX containers represents the interface between detector reconstruction software and PAX-based physics analysis. The PAX persistency scheme is used to store the data to PAX data files for later use.} \FigIns{PAXAnaII.eps} {a) Exchangeability of the filling class allows PAX physics analyses to be applied to various input sources, e.g.\ to Monte Carlo event generator data. b) The use of PAX data files allows fast analysis of the reconstruction data decoupled from the experiment-specific environment.} To exploit the features offered by the PAX toolkit, physics analyses might be realized, for instance, according to the example structure illustrated in \Fig{PAXAnaI.eps}.
There, a dedicated, experiment-specific interface class for filling the PAX containers (i.e.\ \CClass{PaxEventInterpret}\ instances) represents the interface between detector reconstruction software and PAX-based physics analysis. Once all relevant information is filled, the analysis code is called, and the PAX objects (as obtained by the filling class or at any subsequent stage of the event analysis) can be stored persistently to PAX data files for later use. Analysis results might be further processed with the help of the ROOT package. With an analysis consistently formulated with PAX objects, the filling class can be exchanged easily, and the identical analysis code can be applied, for instance, directly to the output of a Monte Carlo event generator or a fast simulation software, see \Fig{PAXAnaII.eps}.a. Furthermore, the use of PAX data files, which provide the distilled experimental event information, allows fast analysis of the reconstruction data decoupled from the experiment-specific software and data storage environment, see \Fig{PAXAnaII.eps}.b. \section{Implementation of PAX into experiment-specific software environments} PAX has been made available within the software environments of the experiments CDF, D0\footnote{Interfaces to the D0 software are available as a $\beta$-version since April 2005.} (both Tevatron) and CMS (LHC). Following the same principles, the integration of PAX into the latter is described as a general example. The PAX toolkit is provided by the CMS software environment as an external package \cite{PAXAFS}, enabling the physicists inside the CMS collaboration to use PAX without having to care about installation or setup of the package. An extensive example analysis for the use of PAX with the detector reconstruction software ORCA \cite{ORCA} is included in the CMS CVS repository \cite{PAXExPaxAnalysis}.
In this example, the (ambiguous) reconstruction of the partonic process of the decay $W\rightarrow \mu \bar \nu_\mu$ is carried out by using reconstructed muons and missing transverse energy. The missing information about the longitudinal component of the neutrino momentum is obtained with a $W$-mass constraint, which yields (up to) two analytical solutions, and thus two possible event interpretations. Subsequently, both interpretations are developed in two separate \CClass{PaxEventInterpret}\ instances, and a number of example histograms are filled. The class design of this example analysis is based on the structure described in the previous section, including interface classes for filling \CClass{PaxEventInterpret}\ containers with the reconstructed objects of ORCA. To facilitate the start-up for new PAX users, a tutorial video for this example plus supplementary material can be found in the CMS section of the PAX web page \cite{PAXTutorial}. \section{PAX physics analyses for Tevatron and LHC} Provided for the software environments of the CDF, D0 and CMS experiments, PAX is being explored by a growing user community. In the following, two successful applications of PAX in complex physics analyses are presented. \subsection{A PAX-based $t \bar t$ analysis for CDF} \FigIns{ttFeyn.eps}{The channel $t \bar t$ at parton level (a) and the visible reconstructed partons of this channel (b).} \FigIns{ttResults.eps}{ Verification of the $t$-quark reconstruction in generated $t \bar t$ events. The full histograms show reconstructed properties of the event interpretation which reproduces the partonic $t \bar t$ state best. Further information results from the selection procedure using reconstructed quantities of the event only: the symbols represent the selected event interpretation, the dashed histogram summarizes the other possible interpretations. a) Reconstructed mass of the $t$-quark with a subsequent leptonic $W$-decay.
b) Angular distribution of the $W$-boson in the rest frame of the $t$-quark. c) Angular distribution of the charged lepton in the rest frame of the $W$-boson. (For this study, the HERWIG Monte Carlo generator \cite{HERWIG} and CDF detector simulation \cite{CDFMC} have been used.)} In this section, an analysis of top-antitop-quark events ($t \bar t$ events) with the CDF experiment at the Tevatron is described \cite{DHDiss}. As illustrated in \Fig{ttFeyn.eps}, the electron-plus-jet decay channel poses combinatorial tasks similar to those of the aforementioned $t\bar tH$ channel. In this $t \bar t$ study, an analysis factory based on the PAX event interpretation concept is used to perform complete reconstruction of the partonic scattering process and to optimize the separation of signal and background processes. The partonic process of the decay $\bar t \rightarrow W \bar b \rightarrow e \bar \nu_e \bar b$ is reconstructed as follows. First, the $W$-boson decaying into an electron and a neutrino is reconstructed. From the $W$-mass constraint, two possible solutions can be deduced for the longitudinal neutrino momentum. This results in two event interpretations for the $W$-boson. Combining each of those with one of the jets leads to the interpretations for the $t$-quark (with different kinematics and reconstructed masses). The remaining part of the process, i.e.\ $t \rightarrow Wb \rightarrow q\bar q'b$, is reconstructed from three of the remaining jets. Consequently, in a four-jet $t\bar t$ event, 24 interpretations can be constructed. The most likely $t\bar t$ event interpretation is selected by first demanding a non-zero $b$-probability for one of the jets of one of the $t$-quark candidates. Finally, one of these solutions is selected by evaluating the most likely event interpretation based on kinematic properties, the reconstructed mass of the $W$-boson decaying to $q\bar q'$, and the mass difference of the two reconstructed $t$-quarks.
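The two-fold ambiguity from the $W$-mass constraint and the counting of the 24 interpretations described above can be sketched as follows; the $W$-mass value and the example kinematics are illustrative assumptions, and the helper functions are not part of PAX:

```python
import math
from itertools import combinations

M_W = 80.4  # GeV; illustrative W-boson mass value

def neutrino_pz(lep, met):
    """Two solutions for the longitudinal neutrino momentum from the
    W-mass constraint (massless-lepton approximation).
    lep = (px, py, pz) of the charged lepton; met = (px, py) of the neutrino."""
    pxl, pyl, pzl = lep
    pxn, pyn = met
    el = math.sqrt(pxl**2 + pyl**2 + pzl**2)
    pt2 = pxl**2 + pyl**2
    mu = M_W**2 / 2.0 + pxl * pxn + pyl * pyn
    disc = mu**2 * pzl**2 - pt2 * ((pxn**2 + pyn**2) * el**2 - mu**2)
    root = math.sqrt(max(disc, 0.0))  # negative disc: keep the real part only
    return ((mu * pzl - root) / pt2, (mu * pzl + root) / pt2)

sols = neutrino_pz(lep=(30.0, 10.0, 50.0), met=(-25.0, 5.0))
print(sols)  # two possible interpretations of the leptonic W

# Counting interpretations in a four-jet t-tbar event: 2 neutrino solutions
# x 4 choices of the b-jet on the leptonic side x 3 ways to pick the two
# W-jets among the remaining three (the leftover jet is the hadronic b).
jets = ["j1", "j2", "j3", "j4"]
interpretations = [
    (nu, b_lep, w_jets)
    for nu in range(2)
    for b_lep in jets
    for w_jets in combinations([j for j in jets if j != b_lep], 2)
]
print(len(interpretations))  # -> 24
```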
The resulting example plots are shown in \Fig{ttResults.eps}. \subsection{A PAX-based $t\bar tH$ analysis for CMS} \FigIns{ttHResults.eps}{Reconstructed Higgs mass in the channel $t\bar tH$ with $H\decaysto b\bar b$ on generator (a) and full simulation level (b). The gray shaded area corresponds to the combinatorial background, i.e.\ to those events, in which a wrong $H\decaysto b\bar b$ configuration was selected. (For this study, the PYTHIA Monte Carlo generator \cite{PYTHIA} and CMS detector simulation \cite{CMSMC} have been used.)} The channel of associated Higgs production, $t\bar tH$ with $H\decaysto b\bar b$, by means of which the requirements for a particle physics analysis toolkit have been motivated in the introduction of this article, is studied in the CMS experiment at the LHC \cite{HiggsDiscovPot}\cite{SKDiss}, for instance. The most recent of these studies makes use of the PAX event interpretation concept to develop possible event interpretations in a manner similar to the one described in the previous CDF example. After development of all interpretations, a likelihood function is used to select the most probable one by rating the different configurations on the basis of kinematic variables and the masses of the two $t$-quarks and their decay products. \Fig{ttHResults.eps} illustrates the performance of this method in simulations with and without detector effects. Note that \Fig{ttHResults.eps}.a and \Fig{ttHResults.eps}.b have been produced with the identical analysis code, by simply exchanging the interface classes (compare \Fig{PAXAnaI.eps} and \Fig{PAXAnaII.eps}). In this way, a good measure of how detector and reconstruction methods influence the results can directly be obtained -- with almost no analysis code duplication. \section{Conclusions} The PAX toolkit is designed to assist physicists at modern collider experiments in the analysis of complex scattering processes.
PAX provides a generalized HEP event container with three types of physics objects (particles, vertices and collisions), relation management and a file I/O scheme. The PAX event container is capable of storing the complete information of multicollision events (including decay trees with spatial vertex information, four-momenta as well as additional reconstruction data). An automated copy functionality for the event container allows the user to consistently duplicate event containers with physics objects and relations. The PAX file I/O scheme can be used to write (and read) complete event containers to (from) disk files; this offers an easy realization of distilled experiment data streams. By structuring physics analyses based on PAX objects, the identical source code can be applied to various data levels. This adds a desirable aspect of flexibility to the software side of particle physics analysis. PAX is available within the software environments of experiments at the Tevatron and the LHC, where it is applied in a number of physics analyses. Two of these are outlined in this article, demonstrating typical use cases and successful applications of the PAX toolkit. Evident advantages arising from the usage of the PAX toolkit are the avoidance of code duplication, increased code lucidity, a unified data model and nomenclature, and therefore more efficient teamwork in the complex physics analyses at modern HEP experiments. \section*{Acknowledgment} The authors would like to thank Rene Brun, Anne-Sylvie Giolo-Nicollerat, Christopher Jung, Yves Kemp, Klaus Rabbertz, Jens Rehn, Sven Schalla, Patrick Schemitz, Thorsten Walter, and Christian Weiser for helpful contributions and feedback.
\section{Introduction} Featuring high flexibility, swift deployment, and wide coverage, unmanned aerial vehicles (UAVs) have been extensively applied to activities such as search and rescue in disaster areas, inspection of landscapes, and surveillance of forest fires. Recently, UAVs have found many use cases in wireless communication networks as cost-effective and on-demand aerial wireless platforms for areas without cellular coverage~\cite{Zeng, wu19, zeng19}, {\color{black} or as flying mobile users within a cellular network~\cite{survey1, survey2}. Cellular-connected UAVs can enhance the connectivity, coverage, flexibility and reliability of wireless communication networks~\cite{survey1, survey2}. } The UAVs are anticipated to engage significantly in the fifth-generation (5G) and beyond 5G (B5G) wireless networks, and provide new services such as real-time image transmission \cite{mot17}, caching and multicasting \cite{xiaoli, zeng18}, data dissemination or collection \cite{Zeng, wu18mar, wu18dec}, mobile relaying and edge computing \cite{kli16, zeng16, yang19}, and wireless power transfer \cite{moza17, xu18, wu19}. As the applications of the internet-of-things (IoT) continue to expand in the {\color{black} current and future wireless networks,} many infrastructure-free wireless links (such as bluetooth, Wi-Fi, and UAV-enabled transmission) have been established to support communications among IoT devices. Yet, in the wrong hands, these convenient networks can be abused for crime and terrorism. Therefore, it is necessary for authorized parties to surveil these suspicious communication links (see \cite{huang18, zeng2016, jie2018, xu17, xu17surv, cai17, hu17, haiquan19, moon19, wu19oct}). Optimization metrics for legitimate monitoring have typically focused on maximizing the eavesdropping rate or the non-outage probability \cite{huang18}.
Spoofing schemes were proposed for a malicious transmission link to maximize the eavesdropping rate \cite{zeng2016}, or to intervene and change the communicated data \cite{jie2018}. For a suspicious communication link in \cite{xu17}, the largest achievable monitoring non-outage probability and comparative intercepting rate were obtained under delay-sensitive and delay-tolerant scenarios, respectively. Proactive jamming schemes were developed to maximize the average monitoring rate for multi-input multi-output (MIMO) channels \cite{cai17}, relay networks \cite{hu17}, UAV-aided links \cite{haiquan19}, and with a deep-learning approach \cite{moon19}. Most existing works considered a fixed ground node (GN) as the legitimate monitor, whose channel typically suffers from severe large-scale path loss and small-scale fading. Yet, a UAV-enabled monitor can enjoy a high probability of line-of-sight (LoS) channels as its flying altitude rises. It is therefore easier for the UAV to obtain its channel gains with the GNs if their locations are known. Thanks to its flexibility, the UAV can dynamically adjust its position for a better eavesdropping rate, e.g., by flying closer to the suspicious transmitter. Categorized by their power sources, there are two types of UAVs, namely, tethered UAVs and untethered UAVs \cite{qq18}. A tethered UAV is linked with a ground control platform, and is powered stably through a cable or a wire. The lack of mobility has constrained tethered UAVs to a targeted area only \cite{yali16, yang17, lyu17, alze17}. In particular, the horizontal positions of the UAVs were optimized in \cite{lyu17} to cover a set of GNs with the least possible number of UAVs. The optimal three-dimensional (3D) deployment scheme of a UAV was developed in \cite{alze17} to cover as many GNs as possible with a minimum transmit power budget. By contrast, untethered UAVs are powered by laser beams, on-board batteries, and/or solar panels.
They can fly freely and enjoy full mobility in wide 3D space. Communication throughput was maximized for a laser-powered UAV in \cite{ouyang18}. In spite of their flexibility, battery-powered UAVs have to revisit their home base repeatedly to refill their batteries during operations, due to the limited capacity of on-board batteries \cite{derrick}. The optimal UAV trajectory and power management schemes were developed in \cite{zeng16} to obtain the largest achievable data rate of a relaying system, and in \cite{zeng18} to minimize the data dissemination time of a multicasting system. Since solar panels on UAVs can harvest solar energy and convert it into electric power, supporting long-endurance flights, solar-powered UAVs have also received great research interest. The optimal 3D trajectory and resource assignment for a solar-powered UAV-aided communication system were developed in \cite{derrick} to achieve the largest overall data rate in a fixed time horizon. Apart from transmit power, UAVs consume additional propulsion power to support hovering and moving activities. As a result, the energy management for UAV-enabled communications noticeably differs from that in current terrestrial systems. The largest value of energy efficiency in bits/Joule was obtained in \cite{zeng17} for a fixed-wing UAV via trajectory optimization. The total (including communication and propulsion) energy usage of a rotary-wing UAV was minimized in \cite{zyong} to satisfy the throughput requirement of each GN. In this paper, we propose a simple model for a rotary-wing UAV enabled monitoring system. The suspicious transmission link on the ground consists of one source (transmitting) node S and one destination (receiving) node D. When the UAV's channel condition is worse than that of node D, the UAV sends jamming signals to the latter as noise to degrade its channel for successful eavesdropping.
The total jamming energy consumption of the UAV is minimized over a finite period via joint trajectory optimization and power allocation, based on the assumption of successful eavesdropping at each slot. By judicious reformulation, we transform this non-convex optimization task into two separable subproblems, each of which is convex when the other set of variables is fixed. The alternating optimization method is leveraged to develop an efficient approach that is guaranteed to converge to a locally optimal solution. Based on such a solution, some useful insights are also drawn on the changing patterns of the UAV's trajectory and jamming policy. To achieve energy-efficient UAV operations in practice, we further consider a solar-powered rotary-wing UAV enabled monitoring system by including the propulsion power consumption besides the jamming power. Capitalizing on the successive convex approximation (SCA) method, an efficient iterative approach is put forth to find a feasible solution fulfilling the Karush-Kuhn-Tucker (KKT) conditions. Numerical results demonstrate that with UAV trajectory optimization, the overall energy consumption can be greatly reduced. The rest of the paper is organized as follows. Section \ref{sec:model} describes the system models. Section \ref{sec:jam} develops an approach to the UAV trajectory design and jamming energy minimization, while Section \ref{sec.energy} addresses the trajectory design and total (jamming and propulsion) energy management for a solar-powered UAV. Numerical results are provided in Section \ref{sec.sim}. The paper is concluded in Section~\ref{sec.con}.
\begin{figure}[t] \centering \includegraphics[width=0.6\textwidth]{systmod.pdf} \caption{A UAV-enabled legitimate monitoring system.} \label{syst} \end{figure} \section{System Models}\label{sec:model} Consider a point-to-point, frequency non-selective wireless communication link from a suspicious source node S to a suspicious destination node D, which are geographically set apart by $d$ meters on the ground. An untethered UAV, traveling at a fixed altitude of $H$ meters, serves as the legitimate monitor to eavesdrop on this link; see Fig. \ref{syst}.\footnote{\color{black}{Although design freedom can be increased by further optimizing the UAV's altitude, energy consumption as well as the risks of instability and collision will rise. Therefore, rather than frequently adjusting its altitude, it may be better for the UAV to fly at a fixed altitude and avoid vertical movement due to airspace regulation, collision avoidance, energy saving and safety concerns.}} The UAV can move forward horizontally or hover in the air. It can travel in the vicinity above the GNs to improve its eavesdropping performance. Suspicious nodes S and D have one antenna each, and the UAV operates with two antennas, one for monitoring and intercepting information from the S-D link (receiving) and the other for sending jamming signals to node D (transmitting). Therefore, the UAV can operate in a full-duplex mode to jam and monitor simultaneously. Since its initial and final locations are given, the UAV's channel power gain can be worse than that of D at certain times. In this case, the UAV sends jamming signals to the latter as noise to degrade its channel for successful eavesdropping. We assume that the UAV can completely cancel its self-interference from the transmitting antenna to the receiving antenna by adopting state-of-the-art analog and digital self-interference cancelation schemes \cite{xu17}.
\subsection{UAV Mobility Model}\label{sec:uavmob} Without loss of generality, we consider a 3D Cartesian coordinate system with nodes S and D located at $(0,0,0)$ and $(d,0,0)$, respectively. The UAV is deployed for the monitoring mission in a finite scheduling horizon of $T$ seconds. We split the period $T$ into $T_w$ time slots given by ${\cal T}:= \{1, \ldots, T_w \}$; each slot has the identical duration $\delta$. The slot length is selected to be short enough so that the UAV can be treated as static within each slot. Consequently, the time-varying coordinates of the UAV are given by $(x_t,y_t,H), \forall t\in{\cal T}$, with $x_t$ and $y_t$ being the UAV's x- and y-coordinates over time, respectively. The initial and final locations of the legitimate monitor are pre-defined and given by $(x_0,y_0,H)$ and $(x_T,y_T,H)$, respectively. The minimum traveling distance for the UAV to cover during the scheduling horizon $T$ is thereby $d_{\min}=\sqrt{(x_T-x_0)^2+(y_T-y_0)^2}$. Given the maximum speed of the UAV $\tilde V_m$, we require $\tilde V_m \geq d_{\min}/T$ so that at least one feasible trajectory can be found from the UAV's initial to final locations.\footnote{\color{black}{When the time for acceleration is considered, the proposed maximum speed $\tilde V_m$ may be infeasible. However, in practice, the acceleration time could be very short and thus reasonably ignored, especially when the total flying period or distance is sufficiently long.
From this perspective, we provide a lower bound for the maximum speed.}} Consequently, the UAV's mobile activity constraints, including its initial and final locations and speed limits, are given by~\cite{zeng16}: \begin{subequations} \begin{align} (x_1-x_0)^2+(y_1-y_0)^2 &\leq V_m^2 \label{eq.mob1} \\ \quad(x_{t+1}-x_t)^2+(y_{t+1}-y_t)^2 &\leq V_m^2, ~\forall t \in {\cal T} \label{eq.mob2} \\ (x_T-x_{T-1})^2+(y_T-y_{T-1})^2 &\leq V_m^2 \label{eq.mob3} \end{align} \end{subequations} where $V_m := \tilde V_m \delta$ stands for the largest traveling distance of the UAV within each slot. {\color{black} \begin{remark}\textit {(The choice of $T_w$):} In general, $T_w$ is chosen such that the UAV can be treated as (quasi-) static within each time slot, observed from the ground. To guarantee a certain accuracy, the ratio between the largest traveling distance within each time slot, $\tilde V_m \delta$, and the UAV altitude $H$ can be restricted below a threshold, i.e., $\tilde V_m \delta/H \le \varepsilon_m$, where $\varepsilon_m$ is the given threshold and $\delta = T/T_w$. Then, the minimum number of time slots required for achieving the accuracy with a given $\varepsilon_m$ can be obtained as $T_w \ge \tilde V_m T /(H \varepsilon_m)$. The optimization gets more precise with more discretized time samples, i.e., a larger value of $T_w$. Yet, the computational complexity, given by $\mathcal{O}(T_w^{3.5})$, also increases significantly with the value of $T_w$. Therefore, the number of time slots $T_w$ can be properly chosen in practice to balance between accuracy and complexity~\cite{wu18mar}. \end{remark} } \subsection{Communication Channel Model} Malicious users of infrastructure-free wireless communication networks are more likely to appear in wide rural areas, where surveillance is often overlooked. In open rural areas, buildings and trees are sparsely distributed. LoS channels can be dominant even for communications between GNs.
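The slot-count rule from the remark above can be illustrated numerically; all parameter values below are illustrative assumptions:

```python
import math

V_m_max = 20.0   # maximum UAV speed (tilde V_m), in m/s (illustrative)
T = 100.0        # scheduling horizon, in s (illustrative)
H = 50.0         # flight altitude, in m (illustrative)
eps_m = 0.1      # accuracy threshold on (tilde V_m * delta) / H

# Minimum slot count from T_w >= tilde V_m * T / (H * eps_m).
T_w_min = math.ceil(V_m_max * T / (H * eps_m))
delta = T / T_w_min              # resulting slot duration
per_slot_dist = V_m_max * delta  # largest travel distance per slot (V_m)

print(T_w_min)                      # -> 400
print(per_slot_dist / H <= eps_m)   # -> True: quasi-static within each slot
# A larger T_w sharpens the discretization but raises the O(T_w^3.5) cost.
```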
Therefore, we can suppose that the communication links between S, D, and the UAV (i.e., node U) are all dominated by LoS channels, which facilitates analysis of the structural properties of the optimal solution. The case with non-LoS channels will be addressed accordingly later. We further suppose that the Doppler effect resulting from the UAV's mobile activities is completely compensated \cite{zeng16, lin2018}. The distance between S and D is fixed during the entire scheduling horizon, i.e., $d_{SD} = d$ meters. Hence, the channel power gain of the suspicious link from S to D is constant and can be expressed as \begin{equation} h_{0}=\frac{\beta_0}{{d_{SD}}^2}=\frac{\beta_0}{d^2} \end{equation} where $\beta_0$ stands for the channel power gain at the reference distance $d_0=1$ meter. At each slot $t$, the channel power gain from S to U for legitimate eavesdropping follows the LoS model as \begin{equation} h_{1}^t=\frac{\beta_0}{{d_{1}^{t}}^2}=\frac{\beta_0}{x_t^2+y_t^2+H^2}, ~~\forall t \in {\cal T} \end{equation} where $d_{1}^t= \sqrt{x_t^2+y_t^2+H^2}$ is the link distance between S and U at slot $t$. Similarly, the channel power gain from U to D for jamming is \begin{equation} h_{2}^t=\frac{\beta_0}{{d_{2}^{t}}^2}=\frac{\beta_0}{(d-x_t)^2+y_t^2+H^2}, ~~\forall t \in {\cal T} \end{equation} where $d_{2}^{t}=\sqrt{(d-x_t)^2+y_t^2+H^2}$ is the separation distance between U and D at slot $t$. Let $P_x^t$ denote the transmit power of S at time slot $t$, and $P_j^t$ the jamming power from U to D to interfere with the channel at the suspicious receiver for successful eavesdropping. Clearly, the signal-to-interference-plus-noise ratio (SINR) at the suspicious receiver D is \begin{equation}\label{snrd} \gamma_{D}^t= \frac{h_{0}P_x^t}{h_{2}^tP_j^t+\sigma^2}, ~~\forall t \in {\cal T} \end{equation} where $\sigma^2$ is the variance of the additive white Gaussian noise (AWGN).
On the other hand, the UAV can completely cancel the self-interference from its jamming antenna to its receiving antenna. Hence, the SINR (which in fact reduces to the signal-to-noise ratio, SNR) of the legitimate eavesdropping channel at U is \begin{equation}\label{snru} \gamma_{U}^t= \frac{h_{1}^tP_x^t}{\sigma^2}, ~~\forall t \in {\cal T}. \end{equation} Successful eavesdropping at the UAV requires $\gamma_U^t \geq \gamma_D^t$. When the channel condition of the UAV is worse than that of D, the UAV can achieve this goal by dynamically adjusting its trajectory to fly close to the source node, and/or adjusting its jamming power to degrade the SINR at the suspicious receiver D in each time slot.\footnote{\color{black}{When the channel condition of the legitimate monitor (the S-U link) is better than that of the suspicious receiver (the S-D link), eavesdropping is performed successfully without the UAV sending jamming signals to the receiver. However, when the S-U link suffers a worse channel condition than the S-D link, successful eavesdropping can be enabled through letting the UAV send jamming signals to the receiver to degrade its channel condition. }} Note that the assumption of successful eavesdropping at each time slot is tenable and non-trivial. In fact, malicious users of infrastructure-free wireless communication networks can also develop counter-eavesdropping measures to ensure secure transmissions on their behalf. One important method is to transmit secret information in cipher. In order to learn the pattern and decode the secret information, the legitimate agency treasures every bit of information. In this case, the cipher transmitted in each time slot is of equal importance for the legitimate agency to piece together the whole picture. Hence, it is of paramount importance that the eavesdropper intercepts information from the suspicious link successfully in every time slot.
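A minimal numerical sketch of the LoS channel gains and the successful-eavesdropping condition $\gamma_U^t \geq \gamma_D^t$ above; all parameter values are illustrative assumptions:

```python
# Illustrative parameter values (not taken from the paper's simulations).
beta0 = 1e-4     # channel power gain at the reference distance d0 = 1 m
sigma2 = 1e-10   # AWGN power
d, H = 200.0, 100.0
Px, Pj = 0.1, 0.05   # suspicious transmit power and UAV jamming power, W

def gains(x, y):
    h0 = beta0 / d**2                        # S -> D (fixed)
    h1 = beta0 / (x**2 + y**2 + H**2)        # S -> U (eavesdropping)
    h2 = beta0 / ((d - x)**2 + y**2 + H**2)  # U -> D (jamming)
    return h0, h1, h2

def successful(x, y, Pj):
    """Check gamma_U >= gamma_D for UAV position (x, y, H) and power Pj."""
    h0, h1, h2 = gains(x, y)
    gamma_D = h0 * Px / (h2 * Pj + sigma2)
    gamma_U = h1 * Px / sigma2   # self-interference assumed fully canceled
    return gamma_U >= gamma_D

print(successful(0.0, 0.0, 0.0))  # directly above S: no jamming needed
print(successful(d, 0.0, 0.0))    # above D, far from S: fails without jamming
print(successful(d, 0.0, Pj))     # jamming restores the condition
```

The three cases mirror the discussion above: jamming is only required where the S-U channel is worse than the S-D channel, and there it can restore $\gamma_U^t \geq \gamma_D^t$.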
{\color{black} \begin{remark}\textit {(Decoding the intercepted information):} In this paper, we aim to investigate the fundamental performance limits of the physical-layer approach to eavesdropping, and thus do not consider encryption on the suspicious link, which is a higher-layer technique and can be dealt with given sufficiently powerful computing resources. On the other hand, to avoid being monitored and tracked by legitimate parties, the suspicious link is very likely built, used, and discarded or destroyed within a day, so that its software and hardware are typically not complete or mature enough to preserve privacy and security. We can thereby reasonably assume that the temporarily-established infrastructure-free suspicious link is not vigilant against eavesdropping and does not employ any countermeasures such as signal encryption or anti-surveillance detection. From this perspective, the UAV can successfully decode the intercepted information from the suspicious link. \end{remark} } \section{Legitimate Eavesdropping with Jamming}\label{sec:jam} To ensure successful eavesdropping, the UAV may need to jam the transmission from S to D. For an untethered UAV without incessant power supply, it is clear that we wish to minimize its overall jamming energy consumption. Building on the UAV's mobile activity constraints \eqref{eq.mob1}--\eqref{eq.mob3}, together with the SINR expressions \eqref{snrd}--\eqref{snru}, the optimization task of interest can be formulated as \begin{subequations}\label{p1} \begin{align} & \min_{\{P_j^t\}, \{x_t, y_t\}} \sum_{t \in {\cal T}} P_j^t \delta \label{p11}\\ &\text {s.t.} ~\frac{h_{0}P_x^t}{h_{2}^tP_j^t+\sigma^2} \leq \frac{h_{1}^tP_x^t}{\sigma^2}, ~\forall t \label{p12}\\ &(x_1-x_0)^2+(y_1-y_0)^2\leq V_m^2 \label{p13}\\ &(x_{t+1}-x_t)^2+(y_{t+1}-y_t)^2\leq V_m^2, ~\forall t \label{p14}\\ &(x_T-x_{T-1})^2+(y_T-y_{T-1})^2\leq V_m^2 \label{p15}\\ &P_j^t \geq 0, ~\forall t.
\label{p16} \end{align} \end{subequations} Here we in fact aim to pursue the optimal jamming policy and trajectory design for the UAV. Note that the transmit power $P_x^t$ of S can be canceled from both sides of the inequality constraints in \eqref{p12}. This implies that the UAV does not need to know $P_x^t$ when making its jamming and trajectory decisions. This is of practical interest, as the suspicious source is certainly reluctant to let the UAV know its transmit power value. \subsection{Proposed Solution} Problem \eqref{p1} is not a convex program because of the non-convex constraints in \eqref{p12}; hence, it cannot be dealt with by classic convex optimization methods. To make the problem more tractable, we introduce two slack variables $u_t := x_t^2 + y_t^2 + H^2$ and $w_t := (d-x_t)^2 + y_t^2 + H^2$, and rewrite \eqref{p1} as \begin{subequations}\label{p2} \begin{align} & \min_{\{P_j^t, u_t, w_t\}, \{x_t, y_t\}} \sum_{t \in {\cal T}} P_j^t \delta \label{p21}\\ &\text {s.t.} ~x_t^2 + y_t^2 + H^2 - u_t \leq 0, ~\forall t \label{p22}\\ & u_t - 2dx_t + d^2 - w_t \leq 0, ~\forall t \label{p23}\\ &\frac{u_t w_t}{d^2} - w_t - P_j^t \beta_0/ \sigma^2 \leq 0, ~\forall t \label{p24}\\ &w_t \geq H^2, ~\forall t \label{p25}\\ &\eqref{p13} - \eqref{p16} \notag \end{align} \end{subequations} where \eqref{p24} results from \eqref{p12} via the step \begin{equation} \frac{h_{0}}{P_j^t \beta_0/w_t+\sigma^2} \leq \frac{\beta_0/u_t}{\sigma^2}, ~\forall t. \label{eq.jam} \end{equation} Note that we change the ``='' signs to ``$\leq$'' signs in \eqref{p22} and \eqref{p23} to convexify those constraints. It can be justified that, upon obtaining the optimal solution for \eqref{p2}, constraints \eqref{p22} and \eqref{p23} are always met with equality, since otherwise we could always decrease $u_t$ or $w_t$, respectively, to improve the channel condition of the corresponding eavesdropping or jamming link, leading to a smaller total jamming energy consumption.
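The equivalence between the original SINR constraint \eqref{p12} (from which $P_x^t$ cancels) and the reformulated constraint \eqref{p24} can be checked numerically when the slack variables are set to their defining values; the parameter values below are illustrative assumptions:

```python
import random

beta0, sigma2, d, H = 1e-4, 1e-10, 200.0, 100.0  # illustrative values

random.seed(0)
mismatches = 0
for _ in range(1000):
    x = random.uniform(-d, 2.0 * d)
    y = random.uniform(-d, d)
    Pj = random.uniform(0.0, 1.0)

    u = x**2 + y**2 + H**2         # slack variable at equality (S-U dist^2)
    w = (d - x)**2 + y**2 + H**2   # slack variable at equality (U-D dist^2)
    h0, h1, h2 = beta0 / d**2, beta0 / u, beta0 / w

    original = h0 / (h2 * Pj + sigma2) <= h1 / sigma2       # constraint (7b)
    reformed = u * w / d**2 - w - Pj * beta0 / sigma2 <= 0  # constraint (8d)
    if original != reformed:
        mismatches += 1

print(mismatches)  # -> 0: both constraints accept exactly the same points
```

Both forms are related by multiplication with strictly positive quantities, so they define the same feasible set; the sampling merely illustrates this.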
Therefore, problems \eqref{p1} and \eqref{p2} are equivalent. Although problem \eqref{p2} is not convex, it is easy to see that the problem becomes convex with regard to $\{P_j^t, x_t, y_t, u_t\}$ for fixed $\{w_t \}$, and it is also convex in $\{w_t \}$ for fixed $\{P_j^t, x_t, y_t, u_t\}$. For this reason, we resort to the alternating optimization method (a.k.a. block coordinate descent) to solve \eqref{p2}. The proposed algorithm is summarized in Algorithm \ref{algo:bcd}. Since both subproblems are convex, the globally optimal solution of each of them can be obtained by standard convex optimization solvers, e.g., interior-point methods, in polynomial time \cite{Boyd}. Clearly, the total jamming energy of the UAV is bounded below by zero. For the proposed block coordinate descent method, the resultant total jamming energy decreases in each iteration. Consequently, the proposed approach is guaranteed to converge to a locally optimal solution of problem \eqref{p2}. As problems \eqref{p1} and \eqref{p2} are equivalent, a locally optimal solution of \eqref{p1} can be readily obtained. \begin{algorithm}[t] \caption{Alternating Optimization for Problem \eqref{p2}} \label{algo:bcd} \begin{algorithmic}[1] \State {\bf Initialize} $\{P_j^t(0), x_t(0), y_t(0), u_t(0)\}$, and set initial feasible values of $\{w_t(0) \}$ for Problem \eqref{p2}. \For {$m$ = 0, 1, 2, ...} \State Obtain the optimal solution of $\{P_j^t(m+1), x_t(m+1), y_t(m+1), u_t(m+1)\}$ with $\{w_t(m) \}$ fixed. \State Compute the optimal solution of $\{w_t(m+1)\}$ with $\{P_j^t(m+1), x_t(m+1), y_t(m+1), u_t(m+1)\}$ fixed. \State Update $m=m+1$. \EndFor \end{algorithmic} \end{algorithm} \subsection{Structural Properties} To draw useful insights on the optimal trajectory and jamming power allocation scheme, we analyze the structural properties of the optimal solution of the UAV-aided eavesdropping system.
\begin{lemma}\label{lemma.free} When the UAV is in the circular area ${\cal A} :=\{(x_t, y_t) | \sqrt{x_t^2 + y_t^2 + H^2} \leq d, \forall t \}$, it can eavesdrop successfully without jamming, i.e., $P_j^t=0, \forall t$. \end{lemma} \begin{proof} Lemma 1 can be proven by analyzing the characteristics of the transmit and eavesdropping rates. When the UAV is in the circular area ${\cal A} :=\{(x_t, y_t) | \sqrt{x_t^2 + y_t^2 + H^2} \leq d, ~\forall t \}$, the quality of the channel from S to U ($h_1^t=\beta_0/(x_t^2+y_t^2+H^2)$) is the same as or better than that from S to D ($h_0=\beta_0/d^2$). It then readily follows that the UAV can eavesdrop successfully without jamming. \end{proof} The circular area ${\cal A}$ can be referred to as the jamming-free area. When the UAV is out of the range of $\cal A$, the channel quality from S to U is worse than that from S to D. In this case, the UAV can only eavesdrop successfully by degrading the SINR of the S-D link through jamming. The amount of jamming power required at each time slot increases with the UAV's distance to S. Based on Lemma \ref{lemma.free}, it can be inferred that when both the initial and final locations of the UAV are inside ${\cal A}$, the optimal jamming policy is always zero, i.e., ${P_j^t}^* = 0, ~\forall t$. As a result, the optimization problem \eqref{p1} reduces to finding a feasible trajectory within the circular area ${\cal A}$ with $P_j^t = 0, ~\forall t$, i.e., \begin{equation}\label{p1easy} \begin{aligned} & \text{find}~ { \{x_t, y_t\}} \\ &\text {s.t.} ~x_t^2 + y_t^2 + H^2 \leq d^2, ~\forall t \\ &\eqref{p13}-\eqref{p15}. \end{aligned} \end{equation} Since problem \eqref{p1easy} is convex, a classic convex solver can be leveraged to obtain the optimal solution, which is not necessarily unique.
\begin{lemma}\label{lemma.time} When the scheduling horizon $T$ is larger than the minimum traveling time of the UAV $T_{\min}= d_{\min}/ \tilde V_m$, the UAV will first fly towards the jamming-free area, and then fly to its final location. \end{lemma} Lemma \ref{lemma.time} is quite intuitive, as the UAV enjoys a better channel condition when it is closer to S. Based on Lemma \ref{lemma.time}, we can further characterize the changing patterns of the UAV's jamming policy. \begin{proposition}\label{prop.jam} In general, the UAV's jamming power is first non-increasing and then non-decreasing. In some special cases, the jamming power is either always non-increasing or always non-decreasing. \end{proposition} \begin{proof} When the UAV trajectory is fixed, \eqref{p1} reduces to a jamming energy minimization problem: \begin{equation}\label{p1reduce} \begin{aligned} & \min_{\{P_j^t\}} \sum_{t \in {\cal T}} P_j^t \delta \\ &\text {s.t.} ~\frac{h_{0}P_x^t}{h_{2}^tP_j^t+\sigma^2} \leq \frac{h_{1}^tP_x^t}{\sigma^2}, ~\forall t \\ &P_j^t \geq 0, ~\forall t. \end{aligned} \end{equation} For each time slot, the optimal jamming power is given by ${P_j^t}^* = \max\{0, \frac{\sigma^2}{h_2^t}(\frac{h_0}{h_1^t}-1)\}$, where ${P_j^t}^* =0$ when the UAV is in the jamming-free area $\cal A$, and ${P_j^t}^* = \frac{\sigma^2}{h_2^t}(\frac{h_0}{h_1^t}-1) >0$ when the UAV is outside $\cal A$. The latter can be rewritten as \begin{equation} {P_j^t}^* = \frac{\sigma^2}{\beta_0 d^2}[(d-x_t)^2+y_t^2 + H^2] [(x_t^2+y_t^2 + H^2)-d^2] \end{equation} where $x_t^2 + y_t^2 + H^2 \geq d^2$. The projection of the jamming-free area on the ground is a circle centered at S $(0,0)$, with radius $\sqrt{d^2-H^2}$. 
To observe how ${P_j^t}^*$ changes with $x_t$ outside $\cal A$, we let $y_t^2 = d^2 - H^2$ and take the first-order partial derivative of ${P_j^t}^*$ over $x_t$, which, up to the positive factor $\sigma^2/(\beta_0 d^2)$, satisfies \begin{equation}\label{eq.derix} \partial {P_j^t}^* / \partial x_t \propto 4x_t^3 - 6dx_t^2 + 4d^2x_t=x_t[(2x_t-3d/2)^2+7d^2/4]. \end{equation} Clearly, the optimal jamming power ${P_j^t}^*$ increases with $x_t$ when $x_t >0$, and decreases with it when $x_t <0$. The same pattern holds for ${P_j^t}^*$ with respect to $y_t$. In short, ${P_j^t}^*$ increases as the UAV flies away from S. Now consider the following three cases. Case i): The initial and final locations are both outside ${\cal A}$. When the UAV's traveling time is abundant, i.e., $T>T_{\min}$, it always seeks the trajectory that yields the least energy consumption. Therefore, the UAV first flies towards ${\cal A}$, and then to its final destination. The jamming power thus first decreases and then increases. The same jamming policy applies when $T=T_{\min}$ and the line segment connecting the initial and final points goes through ${\cal A}$. Case ii): One of the initial and final locations is inside ${\cal A}$ and the other is outside. When the final location is inside ${\cal A}$, the jamming power first decreases to zero, then stays constant till the eavesdropping mission is accomplished; the jamming power is thus always non-increasing. If the initial and final locations are switched, the jamming power experiences a non-decreasing process. Case iii): Both the initial and final locations are inside ${\cal A}$. The jamming power is always zero in this scenario. Combining Cases i)--iii), the proposition follows. \end{proof} Proposition \ref{prop.jam} provides important insights on the optimal jamming policy of the UAV for different initial and final locations. It shows that the UAV is willing to travel slowly inside the jamming-free area and even take detours to reduce the jamming power consumption. 
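The closed-form jamming power appearing in the proof of Proposition \ref{prop.jam} can also be checked numerically. The sketch below (all parameter values are illustrative) confirms that ${P_j^t}^*$ vanishes inside ${\cal A}$ and grows as the UAV moves away from S:

```python
def optimal_jamming_power(x, y, H, d, beta0, sigma2):
    """Per-slot optimal jamming power
    P_j^* = max{0, (sigma^2 / h_2)(h_0 / h_1 - 1)}
    under the LoS channel gains h_0, h_1^t, h_2^t defined above."""
    h0 = beta0 / d**2
    h1 = beta0 / (x**2 + y**2 + H**2)
    h2 = beta0 / ((d - x)**2 + y**2 + H**2)
    return max(0.0, (sigma2 / h2) * (h0 / h1 - 1.0))

beta0, sigma2, d, H = 1e-12, 1e-20, 200.0, 100.0  # illustrative values
# Zero inside the jamming-free area A ...
assert optimal_jamming_power(0.0, 0.0, H, d, beta0, sigma2) == 0.0
# ... and increasing as the UAV moves away from S outside A.
p_near = optimal_jamming_power(250.0, 0.0, H, d, beta0, sigma2)
p_far = optimal_jamming_power(350.0, 0.0, H, d, beta0, sigma2)
assert 0.0 < p_near < p_far
```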
Such a strategy is typically the consequence of minimizing the jamming energy only. {\color{black} \begin{remark}\textit {(In and out of the jamming-free area):} The UAV usually stays in the home base, awaiting mission assignment, and is dispatched as a legitimate monitor once a suspicious link is detected. As the exact location of the suspicious link is not predictable, it is unlikely that the UAV happens to be within the jamming-free area every time. Furthermore, by studying the UAV's trajectory with its initial and final locations in or out of the jamming-free area, we can provide more perspectives and insights for UAV trajectory design when it is assigned a monitoring mission. In fact, this is why we consider a more general problem formulation, whose solution is applicable to different scenarios, wherever the suspicious link is located. \end{remark} } \subsection{Extension to Non-LoS Channels} If the suspicious transmission and legitimate monitoring links are located in an urban area, the channel between the suspicious source and destination experiences Rayleigh fading, which can be modeled as \cite{guangchi} \begin{equation}\label{rayleigh} h_0^t = \beta_0 \xi_t d^{-\kappa}, ~\forall t \end{equation} where $\xi_t$ is an exponentially distributed random variable with unit mean, and $\kappa \ge 2$ is the path loss exponent. The UAV-GN links can be modeled by considering the probabilities of both LoS and non-LoS (NLoS) channels, where the LoS probability at each time slot for the S-U $(j=1)$ or U-D $(j=2)$ link $p_{LoS,j}^t$ is given by \cite{zyong} \begin{equation} p_{LoS,j}^t = \frac{1}{1+C\exp{(-D[\theta_j^t-C])}}, ~\forall t. \end{equation} Here the values of $C$ and $D$ depend on the propagation environment, and $\theta_j^t = \frac{180}{\pi} \sin^{-1}(H/d_j^t)$ is the elevation angle in degrees, which is closely related to the UAV's distance from the source node $d_1^t$ or the destination node $d_2^t$. 
Accordingly, the channel power gains of the UAV-GN links are given by \cite{zyong} \begin{equation}\label{nonlos} h_j^t = p_{LoS,j}^t \beta_0 {d_j^t}^{-\kappa} + (1-p_{LoS,j}^t) \zeta \beta_0 {d_j^t}^{-\kappa},~\forall t \end{equation} where $\zeta<1$ is the extra reduction factor for the NLoS channel. The Rayleigh fading in \eqref{rayleigh} does not affect the original problem \eqref{p1}, while the NLoS component in \eqref{nonlos} renders problem \eqref{p1} intractable for existing solvers. To deal with this, we consider the case of $\kappa = 2$; then \eqref{nonlos} can be approximated by \cite{moza17} \begin{equation}\label{quadratic} h_j^t \approx \eta_1 {d_j^t}^{-2} +\eta_2, ~\forall t \end{equation} where $\eta_1$ and $\eta_2$ are two coefficients depending on the UAV altitude. Using the expressions in \eqref{quadratic}, the objective function and constraints for the NLoS scenario take generally the same form as those in the original problem \eqref{p1}. Similar to problem \eqref{p2}, the variables in the NLoS scenario can be separated into three blocks, namely, $\{P_j^t, x_t, y_t\}, \{u_t\}$, and $\{w_t\}$, due to the product $P_j^t u_t w_t$ introduced in constraints \eqref{p24} by the NLoS component. The NLoS problem is convex with respect to each block of variables when the other two blocks are fixed, and can thus be solved by the proposed block coordinate descent approach. Note that due to the approximation in \eqref{quadratic}, only a sub-optimal solution can be obtained. {\color{black} \subsection{Generalization to Eavesdropping Non-Outage Events} In this section, we propose a stochastic model for the eavesdropping system by considering Rayleigh fading for the suspicious S-D link, i.e., $h_0^t = \beta_0 \xi_t d^{-\kappa}, ~\forall t$ [cf. Eq. (14)]. The UAV channels are all LoS, and successful eavesdropping is no longer required within each time slot. 
Instead, we impose a constraint on the non-outage probability to guarantee that the total number of successful eavesdropping events over time meets a certain threshold. We introduce the following indicator function $I_t, \forall t$ to denote the successful eavesdropping event of the UAV: \begin{equation}\label{eq.it} I_t = \left\{ \begin{aligned} &1, ~\text{if} ~ \gamma_U^t \geq \gamma_D^t \\ &0, ~\text{otherwise}\\ \end{aligned}\right. \end{equation} where $I_t = 1$ and $I_t=0$ indicate eavesdropping non-outage and outage events, respectively. The original problem is extended to the following form. \begin{subequations}\label{non-out} \begin{align} & \min_{\{P_j^t\}, \{x_t, y_t\}} \sum_{t \in {\cal T}} P_j^t \delta \label{non1}\\ &\text {s.t.} ~\sum_t I_t \ge p_{\text {non}}T_w \label{non2}\\ &\qquad \eqref{p13}-\eqref{p16} \notag \end{align} \end{subequations} where $p_{\text {non}} \in [0,1]$ is the eavesdropping non-outage probability and constraint \eqref{non2} guarantees that at least $100p_{\text {non}}\%$ of the eavesdropping attempts are successful. Constraint \eqref{non2} is actually a relaxed (or generalized) version of constraints \eqref{p12}, which can also take the form of a non-outage probability: \begin{equation} \mathbb{P}(\gamma_U^1 \ge \gamma_D^1, \ldots, \gamma_U^{T_w} \ge \gamma_D^{T_w}) \ge p_{\text {non}}. \end{equation} When $p_{\text {non}} =1$, problem \eqref{non-out} specializes to the original problem \eqref{p1}. On the other hand, if $p_{\text {non}} =0$, jamming is not needed at all and the optimal value of the objective function $\sum_{t} P_j^t \delta$ is zero. In this case, problem \eqref{non-out} reduces to the feasibility problem of finding a trajectory constrained by the UAV's maximum speed with $P_j^t = 0, \forall t$. 
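The non-outage constraint \eqref{non2} is straightforward to evaluate once the per-slot SINRs are available; a minimal sketch with illustrative values:

```python
def non_outage_count(gamma_U, gamma_D):
    """Number of successful eavesdropping slots, i.e., sum_t I_t with
    I_t = 1 iff gamma_U^t >= gamma_D^t."""
    return sum(1 for gu, gd in zip(gamma_U, gamma_D) if gu >= gd)

def satisfies_non_outage(gamma_U, gamma_D, p_non):
    """Check the constraint sum_t I_t >= p_non * T_w."""
    T_w = len(gamma_U)
    return non_outage_count(gamma_U, gamma_D) >= p_non * T_w

gamma_U = [2.0, 0.5, 3.0, 1.0]   # illustrative SNRs at the UAV
gamma_D = [1.0, 1.0, 1.0, 1.0]   # illustrative SINRs at D
assert satisfies_non_outage(gamma_U, gamma_D, 0.75)     # 3 of 4 slots succeed
# p_non = 1 recovers the per-slot requirement, violated here:
assert not satisfies_non_outage(gamma_U, gamma_D, 1.0)
```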
At optimality, jamming signals will be suppressed for at most $100(1-p_{\text {non}})\%$ of the $T_w$ time slots with worse S-U channels (or higher jamming power consumptions), and they will be sent, if necessary, in time slots with better S-U channels. As problem \eqref{non-out} is a relaxation of the original problem \eqref{p1}, the optimal UAV trajectory for \eqref{p1} is also an optimal one for \eqref{non-out}, and the optimal jamming energy in \eqref{p1} serves as an upper bound for that in \eqref{non-out}. Problem \eqref{non-out} can thus be solved by first solving \eqref{p1}, then ranking the values of $\{P_j^{t*}\}_t$ in descending order and setting the largest $100(1-p_{\text {non}})\%$ of them to zero. \subsection{Extension to Two Suspicious Links} In this section, we extend the original problem (7) to include two suspicious links for the UAV to monitor simultaneously. The second pair of suspicious ground source and destination nodes, S$_2$ and D$_2$, are located at $(0, s_2)$ and $(d, s_2)$, respectively, where $s_2$ is the given y-coordinate of the nodes. All communication links are assumed to be LoS for simplicity. We assume that the UAV has three antennas, one for jamming and the other two for monitoring each link. Note that in this case, a jamming signal is sent to both links as long as eavesdropping is unsuccessful over either of the links. The channel power gain of the S$_2$-D$_2$ link is the same as that of the S-D link, $h_0 = \beta_0 / d^2$. The channel power gains of the S$_2$-U and U-D$_2$ links are given by \begin{subequations} \begin{align} &h_{21}^t = \frac{\beta_0}{x_t^2 + (s_2-y_t)^2 + H^2}, ~~\forall t \\ &h_{22}^t= \frac{\beta_0}{(d-x_t)^2 + (s_2-y_t)^2 + H^2}, ~~\forall t. \end{align} \end{subequations} The SINR and SNR of the S$_2$-D$_2$ and S$_2$-U links are given by \begin{subequations} \begin{align} &\gamma_{D_2}^t = \frac{h_{0}P_x^t}{h_{22}^tP_j^t+\sigma^2}, ~~\forall t \\ &\gamma_{U_2}^t = \frac{h_{21}^tP_x^t}{\sigma^2}, ~~\forall t. 
\end{align} \end{subequations} Successful eavesdropping over both links requires $\gamma_{U_2}^t \ge \gamma_{D_2}^t$ and $\gamma_{U}^t \ge \gamma_{D}^t$. The new problem of interest can be formulated as \begin{subequations}\label{ptwo} \begin{align} & \min_{\{P_j^t\}, \{x_t, y_t\}} \sum_{t \in {\cal T}} P_j^t \delta \label{ptwo1}\\ &\text {s.t.} ~~\frac{h_{0}P_x^t}{h_{2}^tP_j^t+\sigma^2} \leq \frac{h_{1}^tP_x^t}{\sigma^2}, ~\forall t \label{ptwo2}\\ &\qquad \frac{h_{0}P_x^t}{h_{22}^tP_j^t+\sigma^2} \leq \frac{h_{21}^tP_x^t}{\sigma^2}, ~\forall t \label{ptwo3}\\ &\qquad \eqref{p13}-\eqref{p16}.\notag \end{align} \end{subequations} Problem \eqref{ptwo} can be solved by following the same procedure summarized in Algorithm 1. The jamming-free area of the S$_2$-D$_2$ link is ${\cal A}_2:= \{(x_t, y_t)| \sqrt{x_t^2 + (s_2-y_t)^2 + H^2} \le d, \forall t\}$. When $|s_2| \le 2\sqrt{d^2-H^2}$, the common jamming-free area, i.e., the jamming-free area for problem \eqref{ptwo}, is the intersection of ${\cal A}$ and ${\cal A}_2$, whose ground projection is the intersection of two circles centered at $(0,0)$ and $(0,s_2)$, respectively, both with radius $\sqrt{d^2-H^2}$. When $|s_2| > 2\sqrt{d^2-H^2}$, the common jamming-free area does not exist as ${\cal A} \mathop{\cap} {\cal A}_2 = \varnothing$. It is worth noting that the problem of two suspicious links can be further extended to address multiple suspicious links with different separating distances, or aerial (rather than ground) suspicious nodes with 3D optimization of the UAV trajectory. } In a nutshell, we address the problem of jamming energy minimization for a UAV-enabled monitoring system under the assumption of a sufficient power supply. We provide useful insights on the UAV trajectory design and reveal its impact on the jamming policy. However, in practice, such a trajectory design could incur a considerable cost (and waste) of propulsion power. Furthermore, it is not possible for untethered UAVs to possess an infinite power supply during flight. 
Motivated by this, we next investigate the energy optimization under a more practical setting, considering a finite power supply and the propulsion power consumption of the UAV. \section{Energy Management for Solar-Powered UAV}\label{sec.energy} Compared with cables, laser beams, and on-board batteries, solar-powered UAVs enjoy high flexibility and long flight endurance in practical deployment. Apart from communication and jamming power, the UAV consumes additional propulsion power to remain airborne and support its movement. Energy-efficient operation of the UAV needs to be achieved by considering propulsion energy management in system design \cite{zyong}. Suppose that the UAV has a solar panel to harvest energy and an on-board battery to store energy. The UAV's battery is initially charged with an amount $E_0$ of energy; the UAV can consume a portion $\vartheta$ of $E_0$ during the entire working period, and saves the remaining $(1-\vartheta)$ portion for emergencies during landing (to a prescribed platform or home base). The UAV can fly horizontally and adjust its position dynamically to enhance the eavesdropping performance. We pursue the optimal trajectory design and energy management scheme of the UAV by minimizing the total jamming and propulsion energy consumption for the solar-powered rotary-wing UAV enabled monitoring system. 
\begin{figure}[t] \centering \includegraphics[width=0.6\textwidth]{pm.pdf} \caption{Propulsion power versus UAV speed.} \label{pm} \end{figure} \begin{table}[t] \caption{Parameters for propulsion power \cite{zyong}} \begin{center} \begin{tabular}{|c|c|} \hline \text{UAV weight} &2 kg \\ \hline \text{Blade profile power and induced power, $P_0$, $P_1$} &3.4 W, 118 W \\ \hline \text{Rotor solidity and disc area, $s$, $A$} &0.03, 0.28 m$^2$\\ \hline \text{Tip speed of the rotor blade, $U_{tip}$} &60 m/s \\ \hline \text{Mean rotor induced velocity, $v_0$ } &5.4 m/s\\ \hline \text{Atmospheric density and fuselage drag ratio, $\rho$, $d_f$} &1.225 kg/m$^3$, 0.3 \\ \hline \end{tabular} \end{center} \label{tab.pm} \end{table} \subsection{UAV Propulsion Power Model} Besides the transmit (jamming) power, the communication-related power also includes that for communication circuitry, information receiving and decoding, signal processing, etc. For simplicity, we suppose that such communication-related power is a constant, denoted by $P_c$ in watts (W) \cite{szhang, hu20}. The propulsion power, which typically depends on the UAV speed, is essential to support the UAV's hovering and moving activities. For a rotary-wing UAV with speed $V_t$, the propulsion power at time slot $t$, denoted by $P_m^t$, is given by \cite{zyong} \begin{equation}\label{speed} \begin{aligned} P_m^t = & P_0 \left(1+\frac{3V_t^2}{U_{tip}^2}\right) + P_1\left(\sqrt{1+\frac{V_t^4}{4v_0^4}}-\frac{V_t^2}{2 v_0^2}\right)^{\frac{1}{2}}\\ &+ \frac{1}{2} d_f\rho s AV_t^3 \end{aligned} \end{equation} where $P_0$ and $P_1$ are constants standing for the {\it blade profile power} and {\it induced power} in hovering mode, respectively, $U_{tip}$ is the tip speed of the rotor blade, $v_0$ denotes the mean rotor induced velocity in hover, $d_f$ and $s$ represent the fuselage drag ratio and rotor solidity, and $\rho$ and $A$ are the atmospheric density and rotor disc area, respectively. 
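The propulsion power model \eqref{speed} can be evaluated directly. The following sketch uses the parameters of Table \ref{tab.pm} and illustrates that hovering costs $P_0 + P_1$, while an intermediate speed is considerably cheaper:

```python
import math

# Rotary-wing UAV propulsion power model, with the parameters of
# Table (tab.pm): P0 = 3.4 W, P1 = 118 W, etc.
P0, P1 = 3.4, 118.0          # blade profile / induced power (hover)
U_tip, v0 = 60.0, 5.4        # rotor tip speed, mean induced velocity (m/s)
d_f, rho, s, A = 0.3, 1.225, 0.03, 0.28

def propulsion_power(V):
    """P_m as a function of the UAV speed V (m/s), per the model above."""
    blade = P0 * (1.0 + 3.0 * V**2 / U_tip**2)
    induced = P1 * math.sqrt(math.sqrt(1.0 + V**4 / (4 * v0**4))
                             - V**2 / (2 * v0**2))
    parasite = 0.5 * d_f * rho * s * A * V**3
    return blade + induced + parasite

# Hovering (V = 0) costs P0 + P1; an intermediate speed is cheaper,
# while high speeds are dominated by the parasite (V^3) term.
assert abs(propulsion_power(0.0) - (P0 + P1)) < 1e-9
assert propulsion_power(20.0) < propulsion_power(0.0) < propulsion_power(60.0)
```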
When $V_t=0$, \eqref{speed} corresponds to the power consumption of the hovering state. Fig. \ref{pm} depicts the typical curve of $P_m^t$ versus $V_t$, with the parameters set according to Table \ref{tab.pm} \cite{zyong}. Fig. \ref{pm} reveals that the minimum power consumption, about $41.84$ W, is achieved at a speed of approximately $V_e = 22.36$~m/s. \begin{figure}[t] \centering \includegraphics[width=0.6\textwidth]{solar.pdf} \caption{Harvested solar power versus UAV altitude.} \label{solar} \end{figure} \begin{table}[t] \caption{Parameters for solar power \cite{kokh04}} \begin{center} \begin{tabular}{|c|c|} \hline \text{Atmospheric transmittance, $\alpha$, $\beta_1$} &0.8978, 0.2804 \\ \hline \text{Interception factor of clouds, $\beta_c$} &0.01 \\ \hline \text{Mean radiant power and scaling altitude, $F$, $\Delta$} &1367 W/m$^2$, 8000 m\\ \hline \text{Efficiency and size of solar panel, $\eta$, $S$} &0.4, 0.5 m$^2$ \\ \hline \text{Lower and upper cloud altitudes, $L$, $U$} &500 m, 1000 m \\ \hline \end{tabular} \end{center} \label{tab.solar} \end{table} We suppose that within each slot $t$, the UAV maintains a constant speed, given by \begin{equation}\label{vt} V_t = \sqrt{(x_t-x_{t-1})^2 + (y_t - y_{t-1})^2}/ \delta, ~\forall t. \end{equation} By substituting \eqref{vt} into \eqref{speed}, we find that the first and third terms of \eqref{speed} are jointly convex with respect to $(x_t,x_{t-1}, y_t, y_{t-1})$, whereas the second term is neither convex nor concave. \subsection{Solar Power Model} Generally, the amount of harvested solar power depends on the atmospheric transmittance and clouds. 
Since a higher altitude results in a higher solar intensity, the atmospheric transmittance increases with altitude, which can be empirically approximated at altitude $H$ by \cite{lee17} \begin{equation} \phi(H)=\alpha - \beta_1 e^{-H/\Delta} \end{equation} where $\alpha$ is the largest possible value of the atmospheric transmittance, $\beta_1$ is the extinction coefficient of the air, and $\Delta$ is the scaling altitude. On the other hand, the solar strength is diminished by clouds. The attenuation of sunlight traveling through a cloud can be formulated as \cite{derrick, kokh04} \begin{equation} \psi(d_c)=e^{-\beta_c d_c} \end{equation} where $\beta_c \geq 0$ is the interception factor of the cloud, and $d_c$ represents the spatial length that the sunlight travels through the cloud. Overall, the electric power generated by the solar panel at altitude $H$ is given by \cite{lee17, kokh04} \begin{equation}\label{eq.solar} E(H)=\left\{ \begin{aligned} &\eta SF\phi(H)\psi(0), H \geq U \\ &\eta SF\phi(H)\psi(U-H), L \leq H < U \\ &\eta SF\phi(H)\psi(U-L), H < L \\ \end{aligned} \right. \end{equation} where $\eta \in (0,1)$ and $S$ (in m$^2$) denote the efficiency and size of the solar panel, respectively. Constant $F$ is the mean radiant power on the ground, while $U$ and $L$ are the upper and lower altitude limits of the cloud, respectively. Fig.~\ref{solar} illustrates the influence of the UAV's altitude on the harvested solar power. The corresponding parameter settings are listed in Table \ref{tab.solar} \cite{kokh04}. \subsection{Problem Formulation} We aim to design a joint energy management and trajectory planning scheme for the solar-powered UAV-aided eavesdropping system by minimizing its total energy consumption, including the jamming energy and the propulsion energy. Since the UAV flies at a fixed altitude, we can simply use $E$ to denote the amount of solar power instead of $E(H)$. 
The problem is formulated as \begin{subequations}\label{p3} \begin{align} &\min_{\{P_j^t, x_t, y_t \}} \sum_{t \in {\cal T}} (P_j^t+P_m^t) \delta \label{p31}\\ &\text {s.t.} ~\sum_{i=1}^t (P_j^i + P_m^i +P_c) \delta \leq \sum_{i=1}^t E + \vartheta E_0, ~\forall t \label{p32}\\ &\eqref{p12} - \eqref{p16}. \notag \end{align} \end{subequations} Constraint \eqref{p32} is the energy-harvesting causality constraint, which ensures that the total energy consumed up to the current time slot does not exceed the harvested energy plus the available battery energy. The minimum level of the initially stored energy $\underline E_0$ is chosen such that the UAV can finish the eavesdropping mission without harvested solar energy, following the shortest trajectory at a constant speed. In particular, given the line segment connecting its horizontal initial and final locations $(x_0, y_0)$ and $(x_T, y_T)$, the UAV travels at a fixed speed $\overline V = \sqrt{(x_T-x_0)^2+(y_T-y_0)^2}/T$. With $\overline V$, we can obtain the total propulsion energy $E_m$ and the UAV's coordinates $(x_t, y_t)$ at each time slot. Based on the coordinates, we can further calculate the jamming power $P_j^t$ per time slot according to \eqref{p12}. Then, we readily obtain $\underline E_0 = E_m + \sum_t (P_j^t+P_c)\delta$. {\color{black} \begin{remark}\textit {(3D UAV trajectory design with altitude optimization):} 3D UAV trajectory design can be pursued by including the UAV altitude as an optimization variable $H_t, \forall t$. Considering problem (7) and the S-U channel condition $h_1^t = \frac{\beta_0}{{d_{1}^{t}}^2}=\frac{\beta_0}{x_t^2+y_t^2+H_t^2}, \forall t$, the optimal altitude for the UAV is the lowest height within the regulated range, since the UAV enjoys the best channel condition in this way and there is no performance gain from increasing its altitude. On the other hand, considering the model of harvested solar power in Section IV-B [cf. 
\eqref{eq.solar}], a tradeoff can be observed between the UAV channel conditions and the amount of harvested energy [cf. \eqref{p3}]. The UAV has to decide at each time slot whether to fly lower or higher to strike a balance between achieving better eavesdropping performance and harvesting more energy. With the UAV altitude included as an optimization variable $H_t, \forall t$, constraints \eqref{p32} become non-convex as the amount of harvested energy $E_t$ is altitude-dependent and time-varying. It is difficult to convert \eqref{p32} into convex constraints due to the complicated expression of $E_t$, thus rendering the new problem intractable for existing solvers. Furthermore, to the best of our knowledge, there is no general model capturing the power consumption incurred by both horizontal and vertical movements of the UAV, which in turn makes it difficult to pursue a joint 3D UAV trajectory design and power allocation. Altitude optimization is thus an interesting direction for future work. \end{remark} } \subsection{SCA-based Convexification and Solution} Problem \eqref{p3} is not convex since it contains the non-convex term $\left(\sqrt{1+\frac{V_t^4}{4v_0^4}}-\frac{V_t^2}{2 v_0^2}\right)^{\frac{1}{2}}$ in $P_m^t$, as well as the non-convex constraints \eqref{p12}. The latter can be handled by the same method as in Section \ref{sec:jam}. With slack variables $\{u_t, w_t, \forall t\}$, \eqref{p12} can be replaced with the constraints \eqref{p22}-\eqref{p25}. To tackle the non-convexity of $P_m^t$, we first introduce slack variables $\{q_t \geq 0\}$ such that \begin{equation} q_t^2 = \sqrt{1+\frac{V_t^4}{4v_0^4}}-\frac{V_t^2}{2 v_0^2}, ~\forall t \end{equation} which is equivalent to \begin{equation}\label{mu} \frac{1}{q_t^2} = q_t^2 + \frac{V_t^2}{v_0^2}, ~\forall t. \end{equation} The second term of \eqref{speed} can thus be replaced by the linear term $P_1 q_t$, together with the additional constraints \eqref{mu}. 
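The equivalence between the definition of $q_t$ and \eqref{mu} follows since, with $a = \sqrt{1+\frac{V_t^4}{4v_0^4}}$ and $b = \frac{V_t^2}{2v_0^2}$, one has $(a-b)(a+b)=a^2-b^2=1$, so $1/q_t^2 = a+b = q_t^2 + V_t^2/v_0^2$. The short numerical check below confirms it; the value of $v_0$ is taken from Table \ref{tab.pm}:

```python
import math

def q_slack(V, v0):
    """Slack variable q_t >= 0 with
    q_t^2 = sqrt(1 + V^4/(4 v0^4)) - V^2/(2 v0^2)."""
    return math.sqrt(math.sqrt(1.0 + V**4 / (4 * v0**4))
                     - V**2 / (2 * v0**2))

# Verify the equivalent form 1/q^2 = q^2 + V^2/v0^2 at several speeds.
v0 = 5.4
for V in [0.0, 5.0, 20.0, 40.0]:
    q = q_slack(V, v0)
    assert abs(1.0 / q**2 - (q**2 + V**2 / v0**2)) < 1e-9
```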
For ease of exposition, we now substitute the expression for $V_t$ in \eqref{vt} and define \begin{equation}\label{newpm} \begin{aligned} {\tilde P}_m^t := & P_0 + \frac{3P_0}{U_{tip}^2 \delta^2}\left[(x_t - x_{t-1})^2 + (y_t - y_{t-1})^2\right] + P_1 q_t \\ & + \frac{d_f}{2\delta^3} \rho s A\left[(x_t - x_{t-1})^2 + (y_t - y_{t-1})^2\right]^{3/2}, ~\forall t. \end{aligned} \end{equation} We can see that ${\tilde P}_m^t$ is jointly convex with respect to $(x_t,x_{t-1}, y_t, y_{t-1}, q_t)$. With such a manipulation, problem \eqref{p3} can be written as \begin{subequations}\label{p3.1} \begin{align} & \min_{\substack{\{P_j^t, q_t,u_t \} \\ \{x_t, y_t,w_t\}}} \sum_{t \in {\cal T}} ( P_j^t+{\tilde P}_m^t ) \delta \label{p3.11}\\ &\text {s.t.}~ \sum_{i=1}^t ( P_j^i + \tilde P_m^i + P_c )\delta \leq \sum_{i=1}^t E + \vartheta E_0, ~\forall t \label{p3.12} \\ &\frac{1}{q_t^2} \leq q_t^2 + \frac{(x_t - x_{t-1})^2 + (y_t - y_{t-1})^2}{\hat v_0^2 }, ~\forall t \label{p3.14}\\ &\eqref{p13} - \eqref{p16}, \eqref{p22} - \eqref{p25} \notag \end{align} \end{subequations} where $\hat v_0^2=v_0^2 \delta^2$. Note that constraints \eqref{p3.14} are obtained from \eqref{mu} by replacing the equalities with inequalities. Yet, equivalence still holds between problems \eqref{p3} and \eqref{p3.1}. To see this, suppose that one of the constraints in \eqref{p3.14} were met with strict inequality at the optimum of problem \eqref{p3.1}; we could then decrease the corresponding $q_t$ until the constraint is met with equality, while reducing the total energy consumption (objective value) at the same time, which contradicts optimality. Therefore, all constraints in \eqref{p3.14} are met with equality at optimality. The same equivalence also holds for constraints \eqref{p22} and \eqref{p23}, as explained in Section \ref{sec:jam}. Hence, problems \eqref{p3} and \eqref{p3.1} are equivalent. 
Problem \eqref{p3.1} is still non-convex since it contains the non-convex constraints \eqref{p3.14}. However, it can be tackled with the successive convex approximation (SCA) method by constructing global lower bounds at a given local point. In particular, for \eqref{p3.14}, the left-hand side (LHS) is a convex function of $q_t$, and the right-hand side (RHS) is jointly convex in $q_t$ and $(x_t,x_{t-1}, y_t, y_{t-1})$. Since the first-order Taylor expansion serves as a global lower bound of a convex function, we obtain the following inequality for the RHS of \eqref{p3.14} \begin{equation}\label{eq.q} \begin{aligned} &q_t^2 + \frac{(x_t - x_{t-1})^2 + (y_t - y_{t-1})^2}{\hat v_0^2} \geq q_t^{(l)2} + 2q_t^{(l)}(q_t - q_t^{(l)}) \\ &+\frac{2}{\hat v_0^2}[(x_t^{(l)} - x_{t-1}^{(l)})(x_t - x_{t-1})+(y_t^{(l)} - y_{t-1}^{(l)})(y_t - y_{t-1})]\\ &- \frac{1}{\hat v_0^2}[(x_t^{(l)} - x_{t-1}^{(l)})^2 + (y_t^{(l)} - y_{t-1}^{(l)})^2] \end{aligned} \end{equation} where $q_t^{(l)}$, $x_t^{(l)}$, and $y_t^{(l)}$ are the values of the corresponding variables at the $l$-th iteration. \begin{algorithm}[t] \caption{SCA-based Method for Problem \eqref{p3.2}} \label{algo:sco} \begin{algorithmic}[1] \State {\bf Initialization:} Find an initially feasible solution $\{P_j^t(0), x_t(0), y_t(0), q_t(0),u_t(0), w_t(0)\}$ for Problem \eqref{p3.2}. \For {$l$ = 0, 1, 2, ...} \State Obtain the optimal solution of $\{P_j^t(l+1), q_t(l+1), u_t(l+1)\}$ with $\{q_t(l), x_t(l), y_t(l), w_t(l) \}$ fixed. \State Compute the optimal solution of $\{x_t(l+1), y_t(l+1), w_t(l+1) \}$ with $\{P_j^t(l+1), q_t(l+1), u_t(l+1)\}$ fixed. \State Update $l=l+1$. 
\EndFor \end{algorithmic} \end{algorithm} By substituting the non-convex constraints \eqref{p3.14} with their lower bounds at the $l$-th iteration acquired from \eqref{eq.q}, we establish the following optimization problem \begin{subequations}\label{p3.2} \begin{align} & \min_{\substack{\{P_j^t, q_t,u_t \} \\ \{x_t, y_t,w_t\}}} \sum_{t \in {\cal T}} ( P_j^t+{\tilde P}_m^t ) \delta \label{p3.21}\\ &\text {s.t.}~ \frac{1}{q_t^2} \leq q_t^{(l)2} + 2q_t^{(l)}(q_t - q_t^{(l)}) +\frac{2}{\hat v_0^2}[(x_t^{(l)} - x_{t-1}^{(l)})(x_t - x_{t-1})+(y_t^{(l)} - y_{t-1}^{(l)})(y_t - y_{t-1})] \notag \\ & \qquad \qquad -\frac{1}{\hat v_0^2}[(x_t^{(l)} - x_{t-1}^{(l)})^2 + (y_t^{(l)} - y_{t-1}^{(l)})^2], ~\forall t \label{p3.22}\\ &q_t \geq 0, ~\forall t \label{p3.23} \\ &\eqref{p13} - \eqref{p16}, \eqref{p22} - \eqref{p25}, \eqref{p3.12}. \notag \end{align} \end{subequations} It can be verified that problem \eqref{p3.2} is convex in $\{P_j^t, q_t,u_t \}$ for fixed $\{x_t, y_t,w_t\}$, and convex in $\{x_t, y_t,w_t\}$ for fixed $\{P_j^t, q_t,u_t \}$. Similarly, we can leverage the alternating optimization method to iteratively obtain the optimal values of one block of variables with the other fixed. The proposed algorithm is summarized in Algorithm \ref{algo:sco}. {\color{black} In the proposed algorithm, each subproblem is a convex program, which can be efficiently solved via classic convex optimization methods in polynomial time. It is worth noting that, because of the global lower bounds in \eqref{eq.q}, when the constraints of problem \eqref{p3.2} are satisfied, those of the original problem \eqref{p3.1} are also satisfied; yet the reverse does not necessarily hold. Thereby, the feasible region of \eqref{p3.2} is a subset of that of \eqref{p3.1}, and the optimal value of \eqref{p3.2} provides an upper bound on that of \eqref{p3.1}. 
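The validity of the surrogate constraints \eqref{p3.22} rests on the first-order Taylor expansion in \eqref{eq.q} being a global lower bound of the convex RHS of \eqref{p3.14}. The following sketch verifies this at randomly drawn points; parameter values are illustrative:

```python
import random

v0_hat_sq = (5.4 * 0.1)**2   # v0^2 * delta^2, with v0 = 5.4 m/s, delta = 0.1 s

def rhs(q, dx, dy):
    """Convex RHS of the slack constraint: q^2 + (dx^2 + dy^2)/v0_hat^2,
    where dx = x_t - x_{t-1} and dy = y_t - y_{t-1}."""
    return q**2 + (dx**2 + dy**2) / v0_hat_sq

def rhs_linearized(q, dx, dy, q0, dx0, dy0):
    """First-order Taylor expansion of rhs at the local point (q0, dx0, dy0)."""
    return (q0**2 + 2 * q0 * (q - q0)
            + (2 * (dx0 * dx + dy0 * dy)
               - (dx0**2 + dy0**2)) / v0_hat_sq)

# For a convex function, the first-order expansion is a global lower bound.
random.seed(1)
for _ in range(1000):
    pt = [random.uniform(-5, 5) for _ in range(3)]   # evaluation point
    ref = [random.uniform(-5, 5) for _ in range(3)]  # expansion point
    assert rhs(*pt) >= rhs_linearized(*pt, *ref) - 1e-12
```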
By successively updating the local point at each iteration through solving \eqref{p3.2}, the proposed approach tackles the non-convex optimization problem \eqref{p3.1}, and hence the original problem \eqref{p3}. Following arguments similar to those in \cite{zyong} and \cite{zappone}, the proposed approach is guaranteed to converge to a solution that satisfies the KKT conditions of problem \eqref{p3.1}. A high-quality sub-optimal solution can therefore be obtained by our proposed algorithm with a computational complexity of $\mathcal{O}(T_w^{3.5})$ at a fast convergence speed, as will be corroborated by the simulation results in Section V. } \begin{figure}[t] \centering \includegraphics[width=0.65\textwidth]{bothben.pdf} \caption{UAV trajectory designs and jamming power allocations under NF scenario.} \label{bothben} \end{figure} \section{Numerical Results}\label{sec.sim} In this section, we provide numerical results for the proposed approaches. The reference channel power gain $\beta_0$ is set to $10^{-12}$, the noise power spectral density is $-169$ dBm/Hz, and the communication bandwidth is $10$ MHz. The distance between S and D is $d = 200$ m, and the UAV flies at an altitude of $H = 100$ m. The maximum horizontal speed of the UAV is set to $\tilde V_m = 40$ m/s. The slot length is $\delta = 0.1$ s. The initial battery energy $E_0$ is $7 \times 10^3$ J. The parameters concerning the propulsion power and the harvested solar power are the same as in Tables \ref{tab.pm} and \ref{tab.solar}. To evaluate the proposed optimal trajectory design and power allocation schemes, we test three pairs of coordinates for the initial and final locations of the UAV. 
The three test cases are: 1) JF (Jamming Free) scenario: both the initial and final locations are inside the jamming-free area $\cal A$, namely, $(x_0, y_0) = (-50~{\text m},-100~{\text m})$, and $(x_T, y_T) = (100~{\text m},140~{\text m})$; 2) IF (Initial jamming Free) scenario: the initial location is inside $\cal A$ and the final location is outside $\cal A$, namely, $(x_0, y_0) = (-50~{\text m},0)$, and $(x_T, y_T) = (100~{\text m},350~{\text m})$; and 3) NF (No jamming Free) scenario: both locations are outside $\cal A$, namely, $(x_0, y_0) = (300~{\text m},200~{\text m})$, and $(x_T, y_T) = (200~{\text m},400~{\text m})$. \begin{figure}[t] \centering \includegraphics[width=0.65\textwidth]{totaljam.pdf} \caption{Total jamming energy consumptions of the UAV under NF scenario.} \label{totaljam} \end{figure} To further observe the UAV's behavior, we adopt three time horizons for each scenario, namely, $T=10$ s, $30$ s and $60$ s. The UAV's trajectory and power consumption are depicted every second. Note that all pairs of coordinates are carefully selected such that at least one feasible trajectory can be found for the UAV within the shortest time horizon. Fig. \ref{bothben} depicts the UAV's trajectory designs (Fig. \ref{bothben}(a)) and jamming power allocations (Fig. \ref{bothben}(b)) for the basic system model in problem \eqref{p1} under the NF scenario. The jamming-free area $\cal A$ for the LoS links is illustrated by an orange dash-dot line. It can be observed from Fig. \ref{bothben} that when both the initial and final locations are outside $\cal A$, the UAV tends to fly towards $\cal A$ first and then travel to the final location. With sufficient traveling time ($T=30$ s and $60$ s), the UAV first flies fast to $\cal A$, then takes a detour at a very low speed inside $\cal A$, and finally travels quickly to its final location. 
During this process, the jamming power first decreases, then stays at zero, and finally increases quickly in the last few time slots. This is consistent with the results in Lemma~\ref{lemma.time} and Proposition~\ref{prop.jam}. We also include the performance of the UAV with NLoS links when $T=30$ s in Fig. \ref{bothben} (labeled as ``NLoS''). It can be seen that the UAV's trajectory does not vary much under this scenario, and that its jamming power does not change smoothly with its distance from the source due to the randomness introduced by the S-D link. To further validate the benefit of trajectory design for energy reduction, we examine three baseline schemes of the UAV under the NF scenario when $T=30$ s. The first scheme is labeled as ``Low speed'', where the UAV travels in a straight line from the initial location to the final location at a fixed speed ($7.46$ m/s). The second scheme is labeled as ``Fly half'', where the UAV flies in a straight line to the final location at a constant speed ($14.91$ m/s) during the first half of the period ($15$ s), and hovers at the destination for the rest of the period. The third scheme is labeled as ``Two lines'', where the UAV first flies directly towards the point $(200~{\text m},200~{\text m})$, then flies to the final location, following the trajectory of two line segments at a constant speed of $10$ m/s.
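The constant speeds quoted for these baseline schemes follow directly from the NF endpoint geometry. A minimal sketch (straight-line distances only; coordinates taken from the scenario definitions above) recovers them:

```python
import math

# NF scenario endpoints from the text
x0, y0 = 300.0, 200.0   # initial location (m)
xT, yT = 200.0, 400.0   # final location (m)
T = 30.0                # time horizon (s)

d = math.hypot(xT - x0, yT - y0)   # straight-line distance, about 223.6 m

v_low = d / T                      # "Low speed": cover d over the whole horizon
v_half = d / (T / 2)               # "Fly half": cover d in the first half, then hover

# "Two lines": via the waypoint (200 m, 200 m) at a constant 10 m/s
d_two = math.hypot(200.0 - x0, 200.0 - y0) + math.hypot(xT - 200.0, yT - 200.0)

print(round(v_low, 2))   # close to the 7.46 m/s quoted in the text
print(round(v_half, 2))  # 14.91 m/s
print(d_two / 10.0)      # 30.0 s: the two-segment path exactly fills the horizon
```

The small gap between the computed $\approx 7.45$ m/s and the quoted $7.46$ m/s is rounding in the text.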
\begin{figure}[t] \centering \includegraphics[width=0.65\textwidth]{scojftj.pdf} \caption{UAV trajectory designs under JF scenario.} \label{scojftj} \end{figure} \begin{figure}[th] \centering \includegraphics[width=0.65\textwidth]{scojfp.pdf} \caption{UAV power allocations under JF scenario.} \label{scojfp} \end{figure} \begin{figure}[th] \centering \includegraphics[width=0.65\textwidth]{scone.pdf} \caption{UAV trajectory designs and power allocations under IF scenario.} \label{scone} \end{figure} \begin{figure}[th] \centering \includegraphics[width=0.65\textwidth]{ben6.pdf} \caption{UAV power allocations for baseline schemes under NF scenario.} \label{ben4} \end{figure} Fig. \ref{totaljam} shows the total jamming energy consumption of the UAV under the NF scenario. Figs. \ref{bothben} and \ref{totaljam} reveal that the UAV consumes significantly more jamming energy without careful trajectory design. {\color{black} The overall energy consumption of the ``Fly half'' scheme is almost nine times that of our proposed scheme, since the UAV flies quickly to the destination and hovers there for a relatively long period. As the destination is far from the source node, the longer the UAV stays there, the more jamming energy it consumes.} The ``Two lines'' scheme is the most energy-efficient among the baseline schemes, as it amounts to a simple optimization of the trajectory. Under the same parameter setting, the UAV consumes more energy with NLoS links than with LoS links, since it experiences greater path loss with the former. Note that for the proposed scheme, the total jamming energy does not increase with the length of the scheduling period, since the UAV always spends the same duration outside $\cal A$. \begin{figure}[t] \centering \includegraphics[width=0.65\textwidth]{total.pdf} \caption{Total energy consumptions of the UAV under NF scenario.} \label{total} \end{figure} Figs.
\ref{scojftj}--\ref{total} depict the trajectory designs and energy management schemes of the UAV based on the system model proposed for problem \eqref{p3} in Section \ref{sec.energy}, where harvested solar energy, propulsion power and other circuit consumptions are considered. Specifically, Figs. \ref{scojftj} and \ref{scojfp} demonstrate the convergence of the proposed approach for trajectory design and propulsion power assignment of the UAV under the JF scenario. Since the lines for the $50$-th and $60$-th iterations completely overlap, fast convergence within $50$ iterations can be readily observed in both figures. The UAV travels either inside the jamming-free area ${\cal A}$ or at the edge of ${\cal A}$ and incurs no jamming power at any time, which corroborates Lemma \ref{lemma.free}. When $T=10$ s, the UAV can only finish the journey at the speed incurring as little power consumption as possible. When the time horizon is sufficiently long ($T=30$ s and $60$ s), the UAV carefully designs its trajectory and takes a detour inside ${\cal A}$ so that it can travel at the most energy-efficient speed $V_e$ all the way. Furthermore, Fig. \ref{scone} shows that for the IF scenario when $T=10$ s, the UAV has to travel in a straight line from the initial location to the final location at a speed much faster than $V_e$, leading to significant propulsion power consumption in each slot. When there is surplus time, the UAV first travels inside $\cal A$ at a constant speed of $V_e$, which minimizes the propulsion power consumption, and then flies to the final location, which is outside $\cal A$, in the last few time slots. This trajectory design enables the UAV to stay inside $\cal A$ for as long as possible, since the less time it spends outside $\cal A$, the less jamming energy it consumes.
To fully demonstrate the influence and merits of careful trajectory design for the UAV, we again compare with three baseline schemes in which the UAV adopts different flying protocols for the same trajectory as the ``Low speed'' scheme when $T=30$ s. The first protocol is labeled ``Fly first'', where the UAV flies to the final location at approximately $V_e = 22.36$ m/s in the first $10$ s, then hovers at the destination for the remaining $20$ s. The second protocol is labeled ``Hover first'', where the UAV hovers above the initial location in the first $20$ s, then flies to the final location in the last $10$ s at speed $V_e$. The third protocol is labeled ``Round trip'', where the UAV first takes a round trip between the initial and final locations, then flies again to the destination, at speed $V_e$ throughout the flight. To facilitate comparison, we also include the ``Low speed'', ``Two lines'', and our proposed schemes under the NF scenario. Figs. \ref{ben4} and \ref{total} depict the jamming and propulsion power allocations at each time slot, and the total energy consumption of the UAV, respectively. {\color{black} Table \ref{tab.respective} lists the respective jamming and propulsion energy consumptions for the different schemes.} It can be readily seen from Fig. \ref{ben4} that the UAV needs to send jamming signals in every time slot under the five baseline schemes, as it is always traveling outside $\cal A$. The propulsion power consumed while hovering is triple that consumed while traveling at speed $V_e$. The ``Fly first'' scheme incurs the largest energy consumption for both jamming and propulsion, due to its $20$ s of hovering at the farthest point from the suspicious source. Fig. \ref{total} further reveals that the total energy consumption of the ``Round trip'' scheme is the lowest among the baseline schemes, since it amounts to a simple trajectory design with the energy-efficient speed.
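The totals behind these comparisons can be checked by summing the two columns of Table \ref{tab.respective}; a quick sketch with the table's numbers confirms that the proposed scheme is cheapest overall and that ``Round trip'' is the cheapest baseline:

```python
# (jamming energy, propulsion energy) in J, copied from Table of the NF scenario
schemes = {
    "Fly first":   (1042.9, 2850.0),
    "Hover first": (396.3, 2850.0),
    "Round trip":  (1042.9, 1255.2),
    "Low speed":   (623.7, 2427.5),
    "Two lines":   (412.8, 1968.6),
    "Proposed":    (165.2, 1255.2),
}

# total energy per scheme
totals = {name: jam + prop for name, (jam, prop) in schemes.items()}

best = min(totals, key=totals.get)
print(best, round(totals[best], 1))     # Proposed 1420.4

# cheapest among the baselines only
baselines = {k: v for k, v in totals.items() if k != "Proposed"}
print(min(baselines, key=baselines.get))  # Round trip
```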
{\color{black} It is observed from Table \ref{tab.respective} that the jamming energy consumption is the highest for the ``Fly first'' and ``Round trip'' schemes, while the propulsion energy consumption is the highest for the ``Fly first'' and ``Hover first'' schemes. Our proposed scheme consumes the least jamming energy and the least propulsion energy.} Clearly, all the baseline schemes consume more energy than our proposed scheme; in short, the UAV wastes a significant amount of energy without careful trajectory optimization. \begin{table}[t] \caption{Respective jamming and propulsion energy consumptions for different schemes under NF scenario.} \begin{center} \begin{tabular}{|c|c|c|} \hline Optimization scheme & Jamming energy (J) & Propulsion energy (J) \\ \hline Fly first & 1042.9 & 2850.0 \\ \hline Hover first & 396.3 & 2850.0 \\ \hline Round trip & 1042.9 & 1255.2 \\ \hline Low speed & 623.7 & 2427.5 \\ \hline Two lines & 412.8 & 1968.6 \\ \hline Proposed & 165.2 & 1255.2 \\ \hline \end{tabular} \end{center} \label{tab.respective} \end{table} {\color{black} \begin{remark}\textit{(Mitigating the interference on other links):} When the suspicious link intentionally chooses to be located in a wild rural area to avoid surveillance by existing monitoring infrastructure, there would be few communication links in the vicinity, and the interference caused by jamming could thereby be reduced to a negligible level. In fact, whether jamming suppresses the communications of other links typically depends on the access scheme. For instance, if the suspicious link occupies a certain frequency band all to itself, the UAV is able to send exclusive jamming signals to it, which will not affect other links.
On the other hand, if serious communication degradation is reported by legitimate users within the neighborhood, the UAV can release the specific transmitted (and encrypted) information to these users so that they can decode the jamming signals and will not be affected. Note that the maximum jamming power of $40$ W in Figs. \ref{bothben} and \ref{ben4} is the worst-case value tested in the simulation. In practice, the UAV is usually not that far away from the suspicious source and does not incur such a high jamming power consumption. \end{remark} } \section{Conclusion}\label{sec.con} We addressed joint energy management and trajectory optimization for a rotary-wing UAV enabled legitimate monitoring system. Building on a judicious (re-)formulation, we leveraged the alternating optimization and successive convex approximation methodologies to minimize the overall energy consumption of the UAV. Efficient algorithms were developed to compute a locally optimal solution or at least a feasible solution fulfilling the KKT conditions. Extensive numerical results were provided to justify the effectiveness of the proposed schemes. The proposed framework also suggests new directions for future research on security issues in UAV-aided wireless networks, such as those based on wireless power transfer and/or mobile edge computing, especially with non-LoS channels and 3D trajectory planning. \balance \bibliographystyle{IEEEtran}
\section{Introduction} Data assimilation is the process of blending observational information with prior knowledge to estimate the state of a given system \cite{LeDimet-1986-VAA}. The Kalman filter/smoother (KF/KS) \cite{Burgers-1998-ASE, Evensen-2009-DAE, Kalnay-2010-EKF} and the three- and four-dimensional variational assimilation systems (3DVAR/4DVAR) \cite{Courtier-1994-SOI, Tremolet-2007-ME4} are among the well-known algorithms used in data assimilation. Kalman filters estimate the state sequentially by seeking an analysis that minimizes the posterior variance, while the 3DVAR and 4DVAR methods produce posterior maximum likelihood solutions through minimization of an objective function. For high-dimensional problems, the ensemble Kalman filter/smoother (EnKF/EnKS) \cite{Khare-2008-IAE, Evensen-2009-DAE} and their variants have been proposed as Monte Carlo, derivative-free alternatives to the KF and KS, with the intractable state covariance in the KF or the KS replaced by the sample covariance computed from an ensemble of realizations. The purpose of this paper is to provide theoretical results for the method originally proposed in \cite{Mandel-2013-4EK}, called EnKS-4DVAR. The EnKS-4DVAR method uses an ensemble Kalman smoother as a linear solver in the Gauss-Newton or Levenberg-Marquardt method to minimize the weak-constraint 4DVAR objective function. Further details on implementation and computational results can be found in \cite{Mandel-2013-4EK}. The equivalence of the Kalman smoother and incremental variational data assimilation has been known for a long time; see, e.g., \cite{Bell-1993-IKF,Li-2001-OVD}. Hybridization of variational and ensemble-based methods has been a topic of interest among researchers in recent years \cite{Hamill-2000-HEK-x, Zupanski-2005-MLE, Wang-2010-IEC, Sakov-2012-IES, Bocquet-2012-CII, Bocquet-2014-IEK}.
The maximum ensemble likelihood filter (MELF) \cite{Zupanski-2005-MLE} uses repeated EnKF on the tangent problem to minimize the objective function over the span of the ensemble. The iterated ensemble Kalman filter (IEnKF) \cite{Sakov-2012-IES} solves the Euler equations for the minimum by Newton's method, preconditioned by a square root ensemble Kalman filter, while \cite{Bocquet-2012-CII} adds a regularization term, similar to the Levenberg-Marquardt method, and \cite{Bocquet-2014-IEK} extends the IEnKF to strong-constraint 4DVAR. The IEnKF uses a scaling of the ensemble, called the \textquotedblleft bundle variant\textquotedblright, to approximate the derivatives (tangent operators), achieving a similar effect as the use of finite differences here. The four-dimensional ensemble-based variational data assimilation (4DEnVar) of \cite{Liu-2008-EFV, Liu-2009-EFV, Liu-2013-EFV} minimizes the 4DVAR objective function over the span of the ensemble. Usually, in the formulation of ensemble-based methods (EnKF/EnKS and their variants), each ensemble member is considered as a vector in $\mathbf{R}^{n}$, that is, as a sample point of a random vector. In this paper, we investigate a different way to interpret such algorithms, similarly to \cite{Mandel-2011-CEK, LeGland-2011-LSA}: each ensemble member is considered as a random vector, and not merely as a vector in $\mathbf{R}^{n}$. In fact, the elements of the EnKF/EnKS can be seen as random vectors instead of their realizations. Surprisingly, in this case little is known about the asymptotic behavior of the EnKF/EnKS and other related ensemble methods. This is in contrast to particle filters, for which the asymptotic behavior as the number of particles increases to infinity is well studied. An important question related to the EnKF/EnKS and related ensemble methods is a law of large numbers-type theorem as the size of the ensemble grows to infinity.
In \cite{Mandel-2011-CEK, LeGland-2011-LSA}, it was proved that the ensemble mean and covariance of the EnKF converge to those of the KF as the number of ensemble members grows to infinity, but the convergence results are not dimension independent. The analysis in \cite{Mandel-2011-CEK} relies on the fact that the ensemble members are exchangeable and uses the uniform integrability theorem, which does not provide convergence rates; in \cite{LeGland-2011-LSA}, stochastic inequalities for the random matrices and vectors are used to obtain the classical rate $1/\sqrt{N}$, where $N$ is the ensemble size, but the argument proceeds entry by entry. Convergence in $L^{p}$ with the rate $1/\sqrt{N}$ independent of dimension (including infinite) was obtained recently for the square root ensemble Kalman filter \cite{Kwiatkowski-2014-CSR}. These analyses apply to each time step separately rather than to the long-time behavior. The EnKF was proved to be well-posed and to stay within a bounded distance from the truth, for a class of dynamical systems with the whole state observed, when a sufficiently large covariance inflation is used \cite{Kelly-2014-WAE}. In this paper, we extend the convergence result of \cite{Mandel-2011-CEK} to the EnKS, and apply the extension to EnKS-4DVAR. The randomness of the elements of the EnKS implies that, in contrast to the EnKS-4DVAR algorithm presented in \cite{Mandel-2013-4EK}, the coefficients and the solution of the linearized subproblem at each iteration are random. We also investigate the asymptotic behavior of this algorithm. We show the convergence of the EnKS to the KS in $L^{p}$ for all $p\in[1,\infty)$ in the large ensemble limit, in the sense that the ensemble mean and covariance constructed by the EnKS converge in $L^{p}$ to the mean and covariance of the KS, respectively. Finally, we show the convergence of the EnKS-4DVAR iterates to the corresponding iterates of the classical Levenberg-Marquardt algorithm.
Since the EnKS-4DVAR algorithm uses finite differences to approximate derivatives, (i) we start by showing the convergence in probability of its iterates to the iterates generated by the algorithm with exact derivatives as the finite difference parameter goes to zero, and (ii) we then prove the convergence in $L^{p}$ of its iterates as the size of the ensemble grows to infinity. The paper is organized as follows: in Section \ref{sec:preliminaries} we recall some definitions and preliminary results that will be useful throughout the paper. Section \ref{sec:problem} introduces nonlinear data assimilation. Section \ref{sec:kalman-fltering} contains the statements of the KF and the EnKF, and recalls the convergence properties of the EnKF as the ensemble size increases to infinity. Section \ref{sec:KS_EnKS} gives statements of the KS and the EnKS, and extends the convergence properties of the EnKF, as the ensemble size goes to infinity, to the EnKS. Finally, Section \ref{sec:vda-4DVAR} recalls the EnKS-4DVAR algorithm and presents its convergence properties. \section{Preliminaries} \label{sec:preliminaries} We recall the definition of exchangeability for a sequence of random vectors and the notions of convergence in probability and in $L^{p}$ for random elements. Then we present several lemmas that will be useful in the rest of the paper. \begin{definition}[Exchangeability of random vectors] A set of $N$ random vectors $[X^{1},\ldots,X^{N}]$ is exchangeable if their joint distribution is invariant to a permutation of the indices; that is, for any permutation $\pi$ of the numbers $1,\ldots,N$ and any Borel set $B$, \[ \mathbb{P}\left( [X^{\pi(1)},\ldots,X^{\pi(N)}]\in B\right) =\mathbb{P}\left( [X^{1},\ldots,X^{N}]\in B\right) . \] \end{definition} Clearly, an i.i.d.\ sequence is exchangeable. If $X$ is a random element (either a vector or a matrix), we use $|X|$ to denote the usual Euclidean norm (for vectors) or the spectral norm (for matrices).
For $1\leq p<\infty$, denote
\[
\Vert X\Vert_{p}=E\left( \left\vert X\right\vert ^{p}\right) ^{1/p}.
\]
The space $L^{p}$ (of vectors or matrices) consists of all random elements $X$ (with values in the same space) such that $E\left( \left\vert X\right\vert ^{p}\right) <\infty$. Identifying random elements equal a.s., we have that $\Vert\cdot\Vert_{p}$ is a norm on the space $L^{p}$. Convergence in $L^{p}$ is defined as convergence in this norm. Note that if the element $X$ is deterministic, then
\[
\Vert X\Vert_{p}=E\left( \left\vert X\right\vert ^{p}\right) ^{1/p}=\left( \left\vert X\right\vert ^{p}\right) ^{1/p}=\left\vert X\right\vert .
\]
\begin{definition}[Convergence in probability] A sequence $(X^{k})$ of random vectors converges in probability towards the random vector $X$ if for all $\epsilon>0$,
\[
\lim_{k\rightarrow\infty}\mathbb{P}\left( \left\vert X^{k}-X\right\vert \geq\epsilon\right) =0,
\]
i.e.,
\[
\forall\epsilon>0\ \forall\tilde{\epsilon}>0\ \exists k_{0}\ \forall k\geq k_{0}:\mathbb{P}\left[ \left\vert X^{k}-X\right\vert \leq\epsilon\right] \geq1-\tilde{\epsilon}.
\]
Convergence in probability will be denoted by
\[
X^{k}\xrightarrow{\mathrm{P}}X\text{ as }k\rightarrow\infty.
\]
The concept of convergence in probability and the notation are extended in an obvious manner to the case when the random vectors are indexed by $\tau>0$. Thus,
\[
X^{\tau}\xrightarrow{\mathrm{P}}X\text{ as }\tau\rightarrow0
\]
means
\[
\forall\varepsilon>0\ \forall\tilde{\varepsilon}>0\ \exists\tau_{0}>0\ \forall 0<\tau<\tau_{0}:\mathbb{P}\left[ \left\vert X^{\tau}-X\right\vert \leq\varepsilon\right] \geq1-\tilde{\varepsilon}.
\]
\end{definition}
We state the following lemmas, which will be used in this paper.
\begin{lemma}\label{lem:exchangeability_sum} If random elements $Y^{1},\ldots,Y^{N}$ are exchangeable, and $Z^{1},\ldots,Z^{N}$ are also exchangeable, and independent from $Y^{1},\ldots,Y^{N}$, then $Y^{1}+Z^{1},\ldots,Y^{N}+Z^{N}$ are exchangeable.
\end{lemma}
\begin{lemma}\label{lem:exchangeability_F} If random elements $Y^{1},\ldots,Y^{N}$ are exchangeable, and
\[
Z^{k}=F\left( Y^{1},\ldots,Y^{N},Y^{k}\right) ,
\]
where $F$ is measurable and permutation invariant in the first $N$ arguments, then $Z^{1},\ldots,Z^{N}$ are also exchangeable.
\end{lemma}
For the proof of the previous two lemmas, we refer to \cite{Mandel-2011-CEK}.
\begin{lemma}[Uniform integrability] If $(X^{k})$ is a bounded sequence in $L^{p}$ and $X^{k}\xrightarrow{\mathrm{P}}X$, then $\Vert X^{k}-X\Vert_{q}\rightarrow0$ for all $1\leq q<p$.
\end{lemma}
\begin{proof} The proof is an exercise on uniform integrability \cite[page 338]{Billingsley-1995-PM}: Let $1\leq q<p$. The sequence $\left( E\left( \left\vert X^{k}-X\right\vert ^{q\left( p/q\right) }\right) \right)$ is bounded and $p/q>1$, thus the sequence $\left( \left\vert X^{k}-X\right\vert ^{q}\right)$ is uniformly integrable. Since $\left\vert X^{k}-X\right\vert \xrightarrow{\mathrm{P}}0$, and thus $\left\vert X^{k}-X\right\vert ^{q}\xrightarrow{\mathrm{P}}0$, it follows that $E\left( \left\vert X^{k}-X\right\vert ^{q}\right) \rightarrow0$.
\end{proof}
\begin{lemma}[Continuous mapping theorem] Let $X^{k}$ be a sequence of random elements with values in a metric space $\mathcal{A}$, such that $X^{k}\xrightarrow{\mathrm{P}}X$. Let $f$ be a continuous function from $\mathcal{A}$ to another metric space $\mathcal{B}$. Then $f(X^{k})\xrightarrow{\mathrm{P}}f(X)$.
\end{lemma}
We refer to \cite[Theorem 2.3]{vanderVaart-2000-AS} for a proof.
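As a concrete illustration of convergence in probability, the standard example is the sample mean of i.i.d.\ variables: by the weak law of large numbers, $\mathbb{P}(|\bar{X}^{k}-E X|\geq\epsilon)\rightarrow 0$ as $k\rightarrow\infty$. A small Monte Carlo sketch (illustrative only; the uniform distribution, tolerance, and trial count are arbitrary choices, not from the text):

```python
import random

random.seed(0)

def deviation_prob(k, eps=0.1, trials=2000):
    """Monte Carlo estimate of P(|mean of k Uniform(0,1) draws - 1/2| >= eps)."""
    hits = 0
    for _ in range(trials):
        mean = sum(random.random() for _ in range(k)) / k
        if abs(mean - 0.5) >= eps:
            hits += 1
    return hits / trials

p10, p1000 = deviation_prob(10), deviation_prob(1000)
print(p10, p1000)  # the deviation probability shrinks as k grows
```

For $k=10$ the standard deviation of the mean is about $0.09$, so deviations of $0.1$ are common; for $k=1000$ it is about $0.009$, and such deviations are essentially never observed.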
\section{The nonlinear data assimilation problem}
\label{sec:problem}
\begin{table}[tb]
\centering
\begin{tabular}[c]{llll}
Symbol & Random & Meaning & First used in Sec.\\\hline
$X_{i}$ & yes & the state at time $i$ & \ref{sec:problem}\\
$X_{i|\ell}$ & no & the mean of $X_{i}$ given data $y_{1:\ell}$ & \ref{sec:filter}\\
$P_{i|\ell}$ & no & the covariance of $X_{i}$ given data $y_{1:\ell}$ & \ref{sec:filter}\\
$X_{i|\ell}^{n}$ & yes & member $n$ of an ensemble approximating $X_{i}$ given $y_{1:\ell}$ & \ref{sec:EnKF}\\
$\bar{X}_{i|\ell}^{N}$ & yes & the sample mean of the ensemble $X_{i|\ell}^{1},\ldots,X_{i|\ell}^{N}$ & \ref{sec:EnKF}\\
${P}_{i|\ell}^{N}$ & yes & the sample covariance of the ensemble $X_{i|\ell}^{1},\ldots,X_{i|\ell}^{N}$ & \ref{sec:EnKF}\\
$U_{i|\ell}^{n}$ & yes & member $n$ of a reference ensemble approximating $X_{i}$ given $y_{1:\ell}$ & \ref{sec:EnKF-convergence}\\
$X_{0:i}$ & yes & composite state $\left[ X_{0},\ldots,X_{i}\right]$ at times $0,\ldots,i$ & \ref{sec:smoother}\\
$X_{0:i|\ell}$ & no & the mean of $X_{0:i}$ & \ref{sec:smoother}\\
${P}_{0:i|\ell}$ & no & the covariance of $X_{0:i}$ & \ref{sec:smoother}\\
$X_{0:i|\ell}^{n}$ & yes & member $n$ of an ensemble approximating $X_{0:i}$ given $y_{1:\ell}$ & \ref{sec:EnKS}\\
$\bar{X}_{0:i|\ell}^{N}$ & yes & the sample mean of the ensemble $X_{0:i|\ell}^{1},\ldots,X_{0:i|\ell}^{N}$ & \ref{sec:EnKS}\\
${P}_{0:i,0:i|\ell}^{N}$ & yes & the sample covariance of the ensemble $X_{0:i|\ell}^{1},\ldots,X_{0:i|\ell}^{N}$ & \ref{sec:EnKS}\\
$U_{0:i|\ell}^{n}$ & yes & member $n$ of a reference ensemble approximating $X_{0:i}$ given $y_{1:\ell}$ & \ref{sec:EnKS_convergence}\\
$x_{\mathrm{b}}$ & no & the background state & \ref{sec:4DVAR}\\
$x_{i}$ & no & the unknown state in the 4DVAR minimization & \ref{sec:4DVAR}\\
$x_{0:k}$ & no & the unknown composite state in the 4DVAR minimization & \ref{sec:4DVAR}\\
$x_{i}^{j},x_{0:k}^{j}$ & no & the iterate $j$ in the 4DVAR minimization & \ref{sec:incremental-4DVAR}\\
$X_{0:i|\ell}^{j,n}$ & yes & member $n$ of an ensemble approximating $x_{0:i}^{j}$ & \ref{sec:EnKS-4DVAR-WD}\\
$\bar{X}_{0:i|\ell}^{j,N}$ & yes & the sample mean of the ensemble $X_{0:i|\ell}^{j,1},\ldots,X_{0:i|\ell}^{j,N}$ & \ref{sec:EnKS-4DVAR-WD}\\
$X_{0:k|k}^{j,n,N_{j}}$ & yes & member $n$ from an ensemble of size $N_{j}$ & \ref{sec:EnKS-4DVAR-WD}\\
$X_{0:i|\ell}^{j,n,\tau}$ & yes & member $n$ of the ensemble approximating $x_{0:i}^{j}$ with step $\tau$ & \ref{sec:EnKS-4DVAR-fd}\\
$\bar{X}_{0:i|\ell}^{j,N,\tau}$ & yes & the sample mean of the ensemble $X_{0:i|\ell}^{j,1,\tau},\ldots,X_{0:i|\ell}^{j,N,\tau}$ & \ref{sec:EnKS-4DVAR-fd}\\
$\Delta_{i|\ell}^{j,n}$ & yes & 4DVAR increment ensemble members $X_{i|\ell}^{j,n}-x_{i}^{j-1,n}$ & \ref{sec:EnKS-4DVAR-fd}
\end{tabular}
\caption{Notation for state vectors.}\label{tab:notation}
\end{table}
Consider the following classical system of stochastic equations with additive Gaussian noise, which appears in different fields, such as weather forecasting and hydrology:
\begin{align}
X_{0} & \sim N(x_{\mathrm{b}},B)\label{eq:filter_background_eq}\\
X_{i} & =\mathcal{M}_{i}(X_{i-1})+\mu_{i}+V_{i},\quad V_{i}\sim N(0,Q_{i}),\quad i=1,\ldots,k\label{eq:filter_model_eq}\\
y_{i} & =\mathcal{H}_{i}(X_{i})+W_{i},\quad W_{i}\sim N(0,R_{i}),\quad i=1,\ldots,k, \label{eq:filter_observation_eq}
\end{align}
with independent perturbations $V_{i}$ and $W_{i}$. The operators $\mathcal{M}_{i}$ and $\mathcal{H}_{i}$ are the model operators and the observation operators, respectively, and they are assumed to be continuously differentiable. When they are linear, we denote them by $M_{i}$ and $H_{i}$, respectively. The index $i$ denotes the time index and $k$ denotes the number of time steps. While the outputs $y_{i}$ are observed, the state $X_{i}$ and the noise variables $V_{i}$ and $W_{i}$ are hidden. The quantities $B$, $Q_i$ and $R_i$ are the covariance matrices of $X_0$, $V_i$ and $W_i$, respectively.
The quantity $\mu_{i}$ is a deterministic vector. The objective is to estimate the hidden states $X_{1},\ldots,X_{k}$.
\begin{definition}
The distribution of $X_{k}$ from (\ref{eq:filter_background_eq})--(\ref{eq:filter_observation_eq}) conditioned on $y_{1},\ldots,y_{k-1}$ is called the prior distribution. The filtering, or posterior, distribution is the distribution of $X_{k}$ conditioned on the observations of the data $y_{1},\ldots,y_{k}$. The smoothing distribution is the joint distribution of $X_{0},\ldots,X_{k}$, conditioned on the observations of the data $y_{1},\ldots,y_{k}$.
\end{definition}
In geosciences, the prior is usually called the forecast and the posterior is called the analysis. In Table \ref{tab:notation}, we collect the notation for state vectors and their ensembles for reference.
\section{Kalman filtering}
\label{sec:kalman-fltering}
\nopagebreak
\subsection{Kalman filter}
\label{sec:filter}
The Kalman filter \cite{Kalman-1960-NAL} provides an efficient recursive means to estimate the state of the process $X_{k}$ in the linear case, i.e., when $\mathcal{M}_{i}$ and $\mathcal{H}_{i}$, $i=1,\ldots,k$, are linear. Denote the mean and the covariance of $X_{i}$ given the data $y_{1},\ldots,y_{\ell}$ by
\[
X_{i|\ell}=E(X_{i}|y_{1},\ldots,y_{\ell}),\quad P_{i|\ell}=P(X_{i}|y_{1},\ldots,y_{\ell}),
\]
respectively. In the linear case, the probability distribution of the process $X_{k}$ given the data up to time $k$ is Gaussian; therefore it is characterized by its mean and covariance matrix, which can be computed as follows.
\begin{algorithm}[Kalman filter]\label{alg:KF} For $i=0$, set $X_{0|0}=x_{\mathrm{b}}$ and $P_{0|0}=B$.
For $i=1,\ldots,k$,
\begin{align}
X_{i|i-1} & =M_{i}X_{i-1|i-1}+\mu_{i},\text{ (advance the mean in time)}\label{eq:Kalman-model}\\
P_{i|i-1} & =M_{i}P_{i-1|i-1}M_{i}^{\mathrm{T}}+Q_{i},\text{ (advance the covariance in time)}\nonumber\\
K_{i} & =P_{i|i-1}H_{i}^{\mathrm{T}}(H_{i}P_{i|i-1}H_{i}^{\mathrm{T}}+R_{i})^{-1}\text{ (the Kalman gain)}\nonumber\\
X_{i|i} & =X_{i|i-1}+K_{i}(y_{i}-H_{i}X_{i|i-1}),\text{ (update the mean from the observation }i\text{)}\label{eq:Kalman-update-mean}\\
P_{i|i} & =(I-K_{i}H_{i})P_{i|i-1}\text{ (update the covariance from the observation }i\text{)}\label{eq:Kalman-update-covariance}
\end{align}
\end{algorithm}
In atmospheric sciences, the update (\ref{eq:Kalman-update-mean})--(\ref{eq:Kalman-update-covariance}) is referred to as the analysis step.
\begin{lemma}\label{lem:KF} The distribution $N\left( X_{k|k},P_{k|k}\right)$ from the Kalman filter is the filtering distribution.
\end{lemma}
See, e.g., \cite{Anderson-1979-OF,Simon-2006-OSE} for the proof. If the dimension of the hidden state $X_{k}$ is large, the covariance matrices $P_{k|k-1}$ and $P_{k|k}$ are large dense matrices; storing them in memory on current hardware is often impossible, and the matrix products in the computation of $P_{k|k-1}$ are also problematic. These problems motivate the use of ensemble methods.
\subsection{Ensemble Kalman filter (EnKF)}
\label{sec:EnKF}
The idea behind the ensemble Kalman filter is to use Monte Carlo samples and the corresponding empirical covariance matrix instead of the forecast covariance matrix $P_{k|k-1}$ \cite{Evensen-2009-DAE}. Denote by $n$ the ensemble member index, $n=1,\ldots,N$.
\begin{algorithm}[EnKF] For $i=0$, $X_{0|0}^{n}\sim N\left( x_{\mathrm{b}},B\right)$.
For $i=1,\ldots,k$, given an analysis ensemble $X_{i-1|i-1}^{1},\ldots,X_{i-1|i-1}^{N}$ at time $i-1$, the ensemble at time $i$ is built as
\begin{align}
X_{i|i-1}^{n} & ={M}_{i}X_{i-1|i-1}^{n}+\mu_{i}+V_{i}^{n},\quad V_{i}^{n}\sim N\left( 0,{Q}_{i}\right) ,\label{eq:advance_model}\\
X_{i|i}^{n} & =X_{i|i-1}^{n}+{P}_{i|i-1}^{N}{H}_{i}^{\mathrm{T}}\left( {H}_{i}{P}_{i|i-1}^{N}{H}_{i}^{\mathrm{T}}+{R}_{i}\right) ^{-1}(y_{i}-W_{i}^{n}-H_{i}X_{i|i-1}^{n}),\quad W_{i}^{n}\sim N\left( 0,{R}_{i}\right) , \label{eq:ens_analysis}
\end{align}
where ${P}_{i|i-1}^{N}$ is the covariance estimate from the ensemble $\left[ X_{i|i-1}^{n}\right] _{n=1}^{N}$,
\[
{P}_{i|i-1}^{N}=\frac{1}{N-1}\sum_{n=1}^{N}\left( X_{i|i-1}^{n}-\bar{X}_{i|i-1}^{N}\right) \left( X_{i|i-1}^{n}-\bar{X}_{i|i-1}^{N}\right) ^{\mathrm{T}},\text{ where }\bar{X}_{i|i-1}^{N}=\frac{1}{N}\sum_{n=1}^{N}X_{i|i-1}^{n}.
\]
The empirical covariance matrix ${P}_{i|i-1}^{N}$ is never computed or stored; indeed, to compute the matrix products ${P}_{i|i-1}^{N}H_{i}^{\mathrm{T}}$ and $H_{i}{P}_{i|i-1}^{N}H_{i}^{\mathrm{T}}$, only matrix-vector products are needed:
\begin{align}
{P}_{i|i-1}^{N}H_{i}^{\mathrm{T}} & =\frac{1}{N-1}\sum_{n=1}^{N}\left( X_{i|i-1}^{n}-\bar{X}_{i|i-1}^{N}\right) \left( X_{i|i-1}^{n}-\bar{X}_{i|i-1}^{N}\right) ^{\mathrm{T}}H_{i}^{\mathrm{T}}\label{eq:PHt}\\
& =\frac{1}{N-1}\sum_{n=1}^{N}\left( X_{i|i-1}^{n}-\bar{X}_{i|i-1}^{N}\right) h_{n}^{\mathrm{T}},\nonumber\\
H_{i}{P}_{i|i-1}^{N}H_{i}^{\mathrm{T}} & =H_{i}\frac{1}{N-1}\sum_{n=1}^{N}\left( X_{i|i-1}^{n}-\bar{X}_{i|i-1}^{N}\right) \left( X_{i|i-1}^{n}-\bar{X}_{i|i-1}^{N}\right) ^{\mathrm{T}}H_{i}^{\mathrm{T}}=\frac{1}{N-1}\sum_{n=1}^{N}h_{n}h_{n}^{\mathrm{T}}, \label{eq:HPHt}
\end{align}
where
\begin{equation}
h_{n}=H_{i}\left( X_{i|i-1}^{n}-\bar{X}_{i|i-1}^{N}\right) . \label{eq:h_n}
\end{equation}
\end{algorithm}
Note that the i.i.d.
random vectors $(V_{i}^{1},\ldots,V_{i}^{N})$ are simulated here with the same statistics as the additive Gaussian noise $V_{i}$ in the original state in eq.~(\ref{eq:filter_model_eq}). The i.i.d. random vectors $(W_{i}^{1},\ldots,W_{i}^{N})$ are simulated here with the same statistics as the additive Gaussian noise $W_{i}$ in the original state in eq.~(\ref{eq:filter_observation_eq}). The initial ensemble $\left[ X_{0|0}^{n}\right]_{n=1}^{N}$ is simulated as i.i.d. Gaussian random vectors with mean $x_{\mathrm{b}}$ and covariance $B$, i.e., with the same statistics as the initial state $X_{0}$.
\subsection{Convergence of the EnKF}
\label{sec:EnKF-convergence}
For theoretical purposes, we define an auxiliary ensemble $U_{i|i}=[U_{i|i}^{n}]_{n=1}^{N}$, $i=0,\ldots,k$, called the \emph{reference ensemble}, in the same way as the ensemble $X_{i|i}=[X_{i|i}^{n}]_{n=1}^{N}$, but this time for the updates of the ensemble $U_{i|i}$ we use the exact covariances instead of their empirical estimates. The realizations of the random perturbations $V_{i}^{n}$ and $W_{i}^{n}$ in both ensembles are the same. Thus, for $i=0$, $U_{0|0}^{n}=X_{0|0}^{n}$, and for $i=1,\ldots,k$, we build ${U}_{i|i}$ up to time $i$ conditioned on observations up to time $i$:
\begin{align}
U_{i|i-1}^{n} & ={M}_{i}U_{i-1|i-1}^{n}+\mu_{i}+V_{i}^{n},\quad V_{i}^{n}\sim N\left( 0,{Q}_{i}\right) ,\quad n=1,\ldots,N,\label{eq:U_forcast}\\
U_{i|i}^{n} & =U_{i|i-1}^{n}+{P}_{i|i-1}{H}_{i}^{\mathrm{T}}\left( {H}_{i}{P}_{i|i-1}{H}_{i}^{\mathrm{T}}+{R}_{i}\right) ^{-1}\left( y_{i}-W_{i}^{n}-H_{i}U_{i|i-1}^{n}\right) , \label{eq:U_analysis}
\end{align}
where $W_{i}^{n}\sim N\left( 0,{R}_{i}\right)$ is a random perturbation, and $P_{i|i-1}$ is the covariance of $U_{i|i-1}^{1}$,
\[
{P}_{i|i-1}=E\left[ \left( U_{i|i-1}^{n}-E(U_{i|i-1}^{n})\right) \left( U_{i|i-1}^{n}-E(U_{i|i-1}^{n})\right) ^{\mathrm{T}}\right] .
\]
Note that the only difference between the two ensembles $X_{k|k}$ and $U_{k|k}$ is that for the construction of $X_{k|k}$, we use the empirical prediction covariance ${P}_{k|k-1}^{N}$ of the ensemble, which depends on all ensemble members, instead of the exact covariance. Therefore, $X_{k|k}^{n}$, $n=1,\ldots,N$, are in general dependent. On the other hand:
\begin{lemma}
\label{lem:EnKF-Uk-exact} The members of the ensemble $[U_{k|k}^{n}]_{n=1}^{N}$ are i.i.d. and the distribution of each $U_{k|k}^{n}$ is the same as the filtering distribution.
\end{lemma}
\begin{proof}
The proof is by induction and the same as in \cite[Lemma 4]{Mandel-2011-CEK}, except we take the additional perturbation $V_{k}^{n}$ into account. Since $\left[ V_{k}^{n}\right] _{n=1}^{N}$ are Gaussian and independent of everything else by assumption, $[U_{k|k}^{n}]_{n=1}^{N}$ are independent and Gaussian. The forecast covariance ${P}_{k|k-1}$ is constant (non-random), and, consequently, the analysis step (\ref{eq:U_analysis}) is a linear transformation, which preserves the independence of the ensemble members and the Gaussianity of the distribution. It is known that the members of the reference ensemble have the same mean and covariance as given by the Kalman filter \cite[eq.~(15) and (16)]{Burgers-1998-ASE}. The proof is completed by noting that a Gaussian distribution is determined by its mean and covariance.
\end{proof}
\begin{theorem}
\label{thm:EnKF_convergence} For any $i=0,\ldots,k$, the random matrix
\begin{equation}
\left[
\begin{array}[c]{c}
X_{i|i}^{1},\ldots,X_{i|i}^{N}\\
U_{i|i}^{1},\ldots,U_{i|i}^{N}
\end{array}
\right] \label{eq:exch}
\end{equation}
has exchangeable columns, and
\[
X_{i|i}^{1}\rightarrow U_{i|i}^{1},
\]
in all $L^{p}$, $1\leq p<\infty$, as $N\rightarrow\infty$.
Also,
\begin{align*}
\bar{X}_{i|i}^{N} & =\frac{1}{N}\sum_{n=1}^{N}X_{i|i}^{n}\rightarrow E\left( U_{i|i}^{1}\right) ,\\
{P}_{i|i-1}^{N} & =\frac{1}{N-1}\sum_{n=1}^{N}\left( X_{i|i-1}^{n}-\bar{X}_{i|i-1}^{N}\right) \left( X_{i|i-1}^{n}-\bar{X}_{i|i-1}^{N}\right) ^{\mathrm{T}}\\
& \rightarrow{P}_{i|i-1}=E\left[ \left( U_{i|i-1}^{1}-E\left( U_{i|i-1}^{1}\right) \right) \left( U_{i|i-1}^{1}-E\left( U_{i|i-1}^{1}\right) \right) ^{\mathrm{T}}\right] ,
\end{align*}
in all $L^{p}$, $1\leq p<\infty$, as $N\rightarrow\infty$.
\end{theorem}
\begin{proof}
The theorem is again a simple extension of that of \cite[Theorem 1]{Mandel-2011-CEK}, by adding the model error $V_{i}^{n}$ in each step of the induction over $i$.
\end{proof}
Note that since (\ref{eq:exch}) has exchangeable columns and $X_{i|i}^{1}\rightarrow U_{i|i}^{1}$ in $L^{p}$, we have the same convergence result for every fixed $n$: $X_{i|i}^{n}\rightarrow U_{i|i}^{n}$ in all $L^{p}$, as $N\rightarrow\infty$.

\section{Kalman smoothing}
\label{sec:KS_EnKS}

\subsection{Kalman smoother (KS)}
\label{sec:smoother} A smoother estimates the composite hidden state
\[
X_{0:i}=\left[
\begin{array}[c]{c}
X_{0}\\
\vdots\\
X_{i}
\end{array}
\right]
\]
given all observations $y_{1},\ldots,y_{i}$. Again, the Kalman smoother provides the exact result in the linear Gaussian case. Denote by $X_{0:i|\ell}$ the expectation of the composite state $X_{0:i}$ given the observations $y_{1},\ldots,y_{\ell}$, and by $P_{0:i|\ell}$ the corresponding covariance.
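Before turning to smoothing, the matrix-free EnKF analysis step above, which forms only the products ${P}_{i|i-1}^{N}H_{i}^{\mathrm{T}}$ and $H_{i}{P}_{i|i-1}^{N}H_{i}^{\mathrm{T}}$ through the vectors $h_{n}$ of (\ref{eq:PHt}), can be sketched in a few lines of NumPy. This is a minimal illustration only, not the implementation used here; the names (`enkf_analysis`, `Xf`, and so on) and the dimensions are ours.

```python
import numpy as np

def enkf_analysis(Xf, H, R, y, rng):
    """One stochastic EnKF analysis step (a sketch).

    Xf : (m, N) forecast ensemble, columns are the members X_{i|i-1}^n
    H  : (d, m) observation operator H_i
    R  : (d, d) observation error covariance R_i
    y  : (d,)   observation y_i
    The empirical covariance P^N is never assembled; only the anomalies
    and the vectors h_n = H (X^n - Xbar) are used.
    """
    m, N = Xf.shape
    Xbar = Xf.mean(axis=1, keepdims=True)
    A = Xf - Xbar                       # anomalies X^n - Xbar, (m, N)
    HA = H @ A                          # columns are the vectors h_n
    PHt = A @ HA.T / (N - 1)            # P^N H^T,   (m, d)
    HPHt = HA @ HA.T / (N - 1)          # H P^N H^T, (d, d)
    # perturbed observations: W^n ~ N(0, R), one column per member
    W = rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    innov = y[:, None] - W - H @ Xf
    return Xf + PHt @ np.linalg.solve(HPHt + R, innov)

rng = np.random.default_rng(0)
m, d, N = 5, 2, 200
H = np.eye(d, m)                        # observe the first d components
R = 0.1 * np.eye(d)
Xf = rng.standard_normal((m, N))        # illustrative forecast ensemble
Xa = enkf_analysis(Xf, H, R, np.array([1.0, -1.0]), rng)
assert Xa.shape == (m, N)
```

Note that only $(m\times d)$ and $(d\times d)$ arrays appear; the $m\times m$ covariance is never stored, which is the point of (\ref{eq:PHt}).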
In the linear case, we write the stochastic system (\ref{eq:filter_model_eq})--(\ref{eq:filter_observation_eq}) in terms of the composite state $X_{0:i}$ as
\begin{align}
X_{0:i} & =\left[
\begin{array}[c]{cccc}
I_{m} & 0 & \ldots & 0\\
0 & I_{m} & \vdots & \vdots\\
\vdots & \ddots & \ddots & 0\\
0 & \ldots & \ddots & I_{m}\\
0 & \ldots & 0 & M_{i}
\end{array}
\right] X_{0:i-1}+\left[
\begin{array}[c]{c}
0\\
\vdots\\
\mu_{i}
\end{array}
\right] +\left[
\begin{array}[c]{c}
0\\
\vdots\\
V_{i}
\end{array}
\right] \label{eq:smoother_model}\\
& =\left[
\begin{array}[c]{c}
I_{m(i-1)}\\
\tilde{M}_{i}
\end{array}
\right] X_{0:i-1}+\left[
\begin{array}[c]{c}
0\\
\vdots\\
\mu_{i}
\end{array}
\right] +\left[
\begin{array}[c]{c}
0\\
\vdots\\
V_{i}
\end{array}
\right] ,\quad V_{i}\sim N\left( 0,Q_{i}\right) ,\nonumber\\
y_{i} & =\left[ 0,\ldots,{H}_{i}\right] X_{0:i}+W_{i}=\tilde{H}_{i}X_{0:i}+W_{i},\quad W_{i}\sim N\left( 0,R_{i}\right) , \label{eq:smoother_obs}
\end{align}
where $m$ is the dimension of the state $X_{i}$, $I_{d}$ is the identity matrix in $\mathbf{R}^{d\times d}$, and
\begin{equation}
\tilde{H}_{i}=\left[ 0,\ldots,{H}_{i}\right] ,\quad\tilde{M}_{i}=\left[ 0,\ldots,M_{i}\right] .
\label{eq:smoother_obs_op}
\end{equation}
Applying the Kalman filter analysis step (\ref{eq:Kalman-model})--(\ref{eq:Kalman-update-covariance}) to the observation (\ref{eq:smoother_obs}) of the composite state $X_{0:i}$, we obtain the Kalman smoother
\begin{align*}
X_{0:i|i-1} & =\left[
\begin{array}[c]{c}
I_{m(i-1)}\\
\tilde{M}_{i}
\end{array}
\right] X_{0:i-1|i-1}+\left[
\begin{array}[c]{c}
0\\
\vdots\\
\mu_{i}
\end{array}
\right] =\left[
\begin{array}[c]{c}
X_{0:i-1|i-1}\\
M_{i}X_{i-1|i-1}+\mu_{i}
\end{array}
\right] ,\\
P_{0:i|i-1} & =\left[
\begin{array}[c]{c}
I_{m(i-1)}\\
\tilde{M}_{i}
\end{array}
\right] P_{0:i-1|i-1}\left[
\begin{array}[c]{c}
I_{m(i-1)}\\
\tilde{M}_{i}
\end{array}
\right] ^{\mathrm{T}}+\left[
\begin{array}[c]{cc}
0 & 0\\
0 & Q_{i}
\end{array}
\right] \\
& =\left[
\begin{array}[c]{cc}
P_{0:i-1|i-1} & P_{0:i-1|i-1}\tilde{M}_{i}^{\mathrm{T}}\\
\tilde{M}_{i}P_{0:i-1|i-1} & \tilde{M}_{i}P_{0:i-1|i-1}\tilde{M}_{i}^{\mathrm{T}}+Q_{i}
\end{array}
\right] ,\\
K_{i} & =P_{0:i|i-1}\tilde{H}_{i}^{\mathrm{T}}\left( R_{i}+\tilde{H}_{i}P_{0:i|i-1}\tilde{H}_{i}^{\mathrm{T}}\right) ^{-1}\\
& =P_{0:i|i-1}\tilde{H}_{i}^{\mathrm{T}}\left( R_{i}+{H}_{i}P_{i,i|i-1}{H}_{i}^{\mathrm{T}}\right) ^{-1},\\
X_{0:i|i} & =X_{0:i|i-1}+K_{i}\left( y_{i}-\tilde{H}_{i}X_{0:i|i-1}\right) =X_{0:i|i-1}+K_{i}\left( y_{i}-{H}_{i}X_{i|i-1}\right) ,\\
P_{0:i|i} & =\left( I_{mi}-K_{i}\tilde{H}_{i}\right) P_{0:i|i-1}.
\end{align*}
\begin{lemma}
\label{lem:KS}The distribution $N\left( X_{0:k|k},P_{0:k,0:k|k}\right) $ from the Kalman smoother is the smoothing distribution, and its mean $X_{0:k|k}$ is the solution of the least squares problem
\begin{equation}
X_{0:k|k}=\mathop{\mathrm{argmin}}_{x_{0:k}}\biggl(\left\vert x_{0}-x_{\mathrm{b}}\right\vert _{B^{-1}}^{2}+\sum_{i=1}^{k}\left\vert x_{i}-{M}_{i}x_{i-1}-\mu_{i}\right\vert _{Q_{i}^{-1}}^{2}+\sum_{i=1}^{k}\left\vert y_{i}-{H}_{i}x_{i}\right\vert _{R_{i}^{-1}}^{2}\biggr).
\label{eq:eq_ks_ls}
\end{equation}
\end{lemma}
\begin{proof}
The mean $X_{0:k|k}$ maximizes the joint posterior probability density of the composite state $X_{0:k}$ given $y_{1:k}$, which by the Bayes theorem is proportional to
\[
e^{-\frac{1}{2}\left\vert x_{0}-x_{\mathrm{b}}\right\vert _{B^{-1}}^{2}}e^{-\frac{1}{2}\sum_{i=1}^{k}\left\vert x_{i}-{M}_{i}x_{i-1}-\mu_{i}\right\vert _{Q_{i}^{-1}}^{2}}e^{-\frac{1}{2}\sum_{i=1}^{k}\left\vert y_{i}-{H}_{i}x_{i}\right\vert _{R_{i}^{-1}}^{2}}.
\]
\end{proof}
Again, when $m$ is large, the covariance matrices $P_{0:i|i-1}$ and $P_{0:i|i}$ are very large, the matrix products in the computation of $P_{0:i|i-1}$ are also problematic to implement, and we turn to ensemble methods.

\subsection{Ensemble Kalman smoother (EnKS)}
\label{sec:EnKS} In the ensemble Kalman smoother \cite{Evensen-2009-DAE}, the covariances are replaced by approximations from the ensemble. Let
\[
\left[ \left[
\begin{array}[c]{c}
X_{0|j}^{1}\\
\vdots\\
X_{i|j}^{1}
\end{array}
\right] ,\ldots,\left[
\begin{array}[c]{c}
X_{0|j}^{N}\\
\vdots\\
X_{i|j}^{N}
\end{array}
\right] \right] =\left[ X_{0:i|j}^{1},\ldots,X_{0:i|j}^{N}\right] =\left[ X_{0:i|j}^{n}\right] _{n=1}^{N}
\]
denote an ensemble of $N$ model states over time up to $i$, conditioned on the observations up to time $j$.
\begin{algorithm}
[EnKS]\label{alg:EnKS} For $i=0$, the ensemble $\left[ X_{0|0}^{n}\right] _{n=1}^{N}$ consists of i.i.d. Gaussian random variables
\begin{equation}
X_{0|0}^{n}\sim N\left( x_{\mathrm{b}},B\right) . \label{eq:EnKS-background}
\end{equation}
For $i=1,\ldots,k$, advance the model to time $i$ by
\begin{equation}
X_{i|i-1}^{n}={M}_{i}X_{i-1|i-1}^{n}+\mu_{i}+V_{i}^{n},\quad V_{i}^{n}\sim N\left( 0,{Q}_{i}\right) ,\quad n=1,\ldots,N.
\label{eq:EnKS-advance}
\end{equation}
Incorporate the observation at time $i$,
\[
y_{i}=\tilde{H}_{i}X_{0:i}+W_{i},\quad W_{i}\sim N\left( 0,{R}_{i}\right) ,
\]
into the ensemble of composite states $\left[ X_{0:i|i-1}^{1},\ldots,X_{0:i|i-1}^{N}\right] $ in the same way as for the EnKF update,
\begin{equation}
X_{0:i|i}^{n}=X_{0:i|i-1}^{n}+{P}_{0:i|i-1}^{N}\tilde{H}_{i}^{\mathrm{T}}\left( \tilde{H}_{i}{P}_{0:i|i-1}^{N}\tilde{H}_{i}^{\mathrm{T}}+{R}_{i}\right) ^{-1}\left( y_{i}-W_{i}^{n}-H_{i}X_{i|i-1}^{n}\right) , \label{eq:EnKS-analysis}
\end{equation}
where ${P}_{0:i|i-1}^{N}$ is a covariance estimate from the ensemble $X_{0:i|i-1}$ and $W_{i}^{n}\sim N\left( 0,{R}_{i}\right) $ are random perturbations. Similarly as in (\ref{eq:PHt})--(\ref{eq:h_n}), only the following matrix-vector products are needed:
\begin{align}
{P}_{0:i|i-1}^{N}\tilde{H}_{i}^{\mathrm{T}} & =\frac{1}{N-1}\sum_{n=1}^{N}\left( X_{0:i|i-1}^{n}-\bar{X}_{0:i|i-1}^{N}\right) \left( X_{i|i-1}^{n}-\bar{X}_{i|i-1}^{N}\right) ^{\mathrm{T}}H_{i}^{\mathrm{T}}\label{eq:PHt_EnKS}\\
& =\frac{1}{N-1}\sum_{n=1}^{N}\left( X_{0:i|i-1}^{n}-\bar{X}_{0:i|i-1}^{N}\right) h_{n}^{\mathrm{T}},\nonumber\\
\tilde{H}_{i}{P}_{0:i|i-1}^{N}\tilde{H}_{i}^{\mathrm{T}} & =H_{i}\frac{1}{N-1}\sum_{n=1}^{N}\left( X_{i|i-1}^{n}-\bar{X}_{i|i-1}^{N}\right) \left( X_{i|i-1}^{n}-\bar{X}_{i|i-1}^{N}\right) ^{\mathrm{T}}H_{i}^{\mathrm{T}}=\frac{1}{N-1}\sum_{n=1}^{N}h_{n}h_{n}^{\mathrm{T}}, \label{eq:HPHt_EnKS}
\end{align}
where again
\begin{equation}
h_{n}=H_{i}\left( X_{i|i-1}^{n}-\bar{X}_{i|i-1}^{N}\right) \label{eq:h_n_EnKS}
\end{equation}
and
\begin{equation}
\bar{X}_{\ell|i-1}^{N}=\frac{1}{N}\sum_{n=1}^{N}X_{\ell|i-1}^{n}.
\label{eq:sample_mean_EnKS}
\end{equation}
\end{algorithm}

\subsection{Convergence of the EnKS}
\label{sec:EnKS_convergence} Just as for the EnKF, we construct an ensemble $U_{0:k|k}=[U_{0:k|k}^{n}]_{n=1}^{N}$ in the same way as the ensemble $\left[ X_{0:k|k}^{n}\right]_{n=1}^{N}$, but for the updates of the ensemble $U_{0:k|k}$ we use the exact covariances instead of their empirical estimates. So, for $i=0$, $U_{0|0}^{n}=X_{0|0}^{n}$, and for $i=1,\ldots,k$, $n=1,\ldots,N$,
\begin{align*}
U_{i|i-1}^{n} & ={M}_{i}U_{i-1|i-1}^{n}+\mu_{i}+V_{i}^{n},\quad V_{i}^{n}\sim N\left( 0,{Q}_{i}\right) ,\\
U_{0:i|i}^{n} & =U_{0:i|i-1}^{n}+{P}_{0:i|i-1}\tilde{H}_{i}^{\mathrm{T}}\left( \tilde{H}_{i}{P}_{0:i|i-1}\tilde{H}_{i}^{\mathrm{T}}+{R}_{i}\right) ^{-1}\left( y_{i}-W_{i}^{n}-H_{i}U_{i|i-1}^{n}\right) ,\\
& W_{i}^{n}\sim N\left( 0,{R}_{i}\right) ,
\end{align*}
where ${P}_{0:i|i-1}$ is the covariance of $U_{0:i|i-1}^{1}$. Since the Kalman smoother is nothing else than the Kalman filter for the composite state $X_{0:k}$, the same induction step as in Theorem \ref{thm:EnKF_convergence} applies for each $i$, and we have the following.
\begin{lemma}
\label{lem:EnKS-Uk-exact} The random elements $U_{0:k|k}^{1},\ldots,U_{0:k|k}^{N}$ are i.i.d. and the distribution of each $U_{0:k|k}^{n}$ is the same as the smoothing distribution. In particular, $E(U_{0:k|k})=X_{0:k|k}$, where $X_{0:k|k}$ is the least squares solution (\ref{eq:eq_ks_ls}).
\end{lemma}
\begin{theorem}
\label{thm:EnKS_convergence} For each time step $i=0,\ldots,k$, the random matrix
\[
\left[
\begin{array}[c]{c}
X_{0:i|i}^{1},\ldots,X_{0:i|i}^{N}\\
U_{0:i|i}^{1},\ldots,U_{0:i|i}^{N}
\end{array}
\right]
\]
has exchangeable columns, and $X_{0:i|i}^{1}\rightarrow U_{0:i|i}^{1}$, $\bar{X}_{0:i|i}^{N}\rightarrow E(U_{0:i|i}^{1})$, and ${P}_{0:i|i}^{N}\rightarrow{P}_{0:i|i}$ in $L^{p}$ as $N\rightarrow\infty$, for all $1\leq p<\infty$.
\end{theorem}

\section{Variational data assimilation and 4DVAR}
\label{sec:vda-4DVAR}

\subsection{4DVAR as an optimization problem}
\label{sec:4DVAR}We estimate the composite state $X_{0:k}$ of the stochastic system (\ref{eq:filter_background_eq})--(\ref{eq:filter_observation_eq}), conditioned on the observations $y_{1},\ldots,y_{k}$, by the maximum posterior probability density,
\[
\mathbb{P}\left( x_{0:k}|y_{1:k}\right) \propto e^{-\frac{1}{2}\left( \left\vert x_{0}-x_{\mathrm{b}}\right\vert _{B^{-1}}^{2}+\sum_{i=1}^{k}\left\vert x_{i}-\mathcal{M}_{i}(x_{i-1})-\mu_{i}\right\vert _{Q_{i}^{-1}}^{2}+\sum_{i=1}^{k}\left\vert y_{i}-\mathcal{H}_{i}(x_{i})\right\vert _{R_{i}^{-1}}^{2}\right) }\rightarrow\max_{x_{0:k}},
\]
which is the same as solving the nonlinear least squares problem for the composite state $x_{0:k}$,
\begin{equation}
\left\vert x_{0}-x_{\mathrm{b}}\right\vert _{B^{-1}}^{2}+\sum_{i=1}^{k}\left\vert x_{i}-\mathcal{M}_{i}(x_{i-1})-\mu_{i}\right\vert _{Q_{i}^{-1}}^{2}+\sum_{i=1}^{k}\left\vert y_{i}-\mathcal{H}_{i}(x_{i})\right\vert _{R_{i}^{-1}}^{2}\rightarrow\min_{x_{0:k}}. \label{eq:nonlinear-ls}
\end{equation}
Numerical solution of the nonlinear least squares problem (\ref{eq:nonlinear-ls}) is the essence of weak-constraint 4-dimensional variational data assimilation (4DVAR) \cite{Fisher-2005-EKS,Tremolet-2007-ME4}.

\subsection{The Levenberg-Marquardt method and incremental 4DVAR}
\label{sec:incremental-4DVAR}Consider an approximate solution $x_{0:k}^{j-1}$ of the nonlinear least squares problem (\ref{eq:nonlinear-ls}). We seek a better approximation $x_{0:k}^{j}$.
Linearizing the model and the observation operators at $x_{0:k}^{j-1}$ by their tangent operators, and adding a penalty term to control the size of the increment $x_{0:k}^{j}-x_{0:k}^{j-1}$, yields the linear least squares problem for $x_{0:k}^{j}$ in the Levenberg-Marquardt (LM) method \cite{Levenberg-1944-MSC,Marquardt-1963-ALE} for the solution of the nonlinear least squares problem (\ref{eq:nonlinear-ls}).
\begin{algorithm}
[LM method]\label{alg:LM_D} Given $x_{0:k}^{0}$ and $\gamma\geq0$, compute the iterations $x_{0:k}^{j}$ for $j=1,2,\ldots$, as the solutions of the least squares problem linearized at $x_{0:k}^{j-1}$,
\begin{align}
x_{0:k}^{j} & =\mathop{\mathrm{argmin}}_{x_{0:k}}\left\vert x_{0}-x_{\mathrm{b}}\right\vert _{B^{-1}}^{2}+\sum_{i=1}^{k}\left\vert x_{i}-\mathcal{M}_{i}\left( x_{i-1}^{j-1}\right) -\mathcal{M}_{i}^{\prime}\left( x_{i-1}^{j-1}\right) \left( x_{i-1}-x_{i-1}^{j-1}\right) -\mu_{i}\right\vert _{Q_{i}^{-1}}^{2}\nonumber\\
& +\sum_{i=1}^{k}\left\vert y_{i}-\mathcal{H}_{i}\left( x_{i}^{j-1}\right) -\mathcal{H}_{i}^{\prime}\left( x_{i}^{j-1}\right) \left( x_{i}-x_{i}^{j-1}\right) \right\vert _{R_{i}^{-1}}^{2}+\sum_{i=1}^{k}\gamma\left\vert x_{i}-x_{i}^{j-1}\right\vert ^{2}. \label{eq:tangent-ls}
\end{align}
\end{algorithm}
For $\gamma=0$, (\ref{eq:tangent-ls}) becomes the Gauss-Newton method, which can converge at a rate close to quadratic, but convergence is not guaranteed even locally. Under suitable technical assumptions, the LM method is guaranteed to converge globally if the regularization parameter $\gamma$ is large enough \cite{Osborne-1976-NLL, Gill-1978-ASN}, and a suitable sequence of penalty parameters $\gamma_{j}\geq0$, changing from step to step, can be found adaptively. The LM method is a precursor of the trust-region method in the sense that it seeks to determine when the faster Gauss-Newton method ($\gamma=0$) is applicable, and when it should instead be blended with the slower but safer gradient descent method ($\gamma>0$).
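The interplay between the Gauss-Newton step and the damping $\gamma$ can be seen on a toy problem. The following sketch is our own illustration, not from the text: the one-parameter exponential fit, the fixed value of $\gamma$, and all names are assumptions. Each step solves the damped normal equations, and setting `gamma=0` recovers the Gauss-Newton step.

```python
import numpy as np

def lm_step(x, residual, jac, gamma):
    """One Levenberg-Marquardt step for min |r(x)|^2:
    solve (J^T J + gamma I) dx = -J^T r.  gamma = 0 is Gauss-Newton."""
    r, J = residual(x), jac(x)
    dx = np.linalg.solve(J.T @ J + gamma * np.eye(x.size), -J.T @ r)
    return x + dx

# toy problem (hypothetical): fit exp(-a t) to noiseless data with a = 2
t = np.linspace(0.0, 1.0, 20)
y = np.exp(-2.0 * t)
residual = lambda x: np.exp(-x[0] * t) - y
jac = lambda x: (-t * np.exp(-x[0] * t)).reshape(-1, 1)  # tangent operator

x = np.array([0.5])                  # initial approximation x^0
for _ in range(50):                  # constant penalty parameter, as here
    x = lm_step(x, residual, jac, gamma=1e-2)
assert abs(x[0] - 2.0) < 1e-6
```

With a larger $\gamma$ each step shrinks toward a scaled gradient descent step, trading speed for robustness, which is the blending described above.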
In this paper, we consider only the case of a constant penalty parameter $\gamma>0$. The Gauss-Newton method for the solution of nonlinear least squares is known in atmospheric sciences as incremental 4DVAR \cite{Courtier-1994-SOI}. The use of Levenberg-Marquardt iterations was proposed by \cite{Tshimanga-2008-LPA}.

\subsection{LM-EnKS with tangent operators}
\label{sec:EnKS-4DVAR-WD}From (\ref{eq:eq_ks_ls}), it follows that the linear least squares problem (\ref{eq:tangent-ls}) can be interpreted as finding the maximum posterior probability density for a linear stochastic system with all Gaussian probability distributions. The penalty terms $\gamma |x_{i}-x_{i}^{j-1}|^{2}$ are implemented as additional independent observations \cite{Johns-2008-TEK} of the form
\[
x_{i}^{j-1}=x_{i}+E_{i},\quad E_{i}\sim N\left( 0,\frac{1}{\gamma}I_{m}\right) ,\quad i=1,\ldots,k.
\]
\begin{lemma}
\label{lem:LM-KS} The LM iterate $x_{0:k}^{j}$, defined by (\ref{eq:tangent-ls}), equals the mean
\[
x_{0:k}^{j}=E\left( X_{0:k|k}^{j}\right)
\]
of the smoothing distribution of the stochastic system
\begin{align}
X_{0}^{j} & \sim N\left( x_{\mathrm{b}},B\right) ,\label{eq:increment-background}\\
X_{i}^{j} & =\mathcal{M}_{i}^{\prime}\left( x_{i-1}^{j-1}\right) \left( X_{i-1}^{j}-x_{i-1}^{j-1}\right) +\mathcal{M}_{i}\left( x_{i-1}^{j-1}\right) +\mu_{i}+V_{i}^{j},\quad V_{i}^{j}\sim N\left( 0,Q_{i}\right) ,\quad i=1,\ldots,k,\label{eq:time_evol_j}\\
\tilde{y}_{i} & =\mathcal{\tilde{H}}_{i}\left( x_{i}^{j-1}\right) +\mathcal{\tilde{H}}_{i}^{\prime}\left( x_{i}^{j-1}\right) \left( X_{i}^{j}-x_{i}^{j-1}\right) +\tilde{W}_{i}^{j},\quad\tilde{W}_{i}^{j}\sim N\left( 0,\tilde{R}_{i}\right) ,\quad i=1,\ldots,k, \label{eq:joint_obs_gamma}
\end{align}
where
\begin{equation}
\tilde{y}_{i}=\left[
\begin{array}[c]{c}
y_{i}\\
x_{i}^{j-1}
\end{array}
\right] ,\quad\mathcal{\tilde{H}}_{i}=\left[
\begin{array}[c]{c}
\mathcal{H}_{i}\\
I_{m}
\end{array}
\right] ,\quad\tilde{R}_{i}=\left[
\begin{array}[c]{cc}
R_{i} & 0\\
0 & \frac{1}{\gamma}I_{m}
\end{array}
\right] , \label{eq:joint_obs_gamma_def}
\end{equation}
or, equivalently,
\begin{align}
X_{0}^{j} & \sim N\left( x_{\mathrm{b}},B\right) ,\label{eq:increment-background-equiv}\\
X_{i}^{j} & =M_{i}^{j}X_{i-1}^{j}+\tilde{\mu}_{i}^{j}+V_{i}^{j},\quad V_{i}^{j}\sim N\left( 0,Q_{i}\right) ,\quad i=1,\ldots,k,\label{eq:time_evol_j-equiv}\\
\tilde{y}_{i}^{j} & =\tilde{H}_{i}^{j}X_{i}^{j}+\tilde{W}_{i}^{j},\quad\tilde{W}_{i}^{j}\sim N\left( 0,\tilde{R}_{i}\right) ,\quad i=1,\ldots,k, \label{eq:joint_obs_gamma-equiv}
\end{align}
where
\begin{align*}
M_{i}^{j} & =\mathcal{M}_{i}^{\prime}\left( x_{i-1}^{j-1}\right) ,\quad\tilde{\mu}_{i}^{j}=\mathcal{M}_{i}\left( x_{i-1}^{j-1}\right) +\mu_{i}-\mathcal{M}_{i}^{\prime}\left( x_{i-1}^{j-1}\right) x_{i-1}^{j-1},\\
\tilde{H}_{i}^{j} & =\mathcal{\tilde{H}}_{i}^{\prime}\left( x_{i}^{j-1}\right) ,\quad\tilde{y}_{i}^{j}=\tilde{y}_{i}+\mathcal{\tilde{H}}_{i}^{\prime}\left( x_{i}^{j-1}\right) x_{i}^{j-1}-\mathcal{\tilde{H}}_{i}\left( x_{i}^{j-1}\right) .
\end{align*}
\end{lemma}
\begin{proof}
The system (\ref{eq:increment-background})--(\ref{eq:joint_obs_gamma}) has the same form as the original problem (\ref{eq:filter_background_eq})--(\ref{eq:filter_observation_eq}) and all distributions are Gaussian, hence Lemma \ref{lem:KS} applies.
\end{proof}
\begin{corollary}
\label{cor:LM-KS} The LM iterate $x^{j}$ is the mean found from the Kalman smoother (\ref{eq:smoother_model})--(\ref{eq:smoother_obs_op}), applied to the linear stochastic system (\ref{eq:increment-background})--(\ref{eq:joint_obs_gamma}).
\end{corollary}
However, since the dimension of the state is generally large, we apply the EnKS (\ref{eq:EnKS-background})--(\ref{eq:sample_mean_EnKS}) to solve (\ref{eq:increment-background})--(\ref{eq:joint_obs_gamma}) approximately.
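The device of treating the penalty as an extra observation can be checked numerically: with $\tilde{y}_{i}$, $\mathcal{\tilde{H}}_{i}$, and $\tilde{R}_{i}$ built as above, the augmented weighted residual reproduces the original observation term plus the penalty term. A minimal sketch in the linear case, with randomly chosen illustrative values (all names and dimensions are ours):

```python
import numpy as np

# gamma |x - x_prev|^2 becomes an ordinary observation term once x_prev is
# appended as an observation with error covariance (1/gamma) I.
gamma, m, d = 0.5, 3, 2
rng = np.random.default_rng(1)
H = rng.standard_normal((d, m))          # linearized observation operator
R = np.eye(d)
y = rng.standard_normal(d)
x_prev = rng.standard_normal(m)          # previous LM iterate
x = rng.standard_normal(m)               # candidate state

# augmented observation, operator, and error covariance
y_t = np.concatenate([y, x_prev])
H_t = np.vstack([H, np.eye(m)])
R_t = np.block([[R, np.zeros((d, m))],
                [np.zeros((m, d)), np.eye(m) / gamma]])

r = y_t - H_t @ x
aug = r @ np.linalg.solve(R_t, r)        # |y_t - H_t x|^2 in R_t^{-1} norm
direct = ((y - H @ x) @ np.linalg.solve(R, y - H @ x)
          + gamma * np.sum((x - x_prev) ** 2))
assert np.isclose(aug, direct)
```

Because $\tilde{R}_{i}$ is block diagonal, the augmented quadratic form splits exactly into the observation misfit and $\gamma$ times the squared increment, so no new machinery is needed beyond the EnKS update.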
In each LM iteration $j=1,2,\ldots$, the linearized least squares solution $x^{j}$ is approximated by the sample mean $\bar{X}_{0:k|k}^{j,N_{j}}$ from the EnKS, and the least squares problem is linearized at the previous iterate $\bar{X}_{0:k|k}^{j-1,N_{j-1}}$ rather than at $x^{j-1}$. For $j=0$, this notation is formal, for the sake of consistency only; there is no ensemble for $j=0$.
\begin{algorithm}
\label{alg:LM-EnKS-WD} Given an initial approximation $x_{0:k}^{0}$ and $\gamma\geq0$, initialize
\[
\tilde{x}^{j}=\bar{X}_{0:k|k}^{j,N_{j}}=x_{0:k}^{0}\quad\text{for }j=0.
\]
LM loop: For $j=1,2,\ldots$, choose an ensemble size $N_{j}$.

EnKS loop: For $i=0$, the ensemble $\left[ X_{0|0}^{j,n}\right] _{n=1}^{N_{j}}$ consists of i.i.d. Gaussian random variables
\[
X_{0|0}^{j,n}\sim N\left( \tilde{x}_{0}^{0},B\right) .
\]
For $i=1,\ldots,k$, advance the model in time (the forecast step) by
\begin{align}
X_{i|i-1}^{j,n} & =\mathcal{M}_{i}^{\prime}\left( \bar{X}_{i-1|k}^{j-1,N_{j-1}}\right) \left( X_{i-1|i-1}^{j,n}-\bar{X}_{i-1|k}^{j-1,N_{j-1}}\right) +\mathcal{M}_{i}\left( \bar{X}_{i-1|k}^{j-1,N_{j-1}}\right) +\mu_{i}+V_{i}^{j,n},\label{eq:LM-EnKS-WD-adv}\\
V_{i}^{j,n} & \sim N\left( 0,{Q}_{i}\right) ,\quad n=1,\ldots,N_{j}.\nonumber
\end{align}
Incorporate the observations at time $i$ into the ensemble of composite states $\left[ X_{0:i|i-1}^{j,n}\right] _{n=1}^{N_{j}}$ in the same way as in the EnKF analysis step,
\begin{align}
X_{0:i|i}^{j,n}= & X_{0:i|i-1}^{j,n}+{P}_{0:i|i-1}^{j,N_{j}}\tilde{H}_{i}^{j\mathrm{T}}\left( \tilde{H}_{i}^{j}{P}_{0:i|i-1}^{j,N_{j}}\tilde{H}_{i}^{j\mathrm{T}}+\tilde{R}_{i}\right) ^{-1}\label{eq:LM-EnKS-WD-ana}\\
& \cdot\left( \tilde{y}_{i}-\tilde{W}_{i}^{j,n}-\mathcal{\tilde{H}}_{i}\left( \bar{X}_{i|k}^{j-1,N_{j-1}}\right) -\mathcal{\tilde{H}}_{i}^{\prime}\left( \bar{X}_{i|k}^{j-1,N_{j-1}}\right) \left( X_{i|i-1}^{j,n}-\bar{X}_{i|k}^{j-1,N_{j-1}}\right) \right) ,\nonumber\\
\tilde{W}_{i}^{j,n}\sim & N\left( 0,\tilde{R}_{i}\right)
\nonumber
\end{align}
where ${P}_{0:i|i-1}^{j,N_{j}}$ is the sample covariance from the ensemble $\left[ X_{0:i|i-1}^{j,n}\right] _{n=1}^{N_{j}}$. Similarly as in (\ref{eq:PHt})--(\ref{eq:h_n}), only the following matrix-vector products are needed:
\begin{align*}
{P}_{0:i|i-1}^{j,N_{j}}\tilde{H}_{i}^{j\mathrm{T}} & =\frac{1}{N_{j}-1}\sum_{n=1}^{N_{j}}\left( X_{0:i|i-1}^{j,n}-\bar{X}_{0:i|i-1}^{j,N_{j}}\right) \left( X_{i|i-1}^{j,n}-\bar{X}_{i|i-1}^{j,N_{j}}\right) ^{\mathrm{T}}\tilde{H}_{i}^{j\mathrm{T}}\\
& =\frac{1}{N_{j}-1}\sum_{n=1}^{N_{j}}\left( X_{0:i|i-1}^{j,n}-\bar{X}_{0:i|i-1}^{j,N_{j}}\right) h_{i}^{j,n\mathrm{T}},\\
\tilde{H}_{i}^{j}{P}_{0:i|i-1}^{j,N_{j}}\tilde{H}_{i}^{j\mathrm{T}} & =\frac{1}{N_{j}-1}\sum_{n=1}^{N_{j}}\tilde{H}_{i}^{j}\left( X_{i|i-1}^{j,n}-\bar{X}_{i|i-1}^{j,N_{j}}\right) \left( X_{i|i-1}^{j,n}-\bar{X}_{i|i-1}^{j,N_{j}}\right) ^{\mathrm{T}}\tilde{H}_{i}^{j\mathrm{T}}\\
& =\frac{1}{N_{j}-1}\sum_{n=1}^{N_{j}}h_{i}^{j,n}h_{i}^{j,n\mathrm{T}},
\end{align*}
where
\begin{equation}
h_{i}^{j,n}=\tilde{H}_{i}^{j}\left( X_{i|i-1}^{j,n}-\bar{X}_{i|i-1}^{j,N_{j}}\right) =\mathcal{\tilde{H}}_{i}^{\prime}\left( \bar{X}_{i|k}^{j-1,N_{j-1}}\right) \left( X_{i|i-1}^{j,n}-\bar{X}_{i|i-1}^{j,N_{j}}\right) \label{eq:LM-EnKS-WD-h}
\end{equation}
and
\[
\bar{X}_{i|i-1}^{j,N_{j}}=\frac{1}{N_{j}}\sum_{n=1}^{N_{j}}X_{i|i-1}^{j,n},\quad\bar{X}_{0:i|i-1}^{j,N_{j}}=\frac{1}{N_{j}}\sum_{n=1}^{N_{j}}X_{0:i|i-1}^{j,n}.
\]
The next iterate is $\tilde{x}^{j}=\bar{X}_{0:k|k}^{j,N_{j}}$.
\end{algorithm}

In the rest of this section, we study the asymptotic behavior of Algorithm \ref{alg:LM-EnKS-WD} when the ensemble sizes $N_{1},\ldots,N_{j}\rightarrow\infty$. We start with an a priori $L^{p}$ bound on the ensemble members, independent of the ensemble size.
\begin{assumption}
\label{ass:M_H} The model and observation operators $\mathcal{M}_{i}$ and $\mathcal{H}_{i}$ are continuously differentiable with at most polynomial growth at infinity, and their Jacobians have at most polynomial growth at infinity, i.e., there exist $\kappa>0$ and $s\geq0$ such that $\left\vert \mathcal{M}_{i}(x)\right\vert \leq\kappa(1+\left\vert x\right\vert ^{s})$, $\left\vert \mathcal{M}_{i}^{\prime}(x)\right\vert \leq\kappa(1+\left\vert x\right\vert ^{s})$, $\left\vert \mathcal{H}_{i}(x)\right\vert \leq\kappa(1+\left\vert x\right\vert ^{s})$, and $\left\vert \mathcal{H}_{i}^{\prime}(x)\right\vert \leq\kappa(1+\left\vert x\right\vert ^{s})$ for all $i$ and all $x$.
\end{assumption}
Since we are interested in the convergence with the ensemble size, we need a notation to distinguish between $X_{0:k|k}^{j,n}$ coming from ensembles of different sizes $N_{j}$. Thus, when we need to make such a distinction, we denote by $X_{0:k|k}^{j,n,N_{j}}$ the $n$-th ensemble member from the ensemble $\left[ X_{0:k|k}^{j,1},\ldots,X_{0:k|k}^{j,N_{j}}\right] $ of size $N_{j}$ in Algorithm \ref{alg:LM-EnKS-WD}, and similarly for other subscripts and superscripts.
\begin{lemma}
\label{lem:boundness_X} For any $1\leq p<\infty$, any $i=0,\ldots,k$, and any $j=1,2,\ldots$, there exists a constant $C\left( i,j,p\right) $ such that in Algorithm \ref{alg:LM-EnKS-WD},
\begin{equation}
\left\Vert {X}_{0:i|i}^{j,n,N_{j}}\right\Vert _{p}\leq C\left( i,j,p\right) \label{eq:boundness_X}
\end{equation}
for all $n=1,\ldots,N_{j}$ and all $N_{j}$.
\end{lemma}
\begin{proof}
Let $p\in\lbrack1,\infty)$. We will prove (\ref{eq:boundness_X}) by induction on the iteration number $j$. For $j=1$, $\tilde{x}^{j-1}$ is constant; otherwise, for $j\geq2$, $\left\Vert \tilde{x}^{j-1}\right\Vert _{p}$ is bounded independently of the ensemble sizes by the induction assumption, because
\[
\tilde{x}^{j-1}=\bar{X}_{0:k|k}^{j-1,N_{j-1}}=\frac{1}{N_{j-1}}\sum_{n=1}^{N_{j-1}}X_{0:k|k}^{j-1,n,N_{j-1}}.
\]
For a fixed $j$, we now proceed by induction on the time step $i$. For $i=0$, $X_{0|0}^{j,n}\sim N\left( x_{0}^{0},B\right) $, thus $\left\Vert X_{0|0}^{j,n}\right\Vert _{p}$ does not depend on $n$ or $N_{j}$. For $i=1,\ldots,k$, from (\ref{eq:LM-EnKS-WD-adv}), we have
\[
\left\Vert {X}_{i|i-1}^{j,n}\right\Vert _{p}\leq\left\Vert \mathcal{M}_{i}^{\prime}(\tilde{x}_{i-1}^{j-1})\right\Vert _{2p}\left( \left\Vert X_{i-1|i-1}^{j,n}\right\Vert _{2p}+\left\Vert \tilde{x}_{i-1}^{j-1}\right\Vert _{2p}\right) +\left\Vert \mathcal{M}_{i}(\tilde{x}_{i-1}^{j-1})\right\Vert _{p}+\left\vert \mu_{i}\right\vert +\left\Vert V_{i}^{j,n}\right\Vert _{p}.
\]
From Assumption \ref{ass:M_H} and the fact that $V_{i}^{j,n}$ is normally distributed, there exists a constant $C_{p}$ such that
\begin{align*}
\left\Vert {X}_{i|i-1}^{j,n}\right\Vert _{p} & \leq\kappa C_{p}\left( 1+\left\Vert \tilde{x}_{i-1}^{j-1}\right\Vert _{2ps}^{s}\right) \left( \left\Vert X_{i-1|i-1}^{j,n}\right\Vert _{2p}+\left\Vert \tilde{x}_{i-1}^{j-1}\right\Vert _{2p}\right) \\
& +\kappa C_{p}\left( 1+\left\Vert \tilde{x}_{i-1}^{j-1}\right\Vert _{ps}^{s}\right) +\left\vert \mu_{i}\right\vert +C_{p}.
\end{align*}
Bounding $\tilde{x}_{i-1}^{j-1}$ by the induction assumption on $j$ and $X_{i-1|i-1}^{j,n}$ by the induction assumption on $i$, we have that $\left\Vert {X}_{0:i|i-1}^{j,n,N_{j}}\right\Vert _{p}$ is bounded independently of $n$ and $N_{j}$.
From equation (\ref{eq:LM-EnKS-WD-ana}) and the fact that $\tilde{H}_{i}^{j}=\mathcal{\tilde{H}}_{i}^{\prime}\left( \tilde{x}_{i}^{j-1}\right) =\left[ 0,\ldots,\mathcal{H}_{i}^{\prime}\left( \tilde{x}_{i}^{j-1}\right) \right] $, we conclude that
\begin{align*}
\left\Vert {X}_{0:i|i}^{j,n}\right\Vert _{p}\leq & \left\Vert {X}_{0:i|i-1}^{j,n}\right\Vert _{p}+\left\Vert {P}_{0:i|i-1}^{j,N_{j}}\tilde{\mathcal{H}}_{i}^{\prime\mathrm{T}}\left( \tilde{x}_{i}^{j-1}\right) \left( \tilde{\mathcal{H}}_{i}^{\prime}\left( \tilde{x}_{i}^{j-1}\right) {P}_{0:i|i-1}^{j,N_{j}}\tilde{\mathcal{H}}_{i}^{\prime\mathrm{T}}\left( \tilde{x}_{i}^{j-1}\right) +\tilde{R}_{i}\right) ^{-1}\right\Vert _{2p}\\
& \cdot\left\Vert \tilde{y}_{i}-\tilde{W}_{i}^{j,n}-\mathcal{H}_{i}\left( \tilde{x}_{i}^{j-1}\right) -\tilde{\mathcal{H}}_{i}^{\prime}\left( \tilde{x}_{i}^{j-1}\right) \left( X_{i|i-1}^{j,n}-\tilde{x}_{i}^{j-1}\right) \right\Vert _{2p}\\
\leq & \left\Vert {X}_{0:i|i-1}^{j,n}\right\Vert _{p}+\left\Vert {P}_{0:i|i-1}^{j,N_{j}}\right\Vert _{8p}\left\Vert \tilde{\mathcal{H}}_{i}^{\prime\mathrm{T}}\left( \tilde{x}_{i}^{j-1}\right) \right\Vert _{8p}\\
& \cdot\left\Vert \left( \tilde{\mathcal{H}}_{i}^{\prime}\left( \tilde{x}_{i}^{j-1}\right) {P}_{0:i|i-1}^{j,N_{j}}\tilde{\mathcal{H}}_{i}^{\prime\mathrm{T}}\left( \tilde{x}_{i}^{j-1}\right) +\tilde{R}_{i}\right) ^{-1}\right\Vert _{4p}\\
& \cdot\left( \left\vert \tilde{y}_{i}\right\vert +\left\Vert \tilde{W}_{i}^{j,n}\right\Vert _{2p}+\left\Vert \mathcal{H}_{i}\left( \tilde{x}_{i}^{j-1}\right) \right\Vert _{2p}+\left\Vert \tilde{\mathcal{H}}_{i}^{\prime}\left( \tilde{x}_{i}^{j-1}\right) \right\Vert _{4p}\left( \left\Vert X_{i|i-1}^{j,n}\right\Vert _{4p}+\left\Vert \tilde{x}_{i}^{j-1}\right\Vert _{4p}\right) \right) .
\end{align*}
Since $\tilde{R}_{i}$ is positive definite and $P_{0:i|i-1}^{j,N_{j}}$ is positive semidefinite, we have
\begin{equation}
\left\Vert \left( \tilde{\mathcal{H}}_{i}^{\prime}\left( \tilde{x}_{i}^{j-1}\right) {P}_{0:i|i-1}^{j,N_{j}}\tilde{\mathcal{H}}_{i}^{\prime\mathrm{T}}\left( \tilde{x}_{i}^{j-1}\right) +\tilde{R}_{i}\right) ^{-1}\right\Vert _{4p}\leq\left\vert \tilde{R}_{i}^{-1}\right\vert . \label{eq:kg-bound}
\end{equation}
From \cite[Lemma 31]{Mandel-2011-CEK} we have
\begin{equation}
\left\Vert P_{0:i|i-1}^{j,N_{j}}\right\Vert _{8p}\leq2\left\Vert X_{0:i|i-1}^{j,1}\right\Vert _{16p}^{2}. \label{eq:cov-bound}
\end{equation}
From the inequalities (\ref{eq:kg-bound}) and (\ref{eq:cov-bound}), Assumption \ref{ass:M_H}, and the fact that $\tilde{W}_{i}^{j,n}$ is normally distributed, there exists a constant $\tilde{C}_{p}$ such that
\begin{align*}
\left\Vert {X}_{0:i|i}^{j,n}\right\Vert _{p}\leq & \left\Vert {X}_{0:i|i-1}^{j,n}\right\Vert _{p}+2\left\Vert {X}_{0:i|i-1}^{j,1}\right\Vert _{16p}^{2}\kappa\tilde{C}_{p}\left( 1+\left\Vert \tilde{x}_{i}^{j-1}\right\Vert _{8ps}^{s}\right) \\
& \cdot\left\vert \tilde{R}_{i}^{-1}\right\vert \left( \left\vert \tilde{y}_{i}\right\vert +\tilde{C}_{p}+\kappa\tilde{C}_{p}\left( 1+\left\Vert \tilde{x}_{i}^{j-1}\right\Vert _{2ps}^{s}\right) \right. \\
& \left. +\kappa\tilde{C}_{p}\left( 1+\left\Vert \tilde{x}_{i}^{j-1}\right\Vert _{4ps}^{s}\right) \left( \left\Vert X_{i|i-1}^{j,1}\right\Vert _{4p}+\left\Vert \tilde{x}_{i}^{j-1}\right\Vert _{4p}\right) \right) .
\end{align*}
Bounding $\tilde{x}_{i}^{j-1}$ by the induction assumption on $j$ and $X_{i|i-1}^{j,n}$ by the bound on the forecast ensemble obtained above, we obtain that $\left\Vert {X}_{0:i|i}^{j,n,N_{j}}\right\Vert _{p}$ is bounded independently of $n$ and $N_{j}$.
\end{proof}

At each iteration $j$ of Algorithm \ref{alg:LM-EnKS-WD}, we define for theoretical purposes a reference ensemble $\left[ U_{0:k|k}^{1},\ldots,U_{0:k|k}^{N_{j}}\right] $, similarly as in Section \ref{sec:EnKS_convergence}, with the derivatives taken at the mean rather than at the sample mean as in Algorithm \ref{alg:LM-EnKS-WD}: For $j=0$, all $U_{i|i}^{j,n}=X_{i|i}^{j,n}=x^{0}$ are constants. For $j=1,2,\ldots$, $U_{0|0}^{j,n}=X_{0|0}^{j,n}$ for $i=0$, and for $i=1,\ldots,k$,
\begin{align}
U_{i|i-1}^{j,n} & =\mathcal{M}_{i}^{\prime}\left( E\left( U_{i-1|k}^{j-1,1}\right) \right) \left( U_{i-1|i-1}^{j,n}-x_{i-1}^{j-1}\right) +\mathcal{M}_{i}\left( E\left( U_{i-1|k}^{j-1,1}\right) \right) +\mu_{i}+V_{i}^{j,n},\label{eq:def-Uk-adv-KS}\\
U_{0:i|i}^{j,n} & ={U}_{0:i|i-1}^{j,n}+Q_{0:i}^{j}\tilde{\mathcal{H}}_{i}^{\prime}\left( E\left( U_{i|k}^{j-1,1}\right) \right) ^{\mathrm{T}}\left( \tilde{\mathcal{H}}_{i}^{\prime}\left( E\left( U_{i|k}^{j-1,1}\right) \right) Q_{i}^{j}\tilde{\mathcal{H}}_{i}^{\prime}\left( E\left( U_{i|k}^{j-1,1}\right) \right) ^{\mathrm{T}}+\tilde{R}_{i}\right) ^{-1}\nonumber\\
& \cdot\left( \hat{y}_{i}+\tilde{\mathcal{H}}_{i}^{\prime}\left( E\left( {U}_{i|k}^{j-1,1}\right) \right) \left( E\left( {U}_{i|k}^{j-1,1}\right) -{U}_{i|i-1}^{j,n}\right) -\tilde{\mathcal{H}}_{i}\left( E\left( {U}_{i|k}^{j-1,1}\right) \right) -\tilde{W}_{i}^{j,n}\right) , \label{eq:def-Uk-ana-KS}
\end{align}
where $Q_{0:i}^{j}=\operatorname{Cov}\left( U_{0:i|i-1}^{j,1}\right) $ and
\[
\hat{y}_{i}=\left[
\begin{array}[c]{c}
y_{i}\\
E\left( U_{i|k}^{j-1,1}\right)
\end{array}
\right] .
\]
We now show that the mean of the reference ensemble members is the solution of the linearized least squares problem (\ref{eq:tangent-ls}), and thus the next LM iterate:
\begin{lemma}
\label{lem:U_eq_min_r} $E(U_{0:k|k}^{j,1})=x^{j}$, where $x^{j}$ is the $j$-th iterate generated by Algorithm \ref{alg:LM_D}.
\end{lemma}
\begin{proof}
The proof proceeds by induction on the iteration number $j$.
For $j=0$ we have $U^{0,n}=x^{0}$ for all $n$ by definition, thus $E(U^{0,1})=x^{0}$. Let $j\geq1$. By the induction assumption, the stochastic system (\ref{eq:def-Uk-adv-KS})--(\ref{eq:def-Uk-ana-KS}) is linearized about the previous LM iterate $x^{j-1}$. By Lemma \ref{lem:EnKS-Uk-exact}, $\left[ U_{0:k|k}^{j,n}\right] _{n=1}^{N_{j}}$ are i.i.d. with the smoothing distribution, whose mean is the solution of the linearized least squares problem (\ref{eq:tangent-ls}).
\end{proof}
\begin{lemma}
For all iterations $j$ and all times $i=0,\ldots,k$, the columns of the random matrix
\begin{equation}
\left[ {X}_{0:i|i}^{j};U_{0:i|i}^{j}\right] =\left[
\begin{array}[c]{c}
{X}_{0:i|i}^{j,1},\ldots,{X}_{0:i|i}^{j,N_{j}}\\
U_{0:i|i}^{j,1},\ldots,U_{0:i|i}^{j,N_{j}}
\end{array}
\right] \label{eq:exchange_j}
\end{equation}
are exchangeable.
\end{lemma}
\begin{proof}
We recall that for all $j\geq1$, $\tilde{x}^{j}=\bar{X}_{0:k|k}^{j,N_{j}}$. In this proof we omit the subscripts of $N_{j}$ and $N_{j-1}$. We use induction on the LM iteration number $j$. For $j=0$ we have that for all $i=0,\ldots,k$, $X_{0:i|i}^{0,n}=x_{0:i}^{0}=U_{0:i|i}^{0,n}$, and $U_{0:i|i}^{0,n}$ are i.i.d., therefore $\left[ {X}_{0:i|i}^{0};U_{0:i|i}^{0}\right] $ is exchangeable. For $j\geq1$, we use induction on the time index $i$. For $i=0$, $\left[ U_{0|0}^{j,n}\right] _{n=1}^{N}$ are i.i.d., and $X_{0|0}^{j,n}=U_{0|0}^{j,n}$, therefore $\left[ X_{0|0}^{j};U_{0|0}^{j}\right] $ is exchangeable.
For $i=1,\ldots,k$, consider first the forecast step,
\begin{align*}
\left[
\begin{array}{c}
X_{i|i-1}^{j,n}\\
U_{i|i-1}^{j,n}
\end{array}
\right] & =\left[
\begin{array}{ll}
\mathcal{M}_{i}^{\prime}\left( \bar{X}_{i-1|k}^{j-1,N}\right) & 0\\
0 & \mathcal{M}_{i}^{\prime}\left( E\left( U_{i-1|k}^{j-1,1}\right) \right)
\end{array}
\right] \left[
\begin{array}{c}
X_{i-1|i-1}^{j,n}\\
U_{i-1|i-1}^{j,n}
\end{array}
\right] \\
& \quad+\left[
\begin{array}{c}
\mathcal{M}_{i}\left( \bar{X}_{i-1|k}^{j-1,N}\right) -\mathcal{M}_{i}^{\prime}\left( \bar{X}_{i-1|k}^{j-1,N}\right) \bar{X}_{i-1|k}^{j-1,N}+\mu_{i}\\
\mathcal{M}_{i}\left( E\left( U_{i-1|k}^{j-1,1}\right) \right) -\mathcal{M}_{i}^{\prime}\left( E\left( U_{i-1|k}^{j-1,1}\right) \right) E\left( U_{i-1|k}^{j-1,1}\right) +\mu_{i}
\end{array}
\right] +\left[
\begin{array}{c}
V_{i}^{j,n}\\
V_{i}^{j,n}
\end{array}
\right] \\
& =F^{i}\left( \bar{X}_{i-1|k}^{j-1,N},\left[
\begin{array}{c}
X_{i-1|i-1}^{j,n}\\
U_{i-1|i-1}^{j,n}
\end{array}
\right] ,\left[
\begin{array}{c}
V_{i}^{j,n}\\
V_{i}^{j,n}
\end{array}
\right] \right) ,
\end{align*}
where $F^{i}$ is a measurable function. The ensemble sample mean $\bar{X}_{i-1|k}^{j-1,N}$ is invariant to a permutation of the ensemble members, and $V_{i}^{j}=\left[ V_{i}^{j,1},\ldots,V_{i}^{j,N}\right]$ is exchangeable because its members are i.i.d. From the induction assumption on $i$ we have that $\left[ X_{i-1|i-1}^{j};U_{i-1|i-1}^{j}\right]$ is exchangeable, and it is also independent from $\left[
\begin{array}{c}
V_{i}^{j}\\
V_{i}^{j}
\end{array}
\right]$, therefore, using Lemma~\ref{lem:exchangeability_F}, $\left[
\begin{array}{c}
X_{i|i-1}^{j}\\
U_{i|i-1}^{j}
\end{array}
\right]$ is exchangeable.
The analysis step also preserves exchangeability,
\begin{align*}
\left[
\begin{array}{c}
X_{0:i|i}^{j,n}\\
U_{0:i|i}^{j,n}
\end{array}
\right] & =\left[
\begin{array}{c}
X_{0:i|i-1}^{j,n}\\
U_{0:i|i-1}^{j,n}
\end{array}
\right] +\left[
\begin{array}{ll}
K_{i}^{N} & 0\\
0 & K_{i}
\end{array}
\right] \\
& \quad\left( \left[
\begin{array}{c}
\tilde{y}_{i}+\tilde{\mathcal{H}}_{i}^{\prime}\left( \bar{X}_{i|k}^{j-1,N}\right) \bar{X}_{i|k}^{j-1,N}-\tilde{\mathcal{H}}_{i}\left( \bar{X}_{i|k}^{j-1,N}\right) -\tilde{W}_{i}^{j,n}\\
\hat{y}_{i}+\tilde{\mathcal{H}}_{i}^{\prime}\left( E\left( U_{i|k}^{j-1,1}\right) \right) E\left( U_{i|k}^{j-1,1}\right) -\tilde{\mathcal{H}}_{i}\left( E\left( U_{i|k}^{j-1,1}\right) \right) -\tilde{W}_{i}^{j,n}
\end{array}
\right] \right. \\
& \quad-\left. \left[
\begin{array}{ll}
\tilde{\mathcal{H}}_{i}^{\prime}\left( \bar{X}_{i|k}^{j-1,N}\right) & 0\\
0 & \tilde{\mathcal{H}}_{i}^{\prime}\left( E\left( U_{i|k}^{j-1,1}\right) \right)
\end{array}
\right] \left[
\begin{array}{c}
X_{i|i-1}^{j,n}\\
U_{i|i-1}^{j,n}
\end{array}
\right] \right) \\
& =F^{i}\left( \bar{X}_{i|k}^{j-1,N},P_{0:i|i-1}^{j,N},\left[
\begin{array}{c}
X_{i|i-1}^{j,n}\\
U_{i|i-1}^{j,n}
\end{array}
\right] ,\left[
\begin{array}{c}
\tilde{W}_{i}^{j,n}\\
\tilde{W}_{i}^{j,n}
\end{array}
\right] \right)
\end{align*}
because the Kalman gain matrices are functions of the ensemble members through $\bar{X}_{i|k}^{j-1,N}$ and $P_{0:i|i-1}^{j,N}$ only,
\begin{align*}
K_{i}^{N} & =\left[
\begin{array}{c}
P_{0|i-1}^{j,N}\tilde{\mathcal{H}}_{i}^{\prime}\left( \bar{X}_{i|k}^{j-1,N}\right) ^{\mathrm{T}}\\
\vdots\\
P_{i|i-1}^{j,N}\tilde{\mathcal{H}}_{i}^{\prime}\left( \bar{X}_{i|k}^{j-1,N}\right) ^{\mathrm{T}}
\end{array}
\right] \left( \tilde{\mathcal{H}}_{i}^{\prime}\left( \bar{X}_{i|k}^{j-1,N}\right) P_{i|i-1}^{j,N}\tilde{\mathcal{H}}_{i}^{\prime}\left( \bar{X}_{i|k}^{j-1,N}\right) ^{\mathrm{T}}+\tilde{R}_{i}\right) ^{-1},\\
K_{i} & =\left[
\begin{array}{c}
Q_{0}^{j}\tilde{\mathcal{H}}_{i}^{\prime}\left( E\left( U_{i|k}^{j-1,1}\right) \right) ^{\mathrm{T}}\\
\vdots\\
Q_{i}^{j}\tilde{\mathcal{H}}_{i}^{\prime}\left( E\left( U_{i|k}^{j-1,1}\right) \right) ^{\mathrm{T}}
\end{array}
\right] \left( \tilde{\mathcal{H}}_{i}^{\prime}\left( E\left( U_{i|k}^{j-1,1}\right) \right) Q_{i}^{j}\tilde{\mathcal{H}}_{i}^{\prime}\left( E\left( U_{i|k}^{j-1,1}\right) \right) ^{\mathrm{T}}+\tilde{R}_{i}\right) ^{-1},
\end{align*}
and the ensemble sample mean $\bar{X}_{i|k}^{j-1,N}$ and the ensemble sample covariance $P_{0:i|i-1}^{j,N}$ are invariant to a permutation of the ensemble members. Also, $\tilde{W}_{i}^{j}=[\tilde{W}_{i}^{j,1},\ldots,\tilde{W}_{i}^{j,N}]$ is exchangeable because its members are i.i.d., and it is independent from $\left[ X_{i|i-1}^{j};U_{i|i-1}^{j}\right]$, which is exchangeable by the induction assumption. Therefore, using again Lemma~\ref{lem:exchangeability_F}, the columns of $\left[ X_{0:i|i}^{j};U_{0:i|i}^{j}\right]$ are exchangeable. \end{proof} \begin{theorem} \label{thm:conv-N} For all $j$ and $n=1,\ldots,N_{j}$, $X_{0:k|k}^{j,n,N_{j}}\rightarrow U_{0:k|k}^{j,n}$ and $\bar{X}_{0:k|k}^{j,N_{j}}\rightarrow E\left( U_{0:k|k}^{j,1}\right)$ as $\min\{N_{1},\ldots,N_{j}\}\rightarrow\infty$, in all $L^{p}$, $1\leq p<\infty$. \end{theorem} \begin{proof} We will prove that for all $j\geq0$ and all $1\leq i\leq k$, $X_{0:i|i}^{j,1,N_{j}}\rightarrow U_{0:i|i}^{j,1}$ as $\min\{N_{1},\ldots,N_{j}\}\rightarrow\infty$, in all $L^{p}$, $1\leq p<\infty$; the convergence of the mean then follows. Since $\left[ X_{0:i|i}^{j,n};U_{0:i|i}^{j,n}\right] _{n=1}^{N_{j}}$ are exchangeable, we only need to consider the convergence of $X_{0:i|i}^{j,1,N_{j}}\rightarrow U_{0:i|i}^{j,1}$. We use induction on the LM iteration number $j$. For $j=0$, we have $X_{0:i|i}^{0,1,N_{0}}=x_{0:i}^{0}=U_{0:i|i}^{0,1}$. For $j\geq1$, we use induction on the time index $i$. For $i=0$, $X_{0|0}^{j,1,N_{j}}=U_{0|0}^{j,1}$.
For $i=1,\ldots,k$, from the induction assumption on $j$ and $i$, we have $\bar{X}_{i-1|k}^{j-1,N_{j-1}}\rightarrow E(U_{i-1|k}^{j-1,1})$ and $X_{i-1|i-1}^{j,1,N_{j}}\rightarrow U_{i-1|i-1}^{j,1}$ in all $L^{p}$, $1\leq p<\infty$, as $\min\{N_{1},\ldots,N_{j}\}\rightarrow\infty$. Convergence in $L^{p}$ implies convergence in probability, and by the continuous mapping theorem,
\begin{align*}
X_{i|i-1}^{j,1,N_{j}}= & \mathcal{M}_{i}^{\prime}\left( \bar{X}_{i-1|k}^{j-1,N_{j-1}}\right) X_{i-1|i-1}^{j,1,N_{j}}+\mathcal{M}_{i}\left( \bar{X}_{i-1|k}^{j-1,N_{j-1}}\right) \\
& -\mathcal{M}_{i}^{\prime}\left( \bar{X}_{i-1|k}^{j-1,N_{j-1}}\right) \bar{X}_{i-1|k}^{j-1,N_{j-1}}+\mu_{i}+V_{i}^{j,1}\\
\xrightarrow{\mathrm{P}} & \mathcal{M}_{i}^{\prime}\left( E\left( U_{i-1|k}^{j-1,1}\right) \right) U_{i-1|i-1}^{j,1}+\mathcal{M}_{i}\left( E\left( U_{i-1|k}^{j-1,1}\right) \right) \\
& -\mathcal{M}_{i}^{\prime}\left( E\left( U_{i-1|k}^{j-1,1}\right) \right) E\left( U_{i-1|k}^{j-1,1}\right) +\mu_{i}+V_{i}^{j,1}\\
= & U_{i|i-1}^{j,1}
\end{align*}
as $\min\{N_{1},\ldots,N_{j}\}\rightarrow\infty$. From Lemma \ref{lem:boundness_X}, the sequence $\left\{ X_{0:i|i-1}^{j,1,N_{j}}\right\} _{N_{j}=1}^{\infty}$ is bounded in all $L^{p}$, $1\leq p<\infty$; therefore, by the uniform integrability theorem, we upgrade the convergence in probability to convergence in all $L^{p}$, hence $X_{0:i|i-1}^{j,1,N_{j}}\rightarrow U_{0:i|i-1}^{j,1}$ and $\bar{X}_{0:i|i-1}^{j,N_{j}}\rightarrow E\left( U_{0:i|i-1}^{j,1}\right)$ in all $L^{p}$. From \cite{Mandel-2011-CEK} we have $P_{0:i|i-1}^{j,N_{j}}\xrightarrow{\mathrm{P}}Q_{0:i}^{j}$; then, from the continuous mapping theorem, $K_{i}^{N_{j}}\xrightarrow{\mathrm{P}}K_{i}$.
From the fact that convergence in $L^{p}$ implies convergence in probability, and using the continuous mapping theorem again, we conclude that
\begin{align*}
X_{0:i|i}^{j,1,N_{j}} & =X_{0:i|i-1}^{j,1,N_{j}}+K_{i}^{N_{j}}\left( \tilde{y}_{i}+\tilde{\mathcal{H}}_{i}^{\prime}\left( \bar{X}_{i|k}^{j-1,N_{j-1}}\right) \bar{X}_{i|k}^{j-1,N_{j-1}}-\tilde{\mathcal{H}}_{i}\left( \bar{X}_{i|k}^{j-1,N_{j-1}}\right) \right. \\
& \quad-\left. \tilde{W}_{i}^{j,1}-\tilde{\mathcal{H}}_{i}^{\prime}\left( \bar{X}_{i|k}^{j-1,N_{j-1}}\right) X_{i|i-1}^{j,1,N_{j}}\right) \\
& \xrightarrow{\mathrm{P}}U_{0:i|i-1}^{j,1}+K_{i}\left( \hat{y}_{i}+\tilde{\mathcal{H}}_{i}^{\prime}\left( E\left( U_{i|k}^{j-1,1}\right) \right) E\left( U_{i|k}^{j-1,1}\right) -\tilde{\mathcal{H}}_{i}\left( E\left( U_{i|k}^{j-1,1}\right) \right) \right. \\
& \quad-\left. \tilde{W}_{i}^{j,1}-\tilde{\mathcal{H}}_{i}^{\prime}\left( E\left( U_{i|k}^{j-1,1}\right) \right) U_{i|i-1}^{j,1}\right) \\
& =U_{0:i|i}^{j,1},
\end{align*}
as $\min\left\{ N_{1},\ldots,N_{j}\right\} \rightarrow\infty$. Then we upgrade this convergence to convergence in $L^{p}$ using Lemma~\ref{lem:boundness_X} and uniform integrability again. \end{proof} \subsection{EnKS-4DVAR} \label{sec:EnKS-4DVAR-fd} To avoid computing with the tangent matrices $\mathcal{M}_{i}^{\prime}\left( x_{i-1}^{j-1}\right)$ and $\mathcal{H}_{i}^{\prime}\left( x_{i}^{j-1}\right)$, we take advantage of the fact that they occur in the EnKS only in matrix-vector products, and approximate the matrix-vector multiplications in Algorithm \ref{alg:LM-EnKS-WD} by finite differences with a small step size $\tau>0$, centered at the previous iterate. Thus, we use approximations of the form
\begin{equation}
f^{\prime}\left( x\right) y\approx\frac{f\left( x+\tau y\right) -f\left( x\right) }{\tau} \label{eq:fd}
\end{equation}
in (\ref{eq:LM-EnKS-WD-adv}), (\ref{eq:LM-EnKS-WD-ana}), and (\ref{eq:LM-EnKS-WD-h}).
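The approximation (\ref{eq:fd}) trades the tangent operator for one extra evaluation of $f$, with an error of order $\tau$. A minimal numerical sketch (the test function, point, and step sizes are illustrative, not from the paper):

```python
import numpy as np

def fd_jvp(f, x, y, tau):
    """Finite-difference approximation of the Jacobian-vector product f'(x) y."""
    return (f(x + tau * y) - f(x)) / tau

# Illustrative nonlinear map with a known tangent
f = lambda x: np.array([np.sin(x[0]) + x[1] ** 2, x[0] * x[1]])
jac = lambda x: np.array([[np.cos(x[0]), 2 * x[1]], [x[1], x[0]]])

x = np.array([0.3, -1.2])
y = np.array([1.0, 0.5])
exact = jac(x) @ y
# The error decreases like O(tau), consistent with the Taylor bound M*tau*|y|^2
errs = [np.linalg.norm(fd_jvp(f, x, y, t) - exact) for t in (1e-1, 1e-2, 1e-3)]
assert errs[0] > errs[1] > errs[2]
```

The same one-sided difference, centered at the previous iterate, is what replaces every tangent-matrix product in the algorithm below.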
Denote by an additional superscript $\tau$ the quantities computed in the resulting algorithm. This is the EnKS-4DVAR method originally proposed in \cite{Mandel-2013-4EK}. \begin{algorithm}[\textbf{\textsc{{EnKS-4DVAR}}}]\label{alg:LM_EnKS2} Given an initial approximation $x_{0:k}^{0}$, $\gamma>0$, and $\tau>0$. Initialize
\[
\bar{X}_{0:k|k}^{j,N_{j},\tau}=x_{0:k}^{0}\quad\text{for }j=0.
\]
LM loop: For $j=1,2,\ldots$, choose $N_{j}$ the same as in Algorithm~\ref{alg:LM-EnKS-WD}. EnKS loop: For $i=0$, the ensemble $\left[ X_{0|0}^{j,n,\tau}\right] _{n=1}^{N_{j}}$ consists of i.i.d. Gaussian random variables
\[
X_{0|0}^{j,n,\tau}\sim N\left( \tilde{x}_{0}^{0},B\right) .
\]
For $i=1,\ldots,k$, advance the model in time (the forecast step) by
\begin{align}
X_{i|i-1}^{j,n,\tau} & =\frac{\mathcal{M}_{i}\left( \bar{X}_{i-1|k}^{j-1,N_{j-1},\tau}+\tau\left( X_{i-1|i-1}^{j,n,\tau}-\bar{X}_{i-1|k}^{j-1,N_{j-1},\tau}\right) \right) -\mathcal{M}_{i}\left( \bar{X}_{i-1|k}^{j-1,N_{j-1},\tau}\right) }{\tau}\label{eq:LM-EnKS2-adv}\\
& \quad+\mathcal{M}_{i}\left( \bar{X}_{i-1|k}^{j-1,N_{j-1},\tau}\right) +\mu_{i}+V_{i}^{j,n},\quad V_{i}^{j,n}\sim N\left( 0,Q_{i}\right) ,\quad n=1,\ldots,N_{j}.\nonumber
\end{align}
Incorporate the observations at time $i$ into the ensemble of composite states $\left[ X_{0:i|i-1}^{j,n,\tau}\right] _{n=1}^{N_{j}}$ by the analysis step
\begin{align}
X_{0:i|i}^{j,n,\tau}= & X_{0:i|i-1}^{j,n,\tau}+P_{0:i|i-1}^{j,N_{j},\tau}\tilde{H}_{i}^{j,\tau\mathrm{T}}\left( \tilde{H}_{i}^{j,\tau}P_{0:i|i-1}^{j,N_{j},\tau}\tilde{H}_{i}^{j,\tau\mathrm{T}}+\tilde{R}_{i}\right) ^{-1}\label{eq:LM-EnKS2-ana}\\
& \cdot\Bigg(\tilde{y}_{i}-\tilde{W}_{i}^{j,n}-\mathcal{\tilde{H}}_{i}\left( \bar{X}_{i|k}^{j-1,N_{j-1},\tau}\right) \nonumber\\
& \quad-\frac{\mathcal{\tilde{H}}_{i}\left( \bar{X}_{i|k}^{j-1,N_{j-1},\tau}+\tau\left( X_{i|i-1}^{j,n,\tau}-\bar{X}_{i|k}^{j-1,N_{j-1},\tau}\right) \right) -\mathcal{\tilde{H}}_{i}\left( \bar{X}_{i|k}^{j-1,N_{j-1},\tau}\right) }{\tau}\Bigg),\nonumber\\
& \tilde{W}_{i}^{j,n}\sim N\left( 0,\tilde{R}_{i}\right) ,\nonumber
\end{align}
where $P_{0:i|i-1}^{j,N_{j},\tau}$ is the sample covariance of the ensemble $\left[ X_{0:i|i-1}^{j,n,\tau}\right] _{n=1}^{N_{j}}$. Similarly as in (\ref{eq:PHt})--(\ref{eq:h_n}), only the following matrix-vector products are needed:
\begin{align}
P_{0:i|i-1}^{j,N_{j},\tau}\tilde{H}_{i}^{j,\tau\mathrm{T}} & =\frac{1}{N_{j}-1}\sum_{n=1}^{N_{j}}\left( X_{0:i|i-1}^{j,n,\tau}-\bar{X}_{0:i|i-1}^{j,N_{j},\tau}\right) \left( X_{i|i-1}^{j,n,\tau}-\bar{X}_{i|i-1}^{j,N_{j},\tau}\right) ^{\mathrm{T}}\tilde{H}_{i}^{j,\tau\mathrm{T}}\label{eq:LM-EnKS2-PHt}\\
& =\frac{1}{N_{j}-1}\sum_{n=1}^{N_{j}}\left( X_{0:i|i-1}^{j,n,\tau}-\bar{X}_{0:i|i-1}^{j,N_{j},\tau}\right) h_{i}^{j,n,\tau\mathrm{T}},\nonumber\\
\tilde{H}_{i}^{j,\tau}P_{0:i|i-1}^{j,N_{j},\tau}\tilde{H}_{i}^{j,\tau\mathrm{T}} & =\frac{1}{N_{j}-1}\sum_{n=1}^{N_{j}}\tilde{H}_{i}^{j,\tau}\left( X_{i|i-1}^{j,n,\tau}-\bar{X}_{i|i-1}^{j,N_{j},\tau}\right) \left( X_{i|i-1}^{j,n,\tau}-\bar{X}_{i|i-1}^{j,N_{j},\tau}\right) ^{\mathrm{T}}\tilde{H}_{i}^{j,\tau\mathrm{T}}\label{eq:LM-EnKS2-HPHt}\\
& =\frac{1}{N_{j}-1}\sum_{n=1}^{N_{j}}h_{i}^{j,n,\tau}h_{i}^{j,n,\tau\mathrm{T}},\nonumber
\end{align}
where
\begin{equation}
h_{i}^{j,n,\tau}=\tilde{H}_{i}^{j,\tau}\left( X_{i|i-1}^{j,n,\tau}-\bar{X}_{i|i-1}^{j,N_{j},\tau}\right) =\frac{\mathcal{\tilde{H}}_{i}\left( \tau\left( X_{i|i-1}^{j,n,\tau}-\bar{X}_{i|i-1}^{j,N_{j},\tau}\right) +\bar{X}_{i|k}^{j-1,N_{j-1},\tau}\right) -\mathcal{\tilde{H}}_{i}\left( \bar{X}_{i|k}^{j-1,N_{j-1},\tau}\right) }{\tau} \label{eq:LM-EnKS2-hn}
\end{equation}
and
\[
\bar{X}_{i|i-1}^{j,N_{j},\tau}=\frac{1}{N_{j}}\sum_{n=1}^{N_{j}}X_{i|i-1}^{j,n,\tau},\quad\bar{X}_{0:i|i}^{j,N_{j},\tau}=\frac{1}{N_{j}}\sum_{n=1}^{N_{j}}X_{0:i|i}^{j,n,\tau}.
\]
The next LM iterate is $\tilde{x}^{j,\tau}=\bar{X}_{0:k|k}^{j,N_{j},\tau}$.
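For a linear observation operator, the identity behind the anomaly-based products above can be checked directly: the covariance-times-$H^{\mathrm{T}}$ factors follow from ensemble anomalies alone, without ever forming the covariance matrix. A small sketch (dimensions and the operator are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, m = 200, 6, 3                     # ensemble size, state dim, obs dim
X = rng.standard_normal((d, N))         # forecast ensemble, one member per column
H = rng.standard_normal((m, d))         # a linear observation operator (illustrative)

A = X - X.mean(axis=1, keepdims=True)   # ensemble anomalies
h = H @ A                               # h^n = H (X^n - Xbar), one product per member
PHt = A @ h.T / (N - 1)                 # P H^T without forming P
HPHt = h @ h.T / (N - 1)                # H P H^T likewise

# Agrees with the explicit sample covariance
P = np.cov(X)
assert np.allclose(PHt, P @ H.T)
assert np.allclose(HPHt, H @ P @ H.T)
```

In the nonlinear case the per-member products $h^{n}$ are in turn replaced by the finite differences of (\ref{eq:LM-EnKS2-hn}), so the algorithm needs only evaluations of the observation operator.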
\end{algorithm} We now summarize the differences between the previous three algorithms. Algorithm \ref{alg:LM_D} solves the linearized problem in each iteration exactly, Algorithm \ref{alg:LM-EnKS-WD} approximates the solution of the linearized problem by the EnKS, and Algorithm \ref{alg:LM_EnKS2} approximates also the linearized problem itself by finite differences. We show that when the finite-difference parameter $\tau\rightarrow0$, the iterations of Algorithm \ref{alg:LM_EnKS2} converge in probability to the corresponding iterations of Algorithm \ref{alg:LM-EnKS-WD}. The following lemma is the cornerstone of the analysis of the finite differences here. \begin{lemma} \label{lem:fd-conv} Let $\left( X_{\tau}\right)$ and $\left( Y_{\tau}\right)$ be random vectors such that $X_{\tau}\xrightarrow{\mathrm{P}}X$ and $Y_{\tau}\xrightarrow{\mathrm{P}}Y$ as $\tau\rightarrow0$, $\tau>0$, and let $f$ be twice continuously differentiable with the matrix of second-order derivatives $f^{\prime\prime}$ bounded. Then,
\[
\frac{f(X_{\tau}+\tau Y_{\tau})-f(X_{\tau})}{\tau}\xrightarrow{\mathrm{P}}f^{\prime}(X)Y\text{ as }\tau\rightarrow0\text{, }\tau>0.
\]
\end{lemma} \begin{proof} From the Taylor expansion, for any $x$, $y$, and $t$,
\begin{equation}
\left\vert \frac{f(x+ty)-f\left( x\right) }{t}-f^{\prime}(x)y\right\vert \leq Mt\left\vert y\right\vert ^{2}, \label{eq:fd-estimate}
\end{equation}
where $M=\frac{1}{2}\sup_{\xi}\left\vert f^{\prime\prime}\left( \xi\right) \right\vert$ in the matrix norm induced by the vector norm $\left\vert \cdot\right\vert$. Let $\varepsilon>0$, $\tilde{\varepsilon}>0$. Since $Y_{\tau}\xrightarrow{\mathrm{P}}Y$, $\left\{ Y_{\tau}\right\}$ is uniformly tight, that is, there exists $K$ such that $\mathbb{P}\left[ \left\vert Y_{\tau}\right\vert \leq K\right] \geq1-\tilde{\varepsilon}$ for all $\tau>0$. Choose $\tau_{1}=\frac{\varepsilon}{MK^{2}}>0$.
Using (\ref{eq:fd-estimate}), it follows that for all $0<\tau<\tau_{1}$,
\begin{equation}
\mathbb{P}\left[ \left\vert \frac{f(X_{\tau}+\tau Y_{\tau})-f(X_{\tau})}{\tau}-f^{\prime}(X_{\tau})Y_{\tau}\right\vert \leq\varepsilon\right] \geq1-\tilde{\varepsilon}. \label{eq:fd-prob-est}
\end{equation}
Since the mapping $\left( x,y\right) \mapsto f^{\prime}\left( x\right) y$ is continuous and $\left( X_{\tau},Y_{\tau}\right) \rightarrow\left( X,Y\right)$ in probability, it follows from the continuous mapping theorem that $f^{\prime}(X_{\tau})Y_{\tau}\rightarrow f^{\prime}(X)Y$ in probability, hence there exists $\tau_{2}$ such that for all $\tau<\tau_{2}$,
\begin{equation}
\mathbb{P}\left[ \left\vert f^{\prime}(X_{\tau})Y_{\tau}-f^{\prime}(X)Y\right\vert \leq\varepsilon\right] \geq1-\tilde{\varepsilon}. \label{eq:incr-prob-est}
\end{equation}
Finally, using the triangle inequality, (\ref{eq:fd-prob-est}) and (\ref{eq:incr-prob-est}) imply
\[
\mathbb{P}\left[ \left\vert \frac{f(X_{\tau}+\tau Y_{\tau})-f(X_{\tau})}{\tau}-f^{\prime}(X)Y\right\vert \leq2\varepsilon\right] \geq1-2\tilde{\varepsilon},
\]
for all $0<\tau<\min\left\{ \tau_{1},\tau_{2}\right\}$. \end{proof} \begin{theorem} \label{thm:conv-tau} At each iteration $j$ and time step $i$ of Algorithm \ref{alg:LM_EnKS2}, $X_{0:i|i}^{j,n,\tau}\xrightarrow{\mathrm{P}}X_{0:i|i}^{j,n}$ as $\tau\rightarrow0$, where $X_{0:i|i}^{j,n}$ is the $n$-th member of the ensemble generated at the $j$-th iteration of Algorithm \ref{alg:LM-EnKS-WD} with the same random perturbations as in Algorithm \ref{alg:LM_EnKS2}. \end{theorem} \begin{proof} In this proof we omit the subscripts of $N_{j}$ and $N_{j-1}$. The proof is by induction on the number of iterations $j$. For $j=1$ we have $\bar{X}_{0:i|i}^{j-1,N,\tau}=\bar{X}_{0:i|i}^{j-1,N}$. For $j\geq2$, we use induction on the time step $i$. For $i=0$ we have $X_{0|0}^{j,n,\tau}=x_{\mathrm{b}}+V_{b}^{n}=X_{0|0}^{j,n}$.
For $i=1,\ldots,k$, we have from the induction assumption on $i$ that $X_{i-1|i-1}^{j,n,\tau}\xrightarrow{\mathrm{P}}X_{i-1|i-1}^{j,n}$ as $\tau\rightarrow0$. Then, using Lemma \ref{lem:fd-conv}, we have in (\ref{eq:LM-EnKS2-adv}), as $\tau\rightarrow0$,
\begin{align}
X_{i|i-1}^{j,n,\tau}= & \frac{\mathcal{M}_{i}\left( \bar{X}_{i-1|k}^{j-1,N,\tau}+\tau\left( X_{i-1|i-1}^{j,n,\tau}-\bar{X}_{i-1|k}^{j-1,N,\tau}\right) \right) -\mathcal{M}_{i}\left( \bar{X}_{i-1|k}^{j-1,N,\tau}\right) }{\tau}\label{eq:LM-EnKS2-ana-conv}\\
& +\mathcal{M}_{i}\left( \bar{X}_{i-1|k}^{j-1,N,\tau}\right) +\mu_{i}+V_{i}^{j,n}\nonumber\\
\xrightarrow{\mathrm{P}} & \mathcal{M}_{i}^{\prime}\left( \bar{X}_{i-1|k}^{j-1,N}\right) \left( X_{i-1|i-1}^{j,n}-\bar{X}_{i-1|k}^{j-1,N}\right) +\mathcal{M}_{i}\left( \bar{X}_{i-1|k}^{j-1,N}\right) +\mu_{i}+V_{i}^{j,n}=X_{i|i-1}^{j,n}.\nonumber
\end{align}
Similarly, using the induction assumption on $j$ and Lemma \ref{lem:fd-conv}, we have in (\ref{eq:LM-EnKS2-ana}) and in (\ref{eq:LM-EnKS2-hn}), respectively,
\begin{align*}
& \frac{\mathcal{\tilde{H}}_{i}\left( \bar{X}_{i|k}^{j-1,N,\tau}+\tau\left( X_{i|i-1}^{j,n,\tau}-\bar{X}_{i|k}^{j-1,N,\tau}\right) \right) -\mathcal{\tilde{H}}_{i}\left( \bar{X}_{i|k}^{j-1,N,\tau}\right) }{\tau}\\
& \quad\xrightarrow{\mathrm{P}}\mathcal{\tilde{H}}_{i}^{\prime}\left( \bar{X}_{i|k}^{j-1,N}\right) \left( X_{i|i-1}^{j,n}-\bar{X}_{i|k}^{j-1,N}\right) ,\\
& \frac{\mathcal{\tilde{H}}_{i}\left( \bar{X}_{i|k}^{j-1,N,\tau}+\tau\left( X_{i|i-1}^{j,n,\tau}-\bar{X}_{i|i-1}^{j,N,\tau}\right) \right) -\mathcal{\tilde{H}}_{i}\left( \bar{X}_{i|k}^{j-1,N,\tau}\right) }{\tau}\\
& \quad\xrightarrow{\mathrm{P}}\mathcal{\tilde{H}}_{i}^{\prime}\left( \bar{X}_{i|k}^{j-1,N}\right) \left( X_{i|i-1}^{j,n}-\bar{X}_{i|i-1}^{j,N}\right)
\end{align*}
as $\tau\rightarrow0$. Comparing the latter limit with (\ref{eq:h_n}) gives
\begin{equation}
h_{i}^{j,n,\tau}\xrightarrow{\mathrm{P}}h_{i}^{j,n}\text{ as }\tau\rightarrow0.
\label{eq:LM-EnKS2-hn-conv}
\end{equation}
Using (\ref{eq:LM-EnKS2-hn-conv}) and the continuous mapping theorem in (\ref{eq:LM-EnKS2-HPHt}) and (\ref{eq:LM-EnKS2-PHt}) gives
\begin{align*}
P_{0:i|i-1}^{j,N,\tau}\tilde{H}_{i}^{j,\tau\mathrm{T}} & \xrightarrow{\mathrm{P}}P_{0:i|i-1}^{j,N}\tilde{H}_{i}^{j\mathrm{T}}\quad\text{as }\tau\rightarrow0,\\
\tilde{H}_{i}^{j,\tau}P_{0:i|i-1}^{j,N,\tau}\tilde{H}_{i}^{j,\tau\mathrm{T}} & \xrightarrow{\mathrm{P}}\tilde{H}_{i}^{j}P_{0:i|i-1}^{j,N}\tilde{H}_{i}^{j\mathrm{T}}\quad\text{as }\tau\rightarrow0.
\end{align*}
Using also (\ref{eq:LM-EnKS2-ana-conv}) in (\ref{eq:LM-EnKS2-ana}) and the continuous mapping theorem once more gives $X_{0:i|i}^{j,n,\tau}\xrightarrow{\mathrm{P}}X_{0:i|i}^{j,n}$ as $\tau\rightarrow0$. \end{proof} \begin{corollary} \label{cor:composite-limit} For each $j$, $\lim_{\min\{N_{1},\ldots,N_{j}\}\rightarrow\infty}\lim_{\tau\rightarrow0}\bar{X}_{0:k|k}^{j,N_{j},\tau}=x^{j}$ in probability, where $x^{j}$ is the $j$-th iterate of Algorithm \ref{alg:LM_D}. \end{corollary} \begin{proof} The proof follows immediately from Theorem \ref{thm:conv-tau}, Theorem \ref{thm:conv-N}, and Lemma \ref{lem:U_eq_min_r}. \end{proof} \section{Conclusion} In this paper we have shown that, when the observation and model operators are linear at every time step, the empirical mean and covariance of the EnKS converge to the KS mean and covariance in the large-ensemble limit, in $L^{p}$ for any $p\in[1,\infty)$. In the nonlinear case, i.e., when the observation and model operators are not necessarily linear, we have shown the convergence of the LM-EnKS iterations (Algorithm~\ref{alg:LM_EnKS2}) in the large-ensemble limit.
The convergence is in the sense that (i) each iterate generated by Algorithm~\ref{alg:LM_EnKS2} converges in probability to its corresponding iterate of Algorithm~\ref{alg:LM-EnKS-WD} as the finite-difference parameter goes to zero, and (ii) each iterate generated by Algorithm~\ref{alg:LM-EnKS-WD} converges, in $L^{p}$ for any $p\in[1,\infty)$, to its corresponding iterate of Algorithm~\ref{alg:LM_D} (the Levenberg-Marquardt algorithm) in the large-ensemble limit. These proofs of convergence, and more generally the asymptotic behavior of ensemble-based algorithms, deserve further investigation. Here, in the nonlinear case, we have given only the limit in probability of each iterate of Algorithm~\ref{alg:LM_EnKS2} as the finite-difference parameter goes to zero and the ensemble sizes go to infinity. One may, for instance, try to prove stronger convergence results, especially to upgrade the convergence in probability to convergence in $L^{p}$, and establish the convergence rate of these algorithms in the spirit of \cite{LeGland-2011-LSA}. The approach followed in this paper could also be extended to the case in which other variants of the ensemble method, such as the square root ensemble Kalman filter \cite{Kwiatkowski-2014-CSR}, are used to approximately solve the linearized subproblem. \bibliographystyle{siam}
\section{Introduction} Galaxy clusters are located at the peaks of the (dark) matter density field and, as they evolve, they accrete galaxies, galaxy groups, and other clusters from the cosmic web. Some of those merging events are among the most energetic and violent events in the Universe, releasing energies of up to 10$^{64}$ ergs \citep{Sarazin2002, Sarazin2004} and providing extreme conditions to study a range of phenomena, from particle physics \citep[e.g.][]{markevitch04,harvey15,kim2017} to cosmology \citep[e.g.][]{clowe06,thompson15}, including galaxy evolution \citep[e.g.][]{ribeiro13,zenteno2020}. The cluster assembly process affects galaxies via several physical processes, including harassment, galaxy-galaxy encounters \citep[e.g.,][]{ToomreToomre72}, tidal truncation, starvation, and ram pressure stripping \citep{GunnGott72}, which act upon the galaxies at different cluster-centric distances \citep[e.g.,][]{treu03}. Such events not only change galaxies in terms of their stellar populations and morphologies \citep[e.g., ][]{Kapferer2009, mcpartland16, poggianti16, Kelkar20}, but can also destroy them, as indicated by a Halo Occupation Number index lower than 1 \citep[e.g.,][]{lin04a,zenteno11,zenteno16,hennig17}. In such extreme environments, galaxies are exposed to conditions that may quench \citep[e.g.][]{Poggianti2004, Pallero2021} or trigger star formation \citep[e.g.][]{Ferrari2003, Owers2012}. For example, \citet{Kalita2019} found evidence of a jellyfish galaxy in the dissociative merging galaxy cluster A1758N ($z\sim0.3$), concluding that it suffered ram-pressure stripping due to the merging event. \citet{pranger14} studied the galaxy population of the post-merger system Abell 2384 ($z\sim0.094$), finding that the population of spiral galaxies at the center of the cluster does not show star formation activity, and proposing that this could be a consequence of ram-pressure stripping of field spirals falling into the cluster.
\citet{ma10} discovered a population of lenticular post-starburst galaxies in the region in between the two colliding structures of the merging galaxy cluster MACS J0025.4-1225 ($z\sim0.59$), finding that the starburst episode occurred during the first passage ($\sim$0.5-1 Gyr ago), while the morphologies were already being affected, the galaxies being transformed into lenticulars by either ram-pressure events or tidal forces towards the central region. On the other hand, \citet{Yoon2020} found evidence of enhanced star formation activity in galaxies in merging galaxy clusters, suggesting that it could be due to an increased fraction of barred galaxies in these systems \citep{Yoon2019}. \citet{Stroe2014} found an increase of H$\alpha$ emission in star-forming galaxies in the merging cluster ``Sausage'' (CIZA J2242.8+5301) and, by comparing its galaxy population with that of the more evolved merger cluster ``Toothbrush'' (1RXS J0603.3+4213), concluded that merger shocks could enhance the star formation activity of galaxies, causing them to exhaust their gas reservoirs faster \citep{Stroe2015}. To understand how the merger process impacts cluster galaxies, it is crucial to assemble large samples of merging clusters and determine their corresponding merger phase: pre-, ongoing, or post-merger. Among the available cluster samples, SZ-selected ones are ideal, as they are composed of the most massive clusters in the Universe and are thus bound to host the most extreme events. The South Pole Telescope \citep[SPT;][]{carlstrom11} has completed a thermal SZ survey, finding 677 cluster candidates \citep{bleem15b} and providing a well-understood sample to study the impact of cluster mergers on the galaxy population. Rich ancillary information is available for those clusters, including the gas centroids (via SZ and/or X--ray), optical imaging, near-infrared imaging, cluster masses, and photometric redshifts.
Furthermore, as the SPT cluster selection is nearly independent of redshift, a merging cluster sample drawn from it also allows evolutionary studies out to high redshifts. Using SPT-SZ selected clusters and optical imaging, \citet[][]{song12b} reported the brightest cluster galaxy (BCG) positions for 158 SPT cluster candidates and, using the separation between the cluster BCG and the SZ centroid as a dynamical state proxy, found that SPT-CLJ0307-6225 is the most disturbed galaxy cluster of the sample. Recently, \cite{zenteno2020} employed optical data from the first three years of the Dark Energy Survey \citep[DES; ][]{abbott18,morganson18,des16} and used the BCGs of 288 SPT SZ-selected clusters \citep{bleem15b} to classify their dynamical state. They identified the 43 most extreme systems, all with a BCG-SZ centroid separation greater than 0.4 $r_{200}$, once again including SPT-CLJ0307-6225. Furthermore, an X--ray morphological analysis by \cite{Nurgaliev2017} of 90 SPT-selected galaxy clusters shows SPT-CLJ0307-6225 as one of the two most extreme cases \citep[the other being `El Gordo';][]{Marriage2011,Menanteau2012}, making this cluster an interesting system in which to test the impact of a massive merging event on galaxy evolution, the goal of this paper. We use VLT/MUSE and Gemini/GMOS spectroscopy, X--ray data from {\it Chandra}, and Megacam imaging to characterize the merger stage of SPT-CLJ0307-6225 and its impact on the galaxy population. The paper is organized as follows: in $\S$\ref{sec:observations} we provide details of the observations and data reduction. In $\S$\ref{sec:analysis} we present the analysis of the spectroscopic and optical data, while in $\S$\ref{sec:results} we report our findings for both the merging scenario and the galaxy population. In $\S$\ref{sec:discussion} we propose a scenario for the merging event and connect it to the galaxy population. In $\S$\ref{sec:conclusions} we give a summary of the results.
Throughout the paper we assume a flat Universe with a $\Lambda$CDM cosmology, $h=0.7$, $\Omega_m = 0.27$ \citep[][]{Komatsu2011}. Within this cosmology, 1 arcsec corresponds to $\sim$6.66 kpc at the cluster redshift. \begin{figure*} \centering \includegraphics[width=\linewidth]{plots/0307_muse4.pdf} \vskip-0.13in \caption{Pseudo-color image, from a $gri$ filter combination, of the central area of SPT-CLJ0307-6225. Magenta squares show the MUSE footprints, with the number in the top-right corner of each square identifying the cube. Orange contours were derived from archival \textit{Chandra} images. The cyan plus-sign marks the X--ray centroid \citep{mcdonald13}. The arrows show the positions of the two brightest galaxies of the cluster. The white bar on the bottom shows the scale of 1 arcminute. The inset shows the 2D galaxy number density on the same scale as the main figure, where the two highest-intensity areas correspond to the areas around the BCGs.} \label{fig:rgb_image} \end{figure*} \section{Observations and Data Reduction} \label{sec:observations} \subsection{Optical Imaging} \label{sec:imaging} Optical images were obtained with Magellan Clay and Megacam during a single night on November 26, 2011 (UT). Megacam has a 24\hbox{$^{\prime}$} x 24\hbox{$^{\prime}$} field-of-view, which at redshift 0.579 corresponds to $\sim$10 Mpc. Several dithered exposures were taken in the $g$, $r$, and $i$ filters for total times of 1200 s, 1800 s, and 2400 s, respectively. The median seeing of the images was approximately 0.79 arcsec, or about 5 kpc, with better seeing in the $r$ band, averaging 0.60 arcsec. The 10$\sigma$ limiting magnitudes in $gri$ are 24.24, 24.83, and 23.58, respectively \citep{chiu16b}. In Fig.~\ref{fig:rgb_image} we show the $gri$ pseudo-color image, centered on the SZ position of SPT-CLJ0307-6225, with the white bar on the bottom right showing the corresponding scale.
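The quoted scales (1 arcsec $\approx$ 6.66 kpc at $z=0.579$, hence $\approx$10 Mpc for the 24\hbox{$^{\prime}$} field of view) follow directly from the assumed cosmology. A minimal, standard-library-only numerical check (the integration scheme and step count are arbitrary choices, not from the paper):

```python
import math

def kpc_per_arcsec(z, h=0.7, om=0.27, steps=10000):
    """Proper transverse scale in a flat LCDM cosmology (illustrative sketch)."""
    c, H0 = 299792.458, 100.0 * h           # km/s, km/s/Mpc
    E = lambda zz: math.sqrt(om * (1.0 + zz) ** 3 + (1.0 - om))
    dz = z / steps
    # comoving distance via midpoint-rule integration of (c/H0) dz/E(z)
    d_c = (c / H0) * sum(dz / E((i + 0.5) * dz) for i in range(steps))
    d_a = d_c / (1.0 + z)                    # angular diameter distance, Mpc
    return d_a * 1000.0 * math.pi / (180.0 * 3600.0)  # kpc per arcsec

scale = kpc_per_arcsec(0.579)                # ~6.66 kpc/arcsec
field_mpc = scale * 24 * 60 / 1000.0         # 24 arcmin field -> ~10 Mpc
```

In practice one would use a cosmology library (e.g. `astropy.cosmology`) rather than hand-rolled integration; the sketch only makes the unit conversion explicit.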
The catalogs for the photometric calibration were created following \citet{High2012} and \citet{Dietrich2019}, including standard bias subtraction and bad-pixel masking, as well as flat-fielding, illumination, and fringe (for the $i$ band only) corrections. The stellar locus regression \citep{High2009} was constrained by cross-matching with 2MASS catalogs, giving uncertainties of 0.05 mag in absolute magnitude and 0.03 mag in color. For the creation of the galaxy photometric catalogs, we use a combination of the Source Extractor \citep[\texttt{SExtractor};][]{bertin96} and Point Spread Function Extractor \citep[\textsc{PSFex};][]{Bertin2011} software packages. \textsc{SExtractor} is run in dual mode, using the $i$-band image as the reference given the redshift of the cluster. We extract all detected sources with at least 6 pixels connected above the 4$\sigma$ threshold, using a 5 pix Gaussian kernel. Deblending is performed with 64 sub-thresholds and a minimum contrast of 0.0005. Galaxy magnitudes are \textsc{SExtractor}'s \texttt{MAG\_AUTO} estimates, whereas colors are derived from aperture magnitudes. The star-galaxy separation in our sample is performed following \citet{Crocce2019}, using the \textsc{SExtractor} parameter \textsc{spread\_model} and its corresponding error, \textsc{spreaderr\_model}, derived from the $i$-band image, for objects within $R_{200}$ of the SZ center \citep[$R_{200} = 3.84'$;][]{song12b, zenteno2020}. \citet{Crocce2019} classified a source as a galaxy if it satisfies \begin{equation}\label{eq:phot_gals} \textsc{spread\_model} + \left(\frac{5}{3}\right)\times\textsc{spreaderr\_model} > 0.007 \end{equation} \noindent ensuring a 97\% purity galaxy catalog. With this separation we find 423 sources classified as galaxies. A visual inspection reveals that only 3 of them are not galaxies; however, most of the galaxies spectroscopically classified as cluster members are not included. To remedy this, we change the limit in Eq. 
\ref{eq:phot_gals} to $>0.004$ \citep[for reference, 0.005 corresponds to $\sim$95\% purity;][]{Sevilla-Noarbe2018}, which then includes most of the spectroscopic galaxies (30 missing out of 131, see $\S$\ref{sec:redshifts}) but also increases the contamination by other sources (e.g. stars). To mitigate this, we apply a second, magnitude-based cut for a source to be classified as a galaxy, removing sources brighter than $i_{\rm{auto}}=18.5$ mag, which is $\sim0.5$ mag brighter than the BCG. On the faint end the cut is set at $i_{\rm{auto}} < m^*+3 = 23.39$, which is beyond the limit of our spectroscopic catalogue (see $\S$\ref{sec:completeness}). With this we obtain 789 galaxies, plus the 30 spectroscopic galaxies which did not make the cut. We inspect the properties of these 30 missing objects by comparing their measured \textsc{SExtractor} parameter \texttt{class\_star}, on the same filter, with that of the other 789. \texttt{class\_star} is derived using a neural network, giving the probability of a source being a star (\texttt{class\_star} $\approx$ 1) or a galaxy (\texttt{class\_star} $\approx$ 0). The 30 missing galaxies are all in the higher end of this parameter with respect to the other 789, with \texttt{class\_star}$_i$ $\geq$ 0.80. On the other hand, \cite{Sevilla-Noarbe2018} use a limit of $<0.002$ in Eq. \ref{eq:phot_gals} to classify a source as a star, and these 30 sources are located in the area between that star cut and the galaxy cut we set above. The flux-weighted galaxy surface density map (see $\S$~\ref{sec:subcluster} and the inset in Fig.~\ref{fig:rgb_image}) is generated from the population of red sequence galaxies, determined from the star-galaxy separation. The galaxies potentially missing from the photometric catalog do not alter the general form of the surface density map, and thus do not alter our conclusions.
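The star-galaxy selection described above reduces to a simple boolean cut. A sketch, where the array names and call signature are illustrative, and the thresholds are the ones quoted in the text (morphological limit 0.004, bright limit $i_{\rm auto}=18.5$, faint limit $m^*+3=23.39$):

```python
import numpy as np

def galaxy_mask(spread_model, spreaderr_model, i_auto,
                morph_lim=0.004, bright_lim=18.5, faint_lim=23.39):
    """Star-galaxy separation following the cuts described in the text
    (a sketch; works on scalars or numpy arrays of catalog columns)."""
    morph = spread_model + (5.0 / 3.0) * spreaderr_model > morph_lim
    mag = (i_auto > bright_lim) & (i_auto < faint_lim)
    return morph & mag

# A point-like bright source fails, an extended source in range passes
assert not galaxy_mask(0.001, 0.0005, 17.0)
assert galaxy_mask(0.01, 0.001, 21.0)
```

Applied to the \textsc{SExtractor} output columns, a mask like this yields the photometric galaxy sample to which the spectroscopic members are then added.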
\subsection{Spectroscopic data} \label{sec:reductions} The MUSE \citep{bacon12} observations were taken on August 22nd, 23rd and 24th, 2016 (program id: 097.A-0922(A), PI: Zenteno), and November 10 and December 20, 2017 (program id: 100.A-0645(A), PI: Zenteno). The observations consisted of four pointings, with a total exposure time of 1.25 hours per data cube, with an airmass = 1.4. The positions of the pointings were selected to cover the two BCGs (labeled as BCG1 and BCG2 on Fig.~\ref{fig:rgb_image}) and the area between them. The MUSE footprints for the 4 observed data cubes are shown as magenta squares on Fig.~\ref{fig:rgb_image}, with the cubes enumerated in the top right corner of each square. We use these numbers to refer to the cubes throughout the paper. \begin{table} \caption{Central coordinates and seeing conditions of the observed MUSE fields} \label{tab:muse_data} \centering \resizebox{\linewidth}{!}{% \begin{tabular}{c c c c c} \hline \hline CUBE & Program & \multicolumn{2}{c}{Coordinates} & Seeing \\ & ID & R.A. (J2000) & Dec. (J2000) & (")\\ \hline 1 & 097.A-0922(A) & $03^{\rm h}\ 07^{\rm m}\ 16.34^{\rm s}$ & $-62^\circ\ 26^{\prime}\ 54.98^{\prime\prime}$ & 0.56 \\ 2 & 097.A-0922(A) & $03^{\rm h}\ 07^{\rm m}\ 19.052^{\rm s}$ & $-62^\circ\ 25^{\prime}\ 36.430^{\prime\prime}$ & 0.70 \\ 3 & 0100.A-0645(A) & $03^{\rm h}\ 07^{\rm m}\ 22.271^{\rm s}$ & $-62^\circ\ 24^{\prime}\ 42.140^{\prime\prime}$ & 0.68 \\ 4 & 0100.A-0645(A) & $03^{\rm h}\ 07^{\rm m}\ 25.302^{\rm s}$ & $-62^\circ\ 23^{\prime}\ 46.570^{\prime\prime}$ & 0.97 \\ \hline \hline \end{tabular} } \end{table} The data was taken in WFM-NOAO-N mode, with a position angle of 18 deg for three of the cubes and 72 deg for the one to the south, and using the dithering pattern recommended for best calibration: 4 exposures with offsets of 1" and 90 degrees rotations (MUSE User Manual ver. 1.3.0). The raw data were reduced through the MUSE pipeline \citep{weilbacher14, Weilbacher2016} provided by ESO. 
We construct 1D spectra from the MUSE cube using the {\tt MUSELET} software \citep{bacon16}. {\tt MUSELET} finds sources by constructing spectrally line-weighted narrow-band images, each $5\times1.25$\,\AA\ wide, and running \texttt{SExtractor} on them. In order to create well-fitted masks for their respective sources, the parameter {\tt DETECT\_THRESH} is set to 2.5; below that value, \texttt{SExtractor} detects noise and outputs wrong shapes in the segmentation map. We then use the source file to extract the \texttt{SExtractor} parameters {\tt A\_WORLD}, {\tt B\_WORLD} and {\tt THETA\_WORLD} to create an elliptical mask centered on each source. Finally, we use the {\tt MUSELET} routines {\tt mask\_ellipse} and {\tt sum} to create the 1D weighted spectra of the sources. To make sure the objects fit into their apertures, the \texttt{SExtractor} parameter {\tt PHOT\_FLUXFRAC} is set to 0.9, meaning that 90\% of each source's flux is contained within the mask's radius. We complemented the MUSE galaxy redshifts with Gemini/GMOS data published by \citet{bayliss16}. Their galaxy redshift sample consists of 35\ cluster galaxy redshifts, 8\ of which are not present in our MUSE data. The spectroscopic data from their sample can be found online at the \textsc{VizieR Catalogue Service} \citep{Vizier}, with the details of the data reduction described in \cite{bayliss16} and \cite{Bayliss2017}. For SPT-CLJ 0307-6225, they used 2 spectroscopic masks with an exposure time of 1 hour each. The target selection consisted mostly of galaxies from the red sequence (selected as an overdensity in color-magnitude and color-color space) up to $m^* + 1$, prioritising BCG candidates. \subsection{X-ray data} \label{subsec:xraydata} SPT-CLJ0307-6225 was observed by {\it Chandra} as part of a larger, multi-cycle effort to follow up the 100 most massive SPT-selected clusters spanning $0.3 < z < 1.8$ \citep{mcdonald13,McDonald2017}. 
In particular, this observation (12191) was obtained via the ACIS Guaranteed Time program (PI: Garmire). A total of 24.7\,ks was obtained with ACIS-I in VFAINT mode, centering the cluster $\sim$1.5$^{\prime}$ from the central chip gap. The data were reprocessed using \textsc{ciao} v4.10 and \textsc{caldb} v.4.8.0. For details of the observations and data processing, see \cite{mcdonald13}. The derived X-ray centroid is shown as a cyan plus-sign on Fig.~\ref{fig:rgb_image}. \section{Analysis} \label{sec:analysis} \subsection{Color-Magnitude Diagram and RCS selection} \label{sec:photometric} The color-magnitude diagram (CMD) for the cluster is shown in Fig.~\ref{fig:cmd}, where the magenta triangles are galaxies from our spectroscopic sample (see $\S$\ref{sec:redshifts}) and the dots represent galaxies from our photometric sample (selected as described in \S~\ref{sec:imaging}). For the selection of the red cluster sequence (RCS) galaxies, which consists mostly of passive galaxies likely to be at the redshift of the cluster \citep{Gladders2000}, we examine the location of the galaxies from our spectroscopic sample in the CMD. With this information, we then select all galaxies with $r-i>0.65$ and perform a 3$\sigma$-clipping cut on the color index to remove outliers. We keep all the galaxies from our previous magnitude cut in $\S$~\ref{sec:imaging} ($i_{\rm auto}< 23.39$). Finally, we fit a linear regression to the remaining objects, which is shown with a red dashed line in Fig.~\ref{fig:cmd}. The green dotted lines denote the limits of the RCS, chosen to be $\pm$0.22 mag from the fit, which corresponds to the average scatter of the RCS at 3$\sigma$ \citep{lopez04}. This gives us a total of 187\ optically selected RCS galaxy candidates, with 64\ of those being spectroscopically confirmed members. 
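The RCS selection described above (color floor, 3$\sigma$ clipping on the color index, linear fit, and a fixed-width membership band) can be sketched as follows; a minimal \textsc{numpy} implementation with illustrative names, not the exact fitting code used in the paper:

```python
import numpy as np

def fit_rcs(mag, color, color_floor=0.65, nsigma=3.0, width=0.22):
    """Red-sequence selection sketch: keep galaxies redder than
    color_floor, apply a 3-sigma clip on the color index, fit a line
    color = slope * mag + intercept to the survivors, and flag as RCS
    candidates everything within +/- width mag of the fit."""
    red = color > color_floor
    # 3-sigma clipping on the color index to remove outliers
    mu, sd = np.mean(color[red]), np.std(color[red])
    keep = red & (np.abs(color - mu) < nsigma * sd)
    slope, intercept = np.polyfit(mag[keep], color[keep], 1)
    # candidates: clipped sample within the fixed RCS width
    members = keep & (np.abs(color - (slope * mag + intercept)) < width)
    return slope, intercept, members
```

In practice one may iterate the clip-and-refit step until the selection stabilises.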
\begin{figure} \centering \includegraphics[width=\linewidth]{plots/0307_cmd_v2.pdf} \vskip-0.13in \caption{Color-magnitude diagram (CMD) of SPT-CLJ0307-6225 from Megacam data within R$_{200}$. The $y$-axis shows the color index $r-i$ estimated from aperture magnitudes, with a fixed aperture of $\sim$40 kpc ($\sim$6 arcseconds) at the cluster redshift, while the $x$-axis shows \textsc{SExtractor}'s \texttt{MAG\_AUTO}. Magenta triangles represent galaxies from our spectroscopic sample, whereas dots are galaxies from the photometric sample. The red cluster sequence (RCS) estimated for the cluster is shown as a red-dashed line, while the green dotted lines mark the 0.22 mag width established for the RCS.} \label{fig:cmd} \end{figure} \subsection{Spectroscopic catalog} \subsubsection{Galaxy redshifts} \label{sec:redshifts} To obtain the redshifts, we use an adapted version of \textsc{MARZ} \citep{Hinton2016} for MUSE spectra\footnotemark \footnotetext{\url{http://saimn.github.io/Marz/\#/overview} (Hinton, private communication)}. \textsc{MARZ} is an automatic redshifting Javascript web application that can be used interactively or via the command line. We give the 1D spectrum of each object as an input, obtaining as output the spectral type (late-type galaxy, star, quasar, etc.) and the best-fitting redshift. The results are visually examined for each object, verifying them using the 4000\,\AA\ break and the Calcium $H$ and $K$ lines. A heliocentric correction was applied to all redshifts using the \textsc{rvcorrect} task from \textsc{iraf}. There are three sources in the cube 4 region that appear to be part of the cluster but were not well fitted by \textsc{MARZ}. These sources are labelled by the white arrows in the top panel of Fig.~\ref{fig:manual_redshift}, whereas their spectra are shown in black in the bottom panel. 
For comparison, the red arrow points towards a galaxy with a redshift automatically estimated as close to the cluster redshift, whereas the cyan arrow points towards a galaxy with an estimated redshift higher than that of the cluster. In total we estimate spectroscopic redshifts for 116\ objects within the MUSE fields, with 4 of them classified as stars. In Table~\ref{tab:all_objs_properties} we show the redshifts and magnitudes for these objects. For details of the different columns please refer to Appendix~\ref{sec:appendix_catalog}. \begin{figure} \centering \includegraphics[width=\linewidth]{plots/0307N_manual_redshift.pdf}\\ \includegraphics[width=\linewidth]{plots/manual_redshift_spec_v4.pdf} \vskip-0.13in \caption{\textit{top}: Zoom-in of Fig.~\ref{fig:rgb_image}, into cube 4. White arrows point towards galaxies that had their redshift estimated manually using \textsc{MARZ}, the red arrow points towards a galaxy with an automatic redshift detection, while the cyan arrow points towards a galaxy with similar characteristics to those of the cluster, but at $z=0.716$. \textit{bottom}: Spectra of the sources indicated by arrows in the top panel. Sources indicated by white arrows are shown as black spectra, while the others are colored according to the arrow pointing at them. The redshift found by \textsc{MARZ} for each source is written on top of each spectrum. The black dotted lines and cyan dashed lines mark the Calcium $H$ and $K$ lines redshifted to $z=0.58$ and $z=0.7156$, respectively. We also show as a black dotted line the G-band feature at 4304 \AA, redshifted to $z=0.58$.} \label{fig:manual_redshift} \end{figure} In addition, we supplement these data with 35\ archival reduced GMOS spectra \citep{bayliss16}. Unfortunately, the headers of these spectroscopic data did not contain the wavelength calibration information, so we estimate it manually and then use \textsc{fxcor} to estimate redshifts. 
For the \textsc{fxcor} estimations we use 4 template spectra from the \textsc{IRAF} package \textsc{rvsao}: \textit{eltemp} and \textit{sptemp}, composites of elliptical and spiral galaxies, respectively, produced with the \textsc{FAST} spectrograph on the Tillinghast Telescope \citep{Fabricant_1998}; \textit{habtemp0}, a composite of absorption-line galaxies produced with the \textsc{hectospec} spectrograph on the MMT \citep{Fabricant_1998}; and a synthetic galaxy template, \textit{syn4}, constructed from stellar spectral libraries using stellar light ratios \citep{Quintana_2000}. The redshifts are solved in the spectrum mode of \textsc{fxcor}, taking the $r$-value \citep{Tonry1979} as the main reliability factor of the correlation, following \cite{Quintana_2000}, who consider $r > 4$ as the limit for a reliable result. Here we use the resulting velocity only if (a) at least 3 out of the 4 redshifts estimated from the templates agree with the heliocentric velocity within $\pm$100 km s$^{-1}$ of the median, and (b) at least 2 of those have $r > 5$. Finally, the radial heliocentric velocity of each galaxy and its error are calculated as the mean of the values from the ``on-redshift'' correlations. Out of the 35\ GMOS spectra from above, we have 12\ galaxies with a common MUSE measurement, 10\ of them belonging to the cluster (see below for details on the selection of the cluster members). We use these 12\ galaxies in common to compare the results given by \textsc{fxcor} and \textsc{MARZ}, obtaining a mean difference of $60\pm205$ km s$^{-1}$ in the heliocentric reference frame. However, only one galaxy showed a velocity difference higher than 3$\sigma$. Excluding this galaxy from the analysis gives a mean velocity difference of $4\pm96$ km s$^{-1}$. 
With respect to the redshift measurements presented in \cite{bayliss16}, we find that the velocity difference, for galaxies within $\pm$5000 km s$^{-1}$ of their redshift estimation of the cluster ($z_{\rm cl} = 0.5801$), is $|\Delta cz| \approx 300$ km s$^{-1}$, with a large dispersion. Regarding potential cluster members, we select only galaxies where the redshifts reported by \cite{bayliss16} and the ones estimated using \textsc{fxcor} differ by less than 500 km s$^{-1}$, which at $z_{\rm cl} = 0.5801$ corresponds to a difference of $\sim$0.1\%. This eliminates 2 potential cluster members, one from each method. In Table~\ref{tab:all_objs_properties} we show the properties of 22 objects from GMOS, excluding the 12 in common with MUSE and the potential cluster member from our measured redshifts. The other potential cluster member is ID 27 from GMOS-2, for which \cite{bayliss16} estimated $z = 0.5811$. Redshifts in Table~\ref{tab:all_objs_properties} correspond to the ones measured using \textsc{fxcor}. Our final spectroscopic catalog is composed of 136 objects; 131 galaxies and 5 stars. \subsubsection{Cluster redshift estimation} \label{sec:cluster_redshifts} The cluster's redshift is estimated with the biweight average estimator of \citet{Beers1990}, starting from the median redshift of all objects with a measured redshift in our sample. The estimated redshift then replaces the median in their equation to produce a new estimate, and this process is iterated 3 times. We select only spectroscopic sources with a peculiar velocity (see below) within $\pm$5000 km s$^{-1}$ of the cluster's estimated redshift, in order to exclude most of the foreground and background objects \citep[e.g.][]{Bosch2013,pranger14}. 
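The iterated biweight redshift, the peculiar velocities, and the biweight velocity dispersion used in this section can be sketched with \textsc{numpy}; a minimal implementation of the \citet{Beers1990} location and the \citet{Ruel2014} scale estimators (function names are illustrative, and the scale uncertainty is taken as $0.92\sigma_{\rm bi}/\sqrt{N-1}$):

```python
import numpy as np

C_KMS = 299792.458  # speed of light, km/s

def peculiar_velocity(z, z_cl):
    """v = c (z - z_cl) / (1 + z_cl), in km/s."""
    return C_KMS * (np.asarray(z, dtype=float) - z_cl) / (1.0 + z_cl)

def biweight_location(z, c_tune=6.0, n_iter=3):
    """Iterated biweight average of Beers et al. (1990): start from the
    median and re-center n_iter times, as described in the text."""
    z = np.asarray(z, dtype=float)
    center = np.median(z)
    for _ in range(n_iter):
        mad = np.median(np.abs(z - center))
        u = (z - center) / (c_tune * mad)
        w = np.abs(u) < 1.0
        center += (np.sum((z[w] - center) * (1.0 - u[w] ** 2) ** 2)
                   / np.sum((1.0 - u[w] ** 2) ** 2))
    return center

def biweight_scale(v):
    """Biweight sample variance of Ruel et al. (2014); returns sigma_bi
    and its uncertainty 0.92 sigma_bi / sqrt(N - 1)."""
    v = np.asarray(v, dtype=float)
    n = v.size
    vbar = np.mean(v)
    u = (v - vbar) / (9.0 * np.median(np.abs(v - np.median(v))))
    w = np.abs(u) < 1.0
    d = np.sum((1.0 - u[w] ** 2) * (1.0 - 5.0 * u[w] ** 2))
    sigma2 = (n * np.sum((1.0 - u[w] ** 2) ** 4 * (v[w] - vbar) ** 2)
              / (d * (d - 1.0)))
    sigma = np.sqrt(sigma2)
    return sigma, 0.92 * sigma / np.sqrt(n - 1.0)
```

The $\pm$5000 km s$^{-1}$ preselection is then simply `np.abs(peculiar_velocity(z, z_cl)) < 5000`.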
We then estimate the velocity dispersion ($\sigma_v$) using the biweight sample variance presented in \citet{Ruel2014}, so that \begin{equation}\label{eq:vdisp} \sigma_{\rm bi}^2 = N \frac{\sum_{|u_i|<1} (1-u_i^2)^4(v_i-\bar{v})^2}{D(D-1)} \end{equation} \begin{equation} D = \sum_{|u_i|<1} (1-u_i^2)(1-5u_i^2) \end{equation} \noindent where the peculiar velocities of the galaxies, $v_i$, and the biweight weights, $u_i$, are estimated as \begin{equation} v_i = \frac{c (z_i - z_{\rm cl})}{1+z_{\rm cl}} \end{equation} \begin{equation} u_i = \frac{v_i - \bar{v}}{9\rm{MAD}(v_i)} \end{equation} \noindent with $c$ being the speed of light, $\rm{MAD}$ the median absolute deviation, and $z_i$ and $z_{\rm cl}$ the redshifts of the galaxies and the biweight estimate of the sample redshift, respectively. The velocity dispersion is then estimated as the square root of $\sigma_{\rm bi}^2$, with its uncertainty estimated as $0.92\sigma_{\rm bi}/\sqrt{ N_{\rm members} - 1}$. To obtain a final redshift for the cluster we use a 3$\sigma$-clipping iteration (with $\sigma=\sigma_v$), obtaining $z_{\rm cl}=0.5803\pm0.0006$, where the error is estimated as the standard error, i.e., the standard deviation over the square root of the number of cluster members. The velocity cut for the selection of the cluster members is discussed below. \subsubsection{Cluster member selection} \label{sec:cluster_member_selection} Observationally, galaxies belonging to a cluster are selected by imposing restrictions on their distance to the center of the cluster and their velocities relative to the BCG. In this section, we study the appropriate cut in the line-of-sight (LoS) projected velocity of the galaxies relative to their BCG using the Illustris TNG300 simulations. Illustris TNG is a suite of cosmological magnetohydrodynamic simulations which aim to study the physical processes that drive galaxy formation \citep{nelson17, pillepich17, springel17, naiman18, marinacci18}. 
The TNG300 is the simulation of the suite with the largest volume, having a side length of $L\sim 250\,h^{-1}$ Mpc. This volume contains $2000^3$ Dark Matter (DM) particles and $2000^3$ baryonic particles. The relatively large size of the simulated box allows us to identify a significant number of massive structures to be analyzed. The mass resolution of TNG300 is $5.9\times 10^7 M_{\odot}$ and $1.1\times 10^7 M_{\odot}$ for the DM and baryonic matter, respectively. Also, the adopted softening length is 1 h$^{-1}$ kpc for the DM particles and 0.25 h$^{-1}$ kpc for the baryonic particles \citep{marinacci18}. From this simulation we select a total of 80 clusters with masses between $4 \times 10^{14}{M_\odot}\leq M_{200}\leq 9 \times 10^{14}{M_\odot}$, located at redshifts $0.1\leq z\leq 1$. Here $M_{200}$ is the mass within a sphere whose mean mass density is 200 times the critical density of the Universe. To ensure that our results are not affected by numerical resolution effects, we only selected subhalos with at least 1000 dark matter particles per galaxy ($M_{\rm DM}\geq 5.9 \times 10^{10}{M_\odot}$) and at least 100 stellar particles ($M_{\rm stellar}\geq 1.1 \times 10^{9}{M_\odot}$). The final set of 80 virialized and perturbed clusters provides a sample of 9163 associated cluster galaxies. The bound substructures were identified using the SUBFIND algorithm \citep{Springel2001}. To stack information from the 80 selected clusters we normalize the velocity distributions using the $\sigma_v - M_{200}$ scaling relation from \citet{Munari2013}. This scaling relation was obtained from a radiative simulation which included both (a) star formation and supernova-triggered feedback, and (b) active galactic nucleus feedback (which they call the AGN-set). 
The equation is described as follows: \begin{equation}\label{eq:mass} \sigma_{\rm 1D} = A_{\rm 1D} \left[ \dfrac{h(z) M_{200}}{10^{15} M_\odot} \right]^{\alpha} \end{equation} \noindent where $\sigma_{\rm 1D}$ is the one-dimensional velocity dispersion and $h(z) = H(z)/100$ km s$^{-1}$ Mpc$^{-1}$. We choose the values $A_{\rm 1D} = 1177 \pm 4.2$ and $\alpha=0.364 \pm 0.0021$, obtained using galaxies associated to subhaloes in the AGN-set simulation \citep{Munari2013}. To find the intrinsic LoS velocity distribution of a simulated cluster with mass $M_{200}=5 \times 10^{14}{M_\odot}$ at a given redshift of $z=0.6$, we proceed as follows. We first fit the projected 1D velocity distribution of the cluster galaxies relative to the BCG using a Gaussian distribution with mean $\mu_0$ and dispersion $\sigma_0$. Then, using Eq.~\ref{eq:mass}, we compute the value of the 1D velocity dispersion $\sigma_1$ that the cluster would have if it had a mass of $M_{200}=5 \times 10^{14}{M_\odot}$. Next, we obtain the 1D velocity of each galaxy, normalized by mass and redshift, using Eq.~\ref{transformation}. Finally, we obtain the LoS velocities by applying 200 different randomized rotations to each cluster, \begin{equation} v'=\sigma_1 \left( \frac{v-\mu_0}{\sigma_0} +\mu_0\right). \label{transformation} \end{equation} Figure \ref{histogram} presents the histogram of the stacked LoS velocities for the galaxies in the different projections (blue histogram), the best-fit normal distribution (red dashed line), and the confidence intervals as red shaded areas. We conclude that, for a theoretical cluster of mass $M_{200}=7.64 \times 10^{14}{M_\odot}$, the LoS velocities are normally distributed with a dispersion of $\sigma_v$= 960 km s$^{-1}$. This means that $95 \%$ of the galaxies belonging to this cluster would have LoS velocities lower than 1920 km s$^{-1}$, and $99\% $ of them have LoS velocities lower than 2900 km s$^{-1}$. 
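The scaling relation and the velocity normalization above can be sketched as follows; a minimal example with illustrative names, reproducing the transformation equation exactly as written in the text (including its $+\mu_0$ term):

```python
import numpy as np

A_1D, ALPHA = 1177.0, 0.364  # AGN-set fit of Munari et al. (2013)

def sigma_1d(m200_msun, hz):
    """Munari et al. (2013): sigma_1D = A_1D [h(z) M200 / 1e15 Msun]^alpha,
    with hz = H(z) / (100 km/s/Mpc); returns km/s."""
    return A_1D * (hz * m200_msun / 1e15) ** ALPHA

def normalize_velocities(v, m200_target_msun, hz, mu0=None, sigma0=None):
    """Rescale 1D galaxy velocities (relative to the BCG) to those of a
    reference cluster of mass m200_target_msun; mu0 and sigma0 are the
    Gaussian-fit moments of the input distribution, defaulting to the
    sample mean and standard deviation."""
    v = np.asarray(v, dtype=float)
    mu0 = np.mean(v) if mu0 is None else mu0
    sigma0 = np.std(v) if sigma0 is None else sigma0
    s1 = sigma_1d(m200_target_msun, hz)
    return s1 * ((v - mu0) / sigma0 + mu0)
```

With $\mu_0$ close to zero (velocities measured relative to the BCG), the transformation reduces to a simple rescaling by $\sigma_1/\sigma_0$.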
In what follows we adopt a cut of 3,000 km s$^{-1}$. \begin{figure} \begin{center} \includegraphics[width=0.9\linewidth]{plots/vel_distribution_1D.png} \vskip-0.13in \caption{Histogram of the LoS satellite velocity distribution for a cluster with mass $M_{200}=7.64 \times 10^{14}{M_\odot}$ at redshift $z=0.6$, with the fitted normal distribution in red and the confidence intervals in light red. } \label{histogram} \end{center} \end{figure} Applying the $\pm$3,000 km s$^{-1}$ cut we obtain a total of \NgalallMUSEGMOS{} cluster redshifts, including 25{} members from cube 1, 21{} from cube 2, 11{} from cube 3, 22{} from cube 4 and 8 from the GMOS data. \begin{figure} \centering \includegraphics[width=\linewidth]{plots/0307_redshift_histogram_correct.pdf} \vskip-0.13in \caption{Redshift distribution of spectroscopic sources with reliable measurements from \textsc{MARZ} and \textsc{fxcor}. Hashed red bars represent the region within $\pm$3000 km s$^{-1}$ in peculiar velocity of the cluster's redshift. The histogram inset on the top left shows the distribution of galaxies within this velocity range, where the black dashed and dotted lines represent the cuts at $\pm$3000 km s$^{-1}$ and the velocity of the BCG, respectively.} \label{fig:redshift_distribution} \end{figure} \subsubsection{Summary of spectroscopic catalog} \label{sec:spec_summary} In total, we obtain 87 galaxies with spectroscopic redshifts for SPT-CLJ 0307-6225. Of those, 79 come from the 1D MUSE objects of $\S$\ref{sec:reductions} and 8 from the GMOS archival spectroscopic data \citep{bayliss16}. The final redshift, estimated with the biweight average estimator, is $z_{\rm cl} = 0.5803 \pm 0.0006$. The final galaxy cluster redshift distribution is shown in Fig.~\ref{fig:redshift_distribution}. The inset shows the peculiar velocities of the selected galaxies, with the black dashed lines denoting the velocity cut and the black dotted line marking the velocity of the BCG. 
The velocity dispersion for the cluster, estimated following Eq.~\ref{eq:vdisp}, is $\sigma_v = 1093\pm108$ km s$^{-1}$. \subsubsection{Completeness of MUSE catalog} \label{sec:completeness} Since our aim is to look at the properties of the galaxy population, we first need to characterise a limiting magnitude to define that population. Fig.~\ref{fig:cmd} shows that the population of spectroscopic RS galaxies stops at $i_{\rm auto} \approx 22.8$, with blue galaxies going as deep as $i_{\rm auto} \approx 23.3$. To determine the limiting magnitude, we compare, in magnitude bins, our red-sequence catalog inside the cube footprints with the spectroscopically confirmed galaxies, checking the confirmed fraction within each bin. This check allows us to (1) validate our method for selecting RCS members, which will become important when looking for substructures (see $\S$\ref{sec:subcluster}), and (2) look for potential cluster members not found by \textsc{MARZ}. In Fig.~\ref{fig:completeness} we show the estimated completeness within magnitude bins of 0.5 mag. The $y$-axis shows the ratio of spectroscopically confirmed red sequence cluster galaxies to the total number of red-sequence galaxies (photometrically selected + spectroscopically confirmed) within magnitude bins, while N$_{\rm red}$ on the top $x$-axis shows the number of red galaxies within a bin. The dashed lines show the limits for the regions with magnitudes $i_{\rm auto}<m^*$, $m^*+1$, $m^*+2$, $m^*+3$, with the completeness of each luminosity range written to the left of each dashed line. The one ``missing'' galaxy at $i_{\rm auto}<m^*$ is at $z = 0.611$ ($\Delta v=5,940$ km s$^{-1}$), while the two missing galaxies near $i_{\rm auto}<m^* +1$ correspond to spectroscopically confirmed background galaxies at $z=0.612$ and $z=0.716$ ($\Delta v=6,130$ km s$^{-1}$ and $\Delta v=25,867$ km s$^{-1}$, respectively). 
The latter showed properties similar to those of the cluster galaxies: size, visual color, and spatial proximity to the BCG. Its $r-i$ color index also fell within (towards the higher end of) the rather generous width used for our RCS selection. At $i_{\rm auto}\geq m^* +2$, galaxies appear to belong to the cluster, but do not show strong spectral features with which we can estimate the redshift accurately. \begin{figure} \centering \includegraphics[width=\linewidth]{plots/completeness_all.pdf} \vskip-0.13in \caption{Ratio of the spectroscopically confirmed members to the red galaxies from our catalog in different magnitude bins. The top axis shows the number of red galaxies per magnitude bin. The dashed lines denote the limits for $m^*$, $m^*+1$, $m^*+2$ and $m^*+3$, with the percentages being the accumulated completeness for a given limit.} \label{fig:completeness} \end{figure} We therefore apply a cut of $i_{\rm auto} < m^* +2$ (over 80\% completeness) in the analysis of the galaxy population for all the galaxies in our spectroscopic sample. \subsubsection{Spectral classification} \label{sec:spectral_class} To understand whether the merger is playing a role in the star formation activity of the galaxies, we make use of two measurements: the equivalent widths (EW) of the [OII] $\lambda$3727 \AA\ and H$\delta$ lines. [OII] $\lambda$3727 \AA\ traces recent star formation activity on timescales $\leq$10 Myr, while the Balmer line H$\delta$ traces timescales between 50 Myr and 1 Gyr \citep{Paulino2019}. A strong H$\delta$ absorption line is interpreted as evidence of an explosive episode of star formation which ended between 0.5 and 1.5 Gyr ago \citep{Dressler1983}. To measure the equivalent widths of [OII] $\lambda$3727 \AA, EW(OII), and H$\delta$, EW(H$\delta$), the flux spectrum of each object is integrated over the ranges described by \citet{balogh99} using the \textsc{IRAF} task \textsc{sbands}. 
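The band-integration behind such an equivalent-width measurement can be sketched in \textsc{numpy}; a simplified stand-in for the \textsc{sbands} task, where the continuum is interpolated linearly between two flanking bands. The band limits are arguments, and the \citet{balogh99} windows should be supplied by the reader (any limits appearing in examples are placeholders, not the published definitions):

```python
import numpy as np

def equivalent_width(wave, flux, line_band, blue_cont, red_cont):
    """Equivalent width by direct integration: the continuum is a
    straight line through the mean flux of the two flanking bands, and
    EW = integral of (1 - F/F_cont) over the line band (positive for
    absorption with this sign convention). Band limits are (lo, hi)
    tuples in Angstroms."""
    def band_mean(lo, hi):
        sel = (wave >= lo) & (wave <= hi)
        return np.mean(wave[sel]), np.mean(flux[sel])
    wb, fb = band_mean(*blue_cont)
    wr, fr = band_mean(*red_cont)
    slope = (fr - fb) / (wr - wb)
    sel = (wave >= line_band[0]) & (wave <= line_band[1])
    integrand = 1.0 - flux[sel] / (fb + slope * (wave[sel] - wb))
    # trapezoidal integration over the line band
    return float(np.sum((integrand[1:] + integrand[:-1]) / 2.0
                        * np.diff(wave[sel])))
```

For observed-frame spectra the result should additionally be divided by $(1+z)$ to obtain a rest-frame EW.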
Also, we only make use of MUSE galaxies, excluding the 8 added GMOS galaxies, given that the MUSE selection is unbiased. We do not expect this to change our main results since these galaxies are not located along the merger axis. We use the same scheme defined by \citet{balogh99} to classify our galaxies into different categories: passive, star forming (SF), short-starburst (SSB), post-starburst \citep[PSB, K+A in][]{balogh99} and A+em (which could be dusty star-forming galaxies). For this classification we only take into account galaxies with $i_{\rm auto} < m^* +2$ and a signal-to-noise ratio SNR $>3$ (62 galaxies), given that galaxies with low SNR can affect the measurement of lines in crowded spectral regions, such as around the [OII] $\lambda$3727 \AA\ line \citep{Paccagnella2019}. The median SNR of our MUSE galaxies is 12.0 for sources with a magnitude $i_{\rm{auto}} < m^*$, 7.8 for sources with $m^* \leq i_{\rm{auto}} < m^*+1$, 4.0 for sources with $m^*+1 \leq i_{\rm{auto}} < m^*+2$, and 2.3 for sources with $i_{\rm{auto}} \ge m^*+2$. We estimate the SNR over the entire spectral range of our data using the \textsc{der\_snr} algorithm \citep{Stoehr2007}. The results of this classification will be further discussed in $\S$\ref{sec:galpopulation}. \subsection{Galaxies association} \label{sec:subcluster} Depending on the stage of the merging event, it may be possible to determine which are the main colliding structures and which galaxies belong to each of them. Several techniques are available to estimate the level of substructure in galaxy clusters using galaxy velocities. One of the most common is to analyze the galaxy velocity distribution in one dimension, under the assumption that for a relaxed cluster it should be close to a Gaussian shape \citep{Menci1996, ribeiro13}. 
\citet{Hou2009} used Monte Carlo simulations to show that the Anderson-Darling (AD) test is among the most powerful for classifying Gaussian (G) and non-Gaussian (NG) clusters, which is why it has been widely used in astronomy with different separation criteria \citep[e.g.][]{Hou2009,ribeiro13,Nurgaliev2017, Lopes2018}. \citet{Hou2009} estimate an $\alpha$ value (the significance level of the statistic) to separate G and NG clusters (see Eq. 17 in their paper), where $\alpha<0.05$ indicates a NG distribution. \citet{Nurgaliev2017} use the p-value of the statistic (p$_{\rm AD}$) and separate the clusters using p$_{\rm AD} < 0.05/n$ for NG clusters, where $n$ indicates the number of tests being conducted. \citet{Roberts2018} also use the p-value, with p$_{\rm AD} < 0.1$ indicating a NG cluster. We divide our data into 4 subsets for the application of the AD test: Cubes 2 and 3 for the middle overdensity, Cubes 1 and 4 to compare the two most overdense regions, all the data cubes, and all the data cubes plus the GMOS data. To test for 3D substructures (using the velocities and the on-sky positions), we use the Dressler-Shectman test \citep[DS-test,][]{dressler88}, which combines the on-sky coordinates with the velocity information and can be used to trace perturbed structures \citep[e.g.][]{pranger14, Olave2018}. The DS-test uses the velocity information of the closest (projected) neighbors of each galaxy to estimate a $\Delta$ statistic, which is given by \begin{equation} \Delta = \sum^{N_{\rm tot}}_i \delta_i, \end{equation} \noindent where $N_{\rm tot}$ corresponds to the total number of members of the cluster and \begin{equation} \delta^2 = \frac{N+1}{\sigma^2_{\rm cl}} \left[ (\bar{v}_{\rm loc}-\bar{v}_{\rm cl})^2 + (\sigma_{\rm loc} - \sigma_{\rm cl})^2 \right], \end{equation} \noindent where $\delta$ is estimated for each galaxy. 
$N$ corresponds to the number of neighbors of each galaxy used to estimate the statistic, taken as $N = \sqrt{N_{\rm tot}}$ \citep{Pinkney1996}, $\sigma_{\rm cl}$ and $\sigma_{\rm loc}$ correspond to the velocity dispersion of the whole cluster and the velocity dispersion of the $N$ neighbors, respectively, and $\bar{v}_{\rm cl}$ and $\bar{v}_{\rm loc}$ correspond to the mean peculiar velocity of the cluster and the mean peculiar velocity of the $N$ neighbors, respectively. A value of $\Delta/N_{\rm tot} \leq 1$ implies that there are no substructures in the cluster. To calibrate our DS-test results, we perform $10^4$ Monte Carlo simulations by shuffling the velocities, i.e., randomly interchanging the velocities among the galaxies while maintaining their sky coordinates (so that the neighbors are always the same). The p-value of the statistic (p$_{\Delta}$) is estimated by counting how many times the simulated $\Delta$ is higher than that of the original sample and dividing the result by the total number of simulations. Choosing p$_{\Delta} < 0.05$ ensures a low probability of false identification \citep{Hou2012}; below this value the distribution is considered non-random. Both AD- and DS-test results are shown in Table~\ref{tab:tests} below. When velocities are not available, or the velocity difference between the clusters is small, another common practice is to use the sky positions of the galaxies and build surface density maps to look for substructures \citep[see, e.g., ][]{White2015, Monteiro2017, Monteiro2018, Monteiro2020, Yoon2019}. The galaxy surface density map at the top right of Fig.~\ref{fig:rgb_image} implies that there are at least two colliding structures. To obtain the density map we use the RCS galaxy catalog and the \textsc{sklearn.neighbors.KernelDensity} python module, applying a Gaussian kernel with a bandwidth of 50 kpc. 
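The DS-test with its Monte Carlo calibration can be sketched as follows; a minimal \textsc{numpy} implementation (illustrative names) assuming arrays of projected positions and peculiar velocities:

```python
import numpy as np

def ds_test(x, y, v, n_shuffle=10_000, seed=None):
    """Dressler-Shectman test: for each galaxy, compare the mean
    velocity and dispersion of its N = sqrt(N_tot) nearest projected
    neighbours (the galaxy itself included, hence N + 1 objects) with
    the global values; the p-value comes from shuffling the velocities
    over the fixed sky positions. Returns (Delta / N_tot, p_Delta)."""
    rng = np.random.default_rng(seed)
    x, y, v = (np.asarray(a, dtype=float) for a in (x, y, v))
    n_tot = v.size
    n_nb = int(round(np.sqrt(n_tot)))
    v_cl, s_cl = np.mean(v), np.std(v, ddof=1)
    # pairwise projected distances; each row's argsort gives neighbours
    d2 = (x[:, None] - x[None, :]) ** 2 + (y[:, None] - y[None, :]) ** 2
    nb = np.argsort(d2, axis=1)[:, : n_nb + 1]  # self + N neighbours

    def delta_sum(vel):
        v_loc = np.mean(vel[nb], axis=1)
        s_loc = np.std(vel[nb], axis=1, ddof=1)
        delta = np.sqrt((n_nb + 1) / s_cl ** 2
                        * ((v_loc - v_cl) ** 2 + (s_loc - s_cl) ** 2))
        return np.sum(delta)

    d_obs = delta_sum(v)
    d_sim = np.array([delta_sum(rng.permutation(v))
                      for _ in range(n_shuffle)])
    return d_obs / n_tot, float(np.mean(d_sim > d_obs))
```

For a cluster with no substructure, $\Delta/N_{\rm tot}$ is expected to be close to 1 and p$_{\Delta}$ well above 0.05.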
\subsection{X-ray morphology} \label{subsec:xray} An image in the 0.5--4.0\,keV bandpass was extracted and adaptively smoothed using \textsc{csmooth}\footnote{\url{https://cxc.harvard.edu/ciao/ahelp/csmooth.html}}. This smoothed image, shown as orange contours in Fig.~\ref{fig:rgb_image}, reveals a highly asymmetric X-ray morphology, with a bright, dense core offset from the large-scale centroid by $\sim$1$^{\prime}$ ($\sim$400\,kpc). \cite{Nurgaliev2017} used these same data to estimate the X-ray asymmetry of this system, finding it to be the second\footnote{In \citet{zenteno2020}, the most asymmetric system, SPT-CLJ2332-5053, was said to be a cluster in a pre-merger state with a close companion, which would contaminate the estimated asymmetry index. Excluding SPT-CLJ2332-5053 would make SPT-CLJ0307-6225 the most asymmetric system in the sample.} most asymmetric system in the full SPT-Chandra sample, with an X-ray morphology as disturbed as that of El Gordo, a well-known major merger \citep[][]{williamson11,Menanteau2012}. \section{Results} \label{sec:results} \subsection{Cluster substructures} \label{sec:dynamics} In Table \ref{tab:tests} we show the results of both the AD-test and the DS-test applied to different subsets. The second column corresponds to the number of spectroscopic galaxies belonging to a given subsample. The subset which gives the smallest p-values for both the AD-test and the DS-test is the Cubes 1+4 subset, with these cubes located on top of the two density peaks, also enclosing the area next to the two brightest galaxies (see Fig.~\ref{fig:rgb_image}). We find that neither the AD-test nor the DS-test provides evidence of substructure. Applying a 3$\sigma$-clipping iteration to the samples does not change the results. These results, along with the X-ray morphology, show no evidence of substructure along the line of sight and rather support a merger in the plane of the sky; we therefore examine the spatial distribution of the galaxies. 
\begin{table}\caption{Results for the substructure-identification tests applied to different subsamples.}\label{tab:tests} \centering \begin{tabular}{l r r r r r} \hline \hline Subsample&N&\multicolumn{2}{c}{AD-test}&\multicolumn{2}{c}{DS-test}\\ &&$\alpha$&P-value&$\Delta/N_{\rm{tot}}$&P-value\\ \hline Cubes 2+3 & 32 & 0.264 & 0.674 & 0.967 & 0.421 \\ Cubes 1+4 & 48 & 0.383 & 0.383 & 1.329 & 0.097 \\ All Cubes & 79 & 0.234 & 0.789 & 1.205 & 0.138 \\ MUSE+GMOS & \NgalallMUSEGMOS\ & 0.272 & 0.662 & 1.203 & 0.152 \\ \hline \end{tabular} \end{table} In Fig.~\ref{fig:mpcdensitymap} we show the contours of the unweighted and flux-weighted density maps (top and bottom panels, respectively) of the RCS galaxies. The contour levels begin at 100 gal Mpc$^{-2}$ and increase in intervals of 50 gal Mpc$^{-2}$. Dots correspond to galaxies from our spectroscopic samples. In these figures, regardless of whether they are weighted or unweighted, the cores of the two main structures, with their corresponding BCGs, can be seen, along with a high density of galaxies in between them. \begin{figure} \centering \includegraphics[width=\linewidth]{plots/0307_density_100gal_zoomin4_unweighted_weighted_flux_v3.pdf} \vskip-0.13in \caption{Unweighted (top) and flux-weighted (bottom) numerical density maps of the RCS galaxies (photometric and spectroscopic), shown in black contours, where levels begin at 100 galaxies per Mpc$^{2}$ and the flux was estimated from the $i$ band. Galaxies not close to the density levels or classified as not being part of any structure by the \texttt{DBSCAN} algorithm are shown as black dots, while dots in different substructures according to the algorithm are shown with different colors: 0307-6225N (red), 0307-6225S (orange) and an in-between overdensity (green).} \label{fig:mpcdensitymap} \end{figure} For the definition of the substructures we take into account only spectroscopic members within (or near) the limits of our density contours.
To distinguish the galaxies with a higher probability of being part of each structure we use the Density-Based Spatial Clustering of Applications with Noise \citep[\texttt{DBSCAN},][]{Ester1996} algorithm. The advantage of using this algorithm is that galaxies are not necessarily assigned to a given group, leaving some of them out. We use a \textsc{python}-based application of this algorithm, following the work of \citet[][substructure defined as at least three neighbouring galaxies within a separation of $\sim$140 kpc]{Olave2018}. The different structures found are shown with different coloured dots in Fig.~\ref{fig:mpcdensitymap}. Black dots represent galaxies that either were too far from our density contours or were discarded by the \texttt{DBSCAN} algorithm. We name the two most prominent structures, defined by \texttt{DBSCAN}, as 0307-6225N (red dots) and 0307-6225S (orange dots), comprising 23 and 25 members, respectively. The BCGs for 0307-6225S and 0307-6225N are marked in Table~\ref{tab:all_objs_properties} by the superscripts $S_1$ and $N$, respectively. Both structures show a Gaussian velocity distribution when applying the AD test, and the distance between them is $\sim$1.10 Mpc measured between their BCGs and $\sim$1.15 Mpc between the peaks of the density distributions. Regarding the in-between overdensity, with 19 galaxies (green dots in Fig.~\ref{fig:mpcdensitymap}), we chose to discard it as an actual structure given that (1) unlike the other two structures, it does not have a massive dominant galaxy and (2) the estimated velocity dispersion is $\sigma_v=1400$ km s$^{-1}$, which translates to an unlikely mass of $1.7\times10^{15}$ M$_\odot$ (see $\S$\ref{subsec:dmass}). We come back to this overdensity in $\S$~\ref{sec:dis_recoverymh}.
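A minimal sketch of this substructure assignment uses \textsc{sklearn}'s \texttt{DBSCAN} with the \citet{Olave2018} criterion (at least three neighbouring galaxies within $\sim$140 kpc); the positions below are synthetic stand-ins, not our catalog:

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(2)
# Hypothetical projected positions (kpc): two compact groups plus a
# sparse uniform field, mimicking N, S, and unassigned galaxies.
xy = np.vstack([rng.normal([0.0, 0.0], 60.0, size=(25, 2)),
                rng.normal([0.0, 1100.0], 60.0, size=(23, 2)),
                rng.uniform(-2000.0, 2000.0, size=(30, 2))])

# eps ~ 140 kpc linking length, at least 3 galaxies per substructure.
labels = DBSCAN(eps=140.0, min_samples=3).fit_predict(xy)

# Label -1 marks galaxies assigned to no substructure (cf. black dots).
n_groups = len(set(labels) - {-1})
```

The noise label $-1$ is what allows some galaxies to remain unassigned, the property exploited in the text.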
\subsection{Cluster dynamical mass}\label{subsec:dmass} We estimate the masses using the \citet{Munari2013} scaling relations between the mass and the velocity dispersion of the cluster (see Eq.~\ref{eq:mass}). The Gaussian velocity distribution, together with the large separation between the centers of both structures ($\sim$1.1 Mpc between the BCGs) and the fact that the velocity difference between them is $\Delta v_{N-S} = 342$ km s$^{-1}$ (in the cluster's frame of reference), strongly suggests a plane-of-the-sky merger \citep[see, e.g.][]{Dawson2015, Mahler2020} and could therefore imply that the overestimation of the masses from scaling relations is minimal \citep{Dawson2015}. We further explore this in $\S$\ref{sec:mass_overstimation_merging}. In order to minimize the possible overestimation from using scaling relations, we only use RCS spectroscopic galaxies to estimate $\sigma_v$, since in clusters with a high accretion rate, blue galaxies tend to raise the value of the velocity dispersion \citep{Zhang2012}. In Table \ref{tab:substructures} we show the properties of the two substructures. It can be seen that the two structures have similar masses, with the most probable ratio being $M_{\rm S}/M_{\rm N}\approx 1.3$, albeit with large uncertainties. Galaxies selected for the dynamical mass estimation are likely to belong to the core regions of the two clusters. Galaxies in these regions are expected to be virialized and should more closely follow the gravitational potential of the clusters during a collision, giving a better estimation of the masses when using the velocity dispersion. \begin{table} \caption{Substructure properties} \label{tab:substructures} \centering \resizebox{\linewidth}{!}{% \begin{tabular}{c c c c c c c} \hline \hline Structure & R.A. & Dec.
& $z$ & $\sigma_{v}$ & M$_{200,\rm{dyn}}$ & N$_{\rm Members}$ \\ 0307-6225 & (J2000) & (J2000) & & km s$^{-1}$ & $\times$10$^{14}$ M$_\odot$ & \\ \hline S & 46.8198 & -62.4465 & 0.5792 $\pm$ 0.0003 & 753 $\pm$ 163 & 3.13 $\pm$ 1.87 & 25\\ N & 46.8494 & -62.4028 & 0.5810 $\pm$ 0.0002 & 686 $\pm$ 145 & 2.42 $\pm$ 1.40 & 23\\ \hline \end{tabular} } \end{table} \subsection{Cluster merger orbit}\label{sec:history} To understand the merging event, we use the Monte Carlo Merger Analysis Code \citep[\texttt{MCMAC},][]{Dawson2013}, which analyzes the dynamics of the merger and outputs its kinematic parameters. The model assumes a two-body collision of two spherically symmetric halos with an NFW profile \citep[][]{NFW0, NFW1}, where the total energy is conserved and the impact parameter is assumed to be zero. The different parameters are estimated from the Monte Carlo analysis by randomly drawing from the probability density functions of the inputs. The inputs required for each substructure are the redshift and the mass, with their respective errors, along with the distance between the structures and the errors on their positions. We use the values shown in Table \ref{tab:substructures} as our inputs, where the errors for the redshifts are estimated as the standard error, while the errors for the distance are given by the distances between the BCGs and the peak of the density distribution of each structure (0.144' and 0.017' for 0307-6225N and 0307-6225S, respectively). The results are obtained by sampling through 10$^5$ Monte Carlo iterations, and are shown and described in Table \ref{tab:mcmac}, with the errors corresponding to the 1$\sigma$ level. \begin{table} \caption{Output from the \texttt{MCMAC} code, with the priors from Table \ref{tab:substructures}. Errors correspond to the $1\sigma$ level.} \label{tab:mcmac} \centering \begin{tabular}{l r c l} \hline \hline Param.
& Median & Unit & Description\\ \hline \vspace{3.5pt} $\alpha$ & 39$^{+13}_{-11}$ & deg & Merger axis angle\\ \vspace{3.5pt} $d3D_{\rm obs}$ & 1.29$^{+0.32}_{-0.15}$ & Mpc & \footnotesize{3D distance of the halos at T$_{\rm{obs}}$.}\\ \vspace{3.5pt} $d3D_{\rm{max}}$ & 1.72$^{+0.44}_{-0.22}$ & Mpc & \footnotesize{3D distance of the halos at apoapsis.}\\ \vspace{3.5pt} $v3D_{\rm{col}}$ & 2300$^{+122}_{-96}$ & km/s & \footnotesize{3D velocity at collision time.}\\ \vspace{3.5pt} $v3D_{\rm{obs}}$ & 547$^{+185}_{-103}$ & km/s & \footnotesize{3D velocity at T$_{\rm{obs}}$.}\\ \vspace{3.5pt} $v_{\rm{rad}}$ & 339$^{+28}_{-28}$ & km/s & \footnotesize{Radial velocity of the halos at T$_{\rm{obs}}$.}\\ \vspace{3.5pt} TSP0 & 0.96$^{+0.31}_{-0.18}$ & Gyr & \footnotesize{TSP for outgoing system.}\\ \vspace{3.5pt} TSP1 & 2.60$^{+1.07}_{-0.53}$ & Gyr & \footnotesize{TSP for incoming system.}\\ \hline \end{tabular} \end{table} \texttt{MCMAC} gives as outputs the merger axis angle $\alpha$, the estimated distances and velocities at different times, and two possible current stages of the merger: outgoing after first pericentric passage and incoming after reaching apoapsis. The times since pericentric passage (TSP) for the two possible scenarios are denoted TSP0 for the outgoing scenario and TSP1 for the incoming one. These last two estimates are the ones that we discuss further when recovering the merger orbit of the system. To further constrain the stage of the merger we compare the observational features with simulations. We use the Galaxy Cluster Merger Catalog \citep{ZuHone2018}\footnote{\url{http://gcmc.hub.yt/simulations.html}}, in particular the ``A Parameter Space Exploration of Galaxy Cluster Mergers'' simulation \citep{ZuHone2011}, which consists of an adaptive mesh refinement grid-based hydrodynamical simulation of a binary collision between two galaxy clusters, with a box size of 14.26 Mpc.
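The Monte Carlo input sampling of \texttt{MCMAC} can be illustrated with a toy sketch (not the actual code): draw the inputs from Gaussians matching Table~\ref{tab:substructures} and propagate them to, e.g., the relative line-of-sight velocity of the two halos:

```python
import numpy as np

C_KMS = 299_792.458   # speed of light, km/s
rng = np.random.default_rng(3)
n = 100_000

# Redshifts drawn from Gaussians matching the substructure table (the
# full code also draws the masses and projected separation; omitted here).
z_s = rng.normal(0.5792, 0.0003, n)
z_n = rng.normal(0.5810, 0.0002, n)

# Relative line-of-sight velocity in the mean cluster frame.
z_cl = 0.5801
v_rad = C_KMS * (z_n - z_s) / (1 + z_cl)

# 16th/50th/84th percentiles give the median and a 1-sigma interval.
p16, p50, p84 = np.percentile(v_rad, [16, 50, 84])
```

The realizations propagate the input uncertainties into the output percentiles; the actual \texttt{MCMAC} posteriors additionally fold in the two-body dynamics.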
The binary merger initial configuration separates the two clusters by a distance on the order of the sum of their virial radii, with their gas profiles in hydrostatic equilibrium. With this simulation one can explore the properties of a collision of clusters with a mass ratio of 1:1, 1:3 and 1:10, where the mass of the primary cluster is $M_{200}=6\times10^{14}$ M$_\odot$, similar to the SZ-derived mass of $M_{200}=7.63\times 10^{14}$ $h^{-1}_{70}$ M$_\odot$ for SPT-CLJ 0307-6225 \citep{bleem15b}, and with different impact parameters ($b = 0, 500, 1000$ kpc). We use both the 1:3 and the 1:1 merger mass ratios. Since we cannot constrain the impact parameter, we use all of them and study their differences; for example, the larger the impact parameter, the longer it takes for the merging clusters to reach the apoapsis. We also note that for our analysis we use a projection on the $z$-axis, since the evidence suggests a collision taking place in the plane of the sky. \subsubsection{Determining TSP0 and TSP1 from the simulations} To determine the collision time, we use the dark matter distribution of both objects, focusing on the distance between their density cusps at different snapshots. To determine the snapshots for an outgoing and an incoming scenario, which would be the closest to what we see in our system, we look for the snapshot where the separation between the peaks is similar to the projected distance between our BCGs ($\sim$1.10 Mpc). In Table \ref{tab:simulation_prop} we show the results for the different impact parameters, where the second column indicates the mass ratio. The third column shows the simulation time where the distance between the two halos is minimal (pericentric passage time). The errors are the temporal resolution of the simulation at the chosen snapshot.
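The snapshot selection just described can be sketched as follows: given snapshot times and projected peak separations, pick the outgoing and incoming times whose separation is closest to the observed $\sim$1.10 Mpc BCG distance (a simplified version of our procedure):

```python
import numpy as np

def tsp_from_snapshots(t, sep, d_obs=1.10):
    """Times since pericentric passage at which the projected separation
    of the two dark-matter peaks matches the observed BCG distance d_obs.

    t, sep : snapshot times (Gyr) and peak separations (Mpc).
    Returns (TSP0, TSP1) for the outgoing and incoming branches; a branch
    that never reaches d_obs yields None (cf. the b = 0 rows of the table).
    """
    t_col = t[np.argmin(sep)]                      # pericentric passage
    after = t > t_col
    t_apo = t[after][np.argmax(sep[after])]        # first apoapsis
    branches = (after & (t <= t_apo), t > t_apo)   # outgoing, incoming
    tsp = []
    for mask in branches:
        if mask.any() and sep[mask].max() >= d_obs:
            i = np.argmin(np.abs(sep[mask] - d_obs))
            tsp.append(t[mask][i] - t_col)
        else:
            tsp.append(None)
    return tuple(tsp)
```

Averaging over two nearly equidistant snapshots, as done in the text, is omitted here for brevity.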
Following the previous nomenclature, the fourth column, TSP0$_{\rm sim}$, corresponds to the amount of time from the first pericentric passage (minimum approach), while the fifth column, TSP1$_{\rm sim}$, corresponds to the time from the pericentric passage through the first turnaround, heading towards the second passage. Times are either the snapshot time or an average between two snapshots if the estimated separations are nearly equally close to the $\sim$1.10 Mpc distance. For $b=0$ kpc, the maximum achieved distance between the two dark matter halos in the 1:3 mass ratio simulation was 1.05 Mpc, while for the 1:1 mass ratio it was 0.99 Mpc, meaning that we cannot separate the two scenarios by comparing with the projected distance between 0307-6225N and 0307-6225S. \begin{table} \caption{Estimated collision times and times since collision for the simulations with different impact parameters and mass ratios.} \label{tab:simulation_prop} \centering \begin{threeparttable} \begin{tabular}{r c c c c} \hline \hline $b$ & Mass ratio & Collision time & TSP0$_{\rm sim}$ & TSP1$_{\rm sim}$ \\ kpc & & Gyr & Gyr & Gyr \\ \hline 0 & 1:3 & 1.22 $\pm$ 0.02 & 0.78 $\pm$ 0.20 & -\\ 500 & 1:3 & 1.24 $\pm$ 0.02 & 0.66 $\pm$ 0.20 & 0.96 $\pm$ 0.20 \\ 1000 & 1:3 & 1.34 $\pm$ 0.02 & 0.56 $\pm$ 0.20 & 1.46 $\pm$ 0.20 \\ 0 & 1:1 & 1.32 $\pm$ 0.02 & 0.68 $\pm$ 0.20 & -\\ 500 & 1:1 & 1.34 $\pm$ 0.02 & 0.46 $\pm$ 0.20 & - \\ 1000 & 1:1 & 1.40 $\pm$ 0.02 & 0.80 $\pm$ 0.20 & 1.00 $\pm$ 0.20 \\ \hline \end{tabular} \begin{tablenotes} \small \item {\bf Notes.} No TSP1 value is provided when we cannot separate between the outgoing and incoming scenarios by requiring a distance of $\sim$1.1 Mpc. \end{tablenotes} \end{threeparttable} \end{table} In Fig.~\ref{fig:density_simulation} we show the density contours of the galaxies from the simulation with mass ratio 1:3 and $b=1000$ kpc as an example, where the contours were estimated as described in $\S$\ref{sec:subcluster}.
The density contours at T = 1.9 Gyr and T = 2.7 Gyr are shown in the top (outgoing scenario) and bottom (incoming scenario) panels, respectively, where T is the time since the beginning of the simulation. Dots are from our spectroscopic sample, where the colors are the same as in Fig.~\ref{fig:mpcdensitymap}, with the red contours being the unweighted RCS galaxies numerical density map from the same figure. It is worth noting that, although the density contours from the simulations and the galaxies from our observations do seem to be well correlated, the simulations (and therefore the density contours) were not influenced in any way by our observations. The only manipulation of the contours is a rotation and translation of the simulation coordinate system so that they match the position of the galaxies from 0307-6225S. The results shown in Table \ref{tab:simulation_prop} suggest that the estimate of TSP1 by \texttt{MCMAC} is too large, giving preference to the outgoing scenario. We further discuss this in $\S$\ref{sec:constraining_tsc_with_sims}. \begin{figure} \centering \includegraphics[width=\linewidth]{plots/0307_density_simulation_v4_final_v5.pdf} \vskip-0.13in \caption{Density contours similar to the ones shown in Fig.~\ref{fig:mpcdensitymap}, but drawn from galaxies from the merger simulation (1:3 mass ratio and b=1000 kpc) at t=1.9 Gyr and t=2.7 Gyr (top and bottom panels, respectively) since the beginning of the simulation. The coordinates of the density maps were rotated and translated in order to be comparable with the position of the galaxies (dots) from SPT-CLJ0307-6225.
For comparison, the red contours show the SPT-CLJ0307-6225 unweighted density map from Fig.~\ref{fig:mpcdensitymap}, with dots being the spectroscopic galaxies following the same color scheme.} \label{fig:density_simulation} \end{figure} \subsubsection{X--ray morphology} The hydrodynamical simulations render a gas distribution that can be directly compared to the observations. Fig.~\ref{fig:sim_TSP0_xray} shows the snapshots of the outgoing scenario, while Fig.~\ref{fig:sim_TSP1_xray} shows the snapshots of the incoming scenario, where the X--ray projected emission is overplotted as blue contours on top of the projected total density, for the simulation snapshots close to the derived TSP (Table \ref{tab:simulation_prop}), with the simulation time shown at the bottom left of each panel. Note, however, that for the 1:1 mass ratio and $b=500$ kpc, the system has the $\sim$1.1 Mpc distance at turnaround, which means that we cannot differentiate between an outgoing and an incoming scenario. We decided to keep the same snapshot in both Figures \ref{fig:sim_TSP0_xray} and \ref{fig:sim_TSP1_xray} for comparison. It can be seen that the scenarios for the 1:3 mass ratio most closely resemble the gas distribution from our {\it Chandra} observations (orange contours in Fig.~\ref{fig:rgb_image}). We come back to this in $\S$~\ref{sec:constraining_tsc_with_sims}. \begin{figure} \centering \includegraphics[width=\linewidth]{plots/simulations_xray_contours_tsc0_v2.png} \vskip-0.13in \caption{Density and X--ray contours of the different simulations. The simulation times are shown in the bottom left corner, and correspond to (or are close to, in the case of averaging over two snapshots) the collision time plus the TSP0 time since collision (see Table \ref{tab:simulation_prop}). The projected total density of the simulations is shown in red in the background, with the contrast starting at $1\times10^{7}$ M$_\odot$ kpc$^{-2}$.
Blue contours were derived from the projected X--ray emission, with the levels being $0.5, 1, 5, 10, 15\times10^{-8}$ photons/s/cm$^{2}$/arcsec$^{2}$. Simulations are divided according to their mass ratio (1:3 on top and 1:1 on the bottom) and according to the impact parameter (500 kpc in the left panels and 1000 kpc in the right panels). The box size is the same as the one used in Fig.~\ref{fig:rgb_image}. The white bar also corresponds to the same 1 arcmin length shown in Fig.~\ref{fig:rgb_image}. } \label{fig:sim_TSP0_xray} \end{figure} \begin{figure} \centering \includegraphics[width=\linewidth]{plots/simulations_xray_contours_tsc1_v2.png} \vskip-0.13in \caption{Same as Fig.~\ref{fig:sim_TSP0_xray}, but derived from the simulations at the TSP1 times.} \label{fig:sim_TSP1_xray} \end{figure} \subsection{The impact of the merging event on the galaxy populations} \label{sec:galpopulation} In Fig.~\ref{fig:gal_properties} we show the CMD for each subsample: all galaxies, galaxies belonging to 0307-6225N and 0307-6225S, and galaxies not belonging to either of them. Galaxies are color-coded according to their spectral classification. Most of the star-forming galaxies are located within the two main structures (9 out of 10 SF+SSB galaxies), with some of them being classified as RCS galaxies (4; 2 SF and 2 SSB). Galaxies with SNR < 3 and/or $i_{\rm auto}$ > $m^* + 2$ are plotted as black crosses. For simplicity, we use the following notation (and combinations thereof) to refer to the different galaxy populations throughout the text: \begin{itemize} \item \textit{SSB:} Short starburst galaxies, following \cite{balogh99}. \item \textit{PSB:} Post-starburst galaxies, defined as galaxies with EW(H$\delta$) $\geq 5$ \AA\ and EW(OII) $< 5$ \AA\ \citep[K+A in][]{balogh99}.
\item \textit{EL:} Emission-line galaxies (galaxies with EW(OII) $\geq 5$ \AA), including SSB, star-forming (SF) and A+em galaxies, the latter believed to be dusty star-forming galaxies \citep{balogh99}. \item \textit{NEL:} Non-emission-line galaxies, i.e., passive and PSB. These are galaxies with EW(OII) $< 5$ \AA. \item \textit{Red galaxies:} Galaxies belonging to (or redder than) the red cluster sequence from $\S$\ref{sec:photometric}. \item \textit{Blue galaxies:} Galaxies with colors bluer than the red cluster sequence. \end{itemize} \begin{figure} \centering \includegraphics[width=\linewidth]{plots/0307_specclas_prop6_v3.pdf} \vskip-0.13in \caption{CMD of the cluster for the different samples. Galaxies are color-coded depending on their spectral classification described in ~\S\ref{sec:galpopulation}. \textit{top left}: entire spectroscopic data sample. \textit{top right}: sample comprising galaxies not belonging to 0307-6225N and 0307-6225S, i.e., galaxies from the in-between overdensity plus galaxies not belonging to any substructure according to \textsc{DBSCAN}. \textit{bottom}: 0307-6225S and 0307-6225N samples shown in the left and right panels, respectively. The green dotted lines are the limits of the RCS zone. Black crosses are galaxies with SNR < 3 or $i_{\rm auto} \geq m^* + 2$. Filled colors are galaxies classified as SSB.} \label{fig:gal_properties} \end{figure} Given that most of the SF galaxies seem to be located at the clusters' cores, especially the red SF galaxies, it is plausible that they were part of the merging event, instead of being accreted after it. In Fig.~\ref{fig:phase_spectro} we show a phase-space diagram, with the X-axis being the separation from the SZ center, negative for objects to its south. Circles are red galaxies, while triangles are blue galaxies. Inverted triangles are blue galaxies with no emission lines (filled for PSB and non-filled for passive), while filled circles are SSB galaxies.
Galaxies are color-coded dark red if they belong to 0307-6225N, dark orange for 0307-6225S, and black for neither of the above. In Fig.~\ref{fig:crop_galaxies} we show small crops of 7$\times$7 arcseconds (47$\times$47 kpc at the cluster's redshift) of the EL galaxies plus the two NEL blue galaxies. The top and middle rows show galaxies from 0307-6225S and 0307-6225N, respectively, while the bottom row shows galaxies which do not belong to the clusters' cores. \begin{figure} \centering \includegraphics[width=\linewidth]{plots/0307_phase-space_emission_v5.pdf} \vskip-0.13in \caption{Phase-space diagram of spectroscopic members with SNR $\geq3$ and $i_{\rm auto} < m^* + 2$. Galaxies are colored dark red, dark orange and black if they were classified as belonging to 0307-6225N, to 0307-6225S or to neither of them, respectively. Crosses are galaxies classified as non-emission-line galaxies. Emission-line galaxies which belong to (or have redder colors than) the RCS are plotted as circles, triangles are galaxies with colors bluer than the RCS, whereas inverted triangles are blue post-starburst (filled) or passive (unfilled) galaxies. The sizes of EL galaxies are correlated with their EW(OII) strength.
Filled circles correspond to SSB galaxies.} \label{fig:phase_spectro} \end{figure} \begin{figure*} \centering \includegraphics[width=0.16\linewidth]{plots/13_Red_SSB_1.png} \includegraphics[width=0.16\linewidth]{plots/3_Red_SSB_1.png} \includegraphics[width=0.16\linewidth]{plots/66_Blue_SF_1.png} \includegraphics[width=0.16\linewidth]{plots/64_Blue_SF_1.png} \includegraphics[width=0.16\linewidth]{plots/63_Blue_SF_1.png} \includegraphics[width=0.16\linewidth]{plots/65_Blue_A+em_1.png} \\ \includegraphics[width=0.16\linewidth]{plots/73_Blue_SF_2.png} \includegraphics[width=0.16\linewidth]{plots/40_Red_SF_2.png} \includegraphics[width=0.16\linewidth]{plots/75_Blue_A+em_2.png} \\ \includegraphics[width=0.16\linewidth]{plots/41_Red_SF_0.png} \includegraphics[width=0.16\linewidth]{plots/71_Blue_PSB_0.png} \includegraphics[width=0.16\linewidth]{plots/72_Blue_Passive_0.png} \caption{Pseudo-color crop images (box size of 7$\times$7 arcseconds) of the SF, A+em, SSB and PSB galaxies from our sample (plus one blue passive galaxy). At the bottom left of each image the spectral type of the galaxy is shown, with a white bar at the bottom right representing the scale size of 1 arcsecond. Galaxies in the top and middle rows belong to 0307-6225S and 0307-6225N, respectively, while galaxies in the bottom row are those that do not belong to either of the aforementioned structures. } \label{fig:crop_galaxies} \end{figure*} \subsection{The particular case of 0307-6225S}\label{sec:southernstr} Fig.~\ref{fig:gal_properties} shows that 0307-6225S has (1) the bluest members of our sample and (2) two very bright galaxies with nearly the same magnitudes (galaxies with ID 35 and 46 from the MUSE-1 field in Table~\ref{tab:all_objs_properties}, marked with the superscripts $S_1$ and $S_2$, respectively). In Fig.~\ref{fig:southernstr} we provide a zoom of Fig.~\ref{fig:rgb_image}, showing the southern structure in more detail.
Red circles mark spectroscopic members for this region with SNR $>3$ and $i_{\rm auto} < m^* + 2$. The two brightest galaxies are the two elliptical galaxies in the middle marked with red stars, with $\Delta m_i = 0.0152 \pm 0.0063$ and $\Delta v = 600$ km s$^{-1}$. The on-sky separation between their centers ($\sim$41 kpc) suggests that these galaxies could be interacting with each other. \begin{figure} \centering \includegraphics[width=\linewidth]{plots/0307S_v4.pdf} \vskip-0.13in \caption{Zoom from Fig.~\ref{fig:rgb_image} into 0307S, with the white bar at the top left showing the scale of the image. Spectroscopic members with SNR < 3 or $i_{\rm auto} \geq m^* + 2$ are shown as cyan circles, while red and green circles/stars represent passive and emission-line cluster galaxies, respectively, where emission-line refers to SF or SSB galaxies. The two brightest galaxies are marked with stars. } \label{fig:southernstr} \end{figure} In Fig.~\ref{fig:0307S_vel_distr} we show the peculiar velocity distribution, with respect to the redshift of 0307-6225S ($z=0.5810$), of all galaxies (black unfilled histogram) and of RCS galaxies (red hatched lines) belonging to this structure. The blue shaded area denotes the region within 1$\sigma_v$ for this structure, and the black dashed lines represent the peculiar velocities of the two BCG candidates, where the one to the south has a peculiar velocity closer to 0 ($\Delta v = -8$ km s$^{-1}$). For this reason, we choose this galaxy (ID 46) as the BCG of 0307-6225. \begin{figure} \centering \includegraphics[width=\linewidth]{plots/0307_southveldistr.png} \vskip-0.13in \caption{Velocity distribution of galaxies belonging to the 0307-6225S substructure, estimated with respect to its redshift. Hatched red lines denote where the RCS members are located.
Dashed black lines show the velocities of the two brightest galaxies in this subsample, with the shaded area representing the region within 1$\sigma_v$ for 0307-6225S.} \label{fig:0307S_vel_distr} \end{figure} \section{Discussion} \label{sec:discussion} \subsection{Merging history of 0307-6225S and 0307-6225N}\label{sec:disc_merger} Here we discuss the estimated masses, how they compare with previous estimations, and the risks of using scaling relations to study dynamically perturbed systems. Then we discuss how the merging parameters derived by \texttt{MCMAC}, especially the error bars on the estimated times for an outgoing and an incoming system, could be further constrained by limiting the merger axis angle. Finally, we show how the comparison with simulations favors an outgoing scenario given the estimated times and the X--ray morphology, with the latter also indicating a preferred mass ratio of 1:3. \subsubsection{Mass estimation of a merging cluster}\label{sec:mass_overstimation_merging} Recovering the merging history of two observed galaxy clusters is not trivial. Most methods require a mass estimation of the colliding components, which is not always an easy task \citep[see merging effects on cluster mass in][]{takizawa10,nelson12,nelson14}. The use of lensing measurements is one of the most precise ways of obtaining a mass estimation for the components \citep[e.g.][]{clowe06,Pandge19,Monteiro2020}; however, this method requires deep, high-quality photometric images for the measurement of the distortions. \citet{Dietrich2019} used the same ground-based optical imaging described in this paper to measure the weak-lensing surface mass density of SPT-CLJ0307-6225. However, their result shows that for this cluster the signal was not strong enough (as shown in their Figure B4), as the peak of the surface mass density is at a distance greater than R$_{200}$ from the SZ center.
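With lensing uninformative here, our masses instead come from the velocity-dispersion route discussed next. A minimal sketch of inverting the \citet{Munari2013} $\sigma_{\rm 1D}$--$M_{200}$ relation, assuming the commonly quoted coefficients $A_{\rm 1D}=1177$ km s$^{-1}$ and $\alpha=0.364$ (the exact coefficients and tracer adopted in the paper may differ):

```python
import numpy as np

def m200_dyn(sigma_v, z, h0=70.0, om=0.3):
    """Invert sigma_1D = A * (h(z) * M200 / 1e15 Msun)**alpha.

    Assumes A = 1177 km/s and alpha = 0.364 (Munari et al. 2013
    convention); returns M200 in units of 1e14 Msun.
    """
    # h(z) = H(z) / (100 km/s/Mpc) for a flat LCDM cosmology.
    hz = (h0 / 100.0) * np.sqrt(om * (1 + z)**3 + 1 - om)
    m200 = 1e15 / hz * (sigma_v / 1177.0)**(1 / 0.364)
    return m200 / 1e14

# Velocity dispersions and redshifts of the S and N substructures.
m_s = m200_dyn(753.0, 0.5792)
m_n = m200_dyn(686.0, 0.5810)
ratio = m_s / m_n   # most probable mass ratio, roughly 1.3
```

The steep $\sigma_v^{1/\alpha}$ dependence is why merger-driven boosts in $\sigma_v$ translate into strongly overestimated masses.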
The velocity dispersion (along the line of sight) of the galaxies of a cluster can also be used to infer its mass, using for example the virial theorem \citep[e.g.][]{Rines2013,White2015} or scaling relations \citep[e.g.][]{Evrard2008, Saro2013, Munari2013, Dawson2015, Monteiro-Oliveira21}. For the mass estimations of our structures we use the latter, although it is important to note that these measurements are also affected by the merging event, as colliding structures could show alterations in the velocities of their members. \citet{White2015} argues that the masses of merging systems estimated using scaling relations can be overestimated by a factor of two. Since we have a separation of $\sim$1.1 Mpc between the two structures and the velocity distribution of each cluster is Gaussian, we believe the overestimation is low. Also, the velocity difference of $|\Delta v_{N-S}|= 342$ km s$^{-1}$ suggests that the merger is taking place close to the plane of the sky, similar to what \citet{Mahler2020} find for the dissociative merging galaxy cluster SPT-CLJ0356-5337. Furthermore, the velocity difference between the BCGs and the redshift of each substructure is $\leq$20 km s$^{-1}$ for both 0307-6225N and 0307-6225S, which might indicate that the two merging substructures were not strongly dynamically perturbed by the merger. In order to further minimize the bias of using scaling relations, we use only RCS galaxies; however, blue galaxies are taken into account when reporting the number of members in Table~\ref{tab:substructures}, and also when analysing the galaxy populations below. It is worth noting that \cite{Ferragamo2020} recently suggested correction factors on both $\sigma_v$ and the estimated mass to account for cases with a low number of galaxies. They also apply other correction factors to turn $\sigma_v$ into an unbiased estimator by taking into account, for example, interlopers and the radius within which the sources are enclosed.
However, applying these corrections does not change our results drastically, with the new derived masses being within the errors of the previously derived ones. To check how masses derived from the velocity dispersion of merging galaxy clusters could be overestimated, we estimate the masses, following the equations from \cite{Munari2013}, of the simulated clusters from the 1:3 merging simulation (from $\S$\ref{sec:history}) at all times (and all $b$) using their velocity dispersion. It is worth noting that we cannot select RCS members to estimate the velocity dispersions, since the simulation does not give information regarding the galaxy population. Fig.~\ref{fig:masses_sims} shows the $\sigma_v$-derived masses at different times for the 1:3 mass ratio simulation for different values of $b$. The black dotted lines represent the collision time and the dashed lines with the gray shaded areas represent the TSPs and their errors from Table \ref{tab:simulation_prop}, respectively. It can be seen that before the collision and for a few Gyr after it, the masses are overestimated, especially in the case of the smaller-mass cluster. However, near the TSP0 times, the derived masses are in agreement, within the errors, with the true masses. This is also true for TSP1 with $b=500$ kpc, but for the same time with $b=1000$ kpc, the main cluster's mass is actually underestimated. Although we cannot further constrain the masses from the simulation using only RCS members, this information does suggest that our derived masses are not strongly affected by the merger itself, given the possible times since collision. \begin{figure} \centering \includegraphics[width=\linewidth]{plots/masses_sim1to3.png} \vskip-0.13in \caption{Velocity-dispersion-derived masses for the 1:3 mass ratio simulations used in this work, with different $b$.
The x-axis is the time since the simulation started running, with the blue and orange dots corresponding to the main cluster and the secondary cluster, respectively. The blue and orange dashed lines represent the masses of $6\times10^{14}$ and $2\times10^{14}$ M$_\odot$, respectively. Black dotted lines mark the collision times estimated following $\S$\ref{sec:history}. Vertical black dashed lines mark the estimated TSP0 and TSP1 shown in Table \ref{tab:simulation_prop}, with the gray area being the errors on this estimation.} \label{fig:masses_sims} \end{figure} \citet{bleem15b} estimated a total Sunyaev-Zel'dovich based mass of M$_{500,\rm SZ}= 5.06\pm0.90\times 10^{14}$ $h_{70}^{-1}$ M$_{\odot}$, corresponding to M$_{200,\rm SZ}= 7.63\pm1.37\times 10^{14}$ $h_{70}^{-1}$ M$_{\odot}$ \citep{zenteno2020}, which is in agreement with our estimate of the total dynamical mass from scaling relations, M$_{200,\rm dyn}$ = M$_{\rm S}$ + M$_{\rm N}$ = $5.55 \pm 2.33 \times 10^{14}$ M$_{\odot}$, at the 1$\sigma$ level. \subsubsection{Recovery of the merger orbit}\label{sec:dis_recoverymh} With the masses estimated, the merging history can be recovered by using a two-body model \citep{Beers1990,Cortese2004,Gonzalez2018} or by using hydrodynamical simulations constrained with the observed properties of the merging system \citep[e.g.][]{Mastropietro2008, Machado+2015, Doubrawa2020,Moura21}, with the disadvantage that the latter method is computationally expensive. The method presented by \citet{Dawson2013}, \texttt{MCMAC}, is a good compromise between computational time and accuracy of the results, with a dynamical parameter estimation accuracy of about 10\% for two dissociative mergers: the Bullet Cluster and the Musket Ball Cluster. \begin{figure*} \centering \includegraphics[width=\linewidth]{plots/density_all_sims_v2.pdf} \vskip-0.13in \caption{Density maps for the simulated 1:3 mass ratio cluster merger.
Each row represents the time evolution around TSP0 for the different impact parameters $b=0,500,1000$ kpc, shown in the top, middle and bottom rows, respectively. For each panel, the simulation time is written on the bottom left.} \label{fig:density_sims_all} \end{figure*} \texttt{MCMAC} returns two different times since collision, TSP0=0.96$^{+0.31}_{-0.18}${} Gyr and TSP1=2.60$^{+1.07}_{-0.53}${} Gyr, for an outgoing and an incoming merger, respectively, after the first pericentric passage. A more detailed analysis of the X--ray data could further constrain both the \texttt{MCMAC} output, e.g. by constraining the merging angle \citep{Monteiro2017, Monteiro2018} and the TSP \citep{Dawson2013, Ng2015, Monteiro2017} from shocks (if any), and also the merging scenario from hydrodynamical simulations, e.g. by comparing the temperature maps or by running a simulation which recovers the features (both of the galaxies and of the ICM) of this particular merger. This is particularly interesting given that the simulations that we use for comparison have a merger axis angle of $\alpha = 0.0$ deg. \cite{Dawson2013} runs \texttt{MCMAC} on the Bullet Cluster data and finds $\alpha = 50^{+23}_{-23}$ deg; however, by adding a prior based on the X--ray shock information, he is able to constrain the angle to $\alpha = 24^{+14}_{-8}$ deg, which is closer to the plane of the sky and also significantly decreases the error bars on the estimated collision times. For instance, if we assume that the merger is nearly in the plane of the sky and constrain the merging angle, $\alpha$, from \texttt{MCMAC} to be between 0$^\circ$ and $45^\circ$, then the resulting values are $\alpha = 25^{+6}_{-6}$ deg, TSP0=$0.73^{+0.09}_{-0.09}$ Gyr and TSP1=$2.10^{+0.51}_{-0.30}$ Gyr, which are still within the previously estimated values (within the errors) and have smaller error bars.
However, the estimated TSP1 is still higher than any of the ones estimated from the simulations (see Table~\ref{tab:simulation_prop}). A similar system is the one studied by \citet{Dawson2012}: DLSCL J0916.2+2951, a major merger at $z=0.53$ with a projected distance of $1.0^{+0.11}_{-0.14}$ Mpc. Their dynamical analysis gives masses similar to those of our structures (when using $\sigma_v - M$ scaling relations), with a mass ratio between their northern and southern structures of $M_{\rm S}/ M_{\rm N} = 1.11 \pm 0.81$. Using an analytical model, they were able to recover a merging angle $\alpha = 34^{+20}_{-14}$ degrees and a physical separation of $d_{\rm 3D} = 1.3^{+0.97}_{-0.18}$ Mpc, both values in agreement with what we found. Furthermore, their time since collision, TSP $= 0.7^{+0.2}_{-0.1}$ Gyr, is also similar to the one found for our outgoing system; however, they do not differentiate between an outgoing and an incoming system. Regarding the in-between structure, the estimated velocity dispersion is very high ($\sigma_v = 1400$ km s$^{-1}$) and the density map shows that this region is not as dense as the other two. To check whether this is common in a merger of two galaxy clusters, we take a look at how the density map varies in the 1:3 mass ratio simulations near the estimated TSP0. In Fig.~\ref{fig:density_sims_all}, we show, in each row, the density maps of the simulations, with the corresponding time shown at the bottom left and the impact parameter of the row at the top left of the first panel of each row. Levels start at 100 galaxies Mpc$^{-2}$ and increase in steps of 50. The cluster with 6$\times$10$^{14}$ M$_\odot$ is located at the bottom. The middle column shows the density map at TSP0, with the previous and following two snapshots also shown. At different times, the density maps for the same impact parameter are rather irregular, with the in-between region changing from snapshot to snapshot.
In particular, both $b=0$ kpc and $b=1000$ kpc show an overdense in-between area near TSP0. However, this is not the case in other snapshots, so we cannot state with confidence that it is common for a merging cluster to show such a pronounced in-between overdensity. \subsubsection{Constraining the TSP with simulations}\label{sec:constraining_tsc_with_sims} We compare the results derived by \texttt{MCMAC} with those estimated from a hydrodynamical simulation of two merging structures with a mass ratio of 1:3 \citep{ZuHone2011, ZuHone2018}. We chose this ratio since the X--ray morphologies of the simulation and the system are a better match than for the 1:1 mass ratio, where the X--ray intensity from the simulation is similar for the two structures (see Fig.~\ref{fig:sim_TSP0_xray} and \ref{fig:sim_TSP1_xray}), unlike our system, which has two distinctly different structures (see the orange contour in Fig.~\ref{fig:rgb_image}). To compare the results from \texttt{MCMAC} with the simulation, it is necessary to have a good estimate of (1) the time when the two structures have their first pericentric passage and (2) the TSP$_{\rm sim}$ for the outgoing and incoming scenarios. For the former, we determined the time in the simulation at which the separation of the dark matter halos was at its minimum, while for the latter we used the time at which the separation was similar to that of our BCGs. For each $b$, the estimated TSP0$_{\rm sim}$ is smaller than, but in agreement with, the result from \texttt{MCMAC}; however, the estimated TSP1$_{\rm sim}$ is never in agreement (at least at $1\sigma$). Using dark-matter-only simulations, \cite{Wittman2019} looked for halos with configurations similar to those of observed merging clusters (such as the Bullet and Musket Ball clusters) and compared the times since collision to those derived by \texttt{MCMAC} and other hydrodynamical simulations, finding that the merging angles and TSP derived from the halos are consistent with those from the hydrodynamical simulations.
However, both the outgoing and incoming TSP and the angles are lower than those derived by \texttt{MCMAC}, with the differences attributed to the \texttt{MCMAC} assumption of zero distance between the structures at the collision time. \citet{Sarazin2002} discusses that most merging systems should have a small impact parameter, of the order of a few kpc. \citet{Dawson2012} argues that, given the displayed gas morphology, the dissociative merging galaxy cluster DLSCL J0916.2+2951 has a small impact parameter. The argument is that simulations show that the morphology for mergers with small impact parameters is elongated transverse to the merger direction \citep{Schindler1993,Poole2006, Machado2013}. The X--ray morphology shown in this paper is similar to that from \citet{Dawson2012}. It is also similar to that of Abell 3376 \citep{Monteiro2017}, a merging galaxy cluster which was simulated by \citet{Machado2013} with different impact parameters ($b=0, 150, 350$ and $500$ kpc), with their results suggesting that a model with $b<150$ kpc is preferred. Given the similarity between the X--ray morphology of SPT-CLJ0307-6225 and that of other systems with small impact parameters, such as Abell 3376 and DLSCL J0916.2+2951, we suggest that the simulations with $b=0$ kpc or $b=500$ kpc are better representations of our system. This implies that the preferred scenario for this merging cluster is that of an outgoing system or a system very close to turnaround. This can also be seen when comparing the X--ray morphology of SPT-CLJ0307-6225 with that of the 1:3 mass ratio simulations at the estimated TSP0$_{\rm sim}$ and TSP1$_{\rm sim}$, shown in Fig.~\ref{fig:sim_TSP0_xray} and Fig.~\ref{fig:sim_TSP1_xray}, respectively, where it is noticeable that the X--ray contours at TSP0$_{\rm sim}$ are a closer match than the ones at TSP1$_{\rm sim}$ for $b=500,1000$ kpc.
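As a compact illustration of the velocity-dispersion masses used throughout this section, the sketch below inverts a Munari et al. (2013)-style scaling relation, $\sigma_{\rm 1D}=A\left[h(z)\,M_{200}/10^{15}\,{\rm M}_\odot\right]^{\alpha}$. The coefficients ($A=1177$ km s$^{-1}$, $\alpha=0.364$, the commonly quoted values for dark-matter subhaloes) and the flat $\Lambda$CDM parameters are assumptions of this sketch, not necessarily the exact values used in our analysis.

```python
import math

# Assumed flat LCDM parameters (sketch only).
H0, OMEGA_M, OMEGA_L = 70.0, 0.3, 0.7  # H0 in km/s/Mpc

def hz(z):
    """Dimensionless Hubble parameter h(z) = H(z) / (100 km/s/Mpc)."""
    return H0 * math.sqrt(OMEGA_M * (1.0 + z) ** 3 + OMEGA_L) / 100.0

def sigma_from_m200(m200_msun, z, A=1177.0, alpha=0.364):
    """1D velocity dispersion [km/s] for a halo of mass M200 [Msun],
    from a Munari-style relation sigma = A * (h(z) M200 / 1e15)^alpha."""
    return A * (hz(z) * m200_msun / 1e15) ** alpha

def m200_from_sigma(sigma_kms, z, A=1177.0, alpha=0.364):
    """Invert the relation: dynamical mass M200 [Msun] from sigma [km/s]."""
    return 1e15 / hz(z) * (sigma_kms / A) ** (1.0 / alpha)
```

For $\sigma_v\sim760$ km s$^{-1}$ at $z=0.58$, this returns a mass of a few $\times10^{14}$ M$_\odot$, of the order of the dynamical masses quoted above.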
\subsection{Galaxy population in a merging galaxy cluster}\label{sec:disc_galpop} From Fig.~\ref{fig:gal_properties}, it is noticeable that EL galaxies are located preferentially towards the cluster cores. We divide the discussion of the galaxy population by studying the differences between the two clumps, analysing the red EL galaxy population, and examining the population in the area in-between the merger. \subsubsection{Comparison between North and South} One interesting optical feature of 0307-6225S is the pair of very bright galaxies ($d_{\rm proj}=41$ kpc) at the center of its distribution (Fig.~\ref{fig:southernstr}). A similar, but rather extreme, case is that of the galaxy cluster Abell 3827 at $z=0.099$, which shows evidence for a recent merger, with four nearly equally bright galaxies within 10 kpc of the central region \citep{Carrasco2010, Massey2015}. Using GMOS data, \cite{Carrasco2010} found that the peculiar velocities of at least 3 of these galaxies are within $\sim$300 km s$^{-1}$ of the cluster redshift, with the remaining one having an offset of $\sim$1000 km s$^{-1}$. BCGs have low peculiar velocities in relaxed clusters, whereas for disturbed clusters their peculiar velocity is expected to be 20--30\% of the velocity dispersion of the cluster \citep{Yoshikawa2003, Ye2017}. For 0307-6225S, one of the bright galaxies has a peculiar velocity of $\sim$666 km s$^{-1}$, which is $\sim$88\% of the velocity dispersion of this subcluster. This could be evidence of a past merger between 0307-6225S and another cluster prior to the merger with 0307-6225N. The AD test gives a Gaussian distribution, and the results do not change when applying a 3-$\sigma$ clipping iteration, which could indicate that the substructure is in a post-merger state.
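The peculiar velocities quoted in this subsection (e.g. the $\sim$666 km s$^{-1}$ galaxy in 0307-6225S) follow from the standard line-of-sight relation $v_{\rm pec}=c\,(z_{\rm gal}-z_{\rm cl})/(1+z_{\rm cl})$; a minimal sketch, with hypothetical example redshifts:

```python
C_KMS = 299792.458  # speed of light in km/s

def peculiar_velocity(z_gal, z_cl):
    """Line-of-sight peculiar velocity (km/s) of a galaxy with observed
    redshift z_gal relative to a cluster at redshift z_cl:
    v_pec = c * (z_gal - z_cl) / (1 + z_cl)."""
    return C_KMS * (z_gal - z_cl) / (1.0 + z_cl)

# Hypothetical example: for a cluster at z = 0.58, a member galaxy offset
# by dz = 0.0035 has v_pec of roughly 660-670 km/s.
```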
\cite{Raouf2019R} use the magnitude difference between the first and second brightest galaxies of a group ($\Delta M_{12}$), along with the distance between the BCG and the luminosity center ($D_{\rm offset}$), to separate relaxed from unrelaxed systems. They propose values of $\Delta M_{12} < 0.5$ and $\log_{10}(D_{\rm offset}) > 1.8$ to define unrelaxed clusters, whereas relaxed systems are defined by $\Delta M_{12} > 1.7$ and $\log_{10}(D_{\rm offset}) < 1.8$. In our case, we only check the magnitude difference, since we are already studying a merging cluster. For 0307-6225S the magnitude difference is $\Delta M_{12} = 0.0152 < 0.5$, which supports the scenario that 0307-6225S suffered a merger prior to the one with 0307-6225N. Central galaxies take $\approx$1 Gyr to settle to the cluster centre during the post-merger phase \citep{White1976,Bird1994}, meaning that this previous merger must have taken place over 1 Gyr before the observed merger between 0307-6225S and 0307-6225N. On the other hand, for 0307-6225N the value is $\Delta M_{12} \approx 1.8 > 1.7$, suggesting that 0307-6225N was a relaxed system prior to this merger. Regarding the overall galaxy population, the fraction of EL galaxies in 0307-6225S (24\%) is nearly twice that of 0307-6225N ($\sim$13\%), although consistent within 1$\sigma$. However, it can be seen in Fig.~\ref{fig:phase_spectro} that all the EL galaxies of 0307-6225N have small peculiar velocities (with the SF galaxies within 1$\sigma_v$), while for 0307-6225S we notice that most of the blue SF galaxies have velocities higher than 2$\sigma_v$. These galaxies, which are bluer than the blue EL galaxies of 0307-6225N, could be in the process of being accreted.
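The relaxation thresholds quoted above can be written as a small classifier. This sketch combines both indicators even though, for this merging system, we only use $\Delta M_{12}$; treating $D_{\rm offset}$ in kpc is an assumption of the sketch.

```python
import math

def dynamical_state(delta_m12, d_offset_kpc):
    """Relaxation proxy following the thresholds quoted in the text
    (Raouf et al. 2019): magnitude gap between the two brightest galaxies
    and BCG-to-luminosity-center offset (assumed here to be in kpc)."""
    log_d = math.log10(d_offset_kpc)
    if delta_m12 < 0.5 and log_d > 1.8:
        return "unrelaxed"
    if delta_m12 > 1.7 and log_d < 1.8:
        return "relaxed"
    return "intermediate"
```

For 0307-6225S, $\Delta M_{12}=0.0152$ alone already falls in the unrelaxed regime of the magnitude-gap criterion.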
Considering the scenario where mergers between clusters accelerate the quenching of galaxies by increasing their star formation activity \citep{Stroe2014,Stroe2015}, the fact that there are fewer star-forming galaxies towards the central region of 0307-6225S compared to 0307-6225N could be an indication that the previous merger of 0307-6225S already exhausted the star formation of the cluster, with the observed blue SF galaxy population (with larger peculiar velocities) having been recently accreted, or being in the process of accretion, from the field. \subsubsection{Red EL galaxies} Of particular interest are our EL galaxies located in the RCS. Out of the 4 red EL galaxies, 3 are located in the cores of the two main structures, with 2 of them classified as SSB. Most of the blue SF galaxies are best matched by a high-redshift star-forming or late-type emission galaxy template, whereas most of the red SF galaxies are best matched with an early-type absorption galaxy template. \cite{Koyama2011} studied the region in and around the $z=0.41$ rich cluster CL0939+4713 (A851) using H$\alpha$ imaging to distinguish SF emission line galaxies. A851 is a dynamically young cluster with numerous groups at the outskirts. They found that the red H$\alpha$ emitters are preferentially located in low-density environments, such as the groups and the outskirts, whereas in the core of the cluster they did not find red H$\alpha$ emitters. \cite{ma10} studied the galaxy population of the merging galaxy cluster MACS J0025.4-1225 at $z=0.586$. In the areas around the cluster cores (with a radius of 150 kpc) they find emission line galaxies corresponding to two spiral galaxies (one for each subcluster), plus some spiral galaxies without spectroscopic information, accounting for 14\% of the total galaxies within the radius. Their Fig. 15 shows that they also have red EL galaxies; however, they do not specify whether the 2 spiral galaxies within the cluster core are part of this population.
Both results from \cite{ma10} and \cite{Koyama2011} indicate that red EL galaxies are not likely to be found within the cores of dense regions. It can be observed from Fig.~\ref{fig:crop_galaxies} that most of our red EL galaxies do not have close neighbours that could supply gas to them. It is possible, then, that these objects accreted gas from the ICM, with the merger then triggering the SF. Given the peculiar velocities of the two SSB galaxies in our sample (both classified as red), at least one of them was most likely part of the merging event. If, for example, merger shocks travelling through the ICM can trigger a starburst episode on galaxies with gas reservoirs for a few 100 Myr \citep[][]{Owers2012,Stroe2014,Stroe2015}, then these galaxies would make the outgoing scenario a better candidate than the incoming one. \subsubsection{Area in-between the merger} With respect to the in-between area, it is mostly comprised of red passive galaxies, with the only EL galaxy belonging to the RCS. Moreover, the 2 blue galaxies are classified as a passive galaxy and a PSB. \citet{ma10} found a population of post-starburst galaxies in the major cluster merger MACS J0025.4-1225, in the region in-between the two merging components, whose starburst episodes, given the timescales, occurred during the first passage. Similarly to our blue galaxies in this region, they found that their colors lie between those of blue EL galaxies and red passive galaxies. \section{Summary and Conclusions} \label{sec:conclusions} In this paper we use deep optical imaging and new MUSE spectroscopic data, along with archival GMOS data, to study the photometric and spectral properties of the merging cluster candidate SPT-CLJ0307-6225, estimating redshifts for 69 new galaxy cluster members. We use the data to characterize (a) its merging history by means of a dynamical analysis and (b) its galaxy population by means of their spectroscopic and photometric properties.
With respect to the merging history, we were able to confirm the merging state of the cluster and conclude that: \begin{itemize} \item Using the galaxy surface density map of the RCS galaxies we can see a bi-modality in the galaxy distribution. However, the cluster does not show signs of substructures along the line-of-sight. \item We assign galaxy members to each substructure by means of the \textsc{DBSCAN} algorithm. We name the two main substructures 0307-6225N and 0307-6225S, referring to the northern and southern overdensities, respectively. \item For each substructure we measured the redshift, velocity dispersion and velocity-derived masses from scaling relations. We find a mass ratio of $M_{\rm S}/M_{\rm N} \approx$ 1.3{} and a velocity difference of $v_{\rm N}-v_{\rm S}=342$ km s$^{-1}$ between the northern and southern structures. \item To estimate the time since collision we use the \texttt{MCMAC} algorithm, which gave us the times for an outgoing and an incoming system. By means of hydrodynamical simulations we constrained the most likely time to that of an outgoing system, with TSP=0.96$^{+0.31}_{-0.18}${} Gyr. \item The outgoing configuration is also supported by the comparison between the observed and simulated X--ray morphologies. This comparison between the X--ray morphologies also provides a constraint on the masses, where a merger with a mass ratio of 1:3 seems more likely than a 1:1 merger. \end{itemize} With respect to the galaxy population, we find that: \begin{itemize} \item EL galaxies are located preferentially near the cluster cores (in projected separation), where the low average peculiar velocities of red SF galaxies indicate that they were most likely accreted before the merger between 0307-6225N and 0307-6225S occurred. \item EL galaxies in 0307-6225N have smaller peculiar velocities than those of 0307-6225S, where in the latter it appears that blue SF galaxies were either recently accreted or are in the process of being accreted.
\item 0307-6225S shows two possible BCGs, which are very close in projected space. The magnitude and velocity differences between them are $\sim0$ mag and $\sim$674 km s$^{-1}$, respectively, with one of them having a peculiar velocity close to 0 km s$^{-1}$ with respect to 0307-6225S, while the other is close to the estimated $1\sigma_v$. However, the velocity distribution of the cluster shows no signs of being perturbed. This suggests that 0307-6225S could be the result of a previous merger which was at its last stage when the observed merger occurred. \item With respect to the in-between region, the galaxy population is comprised mostly of red galaxies, with the population of blue galaxies classified as passive or PSB, with colors close to the RCS. \end{itemize} In summary, our work supports a nearly face-on, in the plane of the sky, major merger scenario for SPT-CLJ0307-6225. This interaction accelerates the quenching of galaxies as a result of a rapid enhancement of their star formation activity and the subsequent gas depletion. This is in line with literature findings indicating that the dynamical state of a cluster merger has a strong impact on the galaxy population. It is particularly important to differentiate between dynamically young and old mergers. Comparisons between such systems will further increase our understanding of the connection between mergers and the quenching of star formation in galaxies. In future studies, we will extend the analysis performed on SPT-CLJ0307-6225 to a larger cluster sample, including the most disturbed cluster candidates in the SPT sample. These studies will be the basis for a comprehensive analysis of star formation in mergers with a wide dynamical range.
\section*{Acknowledgements} DHL acknowledges financial support from the MPG Faculty Fellowship program, the new ORIGINS cluster funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy - EXC-2094 - 390783311, and the Ludwig-Maximilians-Universit\"at Munich. FA was supported by the doctoral thesis scholarship of ANID-Chile, grant 21211648. FAG acknowledges financial support from FONDECYT Regular 1211370. FAG and FA acknowledge funding from the Max Planck Society through a Partner Group grant. J. L. N. C. is grateful for the financial support received from the Southern Office of Aerospace Research and Development of the Air Force Office of Scientific Research International Office of the United States (SOARD/AFOSR) through grants FA9550-18-1-0018 and FA9550-22-1-0037. AS is supported by the ERC-StG `ClustersXCosmo' grant agreement 716762, by the FARE-MIUR grant `ClustersXEuclid' R165SBKTMA, and by the INFN InDark Grant. \section*{Data Availability} \textit{Chandra} and Magellan/Megacam data are available upon request from McDonald, M. and Stalder, B., respectively. GMOS data are available in \cite{bayliss16}. The raw MUSE data are available from the ESO Science Archive (\url{https://archive.eso.org/}, with program IDs: 097.A-0922(A) and 100.A-0645(A)). Additional data on derived physical parameters are available in this paper. \section*{Affiliations} \noindent {\it $^{1}$Faculty of Physics, Ludwig-Maximilians-Universit\"{a}t, Scheinerstr.\ 1, 81679 Munich, Germany \\ $^{2}$Cerro Tololo Inter-American Observatory, NSF's National Optical-Infrared Astronomy Research Laboratory, Casilla 603, La Serena, Chile\\ $^{3}$Departamento de Astronom\'ia, Universidad de La Serena, Avenida Juan Cisternas 1200, La Serena, Chile\\ $^{4}$School of Physics, University of Melbourne, Parkville, VIC 3010, Australia \\ $^{5}$Instituto de F\'isica y Astronom\'ia, Universidad de Valpara\'iso, Avda.
Gran Bretaña 1111, Valpara\'iso, Chile\\ $^{6}$Academia Sinica, Institute of Astronomy and Astrophysics, 11F of AS/NTU Astronomy-Mathematics Building, No.1, Sec. 4, Roosevelt Rd, Taipei 10617, Taiwan, R.O.C.\\ $^{7}$Universidade Estadual de Santa Cruz, Laborat\'orio de Astrof\'isica Te\'orica e Observacional - 45650-000, Ilh\'eus-BA, Brazil\\ $^{8}$Instituto de Investigaci\'on Multidisciplinar en Ciencia y Tecnolog\'ia, Universidad de La Serena, Ra\'ul Bitr\'an 1305, La Serena, Chile\\ $^{9}$ Gemini Observatory, NSF’s National Optical-Infrared Astronomy Research Laboratory, Casilla 603, La Serena, Chile \\ $^{10}$European Southern Observatory, Alonso de Cordova 3107, Vitacura, Casilla 19001, Santiago de Chile, Chile \\ $^{11}$Center for Astrophysics — Harvard \& Smithsonian, 60 Garden Street, Cambridge, MA 02138, USA\\ $^{12}$Vera C. Rubin Observatory Project Office, 950 N. Cherry Ave, Tucson, AZ 85719, USA\\ $^{13}$Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA 02139 \\ $^{14}$ Department of Physics, University of Cincinnati, Cincinnati, OH 45221, USA\\ $^{15}$ Department of Physics and Astronomy, University of Missouri--Kansas City, 5110 Rockhill Road, Kansas City, MO 64110, USA\\ $^{16}$ Huntingdon Institute for X-ray Astronomy, LLC, 10677 Franks Road, Huntingdon, PA 16652, USA\\ $^{17}$Department of Geography, Ludwig-Maximilians-Universit\"at, Luisenstr 37, D-80333 Munich, Germany \\ $^{18}$Potsdam Institute for Climate Impact Research, Telegrafenberg, 14473 Potsdam, Germany\\ $^{19}$Department of Physics and Astronomy, University of Potsdam, Karl-Liebknecht-Str. 24/25, 14476 Potsdam-Golm, Germany\\ $^{20}$Department of Astronomy, University of Michigan, 1085 South University Ave, Ann Arbor, MI 48109, USA\\ $^{21}$Direcci\'on de Investigaci\'on y Desarrollo, Universidad de La Serena, Av. 
Ra\'ul Bitr\'an Nachary Nº 1305, La Serena, Chile.\\ $^{22}$Dipartimento di Fisica, Sezione di Astronomia, Universit\'a di Trieste, Via Tiepolo 11, I-34143 Trieste, Italy \\ $^{23}$INAF – Osservatorio Astronomico di Trieste, via Tiepolo 11, I-34131 Trieste, Italy \\ $^{22}$IFPU – Institute for Fundamental Physics of the Universe, via Beirut 2, 34151, Trieste, Italy \\ $^{24}$INFN – Sezione di Trieste, I-34100 Trieste, Italy } \bibliographystyle{mnras}
\section{Introduction} For a field $\mathbf{F}$ and a positive integer $h$, we define linear forms \begin{eqnarray}\label{eq11} \varphi(x_1,\ldots,x_h)=c_1 x_1+\cdots+c_h x_h, \end{eqnarray} where $c_i\in \mathbf{F}$ for all $i\in\{1,\ldots,h\}$. Let $V$ be a vector space over the field $\mathbf{F}$. For every nonempty set $A\subseteq V$, let $$ A^h=\left\{ (a_1,a_2,\ldots,a_h):a_i\in A~\text{for~all}~i\in \{1,2,\ldots,h\} \right\} $$ be the set of all $h$-tuples of elements of $A$. For $c\in \mathbf{F}$, the $c$-{\em dilate} of $A$ is defined as $$ c\ast A=\{ca: a\in A \}. $$ The $\varphi$-{\em image} of $A$ is the set \begin{eqnarray*} \varphi(A)&=&\left\{ \varphi(a_1,a_2,\ldots,a_h):(a_1,a_2,\ldots,a_h)\in A^h \right\}\\ &=&\{c_1a_1+\cdots+c_ha_h:~(a_1,\ldots,a_h)\in A^h\}\\ &=& c_1 \ast A+ \cdots +c_h \ast A . \end{eqnarray*} We call a nonempty subset $A$ of $V$ a {\em Sidon} set for the linear form $\varphi$ (or a $\varphi$-{\em Sidon} set) if the values $\varphi(a_1,a_2,\ldots,a_h)$ with $(a_1,a_2,\ldots,a_h)\in A^h$ are all distinct. That is, for all $h$-tuples $(a_1,a_2,\ldots,a_h)\in A^h$ and $(a'_1,a'_2,\ldots,a'_h)\in A^h$, if $\varphi (a_1,a_2,\ldots,a_h) =\varphi(a'_1,a'_2,\ldots,a'_h),$ then $(a_1,a_2,\ldots,a_h)= (a'_1,a'_2,\ldots,a'_h)$. For every nonempty subset $I$ of $\{1,\ldots,h\}$, define the subset sum \begin{eqnarray}\label{eq12} s_I =\sum_{i\in I}c_i . \end{eqnarray} Let $s_\emptyset =0$. Suppose there exist disjoint subsets $I_1$ and $I_2$ of $\{1,\ldots,h\}$, with $I_1$ and $I_2$ not both empty, such that \begin{eqnarray}\label{eq13} s_{I_{1}} =\sum_{i\in I_1}c_i =\sum_{i\in I_2}c_i=s_{I_{2}}. \end{eqnarray} Then $A$ is not a Sidon set (see \cite{Nathanson1}). We say that the linear form (\ref{eq11}) has property $N$ if there do not exist disjoint nonempty subsets $I_1$ and $I_2$ of $\{1,\ldots,h\}$ that satisfy (\ref{eq13}). For $J\subseteq \{1, \ldots, h\}$, we define the linear form in $\operatorname{card}(J)$ variables $$ \varphi_{J}=\sum_{j \in J} c_{j} x_{j}.
$$ By definition, $\varphi_{\emptyset}=0$ and $\varphi_{J}=\varphi$ if $J=\{1, \ldots, h\} .$ The linear form $\varphi_{J}$ is called a contraction of the linear form $\varphi$. For every nonempty subset $A$ of $V$, let $$ \varphi_{J}(A)=\left\{\sum_{j \in J} c_{j} a_{j}: a_{j} \in A \text { for all } j \in J\right\}. $$ If $A$ is a $\varphi$-Sidon set, then $A$ is a $\varphi_{J}$-Sidon set for every nonempty subset $J$ of $\{1, \ldots, h\}$. For every subset $X$ of $V$ and vector $v \in V$, the {\em translate} of $X$ by $v$ is the set $$ X+v=\{x+v: x \in X\}. $$ For every subset $J$ of $\{1, \ldots, h\}$, let $J^{c}=\{1, \ldots, h\} \backslash J$ be the complement of $J$ in $\{1, \ldots, h\}$. For every subset $A$ of $V$ and $b \in V \backslash A$, we let $$ \Phi_{J}(A, b)=\varphi_{J}(A)+\bigg(\sum_{j \in J^{c}} c_{j}\bigg) b=\varphi_{J}(A)+s_{J^{c}} b $$ be the translate of the set $\varphi_{J}(A)$ by the vector $s_{J^{c}} b$. We have $\Phi_{\emptyset}(A, b)=\left(\sum_{j=1}^{h} c_{j}\right) b$ and $\Phi_{J}(A, b)=\varphi(A)$ if $J=\{1, \ldots, h\}$. Let $A=\{a_k:k=1,2,3,\ldots \}$ and $B=\{b_k:k=1,2,3,\ldots \}$ be sets of integers. We call $A$ a {\em polynomial perturbation} of $B$ if, for some $r>0$ and some positive integer $k_0$, $$ |a_k-b_k|< k^r$$ for all integers $k \geq k_0$. The set $A$ is a {\em bounded perturbation} of $B$ if there exist $m_0 > 0$ and a positive integer $k_0$ such that $$ |a_k-b_k|< m_0$$ holds for all integers $k \geq k_0$. Recently, Nathanson \cite{Nathanson1} posed the following problem. \noindent{\bf Nathanson's Problem} {\em Let $\varphi$ be a linear form with integer coefficients that satisfies condition $N$. Let $B$ be a set of integers. Does there exist a $\varphi$-Sidon set of integers that is a polynomial perturbation of $B$? Does there exist a $\varphi$-Sidon set of integers that is a bounded perturbation of $B$?} For other related results on Sidon sets, see [1]--[4], [6], [7].
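Both the $\varphi$-Sidon property and condition $N$ can be checked by brute force for small examples directly from the definitions above; a sketch in Python, with $\varphi$ represented by its coefficient list $(c_1,\ldots,c_h)$:

```python
from itertools import combinations, product

def has_property_N(coeffs):
    """Condition N: no two disjoint nonempty index subsets of the
    coefficient list have equal subset sums."""
    idx = range(len(coeffs))
    subsets = [s for r in range(1, len(coeffs) + 1)
               for s in combinations(idx, r)]
    for s1 in subsets:
        for s2 in subsets:
            if set(s1).isdisjoint(s2) and \
               sum(coeffs[i] for i in s1) == sum(coeffs[i] for i in s2):
                return False
    return True

def is_phi_sidon(coeffs, A):
    """Check, directly from the definition, whether the finite set A is a
    Sidon set for phi(x_1,...,x_h) = c_1 x_1 + ... + c_h x_h: the values
    of phi on all h-tuples from A must be pairwise distinct."""
    values = [sum(c * a for c, a in zip(coeffs, tup))
              for tup in product(A, repeat=len(coeffs))]
    return len(values) == len(set(values))
```

For example, $\varphi(x_1,x_2)=x_1+2x_2$ satisfies condition $N$, and $\{0,1,5\}$ is a $\varphi$-Sidon set for it, while $\{0,1,2\}$ is not (since $0+2\cdot 1=2+2\cdot 0$).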
In this paper, we give an affirmative answer to the first question and some partial results on the second. \begin{theorem}\label{thm1} Let $\varphi=\sum_{i=1}^h c_i x_i$ be a linear form with integer coefficients that satisfies condition $N$, and let $B=\{b_k:k=1,2,3,\ldots \}$ be a set of integers. Then there exists an infinite $\varphi$-Sidon set $A=\{a_k:k=1,2,3,\ldots \}$ of integers such that $|a_k-b_k|< k^{4h}$ holds for all positive integers $k$. \end{theorem} \begin{theorem}\label{thm2} Let $\varphi=\sum_{i=1}^h c_i x_i$ be a linear form with integer coefficients that satisfies condition $N$, let $C=\sum^{h}_{i=1}|c_i|$, and let $B=\{b_1<b_2<\cdots\}$ be a set of integers. For any $\epsilon>0$, if $|b_t - b_s|\leq \left( t-s+1 \right)^{h-\epsilon}$ for all positive integers $t>s$, then there does not exist a $\varphi$-Sidon set of integers that is a polynomial perturbation of $B$. \end{theorem} \begin{theorem}\label{thm3} Let $\varphi=\sum_{i=1}^h c_i x_i$ be a linear form with integer coefficients that satisfies condition $N$, let $C=\sum^{h}_{i=1}|c_i|$, and let $B=\{b_1<b_2<\cdots\}$ be a set of integers. If $b_1>m$ and $b_{k+1}> C b_k+(C+1)m$ for some $m\ge 0$ and for all positive integers $k$, then there exist a $\varphi$-Sidon set $A=\{a_1,a_2,\ldots\}$ of integers and a constant $m_0>0$ such that $ |a_k-b_k|<m_0 $ for all positive integers $k$. \end{theorem} \section{Proofs} \begin{lemma}\label{lem21}\cite[Lemma 1]{Nathanson1} Let $\varphi=\sum_{i=1}^h c_i x_i$ be a linear form with coefficients in the field $\mathbf{F}$. Let $V$ be a vector space over $\mathbf{F}$. For every subset $A$ of $V$ and $b\in V\backslash A$, $$ \varphi \left(A\cup \{b\}\right)= \bigcup_{J\subseteq \{1,\ldots,h \}} \Phi_J (A,b). $$ If $A\cup \{b\}$ is a $\varphi$-Sidon set, then \begin{eqnarray}\label{eq2.0} \left\{ \Phi_J(A,b): J\subseteq \{1,\ldots,h \} \right\} \end{eqnarray} is a set of pairwise disjoint sets.
If $A$ is a $\varphi$-Sidon set and (\ref{eq2.0}) is a set of pairwise disjoint sets, then $A\cup \{b\}$ is a $\varphi$-Sidon set. \end{lemma} \begin{lemma}\label{lem22} If $A$ is a $\varphi$-Sidon set, then any subset of $A$ is also a $\varphi$-Sidon set. \end{lemma} The proof of Lemma \ref{lem22} is straightforward; we leave it to the reader. \begin{proof}[Proof of Theorem \ref{thm1}] We construct the $\varphi$-Sidon set $A=\{a_k:k=1,2,3,\ldots \}$ inductively. Let $a_1=b_1$; then $A_1=\{a_1\}$ is a $\varphi$-Sidon set and $|a_1-b_1|=0< 1^{4h}$. Let $n\geq 1$ and let $A_n=\{a_1,a_2,\ldots,a_n\}$ be a set of $n$ distinct integers such that $A_n$ is a $\varphi$-Sidon set with $|a_i-b_i|< i^{4h}$ for all integers $i\leq n$. Let $b$ be an integer. By Lemma \ref{lem21}, the set $A_n\cup \{b\}$ is a $\varphi$-Sidon set if and only if the sets $$\Phi_{J}(A_n,b)=\varphi_{J} (A_n)+\bigg(\sum_{j\in J^{c}} c_j\bigg)b$$ are pairwise disjoint for all $J\subseteq\{1,\ldots,h\}$. Let $J_1$ and $J_2$ be distinct subsets of $\{1,\ldots,h\}$. We have $$ \Phi_{J_{1}}(A_n,b)\cap \Phi_{J_{2}}(A_n,b)\neq \emptyset$$ if and only if there exist integers $a_{1,j}\in A_n$ for all $j\in J_1$ and $a_{2,j}\in A_n$ for all $j\in J_2$ such that $$ \sum_{j\in J_{1}}c_j a_{1,j}+\bigg(\sum_{j\in J_{1}^{c}} c_j\bigg)b= \sum_{j\in J_{2}}c_j a_{2,j}+\bigg(\sum_{j\in J_{2}^{c}} c_j\bigg)b.$$ Equivalently, the integer $b$ satisfies the equation \begin{eqnarray}\label{eq2.2} \bigg(\sum_{j\in J^{c}_{2}}c_j-\sum_{j\in J^{c}_{1}}c_j\bigg)b=\sum_{j\in J_{1}}c_j a_{1,j}-\sum_{j\in J_{2}}c_j a_{2,j}.
\end{eqnarray} The integer $$\begin{aligned} c=\sum_{j\in J^{c}_{2}}c_j-\sum_{j\in J^{c}_{1}}c_j = s_{J_{2}^{c}}-s_{J_{1}^{c}} =s_{J_{1}\backslash (J_{1}\cap J_{2})}-s_{J_{2}\backslash (J_{1}\cap J_{2})} \end{aligned}$$ is nonzero because $J_{1}\backslash (J_{1}\cap J_{2})$ and $J_{2}\backslash (J_{1}\cap J_{2})$ are distinct and disjoint, and the linear form $\varphi$ satisfies condition $N$, and so there exists at most one integer $b$ that satisfies equation (\ref{eq2.2}). Let $\operatorname{card}(J_1)=k_1$ and $\operatorname{card}(J_2)=k_2$. The sets $J_1$ and $J_2$ are distinct subsets of $\{1,\ldots,h\}$, and so at least one of the sets $J_1$ and $J_2$ is a proper subset of $\{1,\ldots,h\}$. It follows that $$ k_1+k_2=\operatorname{card}(J_1)+\operatorname{card}(J_2)\leq 2h-1.$$ The number of integers of the form $$\sum_{j\in J_{1}}c_j a_{1,j}-\sum_{j\in J_{2}}c_j a_{2,j} $$ with $a_{1,j}\in A_n$ and $a_{2,j}\in A_n$ is at most $n^{k_1+k_2}$. The number of ordered pairs $\left( J_1,J_2 \right)$ of subsets of $\{1,\ldots,h\}$ with $\operatorname{card}(J_1)=k_1$ and $\operatorname{card}(J_2)=k_2$ is $ \dbinom{h}{k_1}\dbinom{h}{k_2}.$ Thus, the number of equations of the form (\ref{eq2.2}) is at most $$ \sum_{\substack{0\le k_1,k_2\le h \\ k_1+k_2\le 2h-1}}\dbinom{h}{k_1}\dbinom{h}{k_2}n^{k_1+k_2}< n^{2h-1}\sum^{h}_{k_1=0}\sum^{h}_{k_2=0}\dbinom{h}{k_1}\dbinom{h}{k_2}=4^h n^{2h-1},$$ and so there are fewer than $4^h n^{2h-1}+n$ integers $b$ such that $b \in A_n$ or $A_n\cup \{b\}$ is not a $\varphi$-Sidon set. Since the open interval $\left( b_{n+1}-4^h n^{2h-1}-n,\,b_{n+1}+4^h n^{2h-1}+n \right)$ contains $2\left(4^h n^{2h-1}+n\right)-1$ integers, we can choose an integer $a_{n+1}$ in this interval such that $A_{n+1}=A_{n}\cup \{a_{n+1}\}$ is a $\varphi$-Sidon set. Then $$|a_{n+1}-b_{n+1}|<4^h n^{2h-1}+n.$$ Next, we prove that $4^h n^{2h-1}+n<(n+1)^{4h}$ for all positive integers $n$. Case 1. $n=1$. Since $h$ is a positive integer and $2^{2h}\geq 4$, we have $$2^{4h}=\left(2^{2h}\right)^2>2^{2h}+1=4^h+1.$$ Case 2. $n\geq 2$.
Then $$\begin{aligned} (n+1)^{4h}&=(n+1)^{2h} \cdot (n+1)^{2h} > 2^{2h} \cdot n^{2h}= 4^h \cdot n^{2h-1} \cdot n \\ &\geq 4^h \cdot n^{2h-1} \cdot 2 > 4^h \cdot n^{2h-1}+ n. \end{aligned}$$ This completes the proof of Theorem \ref{thm1}. \end{proof} \begin{proof}[Proof of Theorem \ref{thm2}] Suppose that there exist a $\varphi$-Sidon set $A=\{a_k:k=1,2,\ldots\}$ and an integer $m_0$ such that $ |a_k-b_k|<m_0$ for all positive integers $k$. For $t\geq s$, let $A_{t-s}=\{a_s,a_{s+1},\ldots,a_t \}$. By Lemma \ref{lem22}, $A_{t-s}$ is also a $\varphi$-Sidon set, so we have $$ \varphi\left(A_{t-s}\right)=\left\{ \sum_{i=1}^{h} c_i a_i :a_i\in A_{t-s} \right\} $$ and, since distinct $h$-tuples from $A_{t-s}$ yield distinct values, \begin{eqnarray}\label{eq2.3} |\varphi\left(A_{t-s}\right)|=(t-s+1)^h. \end{eqnarray} Let $$J_1=\{i:~1\le i\le h ~\text{and}~c_i>0\},\quad J_2=\{i:~1\le i\le h~\text{and}~c_i<0\}.$$ Then $$ \varphi\left(A_{t-s}\right)_{\max}=\sum_{i\in J_1} c_i\cdot \max(A_{t-s}) +\sum_{i\in J_2} c_i \cdot \min(A_{t-s}) <\sum_{i\in J_1} c_i (b_t+m_0) +\sum_{i\in J_2} c_i (b_s-m_0), $$ $$ \varphi\left(A_{t-s}\right)_{\min}=\sum_{i\in J_2} c_i \cdot \max(A_{t-s}) +\sum_{i\in J_1} c_i \cdot \min(A_{t-s}) >\sum_{i\in J_2} c_i (b_t+m_0) +\sum_{i\in J_1} c_i (b_s-m_0). $$ Since $\varphi\left(A_{t-s}\right)$ is a set of integers, it follows that \begin{eqnarray}\label{eq2.4} \begin{aligned} |\varphi\left(A_{t-s}\right)|&\le \sum_{i\in J_1} c_i (b_t+m_0) +\sum_{i\in J_2} c_i (b_s-m_0)- \bigg(\sum_{i\in J_2} c_i (b_t+m_0) +\sum_{i\in J_1} c_i (b_s-m_0)\bigg)-1 \\ &= \sum_{i=1}^{h} |c_i| (b_t+m_0)- \sum_{i=1}^{h} |c_i| (b_s-m_0) -1 \\ &= C(b_t - b_s + 2 m_0)-1. \end{aligned} \end{eqnarray} By (\ref{eq2.3}) and (\ref{eq2.4}), we have $$ (t-s+1)^h\le C(b_t - b_s + 2 m_0)-1 \leq C\left[(t-s+1)^{h-\epsilon} + 2 m_0\right]-1. $$ This inequality cannot hold when $t-s+1$ is large enough, a contradiction. This completes the proof of Theorem \ref{thm2}.
\end{proof} \begin{proof}[Proof of Theorem \ref{thm3}] Take $m_0$ with $0<m_0 \leq m$ and set $a_1=b_1$; then $A_1=\{a_1\}$ is a $\varphi$-Sidon set and $|a_1-b_1|=0< m_0$. We proceed by induction on $k$. Suppose that $k\geq 1$ and $A_k=\{a_1,a_2,\ldots,a_k\}$ is a $\varphi$-Sidon set of integers such that $|a_i-b_i|<m_0$ for $i=1,2,\ldots,k$. Take a positive integer $a_{k+1}$ such that $$a_{k+1}\in \left(b_{k+1}-m_0,b_{k+1}+m_0 \right).$$ By Lemma \ref{lem21}, the set $A_k\cup \{a_{k+1}\}$ is a $\varphi$-Sidon set if and only if the sets $$\Phi_{J}(A_k,a_{k+1})=\varphi_{J} (A_k)+\bigg(\sum_{j\in J^{c}} c_j\bigg)a_{k+1}$$ are pairwise disjoint for all $J\subseteq\{1,\ldots,h\}$. Suppose that there exist two distinct subsets $J_1,J_2\subseteq \{1,\ldots,h\}$ such that $$ \Phi_{J_{1}}(A_k,a_{k+1})\cap \Phi_{J_{2}}(A_k,a_{k+1})\neq \emptyset.$$ It follows that there exist integers $a_{1,j}\in A_k$ for all $j\in J_1$ and $a_{2,j}\in A_k$ for all $j\in J_2$ such that $$ \sum_{j\in J_{1}}c_j a_{1,j}+\bigg(\sum_{j\in J_{1}^{c}} c_j\bigg)a_{k+1}= \sum_{j\in J_{2}}c_j a_{2,j}+\bigg(\sum_{j\in J_{2}^{c}} c_j\bigg)a_{k+1}.$$ Equivalently, the integer $a_{k+1}$ satisfies the equation $$ \bigg(\sum_{j\in J^{c}_{2}}c_j-\sum_{j\in J^{c}_{1}}c_j\bigg)a_{k+1}=\sum_{j\in J_{1}}c_j a_{1,j}-\sum_{j\in J_{2}}c_j a_{2,j}. $$ Since $J_{1}^{c}\neq J_{2}^{c}$ and the linear form $\varphi$ satisfies condition $N$, it follows that $\sum_{j\in J^{c}_{2}}c_j-\sum_{j\in J^{c}_{1}}c_j $ is a nonzero integer, and so $$ a_{k+1}=\frac{\sum_{j\in J_{1}}c_j a_{1,j}-\sum_{j\in J_{2}}c_j a_{2,j}}{\sum_{j\in J^{c}_{2}}c_j-\sum_{j\in J^{c}_{1}}c_j}\le |\sum_{j\in J_{1}}c_j a_{1,j}-\sum_{j\in J_{2}}c_j a_{2,j}| \le \sum_{i=1}^h |c_i|\cdot \max\{a_1,a_2,\ldots,a_k\} <C (b_k+m_0). $$ Thus we have $$ a_{k+1}>b_{k+1}-m> C b_k+(C+1)m-m=C (b_k+m)\geq C (b_k+m_0)> a_{k+1}, $$ which is a contradiction.
Hence $ \Phi_{J_{1}}(A_k,a_{k+1})\cap \Phi_{J_{2}}(A_k,a_{k+1})=\emptyset$ for all distinct subsets $J_1,J_2\subseteq\{1,\ldots,h\}$. Therefore, the set $A_{k+1}=A_k\cup \{a_{k+1}\}$ is a $\varphi$-Sidon set with $|a_i-b_i|<m_0\leq m$ for all $i\leq k+1$, which completes the induction and the proof of Theorem \ref{thm3}. \end{proof}
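As an illustration, the injectivity property underlying these proofs (cf. the counting identity (\ref{eq2.3}): a $\varphi$-Sidon set of $n$ elements yields $n^h$ distinct values of $\varphi$) can be checked by brute force for small sets. The sketch below is ours, not the authors'; the example form $\varphi(x_1,x_2)=x_1+2x_2$ is hypothetical but satisfies condition $N$ (its subset sums $0,1,2,3$ are distinct).

```python
from itertools import product

def is_phi_sidon(A, c):
    """Brute-force check that phi(a_1,...,a_h) = sum(c_i * a_i) takes
    len(A)**h distinct values on h-tuples from A (cf. eq. (2.3))."""
    values = {sum(ci * ai for ci, ai in zip(c, tup))
              for tup in product(A, repeat=len(c))}
    return len(values) == len(A) ** len(c)

# Hypothetical form phi(x1, x2) = x1 + 2*x2, which satisfies condition N.
print(is_phi_sidon([1, 2, 5], (1, 2)))   # all 9 values distinct
print(is_phi_sidon([1, 2, 3], (1, 2)))   # fails: 1 + 2*2 = 3 + 2*1 = 5
```

A set built step by step as in the proof of Theorem \ref{thm1} should pass this check after every insertion.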
\section{Introduction} \begin{figure*} \begin{center} \centering{\mbox{\psfig{file=fig1.ps,width=8cm}}} \centering{\mbox{\psfig{file=fig2.ps,width=8cm}}} \centering{\mbox{\psfig{file=fig3.ps,width=8cm}}} \centering{\mbox{\psfig{file=fig4.ps,width=8cm}}} \caption{NVSS image cut-outs for four sources in our sample with superimposed IBIS error circle and 2MASX source positions. In all images, north is up and east to the left. The scale is reported at the bottom right corner.} \end{center} \end{figure*} \begin{figure*} \begin{center} \centering{\mbox{\psfig{file=fig5.ps,width=8cm}}} \centering{\mbox{\psfig{file=fig6.ps,width=8cm}}} \centering{\mbox{\psfig{file=fig7.ps,width=8cm}}} \centering{\mbox{\psfig{file=fig8.ps,width=8cm}}} \caption{NVSS or SUMSS image cut-outs for four sources in our sample with superimposed IBIS error circle and 2MASX source positions. In all images, north is up and east to the left. The scale is reported at the bottom right corner.} \end{center} \end{figure*} A key strategic objective of the {\it INTEGRAL} mission (Winkler et al. 2003) is a survey of the sky at high energies ($>$ 20 keV), the domain where fundamental changes from primarily thermal to non-thermal sources/phenomena are expected, where the effects of absorption are drastically reduced and where most of the extreme astrophysical behaviour is taking place. To survey the high energy sky, {\it INTEGRAL} makes use of the unique imaging capability of the IBIS instrument (Ubertini et al. 2003), which allows the detection of sources at the mCrab flux level with an angular resolution of 12$'$ and a point source location accuracy of typically 1--3$'$ within a large ($29\times29$ degrees) field of view. So far, several surveys produced from data collected by IBIS have been reported in the literature, the most complete being that of Bird et al. (2010), which lists more than 700 sources of a diverse nature (Galactic and extra-galactic) and class.
However, a large fraction ($\sim$30\%) of these new {\it INTEGRAL} sources has no obvious counterpart in other wavebands and cannot be firmly classified; their classification is a primary objective of the survey work but it is made very difficult by the large IBIS error circles. Improved arcsecond-sized localization is therefore necessary to pinpoint the optical counterpart and, through spectroscopic observations, assess its nature/class (Masetti et al. 2010). For source identification one relies mostly on X-ray observations, but data in other wavebands can be used as well for counterpart search, in particular in those cases where the {\it INTEGRAL} unidentified source may be associated with an Active Galactic Nucleus (AGN). Within this framework, in the present paper we introduce a new method for AGN identification in the 4th IBIS catalogue, which relies on cross-correlating unidentified sources first with infrared (IR) and then with radio catalogues. The method is a posteriori verified by means of X-ray and optical follow-up observations of some of the sources discussed in the paper, which allow us to explore their nature. In the first step we use a set of extended IR objects (2MASS Extended Catalogue, Skrutskie et al. 2006), 97\% of which are associated with galaxies; we note that this is one of the most complete lists of galaxies available, as it also covers the Galactic plane. Then we search for radio emission from these galaxies in order to identify likely AGN. This second step is justified by the fact that almost all AGN detected so far by {\it INTEGRAL} have a radio counterpart, which is not necessarily radio-loud but can emit at the few-mJy level.
An object which is extragalactic in nature and radio emitting could also be a starburst galaxy; in this case, however, we would not expect the bright X-ray luminosity typically seen above 20 keV with currently operating hard X-ray detectors; indeed, so far, no starburst galaxy has been detected by IBIS in the 20--100 keV energy band. Hence we use the IR, or rather the information on source extension provided by the 2MASS extended catalogue, to pinpoint galaxies, and then the information available from radio and hard X-ray emissions to look for AGN. We emphasize that many unidentified objects in the IBIS survey are on the Galactic plane, so that identification of these sources as AGN is not straightforward. Our method, however, provides a way to overcome problems related to the identification of active galaxies in the ``zone of avoidance''. Our choice of IR and radio catalogues is not intended to be a way to select special types of AGN, but rather a simple way to find galaxies among galactic and extragalactic sources and to select, among them, AGN associated with hard X-ray selected objects. X-ray follow-up observations are instead used to confirm the proposed IR/radio and hard X-ray association, while optical spectroscopy is performed to test the source AGN nature and class. The paper is organized as follows: Sect. 2 describes the sample selection criteria and gives an overview of the extracted sample; the multiwavelength observations and analyses are reported in Sect. 3, while results and discussion are shown in Sect. 4. Finally, conclusions are presented in Sect. 5. The present work supersedes the analysis carried out in Maiorano et al. (2010), in which preliminary results for only three sources of the sample were presented. \begin{table*} \caption[] {IBIS, 2MASX and radio positions (and corresponding errors) for each source in the sample. The radio positions come from the analysis carried out in this work (Subsect.
2.1).} \scriptsize \begin{center} \begin{tabular}{ccccccc} \noalign{\smallskip} \hline \hline \noalign{\smallskip} Source & IBIS Position & IBIS & 2MASX Object & 2MASX Position & Radio Object & Radio Position \\ & RA (J2000) & error circle & & RA (J2000) & & RA (J2000)(err) \\ & Dec (J2000) & (arcmin) & & Dec (J2000) & & Dec (J2000)(err) \\ \noalign{\smallskip} \hline \hline \noalign{\smallskip} IGR J00556+7708 & 00:55:34.8 & 4.9$'$ & 2MASX J00570148+7708505 & 00:57:01.482 & NVSS J005700+770911 & 00:57:00.016 (0.181s)\\ & +77:08:24 & & & +77:08:50.50 & & +77:09:11.81 (0.62$''$)\\ IGR J03103+5706 & 03:10:16.8 & 4.9$'$ & 2MASX J03095498+5707023 & 03:09:54.987 & NVSS J030954+570704 & 03:09:54.955 (0.128s)\\ & +57:06:07.2 & & & +57:07:02.34 & & +57:07:04.29 (1.04$''$)\\ IGR J05583--1257 & 05:58:17.04 & 4.8$'$ & 2MASX J05580231--1255477 & 05:58:02.313 & NVSS J055802--125545 & 05:58:02.416 (0.172s)\\ & --12:57:43.2 & & (LCSB 0289O) & --12:55:47.78 & & --12:55:45.54 (3.30$''$)\\ IGR J08190--3835 & 08:19:02.16 & 3.5$'$ & 2MASX J08191136--3833104 & 08:19:11.365 & NVSS J081910--383307 & 08:19:10.929 (0.219s)\\ & --38:34:58.8 & & & --38:33:10.46 & & --38:33:06.78 (3.00$''$)\\ IGR J17219--1509 & 17:21:55.68 & 4.2$'$ & 2MASX J17215337--1505384 & 17:21:53.379 & NVSS J172153--150532 & 17:21:53.273 (0.155s)\\ & --15:09:39.6 & & & --15:05:38.49 & & --15:05:32.02 (3.70$''$)\\ IGR J17520--6018 & 17:52:02.16 & 5.0$'$ & 2MASX J17515581--6019430 & 17:51:55.818 & SUMSS J175155--601943 & 17:51:55.585 (0.142s)\\ & --60:18:18 & & & --60:19:43.08 & & --60:19:43.87 (1.30$''$)\\ IGR J21268+6203 & 21:26:46.08 & 4.4$'$ & 2MASX J21262644+6204410 & 21:26:26.440 & NVSS J212628+620457 & 21:26:28.707 (0.027s)\\ & +62:03:43.2 & & & +62:04:41.03 & & +62:04:57.58 (0.19$''$)\\ IGR J21441+4640 & 21:44:04.08 & 4.9$'$ & 2MASX J21441345+4637169 & 21:44:13.455 & NVSS J214413+463718 & 21:44:13.419 (0.092s)\\ & +46:40:51.6 & & (UGC 11806) & +46:37:16.97 & & +46:37:17.79 (0.98$''$)\\ & & & 2MASX 
J21435408+4637048 & 21:43:54.082 & NVSS J214354+463705 & 21:43:54.055 (0.074s)\\ & & & (UGC 11802) & +46:37:04.84 & & +46:37:04.99 (0.66$''$)\\ \noalign{\smallskip} \hline \hline \noalign{\smallskip} \end{tabular} \end{center} \end{table*} \section{Sample selection} Historically, AGNs were discovered with radio observations, i.e. radio selection is often a way to recognize active galaxies, except at lower luminosities where star-formation in galaxies can produce radio emission. Therefore, for bright objects, a mere detection in radio provides support for the presence of an active galaxy. Sample contamination from Galactic sources may however come from pulsars, microquasars and Cataclysmic Variables (CVs). In many cases, association with a galaxy via cross-correlation with galaxy catalogues can help in selecting only extragalactic objects and, by means of the radio detection, pinpointing those sources that are likely AGNs. So, while a mere radio detection does not imply an identification with an AGN, its combination with high energy X/gamma-ray emission, together with the association with a galaxy, strongly argues in favour of the identification of an unclassified {\it INTEGRAL} source with an active galaxy. Following this reasoning we have cross-correlated our set of unidentified {\it INTEGRAL} sources in the 4th IBIS catalogue (Bird et al. 2010) with IR/radio catalogues, in order to extract a small sample of objects likely associated with an AGN. For the IR bands we have used the Two Micron All Sky Survey Extended (2MASX) Source Catalog (Skrutskie et al. 2006) which is a powerful tool to identify, within the sample of unidentified {\it INTEGRAL} sources, those possibly associated with galaxies.
For this survey, the entire sky was uniformly scanned in three near-infrared (NIR) bands (J,H,K) to detect and characterise sources brighter than about 1 mJy in each band, with a signal-to-noise ratio greater than 10 and which are resolved and extended beyond the Two Micron All Sky Survey (2MASS) beam/point spread function. The absolute astrometric accuracy of the 2MASX catalogue is better than one arcsec. The extended source catalogue consists of 1,647,599 objects, 97\% of which are galaxies while the remaining $\sim$3\% is made of sources in the Milky Way (mostly double and triple stars, HII regions, planetary and reflection nebulae), which are not expected to emit at high energies. Therefore, the presence of a 2MASX object inside the IBIS error circle suggests an association with a galaxy. As radio catalogues we have used the NRAO VLA Sky Survey (NVSS, Condon et al. 1998) and the Sydney University Molonglo Sky Survey (SUMSS, Mauch et al. 2003) which are particularly well suited for finding counterparts of unidentified {\it INTEGRAL} sources: they are similar in sensitivity and spatial resolution and together they cover the whole sky. The NVSS catalogue covers the sky north of the J2000.0 Declination of --40 degrees (82\% of the celestial sphere) at 1.4 GHz (20 cm). The catalogue consists of almost 2 million discrete sources stronger than a flux density of about 2.5 mJy. The NVSS images have 45 arcsecond FWHM angular resolution and nearly uniform sensitivity. The {\it rms} uncertainties in right ascension and declination vary from about 1$''$ for the 400,000 sources stronger than 15 mJy to 7$''$ at the survey limit. The SUMSS catalogue covers instead the sky south of the J2000.0 Declination of --30 degrees ($\sim$20\% of the celestial sphere) and is carried out at 843 MHz (36 cm). The survey consists of $4.3^{\circ} \times 4.3^{\circ}$ mosaic images with a resolution of $45\times45$ cosec$|\delta|$ arcsec$^{2}$, and a {\it rms} noise level of 1-2 mJy/beam. 
Positions in the catalogue are accurate to within 1-2$''$ for sources with peak brightness $>$20 mJy/beam, and are always better than 10$''$. The internal flux density scale is accurate to within 3\%. The radio detection of a galaxy listed in the 2MASX catalogue strongly indicates that the source is an AGN if it is also detected above 10 keV. To perform the correlation, we used the standard statistical technique which has been employed very successfully in other cases (Stephen et al. 2005, 2006, 2010). This consists of simply calculating the number of {\it INTEGRAL} sources for which at least one 2MASX counterpart was within a specified distance, out to a distance where all {\it INTEGRAL} sources had at least one NIR counterpart. To have a control group we created a list of fake ``anti-{\it INTEGRAL}'' sources. For every object in the {\it INTEGRAL} list, we made a corresponding source in the fake list with coordinates mirrored in Galactic longitude and latitude (this mirroring was chosen due to the strong Galactic component evident in the {\it INTEGRAL} distribution), and the same correlation algorithm was then applied between this list and the 2MASX catalogue. Subtracting from the number of correlations in the true list those obtained in the false sample, it is possible to estimate the number of true associations. We see that the radius at which the first correlations between the ``anti-{\it INTEGRAL}'' source sample and the 2MASX catalogue appear is about 6 arcmin. This is comparable to the IBIS error circle radius; therefore, we expect at most 1 spurious correlation among the selected sources (see below). The sample of associations extracted in this way, i.e. a list of objects likely associated with galaxies, was then cross-correlated with the radio catalogues following the same method.
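The ``anti-source'' control technique just described can be sketched in a few lines. The snippet below is our own illustrative reconstruction on synthetic coordinates, not the authors' code; in particular, the exact mirroring convention ($l \to l+180^{\circ}$ mod $360^{\circ}$, $b \to -b$) and the uniform fake catalogue are assumptions made for the example.

```python
import math
import random

def ang_sep_deg(l1, b1, l2, b2):
    """Great-circle separation (degrees) between two Galactic positions."""
    l1, b1, l2, b2 = map(math.radians, (l1, b1, l2, b2))
    s = (math.sin((b2 - b1) / 2) ** 2
         + math.cos(b1) * math.cos(b2) * math.sin((l2 - l1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(s)))

def n_matched(sources, catalogue, radius_deg):
    """Number of sources with at least one catalogue object within radius."""
    return sum(any(ang_sep_deg(l, b, cl, cb) <= radius_deg
                   for cl, cb in catalogue)
               for l, b in sources)

def mirror(sources):
    """Assumed mirroring convention: l -> l + 180 deg (mod 360), b -> -b."""
    return [((l + 180.0) % 360.0, -b) for l, b in sources]

# Synthetic example (not uniform on the sphere, which is fine for a sketch):
# a catalogue plus a source list that contains 20 of its entries by design.
random.seed(1)
catalogue = [(random.uniform(0, 360), random.uniform(-90, 90))
             for _ in range(500)]
sources = catalogue[:20]
real = n_matched(sources, catalogue, 6 / 60)  # 6 arcmin matching radius
fake = n_matched(mirror(sources), catalogue, 6 / 60)
print(real - fake)  # estimate of the number of true associations
```

Here the mirrored list almost never matches anything, so the difference recovers the 20 planted associations.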
By means of this sequence of cross-correlations we extracted a final sample of 8 objects which are seen in all 3 wavebands (hard X-rays, NIR and radio); all 8 can be considered as AGN candidates because they are classified as galaxies in the 2MASX and are detected both in radio and hard X-rays. Note that in one case we have two radio/NIR objects associated with a single {\it INTEGRAL} source (IGR J21441+4640); both are detected in the 2MASX and radio catalogues and so are equally possible counterparts of the {\it INTEGRAL} source and as such will be considered in the following sections. In the NASA/IPAC Extragalactic Database (NED), these two objects form a galaxy pair. \subsection{Overview of the extracted sample} All objects in the sample are reported in Table 1; for each source we list the {\it INTEGRAL} name, the IBIS position with relative error, the IR (2MASX) and radio (NVSS or SUMSS) positions and associated errors of the putative counterparts. In our sample, 7 sources have a counterpart in the NVSS catalogue while only one has an association in the SUMSS. The 2MASX catalogue provides J, H, K magnitudes for all sources in the sample which are reported in Table 2; note that all objects listed in Table 2 are classified as galaxies in NED. In Table 2 we also show the NIR colour indices J--H and H--K; these can be used as a tool to confirm the AGN nature of our sources. Indeed, all objects but one (2MASX J03095498+5707023) have NIR colours compatible with those of nearby active galaxies (see Fig. 1 of Kouzuma \& Yamaoka, 2010). Despite the uncertainties introduced by this method (i.e. it is not obvious from the work of Kouzuma \& Yamaoka [2010] what the efficiency of finding an AGN via IR photometry is), it is reassuring that almost all of our objects are compatible with NIR AGN colour indices. Hard X-ray information concerning the sources of our sample is reported in Bird et al. (2010).
We note here that all but two objects are classified as variable in the 4th IBIS catalogue since they are detected at the level of Revolution, Revolution Sequence or through the bursticity analysis; the only exceptions are IGR J05583--1257 and IGR J08190--3835, which are reported as persistent objects in the IBIS survey. All but 3 objects (IGR J03103+5706, IGR J08190-3835, IGR J21441+4640) are at Galactic latitude $>10^{\circ}$, an additional argument in favour of their extragalactic nature. In order to provide radio images and fluxes, we have used the standard procedure employed within the software package AIPS\footnote{http://www.aips.nrao.edu/}, release 31DEC10 (Astronomical Image Processing System). NVSS/SUMSS fluxes extracted from the radio maps are reported in Table 3. Figures 1 and 2 show the collection of NVSS/SUMSS image cut-outs for all of our sources with superimposed IBIS error circle and 2MASX source positions. For two sources (IGR J00556+7708, IGR J21268+6203) in our sample, the radio and NIR positions are significantly different from each other, being separated by 22$''$ and 24$''$, respectively. In both cases we searched among the optical and NIR catalogues selected in the Vizier Catalogues Selection Page\footnote{http://vizier.u-strasbg.fr/viz-bin/VizieR} (USNO-B1, Monet et al. 2003, USNO-A2.0\footnote{http://www.nofs.navy.mil/projects/pmm/USNOSA2doc.html}, 2MASS), but we could not find any optical or NIR counterparts within the radio error circle; furthermore, we note that the 2MASX galaxies lie in each case within the edge of the radio contours and are clearly extended in the NIR band (semimajor axis of 16.6$''$ and 13.7$''$, respectively). Indeed, in both cases the 2MASX and radio objects are associated with each other in NED on the basis of a sophisticated cross-identification analysis which takes into account not only the positional uncertainty but also the source extension (Mazzarella et al. 2001).
Taking all this evidence into account, we decided to keep both {\it INTEGRAL} sources in the sample, also in view of the fact that either the NIR or the radio object (or, in the best case, both) is a likely counterpart to the high-energy emitter. By looking for further radio information in the data archives of different radio telescopes and also in the literature, we found that some sources in the sample have been observed at more than one radio frequency. In these few cases we have calculated the radio spectral index using the available data points and the usual relation F$_\nu \propto \nu^{-\alpha}$ (see Sect. 3). Furthermore, when the redshift is available, we have calculated the radio power at 1.4 GHz; all these values are reported in Table 3. Note that all objects in Table 3 have compact morphology in radio. \begin{table*} \caption[]{IR magnitudes as quoted in the 2MASS extended catalogue for all objects in the sample. In the case of IGR J21441+4640, we report the IR magnitudes and colours of both galaxies in the pair individually.} \begin{center} \begin{tabular}{cccccc} \noalign{\smallskip} \hline \hline \noalign{\smallskip} Source & J-mag & H-mag & K-mag & J--H & H--K \\ \noalign{\smallskip} \hline \hline \noalign{\smallskip} 2MASX J00570148+7708505 & 14.922$\pm$0.222 & 14.291$\pm$0.333 & 13.270$\pm$0.173 & 0.63 & 1.02 \\ 2MASX J03095498+5707023 & 15.497$\pm$0.301 & 13.831$\pm$0.143 & 12.772$\pm$0.096 & 1.67 & 1.06 \\ 2MASX J05580231--1255477 & 13.192$\pm$0.080 & 12.608$\pm$0.119 & 12.111$\pm$0.133 & 0.58 & 0.50 \\ 2MASX J08191136--3833104 & 12.529$\pm$0.058 & 11.303$\pm$0.037 & 10.818$\pm$0.049 & 1.23 & 0.49 \\ 2MASX J17215337--1505384 & 15.563$\pm$0.237 & $>$14.520 & 13.985$\pm$0.181 & 1.04 & 0.53 \\ 2MASX J17515581--6019430 & 14.525$\pm$0.133 & 13.564$\pm$0.126 & 13.408$\pm$0.166 & 0.96 & 0.15 \\ 2MASX J21262644+6204410 & 14.777$\pm$0.185 & 14.172$\pm$0.234 & 13.314$\pm$0.174 & 0.6 & 0.86 \\ 2MASX J21441345+4637169 & 12.106$\pm$0.051 & 11.530$\pm$0.062 &
11.312$\pm$0.074 & 0.57 & 0.22 \\ (UGC 11806) & & & \\ 2MASX J21435408+4637048 & 11.814$\pm$0.055 & 11.088$\pm$0.060 & 10.743$\pm$0.064 & 0.73 & 0.34 \\ (UGC 11802) & & & \\ \noalign{\smallskip} \hline \hline \noalign{\smallskip} \end{tabular} \end{center} \end{table*} \begin{table*} \caption[]{Radio information of all objects in the sample. NVSS 1.4 GHz (20 cm) and SUMSS 843 MHz (36 cm) fluxes are from this work. Other flux values are from the Westerbork Northern Sky Survey (WENSS) at 92 cm and the Parkes MIT-NRAO (PMN) Surveys at 6 cm. All the redshifts reported in the Table are from NED, except for NVSS J081910--383307 (2MASX J08191136--3833104), whose redshift is derived from the spectroscopic data analysis carried out in this work (Sect. 3).} \begin{center} \begin{tabular}{cccccccc} \noalign{\smallskip} \hline \hline \noalign{\smallskip} Source & Redshift & 325 MHz & 843 MHz & 1400 MHz & 4850 MHz & Spectral Index & Radio Power \\ & & 92 cm & $\sim$ 36 cm & 20 cm & $\sim$ 6 cm & $\alpha_{325{\rm MHz}}^{1.4{\rm GHz}}$ & at 1.4 GHz \\ & & (mJy) & (mJy) & (mJy) & (mJy) & (F$_\nu \propto \nu^{-\alpha}$) & (W Hz$^{-1}$) \\ \noalign{\smallskip} \hline \hline \noalign{\smallskip} NVSS J005700+770911 & & 51$\pm$2.1 & & 16.0$\pm$1.2 & & 0.79$^{+0.09}_{-0.08}$ & \\ NVSS J030954+570704 & & 21$\pm$3.8 & & 10.2$\pm$1.0 & & 0.50$^{+0.18}_{-0.20}$ & \\ NVSS J055802--125545 & 0.003042 & & & 4.7$\pm$0.8 & & & 9.31$\times10^{19}$ \\ NVSS J081910--383307 & 0.009 & & & 2.7$\pm$0.7 & & & 4.71$\times10^{20}$ \\ NVSS J172153--150532 & & & & 5.2$\pm$1.0 & & & \\ SUMSS J175155--601943 & & & 25.2$\pm$2.2 & & & & \\ NVSS J212628+620457 & & 29$\pm$4.8 & & 45.1$\pm$1.4 & 19$\pm$4 & --0.30$^{+0.13}_{-0.16}$ & \\ NVSS J214413+463718 & 0.011081 & 23$\pm$4 & & 11.0$\pm$1.1 & & 0.51$^{+0.17}_{-0.19}$ & 2.92$\times10^{21}$ \\ (UGC 11806) & & & & & \\ NVSS J214354+463705 & 0.010517 & 40$\pm$4 & & 22.9$\pm$1.2 & & 0.38$^{+0.09}_{-0.10}$ & 5.47$\times10^{21}$\\ (UGC 11802) & & & & & \\ \noalign{\smallskip}
\hline \hline \noalign{\smallskip} \end{tabular} \end{center} \end{table*} \section{Follow-up observations} In order to test the validity of our method as well as to confirm the AGN nature of our objects, we have obtained a set of X-ray and optical follow-up observations. The X-ray observations, carried out with Swift/XRT\footnote{The Swift/XRT observations were performed in the context of the approved follow-up program in collaboration with the Swift team.}, were useful to confirm the association between the NIR/radio source and the IBIS hard X-ray emission and hence to pinpoint the optical counterpart, while optical spectra of this counterpart allowed us to assess the source AGN nature and class. \begin{table*} \caption[]{Spectral parameters derived from the X-ray data analysis of four sources observed by Swift/XRT. The values of the Galactic column densities are from Kalberla et al. (2005).} \begin{center} \begin{tabular}{ccccccc} \noalign{\smallskip} \hline \hline \noalign{\smallskip} Source & Exposure & Count rate & $\Gamma$ & N$_{\rm H}(Gal)$ & N$_{\rm H}(intr)$ & Flux(2-10 keV) \\ & (s) & (counts/s) & Photon index & (cm$^{-2}$) & (cm$^{-2}$) & (erg cm$^{-2}$s$^{-1}$) \\ \noalign{\smallskip} \hline \hline \noalign{\smallskip} IGR J05583--1257 & & & & & & \\ (LCSB 0289O) & 4623 & $<$0.7$\times$10$^{-3}$ & 1.8 (fixed) & 1.63$\times$10$^{21}$ & & $<$5.8$\times$10$^{-14}$ \\ IGR J08190--3835 & 10310 & (18.6$\pm$1.3)$\times$10$^{-3}$ & 1.8 (fixed) & 9.6$\times$10$^{21}$ & 13.6$\times$10$^{22}$ & 1.49$\times$10$^{-12}$ \\ IGR J17520--6018 & 11883 & (18.2$\pm$1.2)$\times$10$^{-3}$ & 1.8 (fixed) & 7.0$\times$10$^{20}$ & 13$\times$10$^{22}$ & 2.55$\times$10$^{-12}$\\ IGR J21441+4640 & 2755 & & & & & \\ (UGC 11806) & & (3.3$\pm$1.2)$\times$10$^{-3}$ & 1.8 (fixed) & 2.57$\times$10$^{21}$ & & 1.2$\times$10$^{-13}$ \\ (UGC 11802) & & $<$1.2$\times$10$^{-3}$ & 1.8 (fixed) & 2.8$\times$10$^{21}$ & & $<$8.3$\times$10$^{-14}$ \\ \noalign{\smallskip} \hline \hline
\noalign{\smallskip} \end{tabular} \end{center} \end{table*} For four objects in our sample (see Table 4), we have X-ray observations acquired with the X-ray Telescope (XRT, 0.2--10 keV, Burrows et al. 2005) on board the \emph{Swift} satellite (Gehrels et al. 2004). XRT data reduction was performed using the XRTDAS standard data pipeline package ({\sc xrtpipeline} v. 0.12.4), in order to produce screened event files. All data were extracted only in the Photon Counting (PC) mode (Hill et al. 2004), adopting the standard grade filtering (0--12 for PC) according to the XRT nomenclature. Events for spectral analysis were extracted within a circular region of radius 20$''$, centered on the source position, which encloses about 90\% of the PSF at 1.5 keV (see Moretti et al. 2004). The background was taken from various source-free regions close to the X-ray source of interest, using circular regions with different radii in order to ensure an evenly sampled background. In all cases, the spectra were extracted from the corresponding event files using the {\sc XSELECT} software and binned using {\sc grppha} in an appropriate way, so that the $\chi^{2}$ statistic could be applied. We used version v.011 of the response matrices and created individual ancillary response files (\textit{arf}) using {\sc xrtmkarf v. 0.5.6}. The data have been fitted using an absorbed power-law model; due to the poor statistical quality of the X-ray data, we have fixed the photon index to 1.8 in order to evaluate the presence of absorption and the 2--10 keV flux. For four sources in the sample (see Table 5) optical spectroscopy of the proposed counterparts was obtained from data collected at the 1.5-m telescope of the Cerro Tololo Interamerican Observatory (CTIO), Chile, at the 2.1-m telescope of the Observatorio Astron\'omico Nacional in San Pedro M\'artir (SPM), M\'exico, and from the Six-degree Field Galaxy Survey\footnote{http://www.aao.gov.au/local/www/6df/} (6dFGS) archive (Jones et al.
2004) containing spectra acquired with the 4-m Anglo-Australian Telescope (AAT) in Siding Spring, Australia. Table 5 reports the log of these observations. \begin{table*} \caption[]{Log of the spectroscopic observations presented in this paper.} \begin{center} \begin{tabular}{llcccc} \noalign{\smallskip} \hline \hline \noalign{\smallskip} \multicolumn{1}{c}{Object} & \multicolumn{1}{c}{Date} & Telescope & Exp. start & Disp. & Exposure \\ & & + instrument & time (UT) & (\AA/pix) & time (s) \\ \noalign{\smallskip} \hline \noalign{\smallskip} LCSB L0289O & 13 Jan 2005 & AAT+6dF & 12:15 & 1.6 & 1200+600 \\ USNO-A2.0 0450\_06519994 & 18 Jan 2010 & CTIO 1.5m + RC Spec. & 03:54 & 5.7 & 2$\times$1200 \\ UGC 11802 & 15 Sep 2009 & SPM 2.1m + B\&C Spec. & 04:37 & 4.0 & 2$\times$1800 \\ UGC 11806 & 15 Sep 2009 & SPM 2.1m + B\&C Spec. & 05:59 & 4.0 & 2$\times$1800 \\ \noalign{\smallskip} \hline \noalign{\smallskip} \end{tabular} \end{center} \end{table*} The spectroscopic data acquired at these telescopes were optimally extracted (Horne 1986) and reduced following standard procedures using IRAF\footnote{IRAF is the Image Reduction and Analysis Facility made available to the astronomical community by the National Optical Astronomy Observatories, which are operated by AURA, Inc., under contract with the US National Science Foundation. It is available at http://iraf.noao.edu}. Calibration frames (flat fields and bias) were taken on the day preceding or following the observing night. The wavelength calibration was performed using lamp data acquired soon after each on-target spectroscopic acquisition; the uncertainty in this calibration was $\sim$0.5 \AA~in all cases, according to our checks made using the positions of background night sky lines. Flux calibration was performed using catalogued spectrophotometric standards. As mentioned above, an additional spectrum, for 2MASX J05580231--1255477, was retrieved from the 6dFGS archive.
Since this archive provides spectra that are not flux-calibrated, we used the optical photometric information in Jones et al. (2005) to calibrate the 6dFGS spectrum presented in this work. The flux calibration was obtained by normalizing the count spectrum and by multiplying it by a cubic spline constructed with the fluxes extracted from the $BRI$ optical magnitudes available for the source, using the conversion formulae of Fukugita et al. (1995). The results of these X-ray and optical follow-up observations are presented in Table 4 and Table 6. \section{Results} In the following, all the available archival information gathered on each individual source is discussed, together with the X-ray and optical data when available. As for the X-ray data, we note that in 3 cases (IGR J08190-3835, IGR J17520-6018 and IGR J21441+4640) the X-ray measurements confirm the proposed counterpart; in the case of the galaxy pair only one of the two objects, UGC 11806, was detected, while an upper limit was set on the other, UGC 11802. Only in one case (IGR J05583--1257) is no X-ray source detected within the {\it INTEGRAL} error circle (see the section on this source), and an upper limit is reported for the proposed counterpart. For the optical spectra categorization, as they all refer to extragalactic sources (see below), we used the criteria of Veilleux \& Osterbrock (1987) and the line ratio diagnostics of both Ho et al. (1993, 1997) and Kauffmann et al. (2003), which are generally used for emission-line AGN classification. The spectra of the galaxies shown here were not corrected for starlight contamination (see, e.g., Ho et al. 1993, 1997) because of the limited S/N and spectral resolution. In any case, we do not expect this to affect any of our main results and conclusions.
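The 6dFGS flux-calibration step described in Sect. 3 (count spectrum normalized, then multiplied by a smooth curve through the $BRI$ fluxes) can be sketched as follows. This is our own illustration, not the pipeline used in the paper: the zero-points and effective wavelengths are representative broadband values rather than the Fukugita et al. (1995) numbers, the magnitudes are invented, and with only three photometric points we use simple Lagrange interpolation in place of a cubic spline.

```python
# Sketch of a BRI-based flux calibration of a normalised count spectrum.
# Zero-points (Jy at mag 0) and effective wavelengths (Angstrom) below are
# illustrative Bessell-like values, and the magnitudes are hypothetical.

def mag_to_flux_jy(mag, zero_point_jy):
    """Flux density in Jy for a magnitude, given a zero-magnitude flux."""
    return zero_point_jy * 10.0 ** (-0.4 * mag)

def lagrange(xs, ys):
    """Polynomial through the given points (here a parabola through the
    three BRI fluxes, standing in for the cubic spline of the paper)."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

bands = {            # wavelength (A), zero-point (Jy) -- illustrative only
    "B": (4400.0, 4260.0),
    "R": (6400.0, 3080.0),
    "I": (7900.0, 2550.0),
}
mags = {"B": 15.2, "R": 14.1, "I": 13.6}   # hypothetical galaxy photometry

waves = [bands[b][0] for b in "BRI"]
fluxes = [mag_to_flux_jy(mags[b], bands[b][1]) for b in "BRI"]
calib = lagrange(waves, fluxes)

# Apply the calibration curve to a normalised (unit) count spectrum.
spectrum = [(w, 1.0 * calib(w)) for w in range(4500, 7500, 500)]
```

The interpolated curve passes exactly through the three photometric fluxes, so the calibrated spectrum reproduces the catalogued magnitudes by construction.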
\begin{figure*} \hspace{-.1cm} \centering{\mbox{\psfig{file=fig9.ps,width=5.7cm}}} \centering{\mbox{\psfig{file=fig10.ps,width=5.6cm}}} \centering{\mbox{\psfig{file=fig11.ps,width=5.6cm}}} \caption{Optical images of the fields of 3 of the {\it INTEGRAL} hard X-ray sources selected in this paper for optical spectroscopic follow-up: IGR J05583--1257 (left panel), IGR J08190--3835 (central panel) and IGR J21441+4640 (right panel). The proposed optical counterparts are indicated with tick marks (see text). Field sizes are 10$'$$\times$10$'$ and are extracted from the DSS-II-Red survey. In all cases, north is up and east to the left.} \end{figure*} Of the four objects for which we have obtained optical spectra, one was found to be a type 2 AGN (IGR J08190--3835), two (i.e. those belonging to the galaxy pair) were classified as LINERs and one shows the features typical of a starburst galaxy (IGR J05583--1257). In the following, we adopt a cosmology with $H_{\rm 0}$ = 71 km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\Lambda}$ = 0.73, and $\Omega_{\rm m}$ = 0.27; the luminosity distances of the extragalactic objects reported in this paper were computed for these parameters using the Cosmology Calculator of Wright (2006). \bigskip \noindent {\bf IGR J00556+7708} \noindent This source, located at high Galactic latitude, is detected by IBIS in a single Revolution Sequence, suggesting that it might be variable on long timescales. As noted above, in this case the radio and NIR positions are significantly different and the objects are located on the left edge of the IBIS error box (Fig. 1, top-left panel). The radio object is detected at 20 and 92 cm and has a steep spectrum with index $\alpha$=0.79, typical of a radio-loud AGN; no optical counterpart is found within the radio source positional uncertainty.
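The radio spectral indices quoted throughout (here $\alpha$=0.79) follow from two-frequency flux densities under the convention $S_\nu \propto \nu^{-\alpha}$. A minimal sketch, taking the 20 and 92 cm bands at roughly 1.4 GHz (NVSS) and 325 MHz; the flux values below are hypothetical, chosen only to reproduce a steep index:

```python
import math

def spectral_index(s1_mjy, nu1_ghz, s2_mjy, nu2_ghz):
    """Two-point spectral index alpha, convention S_nu ~ nu**(-alpha)."""
    return -math.log(s1_mjy / s2_mjy) / math.log(nu1_ghz / nu2_ghz)

# Hypothetical flux densities: 20 mJy at 1.4 GHz (20 cm), 63 mJy at 325 MHz (92 cm)
alpha = spectral_index(20.0, 1.4, 63.0, 0.325)
print(f"alpha = {alpha:.2f}")   # steep spectrum (alpha > 0.5): radio-loud AGN candidate
```

An index measured this way from only two frequencies carries the combined calibration uncertainty of both surveys, which is why the text treats it as indicative rather than definitive.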
The NIR source is also listed in the USNO-A2.0 catalogue\footnote{http://www.nofs.navy.mil/projects/pmm/USNOSA2doc.html} with name USNO-A2.0 1650\_00208794 and optical magnitudes R$\sim$17.4 and B$\sim$19.5; the NIR colour indices are typical of an active galaxy (Kouzuma \& Yamaoka, 2010). Overall, we conclude that the radio and NIR objects are both likely AGN and, as such, are both possible counterparts of the IBIS detection. \bigskip \noindent {\bf IGR J03103+5706} \noindent This object is also likely variable, as it was detected by IBIS during just one revolution (3 days). The radio/NIR counterpart suggested in this work is located within the IBIS error circle (Fig. 1, top-right panel). It is detected in radio at 20 and 92 cm and has $\alpha$=0.50, which is characteristic of a flat-spectrum radio galaxy. The source has no optical counterpart in USNO-B1.0 (Monet et al. 2003), suggestive of a highly reddened/absorbed object. Its NIR photometry, as well as its location on the Galactic plane, is not typical of an AGN, so the nature of this object remains dubious. \bigskip \noindent {\bf IGR J05583--1257} \noindent This is one of the two persistent sources in the sample; the radio/NIR source is identified in NED with LCSB L0289O, a low surface brightness galaxy at z = 0.003. This source is fairly bright in the far IR, being detected by IRAS at 60 $\mu$m with a flux of 0.6 Jy (IRAS, 1988). The available radio data (Fig. 1, bottom left panel) for this source consist only of a flux at 20 cm, thus no information on the spectral index can be obtained. The 6dFGS optical spectrum (Fig. 4, first panel from top; see also the DSS-II-Red image in Fig. 3, left panel) shows several permitted and forbidden narrow emission lines at the above redshift; their ratios (see also Table 6) indicate that this is a starburst galaxy with no noticeable nuclear activity.
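For a nearby object such as LCSB L0289O ($z=0.003$), luminosities follow directly from the luminosity distance. A minimal flat $\Lambda$CDM sketch using the cosmology adopted in this paper ($H_0$ = 71 km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\rm m}$ = 0.27, $\Omega_\Lambda$ = 0.73); the 2--10 keV flux below is a hypothetical value for illustration, not a measurement:

```python
import math

H0 = 71.0            # km/s/Mpc
OM, OL = 0.27, 0.73  # flat LambdaCDM, as adopted in the text
C = 299792.458       # speed of light, km/s
MPC_CM = 3.0857e24   # cm per Mpc

def lum_distance_mpc(z, steps=1000):
    """Luminosity distance: d_L = (1+z) * (c/H0) * int_0^z dz'/E(z')."""
    dz = z / steps
    integral = sum(dz / math.sqrt(OM * (1 + (i + 0.5) * dz)**3 + OL)
                   for i in range(steps))
    return (1 + z) * (C / H0) * integral

dl = lum_distance_mpc(0.003)     # ~12.7 Mpc
flux = 5e-14                     # erg/cm^2/s, hypothetical 2-10 keV flux
lum = 4 * math.pi * (dl * MPC_CM)**2 * flux
print(f"d_L = {dl:.1f} Mpc, L_X ~ {lum:.1e} erg/s")
```

At this distance a 2--10 keV flux of a few $\times 10^{-14}$ erg cm$^{-2}$ s$^{-1}$ corresponds to $L_X \sim 10^{39}$ erg s$^{-1}$, which is the scale of the upper limit discussed below.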
There is no X-ray detection within the 90\% IBIS error box, but at the border of the 99\% positional uncertainty we find a detection at a low significance level (2.6$\sigma$). This source is located at RA(J2000) = 05h 57m 59.9s, Dec(J2000) = --12$^{\circ}$ 51$'$ 33.0$''$ (6$''$ uncertainty). The X-ray flux, computed assuming a power law with photon index fixed at 1.8, is $7.3 \times 10^{-14}$ erg cm$^{-2}$ s$^{-1}$. At this position no optical, IR, or radio source is found. On the other hand, we do not find any X-ray emission at the location of LCSB L0289O. The 2--10 keV flux upper limit implies a source luminosity of $<$ 10$^{39}$ erg s$^{-1}$, i.e. quite low for an AGN, unless the source is extremely variable (but this is not evident in the IBIS data) or extremely absorbed (thus masking a Compton-thick AGN in a starburst galaxy). The persistent nature of the source, the stringent upper limit in X-rays and the optical spectrum suggest that this is probably not the counterpart of the IBIS source, which is either a spurious detection or has a different association than LCSB L0289O. \bigskip \noindent {\bf IGR J08190--3835} \noindent This is the second persistent source in our sample. Within the IBIS uncertainty, XRT detects only one source, with a statistical significance of $5.8\sigma$ in the energy range 0.3--10 keV. This object is positionally coincident with both the 2MASX and NVSS sources (Fig. 1, bottom-right panel). The radio analysis provides a flux of $\sim$3 mJy at 20 cm. The source is fairly bright at NIR frequencies (see Table 2) and it is also detected in the optical band, where it is listed in the USNO-A2.0 catalogue with name USNO-A2.0 0450\_06519994 (see Table 5 and Table 6) and magnitudes R$\sim$14.6 and B$\sim$16.8. From a comparison with the NIR magnitudes in Table 2, the R--K colour is therefore $\sim$4, which suggests a red or obscured galaxy (Fig. 3, central panel and Table 6).
The X-ray spectrum is described by an absorbed power law with photon index fixed at 1.8 and an observed 2--10 keV flux of $1.5 \times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$; the intrinsic column density is N$_{\rm H}(intr) \sim 1.4 \times 10^{23}$ cm$^{-2}$, which exceeds the Galactic value of $9.6 \times 10^{21}$ cm$^{-2}$ (Kalberla et al. 2005). Optical spectroscopy (Fig. 4, second panel from top) shows that the source displays H$_{\alpha}$, [N {\sc ii}] and [S {\sc ii}] narrow emission lines at redshift $z=0.009\pm0.001$ superimposed on a very reddened continuum. The flux ratios among these emission features suggest that the object is a Type 2 AGN. Overall, we conclude that the proposed 2MASX/NVSS/XRT source is the actual counterpart of the IBIS-detected object and that it is a Narrow Line (obscured) AGN. \bigskip \noindent {\bf IGR J17219--1509} \noindent The radio/NIR source is located at the edge of the IBIS error circle (Fig. 2, top-left panel); it is very weak in radio, being close to the flux limit of the NVSS map ($\sim$5 mJy). Very little is known about this source except that it is strongly variable in the IBIS waveband, being detected only in one revolution (R403, i.e. for a few days) and showing a large bursticity factor of 4 (see Bird et al. 2010 for details). The source has an optical magnitude of 18.2 in R; the NIR colours, as well as its location above the Galactic plane, are compatible with the source being a nearby AGN (Kouzuma \& Yamaoka, 2010). We conclude, on the basis of the above considerations, that this is an extragalactic object, most likely an AGN, and that it is a good association for the {\it INTEGRAL} source. \bigskip \noindent {\bf IGR J17520--6018} \noindent This source is also classified as variable by Bird et al. (2010).
An X-ray source is well detected, at a $13.6\sigma$ confidence level in the range 0.3--10 keV, by XRT within the IBIS uncertainty and is located at RA(J2000) = 17h 51m 55.9s and Dec(J2000) = --60$^{\circ}$ 19$'$ 44$''$ (4$''$ uncertainty). It coincides with both the 2MASS extended object and the radio source reported in the SUMSS (Fig. 2, top-right panel). The source has only one detection in radio, at 36 cm. Besides being detected in the NIR, the galaxy is also listed in the USNO-B1.0 catalogue with optical B and R magnitudes of 15.5 and 14.7, respectively. The X-ray data analysis provides an absorbed power law spectrum with fixed photon index of 1.8 and an observed 2--10 keV flux of $2.6 \times 10^{-12}$ erg cm$^{-2}$ s$^{-1}$; the intrinsic column density is N$_{\rm H}(intr) = 1.3 \times 10^{23}$ cm$^{-2}$, which exceeds the Galactic value (see Table 4). This source is also reported in the BAT 58--Month catalogue\footnote{ http://swift.gsfc.nasa.gov/docs/swift/results/bs58mon/} as SWIFT J1751.8--6019, with a flux of 1.7$\times$10$^{-11}$ erg cm$^{-2}$ s$^{-1}$ in the range 14--195 keV. Based on all the above information, we conclude that this galaxy is the likely counterpart of the IBIS/BAT source and, on the basis of the detected X-ray absorption, we further suggest that it is a type 2 AGN. \bigskip \noindent {\bf IGR J21268+6203} \noindent This IBIS source is also highly variable, being reported with a high bursticity factor by Bird et al. (2010); it is also likely an extragalactic source, being located above the Galactic plane. As noted in Sect. 2.1, in this case the radio and NIR positions are significantly different (Fig. 2, bottom-left panel). The NIR object (Table 2) has an optical counterpart, USNO-B1.0 1520\_0331686, listed in the USNO-B1.0 catalogue with magnitudes B$\sim$20.3, R$\sim$17.5 and I$\sim$16.4; no optical association was found for the radio object. The NIR colours are also in this case typical of a nearby AGN (Kouzuma \& Yamaoka, 2010).
The radio data available provide a spectral index $\alpha$=--0.3 between 20 and 92 cm, indicative of a probable GHz-Peaked-Spectrum (GPS) radio source, i.e. a source characterised by a convex radio spectrum peaking near 1 GHz (Stanghellini 2006; Lister 2003; O'Dea 1998). GPS sources are compact, powerful, young radio galaxies that reside in gas-rich environments at the centers of active galaxies. Thus, the observational evidence suggests that the radio and NIR objects are both likely AGN and, as such, are good candidates for an association with the {\it INTEGRAL} source. \begin{figure} \vspace{-.3cm} \mbox{\psfig{file=fig12.ps,width=5.8cm,angle=270}} \mbox{\psfig{file=fig13.ps,width=5.8cm,angle=270}} \vspace{-.7cm} \mbox{\psfig{file=fig14.ps,width=5.8cm,angle=270}} \mbox{\psfig{file=fig15.ps,width=5.8cm,angle=270}} \vspace{-.5cm} \caption{Spectra (not corrected for the intervening Galactic absorption) of the possible optical counterparts of some of the {\it INTEGRAL} sources presented in this paper (IGR J05583--1257, IGR J08190--3835 and IGR J21441+4640; see text). For each spectrum, the main spectral features are labeled. The symbol $\oplus$ indicates atmospheric telluric absorption bands.} \end{figure} \begin{table} \caption[]{Fluxes (in units of 10$^{-14}$ erg cm$^{-2}$ s$^{-1}$) of the main emission lines detected in the spectra of the objects reported in Fig. 4. The correction for the Galactic reddening was computed assuming a color excess $E(B-V)$ = 0.479, 1.54, 0.404 and 0.403 mag for LCSB L0289O, USNO-A2.0 0450\_06519994, UGC 11802 and UGC 11806, respectively (from Schlegel et al. 1998).
Uncertainties and upper limits for the fluxes are reported at 1$\sigma$ and 3$\sigma$ confidence levels, respectively.} \begin{center} \begin{tabular}{lrr} \noalign{\smallskip} \hline \hline \noalign{\smallskip} \multicolumn{1}{c}{Line} & \multicolumn{1}{c}{Observed flux} & \multicolumn{1}{c}{Corrected flux} \\ \noalign{\smallskip} \hline \noalign{\smallskip} \multicolumn{3}{c}{LCSB L0289O ($z=0.003$)} \\ H$_\beta$ & 17.5$\pm$0.9 & 87$\pm$4 \\ $[$O {\sc iii}$]$ $\lambda$5007 & 92$\pm$3 & 428$\pm$13 \\ $[$O {\sc i}$]$ $\lambda$6300 & $<$0.5 & $<$2.2 \\ H$_\alpha$ & 34.2$\pm$1.0 & 105$\pm$3 \\ $[$N {\sc ii}$]$ $\lambda$6583 & 1.0$\pm$0.3 & 3.3$\pm$1.0 \\ $[$S {\sc ii}$]$ $\lambda$6716 & 1.8$\pm$0.3 & 5.5$\pm$0.9 \\ $[$S {\sc ii}$]$ $\lambda$6731 & 1.2$\pm$0.2 & 3.6$\pm$0.6 \\ & & \\ \multicolumn{3}{c}{USNO-A2.0 0450\_06519994 ($z=0.009$)} \\ H$_\beta$ & $<$0.07 & $<$9 \\ $[$O {\sc iii}$]$ $\lambda$5007 & $<$0.07 & $<$9 \\ $[$O {\sc i}$]$ $\lambda$6300 & $<$0.17 & $<$7 \\ H$_\alpha$ & 0.32$\pm$0.08 & 11$\pm$3 \\ $[$N {\sc ii}$]$ $\lambda$6583 & 0.12$\pm$0.03 & 4.1$\pm$1.0 \\ $[$S {\sc ii}$]^*$ & 0.26$\pm$0.04 & 8.3$\pm$1.2 \\ & & \\ \multicolumn{3}{c}{UGC 11802 ($z=0.0105$)} \\ H$_\beta$ & 1.41$\pm$0.14 & 5.4$\pm$0.5 \\ $[$O {\sc iii}$]$ $\lambda$5007 & 0.76$\pm$0.11 & 2.7$\pm$0.4 \\ $[$O {\sc i}$]$ $\lambda$6300 & 0.24$\pm$0.06 & 0.55$\pm$0.14 \\ H$_\alpha$ & 9.5$\pm$0.3 & 24.1$\pm$0.7 \\ $[$N {\sc ii}$]$ $\lambda$6583 & 2.47$\pm$0.12 & 6.4$\pm$0.3 \\ $[$S {\sc ii}$]$ $\lambda$6716 & 2.26$\pm$0.16 & 5.6$\pm$0.4 \\ $[$S {\sc ii}$]$ $\lambda$6731 & 1.58$\pm$0.11 & 3.9$\pm$0.3 \\ & & \\ \multicolumn{3}{c}{UGC 11806 ($z=0.0110$)} \\ H$_\beta$ & 1.22$\pm$0.12 & 4.8$\pm$0.5 \\ $[$O {\sc iii}$]$ $\lambda$5007 & 0.76$\pm$0.08 & 2.8$\pm$0.3 \\ $[$O {\sc i}$]$ $\lambda$6300 & 0.21$\pm$0.05 & 0.70$\pm$0.18 \\ H$_\alpha$ & 7.0$\pm$0.2 & 17.7$\pm$0.5 \\ $[$N {\sc ii}$]$ $\lambda$6583 & 2.88$\pm$0.14 & 7.2$\pm$0.4 \\ $[$S {\sc ii}$]$ $\lambda$6716 & 1.53$\pm$0.11 & 3.7$\pm$0.3 \\ 
$[$S {\sc ii}$]$ $\lambda$6731 & 1.24$\pm$0.09 & 3.1$\pm$0.3 \\ \noalign{\smallskip} \hline \noalign{\smallskip} \multicolumn{3}{l}{$^*$: The doublet is blended due to the low} \\ \multicolumn{3}{l}{spectral S/N and resolution.} \\ \noalign{\smallskip} \hline \noalign{\smallskip} \end{tabular} \end{center} \end{table} \bigskip \noindent {\bf IGR J21441+4640} \noindent This is also a strongly variable source in IBIS (Bird et al. 2010), with a bursticity larger than 4. Within the IBIS positional uncertainty of this source there are two galaxies (see Fig. 3, right panel), UGC 11802 and UGC 11806, at the same redshift ($z$=0.011), which form the galaxy pair KPG559. Both galaxies are detected at radio frequencies in the NVSS catalogue (NVSS J214354+463705, NVSS J214413+463718) and are also listed in the 2MASS extended catalogue (2MASX J21435408+4637048, 2MASX J21441345+4637169) (see Fig. 2, bottom-right panel). The radio data analysis provides a 20 cm flux for both objects, which are also detected at 92 cm and have spectral indices $\alpha$=0.38 and $\alpha$=0.51, respectively; both the radio power and the spectral index are typical of low luminosity AGN. The optical B (R) magnitudes of the two objects are 14.5 (9.5) and 14.7 (12.9). Both are detected with AKARI in the far IR, from 90 to 160 $\mu$m, at the few-Jansky flux level (Murakami et al. 2007). Within the IBIS uncertainty, XRT detects only one galaxy of the pair (UGC 11806), at a low statistical significance of $2.5\sigma$ in the range 0.3--10 keV. The XRT spectrum can be described by an unabsorbed power law with fixed photon index of 1.8 and an observed 2--10 keV flux of $1.2 \times 10^{-13}$ erg cm$^{-2}$ s$^{-1}$. The upper limit of the X-ray flux in the 2--10 keV band for the companion galaxy UGC 11802 is $8.3\times 10^{-14}$ erg cm$^{-2}$ s$^{-1}$. Optical spectroscopy (Fig.
4, lower panel) indicates that UGC 11806 is a narrow emission-line galaxy with a flat continuum and prominent Balmer, [N {\sc ii}], [O {\sc iii}] and [S {\sc ii}] lines at a redshift consistent with that found in the literature. Emission line ratios suggest that this is a transition object, that is, a LINER (Heckman 1980) with possible contamination from an underlying starburst. The optical spectrum of UGC 11802 (Fig. 4, third panel from top) indicates instead that this galaxy is a starburst, with no indication of AGN activity (which explains the absence of detectable X-ray emission from its nucleus). The low significance X-ray detection of UGC 11806 may be due to the variable nature of the source (Bird et al. 2010) rather than to high absorption, which indeed is not readily apparent from the optical spectrum. To this end, we can estimate the reddening local to the source by assuming an intrinsic H$_\alpha$/H$_\beta$ line ratio of 2.86 (Osterbrock 1989). The corresponding color excess, obtained by comparing the intrinsic line ratio with the measured one and applying the Galactic extinction law of Cardelli et al. (1989), is $E(B-V)$ = 0.25 mag. This, using the formula of Predehl \& Schmitt (1995), corresponds to a hydrogen column density $N_{\rm H}$ = 1.4$\times$10$^{21}$ cm$^{-2}$ local to the AGN. Overall, we conclude that UGC 11806 is the counterpart of the IBIS source; it is probably a low luminosity, highly variable AGN of LINER type. \section{Conclusion} The basic idea of this work is to propose a way to efficiently identify AGN among a set of unidentified objects detected in hard X-ray surveys. The method, which consists of two consecutive steps of cross-correlation between hard X-ray objects and IR/radio catalogues, is tested here on the unidentified sources contained in the 4th IBIS survey catalogue.
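The reddening estimate for UGC 11806 described above (Balmer decrement $\rightarrow$ $E(B-V)$ $\rightarrow$ $N_{\rm H}$) can be sketched in a few lines; the $k$ values are those commonly quoted for the Cardelli et al. (1989) law with $R_V$ = 3.1, and the $N_{\rm H}$/$A_V$ conversion is from Predehl \& Schmitt (1995):

```python
import math

K_HB, K_HA = 3.61, 2.53      # Cardelli et al. (1989) extinction curve, R_V = 3.1
R_INT = 2.86                 # intrinsic Ha/Hb ratio (Osterbrock 1989)

def ebv_from_balmer(ha, hb):
    """Colour excess from the observed Balmer decrement."""
    return 2.5 / (K_HB - K_HA) * math.log10((ha / hb) / R_INT)

# Galactic-reddening-corrected fluxes for UGC 11806 (Table 6)
ebv = ebv_from_balmer(17.7, 4.8)        # close to the 0.25 mag quoted above
av = 3.1 * ebv                          # visual extinction
nh = 1.79e21 * av                       # Predehl & Schmitt (1995): N_H ~ 1.79e21 * A_V
print(f"E(B-V) = {ebv:.2f} mag, N_H ~ {nh:.1e} cm^-2")
```

With the Table 6 fluxes this reproduces the local column density of $\sim$1.4$\times$10$^{21}$ cm$^{-2}$ quoted above, well below the $\sim$10$^{23}$ cm$^{-2}$ values measured in X-rays for the absorbed sources of the sample.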
Following this procedure, we first used the 2MASS extended catalogue to identify galaxies in the IBIS error circle and then extracted those which were also radio emitters, in order to isolate AGN candidates by means of the NVSS and SUMSS radio catalogues. As a result we obtained a set of 8 objects for which we performed a more in-depth study, and in some cases optical and/or X-ray follow-up observations, in order to verify their true association with the {\it INTEGRAL} source as well as their AGN nature and class. The purpose of this work is not to search for AGN using different selection criteria, but rather to show how a multiwavelength (radio, IR, optical, and X-ray) study of these 8 sources can be used to test the reliability and accuracy of the proposed method. In three cases (IGR J08190--3835, IGR J17520--6018, IGR J21441+4640) we found the X-ray counterparts of the IBIS sources. The optical spectra obtained for two of these sources (IGR J08190--3835, IGR J21441+4640) allowed us to identify them as AGN belonging to the Type 2 and LINER classes. The third one (IGR J17520--6018) is most likely a type 2 AGN on the basis of the high X-ray absorption measured. We further suggest that 3 sources (IGR J00556+7708, IGR J17219--1509, IGR J21268+6203) are likely active galaxies on the basis of their radio spectra, NIR photometry and location above the Galactic plane: they are all likely associated with the IBIS objects. LCSB L0289O is instead a starburst galaxy, which is at most a very weak X-ray source and so unlikely to emit at high energies; we conclude that it is an improbable association for IGR J05583--1257. The nature of this {\it INTEGRAL} source therefore remains open. In only one case (IGR J03103+5706) do we not have enough information for a clear classification of the radio/NIR source as an AGN; the nature of this {\it INTEGRAL} object remains dubious.
Overall, the detailed information and follow-up measurements confirm the effectiveness of our method in the search for AGN among unidentified hard X-ray emitters. On the basis of this work, we have classified the sources discussed in this paper as AGN in the 4th IBIS catalogue; the same approach can be used to pinpoint AGN candidates among Swift/BAT or future NuSTAR (Harrison et al. 2010) unidentified objects. \section*{Acknowledgments} We thank Jos\'e Vel\'asquez for Service Mode observations at the CTIO 1.5m telescope, and Fred Walter for relaying the observing information to him. We also thank the anonymous referee for useful remarks which helped us to improve the quality of this paper. This research has made use of the ASI Science Data Center Multimission Archive; it also used the NASA Astrophysics Data System Abstract Service, the NASA/IPAC Extragalactic Database (NED), and the NASA/IPAC Infrared Science Archive, which are operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This publication made use of data products from the Two Micron All Sky Survey (2MASS), which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This research has also made use of data extracted from the Six-degree Field Galaxy Survey and the Sloan Digital Sky Survey archives; it has also made use of the SIMBAD database operated at CDS, Strasbourg, France, and of the HyperLeda catalog operated at the Observatoire de Lyon, France. The authors acknowledge the ASI and INAF financial support via grant No. I/008/07. LM is supported by the University of Padua through grant No. CPDR061795/06. VC is supported by the CONACYT research grant 54480-F (M\'exico).
DM is supported by the Basal CATA PFB 06/09, and FONDAP Center for Astrophysics grant No. 15010003. GG acknowledges the support of Fondecyt regular 1085267, Fondo Gemini 32090009, and Fondo ALMA 31090009. The authors acknowledge the ASI financial support via ASI--INAF contracts I/033/10/0 and I/009/10/0.
\section{Introduction} Merging binaries consisting of black holes (BHs) and neutron stars (NSs) are prime targets for observation by ground-based gravitational wave (GW) detectors \citep[such as LIGO,][]{LIGO} and may be the progenitors of some short-hard gamma-ray bursts \citep[sGRBs,][]{npp92}. The great diversity of sGRB characteristics and the potential variation in the corresponding GW signals motivate a thorough investigation of the possible outcomes of binary compact object (BCO) mergers. BCOs may form through evolution of primordial binaries or through dynamical processes in star clusters~\citep{oleary,lee2010}. The latter population motivates this study of BH-NS interactions for systems which are initially marginally unbound. Star clusters at the centers of galaxies undergo mass segregation, resulting in heavier objects concentrated toward the center \citep[see, e.g.][]{bw77}. Recent Fokker-Planck models suggest that the Galactic nuclear cluster (NC) should have $\sim1800$ BHs and $\sim400$ NSs in the central 0.1~pc \citep{hopman06}. Such clusters are thus promising sites for BH-NS close encounters. Using models of galactic NCs, \citet{oleary} calculate the rate of binary BH formation through GW emission in close encounters. They find corresponding Advanced LIGO detection rates between 5 and 2700 per year, and estimate that the BH-NS rate could be around 1\% of this. These capture binaries form with relatively small periapsis separations, $r_p$; in particular, $\sim30$\% form with $r_p \lesssim 10 M$, where $M$ is the total mass of the system (Fig.~4 of \citet{oleary}). (Unless otherwise stated we employ geometric units with $G=c=1$.) This is due in part to the large velocity dispersion in the cluster core ($\sim1000$~km~s${}^{-1}$), but also to gravitational focusing, which may be understood as follows. The total rate is proportional to the cross-section: $\Gamma \propto \pi b^2$, where $b$ is the impact parameter.
However, for Newtonian hyperbolic orbits with relative velocity $w$ at infinity, $r_p = b^2w^2/2M + O(w^4)$. Thus, the rate is {\em linearly} proportional to $r_p$. Globular clusters (GCs) that have undergone core collapse may also host BCO close encounters due to the high density of compact objects in their cores \citep{fabian75,grindlay2006}. For example, models of M15 calibrated to the observational velocity dispersion yield a NS fraction of $\sim55$\% in the inner 0.2~pc \citep{dull}. \citet{lee2010} calculate the expected rate of BCO interactions inside M15 as a function of time and then scale these results for GCs with a distribution of half-mass relaxation times. Depending upon the GC evolution model, they find that the global rate for BH-NS {\em collisions} (i.e., events for which $r_p \le R_{\rm NS}+R_{\rm BH}$) peaks at $\sim8-25~$yr${}^{-1}$Gpc${}^{-3}$ at redshifts between $z=0.36$ and $z=0.97$, and slowly declines to between $50-85\%$ of peak by $z=0$. (We obtained the BH-NS collision rate by re-scaling their NS-NS results according to the factors in Table 3 of \citealt{lee2010}). For the fiducial BH-NS system considered by \citet{lee2010}, collisions occur at $r_p\leq2.7 M$. Since the rate scales linearly with $r_p$, this implies an interaction rate $\sim30-100$~yr${}^{-1}$Gpc${}^{-3}$ with (for example) $r_p\le10 M$. Population synthesis models \citep{belczynski} find comparable rates for {\em primordial} BH-NS mergers: from $\sim0.1~$yr~${}^{-1}$Gpc${}^{-3}$ (pessimistic) to $\sim120~$yr~${}^{-1}$Gpc${}^{-3}$ (optimistic). However, primordial BH-NS binaries will enter the LIGO band with essentially zero eccentricity \citep{kowalska}. Thus, GW signals from BH-NS close encounters should be readily distinguishable due to their significant eccentricities. 
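The gravitational-focusing argument above can be made concrete: conservation of energy and angular momentum for a Newtonian hyperbolic orbit give $b^2 = r_p^2 + 2Mr_p/w^2$, so for $w \ll \sqrt{M/r_p}$ the cross-section $\pi b^2 \approx 2\pi M r_p/w^2$ is linear in $r_p$. A minimal sketch in geometric units:

```python
def impact_parameter_sq(rp, w, M=1.0):
    """b^2 for a Newtonian hyperbolic orbit with periapsis rp and
    relative velocity at infinity w (geometric units, G = c = 1):
      energy:    w^2/2 = v_p^2/2 - M/rp
      ang. mom.: b*w   = rp*v_p
      =>         b^2   = rp^2 + 2*M*rp/w^2
    """
    return rp**2 + 2.0 * M * rp / w**2

w = 1000.0 / 299792.458        # 1000 km/s expressed in units of c
s1 = impact_parameter_sq(5.0, w)
s2 = impact_parameter_sq(10.0, w)
# Gravitational focusing: the cross-section scales ~linearly with rp,
# so doubling rp essentially doubles the encounter rate.
print(f"sigma(10M)/sigma(5M) = {s2 / s1:.3f}")
```

For $w \sim 10^{-3}c$ the focusing term dominates the geometric $r_p^2$ term by several orders of magnitude, which is why the rate is effectively linear in $r_p$ over the whole range considered here.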
We further note that the sGRB rate, 8-30~yr${}^{-1}$Gpc${}^{-3}$ \citep{guettapiran06}, is comparable to the primordial BH-NS rate, and somewhat less than the expected NS-NS merger rate (30-400~yr${}^{-1}$Gpc${}^{-3}$ \citep{belczynski}). The estimates of \citet{lee2010} thus suggest that close encounters in clusters could contribute significantly to the sGRB progenitor population, especially if their emission is less tightly beamed than that of primordial mergers \citep{grindlay2006}. BH-NS mergers in clusters with $r_p \lesssim 10 M$ will exhibit complicated behaviors probing the strong field regime of general relativity (GR). For stellar mass BH companions, one cannot treat the NS as a perturbation of the BH spacetime. Furthermore, the non-linear nature of GR will most strongly manifest during a close encounter of the BH-NS. Numerical simulations within full GR are thus the preferred tool for exploring these systems. To date, such simulations have been performed by several groups \citep{illinoisBHNSspin,shibataBHNS3,matt,chawla,pannarale} (see \citealt{mattsreview} for a review). These studies employ quasi-circular initial data, appropriate for primordial systems, and have explored a range of behaviors depending on the mass ratio, the BH spin, the NS equation of state (EOS), and the magnetic field. Our results complement these works by offering a first study within full GR of initially hyperbolic encounters, of relevance to BH-NS capture events and related systems that merge with large eccentricity. In the remainder of the letter we outline our numerical method, present results of our parameter space survey, and discuss some of their implications. The main result is the striking dependence of the outcome---disk mass, unbound material, and GW signal---on the impact parameter. 
Though the most ``extreme'' outcomes might require fine-tuning and hence be rare, there is strong variation over the entire range $r_p \lesssim 10M$ we considered: for a hyperbolic encounter in a NC, this corresponds to roughly $30\%$ of encounters that lead to a bound system. We were not able to follow the larger $r_p$ cases through merger due to lack of computational resources, though certainly a fraction of these should also exhibit similar variability at the time of merger. This suggests that these systems could be a wellspring of varied and interesting GW and electromagnetic emission. \section{Numerical approach} Our 3D numerical code solves the Einstein field equations using finite difference (FD) techniques with Berger and Oliger style adaptive mesh refinement \citep[AMR,][]{bo84}. The numerical scheme for evolving the spacetime metric is substantively the same as the generalized harmonic method described by \citet{gh3d,paper2}, except that the FD scheme is fourth order accurate and uses fourth order Runge-Kutta time integration. We model the NS material as a perfect fluid and solve the hydrodynamic energy-momentum and continuity equations using conservative high-resolution shock-capturing schemes with second order Runge-Kutta time integration, and enforce strict conservation at AMR boundaries using the flux correction method of \citet{bc89}. We have implemented several methods for calculating inter-cell fluxes (HLL, \citealt{hll}; the Roe solver, \citealt{eulderink}; and the Marquina flux, \citealt{marquina}) and for reconstructing fluid primitive variables at cell interfaces (MC and minmod, \citealt{toro}; PPM, \citealt{ppm}; and WENO-5, \citealt{weno5}). Unless otherwise noted, the simulations described below were performed with HLL and WENO-5. Our hydrodynamics scheme allows for any EOS of the form $P=P(\rho,\epsilon)$ (e.g., $\Gamma$-law, piecewise polytrope, and tabular EOSs). 
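As an illustration of the $P=P(\rho,\epsilon)$ interface, the sketch below implements a simple cold polytrope plus a thermal $\Gamma$-law component; the constants are illustrative values, and this is not the actual broken $\Gamma$-law "HB" EOS used in the simulations:

```python
def pressure(rho, eps, K=100.0, gamma_cold=2.0, gamma_th=1.5):
    """Cold polytrope plus thermal Gamma-law component:
       P = K*rho**gamma_cold + (gamma_th - 1)*rho*eps_th,
    where eps_th is the specific internal energy in excess of the cold part.
    K, gamma_cold, gamma_th are illustrative, not the 'HB' EOS parameters.
    """
    p_cold = K * rho**gamma_cold
    eps_cold = K * rho**(gamma_cold - 1.0) / (gamma_cold - 1.0)
    eps_th = max(eps - eps_cold, 0.0)   # shock heating only ever adds energy
    return p_cold + (gamma_th - 1.0) * rho * eps_th

# Cold matter recovers the polytrope; shocked matter gains thermal pressure
rho = 1e-3
print(pressure(rho, 0.10))    # on the cold curve: P = K*rho**2
print(pressure(rho, 0.15))    # shocked: extra thermal pressure term
```

The thermal term is what allows shock heating to feed back into the pressure during disruption, which a barotropic $P(\rho)$ alone cannot capture.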
We have tested the new hydrodynamics sector of our code on problems including 1D and 2D Riemann problems, Bondi accretion, and single NSs. More details on our code and tests will be presented in \citet{upcomingpaper}. For this first study the only parameter we vary is the initial periapsis separation. The BH and NS have a 4:1 mass ratio, and both are initially non-rotating. The NS is modeled as a TOV star with a broken $\Gamma$-law EOS (labeled ``HB'' in \citealt{jocelyn} and including a thermal component to allow for shock heating) and has mass $1.35 M_\odot$ and radius 11.6~km. The initial orbital parameters describe a hyperbolic encounter with relative velocity $w=1000$~km~s${}^{-1}$, corresponding to the central region of a NC \citep{oleary}. These orbits are nearly parabolic, with $e-1\sim O(10^{-5})$, and hence the close-encounter behavior also adequately describes such events in GCs. We superimpose initial data for the BH and NS at an initial separation of $50 M$ (498~km) with initial velocities according to a Newtonian orbit with the desired $r_p$. Though these superposed initial data do not strictly satisfy the constraint equations except at infinite separation, tests performed at various initial separations indicate that $50 M$ is sufficiently large that the constraint violation does not appreciably affect the system (relative to truncation error). \section{Results and discussion} We consider a range of periapsis separations from $r_p/M=5.0$ to 15 ({\it i.e.,} 50 to 150 km). (Henceforth, we will consider $r_p$ to be normalized by $M$.) In all of these cases, sufficient energy is carried away by GWs to result in a bound system. Our simulations exhibit three types of behavior: (1) a direct plunge ($r_p=5.0,5.83,6.67,6.81$), (2) following the initial periapsis passage, a single elliptical orbit and then a plunge ($r_p=6.95,7.22,7.5$), and (3) following the initial periapsis passage, a long-period elliptical orbit ($r_p=8.75,10.0,12.5,15.0$). 
For the latter group (and the high resolution $r_p=7.5$ run), the entire orbit is prohibitively long to simulate, and we focus on the burst of GWs associated with the first periapsis passage. For one case in each class ($r_p=5.0,7.5,10.0$) we ran three simulations with different characteristic mesh spacings (but always with 7 refinement levels) for convergence studies. At $t=0$, the low (medium, high) resolution run had finest meshes covering the BH and NS of roughly $80^3$ ($100^3$,$150^3$) cells, resolving the NS diameter with $\sim40$ ($50$, $75$) cells and the BH horizon diameter with $\sim70$ ($85$, $130$) cells. (We note that the level structure is set by truncation error estimates and is adjusted with time.) All other simulations were run at medium resolution. Unless otherwise noted, results will be reported for medium resolution, with error bars (where appropriate) computed from convergence calculations. Our simulations employ compactified coordinates such that the outer boundaries extend to spatial infinity. Thus, the global (ADM) $M$ and $J$ should be conserved. In practice, however, we must evaluate these quantities at a finite distance, making them subject to gauge artifacts, some propagating outward from the central BH/NS region from $t=0$. For $t<200M$, an extraction sphere of $300M$ is free of propagating artifacts, whence $M$ ($J$) is conserved to better than $0.3$ (2.0)\% for all cases at medium resolution. Based on the above set of runs, some guidance from perturbative results \citep{pm63,turner77,bg2010}, and a zoom-whirl geodesic analogue of the two-body problem \citep{Pretorius:2007jn}, we conjecture the following qualitative behavior as a function of $r_p$ for initial encounters resulting in a bound system. Consider $n$, the non-negative integer number of periapsis passages before disruption/plunge, and $r^i_p$, the periapsis distance on the $i^{th}$ encounter for $1\le i \le n$.
{\em Define} $r_p^{n+1}$ to be an {\em effective} periapsis distance for the final close encounter (i.e., the corresponding $r_p$ before the disruption/plunge part of the final encounter). Group (1) above has $n=0$, group (2) $n=1$, and group (3) $n\geq1$. The behavior of a close encounter will depend sensitively on the distance $\delta r^i_p=r^i_p-r_c$ between $r^i_p$ and a radius $r_c$ of an effective {\em unstable circular} orbit with the {\em same} energy and angular momentum. If $\delta r^i_p$ is sufficiently small (relative to $M$), the orbit will exhibit a whirl phase, where it asymptotes to a nearly circular orbit. The smaller $\delta r^i_p$, the longer the duration of the whirl, with a maximum when $\delta r^i_p=0$ that equals the time required for the binary to lose its {\em excess} orbital kinetic energy, either via GW emission or tidal transfer of energy to the NS material. The excess energy is the difference between the total energy of the binary entering the whirl phase and a putative binary on a {\em quasi-circular} inspiral at $r_c$. Because of the requirement that $\delta r^i_p$ be small, we only expect the possibility of significant whirling near the ultimate or penultimate encounter. If $\delta r^i_p$ is negative, the whirl will directly transition to a plunge. If positive and small, there will be a separation following the whirl; however, the effective $r_c$ for the next encounter {\em increases} while $r^i_p$ {\em decreases} due to GW emission, and since this is quite sizeable for our 4:1 mass ratio system, $\delta r^{i+1}_p$ will likely be negative, resulting in a subsequent plunge. Furthermore, the separation where the NS starts to be tidally disrupted is within the radii of unstable whirl orbits; hence, the NS will not survive any prolonged whirl phase.
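The role of the unstable circular orbit radius $r_c$ has a simple analogue for test-particle geodesics in Schwarzschild spacetime, which underlies the zoom-whirl picture of \citet{Pretorius:2007jn}. A sketch of that analogue only (not a computation for the actual comparable-mass binary), in units with $G=c=1$:

```python
import math

def circular_orbit_radii(L, M=1.0):
    """Radii of circular geodesic orbits for a test particle of specific
    angular momentum L around a Schwarzschild BH of mass M (G = c = 1).
    Setting dV_eff/dr = 0 for V_eff = (1 - 2M/r)(1 + L^2/r^2) gives
    r = (L^2 +/- sqrt(L^4 - 12 L^2 M^2)) / (2M); the '-' root is the
    unstable orbit, the '+' root the stable one.  They merge at the
    ISCO, r = 6M, when L = 2*sqrt(3)*M."""
    disc = L**4 - 12.0 * L**2 * M**2
    if disc < 0:
        return None  # no circular orbits exist for this L
    s = math.sqrt(disc)
    return ((L**2 - s) / (2 * M), (L**2 + s) / (2 * M))  # (unstable, stable)

# Example: L = 4M puts the unstable circular orbit at r_c = 4M
r_unstable, r_stable = circular_orbit_radii(4.0)
print(r_unstable, r_stable)   # 4.0 12.0
```

In this analogue, an orbit with periapsis slightly outside the unstable radius whirls there before zooming back out, mirroring the behavior conjectured above.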
Prior encounters (for larger $n$ cases) will not exhibit significant whirling, and while $\delta r^i_p$ is large the orbital evolution could better be described as a series of precessing ellipses with decreasing eccentricity and semi-major axis. Considering the number of orbits $n$ as a function of the {\em initial} $r_p$, $n(r_p)$ is monotonically increasing, and the values of $r_p$ where one could see a notable final whirl would be near the steps in $n(r_p)$. The most pronounced whirl behavior will occur for small $n(r_p)$; as $n(r_p)$ increases, the amount of excess kinetic energy left over once the final orbit is reached decreases, and for sufficiently large $n$ the late stages will essentially follow a quasi-circular inspiral. \begin{figure*} \begin{center} \includegraphics[height = 1.3 in]{vertical_scale.eps} \put(2,90){$1\times \rho_{0}$} \put(2,45){$10^{-3}$} \put(2,0){$10^{-6}$} \hspace{0.5 in} \includegraphics[clip=true, draft=false, viewport=0 0 780 665, width=1.66in]{rp5_merge.eps} \includegraphics[clip=true, draft=false, viewport=0 0 780 615, width=1.66in]{rp6_81_tail.eps} \hspace{0.4 in} \includegraphics[height = 1.3 in]{vertical_scale.eps} \put(2,90){$10^{-4} \rho_{0}$} \put(2,45){$10^{-6}$} \put(2,0){$10^{-8}$} \hspace{0.5 in} \includegraphics[clip=true, draft=false, viewport=20 0 832 640, width=1.66in]{rp6_67_disk.eps} \\ \hspace{0.6 in}\includegraphics[clip=true, draft=false, viewport=30 30 810 645, width=1.66in]{rp6_95_transfer.eps} \includegraphics[clip=true, draft=false, viewport=120 100 687.3 547.3, width=1.66in]{rp6_95_later.eps} \hspace{0.4 in} \includegraphics[height = 1.3 in]{vertical_scale.eps} \put(2,90){$10^{-5} \rho_{0}$} \put(2,60){$10^{-6}$} \put(2,30){$10^{-7}$} \put(2,0){$10^{-8}$} \hspace{0.5 in} \includegraphics[clip=true, draft=false, viewport=0 0 780 615, width=1.66in]{rp7_5_disk.eps} \caption{Rest mass density in the equatorial plane from BH-NS simulations with varying $r_p$. 
The four panels on the left show (left to right, top to bottom, same color scale): (1) the BH and NS merging ($t=6.82$~ms, $r_p=5$), (2) the NS being stretched into a long tidal tail ($t=9.97$~ms, $r_p=6.81$), (3) a brief mass transfer episode during the NS's first periapsis passage ($t=8.78$~ms, $r_p=6.95$), and (4) the NS's subsequent distortion ($t=10.6$~ms, $r_p=6.95$). On the right are a nascent accretion disk (top, $t=12.3$~ms, $r_p=6.67$) and a late-stage accretion disk (bottom, $t=38.2$~ms, $r_p=7.5$, low resolution, PPM). The color scale is logarithmic with units of the initial maximum density ($\rho_{0}=8.3\times 10^{14}$ g cm$^{-3}$). The black hole is roughly the same coordinate size in all panels ($R_{\rm BH}=16$~km), which can be used to infer the relative scale of each snapshot.} \label{snapshots} \end{center} \end{figure*} Figure~\ref{snapshots} illustrates some of the varied phenomena encountered near the transition between $n(r_p)=0$ and $n(r_p)=1$, occurring at $r_p\approx 6.88\pm0.08$. Most striking is the amount of rest mass remaining after the merger as a function of $r_p$ (Table~\ref{master_table}). For cases in which the NS plunges without significant disruption, such as $r_p=5$ or $r_p=7.5$, less than 1\% of the initial mass is available to form an accretion disk. However, for $r_p$ closer to the first transition ($r_p=6.67$ and 6.81, see Figure~\ref{snapshots}), the NS is stretched into a long tidal tail, and a sizeable amount of bound material is left to form an accretion disk---12\% of the initial NS rest mass for $r_p=6.81$.
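For orientation, the remnant fractions discussed here can be converted to rough disk masses in solar units. A sketch, under the simplifying assumption that the initial total rest mass is close to the $1.35\,M_\odot$ NS gravitational mass (the baryonic rest mass is actually somewhat larger, so these are approximate):

```python
M_NS = 1.35  # Msun; proxy for the initial total rest mass (an approximation)

# Fractions from the simulations: rest mass outside the BH after merger,
# and the part of it that is unbound (values quoted in the text and table)
remnant = {5.00: 0.005, 6.67: 0.107, 6.81: 0.221, 6.95: 0.018,
           7.22: 0.013, 7.50: 0.009}
unbound = {5.00: 0.000, 6.67: 0.056, 6.81: 0.101, 6.95: 0.003,
           7.22: 0.001, 7.50: 0.003}

for rp in sorted(remnant):
    disk = (remnant[rp] - unbound[rp]) * M_NS  # bound material feeds the disk
    print(f"r_p = {rp:5.2f}:  M_disk ~ {disk:.3f} Msun")
```

For $r_p=6.81$ this gives a bound disk of $\sim0.16\,M_\odot$ (the quoted 12\% of the initial rest mass), and a total remnant of $0.221\times1.35\approx0.30\,M_\odot$, consistent with the figure quoted in the conclusions.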
\begin{table*} \begin{center} {\small \begin{tabular}{ l l l l l l l l } \hline\hline $r_p$ & $M_0/M_0(t=0)$\tablenotemark{a} & $M_{0,u}/M_0(t=0)$\tablenotemark{b} & $\tau_{\rm acc}$ (ms)\tablenotemark{c} & \multicolumn{2}{c}{First periapsis\tablenotemark{d}} & \multicolumn{2}{c}{Total\tablenotemark{e}} \\ & & & & $\frac{E_{GW}}{M}\cdot10^2 $ & $ \frac{J_{GW}}{M^2}\cdot10^2$ & $ \frac{E_{GW}}{M}\cdot10^2$ & $\frac{J_{GW}}{M^2}\cdot10^2$ \\ \hline 5.00 & 0.005 & 0.0 & 25 & \ldots & \ldots & $0.67(0.87)$ & $4.14(4.86)$ \\ 6.67 & 0.107 & 0.056 & 130 & \ldots & \ldots & $1.29$ & $9.10$ \\ 6.81 & 0.221 & 0.101 & 40 & \ldots & \ldots & $1.19$ & $9.60$ \\ 6.95 & 0.018 & 0.003 & 47 & $0.697$ & $7.33$ & $1.65$ & $13.9$\\ 7.22 & 0.013 & 0.001 & 16 & $0.358$ & $4.48$ & $1.18$ & $10.2$ \\ 7.50 & 0.009 & 0.003 & 7.6 & $0.242(0.147)$\tablenotemark{f} & $3.44 (2.46)$ & $1.03$ & $44.7$\\ 8.75 & \ldots & \ldots & \ldots & $0.073$ & $1.58$ & \ldots & \ldots \\ 10.0 & \ldots & \ldots & \ldots & $0.033(0.027)$ & $0.97(0.88)$ & \ldots & \ldots \\ 12.5 & \ldots & \ldots & \ldots & $0.011$ & $0.46$ & \ldots & \ldots \\ \hline \end{tabular} } \caption{Disk properties and GW energy and angular momentum losses} \tablenotetext{a}{Rest mass remaining outside the BH shortly after merger, normalized by the initial total rest mass.} \tablenotetext{b}{Unbound rest mass estimated using local fluid velocities.} \tablenotetext{c}{Rough {\em initial} accretion timescale ($\tau_{\rm acc} = M_0/\dot{M}_0$) evaluated shortly after merger.} \tablenotetext{d}{Energy and angular momentum lost to GWs during the first close encounter.} \tablenotetext{e}{Total GW energy and angular momentum losses for cases which were followed through merger.} \tablenotetext{f}{Results are from medium resolution runs; values in parentheses are Richardson-extrapolated estimates using low and high resolutions, where available.
Note that the relatively large error for $r_p=7.5$ (and to a lesser extent $r_p=5,10$) is due in part to truncation error altering the actual periapsis by a small amount, and in this regime the GW emission is highly sensitive to binary separation (Figure~\ref{gw_fit}).} \label{master_table} \end{center} \end{table*} Figure~\ref{fbplot} shows the approximate rate of fallback as a function of time for $r_p=6.81$, 6.95, 7.22, and 7.5. This is the rate at which material on elliptical orbits is expected to return to the accretion disk \citep[see][]{rosswog07}. (These accretion rates are likely upper limits since they do not account for nuclear burning, see \citealt{metzger2}.) The fallback rate is larger for cases with larger disk masses, such as $r_p=6.81$, but all cases exhibit an approximate $t^{-5/3}$ falloff. This time dependence was predicted for stellar disruptions around supermassive BHs by \citet{rees88}. It appears unlikely that BH-NS mergers with fallback rates as in Fig.~\ref{fbplot} will be able to explain sGRBs with extended emission \citep[see, e.g.,][]{nb06} if this emission is due to feeding of the accretion disk at late times. For example, by $t\approx 100$~s, the luminosity for the $r_p=6.81$ case would be only $L\sim\eta\dot{M}c^2 \sim 2\times 10^{42}$~erg/s (assuming an efficiency $\eta=0.1$). \begin{figure} \begin{center} \psfrag{formula1}{$\dot{M} \ (M_{\odot}/s)$} \psfrag{formula2}{$t^{-5/3}$} \includegraphics[width=3.25in,clip=true]{fbplot.eps} \caption{Approximate fallback accretion rates for $r_p=6.81$, 6.95, 7.22, and 7.5. These rates are evaluated at times ranging from 1.0 to 2.2~ms after the approximate time when the BH accretion rate plateaus following the merger. For this diagnostic, we consider the fluid in each cell as a ballistic particle and take its orbital period as the approximate fallback timescale. 
The instantaneous BH accretion rates evaluated at the same time are shown at the upper right (arbitrary abscissa).} \label{fbplot} \end{center} \end{figure} Table~\ref{master_table} also shows the total energy and angular momentum lost to GWs for $r_p$ between 5 and 12.5. For the cases that we followed through merger, we find 0.7-1.7\% of the total mass lost to GW energy, and estimate the final spins of the BHs to be $(0.49\pm 0.01,0.45,0.37,0.47,0.50,0.50)$ for $r_p=(5.00,6.67,6.81,6.95,7.22,7.50)$ respectively. The energy loss is largest for the transitional case ($r_p=6.95$), which has a large pulse from the whirling first passage and a second burst from the merger (Fig.~\ref{gwave_forms}). Table~\ref{master_table} also shows the GW losses for the initial encounter in cases where the NS survives the periapsis passage (columns 5 and 6). These fly-by pulses can be compared with the prediction of \citet{turner77} (Fig.~\ref{gwave_forms}, lower panels, and Fig.~\ref{gw_fit}), who used the Newtonian orbit together with quadrupole physics for the GW emission, which we will call the {\em Newtonian Quadrupole Approximation} (NQA). Our waveforms show roughly the same pulse shape as the NQA prediction but have larger amplitudes for the smaller $r_p$ cases. At $r_p=15$ we find the gravitational waveform from the initial fly-by to be indistinguishable from the NQA prediction at our resolution ($\pm 10$\%). \begin{figure} \begin{center} \hspace{-0.5cm} \includegraphics[height=1.5in,clip=true,draft=false]{rp6_95_psi4_axis.eps} \includegraphics[height=1.42in,clip=true,draft=false]{rp7_5_gwave_compare_turner.eps} \includegraphics[height=1.42in,clip=true,draft=false]{rp10_gwave_compare_turner.eps} \caption{ {\it (upper panel)} The Newman-Penrose scalar $\Psi_4$ on the $z$-axis (orthogonal to the orbit) for $r_p=6.95$. The first pulse is from the initial close encounter, the second from the merger-ringdown. 
Between the pulses there is an oscillation due to the rotating, distorted neutron star, which is significantly torqued during the first encounter. Here $t=0$ corresponds to the start of the simulation. {\it (lower panels)} The real and imaginary components (black diamonds and red squares) of the $l=2$, $m=2$ spherical harmonic of $r\Psi_4$ for $r_p=7.5$ (left) and $r_p=10$ (right). For comparison, the NQA analytical results are shown multiplied by an overall factor so that the magnitude and phase match at peak ($t=0$). } \label{gwave_forms} \end{center} \end{figure} The enhancement in GW energy losses for close encounters may be due (in part) to zoom-whirl-like behavior. Figure~\ref{gw_fit} shows the GW energy loss as a function of $r_p$, along with the NQA prediction and a fit consistent with zoom-whirl dynamics \citep{Pretorius:2007jn} in the regime ($r_p \lesssim 10$) where we start to see significant departures from the NQA. \begin{figure} \begin{center} \includegraphics[height=2in,clip=true,draft=false]{e_gw_fit.eps} \caption { Energy lost to GWs during the initial close encounter (i.e., excluding merger) as a function of $r_p$. The functional form $E_0(1-(\delta r_p/\Delta)^\gamma)$ (solid line), motivated by zoom-whirl dynamics, is a fit to the simulation results (red dots). $\delta r_p=\delta r^1_p=r^1_p-r_c$ as discussed in the text; here $r^1_p=r_p$. $E_0$ is the difference in energy between a quasi-circular orbit and an $e\approx 1$ orbit, both with $r_p=r_c$. $\Delta$ is the range over which zoom-whirl-like behavior dominates the GW emission energetics. $\gamma$ is a parameter that in the geodesic analogue is related to the instability exponent of the corresponding unstable circular orbit; here, we use it as our fitting parameter. The NQA prediction is shown as the dotted line.
} \label{gw_fit} \end{center} \end{figure} \section{Conclusions} An interesting result of this general relativistic study, qualitatively consistent with previous Newtonian studies \citep[e.g.][]{lee2010}, is the great variability of the outcome as a function of impact parameter. For example, the remnant disk masses following merger range from nearly zero up to $\sim0.3 M_{\odot}$. Mergers leading to significant disks occur in a small (but not negligible) region of parameter space and could produce an sGRB. In follow-up work, we plan to extend the parameter-space survey, varying the NS EOS and BH spin. We also intend to explore the detectability of these events with GW detectors, and conclude here with brief preliminary comments. These signals may be difficult to detect with instruments such as LIGO since they lack a long inspiral phase and most of the power is at high frequencies (1500-2000 Hz for the masses considered here). Using the broadband AdLIGO noise curve, we find a sky-averaged SNR of 3-9 for $r_p=5-10$, assuming a distance of 100~Mpc. Scaling the GW emission up to $(M_{\rm NS},M_{\rm BH})=(2,8)M_{\odot}$ and assuming optimal orientation gives an SNR of 8 out to 340~Mpc for $r_p=7.5$. Encounters with larger $r_p$ could be easier to detect since they will have a number of fly-by GW bursts before the merger. Given that some BH-NS capture binaries emit the majority of their GW power at the high-frequency end of the LIGO noise curve and could be relatively weak (both in terms of GW emission and extrapolated electromagnetic emission based on disk mass), it may be a worthwhile exercise to revisit the analysis of GRB 070201 \citep{Abbott:2007rh} with burst templates adapted to capture-driven BH-NS encounters. \acknowledgments We thank Adam Burrows, John Friedmann, Roman Gold, Benjamin Lackey, and Richard O'Shaughnessy for useful conversations.
This research was supported by the NSF through TeraGrid resources provided by NICS under grant TG-PHY100053, the Bradley Program fellowship (BCS), the NSF Graduate Research Fellowship under grant DGE-0646086 (WE), NSF grants PHY-0745779 (FP) and PHY-1001515 (BCS), and the Alfred P. Sloan Foundation (FP). Simulations were also run on the {\bf Woodhen} cluster at Princeton University.
\section{Introduction} Bell's theorem \cite{Bell64, Bell66} highlights a precise sense in which quantum theory requires a departure from a classical worldview. Furthermore, violations of Bell inequalities provide a means for certifying the nonclassicality of {\em nature}, independently of the correctness of quantum theory. This is because Bell inequalities can be tested directly on experimental data. Experimental tests under very weak assumptions have confirmed this nonclassicality \cite{Belltest1, Belltest2, Belltest3}. Correlations that violate Bell inequalities have also found applications in information theory. Specifically, they constitute an information-theoretic resource insofar as they can be used to perform various cryptographic tasks in a device-independent way~\cite{BHK,Acin2006QKD,Scarani2006QKD,Acin2007QKD, colbeckamp, Pironio2010,Dhara2013DIRNG,vazirani14,Kaniewski2016chsh}. Consequently, much previous effort has been made to quantify the resourcefulness of correlations within Bell scenarios~\cite{gallego2012,de2014nonlocality,GellerPiani,gallego2016nonlocality,horodecki2015axiomatic,Amaral2017NCW,kaur2018fundamental,Brito2018tracedistance}. In this paper, we take a resource-theoretic approach to quantifying the nonclassicality of a given correlation in a Bell scenario, grounded in a new perspective on Bell's theorem. This is the perspective of \term{causal modelling}, which differs from the traditional operational approaches both conceptually and in practice.
Nevertheless, the natural choice of the set of free operations for the Bell scenario in our framework coincides with the one proposed in some previous works~\cite{de2014nonlocality,GellerPiani}, namely, \term{local operations and shared randomness} (\term{LOSR})\footnote{There is widespread agreement that the free operations should somehow consist of local operations supplemented with shared randomness; however, different authors have been led to formalize this idea differently, that is, they have been led to distinct proposals for the set of free operations. Indeed, the formalization provided in Refs.~\cite{gallego2016nonlocality,Amaral2017NCW,kaur2018fundamental} is inconsistent with the one given in Refs.~\cite{de2014nonlocality,GellerPiani} and therefore also with the one presented here. A detailed discussion of this issue can be found in Appendix~\ref{comparisonsub}.}. Our causal perspective on quantifying Bell nonclassicality also generalizes naturally to a framework for quantifying the nonclassicality of correlations in more general causal scenarios. We discuss this generalization in Section~\ref{comparisonsub}, but leave its development to future work. \subsection{Summary of main results} We now summarize the content and main results of our article. In Section~\ref{landscape}, we articulate the view on Bell's theorem that motivates our approach---the causal modelling paradigm---and contrast it with two other views on Bell's theorem, namely, the strictly operational and superluminal causation paradigms. In particular, we explain how the differences between these views impact how one conceptualizes Bell inequality violations as a resource, and we highlight some of the advantages of our approach relative to the alternatives. We also introduce the notion of partitioned process theories~\cite{Coecke2014} as the mathematical framework for resource theories that we adopt in this article.
In Section~\ref{basicresthry}, we provide a formal definition of the resource theory to be studied. For bipartite Bell scenarios, we argue that the set of processes which naturally constitute the resources in our approach is the set of all bipartite processes with classical inputs and outputs that can arise within a causal model with a (possibly nonclassical) common cause between the wings. We also argue that the natural set of free operations on such processes are those that are achieved by embedding the process in a circuit for which the only connection between the wings is a {\em classical} common cause, and we demonstrate that this is equivalent to the set of local operations and shared randomness, as the latter is formalized in Refs.~\cite{de2014nonlocality,GellerPiani}. In Section~\ref{rtprelim}, we introduce some of the central concepts of any resource theory, including the notion of a pre-order and its features, the notion of monotones and complete sets thereof, and the notions of cost and yield monotones, which underlie the explicit monotone constructions that follow. In Section~\ref{polything}, we show how one can use two instances of a linear program to determine the ordering relation which holds between any pair of resources (see Proposition~\ref{lem:belstheorem} and the discussion that follows it). In Section~\ref{twomonotones}, we define two monotones of particular interest. The first (defined in Eq.~\eqref{eq:CHSHmonotonedefn}) is based on a yield construction relative to all resources in the Clauser-Horne-Shimony-Holt (CHSH) scenario~\cite{CHSH} (a bipartite Bell scenario where the settings and outcomes all have cardinality two) and where the yield is measured by the value of the canonical CHSH functional. The second (defined in Eq.~\eqref{Malphadefn}) is based on a cost construction relative to a one-parameter family of resources in the CHSH scenario and where the cost is measured again by the value of the canonical CHSH functional. 
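The two linear programs used in Section~\ref{polything} are specific to LOSR conversion and are not reproduced here; as an illustration of the same style of computation, the following sketch uses a single feasibility LP to decide membership in the local (classical common-cause) polytope of the CHSH scenario. The flattening convention and helper names are ours, not from the paper:

```python
import numpy as np
from itertools import product
from scipy.optimize import linprog

def idx(a, b, x, y):
    return 8 * a + 4 * b + 2 * x + y   # flatten p(ab|xy) into a 16-vector

def local_vertices():
    # 16 deterministic strategies a = f(x), b = g(y) with f, g: {0,1} -> {0,1}
    cols = []
    for f in product((0, 1), repeat=2):
        for g in product((0, 1), repeat=2):
            v = np.zeros(16)
            for a, b, x, y in product((0, 1), repeat=4):
                v[idx(a, b, x, y)] = float(f[x] == a and g[y] == b)
            cols.append(v)
    return np.array(cols).T            # 16 x 16 matrix of polytope vertices

def is_classical(p):
    # Feasibility LP: does p = V q for some convex weights q >= 0?
    # (Normalization of q is implied by the equality constraints.)
    V = local_vertices()
    res = linprog(c=np.zeros(16), A_eq=V, b_eq=p,
                  bounds=[(0, None)] * 16, method="highs")
    return res.status == 0

def pr_box():
    # Popescu-Rohrlich box: p(ab|xy) = 1/2 if a XOR b = x AND y, else 0
    p = np.zeros(16)
    for a, b, x, y in product((0, 1), repeat=4):
        if (a ^ b) == (x & y):
            p[idx(a, b, x, y)] = 0.5
    return p

noise = 0.25 * np.ones(16)              # uniformly random outcomes
print(is_classical(pr_box()))           # False: outside the local polytope
print(is_classical(0.4 * pr_box() + 0.6 * noise))  # True: CHSH value 1.6 <= 2
```

The PR box lies outside the polytope, while mixing it with enough uniform noise brings it back inside; the LOSR-conversion LPs of Section~\ref{polything} have the same flavor but optimize over free operations rather than vertex weights.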
Although both of these monotones are originally defined in terms of an optimization problem, we derive closed-form expressions for each of them for resources within the CHSH scenario (see Propositions~\ref{prop:eqaboveCHSH} and ~\ref{geom} respectively). We show that within the CHSH scenario~\cite{CHSH}, a variety of monotones which have been previously studied are all equivalent (up to a monotonic function) to the first of these monotones (see Corollary~\ref{measurecor}). Because our two monotones are provably not equivalent, this result implies that the second of our monotones provides information beyond that given by previously studied monotones. In Section~\ref{sec:results} we leverage our two monotones to derive various global properties of the pre-order induced by single-copy deterministic conversions. Specifically, we prove that the pre-order: \begin{compactitem} \item is not complete (i.e., there exist incomparable resources), \item is not weak (the incomparability relation is not transitive), \item has both infinite width and infinite height, \item is locally infinite. \end{compactitem} We also prove that the two monotones just mentioned do not completely characterize the pre-order of resources, by showing that they fail to do so even for the special case of the CHSH scenario. We further show (in Theorem~\ref{prop:eightmonotonesV2}) that no fewer than eight continuous monotones can do the job. We also show (in Proposition~\ref{prop:zerodclasses}) that the equivalence classes among nonfree resources in the CHSH scenario (though not in general) are given exactly by the orbits of the symmetry group of deterministic free operations. Finally, in Section~\ref{sec:qr}, we show that all of the global features of the pre-order hold even for the strict subset of resources which can be realized in quantum theory. 
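For concreteness, the canonical CHSH functional invoked by both monotones can be evaluated directly from a correlation $p(ab|xy)$. A minimal sketch (the dictionary encoding is ours), shown attaining the algebraic maximum of 4 on the PR box:

```python
from itertools import product

def chsh(p):
    """Canonical CHSH functional S = E(0,0) + E(0,1) + E(1,0) - E(1,1),
    where E(x,y) = sum_{a,b} (-1)^(a XOR b) p(ab|xy) and p is a dict
    keyed by (a, b, x, y) with binary entries throughout."""
    def E(x, y):
        return sum((-1) ** (a ^ b) * p[(a, b, x, y)]
                   for a, b in product((0, 1), repeat=2))
    return E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)

# PR box: perfectly correlated unless x = y = 1, then perfectly anticorrelated
pr = {(a, b, x, y): 0.5 * ((a ^ b) == (x & y))
      for a, b, x, y in product((0, 1), repeat=4)}
print(chsh(pr))   # 4.0 (classical models satisfy S <= 2; quantum, S <= 2*sqrt(2))
```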
We also prove (in Lemma~\ref{prop:ext}) that every extremal quantumly realizable resource is at the top of the pre-order of quantumly realizable resources, and (in Proposition~\ref{prop:continuoustop}) that there are a continuous set of incomparable resources at the top of this pre-order. \subsection{How to read this article} We will demonstrate in Section~\ref{basicresthry} that in spite of the difference in our attitude towards Bell's theorem, the definition of the set of resources and the set of free operations that is natural for the Bell scenario within the causal modelling paradigm coincides with a definition that has been proposed within the strictly operational paradigm, namely, the one proposed in Refs.~\cite{de2014nonlocality,GellerPiani}. Because Bell scenarios are the focus of our article, any reader who would rather take the strictly operationalist attitude towards Bell's theorem can reinterpret all of our results through that lens. In particular, readers who are already sympathetic to the notion that LOSR, as defined in Refs.~\cite{de2014nonlocality,GellerPiani}, is the right choice of free operations may wish to skip Sections \ref{landscape} and \ref{basicresthry}. To understand our conviction that LOSR constitutes the right choice of free operations for Bell scenarios, however, readers are advised to read Sections~\ref{landscape} and \ref{sec:freeoperations}. In particular, to understand how our approach differs (advantageously) from other approaches, readers are encouraged to examine Sections~\ref{operationalparadigm} and ~\ref{SCparadigm} as well as Appendix~\ref{comparison}. Because Section~\ref{rtprelim} reviews basic definitions and terminology for concepts related to resource theories, any reader who has expertise on resource theories may wish to skip this section. 
We note, however, that some of the material presented therein is not found in standard treatments, such as our discussion of global properties of a pre-order and our discussion of a scheme for constructing useful cost and yield monotones. The presentation of our novel technical results begins in Section~\ref{polything}. \section{Motivating our approach and contrasting it with alternatives} \label{landscape} \subsection{Three views on Bell's theorem} The traditional commentary on Bell's theorem \cite{sep-bell-theorem, d1979quantum} takes a particular view on how to articulate the assumptions that are necessary to derive Bell inequalities. Among these assumptions, two are typically highlighted as deserving of the most scrutiny, namely, the assumptions that are usually termed {\em realism} and {\em locality}\footnote{Note, however, that different authors will formalize these assumptions in different ways.}. Abandoning one or the other of these two assumptions is the starting point of most commentaries on what to do in the face of violations of Bell inequalities.\footnote{See, however, the discussion of superdeterminism in footnote~\ref{superd}.} Furthermore, a schism seems to have developed between the camps that advocate for each of these two views~\cite{wiseman2014two}. Among the researchers who take Bell's theorem to demonstrate the need to abandon realism, there is a contingent which adopts a purely operational attitude towards quantum theory, that is, an attitude wherein the scientist's job is merely to predict the statistical distribution of outcomes of measurements performed on specific preparations in a specified experimental scenario. We shall refer to the members of this camp as {\em operationalists}~\cite{werner2014comment}. For such researchers, a violation of a Bell inequality is simply a litmus test for the inadequacy of a classical realist account of the experiment. 
One particular type of operationalist attitude, which we shall term the \term{strictly operational paradigm}, advocates that physical concepts ought to be defined in terms of operational concepts, and consequently that any properties of a Bell-type experiment, such as whether it is signalling or not and what sorts of causal connections might hold between the wings, must be expressed in the language of the classical input-output functionality of that experiment. In other words, they advocate that the only concepts that are meaningful for such an experiment are those that supervene\footnote{$A$-properties are said to supervene on $B$-properties if every $A$-difference implies a $B$-difference.} upon its input-output functionality.\footnote{ Some might describe what we have here called the strictly operational paradigm as the ``device-independent'' paradigm~\cite{Scarani2012device}, however, we avoid using the latter term here because its usage is not restricted to describing a particular type of empiricist philosophy of science: it also has a more technical meaning in the context of quantum information theory, wherein it indicates whether or not a given information-theoretic protocol depends on a prior characterization of the devices used therein. Indeed, Bell-inequality-violating correlations have been shown to be a key resource in cryptography because they allow for device-independent implementations of cryptographic tasks\cite{BHK,Acin2006QKD,Scarani2006QKD,Acin2007QKD, colbeckamp, Pironio2010,Dhara2013DIRNG,vazirani14,Kaniewski2016chsh}. } Most prior work on quantifying the resource in Bell experiments has been done within this paradigm, and the characteristic of experimental correlations that is usually taken to quantify the resource is simply some notion of distance from the set of correlations that satisfy all the Bell inequalities. 
Consider, on the other hand, the researchers who take realism as sacrosanct, and in particular those who take Bell's theorem to demonstrate the failure of locality---that is, the existence of superluminal causal influences~\cite{Maudlin2002quantum,Norsen2006}.\footnote{Although such influences do not imply the possibility of superluminal signalling, they do imply a certain tension with relativity theory if one believes that the latter does not merely concern anthropocentric concepts such as signalling, but also physical concepts such as causation.} Researchers in this camp, whom we shall refer to as advocates of the \term{superluminal causation paradigm}, would presumably find it natural to quantify the resource of Bell inequality violations in terms of the strength of the superluminal causal influences required to account for them (within the framework of a classical causal model). An approach along these lines is described in Refs.~\cite{Chaves2015relaxing,Chaves2017causalmultipartite}. Earlier work on the communication cost of simulating Bell-inequality violations~\cite{maudlin1992bell,Toner2003} is also naturally understood in this way.\footnote{\label{superd}A less common view on how to maintain realism in the face of Bell inequality violations is to hold fast to locality but give up on a different assumption that goes into the derivation of Bell inequalities, namely, that the hidden variables are statistically independent of the setting variables. This is known as the ``superdeterministic'' response to Bell's theorem~\cite{hooft2013fate}. Advocates of this approach would presumably find it natural to quantify the resource of Bell inequality violations in terms of the deviation from such statistical independence that is required to explain a given violation. 
In particular, the results of Refs.~\cite{hall} and ~\cite{barrettgisin} seeking to quantify the nonindependence needed to explain a given Bell inequality violation might be framed within a resource-theoretic framework. However, given that the setting variables can no longer be considered as freely specifiable within such an approach, it would be inappropriate to conceptualize a Bell experiment as a box-type process as we have done here. } In recent years, a third attitude toward Bell's theorem---inspired by the framework of causal inference~\cite{Pearl2009}---has been gaining in popularity. In this approach, the assumptions that go into the derivation of Bell inequalities are~\cite{Wood2015}: Reichenbach's principle (that correlations need to be explained causally), the framework of classical causal modelling, and the principle of no fine-tuning (that statistical independences should not be explained by fine-tuning of the values of parameters in the causal model). Here, a violation of a Bell inequality does not lead to the traditional dilemma between realism and locality, but rather attests to the impossibility of providing a non-fine-tuned explanation of the experiment within the framework of classical causal models. This attitude implies the possibility of a new option for what assumption to give up in the face of such a violation. Specifically, the new possibility being contemplated is that one can hold fast to Reichenbach's principle and the principle of no fine-tuning---and hence to the possibility of achieving satisfactory causal explanations of correlations---by replacing the framework of classical causal models with an intrinsically nonclassical generalization thereof. 
As is shown in Ref.~\cite{Wood2015}, because the correlations in a Bell experiment do not provide a means of sending superluminal signals between the wings, the only causal structure that is a candidate for explaining these correlations without fine-tuning is one wherein there is a purely common-cause relation between the wings, that is, one which admits no causal influences between the wings. Therefore, the new approach to achieving a causal explanation of Bell inequality violations is one that posits a common cause mechanism but replaces the usual formalism for causal models with one which allows for more general possibilities on how to represent its components~\cite{Allenetal}\footnote{Specifically, in the proposal of Ref.~\cite{Allenetal}, reversible deterministic causal dependences are represented by unitaries rather than bijective functions, and lack of knowledge is represented by density operators rather than by classical probability distributions.}. We refer to this attitude as the \term{causal modelling paradigm}. The causal modelling paradigm implies not only a novel attitude towards Bell's theorem, but also a change in how one conceives of the resource that powers the information-theoretic applications of Bell-inequality violations. The resource is not taken to be some abstract notion of distance from the set of Bell-inequality-satisfying correlations within the space of all nonsignalling correlations, as advocates of the strictly operational paradigm seem to favour, nor to consist of the strength of superluminal causal influences, as advocates of the superluminal causation paradigm would presumably have it. Rather, we take the resource to be the {\em nonclassicality} required by any generalized causal model which can explain the Bell inequality violations without fine-tuning. 
We shall show that in the resource theory that emerges by adopting this attitude, the nonclassicality of common-cause processes in Bell experiments cannot be captured solely by the degree of violation of facet-defining Bell inequalities. That is, there are distinctions among such common-cause processes---different ways for these to be nonclassical---which do not correspond to distinctions in the degree of violation of any facet-defining Bell inequality. \subsection{Generalized causal models} We will work with the notion of a generalized (i.e., not necessarily classical) causal model that has been developed in Refs.~\cite{Henson2014,Fritz2012beyondBell} using the framework of generalized probabilistic theories (GPTs)~\cite{hardy01, barrettGPT, janotta2014}, and refer to it as a \term{GPT causal model}. Since we are interested in the distinction between classical and nonclassical, without specifically distinguishing quantum and supra-quantum types of nonclassicality, we will not be making use of any of the recent work~\cite{CostaShrapnel, Allenetal, BLO} on devising an intrinsically {\em quantum} notion of a causal model.\footnote{However, we will consider the question of when certain correlations that arise in a GPT causal model can be quantumly-realized.} One can then approach the study of nonclassicality in arbitrary causal structures from within the scope of these GPT causal models, and pursue the development of a resource theory of such nonclassical features.
One must simply specify the nature of the scenario being considered: the number of wings of the experiment (commonly conceptualized as the laboratories of different parties when discussing information-theoretic tasks), and the causal structure presumed to hold among these wings.\footnote{It is perhaps inappropriate to call the relation between the parties in a general communication protocol a ``causal structure'', insofar as the latter term usually refers to a directed acyclic graph (DAG) in the causal inference literature, and a communication network can have cycles, such as when there exists a communication channel in both directions between two parties. Nonetheless, we will here focus on communication networks that {\em do} correspond to DAGs.} The set of all resources one might contemplate are then the set of processes that can be described with a GPT causal model having the appropriate causal structure. In this article, we focus on the causal structure wherein there is a common cause that acts on all of the wings, but no causal influences between any of them, which we term a \term{Bell scenario}. However, we do include some discussion regarding other causal structures in Appendix~\ref{diffcausalstr}. We conceptualize any experimental configuration as a process from its inputs to its outputs. In the GPT framework for causal models, one has the capacity to consider processes that have GPT systems as inputs and outputs at the various wings. However, we will restrict our attention to processes that have only {\em classical} inputs and outputs. Such processes can be conceptualized as black-box processes, to which one inputs classical variables and from which classical variables are output. They are therefore precisely the sorts of processes considered in the device-independent paradigm. 
We further restrict our attention to processes with a classical input and classical output at each wing, where the input temporally precedes the output.\footnote{Thus, we do not consider processes which involve a sequence over time of classical input variables and classical output variables; that is, in the language of Refs.~\cite{qcombs08,qcombs09}, we do not consider general $n$-combs. } In the device-independent paradigm, the term ``box'' is generally used as jargon for such processes (for instance, as it is used in the term ``PR box''~\cite{Popescu1994}). We therefore refer to such processes as \term{box-type processes} or simply \term{boxes}. A box-type process is completely characterized by specifying the conditional probability distribution over its outcome variables given its input variables. We use the term \term{common-cause box} to refer to box-type processes which can be realized using a causal structure consisting of a common cause acting on all of the wings. In GPT causal models, all common-cause boxes can be decomposed into the preparation of a GPT state on a multipartite system, followed by the distribution of the component subsystems among the wings, followed by each subsystem being subjected to a GPT measurement, chosen from a fixed set according to the classical input variable at that wing (the local setting variable), and the result of which is the classical output variable at that wing (the local outcome variable). In short, such processes can be decomposed in the same manner in which a multipartite Bell experiment is decomposed into a preparation of a correlated resource and local measurements. The distinction between classical and nonclassical common-cause boxes is simply the distinction between whether there is a {\em classical} causal model underlying the process, or whether one must resort to a causal model which invokes a nonclassical GPT.
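As a concrete illustration of a box-type process, consider the PR box~\cite{Popescu1994}, defined for binary settings and outcomes by $P_{XY|ST}(xy|st)=1/2$ if $x\oplus y = s\cdot t$ and $0$ otherwise. The following Python sketch (an illustrative aside, not part of the formal development; all names are ours) verifies that this conditional distribution is normalized and satisfies the no-signalling conditions:

```python
from fractions import Fraction

# PR box: P(x,y|s,t) = 1/2 if x XOR y equals s AND t, else 0.
def pr_box(x, y, s, t):
    return Fraction(1, 2) if (x ^ y) == (s & t) else Fraction(0)

# Normalization: sum_{x,y} P(x,y|s,t) = 1 for every setting pair (s,t).
for s in (0, 1):
    for t in (0, 1):
        assert sum(pr_box(x, y, s, t) for x in (0, 1) for y in (0, 1)) == 1

# No-signalling: the marginal for X must be independent of t,
# and the marginal for Y must be independent of s.
def marginal_X(x, s, t):
    return sum(pr_box(x, y, s, t) for y in (0, 1))

def marginal_Y(y, s, t):
    return sum(pr_box(x, y, s, t) for x in (0, 1))

for x in (0, 1):
    for s in (0, 1):
        assert marginal_X(x, s, 0) == marginal_X(x, s, 1)
for y in (0, 1):
    for t in (0, 1):
        assert marginal_Y(y, 0, t) == marginal_Y(y, 1, t)

print("PR box is normalized and no-signalling")
```

Exact rational arithmetic (via `Fraction`) is used so that the equality checks are not clouded by floating-point rounding.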
\subsection{Resourcefulness in the causal modelling paradigm} In order to quantify the nonclassicality of common-cause boxes, we will use the approach to resource theories described in Ref.~\cite{Coecke2014}. In this approach, resource theories are defined via {\em partitioned process theories}. An \term{enveloping theory of processes} must be specified, together with a subtheory of processes that can be implemented at no cost, called the \term{free subtheory of processes}. This partitions the set of all processes in the enveloping theory into free and costly (i.e., nonfree) processes. One can then ask of any pair of processes in the enveloping theory whether the first can be converted to the second by embedding it in a circuit composed of processes that are drawn entirely from the free subtheory. The set of higher-order processes which are realized in this way---i.e., by embedding in a circuit composed of processes drawn from the free subtheory---is termed the \term{set of free operations}. Pairwise convertibility relations under the set of free operations define a pre-order on the set of all resources, and a partial order over the equivalence classes of such resources. One can then quantify the relative worth of different resources by their relative positions in this partial order. Functions over resources that preserve ordering relations, termed monotones, provide a particularly simple means of quantifying the worth of resources. The resource theory considered in this article is defined as follows. We take the enveloping theory of processes to consist of the common-cause boxes that can be realized in a GPT causal model, which we term \term{GPT-realizable}. We take the free subtheory of processes to consist of the common-cause boxes that can be realized in a classical causal model, which we term \term{classically realizable}. 
It follows that the free common-cause boxes are precisely those that satisfy all the Bell inequalities, while the costly common-cause boxes are those that violate some Bell inequality. To determine the ordering relations that hold among these common-cause boxes, one must determine the convertibility relations among them. Given the definition of our resource theory, whether one common-cause box can be converted to another is determined by whether this can be achieved by composing it with classical common-cause boxes. This subsumes correlated local processings of the inputs and outputs of the box, as we describe in Section~\ref{sec:freeoperations}. \subsubsection{A note about nomenclature} In this article, we avoid describing the resource behind Bell inequality violations as \enquote{nonlocality}. This is because we believe that it is {\em only} for those who take the lesson of Bell's theorem to be the existence of superluminal causal influences that it is appropriate to describe violations of Bell inequalities by this term. Researchers in the operationalist camp have not, generally speaking, avoided using the term \enquote{nonlocality}, but seem instead to use it as a synonym for ``violation of a Bell inequality'' rather than to imply a commitment to superluminal causal influences. However, we believe that such a usage invites confusion and so we opt instead to avoid the term altogether. Nevertheless, our project is very much in line with earlier projects that describe themselves as developing a \enquote{resource theory of nonlocality}, such as Refs.~\cite{gallego2012,de2014nonlocality,GellerPiani,gallego2016nonlocality}. 
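To illustrate the divide between free and costly boxes numerically, the following Python sketch (ours, for illustration only) evaluates the CHSH expression on the PR box; its value of 4 exceeds the bound of 2 obeyed by all classically realizable common-cause boxes:

```python
from fractions import Fraction
from itertools import product

# PR box: the canonical example of a costly (Bell-inequality-violating) box.
def pr_box(x, y, s, t):
    return Fraction(1, 2) if (x ^ y) == (s & t) else Fraction(0)

# Correlator E(s,t), with outcomes 0/1 mapped to +1/-1.
def E(box, s, t):
    return sum((-1) ** (x ^ y) * box(x, y, s, t)
               for x, y in product((0, 1), repeat=2))

# CHSH expression; classical common-cause boxes obey |CHSH| <= 2.
chsh = E(pr_box, 0, 0) + E(pr_box, 0, 1) + E(pr_box, 1, 0) - E(pr_box, 1, 1)
assert chsh == 4  # the algebraic maximum, well beyond the classical bound
print("PR-box CHSH value:", chsh)
```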
\subsection{Contrast to the strictly operational paradigm}\label{operationalparadigm} As noted in the introduction and as will be demonstrated in Section \ref{sec:freeoperations}, in the special case of Bell scenarios---the focus of this article---the natural set of free operations within our causal modelling paradigm is equivalent to one of the proposals for the set of free operations made in earlier works within the strictly operational paradigm, namely, {\em local operations and shared randomness} (LOSR), as the latter is defined in Refs.~\cite{de2014nonlocality,GellerPiani}. Additionally, the natural enveloping theory adopted in the strictly operational approach, namely, the set of no-signalling boxes, also coincides with that of our enveloping theory for the case of Bell scenarios, namely, the set of GPT-realizable common-cause boxes (where the equivalence of these two sets can be inferred from the results of Ref.~\cite{barrettGPT}). Therefore, in spite of the difference in the attitude we take towards Bell's theorem, the resource theory that we define for Bell scenarios is the same as the one studied in Refs.~\cite{de2014nonlocality,GellerPiani}. Nonetheless, the difference in our attitude towards Bell's theorem is not inconsequential. We presently outline its significance for the project of this article as well as for potential future generalizations of this project. Most importantly, the causal modelling approach diverges sharply from any strictly operational approach once one considers causal structures beyond Bell scenarios. As discussed in Appendix~\ref{diffcausalstr}, in a resource theory of nonclassicality for more general causal structures, both the free subtheory and the enveloping theory proposed by the causal modelling approach are radically different from those suggested by the strictly operational approach. 
In particular, the free subtheory need not be LOSR in a general causal structure and the enveloping theory need not be the set of all nonsignalling operations. Our approach allows us to define a resource theory that is specific to a scenario in which only strict subsets of the wings are connected by common causes~\cite{BilocalCorrelations,Fritz2012beyondBell} (such as the triangle-with-settings scenario described in Appendix~\ref{diffcausalstr}) and this provides a concrete example of a case where the free subtheory is not LOSR and the enveloping theory is not all nonsignalling operations. In these cases, the free operations are ``local operations and causally admissible shared randomness'', wherein only those subsets of wings that are connected by a common cause have shared randomness. This is distinct from the LOSR operations, which assume that randomness is shared between all the wings. It seems unlikely that the resource theory we propose in these cases can be motivated (or even fully characterized) in the strictly operational paradigm. Even for Bell scenarios, however, the causal modelling approach offers advantages over its competitors. In particular, it singles out a unique set of free operations, while the strictly operational approach does not. From our perspective, the resource underlying Bell inequality violations is the nonclassicality of the causal model required to explain them with a common cause, so \emph{clearly} the free operations should involve only classical common causes acting between the wings. In the strictly operational paradigm, by contrast, any operation which preserves no-signalling and takes local boxes to local boxes might constitute a legitimate candidate for a \enquote{free} operation. This ambiguity is reflected in the existence of distinct proposals for the set of free operations in strictly operational resource theories.
Aside from LOSR, there is also a proposal called \textit{wirings and prior-to-input classical communication} (WPICC)~\cite{gallego2016nonlocality} which allows for classical causal influences among the wings {\em prior} to when the parties receive their inputs (See Appendix~\ref{WPICCvsLOSR}). If one believes that there is a singular concept which underlies the violation of Bell inequalities, then at most {\em one} of these proposals (LOSR or WPICC) can be taken as the relevant set of free operations.\footnote{Competing sets of free operations may be interesting for studying phenomena {\em other} than the resource powering violations of Bell inequalities, but this is not the issue at stake in this article.} Although WPICC operations meet all desired operational criteria, they are immediately ruled out as candidates for the free operations within the causal modelling paradigm, on the grounds that they involve nontrivial cause-effect influences between the wings. Another advantage of our approach for the Bell scenario is that it highlights the fact that {\LOSR} is {\em by construction} a convex set, a fact which is critical for the algorithmic method that we derive for determining the ordering relation between any two resources. In highlighting this fact, our approach led us to notice an oversight in some previous attempts to formalize LOSR, as discussed in Appendix~\ref{comparisonsub}. Finally, we note that prior work of \citet{GellerPiani} departs from the strictly operational paradigm through their use of the \emph{unified operator formalism}~\cite{Acin2010Unified,Short2013}, which is analogous to the quantum formalism, but where nonpositive Hermitian operators are allowed to represent states. They do not characterize boxes primarily by their input-output functionality, but rather as a composition of a bipartite source with local measurements. Indeed in their Fig. 4, they explicitly depict the internal structure of the box. 
It is in this sense that their approach does not quite fit the mould of a strictly operational approach but is rather somewhat more in the flavour of the causal modelling approach we have described here. Nonetheless, the unified operator formalism differs significantly from the GPT formalism of Refs.~\cite{Henson2014,Fritz2012beyondBell} with respect to the \emph{independence} of the nonclassical common cause from the measurements employed in realizing nonclassical boxes. In the unified operator formalism, the Hermitian operator describing the shared state cannot be chosen freely for a given set of quantum measurements, because some choices would yield negative numbers rather than valid probabilities. By contrast, in the GPT formalism that we adopt here, the set of GPT states is contained within the dual of the set of GPT product measurements, and hence any measurement scheme can be paired with any shared state while yielding valid probabilities. The causal modelling paradigm must reject any dependence of the shared state on the choice of measurements, while such dependence is unavoidable within the unified operator formalism. As defined in Ref.~\cite{Pearl2009}, a causal model is a directed acyclic graph, or equivalently, a circuit of causal processes, wherein the distinct processes in the circuit are required to be {\em autonomous} (i.e., independently variable). We therefore classify Ref.~\cite{GellerPiani} as neither within the causal modelling paradigm nor within the strictly operational paradigm, while still exhibiting some features of each of these approaches. \subsection{Contrast to the superluminal causation paradigm} \label{SCparadigm} To our knowledge, advocates of the superluminal causation paradigm have not attempted to develop a resource theory for Bell inequality violations (although Refs.~\cite{Chaves2015relaxing,Chaves2017causalmultipartite} are related in spirit). 
If it {\em were} attempted (within the framework of Ref.~\cite{Coecke2014}), then the commitments of the approach suggest that it would also be done differently from the way we have done so here. Those who endorse the superluminal causation paradigm do not shy away from the notion of causation, and hence a resource theory developed within their paradigm could be presented using the same framework that we use here --- that of causal models. However, such an approach would likely be framed entirely in terms of {\em classical} causal models, rather than introducing the notion of GPT causal models. Advocates of the superluminal causation paradigm would naturally define the free boxes to be those that involve only subluminal causes. Hence, in scenarios wherein the inputs and the outputs at one wing are space-like separated from those at the other wings, so that subluminal causal influences cannot act between the wings, a box is free if and only if it can be realized by a classical common cause. Thus, the natural choice of the free subtheory in the superluminal causation paradigm coincides with the free subtheory in the causal modelling paradigm. On the other hand, the natural choice of the enveloping theory in the superluminal causation paradigm consists of the set of boxes that are classically realizable given superluminal causal influences between the wings. This differs from the enveloping theory in the causal modelling paradigm because it includes boxes that are signalling. 
In the superluminal causation paradigm, therefore, it is natural to try to quantify the resource in terms of the strength of the superluminal causal influence between the wings that is required to explain it in a classical causal model.\footnote{It should be noted that no \emph{finite} speed of superluminal causal influences can satisfactorily account for the predictions of quantum theory, per Ref.~\cite{Bancal2012}, so such influences would need to be assumed to be of infinite speed.} Because the enveloping theory within this paradigm includes not only non-signalling boxes that violate Bell inequalities but signalling boxes as well, the resource theory is rich enough to describe communication between the wings. Therefore, defining the resource theory in this way would not distinguish classical and nonclassical common-cause resources (as we propose to do here), but would instead draw a line between classical common-cause resources and everything else --- including classical signalling resources.\footnote{Note, therefore, that if one seeks to partition resources of a given type into classical and nonclassical varieties, then defining the enveloping theory correctly is just as important as defining the free subtheory correctly.} If one were to go this route, then all of classical Shannon theory would be subsumed in the resource theory. A potential response to this expansion in the scope of the project might be to try to eliminate such signalling resources {\em by hand}, by demanding that the enveloping theory be constrained to those boxes that are non-signalling among the wings.
Such a response, however, seems to compromise the ideals of the superluminal causation paradigm, because no-signalling is an operational notion rather than a realist one.\footnote{John Bell famously argued against the idea that no-signalling could embody an assumption of locality in a fundamental physical theory on the grounds that it was too anthropocentric~\cite{bell1995nouvelle}:\begin{quote} ...the ``no signaling'' notion rests on concepts which are desperately vague, or vaguely applicable. The assertion that \enquote{we cannot signal faster than light} immediately provokes the question: Who do we think {\em we} are? {\em We} who can make \enquote{measurements}, {\em we} who can manipulate \enquote{external fields}, {\em we} who can \enquote{signal} at all, even if not faster than light? Do {\em we} include chemists, or only physicists, plants, or only animals, pocket calculators, or only mainframe computers? \end{quote} } \section{The free and enveloping theories that define our resource theory}\label{basicresthry} \subsection{Classical and nonclassical common-cause boxes} \label{envelopingthry} We begin by formalizing the relevant definitions from the previous section. For ease of presentation, we focus throughout on the bipartite Bell scenario, but the multipartite Bell scenario can be formalized analogously. Fig.~\ref{fig:CandNCboxes}(a) depicts the structure of a generic GPT-realizable common-cause box. The classical variables that range over the (fixed) choices of local measurements are termed the \term{setting variables}, denoted $S$ (left wing) and $T$ (right wing), while the classical variables that range over the possible results of these measurements are termed the \term{outcome variables}, denoted $X$ (left wing) and $Y$ (right wing). \begin{figure}[htb!] 
\begin{center} \subfigure[\label{fig:CandNCboxesGPT}] { \centering \includegraphics[scale=0.5]{figures/NonclassicalBox.pdf} } \subfigure[\label{fig:CandNCboxesCLASSICAL}] { \centering \includegraphics[scale=0.5]{figures/ClassicalBox.pdf} } \caption{The distinction between \subref{fig:CandNCboxesGPT} a {\em generic} GPT-realizable common-cause box and \subref{fig:CandNCboxesCLASSICAL} a {\em classical} common-cause box. Here, single-line edges denote classical systems, and single-line boxes denote processes that have only classical inputs and outputs (depicted in light blue). Double-line edges denote nonclassical systems and double-line boxes denote processes that have one or more nonclassical inputs or outputs (depicted in pink). Any common-cause box consistent with an internal structure of the type indicated in \subref{fig:CandNCboxesCLASSICAL} is termed classical, while a common-cause box that is not consistent with the structure of \subref{fig:CandNCboxesCLASSICAL} but instead is only consistent with an internal structure of the type indicated in \subref{fig:CandNCboxesGPT} is termed nonclassical.\label{fig:CandNCboxes} }\end{center} \end{figure} \begin{defn} We define the \term{type} of a box as the full specification of the cardinalities of all of the setting variables and all of the outcome variables, and we denote the type of a resource $R$ as $[R]$. We introduce the following notational convention to specify types: the cardinalities of the setting variables for all $n$ wings and the cardinalities of the outcome variables for all $n$ wings are specified as the bottom and top rows, respectively, of a $2{\times }n$ matrix. For example, for the 2-wing common-cause box depicted in Fig.~\ref{fig:CandNCboxes}, the type is {\xyst}, where $|O|$ denotes the cardinality of a variable $O$. \end{defn} If we further particularize to the CHSH scenario, where the cardinalities of both setting and outcome variables are 2, then the type is {\twotwotwotwo}.
Let us label the system distributed to the left wing by $A$ and the one to the right wing by $B$. In the GPT framework, states and effects on $A$ ($B$) are represented by vectors in a real vector space of dimension $d_A$ ($d_B$), that is, in $\mathbb{R}^{d_A}$ ($\mathbb{R}^{d_B}$). States and effects on the composite $AB$ are represented by vectors in the tensor product of these vector spaces, $\mathbb{R}^{d_A}\otimes \mathbb{R}^{d_B}$. If the GPT representation of the $X=x$ outcome of the $S=s$ measurement on system $A$ is ${\bf r}^{A}_{x|s} \in \mathbb{R}^{d_A}$ and that of the $Y=y$ outcome of the $T=t$ measurement on system $B$ is ${\bf r}^{B}_{y|t} \in \mathbb{R}^{d_B}$, and if ${\bf s}^{AB} \in \mathbb{R}^{d_A}\otimes \mathbb{R}^{d_B}$ denotes the GPT state of the composite $AB$, then the conditional probability distribution associated to this GPT-realizable common-cause box is \begin{equation} P_{XY|ST}(xy|st) = ({\bf r}^{A}_{x|s} \otimes {\bf r}^{B}_{y|t}) \cdot {\bf s}^{AB}, \label{GPTCCbox} \end{equation} where $\cdot$ denotes the Euclidean inner product. By virtue of their internal causal structure, all GPT-realizable common-cause boxes satisfy the no-signalling conditions $P_{Y|ST} = P_{Y|T}$ and $P_{X|ST} = P_{X|S}$. It is straightforward to verify that this follows from Eq.~\eqref{GPTCCbox} using the fact that $\sum_x {\bf r}^{A}_{x|s} = {\bf u}^A$, where ${\bf u}^A$ is the unique deterministic effect on $A$, which is independent of value $s$ of the setting variable, and using the analogous fact for $B$. The common-cause boxes that are considered to be {\em free} in our resource theory are those that can be realized when the GPT governing the internal workings of the box is classical probability theory, as depicted in Fig.~\ref{fig:CandNCboxesCLASSICAL}. In such cases, the scope of possibilities for the overall functionality of the common-cause box can be characterized as follows. 
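The claim that no-signalling follows from Eq.~\eqref{GPTCCbox} together with $\sum_x {\bf r}^{A}_{x|s} = {\bf u}^A$ can also be checked mechanically. In the Python sketch below (an illustrative aside; the dimensions, seed, and vector values are arbitrary choices of ours), the individual entries need not be valid probabilities, since only the linear-algebra identity is being tested:

```python
import random

random.seed(0)
dA, dB = 3, 4  # arbitrary GPT dimensions (illustrative choice)

def rand_vec(d):
    return [random.uniform(-1, 1) for _ in range(d)]

# Fixed unit effects u^A, u^B, independent of the setting (as required).
uA, uB = rand_vec(dA), rand_vec(dB)

# Two-outcome effects per setting: r_{0|s} arbitrary, r_{1|s} = u - r_{0|s},
# which enforces sum_x r_{x|s} = u for every s.
def effects(u, n_settings):
    out = {}
    for s in range(n_settings):
        r0 = rand_vec(len(u))
        out[(0, s)] = r0
        out[(1, s)] = [ui - vi for ui, vi in zip(u, r0)]
    return out

rA, rB = effects(uA, 2), effects(uB, 2)
# Arbitrary "state" vector on the composite, as a dA x dB array.
sAB = [[random.uniform(-1, 1) for _ in range(dB)] for _ in range(dA)]

# Eq. (GPTCCbox): P(x,y|s,t) = (r^A_{x|s} tensor r^B_{y|t}) . s^AB
def P(x, y, s, t):
    return sum(rA[(x, s)][i] * rB[(y, t)][j] * sAB[i][j]
               for i in range(dA) for j in range(dB))

# The marginal of X is independent of t (and symmetrically for Y).
for x in (0, 1):
    for s in (0, 1):
        m0 = sum(P(x, y, s, 0) for y in (0, 1))
        m1 = sum(P(x, y, s, 1) for y in (0, 1))
        assert abs(m0 - m1) < 1e-12
print("no-signalling identity verified")
```

The key step is that summing over $y$ collapses $\sum_y {\bf r}^{B}_{y|t}$ to the $t$-independent effect ${\bf u}^B$, exactly as in the argument in the text.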
The systems $A$ and $B$ are described by classical variables, $\Lambda_A$ and $\Lambda_B$ (here assumed to be discrete). Classically, the composite system $AB$ is prepared in a joint distribution over these, $P_{\Lambda_A \Lambda_B}$. The GPT state in this case is $[{\bf s}^{AB}]_{\lambda_A\lambda_B} = P_{\Lambda_A \Lambda_B}(\lambda_A, \lambda_B)$, where $[{\bf v}]_i$ denotes the $i$th component of a vector ${\bf v}$ living in vector space $\mathbb{R}^{|\Lambda_A|} \otimes \mathbb{R}^{|\Lambda_B|}$. Without loss of generality, we can take systems $A$ and $B$ to be perfectly correlated (by incorporating any noise into the measurements), corresponding to the case where $P_{\Lambda_A \Lambda_B}(\lambda_A, \lambda_B) = \sum_{\lambda} \delta_{\lambda_A,\lambda} \delta_{\lambda_B,\lambda} P_{\Lambda}(\lambda)$ for some distribution $P_{\Lambda}(\lambda)$, and where $\delta$ denotes the Kronecker-delta function. This distribution over $\Lambda_A$ and $\Lambda_B$ can be conceptualized as follows: sample a variable $\Lambda$ from some distribution, then let $\Lambda_A$ and $\Lambda_B$ be copies of it. Classically, the $X=x$ outcome of the $S=s$ measurement on system $A$ is modelled by a conditional probability distribution $P_{X|S\Lambda_A}$. The GPT effect associated to this measurement on $A$ is ${\bf r}^{A}_{x|s}$ with components $[{\bf r}^{A}_{x|s}]_{\lambda_A} = P_{X|S\Lambda_A}(x|s\lambda_A)$. Similarly, the GPT effect associated to the measurement on $B$ is ${\bf r}_{y|t}^{B}$ and has components $[{\bf r}^{B}_{y|t}]_{\lambda_B} = P_{Y|T\Lambda_B}(y|t\lambda_B)$. 
Substituting these expressions into Eq.~\eqref{GPTCCbox}, we conclude that a classical common-cause box satisfies \begin{align} &P_{XY|ST}(xy|st)\nonumber\\ &= \smashoperator{\sum_{\lambda_A\lambda_B}}P_{ X|S\Lambda_A}(x|s\lambda_A)P_{Y|T\Lambda_B}(y|t\lambda_B) P_{\Lambda_A \Lambda_B}(\lambda_A\lambda_B)\nonumber\\ &=\smashoperator{\sum_{\lambda}}P_{ X|S\Lambda}(x|s\lambda)P_{Y|T\Lambda}(y|t\lambda) P_{\Lambda}(\lambda). \label{ClassicalCCbox} \end{align} This is recognized to be the expression for a conditional probability distribution $P_{XY|ST}$ that satisfies the Bell inequalities. \subsection{The free operations} \label{sec:freeoperations} The set of free operations defining the resource theory in our approach are those that can be achieved by embedding the resource in a circuit composed of box-type processes that are classical and that respect the causal structure of the scenario. The stipulation that the process respects the causal structure is required for it to remain within the enveloping set of processes in the resource theory. Because Bell scenarios are the ones of interest to us in this article, the set of free operations are those that can be achieved by embedding the resource in a circuit composed of box-type processes that are classical and that have the causal structure of a Bell scenario, namely, a {\em common-cause} acting on all of the wings.\footnote{In particular, any operation involving a cause-effect relation between the wings is excluded from the free set.} The most general free operation taking a bipartite common-cause box with settings $S, T$ and outcomes $X, Y$ to a bipartite common-cause box with settings $S', T'$ and outcomes $X', Y'$ is depicted in blue in Fig.~\ref{fig:transfexplicit}. It is the most general processing which makes use of a classical common cause that can act on the local pre-processings and the local post-processings at each of the wings. 
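To make Eq.~\eqref{ClassicalCCbox} concrete, one can build a classical common-cause box from an explicit shared variable and local response functions and confirm that it obeys the CHSH inequality. In the Python sketch below (an illustrative aside), the particular distribution over $\Lambda$ and the response functions are our own arbitrary choices:

```python
from fractions import Fraction
from itertools import product

# Shared classical variable Lambda, uniform over two values (illustrative).
P_lambda = {0: Fraction(1, 2), 1: Fraction(1, 2)}

# Deterministic local response functions X = f(s, lam), Y = g(t, lam);
# any stochastic P_{X|S Lambda}, P_{Y|T Lambda} would do equally well.
def f(s, lam):
    return (s * lam) % 2

def g(t, lam):
    return lam

# Eq. (ClassicalCCbox): P(x,y|s,t) = sum_lam P(x|s,lam) P(y|t,lam) P(lam)
def P(x, y, s, t):
    return sum(p for lam, p in P_lambda.items()
               if f(s, lam) == x and g(t, lam) == y)

# CHSH value with outcomes 0/1 mapped to +1/-1; classical boxes
# satisfy |CHSH| <= 2.
def E(s, t):
    return sum((-1) ** (x ^ y) * P(x, y, s, t)
               for x, y in product((0, 1), repeat=2))

chsh = E(0, 0) + E(0, 1) + E(1, 0) - E(1, 1)
assert abs(chsh) <= 2
print("CHSH value:", chsh)
```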
It subsumes as special cases processings wherein classical common causes act on any of the subsets of these four local processings. Note that the most general free operation allows arbitrary feed-forward of classical information on each wing, since this does not require any causal influences between the wings.\footnote{Because the only physical restriction we are imagining is that no cause-effect influences are present {\em between} wings, feed-forward of {\em nonclassical} information (that is, of arbitrary GPT systems) at each wing is also a free LOSR operation. {\em Without loss of generality}, however, we consider only feed-forward of classical systems in this work, because this is already sufficient to generate {\em any} conditional probability distribution $P_{X' Y' S T|XY S'T'}$ consistent with the causal structure, i.e., satisfying Eqs.~(\ref{freeopNoSignal}-\ref{freeopNoRetro}). } But any such operation can always also be put into the canonical form depicted in blue in Fig.~\ref{fig:transf}. It suffices to note that the system that mediates the action of the common cause on the post-processings on a given wing can always be passed down the classical side-channel. Henceforth, we use this canonical form when describing the most general free operation. \begin{figure}[ht] \centering \includegraphics[width=225pt]{figures/FreeOpsExplicit.pdf} \caption{The most general form of an operation $P_{X' Y' S T|XY S'T'}$ (in blue) taking a common-cause box $P_{XY|ST}$ (in pink) to a common-cause box $P_{X'Y'|S'T'}$.} \label{fig:transfexplicit} \end{figure} Formally, such an operation transforms the conditional probability distribution $P_{XY|ST}$ to $P_{X'Y'|S'T'}$ as \begin{equation}\label{actionfreeop} P_{X'Y'|S'T'} = \sum_{XYST} P_{X' Y' S T|XY S'T'} P_{XY|ST} \end{equation} where the conditional probability distribution $P_{X' Y' S T|XY S'T'}$ satisfies certain constraints, which we specify below. 
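Eq.~\eqref{actionfreeop} can be instantiated directly. The Python sketch below (an illustrative aside; the specific maps are our own choices) applies a free operation built from deterministic local relabellings correlated through shared randomness to the PR box, and confirms that this particular operation maps the PR box to itself:

```python
from fractions import Fraction
from itertools import product

B = (0, 1)

# Source resource: the PR box.
def pr_box(x, y, s, t):
    return Fraction(1, 2) if (x ^ y) == (s & t) else Fraction(0)

# A free operation of the canonical form: shared lambda is uniform; the
# pre-processings pass the settings through (s = s', t = t') and the
# post-processings flip both outcomes by lambda (x' = x XOR lam, y' = y XOR lam).
P_lambda = {0: Fraction(1, 2), 1: Fraction(1, 2)}

def op(xp, yp, s, t, x, y, sp, tp):
    total = Fraction(0)
    for lam, p in P_lambda.items():
        if s == sp and t == tp and xp == (x ^ lam) and yp == (y ^ lam):
            total += p
    return total

# Eq. (actionfreeop):
# P'(x',y'|s',t') = sum_{x,y,s,t} P_op(x',y',s,t|x,y,s',t') P(x,y|s,t)
def target(xp, yp, sp, tp):
    return sum(op(xp, yp, s, t, x, y, sp, tp) * pr_box(x, y, s, t)
               for x, y, s, t in product(B, repeat=4))

# The PR box depends only on x XOR y, which is invariant under the
# correlated double flip, so this operation maps it back to itself.
for xp, yp, sp, tp in product(B, repeat=4):
    assert target(xp, yp, sp, tp) == pr_box(xp, yp, sp, tp)
print("free operation applied; PR box mapped to itself")
```

Replacing the correlated double flip by a flip on only one wing would instead map the PR box to the anti-PR box, illustrating that free operations partition boxes into equivalence classes.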
\begin{figure}[ht] \centering \includegraphics[width=225pt]{figures/freetransf.pdf} \caption{The canonical form of a generic bipartite {\LOSR} operation $P_{X' Y' S T|XY S'T'}$ (in blue) taking a common-cause box $P_{XY|ST}$ (in pink) to a common-cause box $P_{X'Y'|S'T'}$. } \label{fig:transf} \end{figure} Circuit fragments that map processes to processes (such as the ones depicted in blue in Figs.~\ref{fig:transfexplicit} and ~\ref{fig:transf}) have been studied extensively in recent years in a variety of frameworks, most notably the quantum combs framework of Refs.~\cite{qcombs08,qcombs09}, and the process matrix framework of Refs.~\cite{OCB12,Oreshkov2016}. If the source and target resources are denoted by $R$ and $R'$, respectively, and the free operation is denoted by $\tau$, we represent Eq.~\eqref{actionfreeop} as \begin{equation} R' = \tau \circ R, \end{equation} where $\circ$ is a particular instance of the \term{link product} of Ref.~\cite{qcombs08}. On the left wing, the most general local {\em pre-}processing takes as input the setting variable of the target resource ($S'$) and the variable originating from the common cause, and it generates as output the setting variable of the source resource ($S$) as well as an arbitrary variable which propagates down the side-channel. The most general {\em post-}processing on the left wing takes as input the outcome variable of the source resource ($X$) and the side-channel variable, and it generates as output the outcome variable of the target resource ($X'$). Included as special cases among these pre- and post-processings are maps from $S'$ to $S$ and from $X$ to $X'$ that constitute relabellings, coarse-grainings, or fine-grainings of the variable, where the possibilities are constrained by the cardinalities of these variables. 
Also included as special cases are instances where the map from $S'$ to $S$ or the map from $X'$ to $X$ is chosen probabilistically, and instances where these two maps are correlated (by making use of the side-channel). The analogous pre- and post-processings at the right wing are also possible. Finally, the choices of maps on the left can also be correlated with the choices of maps on the right, by leveraging the common cause. Note also that free operations can change the cardinality of a given box, which is reflected in the fact that we have not restricted the cardinalities of $X',Y',S'$, or $T'$ in any way. Thus, free operations can change the type of a resource. The free operations are characterized by those $P_{X' Y' S T|XY S'T'}$ which can be achieved via the type of circuit fragment depicted in Fig.~\ref{fig:transf}, namely, those such that \begin{align} \label{LC4} &P_{X' Y' S T|XY S'T'}(x'y'st|xys't')= \\ \nonumber &\smashoperator{\sum_{\lambda_A\lambda_B}} \begin{array}{r} P_{X' S|X S' \Lambda_A }(x's|xs'\lambda_A) P_{Y'T|YT'\Lambda_B}(y't|yt'\lambda_B) \\\times P_{\Lambda_A \Lambda_B}(\lambda_A\lambda_B) \end{array} \end{align} for some joint distribution $P_{\Lambda_A \Lambda_B}$ and for some $P_{X' S|X S' \Lambda_A }$ and $P_{Y'T|YT'\Lambda_B}$ satisfying no-retrocausation conditions \begin{align}\begin{split}\label{NRCs} P_{S|X S' \Lambda_A }=P_{S|S'\Lambda_A}\\ P_{T|Y T'\Lambda_B}=P_{T|T'\Lambda_B}. 
\end{split}\end{align} One can directly check that any $P_{X' Y' S T|XY S'T'}$ admitting of a decomposition as in Eq.~\eqref{LC4} satisfies the {\em operational} no-signalling constraints \begin{align}\begin{split} P_{X' S |XY S'T'} = P_{X'S|XS'}\\ P_{Y' T |XY S'T'} = P_{Y'T|YT'}\end{split}\label{freeopNoSignal} \end{align} and the {\em operational} no-retrocausation conditions \begin{align}\begin{split} P_{S|X S'}=P_{S|S'}\\ P_{T|Y T'}=P_{T|T'}.\end{split}\label{freeopNoRetro} \end{align} The parts of the circuit fragment in Fig.~\ref{fig:transf} that are associated to $P_{X' S|X S' \Lambda_A }$ and $P_{Y'T|YT'\Lambda_B}$ we refer to as \term{local operations}. The part associated to $P_{\Lambda_A \Lambda_B}$ corresponds to a joint distribution on the variables distributed to the two wings and can therefore be conceived of as \term{shared randomness}. Consequently, the free operations we are endorsing here can indeed be described as local operations and shared randomness ({\em \LOSR}), as noted earlier. \begin{defn}\label{defnLOSR} An operation is in the set \term{LOSR} (and termed an \term{LOSR operation}) if and only if it is associated to a conditional probability distribution $P_{X' Y' S T|XY S'T'}$ that admits of the sort of decomposition specified by Eqs.~\eqref{LC4} and \eqref{NRCs}. \end{defn} Previous resource-theoretic approaches to Bell-inequality violations have also endorsed the intuitive notion that local operations supplemented with shared randomness should constitute the free operations. Different works, however, have made different proposals for how this notion ought to be formalized. The correct formalization, in our opinion, is the one provided in Geller and Piani~\cite{GellerPiani} and independently in de Vicente~\cite{de2014nonlocality}, which coincides with the one given above\footnote{The definition of \term{LOSR} given in Geller and Piani~\cite{GellerPiani} is very similar to the one provided here (see Fig.
4 therein), while the one provided in de Vicente~\cite{de2014nonlocality} is much more cumbersome.}. Therefore, in this article we are endorsing the proposal of Refs.~\cite{de2014nonlocality,GellerPiani} to take \term{LOSR} as the free operations. On the other hand, Refs.~\cite{gallego2016nonlocality,Amaral2017NCW,kaur2018fundamental} have formalized the notion of local operations supplemented with shared randomness differently, defining a strict subset of the set \term{LOSR} defined above (a subset that can be shown to be nonconvex). Nonetheless, we believe that this discrepancy was an oversight and that it is unlikely anyone would defend taking this subset rather than the full set to define the resource theory. We discuss the issue in depth in Appendix~\ref{comparisonsub}. As a final comment, note that, without loss of generality, we can take the joint distribution to be $P_{\Lambda_A \Lambda_B}(\lambda_A \lambda_B) = \sum_{\lambda} \delta_{\lambda_A,\lambda} \delta_{\lambda_B,\lambda} P_{\Lambda}(\lambda)$ for some distribution $P_{\Lambda}$, and hence express Eq.~\eqref{LC4} as \begin{align} \label{LC4prime} &P_{X' Y' S T|XY S'T'}(x' y' st|xy s't')= \\\nonumber &\sum_{\lambda}P_{X' S|X S' \Lambda }(x' s|x s' \lambda) P_{Y'T|YT'\Lambda}(y't|yt' \lambda) P_{\Lambda}(\lambda). \end{align} As a consequence, the conditional probability distribution $P_{X' Y' S T|XY S'T'}$ can be conceptualized as the more familiar object $P_{\tilde{X} \tilde{Y}|\tilde{S} \tilde{T}}$ for setting variables $\tilde{S},\tilde{T}$ and outcome variables $\tilde{X}, \tilde{Y}$ that are defined as follows. We take the composite of the outputs of the circuit fragment on the left wing, $X'$ and $S$, as a composite outcome variable $\tilde{X}$, so that $\tilde{X}\coloneqq(X',S)$. Similarly, we take the composite of the inputs on the left wing, $X$ and $S'$, as a composite setting variable $\tilde{S}$, so that $\tilde{S}\coloneqq(X,S')$.
Making the analogous definitions for $\tilde{Y}$ and $\tilde{T}$ in terms of $Y, T, Y', T'$ on the right wing, Eq.~\eqref{LC4prime} can be rewritten as \begin{align} \label{LC4primeprime} &P_{\tilde{X}\tilde{Y}|\tilde{S}\tilde{T}}(\tilde{x}\tilde{y}|\tilde{s}\tilde{t})=\\ &\nonumber\qquad\sum_{\lambda}P_{ \tilde{X}|\tilde{S}\Lambda}(\tilde{x}|\tilde{s}\lambda)P_{\tilde{Y}|\tilde{T}\Lambda}(\tilde{y}|\tilde{t}\lambda) P_{\Lambda}(\lambda). \end{align} Recalling Eq.~\eqref{ClassicalCCbox}, it is clear that $P_{\tilde{X} \tilde{Y}|\tilde{S} \tilde{T}}$ satisfies all of the Bell inequalities. This illustrates the consistency of our proposal for the free operations, for we have just shown that the free operations on a resource $P_{XY|ST}$ are those that are achieved by taking a link product~\cite{qcombs08} with a process $P_{X' Y' S T|XY S'T'} \coloneqq P_{\tilde{X} \tilde{Y}|\tilde{S} \tilde{T}}$ which satisfies all of the Bell inequalities. Consider a source resource $R_1$ of type $[R_1]$ and a target resource $R_2$ of type $[R_2]$. We denote the type of an operation $\tau$ taking any resource of type $[R_1]$ to any resource of type $[R_2]$ by $[\tau] \coloneqq [R_1]\rightarrow [R_2]$, and we denote the set of all free operations of type $[R_1]\rightarrow [R_2]$ by $\underset{\scriptscriptstyle ^{[R_1]\rightarrow [R_2]}}{\LOSR}$. \subsubsection{Locally deterministic operations and local symmetry operations} It is valuable to consider two special finite-cardinality subsets of {\LOSR} operations: those that are deterministic and those that are invertible. Note that the invertible {\LOSR} operations are included among the deterministic ones because any indeterminism in the operation would be an obstacle to invertibility. 
\begin{defn} \label{defn:LDO} An {\LOSR} operation is in the set \term{\LDO} (i.e., it is a \term{locally deterministic operation}) if and only if the conditional probabilities $P_{X' Y' S T|XY S'T'}$ which define the operation take values in $\{0,1 \}$ for all values of $X'$,$Y'$,$S$,$T$,$X$,$Y$,$S'$ and $T'$. We denote the complete set of {\LDO} operations of type $[R_1]\rightarrow [R_2]$ by $\underset{\scriptscriptstyle ^{[R_1]\rightarrow [R_2]}}{\LDO}$. \end{defn} Deterministic {\LOSR} operations---i.e., {\LDO} operations---\emph{factorize} in the sense that every {\LDO} operation can be expressed as the product of two local deterministic operations such that \begin{align} \label{ExtremalOpLOSR} &P^{\rm det}_{X' Y' S T|XY S'T'}= P^{\rm det}_{X' S|X S'}P^{\rm det}_{Y' T|Y T'}. \end{align} This follows from the fact that the deterministic dependences preclude any dependence on the shared random variables $\lambda_A$ and $\lambda_B$ in Eq.~\eqref{LC4}, which then reduces to Eq.~\eqref{ExtremalOpLOSR}. Furthermore, the no-retrocausation assumption of Eq.~\eqref{freeopNoRetro} implies that these deterministic dependencies are of the following form: \begin{align}\begin{split} \label{ExtremalOpLOSRb} P^{\rm det}_{X' S|X S'} = \delta_{S,f_A(S')} \delta_{X',g_A(X,S')},\\ P^{\rm det}_{Y' T|Y T'} = \delta_{T,f_B(T')} \delta_{Y',g_B(Y,T')} \end{split}\end{align} for some functions $f_A$, $g_A$, $f_B$ and $g_B$. Specifically, on the left wing, $S$ is generated deterministically as a function of $S'$ (the pre-processing) and $X'$ is generated deterministically as a function of $X$ and $S'$ (the post-processing, which is setting-dependent), and similarly for the right wing. A generic bipartite locally deterministic operation is depicted in Fig.~\ref{det}. \begin{figure}[htb!]
\centering \includegraphics[width=225pt]{figures/LDO2} \caption{A generic bipartite locally deterministic operation $P_{X'Y'ST|XYS'T'} \in$ {\LDO} consists of a product of deterministic operations at each wing. The black dots in the figure represent classical copy operations, and the output variables for each gate are deterministic functions of the input variables for that gate. } \label{det} \end{figure} The cardinality of the set {\LDO} for a given type can be easily deduced. Let $|S|, |X|, \dots$ denote the cardinalities of the variables $S$, $X$, $\dots$. The total number of possibilities for the function $g_A$ is $|X'|^{|X| \cdot |S'|}$, and the total number of possibilities for the function $f_A$ is $|S|^{|S'|}$, so that the total number of possibilities for a deterministic operation on the left wing is $\left(|S| \cdot |X'|^{|X|}\right)^{|S'|}$. An analogous decomposition holds for the deterministic operations on the right wing, and the total number of possibilities for these is $\left(|T|\cdot |Y'|^{|Y|}\right)^{|T'|}$. Consequently, the cardinality of the set {\LDO} in this bipartite case is \begin{align}\label{eq:LDOcount} |\LDO| = \left(|S| \cdot |X'|^{|X|}\right)^{|S'|} \, \left(|T|\cdot |Y'|^{|Y|}\right)^{|T'|}. \end{align} The other important subset of {\LOSR} are those type-preserving operations which are {\em invertible} (and hence also deterministic). We refer to this subset of {\LOSR} operations as the \emph{local symmetry operations} and denote it {\LSO}. \begin{defn} \label{defn:LSO} The set \term{\LSO} (i.e., the \term{local symmetry operations}) is the subset of type-preserving operations in {\LDO} that are invertible. \end{defn} Every local symmetry operation, $P^{\rm sym}_{X' Y' S T|XY S'T'}$, has the form of a locally deterministic operation, $P^{\rm det}_{X' Y' S T|XY S'T'}$, specified in Eqs.~\eqref{ExtremalOpLOSR}-\eqref{ExtremalOpLOSRb}. 
That is, \begin{align} \label{SymOps} &P^{\rm sym}_{X' Y' S T|XY S'T'}= P^{\rm sym}_{X' S|X S'}P^{\rm sym}_{Y' T|Y T'}. \end{align} where \begin{align}\begin{split} \label{SumOpsb} P^{\rm sym}_{X' S|X S'} = \delta_{S,f_A(S')} \delta_{X',g_A(X,S')},\\ P^{\rm sym}_{Y' T|Y T'} = \delta_{T,f_B(T')} \delta_{Y',g_B(Y,T')}, \end{split}\end{align} but where $f_A$, $g_A$ are such that $P^{\rm sym}_{X' S|X S'}$ defines an invertible map from $(X, S')$ to $(X',S)$, and where $f_B$ and $g_B$ are such that $P^{\rm sym}_{Y' T|Y T'}$ defines an invertible map from $(Y,T')$ to $(Y', T)$. Unlike general {\LDO} operations, {\LSO} operations are always type-preserving, and hence the type {\xystprime} always matches the type {\xyst}. Note that an exchange of the parties is a symmetry operation (i.e., invertible), but it cannot be implemented by {\em local} operations, and so it is not part of {\LSO}. As a final remark, notice that the set of LSO operations forms a \emph{group}. This follows from the fact that the properties of being deterministic and invertible persist under composition, and that the inverse of every LSO operation is in LSO. This group is generated by the permutations of the value of a setting variable, and the permutations of the value of an outcome variable, where the choice of the latter permutation might depend also on the value of the setting variable on the same wing. In the bipartite case, the ${\LSO}$ group is a finite group of order\footnote{The order of a group is the cardinality of the set of group elements, i.e., the order of the ${\LSO}$ group quantifies the total number of \emph{invertible} {\LDO} operations.} \begin{align} |{\LSO}|=(|S|!)\cdot (|X|!)^{|S|}\cdot (|T|!)\cdot (|Y|!)^{|T|}, \end{align} corresponding to the $(|S|!)$ relabelings for the settings of the left wing, multiplied by the $(|X|!)$ relabelings of outcomes for each of the $|S|$ different settings, and similarly for the right wing. 
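These counts are small enough to verify by brute force. The sketch below (a pure-Python illustration with helper names of our own choosing) evaluates Eq.~\eqref{eq:LDOcount} and the formula for $|{\LSO}|$, and independently enumerates the invertible deterministic maps on a single wing for the case where all variables are binary:

```python
from itertools import product
from math import factorial

def ldo_count(nX, nY, nS, nT, nXp, nYp, nSp, nTp):
    """Eq. (eq:LDOcount): number of locally deterministic operations."""
    return (nS * nXp ** nX) ** nSp * (nT * nYp ** nY) ** nTp

def lso_order(nX, nY, nS, nT):
    """|LSO| = |S|! (|X|!)^{|S|} |T|! (|Y|!)^{|T|} for type-preserving operations."""
    return factorial(nS) * factorial(nX) ** nS * factorial(nT) * factorial(nY) ** nT

def invertible_wing_maps(nX, nS):
    """Brute-force count of deterministic single-wing maps
    (x, s') -> (g(x, s'), f(s')) that are bijections, i.e. the
    single-wing symmetry operations of Eq. (SymOps)-style form."""
    count = 0
    for f in product(range(nS), repeat=nS):            # all functions f: S' -> S
        for g in product(range(nX), repeat=nX * nS):   # all functions g: X x S' -> X'
            image = {(g[x * nS + sp], f[sp]) for x in range(nX) for sp in range(nS)}
            if len(image) == nX * nS:                  # bijective on nX*nS points
                count += 1
    return count

# The 2,2,2,2 -> 2,2,2,2 case discussed in the text:
assert ldo_count(2, 2, 2, 2, 2, 2, 2, 2) == 4096   # (2*2^2)^2 * (2*2^2)^2
assert lso_order(2, 2, 2, 2) == 64                 # order of the LSO group
assert invertible_wing_maps(2, 2) ** 2 == lso_order(2, 2, 2, 2)
```

The last assertion confirms, for binary variables, that the group order factorizes into independent single-wing counts, mirroring the product structure of Eq.~\eqref{SymOps}.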
The group can be generated by the relabelings of only adjacent settings or outcomes, and hence the ${\LSO}$ group admits a natural representation in terms of ${(|S|{-}1)+|S|(|X|{-}1)+(|T|{-}1)+|T|(|Y|{-}1)}$ generators (see Ref.~\citep[App.~B]{Rosset2014classifying}). For a concrete example, consider the operations transforming type {\twotwotwotwo} into type {\twotwotwotwo}. Throughout this work, we index the values a variable $X$ can take as $x\in \{0{,}{...},|X|-1\}.$ Accordingly, in the {\twotwotwotwo} scenario, $X,Y,S,T$ take values in $\{0,1\}$. Using this notation, the group of LSO can be generated explicitly by the four operations which interconvert $P_{XY|ST}(x,y|s,t)$ with either $P_{XY|ST}(x,y|s{\oplus}1,t)$, $P_{XY|ST}(x,y|s,t{\oplus}1)$, $P_{XY|ST}(x{\oplus}s,y|s,t)$, or $P_{XY|ST}(x,y{\oplus}t|s,t)$, where $\oplus$ denotes summation modulo two.\footnote{A second generating set of operations for this group is given by $\tau_1{,}{...},\tau_6$ defined in Proposition~\ref{prop:symmetrypartitioning}.} One can readily verify~\cite{Seress2003} that the order of this group is 64. \sloppy Suppose that a resource $R$ is represented as a real-valued vector $\vec{R}$ of conditional probabilities $P_{XY|ST}(xy|st)$, or any linear transformation thereof (such as the representation in terms of correlators used in Section~\ref{twomonotones}). ${\LSO}$ operations act as invertible linear maps on such a representation. If $f$ is a linear function of $\vec{R}$, its action can be represented as $ f(\vec{R}) = \vec{f} \cdot \vec{R}$ for some $\vec{f}$. Hence, it is equally meaningful to speak about $\vec{f}$ being transformed under ${\LSO}$ group elements as it is to speak about $\vec{R}$ being so transformed. The action of an ${\LSO}$ operation on $\vec{f}$ can be thought of as applying the \emph{inverse transformation} to $\vec{R}$, i.e., \begin{align}\label{resourcesvsfunctionals} \vec{f} \cdot(\pi \vec{R}) = (\pi^{T} \vec{f})\cdot \vec{R} .
\end{align} Note that many \emph{type-changing} ${\LOSR}$ operations are equally well-defined as transformations on linear functions. The critical requirement is that the operation be \emph{left-invertible}, i.e., it should act as an \emph{injective} function on the set of conditional probabilities. See Refs.~\cite{Pironio2005,Rosset2014classifying,Rosset2019CausalInequalities} for discussions on the topic of converting linear functions (and Bell inequalities in particular). \subsection{Convexity of the set of free operations} \label{detsuf} We now show that the set of free operations is convex, and that its extremal elements are deterministic and enumerable for any fixed pair of source and target resource types. This implies that the set of free operations mapping from a given source resource type to a given target resource type is a polytope. We begin by proving convexity. \begin{prop} \label{lem:convexity} The set {\LOSR} is convex, i.e., if $\tau_0 \in \LOSR$ and $\tau_1 \in \LOSR$, then ${{w\tau_0+(1-w)\tau_1} \in \LOSR}$ for $0 \le w \le 1$. \end{prop} This follows from the fact that the resources required to achieve such a mixing are achievable using {\LOSR}. Suppose $\beta$ is a binary variable that decides whether $\tau_0$ or $\tau_1$ will be implemented. It suffices to imagine that $\beta$ is sampled from a distribution $P_{\beta}$ where $P_{\beta}(0)=w$, that it is copied and distributed to both wings (with a copy sent down the side-channel at each wing), and that the local processings that are implemented on each wing are made to depend on $\beta$ (chosen so that if $\beta=b$, then $\tau_b$ is implemented overall). Because $\beta$ can be incorporated into the definition of the shared randomness, the procedure just described is itself achievable using {\LOSR}. The convexity of the set of {\LOSR} operations is crucial for the technique we develop in the next section to answer questions about resource conversion.
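The mixing procedure in this argument is easy to make concrete. In the sketch below (pure Python; the two deterministic operations and all helper names are our own illustrative choices), we mix the trivial operation with a setting-relabelling operation and verify that the mixture still satisfies the operational no-signalling condition of Eq.~\eqref{freeopNoSignal}:

```python
from itertools import product

B = range(2)  # all variables binary

def det_op(f_A, g_A, f_B, g_B):
    """Locally deterministic LOSR operation as in Eq. (ExtremalOpLOSRb):
    s = f_A(s'), x' = g_A(x, s'), t = f_B(t'), y' = g_B(y, t')."""
    return {(xp, yp, s, t, x, y, sp, tp):
            float(s == f_A(sp) and xp == g_A(x, sp) and
                  t == f_B(tp) and yp == g_B(y, tp))
            for xp, yp, s, t, x, y, sp, tp in product(*[B] * 8)}

tau0 = det_op(lambda sp: sp, lambda x, sp: x, lambda tp: tp, lambda y, tp: y)
tau1 = det_op(lambda sp: sp ^ 1, lambda x, sp: x, lambda tp: tp, lambda y, tp: y)

w = 0.3  # P(beta = 0): beta = 0 implements tau0, beta = 1 implements tau1
mixture = {k: w * tau0[k] + (1 - w) * tau1[k] for k in tau0}

def left_marginal(op, x, y, sp, tp):
    """P_{X'S|XYS'T'} for fixed inputs: marginalize over y' and t."""
    return {(xp, s): sum(op[xp, yp, s, t, x, y, sp, tp] for yp in B for t in B)
            for xp in B for s in B}

# Eq. (freeopNoSignal): the left marginal must not depend on y or t'.
for x, sp in product(B, B):
    ref = left_marginal(mixture, x, 0, sp, 0)
    assert all(left_marginal(mixture, x, y, sp, tp) == ref for y in B for tp in B)
```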
Recognizing the full potential of this convexity is one of the key contributions of our work. In Appendix~\ref{comparisonsub}, we discuss convexity further, in particular noting that previous formulations of {\LOSR} did not seem to recognize the physical realizability of convex mixing within {\LOSR}, but rather imposed convexity mathematically. Next, we highlight features of the extremal free operations. \begin{prop} \label{lem:detextremal} The convexly extremal operations in {\LOSR} are precisely the operations comprising {\LDO}, namely the deterministic {\LOSR} operations. \end{prop} This proposition is a minor generalization of Fine's argument, since the latter states that {\em locally deterministic} models can generate any conditional distribution that arises in a {\em locally indeterministic} model~\cite{FinePRL}. As in Fine's argument, here too any indeterminism in the local operations can be absorbed into the shared randomness, and hence allowing indeterministic local operations provides no more generality than considering only deterministic local operations. \begin{proof} It suffices to run Fine's argument for the composite variables $\tilde{S}, \tilde{T}, \tilde{X}$ and $\tilde{Y}$.
To see this explicitly, note that the constituent factors in the expression for an LOSR operation in Eq.~\eqref{LC4primeprime} can be rewritten as \begin{align*} &P_{\tilde{X}|\tilde{S} \Lambda}(\tilde{x}|\tilde{s} \lambda)=\sum_{\lambda_A\in\Lambda_A} P^{\rm det,\lambda_A}_{\tilde{X}|\tilde{S}}(\tilde{x}|\tilde{s}) P_{\Lambda_A|\Lambda}(\lambda_A|\lambda),\\ &P_{\tilde{Y}|\tilde{T} \Lambda}(\tilde{y}|\tilde{t} \lambda)=\sum_{\lambda_B\in\Lambda_B} P^{\rm det,\lambda_B}_{\tilde{Y}|\tilde{T}}(\tilde{y}|\tilde{t}) P_{\Lambda_B|\Lambda}(\lambda_B|\lambda), \end{align*} where for each value of $\lambda_A$, the conditional $P^{\rm det,\lambda_A}_{\tilde{X}|\tilde{S}}$ describes a deterministic operation on the left wing specifying the value of $\tilde{X} = (X',S)$ for every value of $\tilde{S} = (X,S')$, and similarly for {$P^{\rm det,\lambda_B}_{\tilde{Y}|\tilde{T}}$} on the right wing. Plugging these back into Eq.~\eqref{LC4primeprime}, we have that \begin{align} \label{LC4det} &P_{\tilde{X}\tilde{Y}|\tilde{S}\tilde{T}}(\tilde{x}\tilde{y}|\tilde{s}\tilde{t}) = \\\nonumber& \sum_{\lambda_A,\lambda_B} P^{\rm det,\lambda_A}_{ \tilde{X}|\tilde{S}}( \tilde{x}|\tilde{s}) P^{\rm det,\lambda_B}_{\tilde{Y}|\tilde{T}}(\tilde{y}|\tilde{t}) P_{\Lambda_A\Lambda_B}(\lambda_A\lambda_B), \end{align} where we have defined $P_{\Lambda_A\Lambda_B}(\lambda_A\lambda_B)\coloneqq \sum_{\lambda\in\Lambda}P_{\Lambda_A|\Lambda}(\lambda_A|\lambda)P_{\Lambda_B|\Lambda}(\lambda_B|\lambda)P_{\Lambda}(\lambda).$ Eq.~\eqref{LC4det} shows that a generic indeterministic {\LOSR} operation can always be decomposed into a convex combination of products of deterministic operations on each wing, and hence the convexly extremal {\LOSR} operations are precisely the {\LDO} operations.\end{proof} What we have shown above is that any element of $\underset{\scriptscriptstyle ^{[R_1]\rightarrow [R_2]}}{\LOSR}$ admits of a convex decomposition into elements of $\underset{\scriptscriptstyle ^{[R_1]\rightarrow 
[R_2]}}{\LDO}$. This implies the following useful geometric fact: \begin{prop}[Polytope of free operations]\label{lem:transpolytope}\hspace{0pt}\\ The set of all free operations of a given type is a polytope whose vertices are the locally deterministic operations of that type, \begin{align}\label{eq:convexityoftransformations} \underset{\scriptscriptstyle ^{[R_1]\rightarrow [R_2]}}{\LOSR} = \operatorname{ConvexHull}{\left(\underset{\scriptscriptstyle ^{[R_1]\rightarrow [R_2]}}{\LDO}\right)}. \end{align} \end{prop} The number of vertices of this polytope corresponds to the cardinality of the set of {\LDO} operations, as given in Eq.~\eqref{eq:LDOcount}. \section{Resource theory preliminaries} \label{rtprelim} A central question in any resource theory is whether one resource can be converted to another via the free operations. Many notions of conversion are studied: single-copy deterministic conversion, single-copy indeterministic conversion (where the probability of success need only be nonzero), multi-copy conversion (where one is given more than one copy of the resource), asymptotic conversion (where one is given arbitrarily many copies), and catalytic conversion (where one has access to another resource that must be returned intact after the conversion). We here focus on single-copy deterministic conversion. As noted earlier, we denote the application of an operation $\tau$ to a resource $R$ by $\tau \circ R$. If $R_1$ can be converted to $R_2$ by free operations, one writes $R_1 \conv R_2$, otherwise one writes $R_1 \nconv R_2$. 
Explicitly, \begin{align*} & R_1\conv R_2 \quad\text{denotes\; that}\quad\exists\; {\tau\in\underset{\scriptscriptstyle ^{[R_1]\conv [R_2]}}{\LOSR}} \\&\quad\text{such that}\quad R_2 = \tau \circ R_1, \vphantom{\tau\in\underset{\scriptscriptstyle ^{[R_1]\conv [R_2]}}{\LOSR}}\\ \text{and}\quad &R_1\nconv R_2 \quad\text{denotes\; that}\quad\nexists\; {\tau\in\underset{\scriptscriptstyle ^{[R_1]\conv [R_2]}}{\LOSR}} \\&\quad\text{such that}\quad R_2 = \tau \circ R_1. \end{align*} If one can determine, for any pair of resources $R_1$ and $R_2$, whether $R_1$ can be converted to $R_2$ using a free operation, then one can determine the \term{pre-order} over all resources that is induced by the conversion relation. A pre-order, by definition, is a transitive and reflexive binary relation between resources. The conversion relation is reflexive because the identity operation is free and maps a resource to itself, while it is transitive because if $R_1\conv R_2$ and $R_2 \conv R_3$ then $R_1 \conv R_3$. There are four possible ordering relations that might hold between a pair of resources.\begin{flalign*} &R_1\text{ is \term{strictly above} }R_2\text{ if:}& \!\!\left(\!\!\begin{array}{r}R_1\conv R_2 \\ \!\text{and }R_2\nconv R_1\end{array}\!\!\right) \!,\\ &R_1\text{ is \term{strictly below} }R_2\text{ if:}& \!\!\left(\!\!\begin{array}{r}R_1\nconv R_2 \\ \!\text{and }R_2\conv R_1\end{array}\!\!\right) \!,\\ &R_1\text{ is \term{incomparable} to }R_2\text{ if:}& \!\!\left(\!\!\begin{array}{r}R_1\nconv R_2 \\ \!\text{and }R_2\nconv R_1\end{array}\!\!\right) \!,\\ &R_1\text{ is \term{equivalent} to }R_2\text{ if:}& \!\!\left(\!\!\begin{array}{r}R_1\conv R_2 \\ \!\text{and }R_2\conv R_1\end{array}\!\!\right) \!. \end{flalign*} If $R_1$ is either strictly above or strictly below $R_2$, we say that $R_1$ and $R_2$ are \term{strictly ordered}. We pause to comment on the notion of equivalence of resources. 
By definition, if $R_1$ is equivalent to $R_2$ then the conversion from one to the other is free in both directions, \begin{align*} \begin{array}{r}\exists\; {\tau_1\in\underset{\scriptscriptstyle ^{[R_1]\conv [R_2]}}{\LOSR}}\quad\text{such that}\quad R_2 = \tau_1 \circ R_1, \\ \text{and }\;\exists\; {\tau_2\in\underset{\scriptscriptstyle ^{[R_2]\conv [R_1]}}{\LOSR}}\quad\text{such that}\quad R_1 = \tau_2 \circ R_2.\end{array} \end{align*} It need not be the case, however, that either of the free operations $\tau_1$ or $\tau_2$ is invertible, nor that one is the inverse of the other. For instance, if $R_1$ and $R_2$ are both free resources, then $\tau_1$ can be the operation which discards $R_1$ and prepares $R_2$, while $\tau_2$ can be the operation which discards $R_2$ and prepares $R_1$. The conversion relation between resources implies a corresponding conversion relation between \term{equivalence classes} of resources (relative to the equivalence relation defined above), wherein any two distinct equivalence classes are either strictly ordered or incomparable. The conversion relation between equivalence classes is therefore antisymmetric and describes a \term{partial order} relation rather than a pre-order relation. One can therefore conceptualize the project of characterizing the pre-order as a characterization of the equivalence classes and of the partial order that holds among these. In this work, we do not provide a characterization of the equivalence classes, and so our focus will be on directly characterizing features of the pre-order of resources. \subsection{Global features of a pre-order} \label{sec:oraclelimitations} To have a complete understanding of deterministic single-copy conversion in a resource theory, one must have an understanding of the pre-order that this conversion relation defines. In this section, we describe some of the basic features that characterize pre-orders.
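For a finite set of resources with a tabulated conversion relation, such order-theoretic features are straightforward to compute. The following toy sketch (the four-element pre-order and all helper names are hypothetical, chosen only for illustration) finds the largest pairwise-strictly-ordered subset and the largest pairwise-incomparable subset, the quantities identified below as the height and width of a pre-order:

```python
from itertools import combinations

# conv[i][j] == 1 means R_i -> R_j is achievable by free operations.
# Hypothetical toy pre-order: R_0 on top, R_3 at the bottom,
# R_1 and R_2 incomparable to one another.
conv = [[1, 1, 1, 1],
        [0, 1, 0, 1],
        [0, 0, 1, 1],
        [0, 0, 0, 1]]

def strictly_ordered(i, j):
    return bool(conv[i][j]) != bool(conv[j][i])   # conversion in exactly one direction

def incomparable(i, j):
    return not conv[i][j] and not conv[j][i]      # conversion in neither direction

def largest_subset(pred):
    """Size of the largest subset in which every pair satisfies `pred`."""
    n = len(conv)
    return max(r for r in range(1, n + 1)
               for sub in combinations(range(n), r)
               if all(pred(i, j) for i, j in combinations(sub, 2)))

height = largest_subset(strictly_ordered)   # longest chain
width = largest_subset(incomparable)        # largest antichain

assert height == 3   # e.g. the chain R_0 -> R_1 -> R_3
assert width == 2    # the antichain {R_1, R_2}
```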
Perhaps the most basic question about a pre-order of resources is whether or not it is \term{totally pre-ordered}, meaning that every pair of elements in the pre-order is strictly ordered or equivalent (i.e., the pre-order has no incomparable elements). Equivalently, we say that a pre-order is totally pre-ordered if and only if the partial order over equivalence classes that it defines is totally ordered (i.e., has no incomparable elements). If there do exist incomparable resources, one can ask if the binary relation of incomparability is transitive, in which case the pre-order is termed \term{weak}. A \term{chain} is a subset of the pre-order in which every pair of elements is strictly ordered. The \term{height} of a pre-order is the cardinality of the largest chain contained therein. An \term{antichain} is a subset of the pre-order in which every pair of elements is incomparable. The \term{width} of a pre-order is the cardinality of the largest antichain contained therein. Other important properties of the pre-order refer to the \term{interval} between a pair of resources, where $R$ is in the interval of $R_1$ and $R_2$ if and only if both $R_1\conv R$ and $R\conv R_2$. If the number of equivalence classes which lie in the interval between a pair of resources is \emph{finite} for \emph{every} pair of inequivalent resources, then the pre-order is said to be \term{locally finite}, otherwise it is said to be \term{locally infinite}. \subsection{Features of resource monotones}\label{sec:intotomonotones} A resource monotone is a function over resources whose value cannot increase under any free operation in the resource theory. 
Formally, \begin{samepage}\begin{defn}\label{def:monotone} A function $M$ from resources to the reals is called a \term{resource monotone} if and only if \begin{subequations}\begin{align} &R_1 \conv R_2\quad \;\textrm{implies}\;\quad M(R_1) \ge M(R_2),\\ \shortintertext{or equivalently,} &M(R_1) < M(R_2)\quad\;\textrm{implies}\;\quad R_1 \nconv R_2. \end{align}\end{subequations} \end{defn}\end{samepage} \noindent In other words, a resource monotone is an order-preserving map from the pre-order of resources to the total order of real numbers. Whenever some monotone $M$ and a pair of resources $R_1$ and $R_2$ satisfy $M(R_1) < M(R_2)$, we will say that the monotone $M$ {\em witnesses} the fact that $R_1 \nconv R_2$. If the pre-order is not totally pre-ordered (i.e., if there exist incomparable resources), then no single monotone can completely characterize the pre-order. A complete characterization may be achieved, however, by a family of monotones. Specifically, a family of monotones $\{ M_i \}_i$ is said to be \term{complete} if it completely characterizes the pre-order, that is, if \begin{align} \label{completeset} \begin{split}&\forall R_1,R_2: {R_1 \conv R_2}\;\;\\ &\quad\textrm{if and only if}\;\;\,\forall i: M_i(R_1) \ge M_i(R_2). \end{split}\end{align} A complete set of monotones is therefore an alternative way of describing the pre-order. Strictly speaking, monotones should be functions from resources {\em of any type in the resource theory} to the reals. However, many natural functions are only defined for particular types of resources. For instance, the function $P_{XY|ST}(00|00)P_{XY|ST}(11|01)+P_{XY|ST}(20|02)$ is only defined for common-cause boxes where the cardinalities of $X$ and $T$ are three.
To accommodate this, we define the notion of a \term{monotone relative to a set $\boldsymbol{S}$}: $M$ is a monotone relative to a set $\boldsymbol{S}$ of resources if and only if for all $R_1,R_2\in \boldsymbol{S}$, $R_1\conv R_2$ implies $M(R_1)\geq M(R_2)$. A family of monotones $\{M_i\}_i$ is said to be \term{complete relative to a set} $\boldsymbol{S}$ if it holds that \begin{align}\begin{split} \label{completewrtsubset} &\forall R_1,R_2\in \boldsymbol{S}: {R_1 \conv R_2}\;\; \\ &\quad\textrm{if and only if}\;\;\,\forall i: M_i(R_1) \ge M_i(R_2). \end{split}\end{align} If $\boldsymbol{S}$ is any set of resources all of which are of a particular type, a monotone relative to $\boldsymbol{S}$ is said to be \term{type-specific}. \subsection{Monotone constructions for any resource theory}\label{sec:genericmonotones} Here we review a variety of approaches to constructing resource monotones. We will make use of these versatile constructions to define an especially useful pair of monotones for the resource theory of common-cause boxes in Section~\ref{twomonotones}. \subsubsection{Cost and yield monotones} \label{costandyield} It is possible to upgrade a type-specific monotone to a type-independent monotone using either a \term{cost construction} or a \term{yield construction}. In fact, a cost or yield construction takes \emph{any} function (monotone or not) together with a set of resources and induces a type-independent monotone from it. Given any function $f$ which maps some set $\boldsymbol{S}$ of resources to real numbers, one can define associated monotones which are applicable to all resources, as follows: \begin{align} \label{yieldprescr} &M[f\textup{-yield},\boldsymbol{S}](R)\coloneqq \\&\quad \max\limits_{R^{\star}\in \boldsymbol{S}} \left\{ f(R^{\star}) \;\text{ s.t.
}\; R\conv R^{\star} \right\} ,\nonumber \\ \label{costprescr} &M[f\textup{-cost},\boldsymbol{S}](R)\coloneqq \\&\quad\min\limits_{R^{\star}\in \boldsymbol{S}} \left\{ f(R^{\star}) \;\text{ s.t. }\; R^{\star}\conv R \right\}.\nonumber \end{align} If there does not exist any $R^{\star} \in \boldsymbol{S}$ such that $R\conv R^{\star}$, then the yield is defined to be $-\infty$. Similarly, if there does not exist any $R^{\star} \in \boldsymbol{S}$ such that $R^{\star}\conv R$, then the cost is defined as $\infty$~\cite{Gonda2019Monotones}. In words, $M[f\textup{-yield},\boldsymbol{S}]$ is a monotone which asks for the most valuable resource in the set $\boldsymbol{S}$ (as measured by the function $f$) that one can create from the given resource $R$.\footnote{The maximum of a function $f$ over the set of boxes to which $R$ can be converted can also be thought of as the performance of $R$ over the so-called `nonlocal game' defined by the `payoff function' $f$. Since the set of boxes to which $R$ can be converted (of any given type) is a polytope, it follows that all {\em forbidden conversions (those from $R$ to a resource outside the polytope)} can be witnessed by a suitable set of payoff functions, namely, whatever linear functions pick out the facets of $R$'s polytope (for any given target type). In other words, any resource outside the polytope will attain a higher value on at least one of these functions. It follows, then, that the set of yield monotones induced by \emph{all possible linear functions} constitutes a complete set of monotones. 
While this observation may not be useful in practice, it does pose an interesting contrast with the findings of Ref.~\cite{Buscemi2012LOSR}: For common-cause {\em boxes}, we find that `nonlocal games' constitute a complete set of monotones; whereas \cite{Buscemi2012LOSR} shows that for the resource theory of \emph{quantum states} under {\LOSR} it is semiquantum games instead of nonlocal games that form a complete set of monotones. } Meanwhile, $M[f\textup{-cost},\boldsymbol{S}](R)$ is a monotone which asks for the least valuable resource in the set $\boldsymbol{S}$ (as measured by the function $f$) that one can use to create the given resource $R$. Note that in both cases, many different functions may yield the same monotone, so there is a conventional element to one's choice of function. Note also that $\boldsymbol{S}$ may be restricted to resources of a particular type (in which case $f$ need only be defined on resources of that type), and yet the type of the resource $R$ for which the monotones may be evaluated is unrestricted. \subsubsection{Weight and robustness monotones}\label{sec:othermonotones} Various functions have been used as measures of the distance of a resource from the set of classical common-cause boxes in previous work~\cite{de2014nonlocality,GellerPiani,beigi2015monotone, Beirhorst2016Bell,Cavalcanti2016Measures,Brito2018tracedistance,geometry2018}. In what follows, we highlight some of these which are monotones in our resource theory. The {\em nonlocal fraction}, which we denote here by $M_{\textup{NF}}$, is the minimum weight of the nonfree fraction in any convex decomposition of the resource, \begin{align} &M_{\textup{NF}}(R)\coloneqq \\&\quad\min_{\substack{0{\leq} \lambda{\leq 1}\\R_{*}{\in}\boldsymbol{S}_{[R]}\\ L{\in}\boldsymbol{L}_{[R]}}} \left\{\lambda \,\;\text{ s.t. }\; R=\lambda\, R_{*}+(1{-}\lambda)L\right\}. 
\nonumber\end{align} The nonlocal fraction was proven to be a resource monotone relative to (a superset of) the {\LOSR} free operations in Ref.~\citep[Sec.~5.2]{de2014nonlocality}, though it is there termed the `EPR2' measure. Next, there is the case of robustness measures\footnote{Note that in Ref.~\cite{geometry2018} these were termed `visibilities'.} which quantify the minimum weight of a resource from some particular class that must be added convexly with the original resource for the mixture to be free. The two robustness measures that we consider differ by the class of resources that are mixed with the original resource. The first, which we denote by $M_{\textup{RBST},\boldsymbol{L}}(R)$, considers mixing the original resource $R$ with any element in the set $\boldsymbol{L}_{[R]}$ of \emph{free} resources of the same type: \begin{align} &M_{\textup{RBST},\boldsymbol{L}}(R) \coloneqq \\\nonumber&\quad\min_{\substack{0{\leq} \lambda{\leq} 1\\L\in\boldsymbol{L}_{[R]}}} \left\{\lambda \,\;\text{ s.t. }\; \lambda\, L+(1{-}\lambda)R \:\in \boldsymbol{L}_{[R]}\right\}. \end{align} This robustness measure was shown to be a resource monotone relative to {\LOSR} in Ref.~\citep[Sec.~3]{GellerPiani}. The second robustness measure, which we denote simply by $M_{\textup{RBST}}(R)$, considers mixing the original resource $R$ with any element in the set $\boldsymbol{S}_{[R]}$ of all resources of the same type: \begin{align} &M_{\textup{RBST}}(R) \coloneqq \\\nonumber&\quad\min_{\substack{0{\leq} \lambda{\leq} 1\\R_{*}{\in}\boldsymbol{S}_{[R]}}} \left\{\lambda \,\;\text{ s.t. }\; \lambda\, R_{*}+(1{-}\lambda)R \:\in \boldsymbol{L}_{[R]}\right\}.
\end{align} The unified resource theory formalism of Ref.~\cite{Gonda2019Monotones} implies that all three of these distance measures are resource monotones in any resource theory wherein the free operations act convexly\footnote{An operation $\tau$ acts convexly if the image $\tau\circ(R_3)$ is a given mixture of $\tau\circ(R_1)$ and $\tau\circ(R_2)$ whenever the preimage $R_3$ is the same mixture of $R_1$ and $R_2$. All linear operations act convexly. Convex \emph{action} should not be confused with convex \emph{closure} of the operations, which was the subject of Proposition~\ref{lem:convexity}.}, including our resource theory here. Additionally, in Corollary~\ref{measurecor}, we show that each of these three distance measures can be explicitly related to a monotone for which we provide a closed-form expression relative to {\twotwotwotwo}-type resources. By extension, we therefore also provide closed-form expressions for these three distance measures relative to {\twotwotwotwo}-type resources. \section{A linear program for determining the ordering of any pair of resources} \label{polything} Next, we provide a linear program which allows one to determine the ordering relation that holds between any two resources in our enveloping theory. To do so, it is convenient to set up some useful notation. \begin{defn} Let the bold symbol \term{$\boldsymbol{S}$} refer to any set of resources. We use subscripts to specify the type of the resources in the set, such as $\boldsymbol{S}_{\xyst}$ or $\boldsymbol{S}_{[R]}$. We use superscripts to specify further properties of a set. For example, the set of all GPT-realizable common-cause boxes is denoted by $\boldsymbol{S}^G$, the set of all nonfree resources is denoted by $\boldsymbol{S}^{\rm nonfree}$, and the set of all free resources is denoted by $\boldsymbol{S}^{\rm free}$. 
Whenever we wish to emphasize that a specific set is discrete, we denote it \term{$\mathbfcal{V}$}, and whenever we wish to emphasize that a specific set is a polytope, we denote it \term{$\mathbfcal{P}$}. \end{defn} Let $\mathbfcal{P}^{\LOSR}_{[R_2]}(R_1)$ denote the {\em continuous} set of resources of type $[R_2]$ into which $R_1$ can be converted under {\LOSR}, that is, the image of $R_1$ under $\underset{\scriptscriptstyle ^{[R_1]\rightarrow [R_2]}}{\LOSR}$. Similarly, let $\mathbfcal{V}^{\LDO}_{[R_2]}(R_1)$ denote the {\em discrete} set of resources of type $[R_2]$ into which $R_1$ can be converted under {\LDO}, that is, the image of $R_1$ under $\underset{\scriptscriptstyle ^{[R_1]\rightarrow [R_2]}}{\LDO}$. From Propositions~\ref{lem:convexity}~and~\ref{lem:transpolytope}, and the finite cardinality of $\mathbfcal{V}^{\LDO}_{[R_2]}(R_1)$, it follows that $\mathbfcal{P}^{\LOSR}_{[R_2]}(R_1)$ is a convex set with a finite number of vertices, and hence is a polytope: \begin{prop}[The polytope of resources obtainable from a given resource by LOSR]\label{lem:belstheorem}\hspace{0pt}\\ The set of all resources of type $[R_2]$ obtainable from $R_1$ by {\LOSR} forms a polytope, \begin{align} \mathbfcal{P}^{\LOSR}_{[R_2]}(R_1) = {\operatorname{ConvexHull}}{\left(\mathbfcal{V}^{\LDO}_{[R_2]}(R_1)\right)}. \end{align} \end{prop} We can express the content of Proposition~\ref{lem:belstheorem} equivalently as \begin{align}\begin{split} &R_1 \conv R_2 \quad\text{if and only if}\quad \\&R_2\in{\operatorname{ConvexHull}}{\left(\mathbfcal{V}^{\LDO}_{[R_2]}(R_1)\right)}. \end{split}\end{align} Therefore, to determine whether $R_1$ is higher than $R_2$ in the pre-order of resources, it suffices to implement the following computational test: \begin{compactenum} \item Enumerate all of the locally deterministic operations which take resources of type $[R_1]$ to type $[R_2]$. (They are finite in number.) 
\item Compute the images of $R_1$ under all of these locally deterministic operations. \item Determine whether or not $R_2$ can be expressed as a convex combination of these images. (This is a linear program.) \end{compactenum} To determine which of the four possible ordering relations holds for a given pair of resources, $R_1$ and $R_2$, it suffices to determine whether $R_1 \conv R_2$ or not and whether $R_2\conv R_1$ or not. This requires just two instances of the linear program.\footnote{In the language of Ref.~\cite{girard2015witnesses}, these linear programs constitute a {\em complete witness} for conversion. } According to Proposition~\ref{lem:belstheorem}, the image of a resource under the set of \emph{all} LOSR free operations is equivalent to the convex closure of the image of the resource under only the \emph{extremal} operations. Replacing the set of all operations with only the extremal ones is a dramatic shortcut. In principle, the linear program just described allows one to characterize the pre-order completely. For instance, this linear program defines a complete set of monotones for a given set of resources $\mathbf{S}$, namely, $\{ M_{R'} : R' \in \mathbf{S}\}$ where the monotone $M_{R'}$ is defined as follows: for all $R\in \mathbf{S}$, $M_{R'}(R)=1$ if $R\to R'$ by {\LOSR} and $M_{R'}(R)=0$ otherwise. $M_{R'}(R)$ reports the answer returned by the linear program for the question of whether $R\to R'$ by {\LOSR}, and if one has the answer for all $R'\in \mathbf{S}$, then one has located $R$ within the pre-order. However, such a brute-force characterization of the pre-order requires one to apply the linear program to {\em every} pair of resources, which is not possible in practice. Rather, the linear program is primarily useful for answering questions about conversions among pairs (or finite sets) of resources. 
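The three steps above can be made concrete for {\twotwotwotwo}-type resources. The following sketch is ours, not an implementation from the original work: boxes are represented as Python functions returning $P_{XY|ST}(xy|st)$, the canonical PR box is taken in its standard parametrization $x\oplus y = s\wedge t$, and, in place of a general-purpose solver for step 3 (which in general would be a feasibility linear program, e.g.\ via \texttt{scipy.optimize.linprog}), we verify a single hand-constructed convex certificate: the uniform mixture of all deterministic images of the PR box is the maximally mixed box.

```python
from itertools import product

# Canonical PR box: P(x,y|s,t) = 1/2 iff x XOR y = s AND t (standard parametrization).
def pr_box(x, y, s, t):
    return 0.5 if (x ^ y) == (s & t) else 0.0

def image(box, f, g, h, k):
    """Image of `box` under one locally deterministic operation: Alice feeds
    s = f[s'] into the box and outputs x' = g[x][s']; Bob acts analogously
    via h and k."""
    return {(xp, yp, sp, tp): sum(box(x, y, f[sp], h[tp])
                                  for x, y in product(range(2), repeat=2)
                                  if g[x][sp] == xp and k[y][tp] == yp)
            for xp, yp, sp, tp in product(range(2), repeat=4)}

in_maps = list(product(range(2), repeat=2))                      # 4 input relabelings per party
out_maps = list(product(product(range(2), repeat=2), repeat=2))  # 16 output relabelings per party

# Steps 1 and 2: enumerate all locally deterministic operations and their images.
images = [image(pr_box, f, g, h, k)
          for f in in_maps for g in out_maps for h in in_maps for k in out_maps]
assert len(images) == 4096

# Step 3 asks whether a target box is a convex combination of these images,
# a feasibility linear program in general.  As a hand-checkable instance:
# the uniform mixture of all 4096 images is exactly the maximally mixed box.
mix = {key: sum(im[key] for im in images) / len(images) for key in images[0]}
assert all(abs(p - 0.25) < 1e-12 for p in mix.values())
```

The dramatic reduction to the $4096$ extremal (locally deterministic) operations is exactly what makes the membership test in step 3 a finite-dimensional linear program.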
To characterize the full pre-order more generally, one would ideally have a finite set of resource monotones that characterize the pre-order completely. Furthermore, in order to determine certain global properties of the pre-order, such as those described earlier, knowledge of a few carefully chosen resource monotones will typically suffice. This is the strategy we will adopt hereafter in the article. Specifically, over the next few sections, we define a pair of resource monotones and use these to prove that the pre-order of single-copy deterministic conversion is not totally pre-ordered (i.e., there exist incomparable resources), that it is not weak (the incomparability relation is not transitive), that it has both infinite width and infinite height, and that it is locally infinite. \section{Two useful monotones}\label{twomonotones} We will define two monotones, one a cost construction and the other a yield construction, where the sets of resources relative to which these costs and yields are evaluated (to be described below) contain only resources of type {\twotwotwotwo}. It is useful to first review some facts about the set of all common-cause boxes of type {\twotwotwotwo}, that is, about $\boldsymbol{S}^G_{\twotwotwotwo}$. \subsection{Preliminary facts regarding CHSH inequalities and PR boxes} We adopt the convention of Ref.~\cite{brunner2013Bell} of parametrizing common-cause boxes of type-{\twotwotwotwo} in terms of outcome biases and two-point correlators. The outcome biases are \begin{subequations}\begin{align*} \begin{split}\expec{A_s}&:= \sum_{x\in \{0,1\}} (-1)^x P_{X|S}(x|s) \\&= P_{X|S}(0|s) - P_{X|S}(1|s)\end{split}\\ \begin{split} \text{and}\quad\expec{B_t}&:= \sum_{y\in \{0,1\}} (-1)^y P_{Y|T}(y|t) \\&= P_{Y|T}(0|t) - P_{Y|T}(1|t),\end{split}\\ \shortintertext{and the two-point correlators are} \expec{A_s B_t}&:= \sum_{x,y \in \{0,1\}} (-1)^{(x\oplus y)} \; P_{XY|ST}(x y| s t).
\end{align*}\end{subequations} Recalling that the set of common-cause boxes coincides with the set of no-signalling boxes in the Bell scenario, $\boldsymbol{S}^G_{\twotwotwotwo}$ constitutes what is conventionally referred to as the ``no-signalling'' set for this type.\footnote{However, as noted in Appendix \ref{diffcausalstr}, for causal structures different from the Bell scenario, the set $\boldsymbol{S}$ of processes that can be realized by a GPT causal model on the causal structure is typically distinct from the no-signalling set.} This set is well-known to be a polytope defined by 16 positivity inequalities~\cite{Barrett2005PRresource,Beirhorst2016Bell}. The set of classical (free) resources of type {\twotwotwotwo} is a subset therein, conventionally termed the \enquote{local set}, and is defined by the same 16 positivity inequalities together with eight additional facet-defining Bell inequalities, namely, the canonical CHSH inequality and its seven variants~\cite{Bellreview}. A resource is therefore nonclassical (nonfree) if and only if it violates a facet-defining Bell inequality. 
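As a concrete illustration of this parametrization, the following sketch (ours) computes the biases and correlators directly from a conditional distribution; the canonical PR box is taken in its standard form $x\oplus y = s\wedge t$, which reproduces the correlators listed for $R_{\textup{PR}}$ in Table~\ref{tab:genboxes2}.

```python
from itertools import product

# Canonical PR box: P(x,y|s,t) = 1/2 iff x XOR y = s AND t.
def pr_box(x, y, s, t):
    return 0.5 if (x ^ y) == (s & t) else 0.0

def bias_A(box, s, t=0):
    # <A_s> = sum_{x,y} (-1)^x P(x,y|s,t); independent of t by no-signalling.
    return sum((-1) ** x * box(x, y, s, t) for x, y in product(range(2), repeat=2))

def bias_B(box, t, s=0):
    # <B_t> = sum_{x,y} (-1)^y P(x,y|s,t); independent of s by no-signalling.
    return sum((-1) ** y * box(x, y, s, t) for x, y in product(range(2), repeat=2))

def correlator(box, s, t):
    # <A_s B_t> = sum_{x,y} (-1)^(x XOR y) P(x,y|s,t)
    return sum((-1) ** (x ^ y) * box(x, y, s, t) for x, y in product(range(2), repeat=2))

# The PR box has vanishing biases and correlators (+1, +1, +1, -1).
assert all(bias_A(pr_box, s) == 0.0 and bias_B(pr_box, t) == 0.0
           for s in range(2) for t in range(2))
assert [correlator(pr_box, s, t) for s in (0, 1) for t in (0, 1)] == [1.0, 1.0, 1.0, -1.0]
```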
The eight variants of the canonical CHSH function are \begin{align}\label{eq:chshvariants0}&\hspace{-3ex}\text{\footnotesize \(\begin{array}{l} {\CHSH}_0(R) \coloneqq {+}\expec{A_0 B_0}{+}\expec{A_1 B_0}{+}\expec{A_0 B_1}{-}\expec{A_1 B_1},\\ {\CHSH}_1(R) \coloneqq {+}\expec{A_0 B_0}{+}\expec{A_1 B_0}{-}\expec{A_0 B_1}{+}\expec{A_1 B_1},\\%\leq 2 \\ {\CHSH}_2(R) \coloneqq {+}\expec{A_0 B_0}{-}\expec{A_1 B_0}{+}\expec{A_0 B_1}{+}\expec{A_1 B_1},\\%\leq 2 \\ {\CHSH}_3(R) \coloneqq {-}\expec{A_0 B_0}{+}\expec{A_1 B_0}{+}\expec{A_0 B_1}{+}\expec{A_1 B_1},\\ {\CHSH}_4(R) \coloneqq {-}\expec{A_0 B_0}{-}\expec{A_1 B_0}{-}\expec{A_0 B_1}{+}\expec{A_1 B_1},\\%\leq 2 \\ {\CHSH}_5(R) \coloneqq {-}\expec{A_0 B_0}{-}\expec{A_1 B_0}{+}\expec{A_0 B_1}{-}\expec{A_1 B_1},\\%\leq 2 \\ {\CHSH}_6(R) \coloneqq {-}\expec{A_0 B_0}{+}\expec{A_1 B_0}{-}\expec{A_0 B_1}{-}\expec{A_1 B_1},\\%\leq 2 \\ {\CHSH}_7(R) \coloneqq {+}\expec{A_0 B_0}{-}\expec{A_1 B_0}{-}\expec{A_0 B_1}{-}\expec{A_1 B_1}. \end{array}\)} \end{align} The canonical CHSH function is ${\CHSH}_0$, which we will sometimes denote simply as ${\CHSH}$. In terms of these, the eight facet-defining Bell inequalities are \begin{equation} {\CHSH}_k(R) \leq 2\;\;\text{ for}\;\;k\in \{0,\dots, 7\}. \end{equation} Note that the regions defined by strict violation of each of the eight inequalities are nonoverlapping~\cite{Beirhorst2016Bell}. It follows that {\em one and only one} of the eight ${\CHSH}$ inequalities can be violated by a given resource, i.e., for nonfree $R$ there is precisely one value of $k$ such that ${\CHSH}_k (R) > 2$. There are eight extremal nonfree vertices of the full polytope $\boldsymbol{S}^G_{\twotwotwotwo}$. One of these is the canonical PR box~\cite{Popescu1994,Barrett2005PRunit}, denoted $R_{\textup{PR}}$ and defined explicitly in Table~\ref{tab:genboxes2}; the other seven are variants of this PR box. 
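The unique-violation property is easy to check numerically for specific resources. The following sketch (ours) evaluates all eight variants for the canonical PR box, using only the sign patterns above and the correlator values listed in Table~\ref{tab:genboxes2}:

```python
# Coefficients of (<A0B0>, <A1B0>, <A0B1>, <A1B1>) in CHSH_k, in the order of the text.
SIGNS = [(+1, +1, +1, -1), (+1, +1, -1, +1), (+1, -1, +1, +1), (-1, +1, +1, +1),
         (-1, -1, -1, +1), (-1, -1, +1, -1), (-1, +1, -1, -1), (+1, -1, -1, -1)]

def chsh_k(E, k):
    # E[s][t] = <A_s B_t>; term order matches (A0B0, A1B0, A0B1, A1B1).
    c = SIGNS[k]
    return c[0] * E[0][0] + c[1] * E[1][0] + c[2] * E[0][1] + c[3] * E[1][1]

# Correlators of the canonical PR box (all outcome biases vanish).
E_PR = [[+1, +1],
        [+1, -1]]

values = [chsh_k(E_PR, k) for k in range(8)]
assert values[0] == 4                    # CHSH_0 attains its algebraic maximum
assert sum(v > 2 for v in values) == 1   # exactly one variant is violated
```

Note that the PR box attains the value $-4$ on $\CHSH_4$, the algebraic minimum, consistent with the variants being related by sign flips of the correlators.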
For each $k$, we denote the associated variant of the PR-box by $R_{\text{PR},k}$ (so that the canonical PR box is associated to $k=0$, $R_{\textup{PR}}=R_{\text{PR},0}$). $R_{\text{PR},k}$ is the unique resource that maximally violates the $k$th ${\CHSH}$ inequality, i.e., that achieves its algebraic maximum, ${\CHSH}_k(R_{\text{PR},k}) = 4$. Unsurprisingly, the variants of the facet-defining Bell inequalities are interconvertible under {\LSO} operations, as are the variants of the extremal vertices. To illustrate this, it is convenient to factorize the {\twotwotwotwo} LSO group into a subgroup which stabilizes ${\CHSH}_0$ and a subgroup which does not, as follows. \begin{prop}\label{prop:symmetrypartitioning} \begin{samepage}Consider the following invertible operations, i.e., elements of the {\LSO} group for {\twotwotwotwo}-type resources: \newline \noindent\begin{tabular}{ll}\label{taus} $\!\tau_1\!:$ & $P_{XY|ST}(x,y|s,t)\leftrightarrow P_{XY|ST}(x,y{\oplus}1|s,t)$\\ $\!\tau_2\!:$ & $P_{XY|ST}(x,y|s,t)\leftrightarrow P_{XY|ST}(x,y|s{\oplus}1,t)$\\ $\!\tau_3\!:$ & $P_{XY|ST}(x,y|s,t)\leftrightarrow P_{XY|ST}(x,y|s,t{\oplus}1)$\\ $\!\tau_4\!:$ & $P_{XY|ST}(x,y|s,t)\leftrightarrow P_{XY|ST}(x{\oplus}1,y{\oplus}1|s,t)$ \label{CHSHpreservingtransformations}\\ $\!\tau_5\!:$ & $P_{XY|ST}(x,y|s,t)\leftrightarrow P_{XY|ST}(x{\oplus}s,y|s,t{\oplus}1)$\\ $\!\tau_6\!:$ & $P_{XY|ST}(x,y|s,t)\leftrightarrow P_{XY|ST}(x,y{\oplus}t|s{\oplus}1,t)$\\ \end{tabular}\end{samepage} \newline\noindent Then,\newline \begin{compactenum}[(\ref{prop:symmetrypartitioning}a)][4] \item The order-64 group $G_{123456}$ generated by $\{\tau_1,\tau_2,\tau_3,\tau_4,\tau_5,\tau_6\}$ is the entire {\LSO} group for {\twotwotwotwo} resources. \item The order-8 subgroup $G_{123}$ generated by $\{\tau_1,\tau_2,\tau_3\}$ has no elements in common with the subgroup $G_{456}$ generated by $\{\tau_4,\tau_5,\tau_6\}$ other than the identity operation. 
\item\label{prop:CHSHstabilizer} The order-8 subgroup $G_{456}$ generated by $\{\tau_4,\tau_5,\tau_6\}$ stabilizes the canonical PR box and the $\CHSH_0$ inequality. \item\label{prop:CHSHorbit} For any $k\in\{0...7\}$, the orbit of $\CHSH_k$ under $G_{123}$ is $\{\CHSH_0,...,\CHSH_7\}$, and the orbit of $R_{\textup{PR},k}$ under $G_{123}$ is $\{R_{\textup{PR},0},...,R_{\textup{PR},7}\}$. \end{compactenum} \end{prop} \begin{proof} The first two claims in Proposition~\ref{prop:symmetrypartitioning} are readily verified by standard group theory algorithms~\cite{Seress2003}. The latter two claims become self-evident by explicitly examining the actions of the operations on expectation values (and hence, their action on resources or functions on resources), per Table~\ref{relabelingeff}. { \begin{table*}[htb!] \begin{center}\centering {\setlength{\tabcolsep}{1.2ex} \begin{tabular}{|c|rrrrrrrr|} \toprule & \(\Braket{A_0}\) & \(\Braket{A_1}\) & \(\Braket{B_0}\) & \(\Braket{B_1}\) & \(\Braket{A_0 B_0}\) & \(\Braket{A_1 B_0}\) & \(\Braket{A_0 B_1}\) & \(\Braket{A_1 B_1}\) \\ \midrule \(\tau_1\) & \(\Braket{A_0}\) & \(\Braket{A_1}\) & \(-\Braket{B_0}\) & \(-\Braket{B_1}\) & \(-\Braket{A_0 B_0}\) & \(-\Braket{A_1 B_0}\) & \(-\Braket{A_0 B_1}\) & \(-\Braket{A_1 B_1}\)\\ \(\tau_2\) & \(\Braket{A_1}\) & \(\Braket{A_0}\) & \(\Braket{B_0}\) & \(\Braket{B_1}\) & \(\Braket{A_1 B_0}\) & \(\Braket{A_0 B_0}\) & \(\Braket{A_1 B_1}\) & \(\Braket{A_0 B_1}\)\\ \(\tau_3\) & \(\Braket{A_0}\) & \(\Braket{A_1}\) & \(\Braket{B_1}\) & \(\Braket{B_0}\) & \(\Braket{A_0 B_1}\) & \(\Braket{A_1 B_1}\) & \(\Braket{A_0 B_0}\) & \(\Braket{A_1 B_0}\)\\ \(\tau_4\) & \(-\Braket{A_0}\) & \(-\Braket{A_1}\) & \(-\Braket{B_0}\) & \(-\Braket{B_1}\) & \(\Braket{A_0 B_0}\) & \(\Braket{A_1 B_0}\) & \(\Braket{A_0 B_1}\) & \(\Braket{A_1 B_1}\)\\ \(\tau_5\) & \(\Braket{A_0}\) & \(-\Braket{A_1}\) & \(\Braket{B_1}\) & \(\Braket{B_0}\) & \(\Braket{A_0 B_1}\) & \(-\Braket{A_1 B_1}\) & \(\Braket{A_0 B_0}\) & \(-\Braket{A_1 
B_0}\)\\ \(\tau_6\) & \(\Braket{A_1}\) & \(\Braket{A_0}\) & \(\Braket{B_0}\) & \(-\Braket{B_1}\) & \(\Braket{A_1 B_0}\) & \(\Braket{A_0 B_0}\) & \(-\Braket{A_1 B_1}\) & \(-\Braket{A_0 B_1}\)\\ \bottomrule \end{tabular}}\vspace{-1ex} \end{center} \caption{ Action of each of the six specified symmetry operations in terms of marginal expectation values and correlators.} \label{relabelingeff} \end{table*}} In light of Table~\ref{relabelingeff}, the third claim is easily verified. The fourth claim simply captures the fact that the eight CHSH functions are related by ${\LSO}$, and similarly the eight PR boxes are also interconvertible under ${\LSO}$. We can explicitly show how the interconversions are accomplished by $G_{123}$ by describing the actions of $\{\tau_1,\tau_2,\tau_3\}$ as permutations on the ordered set of $\CHSH$ functions, or equivalently, on the ordered set of PR boxes. \begin{compactitem} \item $\tau_1$ flips the sign of every correlator, so the action of $\tau_1$ on the ordered set of $\CHSH$ functions is the permutation $(0,4)(1,5)(2,6)(3,7)$. \item $\tau_2$ exchanges the roles of $A_0$ and $A_1$, so the action of $\tau_2$ on the ordered set of $\CHSH$ functions is the permutation $(0,1)(2,3)(4,5)(6,7)$. \item $\tau_3$ exchanges the roles of $B_0$ and $B_1$, so the action of $\tau_3$ on the ordered set of $\CHSH$ functions is the permutation $(0,2)(1,3)(4,6)(5,7)$. \end{compactitem} Therefore the \emph{orbit} of $\CHSH_k$ under $G_{123}$ is easily checked to be $\{\CHSH_0,...,\CHSH_7\}$, as claimed. 
The ordered set of PR boxes transforms under LSO operations in exactly the same manner as the ordered set of CHSH functions, since the {\em values} of the marginals and the correlators for resource $R_{\textup{PR},k}$ coincide with the {\em coefficients} of the associated terms in the linear function $\CHSH_k$ (compare, e.g., the expression for CHSH$_0$ in Eq.~\eqref{eq:chshvariants0} with the values of the marginals and correlators for $R_{\textup{PR}}$ in Table~\ref{tab:genboxes2}). Hence, the argument just given also establishes that the orbit of $R_{\textup{PR},k}$ under $G_{123}$ is $\{R_{\textup{PR},0},...,R_{\textup{PR},7}\}$. \end{proof} \subsection{Defining the two useful monotones} \noindent {\bf Monotone 1: The yield of a resource with respect to the set of resources of type {\twotwotwotwo}, as measured by the CHSH function.} To define our first monotone, consider the canonical CHSH function \begin{align*} \operatorname{CHSH}(R)\!\coloneqq \expec{A_0 B_0}+\expec{A_0 B_1}+\expec{A_1 B_0}-\expec{A_1 B_1}. \end{align*} The CHSH function is type-specific\footnote{The CHSH function is well-defined only for resources of type {\twotwotwotwo}.} and furthermore is not a monotone~\cite{de2014nonlocality}. However, we can apply the prescription of Eq.~\eqref{yieldprescr} to this function, taking the set $\boldsymbol{S}$ to be $\boldsymbol{S}^G_{\twotwotwotwo}$, i.e., the set of all common-cause boxes of type {\twotwotwotwo}. Doing so, we define the following (type-independent) yield-based monotone, which we will denote by $M_{\CHSH}$: \begin{align}\begin{split}\label{eq:CHSHmonotonedefn} &M_{\CHSH}(R) \coloneqq M[\textup{CHSH-yield}, \boldsymbol{S}^G_{\twotwotwotwo}](R)\\ &= \max\limits_{R^{\star}\in\boldsymbol{S}^G_{\twotwotwotwo}} \left\{\CHSH(R^{\star})\,\;\text{ s.t. }\;R\conv R^{\star} \right\}.
\end{split}\end{align} Note that one can {\em always} find some $R^{\star}\in\boldsymbol{S}^G_{\twotwotwotwo}$ such that $R\conv R^{\star}$ regardless of the type or details of $R$, simply because free resources of type {\twotwotwotwo} may always be freely generated after discarding $R$. Hence, the value of this monotone is never less than 2, which is the maximum of the CHSH function when applied to the subset of free resources. If one applies this procedure to {\em any} of the eight variants of the CHSH function in Eq.~\eqref{eq:chshvariants0}, the monotones one thereby obtains all turn out to be equivalent to $M_{\CHSH}$. This follows from the fact that all variants of the CHSH function are interconvertible under LSO, and therefore the maximum of any one of them in an optimization over all LOSR operations is the same as that of any other, as noted in Proposition~\ref{prop:symmetrypartitioning}{\ref{prop:CHSHorbit}}. \noindent {\bf Monotone 2: The cost of a resource with respect to a set of noisy PR box resources, as measured by the CHSH function.} Our second monotone also involves optimizing the CHSH function, but it is a cost-based monotone, and the set of resources over which one optimizes is restricted to a particular one-parameter family of resources of type {\twotwotwotwo} (rather than the full set $\boldsymbol{S}^G_{\twotwotwotwo}$). To define this family, we need to highlight a particular resource in the free set, which we denote $L_{\textup{NPR}}^{\rm b}$.\footnote{The use of $L$ instead of $R$ when describing the resource $L_{\textup{NPR}}^{\rm b}$ is a nod to the conventional terminology wherein the classical common-cause boxes are often called the {\em local} boxes.
See the discussion in the introduction for why we explicitly avoid the local-nonlocal terminology here.} $L_{\textup{NPR}}^{\rm b}$ is defined as the uniform mixture of the PR box with the maximally mixed resource $L_{\varnothing}$, namely $L_{\textup{NPR}}^{\rm b}=\frac{1}{2} R_{\textup{PR}}+\frac{1}{2} L_{\varnothing}$; both of these resources are enumerated in Table~\ref{tab:genboxes2}. The superscript $\rm b$ in the notation $L_{\textup{NPR}}^{\rm b}$ denotes the fact that this resource sits on the boundary of the free set, namely, that it saturates the canonical CHSH inequality, $\CHSH(L_{\textup{NPR}}^{\rm b})=2$. The one-parameter family of resources defining our cost construction consists of the convex mixtures of $R_{\textup{PR}}$ and $L_{\textup{NPR}}^{\rm b}$. We denote the set of these by $\boldsymbol{C}_{\textup{NPR}}$. Formally, \begin{align} \boldsymbol{C}_{\textup{NPR}} &\coloneqq \{ C(\alpha) :\alpha \in [0,1]\},\\ \shortintertext{where} \label{eq:chainparametrization} C(\alpha)&\coloneqq\;\alpha\, R_{\textup{PR}}+(1{-}\alpha) L_{\textup{NPR}}^{\rm b}. \end{align} We use \enquote{$\boldsymbol{C}$} because the set of resources forms a chain (defined in Section~\ref{sec:oraclelimitations}) and \enquote{NPR} because each resource in the chain is a noisy version of the PR box. Geometrically, the chain $\boldsymbol{C}_{\textup{NPR}}$ describes a line segment of resources with endpoints $R_{\textup{PR}}$ and $L_{\textup{NPR}}^{\rm b}$, and $\alpha$ parametrizes the distance from $C(\alpha)$ to $L_{\textup{NPR}}^{\rm b}$ (the bottom of the chain).
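The construction of the chain is simple to reproduce numerically. The following sketch (ours) builds $L_{\textup{NPR}}^{\rm b}$ and $C(\alpha)$ as convex mixtures of conditional distributions, with the PR box taken in its standard parametrization $x\oplus y = s\wedge t$ (consistent with Table~\ref{tab:genboxes2}), and checks the CHSH values along the chain:

```python
# Canonical PR box and maximally mixed box, as conditional distributions P(x,y|s,t).
def pr_box(x, y, s, t):
    return 0.5 if (x ^ y) == (s & t) else 0.0

def mixed_box(x, y, s, t):
    return 0.25

def mix(p, box_a, box_b):
    # Convex mixture p*box_a + (1-p)*box_b, returned as another box.
    return lambda x, y, s, t: p * box_a(x, y, s, t) + (1 - p) * box_b(x, y, s, t)

def chsh(box):
    E = lambda s, t: sum((-1) ** (x ^ y) * box(x, y, s, t)
                         for x in range(2) for y in range(2))
    return E(0, 0) + E(1, 0) + E(0, 1) - E(1, 1)

L_npr_b = mix(0.5, pr_box, mixed_box)   # uniform mixture of R_PR and L_0
assert abs(chsh(L_npr_b) - 2) < 1e-12   # saturates the canonical CHSH inequality

def C(alpha):                           # the chain element C(alpha)
    return mix(alpha, pr_box, L_npr_b)

# CHSH varies linearly along the chain, from 2 at C(0) to 4 at C(1).
assert all(abs(chsh(C(a)) - (2 * a + 2)) < 1e-12 for a in (0.0, 0.25, 0.7, 1.0))
```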
To see that the elements of $\boldsymbol{C}_{\textup{NPR}}$ do indeed form a chain in the partial order, it suffices to note that one can move downwards (\emph{decreasing} $\alpha$) starting from any $C(\alpha)$ by mixing $C(\alpha)$ with $L^{\rm b}_{\textup{NPR}}$, but one cannot move upwards (\emph{increasing} $\alpha$) from any $C(\alpha)$, as doing so would require increasing the value of the monotone $M_{\rm CHSH}$. Table~\ref{tab:genboxes2} provides an explicit characterization of a generic resource on the chain, as well as its endpoints and the maximally-mixed free resource. { \begin{table*}[htb!] \centering {\setlength{\tabcolsep}{1.2ex} \begin{tabular}{|r|cccccccc|c|c|} \toprule & \(\Braket{A_0}\) & \(\Braket{A_1}\) & \(\Braket{B_0}\) & \(\Braket{B_1}\) & \(\Braket{A_0 B_0}\) & \(\Braket{A_1 B_0}\) & \(\Braket{A_0 B_1}\) & \(\Braket{A_1 B_1}\) & CHSH\\ \midrule \(R_{\textup{PR}}=C(1)\) & 0 & 0 & 0 & 0 & +1 & +1 & +1 & $-$1 & 4\\ \(L_{\text{NPR}}^{\rm b}= C(0)\) & 0 & 0 & 0 & 0 & $\nicefrac{+1}{2}$ & $\nicefrac{+1}{2}$ & $\nicefrac{+1}{2}$ & $\nicefrac{-1}{2}$ & 2\\ \(C(\alpha)\) & 0 & 0 & 0 & 0 & $\tfrac{\alpha+1}{2}$ & $\tfrac{\alpha+1}{2}$ & $\tfrac{\alpha+1}{2}$ & $\tfrac{-\alpha-1}{2}$ & $2\alpha{+}2$ \\\midrule \(L_{\varnothing}\) & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ \bottomrule \end{tabular}}\vspace{-1ex} \caption{\label{tab:genboxes2} An explicit description of the resources referenced in our definitions. } \end{table*}} Using this one-parameter family of resources, we define the following cost-based monotone, which we denote $M_{\textup{NPR}}$, \begin{align} \label{Malphadefn} M_{\textup{NPR}}(R) &\coloneqq M[\CHSH\textup{-cost},\boldsymbol{C}_{\textup{NPR}}](R)\\\nonumber &= \min\limits_{R^{\star}\in \boldsymbol{C}_{\textup{NPR}}} \left\{\CHSH(R^{\star})\,\;\text{ s.t. 
}\;R^{\star}\conv R\right\}, \end{align} where if for some $R$ there is no $R^{\star}\in \boldsymbol{C}_{\textup{NPR}}$ such that $R^{\star}\conv R$, then we define $M_{\textup{NPR}}(R) = \infty$. Critically, note that the CHSH function is an injective (one-to-one) mapping from points on the line segment $\boldsymbol{C}_{\textup{NPR}}$ to the real numbers, with \begin{align} {\CHSH}\big(C(\alpha)\big)=2\alpha{+}2. \end{align} Thus, the problem of minimizing the CHSH function over $R^{\star}\in \boldsymbol{C}_{\textup{NPR}}$ such that $R^{\star}\conv R$ is exactly the same as minimizing the function $2\alpha{+}2$ under the constraint $C(\alpha)\conv R$, that is, \begin{align} \label{Malpha2} M_{\textup{NPR}}(R) &= \min\limits_{\alpha \in [0,1]} \left\{ 2\alpha{+}2 \,\;\text{ s.t. }\;C(\alpha) \conv R\right\}. \end{align} For each variant $R_{\textup{PR},k}$ of the PR box, where $k\in \{0,\dots,7\}$, we can define the chain of noisy versions thereof, that is, $\boldsymbol{C}_{\textup{NPR},k} \coloneqq \{ C_k(\alpha) :\alpha \in [0,1]\}$ where $C_k(\alpha) \coloneqq\;\alpha\, R_{\textup{PR},k}+(1{-}\alpha) L_{\textup{NPR},k}^{\rm b}$, with $L_{\textup{NPR},k}^{\rm b}=\frac{1}{2} R_{\textup{PR},k}+\frac{1}{2} L_{\varnothing}$. One can of course define a cost-based monotone for each such chain.
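Restricted to resources on the chain itself, the minimization in Eq.~\eqref{Malpha2} admits a simple illustration. In the sketch below (ours), the chain-order criterion $C(\alpha)\conv C(\alpha')$ if and only if $\alpha \geq \alpha'$ (mixing with the bottom element moves one downwards, and moving upwards is forbidden) stands in for the general linear-program conversion test; the discrete grid of $\alpha$ values is an artifact of the illustration, not of the definition.

```python
def m_npr_on_chain(alpha0, grid=1000):
    """M_NPR evaluated at a chain element R = C(alpha0): minimize 2*alpha + 2
    over alpha such that C(alpha) -> C(alpha0).  Along the chain this
    conversion holds exactly when alpha >= alpha0, so the minimum is
    attained at alpha = alpha0."""
    feasible = [k / grid for k in range(grid + 1) if k / grid >= alpha0]
    return 2 * min(feasible) + 2

assert m_npr_on_chain(0.0) == 2.0   # bottom of the chain: free-boundary value
assert m_npr_on_chain(0.5) == 3.0
assert m_npr_on_chain(1.0) == 4.0   # the PR box itself
```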
However, all eight of these chains define the {\em same} monotone, because the local symmetry operations allow one to move among these, as a consequence of Proposition~\ref{prop:symmetrypartitioning}{\ref{prop:CHSHorbit}} and the fact that $L_{\varnothing}$ is stable under all {\twotwotwotwo}-type local symmetry operations.\footnote{As an aside, note that, unlike the cost with respect to the chain $\mathbf{C}_{\textup{NPR}}$, Eq.~\eqref{Malphadefn}, the cost with respect to the set $\boldsymbol{S}^G_{\twotwotwotwo}$ of \emph{all} resources of type {\twotwotwotwo}, as measured by the CHSH function, is utterly uninformative with regard to distinguishing the elements of $\boldsymbol{S}^G_{\twotwotwotwo}$. This is because the resource $R_{\text{PR},4}$ can be converted to any other {\twotwotwotwo}-type resource, and yet $\CHSH(R_{\text{PR},4})=-4$, the algebraic minimum of the canonical CHSH function. Consequently, the value of this CHSH-cost with respect to the set of all resources of type {\twotwotwotwo} is $-4$ for every resource. Since this monotone is constant on all resources in the scenario, it is completely uninformative.} \subsection{Closed-form expressions for $M_{\CHSH}$ and $M_{\rm NPR}$ for {\twotwotwotwo}-type resources}\label{sec:geometryproof} The definitions of $M_{\CHSH}$ and $M_{\rm NPR}$ both involve an optimization over a continuous set of resources. In this section, we derive closed-form expressions for these monotones for resources of type {\twotwotwotwo}. Consider first $M_{\CHSH}$. \begin{prop}\label{prop:eqaboveCHSH} For any free resource $R$ of type {\twotwotwotwo}, $M_{\CHSH}(R) =2$. For any nonfree resource $R$ of type {\twotwotwotwo}, there is a unique $k \in \{0,\dots,7\}$ for which $\operatorname{CHSH}_k(R) > 2$ and such that \begin{equation} \label{eqaboveCHSH} {M_{\CHSH}(R) = {\CHSH}_k(R)}.
\end{equation} Equivalently, each function $\CHSH_{k}$ is a monotone relative to the subset of {\twotwotwotwo}-type resources for which $\operatorname{CHSH}_k(R) \geq 2$. \end{prop} \begin{proof} We already noted in Section~\ref{twomonotones} that $M_{\CHSH}(R)=2$ for all resources $R$ that are free, so it suffices to consider the case of nonfree resources. As noted above, the fact that there is precisely one value of $k$ such that ${\CHSH}_k (R) > 2$ for a nonfree resource $R$ follows from the results in Ref.~\cite{Beirhorst2016Bell}. Thus, we must show that $M_{\CHSH}(R) = \operatorname{CHSH}_k(R)$ for this value of $k$. To prove this, we invoke Theorem 2.2 of Ref.~\cite{Beirhorst2016Bell}, which informs us that every resource $R$ which violates the $k$th ${\CHSH}$ inequality admits a convex decomposition in terms of the $k$th variant of the PR box and some free resource that saturates the $k$th ${\CHSH}$ inequality, denoted $L_k^{\rm b}$, such that ${R=\;\lambda\, R_{\textup{PR},k}+(1{-}\lambda)L_k^{\rm b}}$ for some $\lambda\in [0,1]$. Further, $\lambda$ is specified {\em uniquely} by the linearity of the ${\CHSH}$ functions and the fact that ${\CHSH_k(R_{\textup{PR},k})=4}$ and ${\CHSH_k(L_k^{\rm b})=2}$, which together imply that $\CHSH_k(R)={{\CHSH}_k \big(\lambda\, R_{\textup{PR},k}+(1{-}\lambda)L_k^{\rm b}\big)}={4\lambda+2(1{-}\lambda)}$. Again leveraging this unique decomposition together with linearity of the $\operatorname{CHSH}_k$ function and the linearity of {\LOSR} transformations, it follows that for any {\LOSR} operation $\tau$, we have $\CHSH_k(\tau\circ R)={\lambda\CHSH_k(\tau\circ R_{\textup{PR},k})+(1{-}\lambda)\CHSH_k(\tau\circ L_k^{\rm b})}$. Clearly ${\CHSH_k(\tau\circ R_{\textup{PR},k})\leq 4}$, since four is the algebraic maximum of the $\operatorname{CHSH}_k$ function, and ${\CHSH_k(\tau\circ L_k^{\rm b})\leq 2}$, since every {\LOSR} operation takes a free resource $L_k^{\rm b}$ to a free resource $L'_k$, for which $\CHSH_k(L'_k)\leq 2$.
For $R$ such that $\CHSH_k(R)>2$, then, it follows that free operations on $R$ cannot increase its ${\CHSH}_k$ value, and hence the maximum in Eq.~\eqref{eq:CHSHmonotonedefn} is achieved by $R$ itself. This proves Eq.~\eqref{eqaboveCHSH}. \end{proof} Using the closed-form expression for $M_{\CHSH}$, we can additionally provide closed-form expressions for the weight and robustness monotones introduced in Section~\ref{sec:othermonotones} for {\twotwotwotwo}-type resources: \begin{cor}\label{measurecor} For resources of type {\twotwotwotwo}, the nonlocal fraction and the robustnesses to mixing are related to $M_{\CHSH}$ as follows: \begin{subequations}\begin{align} M_{\textup{NF}}(R) &= \frac{M_{\CHSH}(R)-2}{2},\\ M_{\textup{RBST},\boldsymbol{L}}(R) &= \frac{M_{\CHSH}(R)-2}{M_{\CHSH}(R)+2},\\ M_{\textup{RBST}}(R)&= \frac{M_{\CHSH}(R)-2}{M_{\CHSH}(R)+4}. \end{align}\end{subequations} \end{cor} \begin{proof} The relationship of these distance measures to the extent by which the ${\CHSH}$ inequality is violated was derived in Appendix~E of Ref.~\cite{geometry2018}. We simply recast those results in terms of $M_{\CHSH}(R)$ instead of ${\CHSH}(R)$ by means of Proposition~\ref{prop:eqaboveCHSH}. \end{proof} The values of the four monotones $M_{\CHSH}(R)$, $M_{\textup{NF}}(R)$, $M_{\textup{RBST},\boldsymbol{L}}(R)$, and $M_{\textup{RBST}}(R)$ are therefore all expressible as strictly-increasing functions of one another when applied to resources of type {\twotwotwotwo}. That is, if any one of these monotones increases (respectively decreases) between a given pair of resources of type {\twotwotwotwo}, then \emph{all} of these monotones will similarly increase (respectively decrease) between that pair of resources. As we will focus on the {\twotwotwotwo} type below, and the three distance-function monotones are no more informative than $M_{\CHSH}$ in this case, we will not discuss them further.
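Since the relations in Corollary~\ref{measurecor} are simple rational functions of $M_{\CHSH}$, their strict monotonicity on the relevant range $M_{\CHSH}\in[2,4]$ is easy to verify numerically. The following sketch (ours, purely illustrative) does so:

```python
def m_nf(m):       # nonlocal fraction as a function of M_CHSH
    return (m - 2) / 2

def m_rbst_L(m):   # robustness to mixing with free resources of the same type
    return (m - 2) / (m + 2)

def m_rbst(m):     # robustness to mixing with arbitrary resources of the same type
    return (m - 2) / (m + 4)

# On a free resource (M_CHSH = 2) all three vanish; on the PR box (M_CHSH = 4)
# they reach their maxima over 2222-type resources.
assert m_nf(2) == m_rbst_L(2) == m_rbst(2) == 0.0
assert m_nf(4) == 1.0 and abs(m_rbst_L(4) - 1 / 3) < 1e-12 and m_rbst(4) == 0.25

# Each is strictly increasing in M_CHSH on [2, 4], so the four monotones
# order any pair of 2222-type resources identically.
grid = [2 + 0.01 * i for i in range(201)]
for f in (m_nf, m_rbst_L, m_rbst):
    vals = [f(m) for m in grid]
    assert all(a < b for a, b in zip(vals, vals[1:]))
```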
We now turn to providing a closed-form expression for $M_{\NPR}$ for resources of type {\twotwotwotwo}. We first recall some more details of the geometry of $\boldsymbol{S}^G_{\twotwotwotwo}$. Recall that we use the superscript $b$ to denote that a resource lies on the particular boundary of the free set that is defined by the CHSH inequality (and thus that it saturates this inequality). We further use the superscript ${bb}$ to denote that a resource both saturates the CHSH inequality and {\em additionally} lies on the boundary of the full polytope of resources, $\boldsymbol{S}^G_{\twotwotwotwo}$. The set $\boldsymbol{L}_k^{\rm b}$ of CHSH$_k$-inequality-saturating resources is 7-dimensional, and the set $\boldsymbol{L}_k^{\rm bb}$ of CHSH$_k$-inequality-saturating resources on the boundary of the full polytope $\boldsymbol{S}^G_{\twotwotwotwo}$ is 6-dimensional.\footnote{$\boldsymbol{L}^{\rm b}$ is a facet of the 8-dimensional {\twotwotwotwo} local polytope, and facets of polytopes are always one dimension lower than the dimension of the polytope itself. A resource is within $\boldsymbol{L}_k^{\rm bb}$ if it is both a member of the facet defined by the CHSH$_k$ inequality and also a member of some other facet defined by a positivity inequality. The regions defined by the intersection of adjacent facets are generally termed `ridges', and a ridge always has dimensionality $d-2$, where $d$ is the dimension of the polytope. $\boldsymbol{L}_k^{\rm bb}$ is the collection of all eight ridges adjacent to the $\boldsymbol{L}_k^{\rm b}$ facet. Equivalently, $R\in \boldsymbol{L}_k^{\rm bb}$ if and only if $R$ can be convexly decomposed as a mixture over seven-or-fewer (out of eight) deterministic boxes which saturate the CHSH inequality.
Each possible size-seven subset of CHSH-inequality-saturating deterministic boxes defines one of the eight 6-dimensional ridges comprising $\boldsymbol{L}_k^{\rm bb}$.} It follows that $\boldsymbol{L}_k^{\rm bb} \subseteq \boldsymbol{L}_k^{\rm b}$. \begin{prop}\label{geom} For any free resource $R$ of type {\twotwotwotwo}, $M_{\rm NPR}(R) =2$. For any nonfree resource $R$ of type {\twotwotwotwo}, there is a unique $k \in \{0,\dots,7\}$ for which $\operatorname{CHSH}_k(R) > 2$. Within this region, if $R\in\boldsymbol{C}_{\textup{NPR},k}$, then we have simply $M_{\textup{NPR}}(R) = \CHSH_k(R)$. If, on the other hand, $R\not\in\boldsymbol{C}_{\textup{NPR},k}$, we have $$M_{\textup{NPR}}(R)=2\alpha{+}2,$$ where $\alpha$ is the value appearing in the decomposition $R=\;\gamma\, L_{R}^{\rm bb}+(1{-}\gamma) C_k(\alpha)$, where $C_k(\alpha) \in \boldsymbol{C}_{\textup{NPR},k}$, $L_{R}^{\rm bb} \in \boldsymbol{L}_k^{\rm bb}$ and $\gamma \in [0,1]$. This value of $\alpha$ is unambiguous (and computable from simple geometry) because there exists a {\em unique} resource $L_{R}^{\rm bb} \in \boldsymbol{L}_k^{\rm bb}$ and a unique choice of $\gamma \in [0,1]$ and of $\alpha \in [0,1]$ such that $R=\;\gamma\, L_{R}^{\rm bb}+(1{-}\gamma) C_k(\alpha)$. \end{prop} \noindent The (unique) relevant decomposition is shown in Fig.~\ref{fig:TwoDifferentDecompositions} (for the case where $k=0$). The proof of this proposition is given in Appendix~\ref{sec:geom}. 
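For resources in the plane depicted in Fig.~\ref{fig:TwoDifferentDecompositions}, the unique decomposition of Proposition~\ref{geom} can be read off from barycentric weights. A minimal sketch (our parametrization, an assumption for illustration; it takes $R$ to be specified by its weights with respect to the three vertices $L_{R}^{\rm bb}$, $R_{\textup{PR}}$, and $L_{\textup{NPR}}^{\rm b}$):

```python
def decompose(w_lbb, w_pr, w_b):
    """Recover (gamma, alpha) from the barycentric weights of R with
    respect to the vertices (L_R^bb, R_PR, L_NPR^b), i.e.
    R = w_lbb*L_R^bb + w_pr*R_PR + w_b*L_NPR^b, weights summing to 1.
    Matching against R = gamma*L_R^bb + (1-gamma)*C(alpha), with
    C(alpha) = alpha*R_PR + (1-alpha)*L_NPR^b, forces gamma = w_lbb and
    (1-gamma)*alpha = w_pr, so the solution is unique whenever w_lbb < 1."""
    assert abs(w_lbb + w_pr + w_b - 1) < 1e-12 and w_lbb < 1
    return w_lbb, w_pr / (w_pr + w_b)

def m_npr(w_lbb, w_pr, w_b):
    """The Proposition's closed form M_NPR(R) = 2*alpha + 2."""
    _, alpha = decompose(w_lbb, w_pr, w_b)
    return 2 * alpha + 2
```

For example, the midpoint-style weights $(1/2, 1/4, 1/4)$ give $\gamma = 1/2$, $\alpha = 1/2$, and hence $M_{\rm NPR} = 3$, while the PR box itself (weights $(0,1,0)$) gives $M_{\rm NPR} = 4$.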
\begin{figure} \begin{center} \begin{tikzpicture}[scale=0.7] \path[draw, ultra thick] (-30:4) -- (90:4) -- (210:4) -- cycle; \node[draw,shape=circle,fill,scale=.4, label={[label distance=0.1cm]0:\large{${L_{R}^{\rm bb}}$}}] at (-30:4) {}; \node[draw,shape=circle,fill,scale=.4, label={[label distance=0.1cm]90:\large{${R_{\textup{PR}}}$}}] at (90:4) {}; \node[draw,shape=circle,fill,scale=.4, label={[label distance=0.1cm]180:\large{${L_{\textup{NPR}}^{b}}$}}] at (210:4) {}; \path[name path=chsh] (-4,0) -- (4,0); \path[name path=nrp, shift=(-30:4)] (0:0) -- (150:7); \path [name intersections={of=chsh and nrp, by=R}]; \node [draw,shape=circle,fill,scale=.4, label={[label distance=-0.1cm]60:${\textbf{\textit{R}}}$}] at (R) {}; \path[name path=ls] (210:4) -- (90:4); \path [name intersections={of=ls and nrp, by=C}]; \node [draw,shape=circle,fill,scale=.4, label={[label distance=-0.1cm]135:${C(\alpha)}$}] at (C) {}; \draw [decoration={brace, mirror, raise=0.1cm }, decorate] (C) -- (R); \node [label={[label distance=0cm, rotate=-30]-90:$\boldsymbol{\gamma}\phantom{1}$}] at ($(C)!0.5!(R)$) {}; \draw [decoration={brace, mirror, raise=0.1cm }, decorate] (R) -- (-30:3.7); \node [label={[label distance=0cm, rotate=-30]-90:$\boldsymbol{(1-\gamma)}\phantom{111}$}] at ($(-30:4)!0.5!(R)$) {}; \draw [decoration={brace, mirror, raise=0.1cm }, decorate] (90:4) -- (C); \node [label={[label distance=0.1cm, rotate=60]90:$\boldsymbol{(1-\alpha)}\phantom{1}$}] at ($(C)!0.5!(90:4)$) {}; \draw [decoration={brace, mirror, raise=0.1cm }, decorate] (C) -- (210:4); \node [label={[label distance=0.1cm, rotate=60]90:$\boldsymbol{\alpha}\phantom{1}$}] at ($(C)!0.5!(210:4)$) {}; \path[clip] (-30:4) -- (90:4) -- (210:4) -- cycle; \draw[thick, shift=(-30:4)] (0:0) -- (150:7); \end{tikzpicture} \end{center} \caption[]{ A depiction of a family of resources parametrized by $\alpha$ and $\gamma$, and the unique decomposition of a particular point $R(\alpha,\!\gamma)$ in terms of a point $C(\alpha)$ on the 
chain $\boldsymbol{C}_{\textup{NPR}}$ and a (unique) CHSH-saturating resource $L_{R}^{\rm bb}$ that lies in the boundary of the set of GPT-realizable common-cause boxes. Note that the parameters $\alpha$ and $(1-\alpha)$ indicate the fraction of the full line segment attributed to each sub-segment, and similarly with $\gamma$ and $(1-\gamma)$. } \label{fig:TwoDifferentDecompositions} \end{figure} \FloatBarrier \section{Properties of the pre-order of common-cause boxes}\label{sec:results} We now leverage the two monotones just introduced to prove multiple interesting features of the pre-order of common-cause boxes. \subsection{Inferring global properties of the pre-order }\label{sec:lessonsfromtwomonotones} Important properties of the pre-order over all resources can already be learned by considering just these two monotones ($M_{\rm CHSH}$ and $M_{\rm NPR}$) and just resources of type {\twotwotwotwo}; indeed, just a specific kind of two-parameter family of resources within this set. The kind of two-parameter family that we consider, denoted ${\boldsymbol{S}_{\twotwotwotwo}^{L^{\rm bb}_{\star}} \subset \boldsymbol{S}^G_{\twotwotwotwo}}$, is \begin{equation}\label{Rset} \boldsymbol{S}_{\twotwotwotwo}^{L^{\rm bb}_{\star}} \coloneqq \{ R(\alpha,\!\gamma) : \alpha \in [0,1],\; \gamma \in [0,1]\}, \end{equation} where \begin{align}\label{eq:def2paramfamily} R(\alpha,\!\gamma)\coloneqq\; \gamma\, L^{\rm bb}_{\star}+(1{-}\gamma)C(\alpha), \end{align} with $C(\alpha) \in \boldsymbol{C}_{\textup{NPR}}$. There are many such families, one for each choice of a resource $L^{\rm bb}_{\star}\in \boldsymbol{L}^{\rm bb}$.
Each such family $\boldsymbol{S}_{\twotwotwotwo}^{L^{\rm bb}_{\star}}$ is the convex hull of the chain $\boldsymbol{C}_{\textup{NPR}}$ and the associated point $L^{\rm bb}_{\star}$, i.e., \begin{align} \boldsymbol{S}_{\twotwotwotwo}^{L^{\rm bb}_{\star}} = {\operatorname{ConvexHull}}{\left(\;\{L^{\rm bb}_{\star},\;R_{\textup{PR}},\;L_{\textup{NPR}}^{\rm b}\}\;\right)}. \end{align} Evaluating $M_{\textup{NPR}}$ for resources in this family is straightforward, thanks to Proposition~\ref{geom}. The proposition directly implies that for any $R(\alpha,\!\gamma) \in \boldsymbol{S}_{\twotwotwotwo}^{L^{\rm bb}_{\star}}$, \begin{align}\label{eq:alphacostonfam} M_{\rm NPR}\big(R(\alpha,\!\gamma)\big) = 2\alpha{+}2. \end{align} We now consider the value of $M_{\CHSH}$ for resources in this family. Noting that ${{\CHSH}\big(R(\alpha,\!\gamma)\big)\geq 2}$ for all ${R(\alpha,\!\gamma)\in \boldsymbol{S}_{\twotwotwotwo}^{L^{\rm bb}_{\star}}}$, Proposition~\ref{prop:eqaboveCHSH} states that ${M_{\CHSH}\big(R(\alpha,\!\gamma)\big)={\CHSH}\big(R(\alpha,\!\gamma)\big)}$. \noindent Substituting the definition of $C(\alpha)$ from Eq.~\eqref{eq:chainparametrization} into Eq.~\eqref{eq:def2paramfamily}, we obtain \begin{align*} R(\alpha,\!\gamma)={\gamma\, L_{\star}^{\rm bb} \!+ (1{-}\gamma)\alpha\, R_{\textup{PR}}+ (1{-}\gamma)(1{-}\alpha)L_{\textup{NPR}}^{\rm b}}. \end{align*} Recalling that the CHSH function is linear and that it satisfies ${\CHSH(L^{\rm b})=2}$ for all ${L^{\rm b} \in \boldsymbol{L}^{\rm b}}$ and $\CHSH(R_{\rm PR})=4$, it follows that \begin{align} \nonumber M_{\CHSH}\big(R(\alpha,\!\gamma)\big) &=\,{\CHSH}{\big(R(\alpha,\!\gamma)\big)} \\\nonumber &= 2\gamma + 4(1{-}\gamma)\alpha+ 2(1{-}\gamma)(1{-}\alpha) \\&= 2\alpha(1{-}\gamma)+2. \label{MCHSHfam} \end{align} \begin{figure}[b!]
\begin{center} \subfigure[\label{fig:twoMTrianglePlot}] { \centering \begin{tikzpicture}[scale=0.7] \path[draw, ultra thick] (-30:4) -- (90:4) -- (210:4) -- cycle; \node[draw,shape=circle,fill,scale=.4, label={[label distance=0.1cm]0:\large{${L_{\star}^{\rm bb}}$}}] at (-30:4) {}; \node[draw,shape=circle,fill,scale=.4, label={[label distance=0.1cm]90:\large{${R_{\textup{PR}}}$}}] at (90:4) {}; \node[draw,shape=circle,fill,scale=.4, label={[label distance=0.1cm]180:\large{${L_{\textup{NPR}}^{\rm b}}$}}] at (210:4) {}; \begin{pgfonlayer}{myback} \path[clip] (-30:4) -- (90:4) -- (210:4) -- cycle; \foreach \y in{-3,-2.5,...,4} \draw[name path=c.\y, color=jflyBlue] (-4,\y) -- (4,\y); \foreach \y in{180,175,...,120} \draw[name path=s.\y, shift=(-30:4), color=jflyVermillion] (0:0) -- (\y:7); \end{pgfonlayer} \path[name path=sr1, shift=(-30:4)] (0:0) -- (135:7); \path[name path=cr1] (-4,1.5) -- (4,1.5); \path [name intersections={of=cr1 and sr1, by=R1}]; \node [draw,shape=circle,fill,scale=.4, label={[label distance=-0.1cm]60:${R_1}$}] at (R1) {}; \path[name path=sr2, shift=(-30:4)] (0:0) -- (130:7); \path[name path=cr2] (-4,0) -- (4,0); \path [name intersections={of=cr2 and sr2, by=R2}]; \node [draw,shape=circle,fill,scale=.4, label={[label distance=-0.1cm]240:${R_2}$}] at (R2) {}; \path[name path=sr3, shift=(-30:4)] (0:0) -- (145:7); \path[name path=cr3] (-4,0.5) -- (4,0.5); \path [name intersections={of=cr3 and sr3, by=R3}]; \node [draw,shape=circle,fill,scale=.4, label={[label distance=-0.1cm]240:${R_3}$}] at (R3) {}; \end{tikzpicture} } \subfigure[\label{fig:twoMMonotoneAxes}] { \centering \begin{tikzpicture}[scale=1] \path[draw, ultra thick] (0,0) -- (0,4) -- (4,4) -- (4,0) ; \path[draw, ultra thick, dashed] (0,0) -- (4,0) ; \node [draw,shape=circle,fill,scale=.4, label={[label distance=-0.1cm]225:${2}$}] at (0,0) {}; \node [draw=none,scale=.4, label={[label distance=-0cm]-90:${4}$}] at (4,0) {}; \node [draw=none,scale=.4, label={[label distance=-0cm]180:${4}$}] at 
(0,4) {}; \node [draw,shape=circle,fill,scale=.4, label={[label distance=-0.1cm]45:${R_{PR}}$}] at (4,4) {}; \node at (2,-0.5) {${M_{\textup{NPR}}}$}; \node at (-1, 2) {${M_{\textup{CHSH}}}$}; \path[pattern=north west lines] (0,0) -- (0,4) -- (4,4)--cycle; \path[pattern=north east lines] (0,0) -- (0,4) -- (4,4)--cycle; \node [draw,shape=circle,fill,scale=.4, label={[label distance=-0.1cm]45:${R_1}$}] at (3,2) {}; \node [draw,shape=circle,fill,scale=.4, label={[label distance=-0.1cm]225:${R_2}$}] at (3.33,1) {}; \node [draw,shape=circle,fill,scale=.4, label={[label distance=-0.1cm]225:${R_3}$}] at (2.33,1.33) {}; \begin{pgfonlayer}{myback} \path[clip] (0,0) -- (4,0) -- (4,4) -- cycle; \foreach \y in{0,0.334,...,4} \draw[name path=c.\y, color=jflyBlue] (\y,\y) -- (4,\y); \foreach \y in{0,0.334,...,4} \draw[name path=c.\y, color=jflyVermillion] (\y,0) -- (\y,\y);; \end{pgfonlayer} \end{tikzpicture}} \end{center} \caption{\subref{fig:twoMTrianglePlot} A plot of the 2-parameter family of resources $\boldsymbol{S}_{\twotwotwotwo}^{L^{\rm bb}_{\star}}$ (defined in Eq.~\eqref{Rset}), with values for $M_{\CHSH}$ depicted by a set of level curves (light blue, horizontal lines) and values for $M_{\rm NPR}$ depicted by another set of level curves (orange, diagonal lines). \subref{fig:twoMMonotoneAxes} A plot of the same 2-parameter family of resources, but in a Cartesian coordinate system with $M_{\CHSH}$ and $M_{\rm NPR}$ as the coordinates. Because all resources on the bottom border in plot \subref{fig:twoMTrianglePlot} are free, these all map to a single point in \subref{fig:twoMMonotoneAxes}, namely $(M_{\CHSH},M_{\rm NPR})=(2,2)$. The fact that there are no resources with $M_{\CHSH}=2$ and $M_{\rm NPR} > 2$ is represented by the use of a dashed line at the base of the plot in \subref{fig:twoMMonotoneAxes}. 
Similarly, the hatched region in \subref{fig:twoMMonotoneAxes} describes joint values of the two monotones that are not achieved by any resource in the family, as $M_{\CHSH}(R)\leq M_{\NPR}(R)$ for all $R$. Pictured in both plots are three illustrative resources. The points $R_1$ and $R_2$ are incomparable, as are $R_3$ and $R_2$, while $R_1$ and $R_3$ are strictly ordered. This implies that the incomparability relation in the pre-order is not transitive. } \label{fig:twoM} \end{figure} In Fig.~\ref{fig:twoMTrianglePlot}, we plot some of the level curves\footnote{A level curve of a function $f$ is a set of points that yield the same value of $f$; e.g., $\{ x\,|\,f(x){=}c\}$.} for $M_{\rm NPR}$ and $M_{\CHSH}$ over any such two-parameter family of resources. The level curve defined by ${M_{\rm NPR}(R)=2\alpha{+}2}$ is a diagonal line in Fig.~\ref{fig:twoMTrianglePlot}, extending from the (implicit) point ${C(\alpha)}$ to the point ${L_{\star}^{\rm bb}}$. The level curve defined by ${M_{\CHSH}(R)=2\alpha(1{-}\gamma)+2}$ is a horizontal line in Fig.~\ref{fig:twoMTrianglePlot}, extending between the two implicit points ${C(\alpha)}$ and ${\alpha\, R_{\textup{PR}}+(1{-}\alpha) L_{\star}^{\rm bb}}$. From these level curves, we can immediately deduce a number of features of the pre-order of resources. In particular, we consider those features of the pre-order that were defined in Section~\ref{sec:oraclelimitations}. First, we see that the pre-order is locally infinite, simply by virtue of the fact that there exist chains which are represented by \emph{continuous} sets of distinct resources, such as the chain $\boldsymbol{C}_{\textup{NPR}}$. The interval between any two resources in such a continuous chain contains a continuous infinity of inequivalent resources. Second, one can also see that the pre-order of resources is not totally pre-ordered. 
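Both conclusions can be checked numerically from the level-curve formulas. In the following Python sketch, the correlator representatives chosen for $R_{\textup{PR}}$, $L_{\textup{NPR}}^{\rm b}$, and $L^{\rm bb}_{\star}$, as well as the coordinates of the three sample points, are illustrative assumptions (chosen to mimic the qualitative arrangement in Fig.~\ref{fig:twoMTrianglePlot}), not the paper's specific choices:

```python
def mix(terms):
    """Convex mixture of boxes; a box is a dict {(x, y): <A_x B_y>} of
    four correlators, and terms is a list of (box, weight) pairs."""
    return {k: sum(w * b[k] for b, w in terms)
            for k in [(0, 0), (0, 1), (1, 0), (1, 1)]}

def chsh(E):
    # the standard CHSH functional (one of the eight symmetry variants)
    return E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)]

# Illustrative correlator representatives (assumptions): a PR box, a
# CHSH-saturating free box standing in for L_NPR^b, and a boundary
# CHSH-saturating box standing in for L_star^bb.
R_PR   = {(0, 0): 1.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): -1.0}
L_b    = {(0, 0): 1.0, (0, 1): 1.0, (1, 0): 1.0, (1, 1): 1.0}
L_star = {(0, 0): 1.0, (0, 1): 1.0, (1, 0): 0.0, (1, 1): 0.0}

def R(alpha, gamma):
    """R(alpha, gamma) = gamma*L_star + (1-gamma)*C(alpha)."""
    C_alpha = mix([(R_PR, alpha), (L_b, 1 - alpha)])
    return mix([(L_star, gamma), (C_alpha, 1 - gamma)])

def m_chsh(alpha, gamma):   # level-curve formula for M_CHSH
    return 2 * alpha * (1 - gamma) + 2

def m_npr(alpha, gamma):    # level-curve formula for M_NPR
    return 2 * alpha + 2

def incomparable(p, q):
    """Witness incomparability: the two monotones order p and q oppositely."""
    dn = m_npr(*p) - m_npr(*q)
    dc = m_chsh(*p) - m_chsh(*q)
    return dn * dc < 0
```

Note that `incomparable` returning `False` only means that this pair of monotones does not forbid conversion in one direction; it does not by itself certify that the conversion is achievable.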
For instance, the two resources $R_1$ and $R_2$ in Fig.~\ref{fig:twoMTrianglePlot} are incomparable, as witnessed by the fact that \(R_1\) has a larger value of $M_{\CHSH}$ than \(R_2\) does, but \(R_2\) has a larger value of $M_{\rm NPR}$ than \(R_1\) does. More generally, the level curves for the two monotones allow one to immediately construct (by inspection) a continuous infinity of such incomparable pairs. Furthermore, the binary relation of incomparability is not transitive, so the partial order is not weak. This can be seen from the example of the three resources in Fig.~\ref{fig:twoMTrianglePlot}: $R_1$ and $R_2$ are incomparable (as just argued) and $R_3$ and $R_2$ are incomparable (by the same logic), yet $R_1$ and $R_3$ are \emph{comparable}, as evidenced by the fact that one can obtain $R_3$ from $R_1$ by mixing $R_1$ with an appropriately chosen free resource lying on the line defined by the points $R_1$ and $R_3$. In addition, one can also see that the height of the pre-order is infinite. It suffices to note that the chain $\boldsymbol{C}_{\textup{NPR}}$ is totally ordered and contains a continuum of elements. The width of the pre-order is also infinite. Consider, for example, the line segment defined by the points $R_1$ and $R_2$ in Fig.~\ref{fig:twoMTrianglePlot}. This subset of resources constitutes an antichain, as every resource in it is incomparable to every other: each resource has a higher $M_{\rm NPR}$ value and lower $M_{\CHSH}$ value than any of its neighbors towards the left, and has a lower $M_{\rm NPR}$ value and higher $M_{\CHSH}$ value than any of its neighbors towards the right. Because this subset also forms a continuum, it follows that the width of the pre-order is infinite. \begin{figure}[b!]
\begin{center} \subfigure[\label{fig:completeTrianglePlot}] { \centering \begin{tikzpicture}[scale=0.7] \path[draw, ultra thick] (-30:4) -- (90:4) -- (210:4) -- cycle; \node[draw,shape=circle,fill,scale=.4, label={[label distance=0.1cm]0:\large{${L_{\star}^{\rm bb}}$}}] at (-30:4) {}; \node[draw,shape=circle,fill,scale=.4, label={[label distance=0.1cm]90:\large{${R_{\textup{PR}}}$}}] at (90:4) {}; \node[draw,shape=circle,fill,scale=.4, label={[label distance=0.1cm]180:\large{${L_{\textup{NPR}}^{\rm b}}$}}] at (210:4) {}; \path[name path=ls] (210:4) -- (90:4); \path[name path=chsh] (-4,0) -- (4,0); \path[name intersections={of=chsh and ls, by=lp}]; \path[name path=nrp, shift=(-30:4)] (0:0) -- (150:7); \path[name intersections={of=nrp and ls, by=lup}]; \path[clip] (-30:4) -- (90:4) -- (210:4) -- cycle; \draw[thick] (-4,0) -- (4,0); \path[clip] (-30:4) -- (90:4) -- (210:4) -- cycle; \draw[thick, shift=(-30:4)] (0:0) -- (150:7); \path [name intersections={of=chsh and nrp, by=R}]; \node [draw,shape=circle,fill,scale=.4, label={[label distance=-0.1cm]45:${R}$}] at (R) {}; \path[name path=rs] (-30:4) -- (90:4); \path[name intersections={of=chsh and rs, by=rp}]; \begin{pgfonlayer}{myback} \filldraw[draw=black, fill=jflyYellow, opacity=0.5] (-30:4) -- (R) -- (rp) -- cycle; \filldraw[draw=black, fill=jflyYellow, opacity=0.5] (-30:4) -- (R) -- (lp) -- (lup); \filldraw[draw=black, fill=jflyBlue, opacity=1] (-30:4) -- (R) -- (lp) -- (210:4) -- cycle; \filldraw[draw=black, fill=jflySkyBlue, opacity=0.4] (R) -- (lup) -- (90:4) -- (rp) -- cycle; \end{pgfonlayer} \end{tikzpicture}} \subfigure[\label{fig:completeMonotoneAxes}] { \centering \begin{tikzpicture}[scale=1] \path[draw, ultra thick] (0,0) -- (0,4) -- (4,4) -- (4,0) ; \path[draw, ultra thick, dashed] (0,0) -- (4,0) ; \node [draw,shape=circle,fill,scale=.4, label={[label distance=-0.1cm]225:${2}$}] at (0,0) {}; \node [draw=none,scale=.4, label={[label distance=-0cm]-90:${4}$}] at (4,0) {}; \node [draw=none,scale=.4, 
label={[label distance=-0cm]180:${4}$}] at (0,4) {}; \node [draw,shape=circle,fill,scale=.4, label={[label distance=-0.1cm]45:${R_{PR}}$}] at (4,4) {}; \node at (2,-0.5) {${M_{\textup{NPR}}}$}; \node at (-1, 2) {${M_{\textup{CHSH}}}$}; \path[pattern=north west lines] (0,0) -- (0,4) -- (4,4)--cycle; \path[pattern=north east lines] (0,0) -- (0,4) -- (4,4)--cycle; \node [draw,shape=circle,fill,scale=.4, label={[label distance=-0.1cm]45:${R}$}] (R) at (2.75,1.25) {}; \draw[thick] (1.25,1.25) -- (4,1.25); \draw[thick] (2.75,0) -- (2.75,2.75); \path[draw, ultra thick] (0,0) -- (4,4); \begin{pgfonlayer}{myback} \fill[fill=jflyYellow, opacity=0.5] (R) rectangle (4,0); \path[clip] (0,0) -- (4,0) -- (4,4) -- cycle; \fill[fill=jflyYellow, opacity=0.5] (R) rectangle (1.25,2.75); \fill[fill=jflyBlue, opacity=1] (R) rectangle (0,0); \fill[fill=jflySkyBlue, opacity=0.4] (R) rectangle (4,4); \end{pgfonlayer} \end{tikzpicture}} \end{center} \caption[]{ \subref{fig:completeTrianglePlot} and \subref{fig:completeMonotoneAxes} provide the same pair of depictions of the 2-parameter family of resources $\boldsymbol{S}_{\twotwotwotwo}^{L^{\rm bb}_{\star}}$ as were introduced in Fig.~\ref{fig:twoM}. We consider a particular resource $R$. In \subref{fig:completeTrianglePlot}, we depict the level curves of $M_{\rm CHSH}$ (horizontal) and $M_{\rm NPR}$ (angled) which include $R$. By monotonicity of the two monotones, $R$ cannot be freely converted into any resource in the upper light-blue region or in the pair of yellow regions. As we prove in Section~\ref{sec:ourcompleteness}, the two monotones are complete for this subset, which is equivalent to the fact that an arbitrary resource $R$ {\em can} be freely converted to any resource in the lower dark-blue region; namely, the entire region wherein $M_{\rm CHSH}$ and $M_{\rm NPR}$ do not have a value greater than the one they have on $R$. 
Resources in the upper light-blue region can be converted to $R$, while resources in the pair of yellow regions are incomparable to $R$. } \label{fig:complete} \end{figure} Also by inspection, for a given nonfree resource, there is a continuum of chains and antichains which contain it. In order to see this, let us first introduce some terminology. Within the plane of the two-parameter family of resources, depicted in Fig.~\ref{fig:completeTrianglePlot}, we refer to a direction from a given point $R$ as an \enquote{antichain direction} relative to that point, if this direction lies {\em strictly clockwise} from the direction defined by the $M_{\rm CHSH}$ level curve that passes through $R$ and strictly counterclockwise from the direction defined by the $M_{\rm NPR}$ level curve that passes through $R$. Otherwise, it is called a \enquote{chain direction}. Thus an antichain direction relative to $R$ is defined by any vector originating in $R$ and terminating at a point strictly within either yellow region in Fig.~\ref{fig:completeTrianglePlot}, while a chain direction relative to $R$ is defined by any vector originating in $R$ and terminating in either blue region. A one-dimensional curve of resources in this subset defines a chain (antichain) if and only if at every point on the curve, the tangent to the curve at that point is aimed\footnote{More precisely: a line defines two opposing directions, and both of these directions will point in a chain direction, or both will point in an antichain direction.} in a chain direction (antichain direction) relative to that point. A final lesson we learn from these two monotones is that the set of all monotones induced (via Eq.~\eqref{eq:CHSHmonotonedefn}) by the facet-defining Bell inequalities for a given type does not yield a complete set of monotones for the resources of that type.
We have shown that the set of resources is not totally pre-ordered, and as stated in Section~\ref{costandyield}, the eight facet-defining Bell inequalities for the {\twotwotwotwo}-scenario induce only a single monotone: $M_{\CHSH}$. Since no single monotone can be complete for a pre-order of resources that includes incomparable resources, it follows immediately that the monotones induced by the facet-defining Bell inequalities for the {\twotwotwotwo} type are not sufficient for fully characterizing the pre-order of resources of that type. Since such resources trivially can be lifted to any nontrivial Bell scenario (where the lifted resource will violate no facet-defining Bell inequalities other than CHSH), it follows that: \begin{prop}\label{prop:beyondbellviolation} The pre-ordering of resources relative to {\LOSR} operations cannot be resolved solely using the degree of violations of facet-defining Bell inequalities. \end{prop} \begin{proof} By definition, any complete set of monotones allows one to compute the values of any other monotone from them~\footnote{If one has a set of monotones $\{M_i\}_i$ which is complete, then for a given resource $R$, the set of values $\{M_i(R)\}_i$ is sufficient for (in principle) computing the value $M(R)$ of any monotone $M$ on resource $R$. First, one can deduce the equivalence class of $R$ from $\{M_i(R)\}_i$; this is possible by the completeness of the set $\{M_i\}_i$. Then, one can select any resource $R'$ from the equivalence class of $R$ and can evaluate $M(R')$ for the given monotone $M$. Because a monotone must assign the same value to all resources within an equivalence class, it holds that $M(R')=M(R)$. (Note that our argument here does not imply that one can {\em in practice} compute the value $M(R)$; this computation might involve solving a hard problem.)}. 
However, although the value of $M_{\CHSH}(R)$ can be computed (for any type-{\twotwotwotwo} resource $R$) from the eight values of the facet-defining ${\CHSH}$ functionals in Eq.~\eqref{eq:chshvariants0}, the value of $M_{\rm NPR}(R)$ cannot. This implies that any complete set of monotones must include at least one monotone (like $M_{\rm NPR}(R)$) which depends on information beyond the values of the eight ${\CHSH}$ functionals. \end{proof} Proposition~\ref{prop:beyondbellviolation} shows that the nonclassicality of common-cause processes is not completely characterized by the monotones that are naturally associated to facet-defining Bell functionals, despite the fact that such Bell functionals {\em are} sufficient to witness whether or not a resource is nonclassical. \subsection{Incompleteness of the two monotones} \label{sec:ourincompleteness} In this section, we prove that the two-element set of monotones $\{ M_{\CHSH},M_{\rm NPR}\}$ is not a complete set. We do so by showing that it is not complete even for resources of type {\twotwotwotwo}. A simple proof is as follows. Consider resources of the form $R={\tfrac{1}{2} L^{\rm bb}_{\star}+\tfrac{1}{2}C(\text{½})}$ for different choices of the CHSH-saturating resource $L^{\rm bb}_{\star}$ that lies in the boundary of $\boldsymbol{S}^G_{\twotwotwotwo}$. We will show that there are pairs of resources of this form which are strictly ordered, and other pairs of resources of this form which are incomparable. These facts cannot be captured by the two monotones, which see all resources of this form as equivalent, with $M_{\rm NPR} = 3$ and $M_{\CHSH} =2.5$. 
\begin{table*}[ht] \centering {\setlength{\tabcolsep}{1.2ex} \begin{tabular}{|c|cccccccc|c|c|} \toprule & \(\Braket{A_0}\) & \(\Braket{A_1}\) & \(\Braket{B_0}\) & \(\Braket{B_1}\) & \(\!\Braket{A_0 B_0}\!\) & \(\!\Braket{A_1 B_0}\!\) & \(\!\Braket{A_0 B_1}\!\) & \(\!\Braket{A_1 B_1}\!\) & \(M_{\CHSH}\) & \(M_{\rm NPR}\)\\ \midrule \(L_1^{\rm bb}\) & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 2 & 2\\ \(L_2^{\rm bb}\) & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 0 & 2 & 2\\ \(L_3^{\rm bb}\) & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 0 & 2 & 2\\ \midrule ${C(\text{½})}$ & 0 & 0 & 0 & 0 & \(\nicefrac{3}{4}\) & \(\nicefrac{3}{4}\) & \(\nicefrac{3}{4}\) & \(\nicefrac{-3}{4}\) & \(3\) & 3\\ \midrule $\frac{1}{2}L_1^{\rm bb}+\frac{1}{2}C(\text{½})$ & \(\nicefrac{1}{2}\) & \(\nicefrac{1}{2}\) & \(\nicefrac{1}{2}\) & \(\nicefrac{1}{2}\) & \(\nicefrac{7}{8}\) & \(\nicefrac{7}{8}\) & \(\nicefrac{7}{8}\) & \(\nicefrac{1}{8}\) & \(\nicefrac{5}{2}\) & 3\\ $\frac{1}{2}L_2^{\rm bb}+\frac{1}{2}C(\text{½})$ & 0 & 0 & 0 & 0 & \(\nicefrac{7}{8}\) & \(\nicefrac{7}{8}\) & \(\nicefrac{3}{8}\) & \(\nicefrac{-3}{8}\) & \(\nicefrac{5}{2}\) & 3\\ $\frac{1}{2}L_3^{\rm bb}+\frac{1}{2}C(\text{½})$ & 0 & 0 & 0 & 0 & \(\nicefrac{7}{8}\) & \(\nicefrac{3}{8}\) & \(\nicefrac{7}{8}\) & \(\nicefrac{-3}{8}\) & \(\nicefrac{5}{2}\) & 3\\ \bottomrule \end{tabular}}\vspace{-1ex} \caption{\label{tab:genboxes3} An explicit description of the resources which demonstrate the incompleteness of the pair of monotones $\{ M_{\rm CHSH}, M_{\rm NPR}\}$. The fact that $\Braket{A_0 B_0}{=}1$ for the free boxes immediately proves that these do indeed lie on the boundary of the full set of GPT-realizable common-cause boxes of this type, $\boldsymbol{S}^G_{\twotwotwotwo}$ (since it implies that $p(0,1|0,0) = 0 =p(1,0|0,0)$, and hence these boxes saturate positivity inequalities). } \end{table*} Consider for example the resources $L_1^{\rm bb}$, $L_2^{\rm bb}$, and $L_3^{\rm bb}$ defined in Table~\ref{tab:genboxes3}. 
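The arithmetic in Table~\ref{tab:genboxes3} can be checked mechanically with exact rational arithmetic. A sketch (the dictionary keys are our labels, transcribed from the table; the local biases $\Braket{A_x}$, $\Braket{B_y}$ are omitted since they do not enter the CHSH functional):

```python
from fractions import Fraction as F

def chsh(E):
    # the CHSH functional <A0B0> + <A1B0> + <A0B1> - <A1B1>
    return E["A0B0"] + E["A1B0"] + E["A0B1"] - E["A1B1"]

# Two-point correlators transcribed from the table
L1 = {"A0B0": F(1), "A1B0": F(1), "A0B1": F(1), "A1B1": F(1)}
L2 = {"A0B0": F(1), "A1B0": F(1), "A0B1": F(0), "A1B1": F(0)}
L3 = {"A0B0": F(1), "A1B0": F(0), "A0B1": F(1), "A1B1": F(0)}
C_half = {"A0B0": F(3, 4), "A1B0": F(3, 4),
          "A0B1": F(3, 4), "A1B1": F(-3, 4)}

def half_mix(L):
    """The equal mixture (1/2) L + (1/2) C(1/2), correlator by correlator."""
    return {k: (L[k] + C_half[k]) / 2 for k in L}
```

One can verify that each of the three boxes $L_i^{\rm bb}$ saturates CHSH, that the three mixtures all attain $\CHSH = 5/2$ (and hence share identical values of $M_{\CHSH}$, with the table's $M_{\rm NPR}$ values also coinciding), yet the mixtures are distinct boxes; the ordering relations distinguishing them must therefore be established by other means, such as the pairwise comparison algorithm discussed next.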
Using the pairwise comparison algorithm described in Section~\ref{polything}, one can verify that the resource ${\tfrac{1}{2} L_1^{\rm bb}+\tfrac{1}{2}C(\text{½})}$ is strictly higher in the order than ${\tfrac{1}{2} L_2^{\rm bb}+\tfrac{1}{2}C(\text{½})}$, while the two resources ${\tfrac{1}{2} L_2^{\rm bb}+\tfrac{1}{2}C(\text{½})}$ and ${\tfrac{1}{2} L_3^{\rm bb}+\tfrac{1}{2}C(\text{½})}$ are incomparable. Note that $L_1^{\rm bb}$ is a {\em convexly extremal} resource, while $L_2^{\rm bb}$ and $L_3^{\rm bb}$ are not. As an aside, it is worth noting that because the nonlocal fraction and the two standard robustness measures witness exactly the same ordering relations as $M_{\CHSH}$ does (as demonstrated in Section~\ref{sec:othermonotones}), one gains nothing by supplementing $M_{\CHSH}$ and $M_{\textup{NPR}}$ with them. Rather, new monotones are needed. The incompleteness of the two-element set $\{M_{\CHSH},M_{\textup{NPR}}\}$ is also established directly from the argument presented in Section~\ref{sec:monotonecount}. \subsubsection{Completeness of the two monotones for certain families of resources} \label{sec:ourcompleteness} Although $M_{\CHSH}$ and $M_{\rm NPR}$ do not form a complete set of monotones for the set of all resources of type {\twotwotwotwo}, it turns out that they {\em do} form a complete set of monotones for certain subsets thereof. \begin{prop}\label{prop:whencomplete} The pair of monotones $\{M_{\CHSH},M_{\rm NPR}\}$ is a complete set relative to the subset of resources $\boldsymbol{S}_{\twotwotwotwo}^{L^{\rm bb}_{\star}}$ (defined in Eq.~\eqref{Rset}) for any $L^{\rm bb}_{\star} \in \boldsymbol{L}^{\rm bb}$. \end{prop} Proposition~\ref{prop:whencomplete} is proven in Appendix~\ref{proofprop2}.
The logic of the proof is quite simple: we prove that there always exists a free operation $\tau_{{\rm erase}-\gamma}$ which converts an arbitrary resource $R(\alpha_1,\gamma_1)$ in the family to some resource $R(\alpha_2,0)$ lying on the chain $\boldsymbol{C}_{\textup{NPR}}$ without changing the value of $M_{\CHSH}$. By convexity, it follows that $R(\alpha_1,\gamma_1)$ can be converted to any resource in the convex hull of $R(\alpha_1,\gamma_1)$, $R(\alpha_2,0)$, $L^{\rm bb}_{\star}$, and $L_{\rm NPR}^{\rm b}$; namely, the dark-blue region in Fig.~\ref{fig:complete}. This region corresponds to the set of all resources for which the values of both $M_{\CHSH}$ and $M_{\rm NPR}$ are no greater than those of $R(\alpha_1,\gamma_1)$. It follows that if a conversion is not forbidden by consideration of this pair of monotones, then it is achievable. By the definition of completeness for a set of monotones (see Eq.~\eqref{completeset}), this implies that the two monotones are indeed a complete set for this family of resources. \subsection{At least eight independent measures of nonclassicality} \label{sec:monotonecount} In this section, we tackle the question of how many independent continuous monotones are required to fully specify the partial order of resources. This is the content of Theorem~\ref{prop:eightmonotonesV2}. Along the way to proving this result, we also prove a powerful result about the equivalence classes under {\LOSR} for nonfree resources of type {\twotwotwotwo}, stated in Proposition~\ref{prop:zerodclasses}. We begin by drawing a distinction among resources. \begin{defn}\label{def:orbital} A resource is said to be \term{orbital} if its equivalence class under type-preserving {\LOSR} is equal to its equivalence class under {\LSO}.
\end{defn} It follows that if all the resources in a set $\mathbf{S}$ are orbital, then the quotient space~\cite{Quotients1994} of $\mathbf{S}$ under the group {\LSO} provides a representation of the partial order of {\LOSR}-equivalence classes of resources in $\mathbf{S}$ (despite the fact that the {\LOSR} operations do not themselves form a group).\footnote{For practical purposes, Ref.~\cite[App.~B]{Rosset2014classifying} provides a technical discussion regarding how to efficiently select a representative Bell inequality under a finite symmetry group; the procedure discussed there is equally applicable for the task of efficiently selecting \emph{canonical form} resources. Note, however, that the {\LSO} symmetry group differs from the Bell-polytope automorphism group considered in Ref.~\cite{Rosset2014classifying}, in that {\LSO} does \emph{not} include the symmetry of exchange-of-parties.} This property of resources is pertinent to the discussion here because of the following result: \begin{prop}\label{prop:zerodclasses} All nonfree resources of type {\twotwotwotwo} are orbital. \end{prop} The proof is provided in Appendix~\ref{proofprop3}. Note that for {\em free} resources, {\LOSR}-equivalence is distinct from {\LSO}-equivalence because the {\LSO}-equivalence class of any resource (including a free resource) is of finite cardinality, while the {\LOSR}-equivalence class of a free resource is the entire set of free resources, which is of infinite cardinality. Thus, free resources are not orbital. Moreover, the coincidence between being nonfree and being orbital does {\em not} generalize beyond the {\twotwotwotwo} scenario.
For instance, note that a pair of ${\twotwotwotwo}$ resources, $R_1$ and $R_2$, which are implemented in parallel, can be conceptualized as a ${\fourfourfourfour}$ resource, $R_{1\otimes2}$, by composing the two binary setting variables on the left wing into a single 4-valued setting variable on the left wing, and similarly for the other setting variable and the outcome variables. If $R_1$ is free and $R_2$ is nonfree, then $R_{1\otimes 2}$ is nonfree, and yet because $R_1$'s equivalence class is not generated by {\LSO}, neither is the equivalence class of $R_{1\otimes 2}$. Thus, $R_{1\otimes 2}$ is a nonfree resource that is not orbital. To express the next proposition, we require the following definition. \begin{defn}\label{intrinsicdimension} The \term{intrinsic dimension} of a set of resources $\boldsymbol{S}$, denoted $\operatorname{IntrinsicDim}(\boldsymbol{S})$, is the smallest cardinality of a set of continuous real-valued functions on $\boldsymbol{S}$ whose values jointly suffice to uniquely identify any resource within $\boldsymbol{S}$. \end{defn} \begin{prop}\label{prop:orbitalstuff} For any compact set $\mathbf{S}$ of resources that are all orbital, the intrinsic dimension of the set $\mathbf{S}$ is a lower bound on the cardinality of a complete set of continuous monotones for $\mathbf{S}$ (and for any superset of $\mathbf{S}$). \end{prop} The proof is provided in Appendix~\ref{sec:proofproporbitalstuff}. Recognizing that the set of nonfree resources of type {\twotwotwotwo} has intrinsic dimension equal to eight,\footnote{That ${\operatorname{IntrinsicDim}(\boldsymbol{S}^{\textup{nonfree}}_{\twotwotwotwo})=8}$ is evidenced by the characterization of such resources in terms of outcome biases and two-point correlators.
If $T$ indicates any type, then ${\operatorname{IntrinsicDim}(\boldsymbol{S}^{\textup{nonfree}}_{T})=\operatorname{IntrinsicDim}(\boldsymbol{S}^{G}_{T})}$ whenever ${\boldsymbol{S}^{\textup{G}}_{T}\neq\boldsymbol{S}^{\textup{free}}_{T}}$ (think of subtracting one polytope from a circumscribing polytope of the same dimension). See Refs.~\cite{Bellreview,Pironio2005,CG2004I3322,Rosset2014classifying} for discussions of the intrinsic dimension of no-signalling polytopes.} Propositions~\ref{prop:zerodclasses}~and~\ref{prop:orbitalstuff} together imply the following theorem: \begin{thm}\label{prop:eightmonotonesV2} For resources of type {\twotwotwotwo}, the cardinality of a complete set of continuous monotones is no less than eight. \end{thm} \section{Properties of the pre-order of {\em quantumly realizable} common-cause boxes} \label{sec:qr} The bulk of this article has considered the resource theory which is defined by taking the enveloping theory of resources to be the GPT-realizable common-cause boxes, and the free subtheory of resources to be the classically realizable common-cause boxes. In this section, we consider a slightly different resource theory, wherein the enveloping theory of resources is taken to be the common-cause boxes that are realizable in a {\em quantum} causal model, which we term \term{quantumly realizable}, while the free subtheory is chosen to be, as before, the common-cause boxes that are classically realizable. Effectively, the new resource theory concerns the nonclassicality of common-cause boxes within the scope of nonclassicality that can be achieved quantumly. In other words, it concerns the {\em intrinsic quantumness} of common-cause boxes.
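Since the counting argument above trades on the eight-parameter description of {\twotwotwotwo} behaviors (four outcome biases and four two-point correlators), it may help to see that description made explicit. The sketch below reconstructs the full conditional distribution from the eight moments; the function name and the $(-1)^x$ encoding of binary outcomes are our illustrative conventions, not notation from the text.

```python
import numpy as np

def behavior_from_moments(bias_A, bias_B, corr):
    """Reconstruct P(x, y | s, t) for a (2,2,2,2)-type box from its eight
    moments: bias_A[s] = <A_s>, bias_B[t] = <B_t>, corr[s][t] = <A_s B_t>,
    with outcomes x, y in {0, 1} encoded as (-1)**x and (-1)**y."""
    P = np.empty((2, 2, 2, 2))  # indices: x, y, s, t
    for x in range(2):
        for y in range(2):
            for s in range(2):
                for t in range(2):
                    P[x, y, s, t] = (1
                                     + (-1) ** x * bias_A[s]
                                     + (-1) ** y * bias_B[t]
                                     + (-1) ** (x + y) * corr[s][t]) / 4
    return P

# Example: the (bias-free) Tsirelson box, whose correlators are +-1/sqrt(2).
r = 1 / np.sqrt(2)
P = behavior_from_moments([0, 0], [0, 0], [[r, r], [r, -r]])
assert np.allclose(P.sum(axis=(0, 1)), 1)  # normalized for every (s, t)
assert np.all(P >= 0)                      # genuine probabilities
# No-signalling: Alice's marginal does not depend on Bob's setting t.
assert np.allclose(P.sum(axis=1)[:, :, 0], P.sum(axis=1)[:, :, 1])
```

Conversely, the eight moments are linear functionals of $P$, so they serve as continuous coordinates on the set of no-signalling behaviors of this type.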
Formally, the conditional probability distribution associated to a quantumly realizable common-cause box is of the same form as Eq.~\eqref{GPTCCbox}, that is, \begin{equation} P_{XY|ST}(xy|st) = ({\bf r}^{A}_{x|s} \otimes {\bf r}^{B}_{y|t}) \cdot {\bf s}^{AB}, \label{QuantumCCbox} \end{equation} but where the vector ${\bf s}^{AB}$ is a real vector representation of a quantum state on the bipartite system composed of quantum systems $A$ and $B$, and the sets of vectors $\{ {\bf r}^{A}_{x|s}\}_x$ and $\{ {\bf r}^{B}_{y|t}\}_y$ are real vector representations of POVMs on $A$ and on $B$, respectively. (See, e.g., Ref.~\cite{Henson2014}.) Although the conclusions we drew in Section~\ref{sec:lessonsfromtwomonotones} concerned the pre-order of GPT-realizable common-cause boxes, analogous results hold true for the pre-order of quantumly realizable common-cause boxes. This is because the kind of two-parameter family of GPT-realizable common-cause boxes that was used to establish global features of the pre-order of such boxes in Section~\ref{sec:lessonsfromtwomonotones} contains a two-parameter family of quantumly realizable common-cause boxes that can be used for the same purpose. A caricature of one such quantumly realizable family is provided in Fig.~\ref{TsirelsonHardy}. Specifically, if one reviews the arguments that were used in Section~\ref{sec:lessonsfromtwomonotones} to establish the various global properties of the pre-order of GPT-realizable common-cause boxes, it becomes apparent that these apply equally well to the quantumly realizable common-cause boxes. It is also straightforward to show that the lower bound on the cardinality of a complete set of monotones, obtained in Section~\ref{sec:monotonecount}, also applies to the resource theory of quantumly realizable common-cause boxes.
It suffices to consider the case of the quantumly realizable resources of type {\twotwotwotwo}, hereafter $\boldsymbol{S}^Q_{\twotwotwotwo}$, and to note that the set of nonfree resources therein, that is, the set $\boldsymbol{S}^{\textup{nonfree}}_{\twotwotwotwo} \bigcap \boldsymbol{S}^{Q}_{\twotwotwotwo}$, still has intrinsic dimension equal to eight. \begin{table*}[htb] \centering {\setlength{\tabcolsep}{1.2ex} \begin{tabular}{|c|cccccccc|c|c|} \toprule & \(\Braket{A_0}\) & \(\Braket{A_1}\) & \(\Braket{B_0}\) & \(\Braket{B_1}\) & \(\Braket{A_0 B_0}\) & \(\Braket{A_1 B_0}\) & \(\Braket{A_0 B_1}\) & \(\Braket{A_1 B_1}\) & $M_{\CHSH}$ & $M_{\NPR}$\\ \midrule \(R_{\textup{Tsirelson}}\) & 0 & 0 & 0 & 0 & $\nicefrac{\sqrt{2}}{2}$ & $\nicefrac{\sqrt{2}}{2}$ & $\nicefrac{\sqrt{2}}{2}$ & $\nicefrac{-\sqrt{2}}{2}$ & $2\sqrt{2}$ & $2\sqrt{2}$\\ & & & & & \(\scriptstyle\approx 0.707\) & \(\scriptstyle\approx 0.707\) & \(\scriptstyle\approx 0.707\) & \(\scriptstyle\approx {-}0.707\) & \(\scriptstyle\approx 2.828\) & \(\scriptstyle\approx 2.828\)\\\midrule \(R_{\textup{Hardy}}\) & $\scriptstyle 5{-}2\sqrt{5}$ & $\scriptstyle \sqrt{5}{-}2$ & $\scriptstyle 5{-}2\sqrt{5}$ & $\scriptstyle \sqrt{5}{-}2$ & $\scriptstyle 6\sqrt{5}{-}13$ & $\scriptstyle 3\sqrt{5}{-}6$ & $\scriptstyle 3\sqrt{5}{-}6$ & $\scriptstyle 2\sqrt{5}{-}5$ & $\scriptstyle 10(\sqrt{5}{-}2)$ & 4\\ & \(\scriptstyle\approx 0.528\) & \(\scriptstyle\approx 0.236\) & \(\scriptstyle\approx 0.528\) & \(\scriptstyle\approx 0.236\) & \(\scriptstyle\approx 0.416\) & \(\scriptstyle\approx 0.708\) & \(\scriptstyle\approx 0.708\) & \(\scriptstyle\approx {-}0.528\) & \(\scriptstyle\approx 2.361\) & \\ \midrule \(R_{\textup{Tilt}}(\theta)\) & $\cos (\theta)$ & 0 & $\tfrac{\cos (\theta)}{\xi(\theta)}$ & $\tfrac{\cos (\theta)}{\xi(\theta)}$ & $\tfrac{1}{\xi(\theta)}$ & $\tfrac{\sin ^2(\theta )}{\xi(\theta)}$ & $\tfrac{1}{\xi(\theta)}$ & $\!\tfrac{-\sin ^2(\theta )}{\xi(\theta)}$ & $2\, \xi(\theta)$ & 
$\scriptstyle\binom{\text{see}}{\text{caption}}$ \\\midrule \(R_{\textup{Tilt}}(0)\) & 1 & 0 & 1 & 1 & 1 & 0 & 1 & 0 & 2 & 2 \\ \bottomrule \end{tabular}}\vspace{-1ex} \caption{\label{tab:boxes3} An explicit description of the Tsirelson resource, the Hardy resource, and a family of extremal quantum resources (parametrized by $\theta$) which are exposed by tilted Bell inequalities~\cite{Yang2013selftesting,Bamps2015selftesting}. We employ the shorthand $\xi(\theta)\coloneqq \sqrt{\sin ^2(\theta){+}1}$ to allow all definitions to fit within the table. We also analytically derived ${M_{\NPR}}\big(R_{\textup{Tilt}}(\theta)\big)=\frac{\xi(\theta)\left(\xi(\theta)-1\right)}{2\left(1-\cos(\theta)\right)-\xi(\theta)\left(\xi(\theta)-1\right)}$, for $0<\theta\leq \pi/2$. One can readily verify that ${M_{\NPR}}\big(R_{\textup{Tilt}}(\theta)\big)$ increases with the amount of tilt (i.e., $\expec{A_0}=\cos(\theta)$), whereas ${M_{\CHSH}}\big(R_{\textup{Tilt}}(\theta)\big)=2\sqrt{2-\cos^2(\theta)}$ decreases with added tilt. The opposite behavior of the two monotones implies that every resource in the tilted family $\theta\in (0,\pi/2]$ is incomparable to every other. $R_{\textup{Tilt}}(0)$ is a free resource, not violating any Bell inequality; at the other end of the family, $R_{\textup{Tilt}}(\tfrac{\pi}{2})=R_{\textup{Tsirelson}}$. } \end{table*} In the rest of this section, we consider properties of the pre-order of quantumly realizable common-cause boxes that are particular to the quantum case. Unlike for the set $\boldsymbol{S}^G_{\twotwotwotwo}$, where the partial order of equivalence classes has a unique element at the top of the order (the equivalence class of $R_{\rm PR}$), in $\boldsymbol{S}^Q_{\twotwotwotwo}$ there is no unique element at the top of the order. An easy way to see this is by considering the example of the Tsirelson box ($R_{\rm Tsirelson}$) and the Hardy box ($R_{\rm Hardy}$), each of which is defined explicitly in Table~\ref{tab:boxes3}.
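The Tsirelson row of Table~\ref{tab:boxes3} can be reproduced from a two-qubit realization of the form of Eq.~\eqref{QuantumCCbox}, and the Hardy row's value of $M_{\CHSH}$ can be recomputed directly from its correlators. The following sketch assumes the standard CHSH-optimal measurement angles for the maximally entangled state; all variable names are ours.

```python
import numpy as np

sx = np.array([[0., 1.], [1., 0.]])
sz = np.array([[1., 0.], [0., -1.]])

def obs(angle):
    """Qubit observable cos(angle) Z + sin(angle) X, with eigenvalues +-1."""
    return np.cos(angle) * sz + np.sin(angle) * sx

# Maximally entangled state |Phi+> = (|00> + |11>)/sqrt(2).
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)

A = [obs(0), obs(np.pi / 2)]            # Alice's two settings
B = [obs(np.pi / 4), obs(-np.pi / 4)]   # Bob's two settings

corr = np.array([[phi @ np.kron(A[s], B[t]) @ phi for t in (0, 1)]
                 for s in (0, 1)])
# All four correlators come out as +-1/sqrt(2), as in the Tsirelson row.
assert np.allclose(np.abs(corr), 1 / np.sqrt(2))

chsh = corr[0, 0] + corr[1, 0] + corr[0, 1] - corr[1, 1]
assert np.isclose(chsh, 2 * np.sqrt(2))  # the Tsirelson bound

# Hardy row, correlators taken verbatim from Table 3:
s5 = np.sqrt(5)
chsh_hardy = (6 * s5 - 13) + (3 * s5 - 6) + (3 * s5 - 6) - (2 * s5 - 5)
assert np.isclose(chsh_hardy, 10 * (s5 - 2))  # strictly below 2*sqrt(2)
```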
Noting that $M_{\CHSH}(R_{\rm Tsirelson})=M_{\NPR}(R_{\rm Tsirelson})= 2\sqrt{2}\approx 2.828$, and that $M_{\CHSH}(R_{\rm Hardy})=10(\sqrt{5}{-}2) \approx 2.361$ and $M_{\NPR}(R_{\rm Hardy})=4$, it follows immediately that the two boxes are incomparable since $M_{\CHSH}(R_{\rm Tsirelson}) > M_{\CHSH}(R_{\rm Hardy})$ while $M_{\NPR}(R_{\rm Tsirelson}) < M_{\NPR}(R_{\rm Hardy})$. We show these two resources in Fig.~\ref{TsirelsonHardyTrianglePlot}, together with an approximate sketch\footnote{An analytic characterization of the set of all extremal quantumly realizable resources within $\boldsymbol{S}^Q_{\twotwotwotwo}$ is not known. In Fig.~\ref{TsirelsonHardyTrianglePlot}, the endpoints and the slope of the curve at the endpoints are exact, and the rest of the curve is merely an interpolation.} of the extremal quantumly realizable resources which interpolate between them (the light-blue curve). The values of $M_{\rm CHSH}$ and $M_{\rm NPR}$ on all of these resources are plotted in Fig.~\ref{TsirelsonHardyMonotoneAxes}. From the figure, one can immediately infer that $R_{\rm Tsirelson}$ and $R_{\rm Hardy}$ are incomparable. \color{black} \begin{figure}[b!]
\begin{center} \subfigure[\label{TsirelsonHardyTrianglePlot}] { \centering \begin{tikzpicture}[scale=0.7] \path[draw, ultra thick] (-30:4) -- (90:4) -- (210:4) -- cycle; \node[draw,shape=circle,fill,scale=.4, label={[label distance=0.1cm]0:\large{${L_{\star}^{\rm bb}}$}}] at (-30:4) {}; \node[draw,shape=circle,fill,scale=.4, label={[label distance=0.1cm]90:\large{${R_{\textup{PR}}}$}}] at (90:4) {}; \node[draw,shape=circle,fill,scale=.4, label={[label distance=0.1cm]180:\large{${L_{\textup{NPR}}^{\rm b}}$}}] at (210:4) {}; \path[name path=sr1, shift=(-30:4)] (0:0) -- (150:7); \path[name path=cr1] (-4,1) -- (4,1); \path [name intersections={of=cr1 and sr1, by=R1}]; \node [draw,shape=circle,fill,scale=.4, label={[label distance=-0.1cm]135:${R_{\textup{Tsirelson}}}$}] at (R1) {}; \path[name path=sr2, shift=(-30:4)] (0:0) -- (120:7); \path[name path=cr2] (-4,-0.5) -- (4,-0.5); \path [name intersections={of=cr2 and sr2, by=R2}]; \node [draw,shape=circle,fill,scale=.4, label={[label distance=-0.1cm]45:${R_{\textup{Hardy}}}$}] at (R2) {}; \path [name intersections={of=cr1 and sr2, by=R3}]; \draw[thick, color=Blue, circle color=Blue, dotted pattern] (R1) to [out=-5, in = 130] (R2) ; \begin{pgfonlayer}{myback} \path[clip] (-30:4) -- (90:4) -- (210:4) -- cycle; \foreach \y in{-3,-2.5,...,4} \draw[name path=c.\y, color=jflyBlue] (-4,\y) -- (4,\y); \foreach \y in{180,175,...,120} \draw[name path=s.\y, shift=(-30:4), color=jflyVermillion] (0:0) -- (\y:7); \end{pgfonlayer} \end{tikzpicture} } \hskip -0.5cm \subfigure[\label{TsirelsonHardyMonotoneAxes}] { \centering \begin{tikzpicture}[scale=1] \path[draw, ultra thick] (0,0) -- (0,4) -- (4,4) -- (4,0) ; \path[draw, ultra thick, dashed] (0,0) -- (4,0) ; \node [draw,shape=circle,fill,scale=.4, label={[label distance=-0.1cm]225:${2}$}] at (0,0) {}; \node [draw=none,scale=.4, label={[label distance=-0cm]-90:${4}$}] at (4,0) {}; \node [draw=none,scale=.4, label={[label distance=-0cm]180:${4}$}] at (0,4) {}; \node 
[draw,shape=circle,fill,scale=.4, label={[label distance=-0.1cm]45:${R_{\textup{PR}}}$}] at (4,4) {}; \node at (3,-0.5) {${M_{\textup{NPR}}}$}; \node at (-1, 3) {${M_{\textup{CHSH}}}$}; \path[pattern=north west lines] (0,0) -- (0,4) -- (4,4)--cycle; \path[pattern=north east lines] (0,0) -- (0,4) -- (4,4)--cycle; \node[fill=white] at (2,2.3) {${R_{\textup{Tsirelson}}}$}; \node [draw,shape=circle,fill,scale=.4] (R1) at (2,2) {}; \node [draw,shape=circle,fill,scale=.4, label={[label distance=-0.1cm]0:${R_{\textup{Hardy}}}$}] (R2) at (4,1) {}; \draw[thick, color=Blue, circle color=Blue, dotted pattern] (R1) to [out=-5, in = 100] (R2); \draw[dashed] (R1) -- (0,2); \draw[dashed] (R1) -- (2,0); \node [draw=none] at (2,-0.2) {\scriptsize{${2\sqrt{2}}$}}; \node [draw=none] at (-0.4,2) {\scriptsize{${2\sqrt{2}}$}}; \draw[thick] (R1) to [out=-15, in = 100] (4,0); \begin{pgfonlayer}{myback} \path[clip] (0,0) -- (4,0) -- (4,4) -- cycle; \foreach \y in{0,0.334,...,4} \draw[name path=c.\y, color=jflyBlue] (\y,\y) -- (4,\y); \foreach \y in{0,0.334,...,4} \draw[name path=c.\y, color=jflyVermillion] (\y,0) -- (\y,\y);; \end{pgfonlayer} \end{tikzpicture} } \end{center} \caption[.]{ \subref{TsirelsonHardyTrianglePlot} and \subref{TsirelsonHardyMonotoneAxes} provide the same pair of depictions of the 2-parameter family of resources $\boldsymbol{S}_{\twotwotwotwo}^{L^{\rm bb}_{\star}}$ as were introduced in Fig.~\ref{fig:twoM}. Here, we provide a caricature of some ordering relations among quantumly realizable common-cause boxes within this 2-parameter family. We depict the Tsirelson and Hardy boxes (with scaled-up values of the monotones, but accurate ordering of these values), together with a guess of what the boundary of the set of quantumly realizable resources within this 2-parameter family might be (dotted blue curves).
In \subref{TsirelsonHardyMonotoneAxes}, we also depict the values of the two monotones for the set of convexly extremal, quantumly realizable resources which are self-tested by the tilted Bell inequalities (smooth black curve). } \label{TsirelsonHardy} \end{figure} Recall that no quantumly realizable resource can achieve the algebraic maximum of $M_{\rm CHSH}$, while some GPT-realizable resources (such as $R_{\rm PR}$) {\em can} achieve the maximum. In contrast to $M_{\CHSH}$, $M_{\rm NPR}$ is such that some quantumly realizable resources (such as $R_{\rm Hardy}$) attain its maximal value. Furthermore, whereas $R_{\rm PR}$ maximizes both $M_{\rm CHSH}$ and $M_{\rm NPR}$, no single quantumly realizable resource maximizes both those monotones. Therefore, a unique feature of the enveloping theory of \emph{quantumly realizable} common-cause boxes is that {\em inequivalent} resources can simultaneously be maximally nonclassical (according to distinct monotones), even among {\twotwotwotwo}-type resources. The interpolated curve in Figs.~\ref{TsirelsonHardyTrianglePlot} and \ref{TsirelsonHardyMonotoneAxes} furthermore suggests that perhaps all extremal quantumly realizable resources depicted therein are pairwise incomparable. \noindent The following lemma gives a powerful result regarding maximally nonclassical resources: \begin{lem} \label{prop:ext} If a nonfree resource $R$ is convexly extremal in the set $\boldsymbol{S}^Q_{\twotwotwotwo}$ of quantumly realizable resources of type {\twotwotwotwo}, then $R$ is at the top of the pre-order among quantumly realizable resources of type {\twotwotwotwo}. \end{lem} \begin{proof} Let $R\in \boldsymbol{S}^Q_{\twotwotwotwo}$ be nonfree and extremal in $\boldsymbol{S}^Q_{\twotwotwotwo}$. Then, to prove the lemma, we need only prove that any quantumly realizable $R'\in \boldsymbol{S}^Q_{\twotwotwotwo}$ that can be freely converted to $R$ cannot be higher in the order than $R$ (rather, it must be equivalent).
Assume the existence of some quantumly realizable $R'$ such that $R'\conv R$. Since $R$ is extremal in the image of $R'$ under {\LOSR},\footnote{This is justified as follows: from $R' \conv R$ it follows that $R \in\mathbfcal{P}^{\textup{LOSR}}_{[R]}(R')$, and from the fact that quantumly realizable boxes remain quantumly realizable under {\LOSR}, it follows that $\mathbfcal{P}^{\textup{LOSR}}_{[R]}(R') \subset \boldsymbol{S}^Q_{\twotwotwotwo}$. Finally, $R$ is by assumption extremal in $\boldsymbol{S}^Q_{\twotwotwotwo}$; hence, it is extremal in $\mathbfcal{P}^{\textup{LOSR}}_{[R]}(R')$ as well.} it must be that $R'$ is converted to $R$ through extremal operations: that is, through {\LDO}. But as follows from Lemma~\ref{lem:allCHSHsensitive} in Appendix~\ref{proofprop3}, or as can be explicitly checked,\footnote{One can explicitly check that all extremal \twotwotwotwo-type resources are mapped to the free set by any deterministic operation which is not a symmetry, which implies by convexity that {\em all} \twotwotwotwo-type resources are also mapped to the free set by these operations. } the image of any {\twotwotwotwo}-scenario resource is \emph{free} under any deterministic operation which is not a symmetry! Put another way, there \emph{is no preimage} of any nonfree {\twotwotwotwo}-scenario resource among {\twotwotwotwo}-scenario resources under deterministic nonsymmetry operations. This means that the only $\tau\in \underset{\scriptscriptstyle ^{\twotwotwotwo\rightarrow\twotwotwotwo}}{\LDO}$ such that \emph{conceivably} $\tau\circ R'=R$ are symmetry operations. As such, if $R$ is a nonfree extremal quantumly realizable resource of type {\twotwotwotwo}, the only quantumly realizable resources (of the same type) which can be converted to $R$ are symmetries of $R$. 
Since resources related by a symmetry operation are in the same equivalence class, there are no {\twotwotwotwo}-type quantumly realizable resources strictly above $R$ in the partial order.\end{proof} Lemma~\ref{prop:ext} allows us to conclude the following: \begin{prop}\label{prop:continuoustop} There exists a continuous set of resources that are at the top of the pre-order of quantumly realizable {\twotwotwotwo} resources, and wherein each resource is incomparable to every other resource in the set. \end{prop} \begin{proof} Lemma~\ref{prop:ext} states that all nonfree resources which are convexly extremal in $\boldsymbol{S}^Q_{\twotwotwotwo}$ are at the top of the pre-order of quantumly realizable {\twotwotwotwo} resources. The fact that one can find a continuous set of such resources follows from the well-known fact that $\boldsymbol{S}^Q_{\twotwotwotwo}$ is not a polytope. By furthermore choosing such a set of extremal resources for which $M_{\CHSH}$ takes a distinct value for every resource in the set, one additionally guarantees that no two of these top-of-the-order resources are in the same equivalence class, and hence each must be incomparable to every other in the set. Refs.~\cite{Masanes2003,Allcock2009,Bellreview,geometry2018} provide some explicit sets of resources satisfying these criteria. \end{proof} As one concrete example, consider the one-parameter family of quantumly realizable resources which are self-tested by the tilted Bell inequalities. We denote this family by $\{ R_{\textup{Tilt}}(\theta): \theta \in (0,\pi/2]\}.$ The definition of $R_{\textup{Tilt}}(\theta)$ is given in Table~\ref{tab:boxes3}.
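Using the closed-form expressions for the two monotones quoted in the caption of Table~\ref{tab:boxes3} (valid for $0<\theta\leq\pi/2$), the opposite monotonic trends along this family, and hence its antichain structure, can be checked numerically. A minimal sketch, with our own function names:

```python
import numpy as np

def xi(theta):
    # Shorthand from the caption of Table 3.
    return np.sqrt(np.sin(theta) ** 2 + 1)

def M_CHSH_tilt(theta):
    # Closed form quoted in the caption: 2 * sqrt(2 - cos^2(theta)).
    return 2 * np.sqrt(2 - np.cos(theta) ** 2)

def M_NPR_tilt(theta):
    # Closed form quoted in the caption, valid for 0 < theta <= pi/2.
    x = xi(theta)
    return x * (x - 1) / (2 * (1 - np.cos(theta)) - x * (x - 1))

thetas = np.linspace(0.1, np.pi / 2, 50)
# The table's M_CHSH column (2 xi(theta)) agrees with the caption's closed form.
assert np.allclose(M_CHSH_tilt(thetas), 2 * xi(thetas))
# As theta grows (i.e., as the tilt <A_0> = cos(theta) shrinks),
# M_CHSH strictly increases while M_NPR strictly decreases...
assert np.all(np.diff(M_CHSH_tilt(thetas)) > 0)
assert np.all(np.diff(M_NPR_tilt(thetas)) < 0)
# ...so any two distinct members of the family are incomparable.
```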
These resources are related to a corresponding family of tilted Bell functionals~\cite{Acin2012randomnessvsnonlocality,Wolfe2012quantumbounds,Yang2013selftesting,Bamps2015selftesting}, parametrized by $\beta \in [0,2]$, namely, \begin{align*} \label{eq:tiltedCHSH}\operatorname{TiltedCHSH}_{\beta}&(R)\coloneqq \beta \expec{A_0} +\expec{A_0 B_0}\\\nonumber&\quad+\expec{A_1 B_0}+\expec{A_0 B_1}-\expec{A_1 B_1}, \\\nonumber\begin{split}\text{where}\quad \smashoperator{\max\limits_{R^{\star}\in \boldsymbol{S}^{\rm free}_{\twotwotwotwo}}}& \;\;\operatorname{TiltedCHSH}_{\beta}(R^{\star}) = 2+ \beta, \\ \text{and where}\quad \smashoperator{\max\limits_{R^{\star}\in \boldsymbol{S}^Q_{\twotwotwotwo}}}& \;\;\operatorname{TiltedCHSH}_{\beta}(R^{\star}) = \sqrt{8+2\beta^2}. \end{split}\end{align*} Note that the only value of $\beta$ for which the maximum value of this function over the quantumly realizable set $\boldsymbol{S}^Q_{\twotwotwotwo}$ coincides with the maximum value over the free set $\boldsymbol{S}^{\rm free}_{\twotwotwotwo}$ is $\beta=2$. Whenever $\beta<2$, the resource $R_{\textup{Tilt}}(\theta)$ for $\theta$ defined implicitly by the equation $\beta=\frac{2}{\sqrt{1+2\tan^2 (\theta)}}$ is the \emph{unique} maximizer over $\boldsymbol{S}^Q_{\twotwotwotwo}$ of the corresponding tilted Bell functional. Formally, \begin{align*} &\beta=\frac{2}{\sqrt{1+2\tan^2 (\theta)}}<2\;\quad\text{implies}\quad \\&\operatorname{TiltedCHSH}_{\beta}(R^{\star}) < {\operatorname{TiltedCHSH}_{\beta}}\big\lparen R_{\textup{Tilt}}(\theta)\big\rparen \end{align*} for any $R^{\star}\in \boldsymbol{S}^Q_{\twotwotwotwo}$ distinct from $R_{\textup{Tilt}}(\theta)$. It follows that every resource $R_{\textup{Tilt}}(\theta)$ is convexly extremal in the set of quantumly realizable resources, and its extremality is \emph{exposed} by the corresponding tilted Bell functional. 
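One can likewise verify numerically, from the moments listed in Table~\ref{tab:boxes3}, that $R_{\textup{Tilt}}(\theta)$ saturates the quantum bound $\sqrt{8+2\beta^2}$ of the corresponding tilted Bell functional, and that this bound strictly exceeds the classical bound $2+\beta$ whenever $\beta<2$. A sketch under our own naming conventions:

```python
import numpy as np

def tilt_box_moments(theta):
    """Biases and correlators of R_Tilt(theta), copied from Table 3."""
    x = np.sqrt(np.sin(theta) ** 2 + 1)          # xi(theta)
    bias_A = [np.cos(theta), 0.0]
    bias_B = [np.cos(theta) / x, np.cos(theta) / x]
    corr = np.array([[1 / x, 1 / x],
                     [np.sin(theta) ** 2 / x, -np.sin(theta) ** 2 / x]])
    return bias_A, bias_B, corr

def tilted_chsh(beta, bias_A, corr):
    # beta <A_0> + <A_0 B_0> + <A_1 B_0> + <A_0 B_1> - <A_1 B_1>
    return beta * bias_A[0] + corr[0, 0] + corr[1, 0] + corr[0, 1] - corr[1, 1]

for theta in np.linspace(0.1, np.pi / 2, 50):
    beta = 2 / np.sqrt(1 + 2 * np.tan(theta) ** 2)
    bias_A, bias_B, corr = tilt_box_moments(theta)
    # R_Tilt(theta) saturates the quantum bound sqrt(8 + 2 beta^2)...
    assert np.isclose(tilted_chsh(beta, bias_A, corr),
                      np.sqrt(8 + 2 * beta ** 2))
    # ...which strictly exceeds the classical bound 2 + beta when beta < 2.
    assert np.sqrt(8 + 2 * beta ** 2) > 2 + beta
```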
In fact, every resource in this family is incomparable to every other in the family, as can be shown directly by considering the values of $M_{\CHSH}$ and $M_{\rm NPR}$. In Fig.~\ref{TsirelsonHardyMonotoneAxes}, we show a plot of the values of the two monotones evaluated on this family. The points form a continuous antichain, shown in black. Note that the family of resources $\{ R_{\textup{Tilt}}(\theta): \theta \in (0,\pi/2]\}$ does not lie in any plane in the linear space of resources, and as such we do not attempt to plot the family directly (rather we \emph{only} plot its valuations with respect to the two monotones). \section{Conclusions and outlook} We have conceptualized Bell experiments as common-cause `box-type' processes: bipartite or multipartite processes with classical variables as inputs and outputs, the internal causal structure of which is a common-cause acting on all of the wings of the experiment. We have argued in favour of this conceptualization by appeal to the fact that Bell's theorem can be regarded as implying the need for nonclassicality in the causal model that underlies the process. We have begun to quantify the nonclassicality of such common-cause box-type processes by developing a resource theory thereof. We have argued in favour of a particular choice of the free operations for this resource theory, namely, those which can be achieved by embedding the resource into a circuit consisting of box-type processes realizable with a {\em classical} common cause, and we have shown that this set is equivalent to the set of local operations and shared randomness. We have focused here on characterizing the pre-order defined by single-copy deterministic conversion of resources under the free operations. We have provided a linear program that decides how any two resources are ordered. 
By leveraging a pair of functions that we have proven to be monotones, we have also established a number of properties of this pre-order, such as the fact that it contains incomparable resources, that it has infinite width and height, that it is locally infinite, and that the incomparability relation is not transitive. Moreover, despite the fact that the values of the facet-defining Bell functionals are necessary and sufficient for \emph{witnessing} the nonclassicality of a common-cause box, we have shown that they are not sufficient for \emph{quantifying} the nonclassicality of a common-cause box. In other words, there are aspects of the nonclassicality of such boxes relevant to resource conversions that are not captured by the degree of violation of the facet-defining Bell inequalities. For the particular case of resources with two binary inputs and two binary outputs, we moreover showed that at least eight continuous monotones are required to fully specify the pre-order among resources. We have also derived some interesting facts about the pre-order of resources when one restricts attention to common-cause boxes that can be realized in quantum theory. In particular, we have shown that for quantumly realizable resources of type {\twotwotwotwo}, all convexly extremal resources are at the top of the pre-order of such resources, and that there are an infinite number of incomparable resources at the top of this pre-order. \bigskip There is much scope for advancing and generalizing our work, some examples of which we now describe. One of the most fundamental problems that is yet to be solved is that of characterizing the equivalence classes of resources in the pre-order induced by single-copy deterministic conversion. That is, one would like a compressed representation of each resource that includes all and only information that is relevant to determining its equivalence class in this pre-order. 
Finding such a representation would be the analogue within our resource theory of proving that the equivalence classes of pure bipartite entangled states under LOCC~\cite{nielsen1999conditions} are given by the Schmidt coefficients of the state. All resource monotones could then be efficiently expressed in terms of this compressed representation, while all other parameters of a resource could be safely ignored. Even among resources of type {\twotwotwotwo} (much less for resources of arbitrary type), we do not have a complete set of monotones for this pre-order.\footnote{Although considerations of the examples given in Section~\ref{sec:ourincompleteness} might provide the intuition necessary to find such a complete set for resources of type {\twotwotwotwo}.} Another interesting open problem is to connect the existing monotones to figures of merit for interesting operational tasks. E.g.,~does the value of the monotone $M_{\CHSH}$ determine the extent to which a given resource can be used for key distribution or randomness generation~\cite{BHK,Acin2006QKD,Scarani2006QKD,Acin2007QKD, colbeckamp, Pironio2010,Dhara2013DIRNG,vazirani14,Kaniewski2016chsh}? Since the monotone $M_{\textup{NPR}}$ is maximized for high-bias boxes from the $R_{\textup{Tilt}}(\theta)$ family (and by the Hardy box) as opposed to by the Tsirelson box, $M_{\textup{NPR}}$ is likely a figure of merit for operational tasks where the advantage is provided by such correlations~\cite{Acin2012randomnessvsnonlocality,BMP}. Note that in deriving our results about properties of this pre-order, we have not needed to consider any types of resource beyond {\twotwotwotwo}, that is, it has sufficed to consider Bell experiments of the CHSH type. It may be that more nuanced features of this pre-order only become apparent for more general types of resources. 
An obvious generalization of our work is to consider the pre-order induced by different sorts of conversion relations, such as indeterministic single-copy conversion\footnote{Indeterministic single-copy conversion is single-copy conversion that makes use of a post-selection. Therefore, to contemplate this notion of conversion for our resource theory is to contemplate expanding the set of free operations from LOSR to LOSR with post-selection. However, LOSR with postselection can map a correlation $P_{XY|ST}$ that satisfies the Bell inequalities to one that violates them, and even to one that violates the no-signalling condition. (This is in contrast to the situation with LOCC, where allowing postselection does not change the set of states that one can prepare for free.) Consequently, what sort of correlation is consistent with a classical common cause---and hence what should be deemed free in a resource theory of nonclassicality of common cause boxes---becomes contingent on what sort of postselection was implemented. For example, in a Bell experiment wherein detectors are not perfectly efficient, postselecting on detection can induce Bell inequality violations even in the absence of a nonclassical common cause. However, for a given value of the detection efficiency, this might only be able to explain a particular degree of violation, while any higher violation would still attest to the presence of a nonclassical common cause. In such a context, the boundary between the correlations that are consistent with a classical common cause and those that are not would no longer coincide with the facets of the Bell polytope. 
Consequently, even defining the free set of resources becomes quite complicated when postselection is allowed.}, multi-copy conversion, asymptotic conversion, and conversion in the presence of a catalyst (see Refs.~\cite{Coecke2014,Gour2015thermo,Fritz2017asymptotic} for a discussion of these different notions, and Refs.~\cite{Brunner2009distillation,Lang2014zoo, SandersGour,JonathanPlenio,vanDamHayden} for relevant examples of such generalized conversions). Other generalizations require changes to the enveloping theory of resources one is considering. We have noted that our definition of the free operations can easily be extended to define a resource theory of nonclassicality for box-type processes in more general causal structures, distinct from that of a Bell experiment. For example, as discussed in Appendix~\ref{diffcausalstr}, it can be extended to a scenario we term the triangle-with-settings scenario~\cite[Fig.~8]{BilocalCorrelations}, of which the much-studied `triangle scenario'~\cite{Steudel2015,Wolfe2016inflation,Gisin2017triangle,Fraser2018} is a special case. Another example would be to extend our definition to the `bilocality scenario'~\cite{Branciard2010,BilocalCorrelations,Chaves2017starnetworks, TavakoliStarNetworks,RossetNetworks,tavakoli2016noncyclic}. The analysis of such cases is complicated by the fact that our proposal implies that the set of free operations is not convex for them. Another such generalization would be to causal structures wherein there are cause-effect relations between different parts of the experiment, for instance, experiments involving sequences of nondestructive measurements on parts of a shared resource, such as the causal structure known as the `instrumental scenario'~\cite{Pearl1995,Bonet2001,Evans2012,Henson2014,Chaves2017instrumental, Himbeeck2018instrumental}.
A generalization of our resource theory in a different direction is to consider processes whose inputs and outputs are {\em not} classical (i.e., processes that are not `box-type'), but rather describe quantum or post-quantum systems. For the case of the common-cause structure which we focused on here, a quantum resource theory of this sort would subsume entanglement theory, but where quantum correlation is defined relative to the set of local operations and shared randomness (LOSR) rather than local operations and classical communication (LOCC). \vspace{-1em} \tocless \section{Acknowledgments} The authors acknowledge useful discussions with Tobias Fritz, Tomáš Gonda and Denis Rosset. D.S. is supported by a Vanier Canada Graduate Scholarship. This research was supported by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Research, Innovation and Science. This publication was made possible through the support of a grant from the John Templeton Foundation. The opinions expressed in this publication are those of the authors and do not necessarily reflect the views of the John Templeton Foundation. ABS acknowledges support by the Foundation for Polish Science (IRAP project, ICTQT, contract no. 2018/MAB/5, co-financed by EU within Smart Growth Operational Programme). \onecolumngrid \bigskip \begin{center} \line(1,0){250} \end{center} \bigskip \twocolumngrid \onecolumngrid \clearpage
\section{Introduction} \label{sec: introduction} Short duration gamma-ray bursts (sGRB) are bright, brief flashes of gamma rays \citep[$<$\,$2$ s;][]{Chryssa93} produced by the coalescence of two compact objects \citep[][]{Eichler1989,Narayan1992}, either a binary neutron star system (BNS; \citealt{Ruffert1999, Rosswog2003}) or a neutron star and a black hole (NS-BH; \citealt{Faber2006,Shibata2011}). Beginning in the era of the \textit{Neil Gehrels Swift Observatory} (subsequently \textit{Swift}; \citealt{Gehrels04}), sGRBs were, for the first time, localized to arcsecond accuracy based on the detection of their X-ray afterglows \citep{Gehrels2005,Barthelmy2005}, and shortly thereafter, their optical afterglows \citep{Fox2005,Hjorth2005sgrb,Villasenor2005}. These accurate localizations allowed for the identification of their host galaxies, and, in turn, their redshifts. Nevertheless, $\sim$\,$20$\,$-$\,$30\%$ of sub-arcsecond localized sGRBs are classified as hostless, hereafter \textit{observationally} hostless, due to their lack of a coincident galaxy to deep limits ($\gtrsim$\,26 mag; \citealt{Stratta2007,Perley2009,Rowlinson2010,Berger2010a,FongBerger2013,Tunnicliffe2014}) or multiple galaxies with a similar probability of chance coincidence \citep{Bloom2002,Berger2010a}. Although these events lack a coincident galaxy, a number of low-$z$ candidate hosts have been identified at large physical offsets (out to $\sim$\,$75$ kpc; \citealt{Bloom2007,Stratta2007,Troja2008,Rowlinson2010,Berger2010a}), placing the sGRBs well outside of the galaxy's light and potentially in tenuous (low density) environments. Furthermore, some events with secure host associations have been discovered within the outskirts of their galaxies at $>$\,$15$ kpc from their host's nucleus \citep{DAvanzo2009,Rowlinson2010,Troja2019,Lamb2019}, while others are found at $<$\,$1$ kpc \citep{Antonelli2009,DAvanzo2009,Levesque2010,Troja2016,OConnor2021}. 
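The probability-of-chance-coincidence criterion invoked above can be made concrete with a short sketch. The galaxy number-count parametrization below is the one commonly adopted in the sGRB host literature (following \citealt{Bloom2002,Berger2010a}); the magnitudes and radii in the example are illustrative, not measurements from this work:

```python
import math

def galaxy_surface_density(m):
    """Cumulative surface density (arcsec^-2) of galaxies brighter than
    r-band magnitude m, using the number-count parametrization commonly
    adopted in the sGRB host literature (Bloom et al. 2002)."""
    return (1.0 / (0.33 * math.log(10))) * 10 ** (0.33 * (m - 24.0) - 2.44)

def p_chance_coincidence(m, r_eff):
    """Probability of finding >= 1 unrelated galaxy of magnitude <= m
    within an effective radius r_eff (arcsec) of the burst position,
    assuming Poisson-distributed galaxy counts."""
    sigma = galaxy_surface_density(m)
    return 1.0 - math.exp(-math.pi * r_eff ** 2 * sigma)

# Illustrative values: a bright galaxy at small offset is a plausible host,
# while a faint galaxy at large offset is consistent with chance alignment.
print(p_chance_coincidence(21.0, 2.0))   # small P_cc => likely host
print(p_chance_coincidence(26.0, 10.0))  # large P_cc => likely unrelated
```

The same machinery underlies the host-assignment arguments developed later in the paper; only the choice of effective radius (set by the localization uncertainty and galaxy size) varies from burst to burst.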
The diverse environments of sGRBs could be an indicator of multiple progenitor formation channels within the observed population: \textit{i}) a primordial (isolated) formation channel \citep{PortegiesZwart1998,Voss2003,OShaughnessy2005,Belczynski2006,Belczynski2008,Tauris2017,Abbott2017kick,Kruckow2018,VignaGomez2018,Zevin2019}, \textit{ii)} dynamical formation in a globular cluster \citep{Phinney1991,Davies1995,Grindlay2006,Hopman2006,Salvaterra2008,Guetta2009,Salvaterra2010,Lee2010,Church2011,Bae2014,Andrews2019a,Adhikari2020,Ye2020,Stegmann2021}, or \textit{iii}) even formation in a galaxy cluster environment \citep{Niino2008,Salvaterra2010}. Thus, identifying events formed through these multiple channels impacts our understanding of stellar formation and evolution and provides useful insight for population synthesis studies. In the primordial formation channel, these large offsets are expected due to a change in velocity (a natal kick) imparted to the system, following mass ejection from the second supernova explosion \citep{Lyne1994,Hansen1997,Bloom1999,Fryer1999,Wex2000,Hobbs2005,Belczynski2006}. Combined with the long merger delay times ($10^7$\,$-$\,$10^{11}$ yr) predicted for BNS systems \citep{Zheng2007,Zemp2009}, a large natal kick can allow the binary to reach substantial distances and even escape its birth galaxy. However, a binary escaping its galaxy, denoted as \textit{physically} hostless, is theorized to occur in an extremely low density ($n$\,$<$\,$10^{-4}$ cm$^{-3}$) intergalactic medium (IGM) environment, making detection of an afterglow unlikely \citep[][]{Panaitescu2001,Salvaterra2010,Duque2019}. Moreover, by studying their early X-ray afterglow lightcurves, \citet{OConnor2020} found that $\lesssim$\,$16\%$ of sGRBs are consistent with such low densities, including only a single observationally hostless event (GRB 080503; \citealt{Perley2009}). 
Nevertheless, this does not exclude sGRBs with large offsets from having occurred within the halos of their host galaxies or within a dense globular cluster environment \citep{Salvaterra2010}. An alternative explanation for observationally hostless bursts is that these sGRBs occurred in faint, undetected host galaxies at higher redshifts (i.e., $z$\,$\gtrsim$\,$1$\,$-$\,$2$; \citealt{Berger2010a,Tunnicliffe2014}). Such high-$z$ events suggest progenitors that formed through a primordial channel with short merger delay times \citep[e.g.,][]{Andrews2019b,BeniaminiPiran2019}, indicating that BNS systems may have formed early enough to pollute the early Universe with heavy metals \citep{Ji2016a,Ji2016b,Roederer2016,Hansen2017,Safarzadeh2017,Beniamini2018,Safarzadeh2019,Zevin2019}. Furthermore, our understanding of the environments and formation channels of sGRBs has fundamental implications for inferring the rate of detectable gravitational wave (GW) sources and for the follow-up of their electromagnetic (EM) counterparts, as the quick localization of the EM counterpart depends on inferences (e.g., stellar mass, star formation rate, offset) from the known population of sGRB host galaxies \citep[][]{Nissanke2013,Gehrels2016,Arcavi2017,Artale2020gw,Ducoin2020} and on targeted searches using catalogs of nearby galaxies \citep[][]{White2011,Dalya2016,Cook2019}. Disentangling these different scenarios is observationally challenging. Due to the faintness of sGRB afterglows, redshift measurements from afterglow spectroscopy are rarely successful \citep[e.g.,][]{deUgarte2014,AguiFernandez2021}. Therefore, deep imaging and spectroscopic observations from the most sensitive telescopes are required to identify the GRB host galaxy and estimate its distance scale. In this work, we used large-aperture telescopes to target a sample of 31 sGRBs that lack a putative host galaxy and to search for faint, coincident galaxies. 
Our facilities include: the Lowell Discovery Telescope (LDT), the Keck Observatory, the Gemini Observatory, the Gran Telescopio Canarias (GTC), the Very Large Telescope (VLT), and the \textit{Hubble Space Telescope} (\textit{HST}). The paper is outlined as follows. In \S \ref{sec: observations}, we define our sample selection criteria and the optical/nIR imaging analysis techniques used in this work. In \S \ref{sec: methods}, we describe the methods employed to detect, localize, and compute photometry of the host galaxies, as well as the probabilistic criteria used for host assignment. In \S \ref{sec: Results}, we present the results and discuss the demographics of sGRB offsets, host galaxies, and environments. We present a discussion of these results in \S \ref{sec: discussion} and conclude in \S \ref{sec: conclusions}. We present a detailed summary of the individual events analyzed in this work in Appendix \ref{sec: appendixsampleanalysis}. We adopt the standard $\Lambda$-CDM cosmology with parameters $H_0=67.4$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_M=0.315$, and $\Omega_\Lambda=0.685$ \citep{Planck2018}. All confidence intervals are at the $1\sigma$ level and upper limits at the $3\sigma$ level, unless otherwise stated. All reported magnitudes are in the AB system, and are corrected for Galactic extinction \citep{Schlafly2011}. Throughout the paper we adopt the convention $F_\nu\propto t^{-\alpha}\nu^{-\beta}$. \section{Observations and Analysis} \label{sec: observations} \subsection{Sample selection} \label{sec: sampleselection} The association of a GRB with a host galaxy relies on the accurate localization of its afterglow. Therefore, we consider the sample of short GRBs detected with \textit{Swift} and localized by the X-ray Telescope \citep[XRT;][]{Burrows2005} to arcsecond accuracy. 
We include both GRBs with a short duration\footnote{\url{https://swift.gsfc.nasa.gov/results/batgrbcat/}}, defined as $T_{90}$\,$<$\,$2$ s \citep{Chryssa93}, and GRBs with temporally extended emission (hereafter sGRBEE), as defined by \citet{Norris2006}. \textit{GRB classification --} As of May 2021, the \textit{Swift} Burst Alert Telescope \citep[BAT;][]{Barthelmy2005} has detected 127 short duration GRBs, of which 91 (72\%) have an X-ray afterglow localization. These X-ray localized events form the basis of our sample. Short duration bursts with soft spectra (i.e., a hardness ratio $S_{50-100\,\textrm{keV}}/S_{25-50\,\textrm{keV}}$\,$<$\,$1$, where $S$ represents the gamma-ray fluence in a given energy range; \citealt{Lien2016}) or non-negligible spectral lag \citep{Norris2006} were flagged as ``possibly short'' (see, e.g., \citealt{Lien2016}). In addition, we include sGRBEEs and candidate sGRBEEs identified by \citet{Lien2016}, \citet{Dichiara2021}, and GCN Circulars. We note that a classification as sGRBEE can be highly subjective because these events share properties of both short hard bursts and long GRBs. One example is GRB~170728B, which displays a short pulse ($<$\,2 s) followed by visible extended emission ($T_{90}$\,$=$\,$48\pm27$ s). However, the spectrum of the initial short pulse is quite soft, with $E_\textrm{peak}$\,$\sim$\,$80$ keV. In the absence of additional information on, e.g., the spectral lag, host galaxy, or supernova, we label GRB~170728B a candidate sGRBEE. Other events which display the characteristic features of sGRBEE, such as a spectrally hard initial pulse with negligible spectral lag \citep{Norris2006}, can be more confidently assigned to this class. In total, we identify 32 sGRBEEs (including 18 candidate sGRBEEs\footnote{GRBs 051210, 120804A and 181123B satisfy $T_{90}$\,$<$\,2 s but also display evidence for extended emission; see \citet{Dichiara2021} for details. 
We therefore include these in the sample of candidate sGRBEEs.}) of which 29 (90\%) have an X-ray localization. Therefore, our initial sample totals 159 events which are either classical sGRBs or sGRBEEs. \begin{figure} \includegraphics[width =\columnwidth, trim= 0 0 0 35, clip]{figs/OurInitialSampleTest_localization_Fong_2.pdf} \vspace{-8mm} \caption{The distribution of short GRB localization methods between X-ray and optical for the sample analyzed in this work and the sample in \citet{Fong2013}. } \label{fig: localization} \end{figure} \begin{figure} \includegraphics[width =\columnwidth, trim= 0 0 0 25, clip]{figs/Final_Pie_Fig_2a.pdf} \caption{Breakdown of host classification for the \textit{Swift}/BAT sample of short GRBs: GRBs with a published host galaxy in the literature are shown in blue, those classified as hostless are shown in green, and those with no published host galaxy are displayed in purple. Poorly localized short GRBs, such as those with only BAT detections or a large positional uncertainty based on their afterglow $\sigma_\textrm{AG}$\,$>$\,$4\arcsec$, are shown in gray, and are excluded from the sample compiled in this work. } \label{fig: toplevel} \end{figure} \textit{GRB localization --} Past searches for the host galaxies of short GRBs \citep[e.g.,][]{Prochaska2006,DAvanzo2009,Berger2010a,FongBerger2013,Tunnicliffe2014} mainly focused on optically localized events with sub-arcsecond positions (Figure \ref{fig: localization}). However, an optically selected sample is potentially subject to multiple observing biases, which can affect the observed redshift and offset distributions. An optical position disfavors small offsets from the host's nucleus \citep[e.g.,][]{OConnor2021}, especially in the case of faint short GRB afterglows or dusty environments. In addition it may disfavor events occurring in the low-density environments expected for large-offset GRBs \citep{Panaitescu2001,Salvaterra2010,Duque2019,OConnor2020}. 
In order to mitigate potential biases due to an optical selection of the sample, we included all XRT localized events within our follow-up campaign. Although XRT positions typically have larger uncertainties than optical, radio, or \textit{Chandra} localizations, XRT localized bursts contribute valuable information to the demographics of sGRB host galaxies in terms of redshift, stellar mass, star formation rate, and galaxy type \citep[e.g.,][]{Gehrels2005,Bloom2006}. Hereafter, we consider only the 120 events with at least an X-ray localization, of which 49 ($\sim$40\%) also have an optical localization. \textit{Selection criteria --} We adopt two additional criteria to build a homogeneous sample of bursts. The first is that the uncertainty on the GRB's localization is $<$\,$4\arcsec$ (90\% confidence level, hereafter CL), as bursts with a poorer localization can only be securely associated with bright ($r$\,$\lesssim$\,$21$ mag) galaxies and would not benefit from a campaign of deep optical imaging. This requirement excludes 13 XRT localized events from our sample\footnote{These are: GRBs 050509B, 060502B, 061210 (EE), 090621B, 100206A, 100628A, 130313A, 140320A, 140611A, 150301A, 150728A, 161104A, and 170524A.}. We further impose a limit of $A_V$\,$<$\,$1.5$ mag \citep{Schlafly2011} on the Galactic extinction along the GRB sightline in order to eliminate regions where host galaxy searches would be less sensitive\footnote{This condition excluded GRBs 050724A (EE), 080426A, 080702A, 081024A, 150101A, 180402A, 200907B, and 201006A.}. This cut allows us to remove crowded regions along the Galactic plane ($|b|$\,$<$\,$15^\circ$) where our search would not be meaningful due to chance alignment with foreground stars. 
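The classification and selection cuts above can be summarized as a simple filter. This is a minimal sketch: the function and argument names are hypothetical, while the thresholds ($T_{90}$\,$<$\,2 s or extended emission, a 90\% CL localization radius $<$\,$4\arcsec$, and Galactic $A_V$\,$<$\,$1.5$ mag) are the ones quoted in the text:

```python
def in_sample(t90_s, has_extended_emission, has_xray_position,
              loc_err_arcsec, a_v_mag):
    """Return True if a burst passes the sample cuts described in the text.
    `loc_err_arcsec` is the 90% CL radius of the afterglow localization;
    `a_v_mag` is the Galactic extinction along the sightline."""
    is_short = (t90_s < 2.0) or has_extended_emission   # sGRB or sGRBEE
    well_localized = has_xray_position and loc_err_arcsec < 4.0
    clean_field = a_v_mag < 1.5   # removes crowded Galactic-plane fields
    return is_short and well_localized and clean_field

# A classical sGRB with a good XRT position in a clean field passes:
print(in_sample(0.5, False, True, 1.8, 0.1))  # True
```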
Among the remaining 99 short GRBs matching our criteria (see Figure \ref{fig: toplevel}), 43 are associated with a host galaxy, 7 are classified as hostless based on deep ground-based and \textit{HST} imaging \citep[see, e.g.,][]{Berger2010a,FongBerger2013}, and 49 more events lack evidence of an underlying host galaxy based on the initial ground-based follow-up reported through GCN circulars. The latter group of bursts is the focus of our study. Deep late-time imaging is crucial to determine whether the lack of a candidate host galaxy is due to the shallow depth of the initial ground-based follow-up, a high redshift, or a large angular separation due, for example, to a high natal kick velocity imparted to the progenitor. \begin{figure} \includegraphics[width = \columnwidth]{figs/top_level_fig_final_4.pdf} \caption{An outline of the candidate selection process and follow-up methodology employed in this work in order to locate and identify the host galaxies of short GRBs.} \label{fig: outline} \end{figure} \textit{Observing strategy --} As a first step (see Figure \ref{fig: outline}), we targeted these bursts with the 4.3-m LDT (PIs: Troja, Cenko, Gatkine, Dichiara) and performed deep optical imaging, typically in $r$-band, to search for an underlying host galaxy to depth $r$\,$\gtrsim$\,$25$ mag. In the case of a detection, we scheduled the target for multi-color imaging in order to characterize the galaxy's spectral energy distribution (SED) and, if the candidate galaxy was brighter than $\approx$\,$21$\,$-$\,$22$ AB mag, for optical spectroscopy in order to measure its redshift. In total, 30 out of 46 short GRBs (65\% of the sample) were followed up with the LDT from 2014 to 2021 through our programs. Those events which were not observed by LDT were either only visible from the Southern Hemisphere or already had limits comparable to LDT's typical depth ($r$\,$\sim$\,$24.5$\,$-$\,$25$ mag). 
In all other cases, we flagged the burst for further deep imaging with large-aperture telescopes. We targeted these sGRBs as part of our programs on the twin 8.1-m Gemini telescopes (PI: Troja) and the 10-m Keck-I telescope (PI: Cenko) to search for host galaxies to deeper limits ($r$\,$\gtrsim$\,$26$\,$-$\,$28$ AB mag). These observations were further complemented with public archival data from the 10.4-m GTC, the Keck Observatory, the Gemini Observatory, and \textit{HST}. The final sample of events observed through these programs comprises 31 sGRBs (see Table \ref{tab: observations}) discovered between 2009 and 2020 (14 of which have only an XRT localization). Of these 31 events, about 20\% display extended emission. When compared to previous studies of sGRB host galaxies, which included 36 sGRBs discovered between 2005 and 2013 \citep[e.g.,][]{Fong2013}, our program doubles the sample of well-studied sGRB environments. The X-ray and gamma-ray properties of the sGRBs in our sample are listed in Table \ref{tab: XrayAGprop}. \subsection{Optical/nIR Imaging} \label{sec:imaging description} Due to the isotropic distribution of GRBs on the sky and the multi-year nature of this project, the optical and near-infrared imaging obtained for our sample is heterogeneous and spans a range of observatories, filters, and exposure times. These observations were typically taken months to years after the explosion, when contamination from the GRB afterglow is negligible. The majority of our optical observations were carried out by the Large Monolithic Imager (LMI) on the LDT, the Gemini Multi-Object Spectrographs \citep[GMOS;][]{Hook2004} on both Gemini North (GMOS-N) and Gemini South (GMOS-S), the Low Resolution Imaging Spectrometer \citep[LRIS;][]{Oke1995} at the Keck Observatory, and the Optical System for Imaging and low-Intermediate-Resolution Integrated Spectroscopy \citep[OSIRIS;][]{Cepa2000} at the GTC. 
We also include publicly available near-infrared observations obtained with the \textit{HST} Wide Field Camera 3 (WFC3). A log of observations presented in this work is reported in Table \ref{tab: observations}. \subsubsection{Lowell Discovery Telescope (LDT)} \label{sec:LDT} Observations with the Large Monolithic Imager (LMI) mounted on the 4.3-meter LDT at the Lowell Observatory in Happy Jack, AZ were carried out starting in 2014 as part of a long-term project (PIs: Troja, Gatkine, Dichiara) to study the afterglow and host galaxies of sGRBs. In order to have good visibility, only bursts with declination $\gtrsim$\,$-30^\circ$ were selected. Over 60 sGRBs were observed as part of this program, and results on single events were presented in, e.g., \citet{Troja2016,Troja2018,Troja2019,OConnor2021,Ahumada21}. In this work, we present unpublished observations for 22 sGRBs in our sample. LDT/LMI observations were carried out largely in the $r$-band with a typical exposure of $1200$\,$-$\,$1500$ s, chosen to obtain a depth of $r$\,$\gtrsim$\,$24.5$\,$-$\,$25$ mag in good observing conditions. However, the true image depth varies depending on the observing conditions at the time of our observations, which span multiple observing cycles across $\sim$\,$7$ years. All images were visually inspected and those flagged as poor were re-acquired at a later date. When a candidate host galaxy was detected, we performed additional observations in the $g$, $i$, and $z$ bands in order to better characterize the galaxy's SED. 
Data were reduced and analyzed using a custom pipeline \citep{Toy2016} that makes use of standard CCD reduction techniques in the IRAF\footnote{IRAF is distributed by the National Optical Astronomy Observatory, which is operated by the Association of Universities for Research in Astronomy (AURA) under cooperative agreement with the National Science Foundation (NSF).} package, including bias subtraction, flat-fielding, sky subtraction, fringe correction, and cosmic ray rejection using Laplacian edge detection based on the \texttt{L.A.Cosmic} algorithm \citep{vanDokkum2001}. Following this image reduction process, the pipeline uses \texttt{SExtractor} \citep{Bertin1996} to identify sources in each frame, and then the \texttt{Software for Calibrating AstroMetry and Photometry} \citep[\texttt{SCAMP};][]{Bertin2006} to compute the astrometric solution. The aligned frames are then stacked using the \texttt{SWarp} software \citep{Bertin2002,Bertin2010}. The absolute astrometry of the stacked image was calibrated against the astrometric system of either the Sloan Digital Sky Survey \citep[SDSS;][]{Ahumada2020} Data Release 16 or the Panoramic Survey Telescope and Rapid Response System Survey \citep[Pan-STARRS1, hereafter PS1;][]{Chambers2016} Data Release 2, likewise using the combination of \texttt{SExtractor} and \texttt{SCAMP}. The SDSS and PS1 catalogs were further used to calibrate the photometric zeropoint (using \texttt{SExtractor} aperture photometry for the magnitude determination). We selected the SDSS catalog when available, and otherwise used PS1. We ensured that the sources used for both the astrometric and photometric calibrations were isolated point sources by excluding those that did not pass our selection criteria based on their signal-to-noise ratio (SNR), full width at half-maximum intensity (FWHM), ellipticity, and \texttt{SExtractor} \texttt{CLASS\_STAR} parameter. 
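The final point-source screening can be sketched with a short filter over a \texttt{SExtractor} catalog. The catalog column names (\texttt{FLUX\_AUTO}, \texttt{ELLIPTICITY}, \texttt{CLASS\_STAR}, \texttt{FWHM\_IMAGE}) are standard \texttt{SExtractor} outputs, but the numerical thresholds below are illustrative assumptions, not values quoted in the text:

```python
def is_calibration_star(src, image_fwhm_pix,
                        min_snr=10.0, max_ellipticity=0.2,
                        min_class_star=0.9, fwhm_tol=0.25):
    """Keep only isolated point sources for astrometric/photometric
    calibration. `src` is one row of a SExtractor catalog (as a dict);
    all thresholds are illustrative, not values quoted in the text."""
    snr_ok = src["FLUX_AUTO"] / src["FLUXERR_AUTO"] > min_snr
    round_ok = src["ELLIPTICITY"] < max_ellipticity
    starlike = src["CLASS_STAR"] > min_class_star   # ~1 => point source
    # FWHM consistent with the image PSF, within a fractional tolerance:
    psf_like = abs(src["FWHM_IMAGE"] - image_fwhm_pix) < fwhm_tol * image_fwhm_pix
    return snr_ok and round_ok and starlike and psf_like

star = {"FLUX_AUTO": 5000.0, "FLUXERR_AUTO": 50.0, "ELLIPTICITY": 0.05,
        "CLASS_STAR": 0.98, "FWHM_IMAGE": 4.1}
print(is_calibration_star(star, image_fwhm_pix=4.0))  # True
```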
\subsubsection{Gemini Observatory} We carried out observations (PI: Troja) of short GRB host galaxies using the Gemini Multi-Object Spectrographs (GMOS) mounted on the twin 8.1-m Gemini North and Gemini South telescopes located on Mauna Kea and Cerro Pach\'{o}n, respectively. These observations targeted 9 sGRBs (GRBs 110402A, 140930B, 151229A, 160601A, 160927A, 170127B, 171007A, 191031D, and 200411A) with deep constraints ($r\gtrsim 25$ mag) on an underlying host galaxy. The observations occurred between November 3, 2019 and February 1, 2021. We mainly selected the $r$-band and $i$-band with exposure times ranging from 900\,-\,2250 s and 355\,-\,1440 s, respectively. We supplemented our observations with archival data for GRBs 120305A, 120630A, 130822A, 140516A, 140930B, 150831A, 160408A, and 180727A. We made use of tasks within the \texttt{Gemini IRAF} package (v. 1.14) to perform bias and overscan subtraction, flat-fielding, de-fringing, and cosmic ray rejection. The individual frames were then aligned and stacked using the IRAF task \texttt{imcoadd}. We additionally performed sky subtraction using the \texttt{photutils}\footnote{\url{https://photutils.readthedocs.io/en/stable/}} package to estimate the median sky background after masking sources in the image. The world-coordinate systems were then calibrated against the astrometric systems of SDSS or PS1 using either \texttt{astrometry.net} \citep{Lang2010} or the combination of \texttt{SExtractor} and \texttt{SCAMP} outlined in \S \ref{sec:LDT}. For southern targets we used the Dark Energy Survey \citep[DES;][]{Abbott2021DES} Data Release 2. Isolated field stars selected from these catalogs were used for photometric calibration. We additionally performed observations of GRB 151229A with Flamingos-2 (hereafter, F2) at Gemini South in Cerro Pach\'{o}n, Chile on July 22, 2021. These observations were carried out in the $J$ and $K_s$ filters (see Table \ref{tab: observations}). 
We reduced and analyzed these data using the \texttt{DRAGONS}\footnote{\url{https://dragons.readthedocs.io/}} software \citep{Labrie2019}. The photometry was calibrated using nearby point-sources in the Two Micron All Sky Survey \citep[2MASS;][]{Skrutskie2006} catalog. We then applied a standard conversion between the Vega and AB magnitude systems. \begin{figure*} \includegraphics[width = 2\columnwidth]{figs/test2.pdf} \caption{Optical spectra of sGRB host galaxies (solid purple line) in flux units of $10^{-17}$ erg cm$^{-2}$ s$^{-1}$ \AA$^{-1}$ versus wavelength in \AA. The observed emission lines are marked by black lines, and the error spectrum is displayed as a solid black line. The spectra are smoothed with a Savitzky-Golay filter for display purposes. The spectra are not corrected for Galactic extinction. } \label{fig: spectra} \end{figure*} \begin{figure*} \includegraphics[width = 1.65\columnwidth]{figs/GRB160410A_Spec_updated.pdf} \caption{ Keck/LRIS optical spectrum of the afterglow of sGRB 160410A at $z=1.717 \pm 0.001$. The spectrum is normalized to the continuum. Absorption lines at this redshift are marked by black lines, and lines corresponding to intervening absorbers at $z=1.444$ and $1.581$ are marked by red and blue lines, respectively. The error spectrum is represented by a solid black line.} \label{fig: spectra160410A} \end{figure*} \subsubsection{Keck Observatory} Through our program (PI: Cenko) on the 10-m Keck-I Telescope on Mauna Kea we obtained deep late-time imaging of GRBs 120305A, 120630A, 130822A, and 130912A. The Keck/LRIS observations took place during one half-night on October 25, 2014, and were carried out in both the $G$ and $R$ filters with exposure times of 3000 s and 2750 s, respectively. Observations of a fifth target (GRB~110112A) were incorrectly pointed by 0.15 deg and do not cover the GRB position (Chris Gelino, Priv. Comm.). Therefore, these data were not included. 
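The Vega-to-AB conversion applied to the 2MASS-calibrated near-infrared photometry above amounts to a fixed per-band offset. The offsets below are commonly adopted approximate literature values (e.g., Blanton \& Roweis 2007), not numbers quoted in this work:

```python
# Approximate Vega-to-AB offsets for the 2MASS bands (m_AB = m_Vega + dm).
# Commonly adopted literature values (e.g., Blanton & Roweis 2007); these
# are illustrative, not the exact offsets used in the text.
VEGA_TO_AB = {"J": 0.91, "H": 1.39, "Ks": 1.85}

def vega_to_ab(m_vega, band):
    """Convert a 2MASS-calibrated Vega magnitude to the AB system."""
    return m_vega + VEGA_TO_AB[band]

print(vega_to_ab(20.0, "Ks"))  # 21.85
```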
We complemented our observations with public archival LRIS data for GRBs 110402A, 140516A, 160927A, 170127B, 170728A, and 180805B. The data were retrieved from the Keck Observatory Archive, and analyzed using the \texttt{LPipe} pipeline \citep{Perley2019}. The pipeline processes raw files through standard CCD reduction techniques (e.g., bias-subtraction, flat-fielding, sky-subtraction, cosmic-ray rejection) to produce fully calibrated and stacked images. The final stacked image's absolute astrometry was calculated based on either the SDSS or PS1 catalogs. We used \texttt{astrometry.net} or the combination of \texttt{SExtractor} and \texttt{SCAMP} outlined in \S \ref{sec:LDT}. We found that \texttt{astrometry.net} provided an accurate astrometric solution for sparse fields by making use of the standard stars within the Keck field-of-view. The photometric zeropoints were likewise calibrated using unsaturated SDSS (when available) or PS1 sources. We additionally include archival infrared imaging obtained with Keck MOSFIRE \citep{McLean2012} for GRBs 131004A, 151229A, 160601A, 170127B, and 180805B. These data were reduced using the MOSFIRE data reduction pipeline\footnote{\url{https://keck-datareductionpipelines.github.io/MosfireDRP/}}, and calibrated using point sources in the 2MASS catalog. Standard offsets were applied to convert magnitudes into the AB system. \subsubsection{Gran Telescopio Canarias (GTC)} \label{sec: GTC_imaging} We obtained publicly available images of GRBs 160601A and 160927A (Table \ref{tab: observations}) taken with the 10.4-m GTC, which is located at the Roque de los Muchachos Observatory in La Palma, Spain. The observations used the OSIRIS instrument, and were carried out in $r$-band. The data were retrieved from the GTC Public Archive\footnote{\url{https://gtc.sdc.cab.inta-csic.es/gtc/}}. 
They were reduced using standard techniques within the \texttt{astropy} \citep{Astropy2018} software library, including bias subtraction and flat-fielding. The individual frames were then combined to produce the final reduced image. The absolute astrometric correction was performed using \texttt{astrometry.net}, and the photometric zeropoints were calibrated to SDSS. \subsubsection{Very Large Telescope (VLT)} \label{sec: vlt_imaging} We analyzed archival images of GRBs 091109B, 150423A, and 150831A (Table \ref{tab: observations}) obtained with the 8.2-m VLT, operated by the European Southern Observatory (ESO) in Cerro Paranal, Chile. The observations were taken with the FOcal Reducer/low dispersion Spectrograph 2 (FORS2) in $R$-band for GRBs 091109B, 150423A, and 150831A, with an additional $I$-band observation for GRB 150831A. The raw images were retrieved from the ESO Science Archive\footnote{\url{http://archive.eso.org/eso/eso_archive_main.html}}. The data were processed using standard tasks within \texttt{astropy} (similarly to \S \ref{sec: GTC_imaging}). \subsubsection{Hubble Space Telescope (HST)} \label{sec: hst imaging} We obtained the publicly available \textit{Hubble Space Telescope} (\textit{HST}) Wide Field Camera 3 (WFC3) data from the Mikulski Archive for Space Telescopes (MAST)\footnote{\url{https://archive.stsci.edu/index.html}} for GRBs 091109B, 110112A, 131004A, and 150423A. The observations (ObsID: 14685; PI: Fong) were taken between October 11, 2016 and February 3, 2017 in the \textit{F110W} filter with a typical exposure of 5200 s ($\sim$\,$2$ \textit{HST} orbits). The data were processed using standard procedures within the \texttt{DrizzlePac} package \citep{Gonzaga2012} in order to align, drizzle, and combine exposures. The observations within a single epoch were aligned to a common world-coordinate system with the \texttt{TweakReg} package. 
The \texttt{AstroDrizzle} software was then used to reject cosmic rays and bad pixels, and to create the final drizzled image combining all exposures within a single epoch. The final pixel scale was \ang{;;0.06}/pix using \texttt{pixfrac} $=0.8$. The \textit{HST} photometric zeropoints were determined with the photometry keywords obtained from the \textit{HST} image headers, and were corrected with the STScI tabulated encircled energy fractions. \subsection{Optical Spectroscopy} \label{sec:spectroscopy description} Bright host galaxies identified through our imaging campaign were targeted for optical spectroscopy in order to constrain their distance scale. These targets include the fields of sGRBs~101224A and 140622A, observed with Keck/LRIS, and sGRBs 180618A and 191031D, observed with Gemini/GMOS-N. We complemented these observations with archival Keck spectroscopic data for sGRBs 110402A, 151229A, 160410A and 180805B as these bursts also match our selection criteria (\S \ref{sec: sampleselection}). Our spectroscopic campaign also included the candidate short GRB~060121 for which no visible trace was detected in a deep $3\times900$~s Keck/LRIS exposure. This was likewise the case for the archival Keck spectroscopy of sGRB 151229A. For sGRBs 180618A and 191031D, a weak trace was detected by the Gemini spectroscopic observations, but no obvious emission or absorption features were identified. The log of spectroscopic observations analyzed in this work is provided in Table \ref{tab: SpecObs}. The Gemini data were reduced and analyzed using the Gemini IRAF package (v. 1.14), whereas Keck/LRIS data were reduced using the \texttt{LPipe} software. The processed spectra are displayed in Figure \ref{fig: spectra}, and the result for each sGRB is reported in Table \ref{tab: SpecObs} and described in more detail in Section \ref{sec: Results}. 
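Returning briefly to the \textit{HST} imaging of \S \ref{sec: hst imaging}: the zeropoint determination from the image-header photometry keywords reduces to the standard STScI relation between \texttt{PHOTFLAM}, \texttt{PHOTPLAM}, and the AB magnitude scale. A minimal sketch, with illustrative (not the actual \textit{F110W}) keyword values:

```python
import math

def hst_ab_zeropoint(photflam, photplam):
    """AB zeropoint from the standard HST header keywords:
    PHOTFLAM (inverse sensitivity, erg cm^-2 s^-1 A^-1 per count rate)
    and PHOTPLAM (pivot wavelength in Angstrom), via the standard
    STScI relation ZP_AB = -2.5 log10(PHOTFLAM) - 5 log10(PHOTPLAM) - 2.408."""
    return -2.5 * math.log10(photflam) - 5.0 * math.log10(photplam) - 2.408

# Illustrative inputs only (not the actual F110W calibration values):
print(round(hst_ab_zeropoint(1.0e-19, 1.0e4), 3))  # 25.092
```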
We note that the optical spectrum obtained for sGRB 160410A is a rare case of afterglow spectroscopy (Figure \ref{fig: spectra160410A}) as discussed in \citet{AguiFernandez2021}. \begin{figure*} \centering \includegraphics[width = 1.5\columnwidth]{figs/GRB130912A_HST_vs_Keck.pdf} \caption{A comparison between ground-based Keck/LRIS imaging in $R$-band (left) and \textit{HST}/WFC3 imaging in the \textit{F110W} filter (right) for sGRB 130912A. The Keck imaging sets an upper limit of $R\gtrsim 26.2$ mag on a coincident host galaxy, whereas \textit{HST} imaging to depth $F110W\gtrsim 27.2$ mag unveils a candidate host offset by only $\sim$\,$0.7\arcsec$ from the sGRB's optical localization (magenta circle). } \label{fig: 130912A_HST_vs_Keck} \end{figure*} \begin{figure*} \begin{tabular}{ccc} \subfloat{\includegraphics[width = 2.2in]{figs/GRB091109B_HST_F110W.png}} \hspace{-0.5cm} & \subfloat{\includegraphics[width = 2.2in]{figs/GRB110112A_HST_F110W.png}} \hspace{-0.5cm} & \subfloat{\includegraphics[width = 2.2in]{figs/GRB110402A_Keck_I.png}} \vspace{-0.5cm} \\ \vspace{-0.5cm} \subfloat{\includegraphics[width = 2.2in]{figs/GRB130912A_HST_F110W.png}} \hspace{-0.5cm} & \subfloat{\includegraphics[width = 2.2in]{figs/GRB131004A_HST_F110W.png}} \hspace{-0.5cm} & \subfloat{\includegraphics[width = 2.2in]{figs/GRB140129B_LDT_r.png}} \\ \vspace{-0.5cm} \subfloat{\includegraphics[width = 2.2in]{figs/GRB140930B_gemini_r.png}} \hspace{-0.5cm} & \subfloat{\includegraphics[width = 2.2in]{figs/GRB150423A_HST_F110W.png}} \hspace{-0.5cm} & \subfloat{\includegraphics[width = 2.2in]{figs/GRB160408A_Gemini_r.png}} \\ \subfloat{\includegraphics[width = 2.2in]{figs/GRB160525B_LDT_r.png}} \hspace{-0.5cm} & \subfloat{\includegraphics[width = 2.2in]{figs/GRB160410A_LDT_g.png}} \hspace{-0.5cm} & \subfloat{\includegraphics[width = 2.2in]{figs/GRB160601A_GTC_r.png}} \\ \end{tabular} \caption{Host galaxy finding charts for optically localized sGRBs. 
The magenta circle represents the sGRB localization, and the putative host galaxy is designated by a blue circle (those lacking a blue circle are observationally hostless). Other candidate hosts are marked by black circles and labeled by G1, G2, G3, etc., with increasing offset from the sGRB's localization. Nearby objects that are too faint for star-galaxy classification (\S \ref{sec: source detection}) are labeled as A, B, C, etc. The size of each field is represented by the scalebar. In each figure, North is up and East is to the left. The figures have been smoothed for display purposes. } \label{fig: GalOpt} \end{figure*} \begin{figure*} \begin{tabular}{ccc} \subfloat{\includegraphics[width = 2.2in]{figs/GRB160927A_GTC_r.png}} \hspace{-0.5cm} & \subfloat{\includegraphics[width = 2.2in]{figs/GRB170428A_Gemini_i.png}} \hspace{-0.5cm} & \subfloat{\includegraphics[width = 2.2in]{figs/GRB170728A_Keck_R.png}} \vspace{-0.5cm} \\ \subfloat{\includegraphics[width = 2.2in]{figs/GRB170728B_LDT_r.png}} \hspace{-0.5cm} & \subfloat{\includegraphics[width = 2.2in]{figs/GRB180618A_LDT_r.png}} \hspace{-0.5cm} & \end{tabular} \contcaption{ } \end{figure*} \begin{figure*} \begin{tabular}{ccc} \subfloat{\includegraphics[width = 2.2in]{figs/GRB101224A_LDT_r.png}} \hspace{-0.5cm} & \subfloat{\includegraphics[width = 2.2in]{figs/GRB120305A_Keck_R.png}} \hspace{-0.5cm} & \subfloat{\includegraphics[width = 2.2in]{figs/GRB120630A_Keck_G.png}} \vspace{-0.5cm} \\ \vspace{-0.5cm} \subfloat{\includegraphics[width = 2.2in]{figs/GRB130822A_Keck_G.png}} \hspace{-0.5cm} & \subfloat{\includegraphics[width = 2.2in]{figs/GRB140516A_Gem_i.png}} \hspace{-0.5cm} & \subfloat{\includegraphics[width = 2.2in]{figs/GRB140622A_LDT_r.png}} \\ \vspace{-0.5cm} \subfloat{\includegraphics[width = 2.2in]{figs/GRB150831A_VLT_R.png}} \hspace{-0.5cm} & \subfloat{\includegraphics[width = 2.2in]{figs/GRB151229A_Gemini_z.png}} \hspace{-0.5cm} & \subfloat{\includegraphics[width = 2.2in]{figs/GRB170127B_Keck_G.png}} 
\vspace{0.25cm} \end{tabular} \caption{Same as Figure \ref{fig: GalOpt} but for X-ray localized sGRBs. } \label{fig: GalXray} \end{figure*} \begin{figure*} \begin{tabular}{ccc} \vspace{-0.5cm} \subfloat{\includegraphics[width = 2.2in]{figs/GRB171007A_Gem_i.png}} \hspace{-0.5cm} & \subfloat{\includegraphics[width = 2.2in]{figs/GRB180727A_Gem_r.png}} \hspace{-0.5cm} & \subfloat{\includegraphics[width = 2.2in]{figs/GRB180805B_Keck_V.png}} \\%\vspace{-0.5cm} \subfloat{\includegraphics[width = 2.2in]{figs/GRB191031D_Gem_r.png}} \hspace{-0.5cm} & \subfloat{\includegraphics[width = 2.2in]{figs/GRB200411A_Gemini_r_zoom.png}} \hspace{-0.5cm} & \end{tabular} \contcaption{} \end{figure*} \section{Methods} \label{sec: methods} In order to determine the putative host galaxy for each GRB, we began by identifying all galaxies near the GRB position in our late-time imaging. The source detection and classification (star-galaxy separation) procedure is outlined in Section \ref{sec: source detection}. The late-time images were aligned with respect to the afterglow discovery images to precisely determine the host offset from the GRB position, as outlined in \S \ref{sec: Offset Measurements}. The host association was then determined through probabilistic arguments based on the observed sky density of galaxies in Section \ref{sec: Pcc}. The results of our analysis for each GRB are presented in \S \ref{sec: Results}. \subsection{Source Detection and Classification} \label{sec: source detection} Source detection was performed using the \texttt{SExtractor} package after applying a Gaussian filter with a FWHM of 3 pixels. We required that a source consist of a minimum area of 5 pixels at $>$\,$1\sigma$ above the background (\texttt{DET\_THRESH}\,$=$\,$1$). The source detection was visually inspected to prevent erroneous blending of adjacent sources. Source photometry was computed using the \texttt{SExtractor} \texttt{MAG\_AUTO} parameter, which utilizes Kron apertures. 
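The detection settings above can be summarized as a \texttt{SExtractor} configuration fragment. This is an illustrative sketch rather than the exact configuration used; note that the standard \texttt{SExtractor} keywords are \texttt{DETECT\_MINAREA} and \texttt{DETECT\_THRESH}, and \texttt{gauss\_3.0\_7x7.conv} is one of the Gaussian convolution kernels distributed with the package.

```
# Illustrative SExtractor detection settings (sketch; not the exact values used)
DETECT_MINAREA   5                    # minimum of 5 connected pixels
DETECT_THRESH    1.0                  # 1 sigma above the background
FILTER           Y
FILTER_NAME      gauss_3.0_7x7.conv   # Gaussian kernel with FWHM = 3 pixels
PHOT_AUTOPARAMS  2.5,3.5              # Kron aperture parameters (package defaults)
```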
In the case of faint sources, the magnitude was computed using seeing matched aperture photometry with the aperture (\texttt{MAG\_APER}) diameter set to the FWHM of the image's point-spread function (PSF). The photometry was calibrated for each instrument as outlined in \S \ref{sec:imaging description}. The candidate host galaxy photometry for each GRB is presented in Table \ref{tab: host properties}. In order to determine whether a detected source could be identified as a galaxy we utilized the \texttt{SExtractor} \texttt{SPREAD\_MODEL} parameter. First, we ran \texttt{SExtractor} to identify bright, unsaturated and isolated point-like objects. We selected them based on their SNR, FWHM, \texttt{CLASS\_STAR} parameter ($>$\,$0.8$), and ellipticity ($<$\,$0.2$). We further imposed \texttt{FLAGS}\,$<$\,$1$, which excludes sources that are saturated, blended, or too close to the image boundary. These point-like sources were then passed to \texttt{PSFEx} \citep{Bertin2011,Bertin2013} to estimate the image PSF. This was then fed to \texttt{SExtractor} to estimate the \texttt{SPREAD\_MODEL} parameter which, for each detected source, measures the deviation of the source profile from the local normalized image PSF. Point-like sources are characterized by \texttt{SPREAD\_MODEL} $\approx$\,$0$, whereas extended objects deviate significantly from the local PSF and have \texttt{SPREAD\_MODEL} $>$\,$0$. For sources smaller than the image PSF (e.g., cosmic rays or spurious detections), \texttt{SPREAD\_MODEL} $<$\,$0$. These star-galaxy classifiers become more uncertain for fainter sources, and we considered the classification as inconclusive for sources with SNR $\lesssim$\,$5$. \subsection{Offset Measurements} \label{sec: Offset Measurements} In order to precisely localize the GRB with respect to a candidate host galaxy, we utilized relative astrometry to align our late-time images with the afterglow discovery image. 
In our sample, 14 sGRBs (45\%) do not have an optical localization, and we relied on the \textit{Swift}/XRT enhanced positions \citep{Goad2007,Evans2009}. The associated errors are assumed to follow Rayleigh statistics \citep{Evans2014,Evans2020}, and in our work are computed at the 68\% level of the Rayleigh distribution. The afterglow positional uncertainty $\sigma_{AG}$ from XRT is therefore derived as $\sigma_{AG}\approx$ err$_{90}/1.42$, where err$_{90}$ is the 90\% error typically reported by the \textit{Swift} team\footnote{\url{https://www.swift.ac.uk/xrt\_positions/}}. The remaining 17 sGRBs (55\% of the total sample) have an optical counterpart, and for these bursts we obtained publicly available discovery images from the Ultra-Violet Optical Telescope \citep[UVOT;][]{Roming2005} on-board \textit{Swift}, the 8.1-m Gemini North Telescope, the GTC, the VLT, the 4.2-m William Herschel Telescope (WHT), the 3.6-m Telescopio Nazionale Galileo (TNG), and the 2-m Liverpool Telescope. We applied standard procedures for reduction and calibration of these ground-based images, and used \texttt{SExtractor} for afterglow localization. For the \textit{Swift}/UVOT data (GRBs 110402A, 131004A, and 170728A) we used the \texttt{uvotimsum} task within \texttt{HEASoft v6.27.2} to co-add multiple exposures, producing a higher signal-to-noise afterglow detection. The afterglow localization error (statistical) was then determined using the \texttt{uvotdetect} task. We used \texttt{SExtractor} to identify common point sources in both the late-time and discovery images, and then \texttt{SCAMP} to compute the astrometric solution. The rms uncertainty $\sigma_\textrm{tie}$ in the offset of astrometric matches between the late-time and afterglow images provides the uncertainty in the sGRB's localization on the late-time image frame, and is included within the determination of the host offset error \citep{Bloom2002}.
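The factor of 1.42 quoted above follows directly from the quantiles of the Rayleigh distribution; a minimal check of this scaling (assuming \texttt{scipy} is available):

```python
from scipy.stats import rayleigh

def xrt_error_68(err_90):
    """Scale a 90%-confidence XRT error radius to the 68% Rayleigh level.

    For a Rayleigh distribution r_q = sigma * sqrt(-2 ln(1 - q)), so
    r_90 / r_68 = sqrt(ln(0.10) / ln(0.32)) ~ 1.42, as quoted in the text.
    """
    return err_90 * rayleigh.ppf(0.68) / rayleigh.ppf(0.90)
```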
The projected offset $R_o$ is then determined by measuring the distance between the afterglow centroid and the host galaxy's center. The latter is determined as the barycenter of the pixel distribution using the parameters \texttt{XWIN\_IMAGE} and \texttt{YWIN\_IMAGE}, and its uncertainty $\sigma_\textrm{host}$ is derived by adding in quadrature the positional error in both directions. The afterglow centroid and its associated uncertainty $\sigma_\textrm{AG}$ are determined with \texttt{SExtractor} using the same methodology. The uncertainty in the sGRB offset is computed as $\sigma_R = \sqrt{\sigma_\textrm{tie}^2+\sigma_\textrm{AG}^2+\sigma_\textrm{host}^2}$ \citep{Bloom2002,FongBerger2013}. The offset and uncertainty for each GRB are recorded in Table \ref{tab: host properties}. For each candidate host galaxy, we also determine the half-light radius ($R_e$) as measured by \texttt{SExtractor} (\texttt{FLUX\_RADIUS} with \texttt{PHOT\_FLUXFRAC}\,$=$\,$0.5$). This allows us to compute a host-normalized offset (see the discussion in \S \ref{sec: offset distribution}). \subsection{Host Galaxy Assignment} \label{sec: Pcc} The association of a GRB to a host galaxy relies on probabilistic arguments based on the likelihood of finding a random galaxy near the GRB localization. This is estimated by computing the probability of detecting a galaxy of equal magnitude or brighter within a given region on the sky \citep[e.g.,][]{Bloom2002,Bloom2007,Berger2010a}. If the probability is too high, or comparable for multiple galaxies in the field (see Figure \ref{fig: 130912A_HST_vs_Keck}), the GRB is considered observationally hostless. Using the methods outlined by \citet{Bloom2002}, the probability of chance coincidence is \begin{align} P_{cc}=1-e^{-\pi R^2 \sigma(\lesssim m)} \label{eqn: Pcc} \end{align} where $R$ is the effective angular offset of the galaxy from the GRB position.
For XRT localized GRBs, or those where a galaxy is not detected coincident to the GRB position, the effective angular offset is given by $R=\max\Big(3\sigma_R,\sqrt{R_o^2+4R_e^2}\Big)$, where $3\sigma_R$\,$\approx$\,$1.59\times$ err$_{90}$. If the GRB has a precise (sub-arcsecond) localization, and lies within the visible light of a galaxy, we adopt $R=2R_e$ \citep{Bloom2002}. The quantity $\sigma(\lesssim$\,$m)$ in Equation \ref{eqn: Pcc} denotes the number density of galaxies brighter than magnitude $m$ based on deep optical and infrared surveys \citep[e.g., the Hubble Deep Field;][]{Metcalfe2001}. For our optical observations, we utilize $\sigma(\lesssim$\,$m)$ based on $r$-band number counts from \citet{Hogg1997}. For infrared observations, we use the \textit{H}-band (\textit{HST}/$F160W$ filter) number counts presented by \citet{Metcalfe2006,Galametz2013}. The magnitude for each galaxy is corrected for Galactic extinction \citep{Schlafly2011} prior to computing the probability. This is done because the galaxy number counts used in this work \citep{Hogg1997,Metcalfe2006,Galametz2013} were derived from observations of high Galactic latitude fields, where the extinction is negligible. For each sGRB, we computed the probability of chance coincidence for all galaxies identified within $1\arcmin$ of the sGRB position. We require that the putative host galaxy for each sGRB has $P_{cc}\lesssim 0.1$ to be considered a robust association, otherwise we deem the sGRB to be observationally hostless. At offsets $>$\,$1\arcmin$, a $P_{cc}$\,$\lesssim$\,$0.1$ requires an extremely bright galaxy $r$\,$\lesssim$\,$16$ mag, which would not be missed in our imaging. We also note that the largest angular offset reported for a sGRB is $\sim$\,$16\arcsec$ \citep[GRB 061201;][]{Stratta2007}. 
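For concreteness, the chance-coincidence calculation can be sketched as follows. The number-count parameterization is the $r$-band fit to the \citet{Hogg1997} counts adopted by \citet{Bloom2002}; its coefficients should be treated as an assumption of this sketch rather than the exact values used in our analysis.

```python
import numpy as np

def sigma_m(m_r):
    """Approximate r-band surface density of galaxies brighter than m_r
    (gal / arcsec^2), following the Bloom et al. (2002) parameterization of
    the Hogg et al. (1997) counts -- an assumption of this sketch."""
    return (1.0 / (0.33 * np.log(10))) * 10 ** (0.33 * (m_r - 24.0) - 2.44)

def effective_offset(sigma_R, R_o, R_e, coincident=False):
    """Effective angular radius R (arcsec) entering P_cc."""
    if coincident:  # sub-arcsecond position lying within the galaxy light
        return 2.0 * R_e
    return max(3.0 * sigma_R, np.sqrt(R_o**2 + 4.0 * R_e**2))

def p_chance(R, m_r):
    """P_cc = 1 - exp(-pi R^2 sigma(<= m))."""
    return 1.0 - np.exp(-np.pi * R**2 * sigma_m(m_r))
```

Under this parameterization, an $r$\,$=$\,$22$ mag galaxy with an effective offset of $1.4\arcsec$ gives $P_{cc}$\,$\approx$\,$0.007$, comfortably below the association threshold.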
In many cases there are a number of faint extended objects ($r$\,$\gtrsim$\,$23$ mag) at $\gtrsim$\,$10\arcsec$ which we remove from our analysis due to their high probability of chance coincidence $P_{cc}$\,$\gtrsim$\,$0.5$. The remaining galaxies in the field are then considered candidate hosts; see Figure \ref{fig: 130912A_HST_vs_Keck} for an example finding chart for sGRB 130912A based on deep Keck and \textit{HST} imaging. We report the results of our search for each sGRB in Appendix \ref{sec: appendixsampleanalysis}, and their finding charts are displayed in Figures \ref{fig: GalOpt} and \ref{fig: GalXray}. Sources classified as galaxies are denoted by G1, G2, G3, etc., by increasing offset from the GRB position, whereas sources which could not be classified are labeled as A, B, C, etc., in the same manner. The probability of chance coincidence reported for each sGRB (Table \ref{tab: host properties}) is based on $r$-band number counts when possible, but if the galaxy is only detected in redder filters we instead compute this probability using the number counts presented by \citet{Capak2007} for the $i$-band and \citet{Capak2004} for the $z$-band. \subsection{Galaxy SED Modeling} \label{sec:prospector} For those events with well-sampled galaxy SEDs but lacking a spectroscopic redshift, we obtained a photometric redshift by modeling the SED using \texttt{prospector} \citep{Johnson2019} with the methods previously utilized by \citet{OConnor2021,Dichiara2021,Piro2021}. We note that these photometric redshifts were determined based on the assumption that the photometric jump between two filters is due to the 4000 \AA\, break. A large break is indicative of an older stellar population.
We adopted a \citet{Chabrier2003} initial mass function (IMF) with integration limits of 0.08 and 120 $M_{\odot}$ (\texttt{imf\_type = 1}), an intrinsic dust attenuation $A_V$ using the extinction law of \citet[][\texttt{dust\_type = 2}]{{Calzetti2000}}, and a delayed-$\tau$ star formation history (\texttt{sfh=4}). Furthermore, we include nebular emission lines using the photoionization code \texttt{Cloudy} \citep{Ferland2013}. In the cases of sGRBs 151229A, 180618A, and 191031D we turned off nebular emission lines as their spectra (Table \ref{tab: SpecObs}) did not display bright or obvious emission features. The synthetic SEDs derived from these model parameters were calculated using the flexible stellar population synthesis (FSPS) code \citep{Conroy2009} using WMAP9 cosmology \citep{Hinshaw2013}. The free model parameters are: the redshift $z$, the total stellar mass formed $M$, the age $t_{\rm age}$ of the galaxy, the e-folding timescale $\tau$, the intrinsic reddening $A_V$, and the metallicity $Z$. These parameters are further used to compute the stellar mass $M_*$. We adopt uniform priors in log $t_{\rm age}$, log $\tau$, log $Z$, $A_V$ as in \citet{Mendel2014}. The prior on the photometric redshift is uniform between $z_\textrm{phot}=0$\,$-$\,$3$. However, for sGRBs with a UV detection of their afterglow (e.g., sGRBs 110402A and 140129B; see Appendix \ref{sec: appendixsampleanalysis}) from \textit{Swift}, we adopt $z_\textrm{phot}=0$\,$-$\,$1.5$. The fits were performed using the dynamic nested sampling method implemented in the \texttt{DYNESTY} package \citep{dynesty}. The best fit model SEDs and the resulting photometric redshift estimates are displayed in Figure \ref{fig: SED_fits}. The photometric redshifts for these sGRBs are recorded in Table \ref{tab: host properties}, and the stellar mass is reported in their individual sections in Appendix \ref{sec: appendixsampleanalysis}. 
\begin{figure*} \begin{tabular}{cc} \includegraphics[width = 2\columnwidth]{figs/test_grid.pdf} \end{tabular} \caption{Spectral energy distributions of sGRB host galaxies with photometric redshifts determined in this work. The best fit model spectrum (solid line) and model photometry (squares) describing the galaxy SED are compared to the extinction-corrected photometry (circles). The observed spectrum, smoothed with a Savitzky-Golay filter, for the host galaxies of GRBs 180618A and 191031D is shown by a solid black line (see Table \ref{tab: SpecObs}). } \label{fig: SED_fits} \end{figure*} \section{Results} \label{sec: Results} In this work, we have analyzed the host galaxies and environments of 31 sGRBs; 17 with a sub-arcsecond position from optical observations and 14 with only an XRT localization (Figure \ref{fig: localization}). In Figures \ref{fig: GalOpt} and \ref{fig: GalXray}, we display a finding chart for each sGRB in our sample. We find that 18 events (see Table \ref{tab: host properties}) are associated with a host galaxy ($P_{cc}$\,$<$\,$0.1$), while 13 events are deemed observationally hostless. With respect to previous work, we have adopted the $P_{cc}$ threshold previously used by \citet{Bloom2002} and \citet{Berger2010a}, whereas other authors have utilized lower thresholds, such as 0.01 \citep{Tunnicliffe2014} or 0.05 \citep{FongBerger2013}. We demonstrate below that our choice is robust and ensures a low number of spurious associations. Based on our host galaxy assignments, we identify a spectroscopic redshift for 5 sGRBs in our sample (sGRBEEs 110402A, 160410A, and 180805B, and GRBs 101224A and 140622A; see Tables \ref{tab: host properties} and \ref{tab: SpecObs}). In addition, we derive a photometric redshift for 8 events (sGRBEEs 110402A and 170728B, and GRBs 120630A, 140129B, 151229A, 180618A, 191031D, and 200411A; Figure \ref{fig: SED_fits} and Table \ref{tab: host properties}).
The detailed analysis for each sGRB is reported in Appendix \ref{sec: appendixsampleanalysis}, and the magnitudes and offsets for the putative host galaxies are presented in Table \ref{tab: host properties}. We estimate the number of spurious galaxy associations in our sample following \citet{Bloom2002}. The probability that all sGRB host galaxies discovered in this work are a chance alignment with the GRB localization is given by \begin{equation} P_\textrm{false}=\prod^{m}_{k=1} P_k=4.8\times 10^{-25} \end{equation} where $m=18$ (the number of host galaxies we associate to sGRBs in this work) and $P_k$ is the probability of chance coincidence for each sGRB computed using Equation \ref{eqn: Pcc} based on $r$-band number counts (\S \ref{sec: Pcc}). If we compute $P_\textrm{false}$ for the optical and X-ray localized samples separately, we obtain $P_\textrm{false}=3.4\times 10^{-15}$ and $1.4\times 10^{-10}$, respectively. Moreover, the probability that every galaxy has a real, physical association to these GRBs can be estimated using \begin{equation} P_\textrm{real}=\prod^{m}_{k=1} (1-P_k)=0.36. \end{equation} If we consider again the optical and X-ray localized samples individually we find $P_\textrm{real}=0.76$ and $0.48$, respectively. \begin{figure} \centering \includegraphics[width=\columnwidth]{figs/sGRB_Angular_Offset.pdf} \includegraphics[width=\columnwidth]{figs/sGRB_Offset_Fig_Final.pdf} \includegraphics[width=\columnwidth]{figs/sGRB_HostNorm_Offset.pdf} \caption{\textit{\textbf{Top}}: Cumulative distribution of angular offsets for sub-arcsecond localized sGRBs (red). Also displayed is the sub-sample of 10 sub-arcsecond localized events with EE (blue) and the sample of sGRBs with $T_{90}$\,$<$\,$2$ s (cyan). \textit{\textbf{Middle}}: Cumulative distribution of projected physical offsets for 33 sGRBs with sub-arcsecond localization (black) compared to the distribution from \citet{FongBerger2013} based on 22 sGRBs (red). 
\textit{\textbf{Bottom}}: Same as middle panel but for host-normalized offsets. The offsets of long GRBs (purple) are displayed for comparison \citep{Blanchard2016}. } \label{fig: offset_dist} \end{figure} We now compare the properties of the host galaxies determined in this work to other large samples previously presented within the literature \citep[e.g.,][]{Fong2013,Tunnicliffe2014}. To do so, we supplement the 31 sGRBs that we analyzed with 41 events (29 sub-arcsecond) from the literature with deep host galaxy searches. Out of these 72 well-studied events, we find that 37 have a spectroscopic redshift, 11 have a photometric redshift, 20 are observationally hostless, and 15 display extended emission. In order to perform a one-to-one comparison with our homogeneously selected sample, we excluded events from the literature which did not satisfy our selection criteria (specified in \S \ref{sec: sampleselection} and Table \ref{fig: outline}): including $A_V$\,$<$\,$1.5$ mag, $\sigma_\textrm{AG}$\,$<$\,$4\arcsec$, and a \textit{Swift}/BAT detection of the prompt emission. These criteria exclude a number of sGRBs typically included in other samples: sGRBs 050509B, 060502B, 090621 , 100206A, 161104A, and sGRBEE 061210 are excluded due to the large error ($>$\,$4\arcsec$) of their XRT localization, sGRBEE 050724 does not satisfy $A_V$\,$<$\,$1.5$ mag, and sGRBEE 050709 (\textit{HETE}), sGRBEE 060121 (\textit{HETE}), and sGRB 070707 (\textit{INTEGRAL}) are excluded as they were not detected with \textit{Swift}/BAT. The probabilities of chance coincidence for X-ray localized sGRBs were recalculated with the XRT enhanced positions derived using \texttt{HEASOFT} v6.28. Different versions of the XRT calibration database and analysis software may change the error radius by up to 50\% of its value, and this step ensures that all the X-ray positions are based on the same calibration database (\texttt{HEASOFT} v6.28). 
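The two products $P_\textrm{false}$ and $P_\textrm{real}$ defined above are straightforward to evaluate given the per-burst chance probabilities; a minimal sketch with placeholder values:

```python
import numpy as np

def association_stats(p_cc_values):
    """P_false: all associations are chance alignments; P_real: all
    associations are physical (following Bloom et al. 2002)."""
    p = np.asarray(p_cc_values, dtype=float)
    return np.prod(p), np.prod(1.0 - p)

# Placeholder P_cc values for illustration (not the measured ones)
p_false, p_real = association_stats([0.01, 0.05, 0.02])
```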
The resulting probabilities uniformly adopt the $3\sigma$ positional error (see \S \ref{sec: Pcc}), while in the literature different conventions (e.g., 68\% or 90\% CL) were sometimes adopted. Based on this re-analysis, 3 XRT localized events (sGRBs~050813, 061217, and 070729) are found to have candidate hosts with $P_{cc}$\,$>$\,$0.1$, and are hereafter considered observationally hostless. This leaves us with only 9 sGRBs in the literature sample with both an XRT localization and a putative host galaxy (sGRBs 051210, 060801, 080123, 100625A, 101219A, 121226A, 141212A, 150120A, and 160624A). Including the events in this work, this sample doubles to 18 XRT localized events with a putative host. The impact of these XRT events is discussed in \S \ref{sec: XRT_offset}. \subsection{Offset Distribution} \label{sec: offset distribution} \subsubsection{Sub-arcsecond Localized} \label{sec: subarcsec offset} We begin by studying the angular offset distribution (Figure \ref{fig: offset_dist}; top panel) for sGRBs with sub-arcsecond positions. With a few exceptions, this sample coincides with the sample of optically-localized bursts, which have a typical uncertainty of $\sim$\,$0.2\arcsec$ on their offset. The measured angular offsets range from $0.06\arcsec$ (GRB 090426; \citealt{Antonelli2009,Levesque2010}) to $16\arcsec$ (GRB 061201; \citealt{Stratta2007}), with 70\% of the bursts lying $<$\,$2\arcsec$ from their putative host galaxy's center. For comparison, GRB 170817A was located at $10.6\arcsec$ (2 kpc) from its galaxy's center \citep{Levan2017,Im2017}. We convert angular offsets into projected physical offsets by using the sGRB distance scale, typically derived from the putative host galaxy.
For sGRBs without a measured redshift, we adopt the median redshift of our sample (\S \ref{sec: redshift dist}), $z$\,$\approx$\,$0.5$\footnote{We note that the subset of events without a measured redshift are very unlikely to reside at $z$\,$<$\,$0.5$, and are more likely between $z$\,$\sim$\,$0.5$\,$-$\,$1$, where the difference in angular scale is $D_\theta(z=1.0)/D_\theta(z=0.5)$\,$\approx$\,$1.3$. We find that varying the redshift of these events does not significantly affect our results.}. We find that the physical offsets of sGRBs range from 0.4~kpc to 75~kpc with a median of $5.6$ kpc. This is slightly larger than the median of 4.5 kpc from \citet{FongBerger2013} and a factor of $4\times$ larger than the median value for long GRBs \citep{Bloom2002,Lyman2017}. This result is consistent with the $<$\,$10$ kpc median sGRB offset derived by \citet{OConnor2020}, and with the expectations from binary population synthesis of BNS mergers \citep[see, e.g.,][]{Fryer1999,Bloom1999,Belczynski2006,Church2011,Mandhai2021,Perna2021}, although some modeling efforts predict larger median offsets \citep{Zemp2009,Wiggins2018}. The last quantity to explore is the host-normalized offset, which provides the most uniform comparison of the locations of sGRBs with respect to their galaxies (Figure \ref{fig: offset_dist}; bottom panel). We find that the median host-normalized offset of the entire sGRB sample (sub-arcsecond localized) is $R_o/R_e$\,$\sim$\,$1.2$ (shown in Figure \ref{fig: offset_dist}). However, our dataset includes both high-resolution \textit{HST} imaging and seeing-limited ground-based observations, and the latter might bias the inferred half-light radii of faint unresolved galaxies to larger values. By performing a homogeneous analysis of the \textit{HST} dataset only, we derive $R_o/R_e$\,$\sim$\,$2$, consistent with the value from the literature \citep{FongBerger2013}.
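The conversion from angular to physical offsets can be sketched with a flat $\Lambda$CDM angular diameter distance; the $H_0$ and $\Omega_m$ values below are WMAP9-like placeholders consistent with the cosmology adopted for the SED fits.

```python
import numpy as np
from scipy.integrate import quad

C_KM_S = 299792.458
H0, OMEGA_M = 69.3, 0.287   # WMAP9-like flat LambdaCDM (assumed values)

def kpc_per_arcsec(z):
    """Proper transverse scale in kpc/arcsec for a flat LambdaCDM cosmology."""
    inv_E = lambda zp: 1.0 / np.sqrt(OMEGA_M * (1 + zp) ** 3 + (1 - OMEGA_M))
    d_c, _ = quad(inv_E, 0.0, z)                 # comoving distance / (c/H0)
    d_a = (C_KM_S / H0) * d_c / (1.0 + z)        # angular diameter distance [Mpc]
    return d_a * 1e3 * np.pi / (180.0 * 3600.0)  # Mpc/radian -> kpc/arcsec

def physical_offset_kpc(theta_arcsec, z):
    """Projected physical offset corresponding to an angular offset."""
    return theta_arcsec * kpc_per_arcsec(z)
```

At $z$\,$=$\,$0.5$ this yields $\approx$\,$6$ kpc/arcsec, and the ratio of angular scales between $z$\,$=$\,$1$ and $z$\,$=$\,$0.5$ is $\approx$\,$1.3$, matching the footnote above.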
For comparison, the median host-normalized offset for long GRBs is $R_o/R_e$\,$\sim$\,$0.6$ \citep{Blanchard2016,Lyman2017}. Furthermore, based on Figure \ref{fig: offset_dist}, we find that the offset distribution of this sample of sGRBEEs (dark blue lines) extends to offsets a factor of $3$\,$-$\,$4$ larger than that of long GRBs. A KS test between the two samples yields $p_\textrm{KS}\approx0.04$ (in both host-normalized and physical offset), rejecting the null hypothesis that they are drawn from the same distribution at the $\sim$\,$2\sigma$ level. This provides additional and independent support to the hypothesis that their progenitors are different from those of long GRBs \citep{Norris2006,Gehrels2006}. We also explored whether there was an evolution of the observed offset distribution with redshift. In Figure \ref{fig: offset_redshift_normed}, we separate the physical offsets for sub-arcsecond localized GRBs into two distributions with $z$\,$<$\,$0.5$ and $z$\,$>$\,$0.5$. The median offset for sGRBs at $z$\,$<$\,$0.5$ (7.5 kpc) is a factor of $\sim$\,$2\times$ higher than for those at $z$\,$>$\,$0.5$ (3.2 kpc), although a KS test cannot reject the hypothesis that they are drawn from the same distribution ($p_\textrm{KS}$\,$=$\,$0.09$). In addition, no sGRBs at $z$\,$>$\,$0.5$ have a projected physical offset $>$\,$15$ kpc, compared to 50\% of those at $z$\,$<$\,$0.5$. If we perform the same comparison for the host-normalized offset distribution (Figure \ref{fig: offset_redshift_normed}), we find that the two samples are again consistent with being drawn from the same distribution ($p_\textrm{KS}$\,$=$\,$0.25$), despite all events at $>$\,$5R_e$ being located at low redshifts. Although the distributions are statistically similar, the lack of large offsets at $z$\,$>$\,$0.5$ is suggestive of a redshift evolution effect. The physical implications of this possible redshift evolution are discussed in \S \ref{sec: discussion_bias}.
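The two-sample comparisons above use the standard two-sided Kolmogorov--Smirnov test; a schematic with synthetic offset samples (placeholder exponential distributions, not the measured offsets):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
# Placeholder samples standing in for the z < 0.5 and z > 0.5 offset subsamples
offsets_low_z = rng.exponential(scale=7.5, size=40)    # kpc
offsets_high_z = rng.exponential(scale=3.2, size=40)   # kpc

stat, p_ks = ks_2samp(offsets_low_z, offsets_high_z)
# A p_ks above ~0.05 would fail to reject a common parent distribution
```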
\begin{figure} \centering \includegraphics[width=\columnwidth]{figs/sGRB_Offset_redshift_split_2.pdf} \includegraphics[width=\columnwidth]{figs/sGRB_Offset_redshift_split_normalized.pdf} \caption{ \textit{\textbf{Top}}: Cumulative distribution of projected physical offsets for sGRBs with both a sub-arcsecond localization and spectroscopic redshift at $z$\,$<$\,$0.5$ (blue) and $z$\,$>$\,$0.5$ (red). \textit{\textbf{Bottom}}: Same as the top panel but for host-normalized offsets.} \label{fig: offset_redshift_normed} \end{figure} \subsubsection{Including XRT Localized sGRBs} \label{sec: XRT_offset} The previous section focused on sub-arcsecond localized events, however, the majority of sGRBs have only an XRT localization. For the sample of 99 events satisfying our selection criteria (\S \ref{sec: sampleselection}), the median error on the XRT enhanced position is $\sim$\,$1.8\arcsec$. Due to this large uncertainty, often comparable to the measured angular offset, XRT localized events are difficult to include in the offset distribution. Here, we adopt a Bayesian formalism to identify the true distribution of offsets for XRT localized GRBs. Following \citet{Bloom2002}, we assume that the probability density distribution of the GRB's offset from its host galaxy follows a Rice distribution \citep{Wax1954}, denoted by $\mathcal{R}(x,\mu,\sigma)$ where $\mu$ and $\sigma$ are the shape parameters. Applying Bayes' theorem, the posterior distribution for the true offset, $R_\textrm{true}$, of the GRB from its host galaxy's center given the observed offset, $R_\textrm{obs}$, and its uncertainty, $\sigma_R$, is \begin{equation} P(R_\textrm{true}|R_\textrm{obs})=\frac{P(R_\textrm{obs}|R_\textrm{true})P(R_\textrm{true})}{P(R_\textrm{obs})}, \end{equation} where the probability density for the likelihood $P(R_\textrm{obs}|R_\textrm{true})$ is given by the Rice distribution $\mathcal{R}(R_\textrm{obs},R_\textrm{true},\sigma_R)$. 
The choice of prior distribution, $P(R_\textrm{true})$, can have a significant impact on the resulting posterior. While simple priors may appear to minimize our assumptions on the underlying distribution, we note that they are generally unrealistic. For example, assuming that the GRB has an equal probability of occurring anywhere in a circle surrounding the galaxy's centroid (i.e., uniform probability in area), such that $P(R_\textrm{true})$\,$\propto$\,$R_\textrm{true}$, preferentially favors larger radii, whereas both observations of sGRBs (Figure \ref{fig: offset_dist}) and models of BNS systems \citep{Bloom1999} find that the vast majority of systems form at $<$\,$10$ kpc. Therefore, we consider two different prior distributions: \textit{i}) following the observed distribution of physical offsets for sub-arcsecond localized sGRBs (Figure \ref{fig: offset_dist}), and \textit{ii}) assuming that GRBs form following an exponential profile $P(R_\textrm{true})\propto \exp(-R_\textrm{true}/R_*)$, where $R_*$ is taken to be the half-light radius of each galaxy. In Figure \ref{fig: offset_dist_withxray}, we refer to these priors as ``observed'' and ``exponential''. We adopt the median value of the posterior distribution $P(R_\textrm{true}|R_\textrm{obs})$ for each GRB's offset, and include these XRT localized GRBs within the cumulative distribution of sGRB offsets. In Figure \ref{fig: offset_dist_withxray} we demonstrate how the X-ray localized events impact the offset distribution for the two prior distributions. The ``observed'' and ``exponential'' priors only cause a marginal deviation from the sub-arcsecond-only distribution. Therefore, based on this analysis, the offsets of X-ray localized events are not inherently different from those with an optical localization.
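A numerical sketch of this calculation, gridding over $R_\textrm{true}$ with the exponential prior (the ``observed'' prior would simply replace \texttt{prior} with an empirical density):

```python
import numpy as np
from scipy.stats import rice

def posterior_median_offset(R_obs, sigma_R, R_half, r_max=50.0, n=4000):
    """Median of P(R_true | R_obs) under the prior exp(-R_true / R_half).

    The likelihood P(R_obs | R_true) is the Rice density with location
    parameter R_true and scale sigma_R (all lengths in the same units).
    """
    r_true = np.linspace(1e-3, r_max, n)
    dr = r_true[1] - r_true[0]
    likelihood = rice.pdf(R_obs, b=r_true / sigma_R, scale=sigma_R)
    posterior = likelihood * np.exp(-r_true / R_half)
    posterior /= posterior.sum() * dr          # normalize on the grid
    cdf = np.cumsum(posterior) * dr
    return r_true[np.searchsorted(cdf, 0.5)]
```

The exponential prior pulls the median slightly below the observed offset, by roughly $\sigma_R^2/R_\textrm{half}$ when the measurement is well constrained.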
\begin{figure} \centering \includegraphics[width=\columnwidth]{figs/sGRB_Offset_Fig_withXray_2.pdf} \caption{Cumulative distribution of sGRB offsets for the sample of sub-arcsecond localized events (purple) compared to X-ray localized events for two different priors (\S \ref{sec: XRT_offset}): \textit{i}) the ``observed'' prior (orange) and \textit{ii}) the ``exponential'' prior (red).} \label{fig: offset_dist_withxray} \end{figure} \subsubsection{Including Hostless sGRBs} \label{sec: including hostless} Up to this point, we have focused on the offset distribution of sGRBs with a confident host galaxy association ($P_{cc}$\,$<$\,$0.1$). Here, we include in our study 12 sub-arcsecond localized observationally hostless events. For these bursts, we identify the galaxy with the lowest chance probability $P_{cc}$ and measure the offset between the burst position and the galaxy's centroid (Appendix \ref{sec: appendixsampleanalysis}). Only 2 of these events are located within 10~kpc of their most likely host and, as a result, the median offset for the sample is 26.4~kpc, $5\times$ larger than the value derived in \S \ref{sec: subarcsec offset}. We further examine the implications of these hostless events in \S \ref{sec: discussion_hostless}. \begin{figure} \includegraphics[width =\columnwidth]{figs/sGRB_Offset_withHostless.pdf} \caption{Cumulative distribution of projected physical offsets for sub-arcsecond localized sGRBs with a putative host (red) and for those which are hostless (blue); the total population is shown in black. } \label{fig: host_vs_hostless_offset} \end{figure} \begin{figure*} \centering \includegraphics[width=1.5\columnwidth]{figs/sGRB_Galaxy_LumFunc_r-band_Final_2a.pdf} \caption{Host galaxy $r$-band magnitude versus redshift for the sample of sGRBs included in this work. Spectroscopic (light purple) and photometric (blue) redshift measurements from the literature are represented by circles, and those determined in this work by stars.
The small light gray circles represent galaxies in the CANDELS UDS. The black lines indicate the range of $0.1$\,$-$\,$1.0L^*$ galaxies as a function of redshift \citep{Brown2001,Ilbert2005,Willmer2006,Reddy2009,Finkelstein2015}. The deep constraint on the host galaxy of GRB 160410A \citep{AguiFernandez2021} is marked by a downward magenta triangle. In the right panel, we show the $r$-band magnitude for the host galaxies of sGRBs without a known redshift (dark purple diamonds), including the lowest $P_{cc}$ candidate host of observationally hostless events. Magnitudes have been corrected for Galactic extinction \citep{Schlafly2011}. } \label{fig: rband_vs_z} \end{figure*} \begin{figure} \centering \includegraphics[width=\columnwidth]{figs/sGRB_offset_vs_mag.pdf} \caption{Host galaxy $r$-band magnitude versus angular offset for the sample of sGRBs included in this work. We also include GRBs where the galaxy with the lowest probability of chance coincidence has $P_{cc}$\,$>$\,$0.1$ (gray). The shaded gray region marks where $P_{cc}$\,$>$\,$0.1$. } \label{fig: rband_vs_offset} \end{figure} \subsection{Host Luminosities} \label{sec: hostlum} In Figure \ref{fig: rband_vs_z}, we display the apparent $r$-band magnitude (corrected for Galactic extinction) of sGRB host galaxies plotted against their redshift. By comparing the brightness of these galaxies to a sample of $\sim$\,$30,000$ galaxies from the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey project \citep[CANDELS,][]{Koekemoer2011,Grogin2011} Ultra Deep Survey \citep[UDS;][]{Galametz2013}, we confirm that the host galaxies of sGRBs trace the brightest galaxies ($0.1$\,$-$\,$1.0L^*$) at each redshift. In the right panel of Figure \ref{fig: rband_vs_z}, we report the $r$-band magnitude of candidate host galaxies without a known redshift, including the lowest $P_{cc}$ candidate host galaxies of observationally hostless events.
We identify 4 sub-arcsecond localized observationally hostless events within our sample (GRBs 150423A, 160408A, 160601A, and 160927A) whose lowest $P_{cc}$ candidates have faint $r$-band magnitudes ($r$\,$\gtrsim$\,$24.5$ mag; corrected for Galactic extinction). When compared to typical sGRB host galaxies (Figure \ref{fig: rband_vs_z}), this is suggestive of either \textit{i}) an origin at $z$\,$>$\,$1$ or \textit{ii}) a population of under-luminous sGRB host galaxies ($<$\,$0.1L^*$). Even if under-luminous, these galaxies would have to occur at $z$\,$>$\,$0.5$ in order to avoid an unexplained gap between faint galaxies and the known bright hosts at low-$z$ (Figure \ref{fig: rband_vs_z}). We note that there are only a handful of examples of low luminosity ($<$\,$0.1L^*$) sGRB host galaxies, namely those of GRBs 070714B \citep{Cenko2008}, 101219A \citep{Fong2013}, 120804A \citep{Berger2013,Dichiara2021}, and 151229A (this work), all of which reside at $z$\,$>$\,$0.5$. We observe the same trend in the observationally hostless sample of XRT localized sGRBs (GRBs 140516A, 150831A, 170127B, 171007A, and 180727A); there are faint $r$\,$\gtrsim$\,$24.5$ mag candidates detected within their XRT localizations, which range from $2.2$\,$-$\,$2.7\arcsec$ (90\% CL). We emphasize that none of these events are located near bright, low-$z$ galaxies (none within $60\arcsec$) from which they could have been kicked. This is in contrast to other observationally hostless events, such as sGRBs 061201, 090515, and 091109B, where the most likely host galaxy is a bright, low-$z$ galaxy at a significant offset. We discuss this further in \S \ref{sec: discussion_hostless}. In Figure \ref{fig: rband_vs_offset}, we show the $r$-band magnitude of sGRB host galaxies versus the angular offset of the sGRB from its host for both X-ray (diamonds) and optically localized GRBs (circles). The gray shaded region marks offsets precluded from a strong host association, due to $P_{cc}$\,$>$\,$0.1$.
Based on the distribution of XRT localized events, we find that it is difficult to associate a galaxy fainter than $r$\,$\approx$\,$23.5$ mag with a GRB lacking a precise, sub-arcsecond localization. While the brightest sGRBs may have an X-ray localization (from \textit{Swift}/XRT) of $\sim$\,$1.4$\,$-$\,$1.5\arcsec$ (90\% CL), the majority are less precisely localized to $>$\,$2\arcsec$. As such, the majority of X-ray localized sGRBs are limited to associations with galaxies brighter than $r$\,$<$\,$23.5$ mag, decreasing the likelihood of association with galaxies at $z$\,$>$\,$1$ (see \S \ref{sec: discussion_bias}). \subsection{Redshift Distribution} \label{sec: redshift dist} Our sample consists of 72 well-localized sGRBs (including the sub-class of sGRBEEs) observed in homogeneous conditions. Of these, 37 (51\%) have a spectroscopic redshift, 11 (16\%) a photometric redshift, and 24 (33\%) lack a distance measurement. Only three of these redshift measurements come from direct afterglow spectroscopy, whereas the large majority are determined from the putative host galaxy. In Figure \ref{fig: redshift_dist} (top panel), we display a histogram of the observed redshift distribution. The median value is $z$\,$\approx$\,$0.5$ for the sample of spectroscopic redshifts, and $z$\,$\approx$\,$0.6$ for the combined sample of photometric and spectroscopic redshifts. By adding $4$ spectroscopic redshifts at $z$\,$>$\,$0.5$ and $7$ photometric redshifts at $z$\,$>$\,$0.4$, our work mainly populates the upper tail of the distribution. This demonstrates the importance of deep imaging and spectroscopy, using large-aperture $8$\,$-$\,$10$ m telescopes, in probing the most distant sGRBs and their faint host galaxies. However, only 1 of our events lies at $z$\,$>$\,$1$ (Table \ref{tab: host properties}).
This is not surprising, as our survey is optically-driven and affected by complex selection effects, such as the so-called ``redshift desert'' ($1.4$\,$<$\,$z$\,$<$\,$2.5$; also marked in Figure \ref{fig: redshift_dist}) where common nebular emission lines are shifted towards infrared wavelengths. A similar systematic survey of sGRBs at near-infrared (nIR) wavelengths would be essential to complement our study and extend the redshift distribution of sGRBs. The number of distant sGRBs is an important constraint for progenitor models and their delay time distribution (DTD). In Figure \ref{fig: redshift_dist} (bottom panel), we show the cumulative distribution of sGRB redshifts (including photometric redshifts) compared to predictions based on different DTD models. The two models commonly adopted in the literature are: \textit{i}) a log-normal distribution \citep{Nakar2006,Wanderman2015} and \textit{ii}) a power-law with decay index between $\sim$\,$-1$ and $-1.5$ \citep{HaoYuan2013}. A larger number of $z$\,$>$\,$0.5$ events increases the tension with the log-normal DTD models. A KS test between our distribution and the \citet{Nakar2006} model yields $p_{KS}$\,$=$\,$10^{-2}$, rejecting the null hypothesis that the observed redshift distribution is drawn from their model. The observed distribution appears instead consistent with the power-law DTD models with slope $\sim$\,$-1$ to $-1.5$\footnote{We note that the redshift distribution also depends on the assumptions as to the star formation history (SFH), gamma-ray luminosity function, detector sensitivity, and minimum delay time, and can therefore be different even for the same DTD.}. However, a significant population of bursts with no known redshift exists. Our survey identifies that their likely host galaxies are much fainter than the rest of the sample (Figure \ref{fig: rband_vs_z}), and a likely explanation is that these bursts represent a missing population of high-$z$ sGRBs.
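The one-sample KS comparison described above can be sketched as follows. The numbers here are purely illustrative (a deterministic toy sample and a toy model CDF), not the measured sGRB redshifts or the published DTD curves.

```python
import numpy as np
from scipy import stats

# Toy "observed" sample: 48 deterministic quantiles of the CDF F(z) = (z/2)^3
# on 0 < z <= 2 (illustrative only; not our measured sGRB redshifts).
n = 48
z_sample = 2.0 * ((np.arange(1, n + 1) - 0.5) / n) ** (1.0 / 3.0)

# Toy model prediction: a CDF that is flat in z over the same range.
def model_cdf(z):
    return np.clip(np.asarray(z) / 2.0, 0.0, 1.0)

# One-sample KS test of the empirical sample against the model CDF.
stat, p = stats.kstest(z_sample, model_cdf)
# The two CDFs differ by up to ~0.38, so the null hypothesis that the
# sample is drawn from the model is strongly rejected (small p).
```

A small $p$-value, as quoted for the log-normal model in the text, rejects the hypothesis that the observed redshifts were drawn from that model's predicted distribution.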
In the most extreme case, these would be prompt mergers with a negligible delay time between formation and merger. In Figure \ref{fig: redshift_dist} we show the implications of this scenario. The dotted black line represents the hypothetical redshift distribution derived assuming that all the bursts with no known redshift follow the SFH of the Universe \citep{Moster2013}. This sets a lower limit to the true redshift distribution and helps constrain the parameter space allowed by observations. By assuming that sGRB progenitors are described by a single DTD function, the \citet{HaoYuan2013} curve is consistent with all the observational constraints. \begin{figure} \centering \includegraphics[width=\columnwidth]{figs/sGRB_redshift_hist_rebin.pdf} \includegraphics[width=\columnwidth]{figs/sGRB_redshift_CDF.pdf} \caption{\textit{\textbf{Top}}: Histogram of the observed spectroscopic redshifts (purple) for 36 sGRBs matching our selection criteria. We also show a sample of photometric redshifts (blue) for 12 additional events. The gray solid region marks the ``redshift desert'' between $1.4$\,$<$\,$z$\,$<$\,$2.5$. \textit{\textbf{Bottom}}: Cumulative distribution of sGRB redshifts (black) compared to the expected distribution for several different DTDs \citep{Nakar2006,HaoYuan2013,Wanderman2015}. The dashed black line represents a lower limit to $P(<z)$ assuming $\sim$\,$50\%$ of the population occurs at $z$\,$>$\,$1$ with a negligible delay time. } \label{fig: redshift_dist} \end{figure} \subsection{Circumburst Environment} \label{sec: environment} \begin{figure} \centering \includegraphics[width=\columnwidth]{figs/fluxfluence_vs_offset_2.pdf} \caption{Ratio of $0.3$\,$-$\,$10$ keV X-ray flux at 11 hours, $F_{X,11}$, to the $15$\,$-$\,$150$ keV gamma-ray fluence, $\phi_\gamma$, versus the projected physical offset from the sGRB host galaxy.
sGRBs with $T_{90}$\,$<$\,$2$ s are represented by light purple circles, sGRBEEs by dark purple squares, and observationally hostless events (adopting the offset to their lowest $P_{cc}$ candidate host) are displayed by light gray circles. Events with upper limits on $F_{X,11}$ are shown by downward triangles. The sample of events is compiled from \citet{Nysewander2009,Berger2014,OConnor2020,OConnor2021}.} \label{fig: fluxfluence_vs_offset} \end{figure} In this section, we explore the consistency between the observed offsets of sGRBs around their galaxies and their inferred circumburst environment based on observations of their afterglows in X-rays. First, we use the onset of the X-ray afterglow from \textit{Swift}/XRT to set a lower limit to the circumburst density for each of the 31 bursts in our sample (see \citealt{OConnor2020} and our Appendix \ref{appendix: densitycalc}). Of these 31 bursts, we find that $<$\,$33\%$ have a circumburst density consistent with $n_\textrm{min}$\,$<$\,$10^{-4}$ cm$^{-3}$, setting an upper limit to the fraction of sGRBs in this sample occurring in an IGM-like environment (physically hostless). Of these potentially low-density events, 5 are observationally hostless (Table \ref{tab: XrayAGprop}). Moreover, we searched for a correlation between the GRB offsets and their high-energy properties. In particular, the ratio of the X-ray flux at 11 hours, $F_{X,11}$, to the prompt gamma-ray fluence, $\phi_\gamma$, is known to probe the circumburst density such that $F_{X,11}/\phi_\gamma$\,$\propto$\,$n^{1/2}$ \citep{Sari1998,Wijers1999,Granot2002}. This is valid only in the synchrotron slow cooling regime when the cooling frequency lies above the X-ray band, and does not account for energy injection from the central engine. Moreover, the quantity $F_{X,11}/\phi_\gamma$ is independent of distance. In Figure \ref{fig: fluxfluence_vs_offset}, we observe that there is a large scatter in the correlation (see also \citealt{OConnor2020}).
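The $n^{1/2}$ dependence quoted above can be sketched from the standard forward shock synchrotron spectrum, holding the microphysical parameters fixed. In the slow cooling regime with $\nu_m$\,$<$\,$\nu_X$\,$<$\,$\nu_c$, the X-ray flux scales as
\[
F_{X} \propto \epsilon_e^{p-1}\, \epsilon_B^{(p+1)/4}\, E_K^{(p+3)/4}\, n^{1/2}\, t^{-3(p-1)/4}\, d_L^{-2},
\]
while the prompt fluence scales as $\phi_\gamma \propto E_{\gamma,\textrm{iso}}\, d_L^{-2}$ (modulo redshift factors). Assuming a constant prompt efficiency ($E_{\gamma,\textrm{iso}} \propto E_K$), the distance dependence cancels and
\[
\frac{F_{X,11}}{\phi_\gamma} \propto E_K^{(p-1)/4}\, n^{1/2},
\]
i.e., the ratio traces the circumburst density, with only a weak residual dependence on the blast-wave energy for $p$\,$\approx$\,$2.2$.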
Although GRBs with small offsets tend to occupy the upper part of the plot, and those with larger offsets the lower part, no trend can be conclusively established. We find no evidence for a population of bursts in a rarefied environment. Instead, we find that observationally hostless sGRBs (e.g., sGRBs 061201, 091109B, 110112A, 111020A, 160601A, 160927A) are not X-ray faint when compared to the overall population, as they all lie above $\log(F_{X,11}/\phi_\gamma)$\,$\gtrsim$\,$-6.1$. While these events have no secure host association, we paired them with their most likely host galaxy to calculate their offsets in Figure~\ref{fig: fluxfluence_vs_offset}. However, the X-ray brightness of their afterglows does not support the large offset/low density scenario implied by these galaxy associations. \section{Discussion} \label{sec: discussion} \subsection{A Redshift Evolution of sGRB Locations} \label{sec: discussion_bias} By exploring the distribution of sGRB offsets at $z$\,$<$\,$0.5$ and $z$\,$>$\,$0.5$ (Figure \ref{fig: offset_redshift_normed}; top panel), we identified a redshift evolution in the locations of sGRBs around their galaxies. Based on our analysis, there are no events with $z$\,$>$\,$0.5$ at physical offsets $>$\,$15$ kpc, compared to 50\% at $z$\,$<$\,$0.5$. We examine three possible factors that could be at the origin of the observed trend: \textit{i}) an evolution of the host galaxy size, \textit{ii}) an intrinsic property of their progenitors, or \textit{iii}) an observational bias against faint high-$z$ galaxies. The increased size of sGRB host galaxies over cosmic time possibly leads to a larger birth radius of the progenitor, and therefore a larger offset.
This is consistent with observations of galaxy size evolution following the relation $R_e$\,$\propto$\,$(1+z)^{-\alpha}$ with $\alpha$\,$\approx$\,$0.6$\,$-$\,$1.3$ \citep[see, e.g.,][]{Dahlen2007,vanderWel2008,Papovich2012,Ribeiro2016,Allen2017,Paulino-Afonso2017}, leading to growth by a factor of $\sim$\,$2$ between $z$\,$=$\,$1$ and the present. It is not clear if this growth is completely due to a true galaxy evolution effect or to an observational bias from surface brightness dimming with distance. Nonetheless, we show that, when normalized by the host galaxy's size, the two distributions at $z$\,$<$\,$0.5$ and $z$\,$>$\,$0.5$ move closer to each other (Figure \ref{fig: offset_redshift_normed}). In particular, for offsets $<$\,$R_e$ they seem to track each other well. However, we find that all events with offsets $>$\,$5R_e$ reside only in low-$z$ galaxies. By correlating the physical offset with host galaxy type (see Figure \ref{fig: offset_vs_type}), we find that low-$z$ early-type galaxies preferentially host these sGRBs with large spatial offsets. These events are commonly interpreted as highly kicked BNS systems \citep{Behroozi2014,Zevin2020} or BNS mergers dynamically formed in globular clusters \citep{Salvaterra2010,Church2011}. However, we note that an alternative possibility is that the sGRB progenitors were formed in the extended stellar halo of their galaxy \citep{PeretsBeniamini2021}, and as such do not require large natal kicks. Thus, the large host normalized offsets may be due to the fact that $R_e$ is not a good tracer of the extended stellar halo in early-type galaxies \citep{DSouza2014,Huang2018}. Another physical explanation for this evolution is that systems merging at low redshifts had a longer delay time between formation and merger of the binary, allowing them to travel further distances than those merging at higher redshifts.
However, through population synthesis, \citet{Perna2021} found the opposite trend: simulated BNS systems at high redshift reach larger distances from their host galaxies. In fact, they found that $\sim$\,$20\%$ of BNS systems in simulated galaxies at $z$\,$=$\,$1$ reach offsets $>$\,$15$ kpc, whereas none have been identified observationally. Future population synthesis modeling, specifically using inferences from observations of Galactic BNS systems \citep{Beniamini2016,Beniamini2016p2,Tauris2017,Abbott2017kick,Kruckow2018,VignaGomez2018,Andrews2019a,BeniaminiPiran2019}, is required to discern whether these results are expected under different assumptions for the delay time and natal kick distributions. Nevertheless, an alternative scenario to explain the redshift evolution is an observational bias against faint high-$z$ galaxies. This bias can most easily be understood from Figure \ref{fig: rband_vs_z}, which displays the decrease in host galaxy apparent brightness as a function of redshift. For instance, above $z$\,$>$\,$1$ the majority of galaxies in the universe are fainter than $r$\,$=$\,$23.5$ mag, with a significant fraction fainter than $r$\,$=$\,$25$ mag. Associating a GRB with such faint galaxies (Figure \ref{fig: rband_vs_offset}) requires an offset of $\lesssim$\,$3\arcsec$ (corresponding to $\lesssim$\,$25$ kpc, assuming $z$\,$\approx$\,$1$). This condition becomes more stringent if the probability of chance coincidence cutoff threshold is decreased from the 10\% value used in this work (\S \ref{sec: Pcc}). For example, adopting a cutoff value of 5\%, as used in previous studies \citep{FongBerger2013}, requires an offset $\lesssim$\,$2.2\arcsec$ or, equivalently, $\lesssim$\,$18$ kpc, even for sub-arcsecond localized sGRBs.
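The angular-to-physical conversion and chance-coincidence limits above can be sketched numerically. The snippet below assumes a flat $\Lambda$CDM cosmology ($H_0$\,$=$\,$70$ km s$^{-1}$ Mpc$^{-1}$, $\Omega_m$\,$=$\,$0.3$) and approximates the \citet{Bloom2002} probability using the galaxy number counts of Hogg et al. (1997), taking the angular offset itself as the effective radius; the full prescription, which also folds in the localization error and the galaxy's half-light radius, would give somewhat different values.

```python
import numpy as np
from scipy.integrate import quad

def kpc_per_arcsec(z, H0=70.0, Om=0.3):
    """Proper transverse distance (kpc) subtended by 1 arcsec at redshift z,
    for a flat LambdaCDM cosmology (H0 in km/s/Mpc)."""
    c = 299792.458  # speed of light, km/s
    E = lambda zz: np.sqrt(Om * (1.0 + zz) ** 3 + (1.0 - Om))
    integral, _ = quad(lambda zz: 1.0 / E(zz), 0.0, z)
    d_a = (c / H0) * integral / (1.0 + z)          # angular diameter distance, Mpc
    return d_a * 1e3 * np.pi / (180.0 * 3600.0)    # kpc per arcsec

def p_cc(r_arcsec, mag):
    """Chance-coincidence probability (Bloom et al. 2002) for a galaxy of
    r-band magnitude `mag` at angular offset `r_arcsec`, approximating the
    surface density with the Hogg et al. (1997) galaxy counts."""
    sigma = (1.0 / (0.33 * np.log(10.0))) * 10.0 ** (0.33 * (mag - 24.0) - 2.44)  # gal/arcsec^2
    return 1.0 - np.exp(-np.pi * r_arcsec ** 2 * sigma)

# With these assumptions, 1 arcsec corresponds to ~8 kpc at z = 1, so offsets
# of ~18 and ~25 kpc map to roughly 2.2 and 3.1 arcsec, respectively.
scale = kpc_per_arcsec(1.0)
p_20kpc = p_cc(20.0 / scale, 23.0)  # an r = 23 mag galaxy at a 20 kpc offset
p_30kpc = p_cc(30.0 / scale, 23.0)  # the same galaxy at a 30 kpc offset
```

Under these illustrative assumptions, $P_{cc}$ for an $r$\,$\approx$\,$23$ mag galaxy crosses the few-percent level at offsets of a few arcseconds, consistent with the thresholds discussed above.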
Surprisingly, even a Milky Way-like spiral galaxy at $z$\,$\approx$\,$1$ ($r$\,$\approx$\,$23$ mag) will have a probability of chance alignment larger than 5\% (10\%) if the projected physical offset is $>$\,$20$ (30) kpc \citep{Tunnicliffe2014}. Therefore, we find that it is unlikely, on probabilistic grounds, to associate high-$z$ sGRBs with galaxies at large physical offsets. This bias may explain, at least in part, the observed redshift evolution of sGRB offsets and should be taken into account when comparing the observed offset distribution to progenitor models. \begin{figure} \centering \includegraphics[width=\columnwidth]{figs/offset_type_histogram.pdf} \caption{Histogram of projected physical offsets of sGRBs from their host galaxies. The distribution for late-type galaxies is shown in purple, and early-type hosts in green \citep{Gompertz2020,Paterson2020,OConnor2021}. We have limited the sample to those with classified galaxy type and an error on their offset of $<$\,$20\%$. } \label{fig: offset_vs_type} \end{figure} \subsection{Hostless Short GRBs} \label{sec: discussion_hostless} \subsubsection{Observationally Hostless Fraction} We have selected a homogeneous sample (\S \ref{sec: sampleselection}) of short GRBs detected by \textit{Swift}/BAT, of which 72 have a sensitive search for their host galaxy. We identify that 20 of these 72 events ($\sim$\,$28\%$) are observationally hostless. This fraction is higher than the value of $17\%$ reported by \citet{Fong2013}. We find that this difference is mainly driven by the larger sample of X-ray localized events studied in our work. Considering only the sample with sub-arcsecond positions, the hostless fraction is $26\%$, consistent between the two works. As the fraction of hostless sub-arcsecond localized events is consistent with the full population, we find that our result is not driven by the lower accuracy of X-ray localized events.
In fact, in \S \ref{sec: XRT_offset}, we demonstrated that the offsets of X-ray localized events are consistent with the locations of sub-arcsecond localized sGRBs (Figure \ref{fig: offset_dist_withxray}). This suggests that any selection bias against large offsets or low-density environments acts on both samples in the same way. \subsubsection{Interpretation of Hostless Events} We emphasize that there is a lingering ambiguity as to the origin of hostless short GRBs. The main scenarios are that \textit{i}) the GRB was kicked to a substantial distance from its birth galaxy, such that the probability of chance alignment is large, or \textit{ii}) the GRB merged in a faint, undetected galaxy at a smaller angular distance. However, the diagnosis for individual events is complicated, and it is difficult to distinguish between these two scenarios. For instance, the hostless sGRBs presented by \citet{Berger2010a} are located at a significant offset ($30$\,$-$\,$75$ kpc) from bright low-$z$ galaxies ($z$\,$<$\,$0.5$). However, despite their brightness, the probability of chance coincidence is $\gtrsim$\,$10\%$. Therefore, it is not clear whether these sGRBs are truly associated with these low-$z$ galaxies, or whether they reside in faint, undetected hosts ($H$\,$>$\,$26$ mag). The interpretation has a direct impact on the energetics, redshift (\S \ref{sec: redshift dist}), and delay time distributions of sGRBs. In this work, we have tripled the number of observationally hostless sGRBs (from 7 to 20 events). We find that half of the observationally hostless sGRBs lack any nearby (low-$z$) candidate host. These events are more likely to have exploded in faint ($r$\,$\gtrsim$\,$24.5$ mag) galaxies (see \S \ref{sec: hostlum}) that are consistent with $0.1$\,$-$\,$1.0L^*$ galaxies at $z$\,$>$\,$1$.
We note, however, that an alternative explanation is that these represent a population of low luminosity ($<$\,$0.1 L^*$) galaxies hosting sGRBs at $z$\,$<$\,$1$, although this is in tension with the population of well-determined sGRB hosts ($0.1$\,$-$\,$1 L^*$; \citealt{Berger2010a}) and with predictions from population synthesis modeling, which find that BNS systems preferentially form in the most massive (brightest) galaxies \citep{Behroozi2014,Mapelli2018,Artale2019,Artale2020,Adhikari2020,Mandhai2021,Chu2022}. Previous work in the literature \citep[see, e.g., ][]{Berger2010a,Tunnicliffe2014} has focused on the likelihood of detecting faint galaxies at high-$z$, as opposed to the large probability of chance coincidence even in the event that a galaxy is detected. We find that, even when these faint galaxies are detected, they are difficult to confidently associate with the GRB using the standard probability of chance coincidence methodology \citep{Bloom2002}. This is indicative of an observational bias against faint galaxies (see also \S \ref{sec: discussion_bias}). We note that a larger population of sGRBs at $z$\,$>$\,$1$ implies a steep DTD with an increased fraction of events with short delay times, as deduced based on Galactic BNS systems \citep[][]{BeniaminiPiran2019}. This would further disfavor log-normal DTD models (\S \ref{sec: redshift dist}), and support a primordial formation channel for these events. We further explored the sample of observationally hostless events that lie close to low-$z$ galaxies. We exploited their high-energy properties to probe their environments (\S \ref{sec: environment}), as their circumburst density can be used to constrain their allowed physical offset \citep{OConnor2020}. Figure \ref{fig: fluxfluence_vs_offset} shows a weak correlation between the X-ray afterglow brightness and the sGRB location, such that a larger offset leads to fainter X-ray emission.
The X-ray constraints for hostless events are either too shallow or inconsistent with the observed trend. Although this does not conclusively rule out that these hostless sGRBs could be mergers kicked out into the IGM (physically hostless), it does not offer observational support and leaves their nature undetermined. Rapid and deep X-ray observations with next-generation instruments (e.g., the \textit{Athena X-ray observatory}; \citealt{athena}) will be capable of probing X-ray fluxes of $\sim$\,$10^{-16}$ erg cm$^{-2}$ s$^{-1}$ within 12 hr of the GRB trigger, and, therefore, will be able to detect the low flux regime of physically hostless sGRBs. We note that the main factor preserving the ambiguity in interpreting these events is that the distance scale to the sGRB is not known. Therefore, in order to distinguish between faint hosts and large offsets, we require better constraints on the distances to short GRBs. The most critical observational tests are \textit{i}) rapid afterglow spectroscopy to determine the redshift independently of the galaxy association (e.g., GRB 160410A; this work and \citealt{AguiFernandez2021}), \textit{ii}) the conclusive identification of a kilonova, providing indirect evidence of the GRB distance scale \citep{Troja2019, Chase2022}, or \textit{iii}) the advent of next generation GW detectors capable of detecting compact binaries at cosmological distances \citep{Punturo2010,Dwyer2015}. \section{Conclusions} \label{sec: conclusions} We carried out a systematic study of the host galaxies of 31 short GRBs. This analysis effectively doubles the sample of well-studied sGRB host galaxies, leading to a total of 72 events fitting our selection criteria with sensitive searches for their host. We assign a spectroscopic redshift to 5 of these events, and derive a photometric redshift for 7 others.
Based on the results of this study, we present the following findings: \begin{enumerate}[leftmargin=\parindent] \item The sub-arcsecond localized population of sGRBs has a median projected physical offset of $5.6$ kpc ($4\times$ larger than for long GRBs; \citealt{Blanchard2016,Lyman2017}), with $70\%$ of events occurring at $<$\,$10$ kpc from their host's nucleus. \item We find that 28\% of sGRBs (20 out of 72) lack a putative host galaxy to a depth of $r$\,$>$\,$26$ mag. For half of these hostless bursts, the most likely host is a faint ($r$\,$>$\,$24.5$ mag) galaxy consistent with a high redshift origin ($z$\,$>$\,$1$). \item Based on this evidence and the larger sample of $48$ redshifts, we have presented improved constraints on the redshift distribution of sGRBs. We find that 20\% of sGRBs with known redshift lie above $z$\,$>$\,$1$, although this number could be as high as 50\% when including the population of events with no known host. The data are inconsistent with log-normal DTDs for their progenitors, and instead favor power-law models with index $-1$ or steeper. \item By correlating the high-energy properties of sGRBs with their locations, we find evidence of a possible trend linking the X-ray brightness to the distance from the host galaxy. We point out that hostless events, if associated with their most likely nearby galaxy, do not follow this trend. Hence, their X-ray brightness does not lend support to their interpretation as mergers in a rarefied medium. \item We find that sGRBEEs are inconsistent with the offset distribution of long GRBs in both projected physical offset and host normalized offset. This conclusion is reached independently of classical sGRBs. \item Lastly, we find that the low redshift population of sGRBs is further offset from their hosts by a factor of $\sim$\,$2\times$ compared to the sample at $z$\,$>$\,$0.5$, with the median value increasing from 3.2 to 7.5 kpc.
This redshift evolution can be explained either by a physical evolution of their progenitors or by the larger size of low-$z$ galaxies. \end{enumerate} We emphasize that while late-time observations alone cannot allow for concrete host associations for events at $>$\,$50$ ($25$) kpc past $z$\,$\gtrsim$\,$0.1$ ($1.0$), rapid optical spectroscopy can determine the GRB's distance scale and yield a confident host galaxy assignment. Moreover, rapid and deep optical and infrared observations can lead to the identification of a kilonova, providing an indication of the GRB's distance. These transients are expected to be detectable out to $z$\,$\sim$\,$1$ with both current (\textit{James Webb Space Telescope}; \textit{JWST}) and future observatories (e.g., the 39-m Extremely Large Telescope; \citealt{ELT}). In addition, the combination of next generation GW detectors (i.e., Einstein Telescope and Cosmic Explorer; \citealt{Punturo2010,Dwyer2015}) with EM observations can allow for confident associations (out to $z$\,$\sim$\,$4$\,$-$\,$10$; \citealt{Hall2019,Singh2021}), as the distance of the GW event can be compared to that of nearby galaxies. This will allow us to unambiguously distinguish between the large offset scenario and a high-$z$ explanation for observationally hostless sGRBs. Lastly, future infrared observations with \textit{HST} and \textit{JWST} will probe lower stellar mass galaxies as a function of redshift (Figure \ref{fig: 130912A_HST_vs_Keck}), allowing for more robust limits on the possible faint (high-$z$) galaxies hosting these sGRBs. High resolution observations would also allow for an accurate morphological analysis of the detected hosts, leading to a better understanding of the ratio of early- to late-type galaxies, which yields important information as to the age and formation channels of sGRB progenitors and can illuminate whether events at large offsets are due to kicks or to formation in their galaxy's halo. \section*{Acknowledgements} B.~O.
acknowledges useful discussions with Phil Evans and Geoffrey Ryan. B.~O. thanks Amy Gottlieb for assistance in obtaining LDT observations. B.~O. was partially supported by the National Aeronautics and Space Administration through grants NNX16AB66G, NNX17AB18G, and 80NSSC20K0389, through \textit{Chandra} Award Numbers GO021065A, GO021062A, and GO122068X issued by the \textit{Chandra} X-ray Center, which is operated by the Smithsonian Astrophysical Observatory for and on behalf of the National Aeronautics and Space Administration under contract NAS8-03060, and by the National Science Foundation through grant no. 12850. P.~B.'s research was supported by a grant (no. 2020747) from the United States-Israel Binational Science Foundation (BSF), Jerusalem, Israel. J.~B.~G. acknowledges financial support from the Spanish Ministry of Science and Innovation (MICINN) through the Spanish State Research Agency, under Severo Ochoa Program 2020-2023 (CEX2019-000920-S). This project has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme, grant 101002761 (BHianca; PI: Troja). This work made use of data supplied by the UK \textit{Swift} Science Data Centre at the University of Leicester. This research has made use of the Keck Observatory Archive (KOA), which is operated by the W. M. Keck Observatory and the NASA Exoplanet Science Institute (NExScI), under contract with the National Aeronautics and Space Administration.
Based on observations obtained at the international Gemini Observatory, a program of NSF's OIR Lab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation on behalf of the Gemini Observatory partnership: the National Science Foundation (United States), National Research Council (Canada), Agencia Nacional de Investigaci\'{o}n y Desarrollo (Chile), Ministerio de Ciencia, Tecnolog\'{i}a e Innovaci\'{o}n (Argentina), Minist\'{e}rio da Ci\^{e}ncia, Tecnologia, Inova\c{c}\~{o}es e Comunica\c{c}\~{o}es (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea). The \textit{HST} data (ObsID: 1465) used in this work was obtained from the Mikulski Archive for Space Telescopes (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-\textit{HST} data is provided by the NASA Office of Space Science via grant NNX09AF08G and by other grants and contracts. These results also made use of Lowell Observatory's Lowell Discovery Telescope (LDT), formerly the Discovery Channel Telescope. Lowell operates the LDT in partnership with Boston University, Northern Arizona University, the University of Maryland, and the University of Toledo. Partial support of the LDT was provided by Discovery Communications. LMI was built by Lowell Observatory using funds from the National Science Foundation (AST-1005313). This paper makes use of data obtained from the Isaac Newton Group of Telescopes Archive which is maintained as part of the CASU Astronomical Data Centre at the Institute of Astronomy, Cambridge. This work is based on data from the GTC Public Archive at CAB (INTA-CSIC), developed in the framework of the Spanish Virtual Observatory project supported by the Spanish MINECO through grants AYA 2011-24052 and AYA 2014-55216. 
The system is maintained by the Data Archive Unit of the CAB (INTA-CSIC). Based on observations made with the Liverpool Telescope operated on the island of La Palma by Liverpool John Moores University in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias with financial support from the UK Science and Technology Facilities Council. Additionally, this work is based on data obtained from the ESO Science Archive Facility. We additionally made use of Astropy, a community-developed core Python package for Astronomy \citep{Astropy2018}. \section*{Data Availability} The data underlying this article will be shared on reasonable request to the corresponding author. \begin{table*} \centering \caption{Log of imaging observations of sGRB host galaxies. } \label{tab: observations} \begin{tabular}{lccccccccccc} \hline \hline \\[-2.5mm] \textbf{GRB} & \textbf{$T_{90}^c$} & \textbf{RA} & \textbf{Dec} &\textbf{Obs. Date} & \textbf{Telescope} & \textbf{Instrument} & \textbf{Filter} & \textbf{Exp.} & \textbf{AG Image$^{b}$} & \textbf{AB Mag$^{d}$} &\textbf{$A_\lambda$} \\ & \textbf{(s)} & \textbf{(J2000)} & \textbf{(J2000)} & \textbf{(UT)} & & & & \textbf{(s)} & & & \textbf{(mag)} \\ \hline 091109B & 0.3 & 07:30:56.61 & -54:05:22.85 & 11-10-2009 & VLT & FORS2 & \textit{R} & 3600 & Y & ... & ... \\ ... & ... & ... & ... & 11-01-2016 & \textit{HST} & WFC3 & \textit{F110W} & 5600 & ... & $>27.3$ & 0.13 \\ \hline 101224A & 0.2 & 19:03:41.72 & 45:42:49.5 & 06-11-2020 & LDT & LMI & \textit{g} & 750 & ... &$22.71\pm0.06$ & 0.17 \\ ... & ... & ... & ... & 06-11-2020 & LDT & LMI & \textit{r} & 750 & ... & $22.11\pm0.06$ & 0.12 \\ ... & ... & ... & ... & 06-11-2020 & LDT & LMI & \textit{i} & 750 & ... & $21.91\pm0.05$ & 0.09\\ ... & ... & ... & ... & 06-11-2020 & LDT & LMI & \textit{z} & 800 & ... & $21.84\pm0.05$& 0.06 \\ \hline 110112A & 0.5 & 21:59:43.85 & 26:27:23.9 & 01-12-2011 & WHT & ACAM & \textit{i} & 900 & Y & ... &...\\ ... & ... & ... 
& ... & 10-13-2016 & \textit{HST} & WFC3 & \textit{F110W} & 5200 & ... & $>27.3$ & 0.05\\ \hline 110402A$^{a}$ & 56 & 13:09:36.53 & 61:15:09.9 & 04-02-2011 & \textit{Swift} & UVOT & $\textit{wh}$ & 1630 & Y &... & ... \\ ... & ... & ... & ... & 05-27-2014 & Keck & LRIS & \textit{B} & 180 & ...& $24.19\pm0.11$ & 0.06 \\ ... & ... & ... & ... & 05-27-2014 & Keck & LRIS & \textit{I} & 570 & ...& $23.35\pm0.10$ & 0.03 \\ ... & ... & ... & ... & 08-03-2020 & Gemini & GMOS-N & \textit{r} & 900 & ...& $24.24\pm0.20$ & 0.04 \\ ... & ... & ... & ... & 05-05-2021 & LDT & LMI & \textit{i} & 1500 & ...& $23.35\pm0.09$ & 0.03 \\ ... & ... & ... & ... & 05-06-2021 & LDT & LMI & \textit{z} & 2100 & ...& $23.0\pm0.16$ & 0.02 \\ \hline 120305A & 0.1 & 03:10:08.68 & 28:29:31.0 & 03-13-2012 & Gemini & GMOS-N & \textit{i} & 2340 & ... & $21.56\pm0.08$ & 0.71 \\ ... & ... & ... & ... & 03-06-2014 & LDT & LMI & \textit{r} & 2700 & ... & $22.32\pm0.09$ & 0.81 \\ ... & ... & ... & ... & 10-25-2014 & Keck& LRIS&\textit{G}& 3000 & ... & $23.00\pm0.06$ & 1.30 \\ ... & ... & ... & ... & 10-25-2014 & Keck& LRIS&\textit{R} & 2750 & ...& $22.28\pm0.04$ & 0.75 \\ ... & ... & ... & ... & 11-09-2021 & LDT & LMI & \textit{y} & 1980 & ... & $<20.6$ & 0.38 \\ \hline 120630A & 0.6 & 23:29:11.07 & 42:33:20.3 & 07-01-2012 & Gemini & GMOS-N & \textit{r} & 500 & ...& $21.60\pm0.06$ & 0.21 \\ ... & ... & ... & ... &07-01-2012& Gemini & GMOS-N & \textit{i} & 500 & ...& $21.25\pm0.07$ & 0.19 \\ ... & ... & ... & ... &07-01-2012 & Gemini & GMOS-N & \textit{z} & 500 & ...& $21.08\pm0.05$ & 0.14 \\ ... & ... & ... & ... & 09-05-2014 & LDT & LMI & \textit{r} & 700 & ...& $21.56\pm0.05$ & 0.21 \\ ... & ... & ... & ... &09-05-2014 & LDT & LMI & \textit{i} & 400 & ...& $21.16\pm0.06$ & 0.16 \\ ... & ... & ... & ... &09-05-2014 & LDT & LMI & \textit{z} & 800 & ...& $21.0\pm0.2$ & 0.11 \\ ... & ... & ... & ... &10-25-2014 & Keck & LRIS & \textit{R} & 3300 & ...& $21.63\pm0.04$ & 0.20 \\ ... & ... & ... & ...
&10-25-2014 & Keck & LRIS & \textit{G} & 3600 & ...& $22.45\pm0.03$ & 0.34 \\ ... & ... & ... & ... & 11-09-2021 & LDT & LMI & \textit{y} & 1980 & ... & $<20.2$ & 0.09 \\ ... & ... & ... & ... & -- & WISE & -- & \textit{W1} & -- & ...& $19.48\pm0.05$ & 0.02 \\ ... & ... & ... & ... & -- & WISE & -- & \textit{W2} & -- & ...& $19.61\pm0.08$ & 0.016 \\ \hline 130822A & 0.04 & 01:51:41.27 & -03:12:31.7 & 08-23-2013 & Gemini & GMOS-N & \textit{i} & 600 & ...& $17.79\pm0.03$ & 0.05 \\ ... & ... & ... & ... & 10-25-2014 & Keck & LRIS & \textit{G} & 3000 & ...& $18.84\pm0.03$ & 0.08 \\ ... & ... & ... & ... & 10-25-2014 & Keck & LRIS & \textit{R} & 2750 & ...& $18.18\pm0.03$ & 0.05 \\ \hline 130912A & 0.3 & 03:10:22.23 & 13:59:48.7 & 09-13-2013 & WHT & ACAM & \textit{i} & 900 & Y & ... & ... \\ ... & ... & ... & ... & 02-25-2014 & LDT & LMI & \textit{r} & 2700 & ...& $>24.9$ & 0.56 \\ ... & ... & ... & ... &10-25-2014 & Keck & LRIS & \textit{G} & 2400 & ...& $>26.3$ & 0.90 \\ ... & ... & ... & ... & 10-25-2014 & Keck & LRIS & \textit{R} & 2750 & ...& $>26.2$ & 0.52 \\ ... & ... & ... & ... & 01-09-2017 & \textit{HST} & WFC3 & \textit{F110W} & 5200 & ...& $>27.2$ & 0.22 \\ \hline 131004A & 1.5 & 19:44:27.08& -02:57:30.2 & 10-04-2013 & \textit{Swift} & UVOT & \textit{wh} & 520 & Y & ... & ... \\ ...& ... &...& ...& 10-07-2013 & Keck & MOSFIRE & \textit{$K_s$} & 290 & ...& $>22.3$ & 0.08 \\ ... & ... & ... & ... & 10-11-2016 & \textit{HST} & WFC3 & \textit{F110W} & 5212 & ...& $25.80\pm0.05$ & 0.22 \\ \hline 140129B & 1.35 & 21:47:01.66 & +26:12:23.0 & 01-29-2014 & \textit{Swift} & UVOT & \textit{wh} & 150 & Y & ... & ... \\ ... & ... & ... & ... & 06-10-2014 & LDT & LMI & \textit{r} & 1500 & ... & $23.55\pm0.10$ & 0.20 \\ ... & ... & ... & ... & 11-03-2019 & LDT & LMI & \textit{r} & 1200 & ...& $23.50\pm0.09$ & 0.20 \\ ... & ... & ... & ... & 08-06-2021 & LDT & LMI & \textit{g} & 1200 & ...& $24.52\pm0.18$ & 0.30 \\ ... & ... & ... & ... 
& 08-06-2021 & LDT & LMI & \textit{i} & 1200 & ...& $23.52\pm0.10$ & 0.15 \\ ... & ... & ... & ... & 08-06-2021 & LDT & LMI & \textit{z} & 1000 & ...& $<23.0$ & 0.11 \\ \hline 140516A & 0.2 & 16:51:57.40 & 39:57:46.3 & 05-16-2014 & Gemini & GMOS-N & \textit{i} & 1800 & ... & $>26.1$ & 0.02 \\ ... & ... & ... & ... & 09-04-2014 & LDT & LMI & \textit{r} & 4200 & ...& $>25.0$ & 0.03 \\ ... & ... & ... & ... & 10-15-2019 & Keck & MOSFIRE & $K_s$ & 1800 & ...& $>23.6$ & 0.005 \\ \hline 140622A & 0.13 & 21:08:41.53 & -14:25:9.5 & 08-05-2021 & LDT & LMI & \textit{g} & 1200 &... & $22.75\pm0.07$ & 0.22\\ ... & ... & ... & ... & 08-05-2021 & LDT & LMI & \textit{r} &1200 & ...& $22.43\pm0.07$ & 0.15 \\ ... & ... & ... & ... & 08-05-2021 & LDT & LMI & \textit{i} & 750 & ...& $21.95\pm0.06$ & 0.11 \\ ... & ... & ... & ... & 08-05-2021 & LDT & LMI & \textit{z} & 800 & ...& $22.0\pm0.2$ &0.08 \\ \hline 140930B & 0.8 & 00:25:23.4 &24:17:41.7 & 10-01-2014 & Gemini & GMOS-N & \textit{r} & 1350 & Y & ... &... \\ ... & ... & ... & ... & 10-02-2014 & Gemini & GMOS-N & \textit{r} & 1350 & Y & ... & ... \\ ... & ... & ... & ... & 08-01-2020 & Gemini & GMOS-N & \textit{r} & 1650 & ...& $23.8\pm0.2$ & 0.06 \\ \hline 150423A & 0.08 & 14:46:18.86 & 12:17:00.70 & 04-23-2015 & VLT & FORS2 & \textit{R} & 300 & Y & ... & ... \\ ... & ... & ... & ... & 02-03-2017 & \textit{HST} & WFC3 & \textit{F110W} & 5200 & ...& $>27.2$ & 0.02 \\ \hline 150831A & 1.15 & 14:44:05.84 & -25:38:06.4 & 09-01-2016 & VLT & FORS2 & \textit{R} & 2400 & ...& $>25.8$ & 0.22 \\ ... & ... & ... & ... & 03-07-2017 & VLT & FORS2 &\textit{I} &2400 & ...& $>24.5$ & 0.16 \\ ... & ... & ... & ... & 07-29-2020 & Gemini & GMOS-S & \textit{i} & 2040 & ... & $>25.7$ & 0.16 \\ \hline \end{tabular} \end{table*} \begin{table*} \centering \contcaption{} \begin{tabular}{lccccccccccc} \hline \hline \\[-2.5mm] \textbf{GRB} & \textbf{$T_{90}^c$} & \textbf{RA} & \textbf{Dec} &\textbf{Obs. 
Date} & \textbf{Telescope} & \textbf{Instrument} & \textbf{Filter} & \textbf{Exp.} & \textbf{AG Image$^{b}$} & \textbf{AB Mag$^{d}$} &\textbf{$A_\lambda$} \\ & \textbf{(s)} & \textbf{(J2000)} & \textbf{(J2000)} & \textbf{(UT)} & & & & \textbf{(s)} & & & \textbf{(mag)} \\ \hline 151229A & 1.4 &21:57:28.78& -20:43:55.2 & 03-08-2019 & LDT & LMI & \textit{r}&1200 & ... & $>24.5$ & 0.05 \\ ...& ... &...& ...& 07-30-2019 & Gemini &GMOS-S & \textit{z}&1920 & ...& $24.47\pm0.10$ & 0.03 \\ ...& ... &...& ...& 10-15-2019 & Keck & MOSFIRE & \textit{Y} & 1340 & ...& $24.0\pm0.2$ & 0.03 \\ ...& ... &...& ...& 08-11-2020 & Gemini & GMOS-N & \textit{r}&2250 & ...& $25.75\pm0.20$ & 0.05 \\ ...& ... &...& ...& 06-16-2021 & LDT & LMI & \textit{i}& 900 & ... & $>23.8$ & 0.04 \\ ...& ... &...& ...& 07-22-2021 & Gemini & F2 & \textit{J}& 1680 & ...& $23.10\pm0.18$ & 0.02 \\ ...& ... &...& ...& 07-22-2021 & Gemini & F2 & $K_s$ & 1680 & ...& $22.78\pm0.19$ & 0.01 \\ ...& ... &...& ...& 07-30-2021 & Gemini & GMOS-S & \textit{i}& 1680 & ...& $25.41\pm0.20$ & 0.04 \\ \hline 160408A & 0.3 & 08:10:29.81 & 71:07:43.7 & 04-08-2016 & Gemini & GMOS-N & \textit{r} & 900 & Y & ... & ... \\ ... & ... & ... & ... & 04-09-2016 & Gemini & GMOS-N & \textit{r} & 900 & ...& $>25.8$ & 0.06 \\ ... & ... & ... & ... & 03-29-2020 & LDT & LMI & \textit{g} & 1500 & ... & $>24.6$ & 0.08 \\ ... & ... & ... & ... & 03-29-2020 & LDT & LMI & \textit{r} & 1500 & ...& $>24.5$ & 0.06 \\ ... & ... & ... & ... & 03-29-2020 & LDT & LMI & \textit{i} & 1500 & ...& $>24.2$ & 0.04 \\ ... & ... & ... & ... & 03-29-2020 & LDT & LMI & \textit{z} & 1500 & ...& $>23.7$ & 0.03 \\ \hline 160410A$^a$ & 96 & 10:02:44.37 & 03:28:42.4 & 04-10-2016 & \textit{Swift} & UVOT & \textit{wh} & 540 & Y & ... & ... \\ ... & ... & ... & ... & 04-28-2016 & Keck & DEIMOS & \textit{R} &330 & ... & $>25.0$ & 0.05 \\ ... & ... & ... & ... & 04-28-2016 & Keck & DEIMOS & \textit{I} & 330 & ... & $>24.2$ & 0.03 \\ ... & ... & ... & ... 
& 12-15-2020 & LDT & LMI & \textit{r} &2100 & ...& $>24.5$ & 0.05 \\ ... & ... & ... & ... & 02-06-2021 & LDT & LMI & \textit{g} & 1950 & ...& $>24.9$ & 0.07 \\ \hline 160525B & 0.3 &09:57:32.23 & 51:12:24.9 & 05-25-2016 & \textit{Swift}& UVOT &\textit{wh} & 150 & Y & ... & ... \\ ... & ... & ... & ... & 01-29-2020 & LDT & LMI& \textit{g} & 1200 & ...& $23.30\pm0.15$ & 0.03 \\ ... & ... & ... & ... & 01-29-2020 & LDT & LMI &\textit{r}& 1200 & ...& $23.29\pm0.09$ & 0.02 \\ ... & ... & ... & ... &02-29-2020 & LDT & LMI & \textit{i} &1500 & ...& $23.29\pm0.18$ & 0.016 \\ ... & ... & ... & ... & 12-15-2020 & LDT & LMI & \textit{z} & 2000 & ...& $23.4\pm0.3$ & 0.012 \\ \hline 160601A & 0.12 & 15:39:43.97 & 64:32:30.5 & 06-02-2016 & Gemini & GMOS-N & \textit{r} & 900 & Y & ... & ... \\ ... & ... & ... & ... & 06-03-2016 & LDT & LMI & \textit{r} & 720 & ...& $>24.6$ & 0.05 \\ ... & ... & ... & ... & 09-08-2016 & GTC & OSIRIS & \textit{r} & 1680 & ...& $>25.9$ & 0.05 \\ ... & ... & ... & ... & 03-25-2019 & Keck & MOSFIRE & $K_s$ & $2400$ & ... & $>23.5$ & 0.01 \\ ... & ... & ... & ... & 08-01-2020 & Gemini & GMOS-N & \textit{r} & 1800 & ... & $>25.6$ & 0.05 \\ ... & ... & ... & ... & 02-05-2021 & LDT & LMI & \textit{g} & 800& ...& $>22.5$ & 0.07 \\ ... & ... & ... & ... & 02-05-2021 & LDT & LMI & \textit{i} & 1200 & ...& $>22.5$ & 0.04 \\ ... & ... & ... & ... & 02-05-2021 & LDT & LMI & \textit{z} & 1500 & ...& $>22.0$ & 0.03 \\ \hline 160927A & 0.48 & 17:04:58.22 & 17:19:54.9 & 09-28-2016 & GTC & OSIRIS & \textit{r} & 1915 & Y & ... & ... \\ ... & ... & ... & ... & 02-23-2017 & GTC & OSIRIS &\textit{r} & 1200 & ...& $>26.1$ & 0.15 \\ ... & ... & ... & ... & 05-20-2018 & LDT & LMI & \textit{r} & 300 & ...& $>24.3$ &0.15 \\ ... & ... & ... & ... & 10-06-2018 & Keck & LRIS & \textit{G} & 2760 & ...& $>25.9$ & 0.25 \\ ... & ... & ... & ... & 10-06-2018 & Keck & LRIS & \textit{R} & 600 & ...& $>25.2$ & 0.14 \\ ... & ... & ... & ... 
& 09-04-2019 & Keck & LRIS & \textit{Z} & 800 & ...& $>24.8$ & 0.10 \\ ... & ... & ... & ... & 08-01-2020 & Gemini & GMOS-N & \textit{i} & 720 & ...& $>26.0$ & 0.13 \\ \hline 170127B & 0.5 &01:19:54.47 &-30:21:28.6 & 01-27-2018 & Gemini & GMOS-S & \textit{g} & 1800 &...& $>24.2$ & 0.06 \\ ... & ... & ... & ... & 10-06-2018 & Keck & LRIS &\textit{G} & 2520 & ...& $>26.1$ & 0.07 \\ ... & ... & ... & ... & 10-06-2018 & Keck & LRIS &\textit{R} & 1720 & ...& $>26.0$ & 0.04 \\ ... & ... & ... & ... & 09-04-2019 & Keck & LRIS &\textit{G} & 1920 & ...& $>26.0$ & 0.07 \\ ... & ... & ... & ... & 09-04-2019 & Keck & LRIS &\textit{I} & 1600 & ...& $>25.9$ & 0.04 \\ ... & ... & ... & ... & 10-15-2019 & Keck & MOSFIRE &\textit{J} & 2010 & ...& $>24.1$ & 0.01 \\ ... & ... & ... & ... & 01-30-2021 & Gemini & GMOS-S &\textit{z} & 1440 & ...& $>23.9$ & 0.03 \\ \hline 170428A& 0.2 & 22:00:18.78 & 26:54:57.0 & 04-29-2017 & LDT & LMI & \textit{i} & 1200 & ... & $22.2\pm0.2$ & 0.09 \\ ... & ... & ... & ... & 05-01-2017 & TNG & LRS & \textit{i} & 1470 & ... & $22.05\pm0.15$ & 0.09 \\ ... & ... & ... & ... & 05-01-2017 & TNG & LRS & \textit{z} & 1620 & ... & $21.94\pm0.15$ & 0.06 \\ ... & ... & ... & ... & 05-21-2018 & LDT & LMI & \textit{g} & 100 & ...& $>23.5$ & 0.17 \\ ... & ... & ... & ... & 05-21-2018 & LDT & LMI & \textit{r} & 200 & ...& $22.21\pm0.10$ & 0.12 \\ ... & ... & ... & ... & 05-21-2018 & LDT & LMI & \textit{i} & 200 & ...& $21.93\pm0.15$ & 0.09 \\ ... & ... & ... & ... & 05-21-2018 & LDT & LMI & \textit{z} & 100 & ...& $22.1\pm0.3$ & 0.06 \\ \hline 170728A & 1.3 &03:55:33.17 & 12:10:54.7 & 07-28-2017 & \textit{Swift} & UVOT & \textit{wh} & 150 & Y &... & ... \\ ... & ... & ... & ... & 01-14-2018 & Keck & LRIS & \textit{G} & 1380 & ...& $>25.3$ & 0.76 \\ ... & ... & ... & ... & 01-14-2018 & Keck & LRIS & \textit{R} & 1380 & ...& $>25.1$ & 0.44 \\ ... & ... & ... & ...
& 01-08-2019 & LDT & LMI & \textit{r} & 900 & ...& $>24.6$ & 0.47 \\ \hline 170728B$^a$ & 48 & 15:51:55.47 & 70:07:21.1 & 07-28-2017 & \textit{Swift} & UVOT & \textit{wh} & 900 & Y &... &... \\ ... & ... & ... & ... & 11-03-2019 & LDT & LMI & \textit{r} & 1200 & ...& $23.13\pm0.06$ & 0.06 \\ ... & ... & ... & ... & 12-07-2019 & LDT & LMI & \textit{g} & 900 & ...& $23.82\pm0.06$ & 0.09 \\ ... & ... & ... & ... & 12-07-2019 & LDT & LMI & \textit{i} & 1200 & ...& $22.67\pm0.05$ & 0.04 \\ ... & ... & ... & ... & 12-07-2019 & LDT & LMI & \textit{z} & 1200 & ... & $22.36\pm0.15$ & 0.03 \\ \hline 171007A$^a$ & 68 &09:02:24.14 & 42:49:08.8 & 01-09-2020 & LDT & LMI & \textit{r} & 1200 & ... & $>24.9$ & 0.04\\ ... & ... & ... & ... & 02-01-2021 & Gemini & GMOS-N & \textit{i} & 1440 & ...& $>26.1$ & 0.03 \\ \hline \end{tabular} \end{table*} \begin{table*} \centering \contcaption{} \begin{tabular}{lccccccccccc} \hline \hline \\[-2.5mm] \textbf{GRB} & \textbf{$T_{90}^c$} & \textbf{RA} & \textbf{Dec} &\textbf{Obs. Date} & \textbf{Telescope} & \textbf{Instrument} & \textbf{Filter} & \textbf{Exp.} & \textbf{AG Image$^{b}$} & \textbf{AB Mag$^{d}$} &\textbf{$A_\lambda$} \\ & \textbf{(s)} & \textbf{(J2000)} & \textbf{(J2000)} & \textbf{(UT)} & & & & \textbf{(s)} & & & \textbf{(mag)} \\ \hline 180618A$^a$ & 47 & 11:19:45.87 & 73:50:13.5 & 06-18-2018 & Liverpool & IO:I & \textit{r} & 60 & Y & ... & ... \\ ... & ... & ... & ... & 04-07-2019 & LDT & LMI & \textit{r} & 1200 & ...& $23.08\pm0.08$ & 0.16 \\ ... & ... & ... & ... & 12-07-2019 & LDT & LMI & \textit{g} & 1200 & ...& $24.11\pm0.12$ & 0.22 \\ ... & ... & ... & ... & 12-07-2019 & LDT & LMI & \textit{i} & 1200 & ...& $22.45\pm0.10$ & 0.12 \\ ... & ... & ... & ... & 05-05-2021 & LDT & LMI & \textit{z} & 1800 & ...& $22.34\pm0.12$ & 0.09 \\ ... & ... & ... & ... 
& 05-05-2021 & LDT & LMI & \textit{y} & 1400 & ...& $>21.5$ & 0.06 \\ \hline 180727A & 1.1 & 23:06:39.68 & -63:03:06.7 & 10-14-2018 & Gemini & GMOS-S & \textit{i} & 2520 & ...& $>26.0$ & 0.03 \\ ... & ... & ... & ... & 07-28-2019 & Gemini & GMOS-S & \textit{r} & 1560 & ...& $>26.1$ & 0.04 \\ ... & ... & ... & ... & 07-30-2019 & Gemini & GMOS-S & \textit{g} & 1800 & ...& $>26.3$ & 0.06 \\ ... & ... & ... & ... & 07-30-2019 & Gemini & GMOS-S & \textit{z} & 1800 & ...& $>26.0$ & 0.02 \\ \hline 180805B$^a$ & 122 & 01:43:07.59 & -17:29:36.4 & 09-10-2018 & Keck & LRIS & \textit{G} & 1920 & ...& $23.52\pm0.07$ & 0.06 \\ ... & ... & ... & ... & 09-10-2018 &Keck & LRIS & \textit{I} & 1600 & ...& $22.34\pm0.12$ & 0.03 \\ ... & ... & ... & ... & 09-04-2019 &Keck & LRIS & \textit{V} & 1680 & ...& $22.83\pm0.09$ & 0.04 \\ ... & ... & ... & ... & 09-04-2019 & Keck & LRIS & \textit{Z} & 1400 & ... & $22.01\pm0.14$ & 0.02 \\ ... & ... & ... & ... & 10-15-2019 &Keck & MOSFIRE & $K_s$ & 1800 & ...& $21.23\pm0.15$ & 0.005 \\ ... & ... & ... & ... & 01-16-2021 & LDT & LMI & \textit{z} & 2000 & ...& $21.98\pm0.09$ & 0.02 \\ \hline 191031D & 0.3 &18:53:09.57 &47:38:38.8 & 11-02-2019 & Gemini & GMOS-N & \textit{r} & 720 & ... & $21.78\pm0.05$ & 0.14 \\ ... & ... & ... & ... & 11-03-2019 & LDT & LMI & \textit{g} &1200& ...& $22.89\pm0.07$ & 0.21 \\ ... & ... & ... & ... &04-18-2021 & LDT & LMI & \textit{i} & 600 & ...& $21.3\pm0.2$ & 0.11 \\ ... & ... & ... & ... &04-18-2021 & LDT & LMI & \textit{z} & 700 & ...& $21.3\pm0.3$ & 0.08 \\ ... & ... & ... & ... &04-18-2021 & LDT & LMI & \textit{Y} & 700 & ...& $21.1\pm0.3$ & 0.07 \\ ... & ... & ... & ... & -- & PS1 & -- & \textit{i} & -- & ...& $21.53\pm0.06$ & 0.11 \\ ... & ... & ... & ... & -- & PS1 & -- & \textit{z} & -- & ...& $21.03\pm0.03$ & 0.08 \\ ... & ... & ... & ... & -- & WISE & -- & \textit{W1} & -- & ...& $19.6\pm0.15$ & 0.014 \\ ... & ... & ... & ... 
& -- & WISE & -- & \textit{W2} & -- & ...& $20.16\pm0.30$ & 0.01 \\ \hline 200411A & 0.3 &03:10:39.39 & -52:19:03.4 &01-25-2021 & Gemini & GMOS-S & \textit{r} & 1800 & ...& $22.55\pm0.03$ & 0.03\\ ... & ... & ... & ... & -- & DES & -- &\textit{g} & -- & ...& $23.6\pm0.2$ & 0.06 \\ ... & ... & ... & ... & -- & DES & -- &\textit{r} & -- & ...& $22.6\pm0.1$ & 0.04 \\ ... & ... & ... & ... & -- & DES & -- &\textit{i} & -- & ...& $21.9\pm0.1$ & 0.03 \\ ... & ... & ... & ... & -- & DES & -- &\textit{z} & -- & ...& $21.3\pm0.1$ & 0.02 \\ ... & ... & ... & ... & -- & VISTA & -- &\textit{J} & -- & ...& $20.9\pm0.2$ & 0.01 \\ ... & ... & ... & ... & -- & WISE & -- & \textit{W1} & -- & ...& $20.0\pm0.1$ & 0.003 \\ ... & ... & ... & ... & -- & WISE & -- & \textit{W2} & -- & ...& $20.2\pm0.3$ & 0.003 \\ \hline \end{tabular} \begin{flushleft} \quad \footnotesize{$^a$ Short GRB with extended emission (sGRBEE).} \\ \quad \footnotesize{$^b$ Afterglow image used for relative alignment.}\\ \quad \footnotesize {$^c$ $T_{90}$ values were retrieved from the \textit{Swift} BAT GRB catalog \citep{Lien2016}.}\\ \quad \footnotesize {$^d$ Host galaxy magnitudes not corrected for Galactic extinction $A_\lambda$ \citep{Schlafly2011}.}\\ \end{flushleft} \end{table*} \begin{table*} \centering \caption{Log of spectroscopic observations of sGRB host galaxies. The redshift and the emission or absorption lines of the spectroscopic target are also reported. \label{tab: SpecObs} } \begin{tabular}{lccccccccl} \hline \hline \\[-2.5mm] \textbf{GRB} &\textbf{Obs. Date} & \textbf{Telescope} & \textbf{Instrument} & \textbf{Grating} & \textbf{$\lambda_\textrm{cen}$} & \textbf{Exp.} & \textbf{Slit Width} & \textbf{Redshift} & \textbf{Lines} \\ &\textbf{UT} & & & & \textbf{(nm)} & \textbf{(s)} & \textbf{($\arcsec$)} & & \\ \hline 060121 & 05-27-2014& Keck & LRIS & 600/4000 & 330 & 2720 & 1.0 & -- & No trace \\ ... & ... & ... & ...
& 400/8500 & 588 & 2720 & 1.0 & & \\ \hline 101224A & 05-27-2014& Keck & LRIS & 600/4000 & 330 & 1570 & 1.0 & $0.4536 \pm 0.0004$ & H$\alpha$,H$\beta$,H$\gamma$ \\ ... & ... & ... & ...& 400/8500 & 588 & 1570 & 1.0 & & [OII],[OIII]\\ \hline 110402A$^a$ & 05-27-2014 & Keck & LRIS & 400/3400 & 680 & 1800 & 1.0 & $0.854\pm0.001$ & [OII] \\ ... & ... & ... & ... & 400/8500 & 840 & 1800 & 1.0 & & \\ \hline 140622A & 05-27-2014& Keck & LRIS & 600/4000 & 330 & 900 & 1.0 & $0.959\pm0.001$ & [OII],[OIII] \\ ... & ... & ... & ... & 400/8500 & 588 & 900 & 1.0 & &\\ \hline 151229A & 09-10-2018 & Keck & LRIS & 400/3400 & 176 & 5520 & 1.0 & -- & No trace \\ ... & ... & ... & ...& 400/8500 & 622 & 5520 & 1.0 & & \\ ... & ... & ... & ... & 400/3400 & 544 & 5320 & 1.0 & & \\ ... & ... & ... &... & 400/8500 & 1021 & 5320 & 1.0 & & \\ \hline 160410A$^{a,b}$ & 04-10-2016 & Keck & LRIS & 400/3400 & 176 & 600 & 1.0 & $1.717\pm0.001$ & Ly$\alpha$,[SiII] \\ ... & ... & ... &... & 400/8500 & 622 & 600 & 1.0 & & [AlII] \\ ... & ... & ... & ...& 400/3400 &544 & 600 & 1.0& & \\ ... &... &...& ... & 400/8500 & 1020 & 600 & 1.0 & & \\ \hline 180618A$^a$ & 02-01-2021 & Gemini & GMOS-N & R400 & 710 & 3600 & 1.0 & $0.4^{+0.2}_{-0.1}$ $^c$ & No lines \\ \hline 180805B$^a$ & 09-10-2018 & Keck & LRIS & 400/3400 & 358 & 2440 & 1.0 & $0.6609\pm 0.0004$ &H$\beta$,H$\gamma$ \\ ... &... & ... & ... & 400/8500 & 763 & 2440 & 1.0 & &[OII],[OIII] \\ \hline 191031D & 11-03-2019 & Gemini & GMOS-N & R400 & 705 & 3600 & 1.0 & $0.5\pm0.2^c$ & No lines \\ \hline \end{tabular} \begin{flushleft} \quad \footnotesize{$^a$ Short GRB with extended emission.} \\ \quad \footnotesize{$^b$ Afterglow spectroscopy.} \\ \quad \footnotesize{$^c$ Photometric redshift $z_\textrm{phot}$ based on \texttt{prospector} \citep{Johnson2019} modeling of the host galaxy SED.} \\ \end{flushleft} \end{table*} \begin{table*} \centering \caption{Short GRB host galaxy properties. 
Magnitudes are corrected for Galactic extinction \citep{Schlafly2011}. } \label{tab: host properties} \begin{tabular}{lcccccccccc} \hline \hline \\[-2.5mm] \textbf{GRB} &\textbf{$\sigma_\textrm{tie}$} & \textbf{$\sigma_\textrm{AG}$}$^{b}$ & \textbf{$\sigma_\textrm{host}$} & \textbf{$R_o$ ($\arcsec$)} & \textbf{$R_o$ (kpc)} & \textbf{$R_e$ ($\arcsec$)} & \textbf{AB Mag}$^d$ & \textbf{Host?} & \textbf{$P_{cc}^d$} & $z$ \\ \hline \multicolumn{11}{c}{\textbf{Optical Localization}} \\ \hline 091109B & 0.04 & 0.10 & ... & ... & ... & ... & $>27.3^g$ & N & $>0.2^{g}$ & ... \\[0.5mm] 110112A & 0.11 & 0.09 & ... & ... & ... & ... & $>27.3^g$ & N &$>0.45^{g}$ & ... \\[0.5mm] 110402A$^a$ & 0.15 & 0.07 & 0.05 & $0.91\pm0.17$ & $7.2\pm1.3$ & 0.7 & $24.24\pm0.20$ & Y & 0.03 & 0.854 \\[0.5mm] 130912A & 0.06 & 0.3 & 0.04 & $0.68\pm0.31$ & $5.6\pm2.6^j$ & $0.32$ & $26.8\pm0.3^{g}$ & Y & $0.08^{g}$ & ... \\[0.5mm] 131004A & 0.16 & 0.05 & 0.01 & $0.41\pm0.17$& $3.1\pm1.3$ & 0.4 & $25.80\pm0.05^{g}$ &Y & 0.05$^g$ & $0.717$ \\[0.5mm] 140129B & 0.16 & 0.02 & 0.02 & $0.5\pm 0.2$& $3.0\pm1.0$ & 0.5 & $23.50\pm0.09$& Y & 0.009 & $0.6\pm0.1^f$ \\[0.5mm] 140930B & -- & 0.05 & 0.09 & $1.4\pm0.1$ & $8.8\pm 0.9^i$ & 0.4 & $23.8\pm0.2$ & Y & 0.02 & ... \\[0.5mm] 150423A & 0.06 & 0.04 & ... & ... & ...& ... & $>27.2^{g}$ &N & $>0.15^{g}$ & ... \\[0.5mm] 160408A & -- & 0.02 & ... & ... & ...& ... & $>25.8$ &N & $>0.13$ & ... \\[0.5mm] 160410A$^a$ & 0.16 & 0.08 & ... & ... & ... & ... & $>25.0$&N & $>0.5$ & $1.717^e$ \\[0.5mm] 160525B & 0.21 & 0.11 & 0.07 & $0.06\pm0.25$ & $0.4\pm1.6^i$ & 1.0 & $23.29\pm0.09$ & Y& 0.03 & ... \\[0.5mm] 160601A & 0.02 & 0.02 & ... & ... & ... & ... & $>25.9$ & Y & $>0.4$ & ... \\[0.5mm] 160927A & 0.04 & 0.08 & ... & ... & ... & ... & $>26.0$ &N & $>0.5$ & ... \\[0.5mm] 170428A & -- & 0.3 & 0.05 & $1.2\pm0.3$ & $7.2\pm1.8$ & 1.2 & $22.09\pm0.10$ &Y & 0.01 & 0.454$^e$ \\[0.5mm] 170728A & 0.15 & 0.08 & ... & ... & ... & ... & $>24.7$ &N & $>0.2$ & ... 
\\[0.5mm] 170728B$^a$ & 0.22 & 0.07 & 0.06 & $0.78\pm0.24$ & $5.5\pm1.7$ & 0.7 & $23.06\pm0.06$ &Y & $0.014$ & $0.6\pm0.1^f$ \\[0.5mm] 180618A$^a$ & 0.23 & 0.04 & 0.04 & $1.58\pm0.24$ & $8.8\pm1.3$ & 1.0 & $22.92\pm0.08$ &Y & 0.03 & $0.4^{+0.2}_{-0.1}$\,$^f$ \\[0.6mm] \hline \multicolumn{11}{c}{\textbf{XRT Localization}} \\ \hline 101224A & ... &3.8 & 0.01 & $2.4\pm2.7^k$& $14\pm17$ & 0.6 & $21.53\pm0.05$ &Y & 0.11/0.10$^h$ & 0.454 \\[0.5mm] 120305A & ... & 2.0 & 0.05 & $5.4\pm1.4^k$ & $34\pm9^i$& 1.1 & $21.53\pm0.04$ & Y & 0.07 & ... \\[0.5mm] 120630A & ... & 4.0 & 0.01 & $5.8\pm2.9^k$ & $40\pm20$ & 0.9 & $21.42\pm0.04$ &Y & 0.07/0.08$^h$ & $0.6\pm0.1^f$ \\[0.5mm] 130822A & ... & 3.3 & 0.003 & $22.0\pm2.3^k$ & $61\pm6$ & 2.7 & $18.13\pm0.01$ &Y & 0.08/0.06$^h$ & 0.154 \\[0.5mm] 140516A & ... & 2.7 & ... & ... & ... & ... & $>26.1$ & N & $>0.2$ & ... \\[0.5mm] 140622A & ... & 2.9 & 0.02 & $4.6\pm2.0^k$ & $38\pm17$ & 1.2 & $22.28\pm0.07$ &Y & 0.08/0.08$^h$ & 0.959 \\[0.5mm] 150831A & ... & 2.2 & ... & ... & ... & ... & $>25.6$ &N & $>0.25$ & ... \\[0.5mm] 151229A & ... & 1.4 & 0.02 & $1.0\pm1.0^k$ & $9\pm9$ & 0.4 & $25.75\pm0.16$ & Y& 0.25/0.10$^h$ & $1.4\pm0.2^f$ \\[0.5mm] 170127B & ... & 2.6 & ...& ... & ...& ... & $>26.0$ &N & $>0.5$ & ... \\[0.5mm] 171007A$^a$ & ... & 2.5 & ... &... & ... & ... & $>26.1$ &N& $>0.5$ & ... \\[0.5mm] 180727A & ... &2.3 & ... &... & ...& ... & $>26.1$ &N & $>0.6$ & ... \\[0.5mm] 180805B$^a$ & ... & 2.1 & 0.02 & $3.4\pm1.5^k$ & $25\pm11$ &0.60& $22.79\pm0.09$ &Y & 0.07/0.08$^h$ & 0.661 \\[0.5mm] 191031D & ... & 2.3 & 0.02 & $7.4\pm1.7^k$ & $47\pm11$ & 1.1 & $21.64\pm0.05$ &Y & 0.12/0.05$^h$ & $0.5\pm0.2^f$ \\[0.5mm] 200411A & ... 
& 1.4 & 0.04 & $4.5\pm1.0^k$ & $31\pm8$ & 1.2 & $22.52\pm0.05$ &Y & 0.11/0.08$^h$ & $0.6\pm0.1^f$ \\[0.5mm] \hline \end{tabular} \begin{flushleft} \quad \footnotesize{$^a$ Short GRB with extended emission.} \\ \quad \footnotesize{$^b$ XRT position error reported at 90\% CL; optical localization error reported at $1\sigma$ (68\%).} \\ \quad \footnotesize{$^d$ Host galaxy magnitude in $r$-band, and $P_{cc}$ computed using $r$-band magnitude \citep{Berger2010a}, unless otherwise specified.} \\ \quad \footnotesize{$^e$ Redshift from afterglow (AG) spectroscopy.} \\ \quad \footnotesize{$^f$ Photometric redshift $z_\textrm{phot}$ based on \texttt{prospector} \citep{Johnson2019} modeling of the host galaxy SED.} \\ \quad \footnotesize{$^g$ \textit{HST}/$F110W$ magnitude, and $P_{cc}$ computed using IR number counts \citep{Galametz2013}.} \\ \quad \footnotesize{$^h$ $P_{cc}$ computed using $z$-band number counts \citep{Capak2004}.}\\ \quad \footnotesize{$^i$ Projected physical offset assuming $z=0.5$.}\\ \quad \footnotesize{$^j$ Projected physical offset assuming $z=1.0$.}\\ \quad \footnotesize{$^k$ The uncertainty on the sGRB's offset is computed at the 68\% of the Rayleigh distribution.}\\ \end{flushleft} \end{table*} \bibliographystyle{mnras}
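The chance-coincidence probabilities $P_{cc}$ listed above follow the standard galaxy-number-counts argument. The sketch below is a minimal illustration, assuming the $r$-band number-count parametrization of Berger (2010) and an effective radius built from the angular offset $R_o$ and half-light radius $R_e$; these choices are assumptions of this illustration, not necessarily the exact prescription used for the tabulated values.

```python
import math

def sigma_r(m):
    """Surface density (arcsec^-2) of galaxies brighter than r-band mag m,
    using the number-count parametrization of Berger (2010)."""
    return 1.0 / (0.33 * math.log(10)) * 10 ** (0.33 * (m - 24.0) - 2.44)

def p_cc(m, r_o, r_e, sigma_loc=0.0):
    """Chance-coincidence probability for a galaxy of magnitude m at angular
    offset r_o (arcsec) with half-light radius r_e (arcsec)."""
    # assumed effective radius: offset combined with twice the half-light radius,
    # floored at twice the localization uncertainty
    r_eff = max(2.0 * sigma_loc, math.sqrt(r_o ** 2 + 4.0 * r_e ** 2))
    return 1.0 - math.exp(-math.pi * r_eff ** 2 * sigma_r(m))

# e.g. GRB 140129B: r = 23.50, A_r = 0.20, R_o ~ 0.5", R_e ~ 0.5"
print(round(p_cc(23.50 - 0.20, 0.5, 0.5), 3))
```

For GRB 140129B this gives $P_{cc} \approx 0.01$, consistent with the tabulated value to within the choice of effective radius.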
\section{Introduction} \subsection{The TanDEM-X Mission} \IEEEPARstart{T}{anDEM-X} is a German civil and commercial high-resolution synthetic aperture radar (SAR) satellite with an almost identical configuration to its `sister', TerraSAR-X. Together, the two satellites aim to provide a global high-resolution digital elevation model (DEM) \cite{bib:krieger2007tandem}. They fly in a tight spiral orbit constellation so that each image pair is acquired simultaneously, which significantly reduces temporal decorrelation and atmospheric interference. Since its launch in 2010, TanDEM-X has continuously provided high-quality bistatic interferograms that are nearly free of deformation, atmospheric, and temporal-decorrelation errors. \subsection{SAR Tomography Techniques} Tomographic synthetic aperture radar (TomoSAR) is a cutting-edge SAR interferometric technique capable of reconstructing the 3-D distribution of scatterers and retrieving the elevation profile. Among the many multi-baseline InSAR techniques, TomoSAR is the only one that strictly reconstructs the full reflectivity along the third dimension, elevation. SAR tomography and its differential form (D-TomoSAR) have been extensively developed over the last two decades \cite{bib:reigber2000first} \cite{bib:gini2002layover} \cite{bib:lombardini2005differential} \cite{bib:fornaro2005three} \cite{bib:fornaro2009four} \cite{bib:zhu2010very} \cite{bib:ge2018spaceborne} \cite{bib:zhu2018review}. They are excellent approaches for reconstructing urban areas and monitoring deformation, especially with high-resolution data such as TerraSAR-X \cite{bib:zhu2012demonstration} \cite{bib:zhu2013tomo} or COSMO-SkyMed \cite{bib:fornaro2014multilook}.
Compared to classic multi-baseline InSAR algorithms, compressive sensing (CS) based methods \cite{bib:zhu2010tomographic} \cite{bib:budillon2011three} achieve extraordinary accuracy in TomoSAR reconstruction and exhibit super-resolution (SR) power, which is particularly important in urban areas, where layover is dominant. Although TanDEM-X bistatic data have many advantages, only a limited number of acquisitions is available for most areas. For a reliable reconstruction, SAR tomography usually requires fairly large interferometric stacks ($> \textrm{20}$ images), because the variance of the estimates is asymptotically related to the product of SNR and the number of acquisitions. It is therefore not appropriate for micro-stacks, which contain only a few interferograms \cite{bib:zhu2012super}. \subsection{The proposed framework} As mentioned above, the accuracy of 3-D reconstruction relies on the product of SNR and the number of measurements $N$. Since the motivation of our work is large-scale urban mapping, the data we adopt are TanDEM-X stripmap co-registered single-look slant-range complex (CoSSC) products, whose resolution is about 3.3 m in azimuth and 1.8 m in range. The typical number of available interferograms for most areas is 3 to 5 \cite{bib:rizzoli2017generation}. In \cite{bib:zhu2015joint}, pixels with similar heights are grouped for joint sparsity estimation, which leads to an accurate TomoSAR inversion using only six interferograms. Although an unprecedented result was obtained, such accurate geometric information is usually not available for most areas. Therefore, the feasible way to retain the required precision of the estimates is to increase the SNR.
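The dependence of the estimation accuracy on the product $N \cdot \mathrm{SNR}$ can be illustrated with a small Monte Carlo sketch: a single scatterer, synthetic uniform baselines, and a simple beamforming (matched-filter) estimator. All numbers below are assumed for illustration and are not the parameters of the TanDEM-X stacks used later.

```python
import numpy as np

rng = np.random.default_rng(0)
wavelength, r = 0.031, 700e3          # X-band wavelength and ~700 km range (assumed)
s0 = 20.0                             # true elevation of a single scatterer [m]
s_grid = np.linspace(-50, 50, 2001)   # search grid along elevation [m]

def estimate_std(n_images, snr_db, trials=300):
    """Std of beamforming elevation estimates for a single scatterer."""
    b = np.linspace(-150, 150, n_images)      # synthetic sensor positions [m]
    k = -4 * np.pi * b / (wavelength * r)     # elevation wavenumbers
    A = np.exp(-1j * np.outer(k, s_grid))     # steering matrix
    sigma = 10 ** (-snr_db / 20)              # noise std (amplitude SNR convention)
    est = []
    for _ in range(trials):
        noise = (rng.standard_normal(n_images)
                 + 1j * rng.standard_normal(n_images)) / np.sqrt(2)
        g = np.exp(-1j * k * s0) + sigma * noise
        est.append(s_grid[np.argmax(np.abs(A.conj().T @ g))])
    return np.std(est)

# at fixed aperture, quadrupling N roughly halves the estimation error
print(estimate_std(5, 6), estimate_std(20, 6))
```

The same error reduction obtained here by adding acquisitions can instead be bought by raising the SNR, which is exactly the role of the non-local filtering discussed next.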
Recent works \cite{bib:dhondt2018nonlocal} \cite{bib:ferraioli2018nonlocal} \cite{bib:shi2018non} show that the SNR can be increased dramatically by introducing non-local filters into TomoSAR processing for different sensors, such as the airborne E-SAR, COSMO-SkyMed and TerraSAR-X. In \cite{bib:dhondt2018nonlocal}, different non-local filters are adopted to improve the estimation of the covariance matrix of distributed scatterers, which leads to better height estimates for simulated data and airborne SAR data. Ferraioli et al. \cite{bib:ferraioli2018nonlocal} introduced a non-local filter and a total variation regularizer to improve the multi-baseline phase unwrapping process. In \cite{bib:shi2018non}, it is shown that a reasonable reconstruction can be achieved using only seven interferograms, with better super-resolution properties when the number of interferograms is relatively low. In this work, we extend the concept of non-local compressive sensing TomoSAR in \cite{bib:shi2018non} \cite{bib:shi2018sar} \cite{bib:shi2019non} and propose a new framework for spaceborne multi-baseline SAR tomography with TanDEM-X bistatic micro-stacks, i.e. 3 to 5 interferograms. The framework includes non-local filtering, spectral estimation, model selection and robust height estimation. Since different spectral estimators differ in estimation accuracy and computational cost, we compare their accuracy on micro-stacks. Different TomoSAR inversion methods have been demonstrated over large-scale areas in \cite{bib:wang2014efficient} \cite{bib:shi2018fast} \cite{bib:zhu2013tomo}, whereas only a few works validate the results on single buildings \cite{bib:ge2018spaceborne} \cite{bib:zhu2012demonstration} \cite{bib:budillon2017extension}. Therefore, a validation of the quality of TomoSAR results at a larger scale would be of considerable interest for scientific and commercial users.
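The effect of non-local filtering on interferometric phase noise can be sketched in one dimension: weights derived from amplitude-patch similarity are used to average complex interferogram pixels, gaining a large number of looks with little spatial mixing of dissimilar pixels. This is only a toy version of the non-local filters cited above; the patch size, search window, and smoothing parameter `h` are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def nl_weights(amp, i, patch=3, search=10, h=0.6):
    """Non-local weights for pixel i from amplitude-patch similarity (1-D toy)."""
    n = amp.shape[0]
    p_i = amp[max(0, i - patch): i + patch + 1]
    w = np.zeros(n)
    for j in range(max(0, i - search), min(n, i + search + 1)):
        p_j = amp[max(0, j - patch): j + patch + 1]
        m = min(len(p_i), len(p_j))
        d2 = np.mean((p_i[:m] - p_j[:m]) ** 2)   # patch distance
        w[j] = np.exp(-d2 / h ** 2)
    return w / w.sum()

# homogeneous scene: unit amplitude, constant interferometric phase of 0.5 rad
n = 200
noise = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
ifg = np.exp(1j * 0.5) + 0.7 * noise             # noisy interferogram pixels
amp = np.abs(ifg)

filtered = np.array([np.sum(nl_weights(amp, i) * ifg) for i in range(n)])
print(np.std(np.angle(ifg)), np.std(np.angle(filtered)))   # phase noise drops
```

With a few tens of effective looks per pixel, the phase standard deviation drops by several times, which is the SNR gain the proposed framework relies on for micro-stacks.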
We choose the city of Munich as a test site because a high-quality LiDAR reference is available to us, and we propose a complete workflow to compare the TomoSAR point cloud \cite{bib:otepka2013georeferenced} generated by the proposed framework, the TanDEM-X DEM product, and the LiDAR data. \subsection{Contribution of this work} The major contributions of this work are summarized as follows: \begin{itemize} \item We make possible a new application of bistatic SAR data for global building height reconstruction. \item We point out that, for pixel-wise multi-master TomoSAR, the well-known system equation is no longer valid in the multi-scatterer case. \item We develop a framework for tomographic stacks with only 3 to 5 interferograms. A systematic investigation of the estimation accuracy and super-resolution power for such micro-stacks has been carried out for the first time. \item We use five TanDEM-X bistatic acquisitions to demonstrate the proposed framework. A systematic validation of large-scale TomoSAR reconstruction has been carried out, and a method for comparison with other reference data is established. The results are quantitatively compared with the LiDAR reference for more than 34,000 buildings. \end{itemize} The paper is organized as follows. In section II, the non-local TomoSAR framework is introduced; in section III, the estimation accuracy of TomoSAR with small stacks is systematically studied; the experiments using real data are presented in section IV; in section V, the quantitative validation is carried out; finally, conclusions are drawn in section VI. \section{Non-Local TomoSAR for Multi-Master InSAR} In this section, we introduce the non-local TomoSAR framework for the multi-master multi-baseline InSAR configuration. Fig. \ref{fig:illu_multi_master} illustrates multi-master multi-baseline SAR imaging. The framework consists of several steps: (1) non-local filtering; (2) spectral estimation; (3) model selection; (4) robust height estimation. Fig.
\ref{fig:workflow_nltomosar} shows the flowchart of the non-local TomoSAR framework. \subsection{The multi-master TomoSAR imaging model} \begin{figure}[!ht] \centering \includegraphics[width=0.5\textwidth]{tgrs_multi_master.pdf} \caption{Illustration of multi-master multi-baseline SAR imaging.} \label{fig:illu_multi_master} \end{figure} \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{3DLSSS_tgrs.pdf} \caption{Workflow of non-local TomoSAR framework.} \label{fig:workflow_nltomosar} \end{figure*} For a fixed azimuth-range position, $\gamma(s)$ denotes the reflectivity profile along the elevation $s$. The measurement $g_n$, i.e. the complex value of the corresponding azimuth-range pixel in the $n^{th}$ SAR image, is then a sample of $\Gamma(k)$ -- the Fourier transform of $\gamma(s)$ -- where the elevation wavenumber $k$ is a scaled version of the sensor's position $b_n$ projected onto the axis $b \parallel s$ perpendicular to range and azimuth: \begin{equation} g_n = \Gamma(k_n) = \int \gamma(s) \exp(-jk_ns)ds \end{equation} with \begin{equation} k_n = -\dfrac{4 \pi b_n}{\lambda r} \end{equation} Note that the $b_n$ are not baselines, but the positions of the sensor w.r.t. some origin. In the case of monostatic multi-temporal data stacks, a \textit{single} master $g_0$ with position $b_0$ is chosen and its phase is subtracted from all other acquisitions: $g_ng_0^*/\lvert g_0 \rvert$. This operation renders the phase spatially smooth and is a prerequisite for spatial phase unwrapping, averaging and graph (network) processing. It does not reduce information, since the phase of any (master) acquisition is random. Note that the choice of the point $b=0$ only defines the x-r-s coordinate system. This point need not necessarily be the master track position $b_0$. However, $b_0 = 0$ is a mathematically convenient choice and is assumed in all conventional TomoSAR system models such as in \cite{bib:fornaro2003three}.
All the equations in \cite{bib:fornaro2003three}, however, are actually independent of this particular choice; only in the special case $b_0 = 0$ are the $b_n$ identical to baselines. Here we are dealing with stacks of \textit{bistatic} acquisitions, i.e. with the \textit{multi}-master case. From each of these acquisitions we get a master image $g_{n,m} = \Gamma(k_n)$ taken at $b_{master} = b_n$ and a slave image $g_{n,s} = \Gamma(k_n+\Delta k_n)$ taken at $b_{slave} = b_n + \Delta b_n$, where $\Delta b_n$ is the bistatic baseline (which takes the \textit{effective} positions of the transmit-receive phase centers into account). If we used a standard, i.e. single-master, TomoSAR inversion algorithm, we would confuse $\Delta b_n$ and $b_n$. In the case of a single scatterer in $\gamma(s)$, this misinterpretation would do no harm, because the Fourier transform of a single point has a constant magnitude and a linear phase: in order to determine the slope of the phase ramp we can take any two samples and divide their phase difference by the difference in wavenumbers (= baseline). This is no longer true for two or more scatterers. The example of two symmetric and equally strong scatterers makes this clear: \begin{gather} \gamma(s) = \delta(s+s_0) + \delta(s-s_0) \nonumber \\ \updownarrow \\ \Gamma(k) = 2\cos(s_0k)= 2\cos(2\pi \dfrac{2s_0}{\lambda r}b) \nonumber \end{gather} Hence, acquisitions with the same baseline $\Delta b$ differ depending on where the two sensors were located along $b$. Every bistatic acquisition provides three pieces of information: the two magnitudes $|\Gamma(k_n)|$ and $|\Gamma(k_n+\Delta k_n)|$ as well as the phase difference $\angle \Gamma(k_n+\Delta k_n)\Gamma^*(k_n)$, i.e. we must normalize the phase by the respective master in every acquisition in order to be unaffected by deformation and atmospheric delay.
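This position dependence is easy to verify numerically. In the following sketch (the elevation $s_0$ and the wavenumber values are illustrative), the single-scatterer interferogram depends only on the baseline wavenumber, whereas for the two-scatterer profile above the same baseline yields different values at different master positions:

```python
import numpy as np

s0 = 30.0                      # scatterer elevation [m] (illustrative)

def gamma_single(k):
    # Fourier transform of delta(s - s0): unit magnitude, linear phase
    return np.exp(-1j * k * s0)

def gamma_double(k):
    # Fourier transform of delta(s + s0) + delta(s - s0)
    return 2.0 * np.cos(s0 * k)

dk = 0.01                      # bistatic baseline wavenumber (same for both pairs)
k1, k2 = 0.0, 0.05             # two different master positions along b

# single scatterer: the interferogram depends on dk only -> identical values
i1 = gamma_single(k1 + dk) * np.conj(gamma_single(k1))
i2 = gamma_single(k2 + dk) * np.conj(gamma_single(k2))
print(np.allclose(i1, i2))     # True

# two scatterers: same dk, different master position -> different values
j1 = gamma_double(k1 + dk) * np.conj(gamma_double(k1))
j2 = gamma_double(k2 + dk) * np.conj(gamma_double(k2))
print(np.isclose(j1, j2))      # False
```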
Conventional TomoSAR inversion algorithms based on spectral estimation, however, require complex spectral samples at several wavenumbers, phase-normalized to a \textit{single} master phase. In a parallel work by one of the authors \cite{bib:ge2019single} it is shown that pixel-wise TomoSAR using multi-master acquisitions is a non-convex, hard-to-solve problem. This is true for \textit{pixel-wise} tomographic inversion or for point scatterers. The situation becomes different, though, once we consider averages of pixels, i.e. estimates of expectation values. Let us assume Gaussian distributed scattering with a backscatter coefficient along elevation of \begin{equation} \sigma_0(s) = E \left\{ |\gamma(s)|^2 \right\} \end{equation} Assuming further that $\gamma(s)$ is white, $\Gamma(k)$ is stationary and its autocorrelation function is the Fourier transform of $\sigma_0(s)$ as a function of the baseline wavenumber $\Delta k$: \begin{equation} E \left\{ \Gamma(k+\Delta k_n) \Gamma^*(k)\right\} = \int \sigma_0(s)\exp(-j\Delta k_n s)ds \end{equation} Instead of sampling the Fourier spectrum, we sample its autocorrelation function with the bistatic data stack. Since this relationship is \textit{independent} of $k \propto b$ because of stationarity, it makes no difference where the two acquisitions have been taken; only their baseline $\Delta b_n$ counts. In other words, we can use standard TomoSAR inversion algorithms in this case. In this paper we use non-local filtering to improve the SNR of micro-stacks. These filters perform ensemble averages with a number of looks in the order of tens to hundreds. Hence, we may assume that we work with reasonably good estimates of $E \left\{ \Gamma(k+\Delta k_n) \Gamma^*(k)\right\}$ and can use the bistatic \textit{interferograms} for TomoSAR reconstruction.
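The stationarity argument itself can be checked with a small Monte Carlo experiment: drawing white circular Gaussian profiles $\gamma(s)$ with an (illustrative) variance profile $\sigma_0(s)$ and averaging $\Gamma(k+\Delta k)\Gamma^*(k)$ over many realizations reproduces the Fourier transform of $\sigma_0(s)$ regardless of the master wavenumber $k$:

```python
import numpy as np

rng = np.random.default_rng(0)
s = np.linspace(-50.0, 50.0, 201)                 # elevation grid [m]
ds = s[1] - s[0]
sigma0 = np.exp(-0.5 * ((s - 10.0) / 5.0) ** 2)   # illustrative backscatter profile

n_real, dk = 10000, 0.05
ks = np.array([0.0, 0.0 + dk, 0.3, 0.3 + dk])     # two masters, same baseline dk

# white circular Gaussian gamma(s): E{gamma_i gamma_j*} = sigma0_i delta_ij / ds
g = (rng.standard_normal((n_real, s.size)) +
     1j * rng.standard_normal((n_real, s.size))) * np.sqrt(sigma0 / (2.0 * ds))

# Gamma(k) = sum_i gamma_i exp(-j k s_i) ds, for all realizations at once
G = (g * ds) @ np.exp(-1j * np.outer(s, ks))

est_k0 = np.mean(G[:, 1] * np.conj(G[:, 0]))      # master at k = 0
est_k3 = np.mean(G[:, 3] * np.conj(G[:, 2]))      # master at k = 0.3
ref = np.sum(sigma0 * np.exp(-1j * dk * s)) * ds  # Fourier transform of sigma0

print(abs(est_k0 - ref) / abs(ref),
      abs(est_k3 - ref) / abs(ref))               # both close to zero
```

Both ensemble averages agree with the Fourier transform of $\sigma_0(s)$ at $\Delta k$ up to Monte Carlo noise, even though the two masters sit at different positions along $b$.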
By introducing a noise term $\boldsymbol{\varepsilon}$, the TomoSAR model can be formulated in matrix notation as: \begin{equation} \mathbf{g}=\mathbf{R}\mathbf{X}+\boldsymbol{\varepsilon} \label{equ:tomosar_basic} \end{equation} where $\mathbf{g} = [g_1, g_2, ..., g_n]^{\mathrm{T}}$ is the vector of complex-valued measurements with dimension $N \times 1$, and $\mathbf{X}$ with $X_l = \sigma_0(s_l) = E\{|\gamma(s_l)|^2\}$ is the expected reflectivity profile along elevation, uniformly sampled at $s_l\ (l=1,2,...,L)$. $\mathbf{R}$ is the sensing matrix with dimension $N \times L$, where $R_{nl} = \exp(-j\Delta k_ns_l)$. \subsection{Non-Local Procedure} Since only a limited number of acquisitions is available for a large-scale area, the SNR needs to be dramatically increased in order to obtain the required accuracy. As shown in \cite{bib:shi2018non}, the non-local procedure is an efficient way to increase the SNR of interferograms without notable resolution distortion. Patch-wise non-local means considers all pixels $s$ in a search window: whenever the patch centered at pixel $s$ is similar to the patch centered at pixel $c$, the value of $s$ contributes to the estimate of pixel $c$. The value of pixel $c$ is estimated using a weighted maximum likelihood estimator (WMLE): \begin{equation} \hat{\boldsymbol{\Theta}}_c = \arg \max_{\boldsymbol{\Theta}} \sum_s \mathbf{w}(i_s, j_s) \log p(\mathbf{g}_s|\boldsymbol{\Theta}) \end{equation} where the weights $\mathbf{w}(i_s, j_s)$ can be calculated using the patch-wise similarity measurement of \cite{bib:shi2018non}. We write $\mathbf{g} = (I_1, I_2, \phi)$ and $\boldsymbol{\Theta} = (\psi, \mu, \sigma^2)$, where $\mathbf{g}$ denotes the complex-valued measurement: $I_1$ and $I_2$ are the intensities of the two SAR images and $\phi$ is the interferometric phase.
$\boldsymbol{\Theta}$ is the true value of the parameters, where $\psi$ is the noise-free interferometric phase, $\mu$ is the coherence magnitude, and $\sigma^2$ is the variance. The likelihood function $p \left( \mathbf{g}_s | \boldsymbol{\Theta} \right) = p \left(I_{1,s}, I_{2,s}, \phi_{s} | \psi, \mu, \sigma^2 \right)$ is adopted from \cite{bib:goodman2007speckle} with the following formulation: \begin{multline} p(I_1, I_2, \phi | \psi, \mu, \sigma^2) = \dfrac{1}{16\pi^2 \sigma^4(1-\mu^2)} \\ \times \exp \left[-\dfrac{I_1 + I_2 - 2 \sqrt{I_1 I_2}\mu \cos(\phi - \psi)}{2\sigma^2(1-\mu^2)} \right] \end{multline} $\mathcal{N}(.)$ denotes the non-local estimator, where $\mathcal{N}(\mathbf{g}) = f( \hat{\boldsymbol{\Theta}})$. $\hat{\boldsymbol{\Theta}} = (\hat{\psi}, \hat{\mu}, \hat{\sigma^2})$ represents the estimated parameters, where $\hat{\psi}$ is the estimated interferometric phase, $\hat{\mu}$ is the coherence magnitude, and $\hat{\sigma^2}$ is the variance. $f( \hat{\boldsymbol{\Theta}})$ is the maximum likelihood estimator and the estimated parameters can be formulated as \begin{eqnarray} \hat{\psi} &=& -\arg \left(\sum_s \mathbf{w}_s \mathbf{g}_{1,s} \mathbf{g}_{2,s}^{*} \right) \\ \hat{\mu} &=& \dfrac{2 \sum_s \mathbf{w}_s |\mathbf{g}_{1,s}| |\mathbf{g}_{2,s}|}{\sum_s \mathbf{w}_s \left(|\mathbf{g}_{1,s}|^2 + |\mathbf{g}_{2,s}|^2 \right)} \\ \hat{\sigma^2} &=& \dfrac{\sum_s \mathbf{w}_s \left(|\mathbf{g}_{1,s}|^2 + |\mathbf{g}_{2,s}|^2 \right)}{4 \sum_s \mathbf{w}_s} \end{eqnarray} The patch size and the search window size are set to 7 $\times$ 7 and 21 $\times$ 21, respectively, based on an experimental study; similar values are also reported in other works \cite{bib:deledalle2011nl} \cite{bib:zhu2018potential}. Each pixel represents 2.17 m in azimuth and 1.36 m in range.
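A minimal sketch of the three weighted estimators above (array contents and weights are illustrative; in practice the weights come from the patch-wise similarity measurement):

```python
import numpy as np

def wmle(g1, g2, w):
    """Weighted ML estimates of interferometric phase psi, coherence
    magnitude mu and variance sigma^2 from two co-registered complex
    SAR images (candidate pixels g1, g2 with non-negative weights w)."""
    psi = -np.angle(np.sum(w * g1 * np.conj(g2)))
    den = np.sum(w * (np.abs(g1) ** 2 + np.abs(g2) ** 2))
    mu = 2.0 * np.sum(w * np.abs(g1) * np.abs(g2)) / den
    sigma2 = den / (4.0 * np.sum(w))
    return psi, mu, sigma2

# noise-free check: g2 is g1 with a constant phase offset of 0.5 rad
rng = np.random.default_rng(0)
g1 = np.exp(1j * rng.uniform(-np.pi, np.pi, 100))   # unit-magnitude pixels
g2 = g1 * np.exp(1j * 0.5)
psi, mu, sigma2 = wmle(g1, g2, np.ones(100))
print(round(psi, 3), round(mu, 3), round(sigma2, 3))   # 0.5 1.0 0.5
```

With identical images up to a constant phase offset, the estimator recovers that offset exactly, unit coherence, and half the mean intensity as variance.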
\begin{figure*} \centering \subfloat[]{\includegraphics[width=0.48\textwidth]{crlb_simulation_single_svd_cs_5_r1}} \subfloat[]{\includegraphics[width=0.48\textwidth]{crlb_simulation_single_svd_345_r1}} \caption{Monte Carlo simulations of a single scatterer with SNR in [0, 30] dB. The x-axis presents $N \cdot \mathrm{SNR}$ in dB; the y-axis is the normalized accuracy $\sigma_s / \rho_s$. (a) Comparison of different spectral estimators with five acquisitions: SVD (red solid line), CS (blue solid line), CRLB (black dash-dotted line). (b) Comparison of SVD with three to five acquisitions: $N=3$ (red solid line), $N=4$ (blue solid line), $N=5$ (green solid line), CRLB (black dash-dotted line). The vertical black dash-dotted line indicates the estimation accuracy for $N \cdot \textrm{SNR} = 11$ dB. The red, blue and green markers represent $N = 3,4,5$, respectively. } \label{fig:simulation_single_scatterer} \end{figure*} \subsection{Spectral Estimation} After the non-local procedure, spectral estimation is applied. The most relevant spectral estimation algorithms, singular value decomposition (SVD) \cite{bib:fornaro2005three} \cite{bib:zhu2010very} and compressive sensing (CS), are introduced in the following.
\begin{itemize} \item SVD: \begin{equation} \hat{\mathbf{X}} = \left( \mathbf{R}^{\textrm{H}} \mathbf{C}_{\varepsilon\varepsilon}^{-1} \mathbf{R} + \mathbf{C}_{XX}^{-1} \right)^{-1} \mathbf{R}^{\textrm{H}} \mathbf{C}_{\varepsilon\varepsilon}^{-1} \mathcal{N}(\mathbf{g}) \end{equation} \item CS: \begin{equation} \hat{\mathbf{X}} = \arg \min_{\mathbf{X}} \{ \Vert \mathbf{R}\mathbf{X} - \mathcal{N}(\mathbf{g}) \Vert^2_2 + \lambda \Vert \mathbf{X} \Vert_1 \} \label{equ:opt_nll1lsp} \end{equation} \end{itemize} where $\mathbf{C}_{\varepsilon\varepsilon}$ is the noise covariance matrix, defined as: \begin{equation} \mathbf{C}_{\varepsilon\varepsilon} = E \left\{ \left( \mathbf{g} - \mathbf{R}\mathbf{X} \right) \left( \mathbf{g} - \mathbf{R}\mathbf{X} \right)^\mathrm{H} \right\} \end{equation} Under the assumption that the model errors are circular Gaussian distributed with zero mean, the noise covariance matrix becomes $\mathbf{C}_{\varepsilon\varepsilon} = |\sigma_{\varepsilon}|^2 \mathbf{I}$, where $|\sigma_{\varepsilon}|^2$ is the noise power level. $\mathbf{C}_{XX}$ is the covariance matrix of the prior; if the prior is assumed to be white, $\mathbf{C}_{XX} = \mathbf{I}$. The choice among different combinations of spectral estimators depends on the required accuracy, the computational time and other factors. We follow the procedure proposed in \cite{bib:wang2014efficient}. It consists of three parts: (1) an efficient low-order spectral estimation; (2) the discrimination of the number of scatterers; (3) an accurate high-order spectral estimation. The elevation profile is first estimated by an efficient low-order spectral estimator in order to discriminate the number of scatterers in each resolution cell. Then, the CS-based approach is applied to the pixels that contain multiple scatterers. This decreases the number of pixels requiring $L_1$ minimization and thus reduces the computational cost.
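As a numerical illustration of the SVD-type estimator above, the following sketch inverts a simulated micro-stack for a single scatterer. All values are illustrative: $\mathbf{C}_{XX} = \mathbf{I}$, $\mathbf{C}_{\varepsilon\varepsilon} = |\sigma_{\varepsilon}|^2\mathbf{I}$ with an assumed noise power, and baselines roughly patterned after Table \ref{tab:realData_char1}; the CS branch would additionally require an $L_1$ solver, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(1)
lam, r = 0.031, 698e3                            # wavelength [m], range [m]
b = np.array([184.4, 171.9, 32.3, -2.8, 9.3])    # bistatic baselines [m]
dk = -4.0 * np.pi * b / (lam * r)                # baseline wavenumbers
s = np.arange(-50.0, 201.0, 1.0)                 # elevation grid [m]
R = np.exp(-1j * np.outer(dk, s))                # N x L sensing matrix

# simulated single scatterer at 40 m elevation plus a little noise
g = np.exp(-1j * dk * 40.0) + 0.05 * (rng.standard_normal(b.size) +
                                      1j * rng.standard_normal(b.size))

# (R^H C^-1 R + I)^-1 R^H C^-1 g  with  C = sig_eps2 * I  (assumed noise power)
sig_eps2 = 100.0
x_hat = np.linalg.solve(R.conj().T @ R / sig_eps2 + np.eye(s.size),
                        R.conj().T @ g / sig_eps2)
print(s[np.argmax(np.abs(x_hat))])               # peak close to 40 m
```

With only five wavenumber samples the recovered profile is heavily smoothed around the true elevation, which is why the model selection step below is needed to extract discrete scatterers from it.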
Furthermore, the remaining pixels can be efficiently solved by the randomized blockwise proximal gradient method \cite{bib:shi2018fast}. \subsection{Model Selection} The abovementioned spectral estimators retrieve a nonparametric reflectivity profile. Since our data covers an urban area, we assume that only a few dominant scatterers exist along the reflectivity profile. Therefore, the number of scatterers $\hat{K}$ in one azimuth-range pixel, as well as their elevations, is estimated by a model order selection algorithm \cite{bib:zhu2010very}. The estimator can be expressed as follows: \begin{equation} \hat{K} = \arg \min_{K} \left\{ -2 \ln p \left( \mathbf{g}| \boldsymbol{\theta}\right) + 2 C(K) \right\} \end{equation} where $C(K)$ is a model complexity penalty term which prevents more complicated models from overfitting the observed data. The classical penalized likelihood criteria are the Bayesian information criterion (BIC), the Akaike information criterion (AIC), and the minimum description length (MDL) principle \cite{bib:lombardini2005model}. As mentioned in \cite{bib:zhu2010very}, the model order selection criterion has to be chosen according to experiments for the particular situation, because it is difficult to remove the bias of the selection. \subsection{Robust Height Estimation} To tackle possible remaining outliers in the height estimates, the final height is fused from the results of multiple neighbouring pixels as a post-processing step. Instead of simple averaging, the height is adjusted robustly using an \textit{M-estimator}: rather than minimizing the sum of squared residuals, as in averaging, the M-estimator minimizes the sum of a customized function $\rho\left(\centerdot\right)$ of the residuals: \begin{equation} \tilde{s}=\mathop{\arg}\underset{\textit{s}}{\mathop{\min}}\sum\limits_{i}{\rho\left(\hat{s}_i- s\right)}, \label{eq:M_estimator} \end{equation} where $\hat{s}_i$ is the elevation estimate of the $i$th neighbouring pixel.
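A minimal sketch of Eq. (\ref{eq:M_estimator}) solved by iteratively reweighted averaging, using Tukey's biweight weighting (one common choice of $\rho$, formulated below; the cut-off $c_r$ and the sample heights are illustrative):

```python
import numpy as np

def tukey_w(x, c_r):
    """Tukey biweight weighting: w(x) = (1 - x^2/c_r^2)^2 for |x| < c_r, else 0."""
    w = (1.0 - (x / c_r) ** 2) ** 2
    w[np.abs(x) >= c_r] = 0.0
    return w

def robust_height(h, c_r=4.685, n_iter=20):
    """M-estimate of a fused height from neighbouring estimates h [m],
    via iteratively reweighted averaging started from the median."""
    h_tilde = np.median(h)
    for _ in range(n_iter):
        w = tukey_w(h - h_tilde, c_r)
        h_tilde = np.sum(w * h) / np.sum(w)
    return h_tilde

heights = np.array([20.1, 19.8, 20.3, 20.0, 19.9, 45.0])  # one gross outlier
print(round(robust_height(heights), 2))   # ~20.0: the outlier is rejected
```

The gross outlier receives zero weight, so the fused height is essentially the mean of the inliers.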
It is shown in \cite{bib:wang2016robust} that the closed-form solution of Eq. (\ref{eq:M_estimator}) is simply a weighted average of the heights of the neighbouring pixels. If the derivative of $\rho\left(x\right)$ exists, the weighting function can be expressed as \begin{equation} w\left(x\right)=\frac{\partial\rho\left(x\right)}{x\partial x} \end{equation} The robustly estimated height can then be written as \begin{equation} \tilde{h} = \dfrac{\sum \limits_{i} w(x_i) \cdot \hat{h}_i}{\sum \limits_{i} w(x_i)} \end{equation} where $\hat{h}_i =\hat{s}_i \cdot \sin\theta$, and $\theta$ is the incidence angle. The choice of the weighting function depends on the distribution of the heights. Without prior knowledge of the distribution, promising robust weighting functions are Tukey's biweight or t-distributed weighting \cite{bib:wang2016robust}. For instance, Tukey's biweight loss function can be written as \begin{equation} \rho(x) = \left\{ \begin{array}{ccl} -\frac{\left(c_r^2-x^2\right)^3}{6c_r^4} + \frac{c_r^2}{6} & & {|x| < c_r}\\ \frac{c_r^2}{6} & & {\mathrm{elsewhere}} \end{array} \right. \end{equation} and the corresponding weighting function is \begin{equation} w(x) = \left\{ \begin{array}{ccl} 1 - \frac{2x^2}{c_r^2} + \frac{x^4}{c_r^4} & & {|x| < c_r}\\ 0 & & {\mathrm{elsewhere}} \end{array} \right. \end{equation} \section{Estimation accuracy of TomoSAR with small stacks} This section discusses the theoretical 3-D reconstruction accuracy of a micro-stack with 3 - 5 interferograms. It is exhaustively shown in \cite{bib:zhu2012super} that the elevation estimation accuracy and the SR power depend asymptotically on the product $N \cdot \textrm{SNR}$. In this section, we investigate the estimation accuracy of TomoSAR with an extremely small number of interferograms, namely 3 to 5.
\begin{figure*} \centering \subfloat[]{\includegraphics[width=0.33\textwidth]{crlb_simulation_double_ifg5_snr10_example00_r1}} \subfloat[]{\includegraphics[width=0.33\textwidth]{crlb_simulation_double_ifg5_snr10_svd00_r1}} \subfloat[]{\includegraphics[width=0.33\textwidth]{crlb_simulation_double_ifg5_snr10_cs00_r1}} \caption{Monte Carlo simulations of double scatterers with different normalized distances $\kappa \in [0.1, 1.5]$ and $\textrm{SNR} = 10$ dB. The x-axis represents the normalized true distance $\kappa$ between the simulated facade and ground; the y-axis the normalized estimated distance $\hat{\kappa}$. The blue dot markers denote the estimated location of the facade and the error bars indicate the standard deviation of the estimates, whereas the red dot markers represent the estimated location of the ground. The green dots indicate that the detection rate of double scatterers is below 5\% and denote the corresponding single-scatterer estimate. (a) Illustration (b) SVD (c) CS. } \label{fig:simulation_double_scatterers} \end{figure*} \subsection{The Lower Bound for Micro-stacks} In the case of pixel-wise TomoSAR inversion, i.e. without spatial averaging, each of our $N$ bistatic pairs contains three pieces of information, as mentioned before. If we want to reconstruct elevation profiles containing $M$ discrete scatterers, we need to infer $3M$ parameters, i.e. elevation, magnitude and phase for each scatterer. Hence, an absolute lower bound on the micro-stack size is $N \geq M$. Distributed scatterers, on the other hand, are characterized by only two parameters each: elevation and backscatter coefficient. Likewise, each interferogram provides only two pieces of information, magnitude and phase (difference). Since our goal is 3-D reconstruction based on bistatic data, we disregard motion-induced phase here. Hence, also in this case the absolute lower limit is $N \geq M$.
This limit is only a necessary condition; it is not sufficient from a robustness point of view, because of ambiguities in the inversion cost functions. For 3-D urban mapping, the single and double scattering cases are the dominant ones. We investigate the cases $N = 3 - 5$ in this paper, because these are close to the mentioned limits and are relevant for TanDEM-X. \subsection{CRLB} It is demonstrated in \cite{bib:zhu2010very} that the Cram\'er-Rao lower bound (CRLB) of the elevation estimate for a single scatterer can be expressed as: \begin{equation} \sigma_{s} = \dfrac{\lambda r}{4 \pi \cdot \sigma_b \cdot \sqrt{2 \cdot \textrm{SNR} \cdot N}} \label{eq:crlb} \end{equation} where $\sigma_b$ is the standard deviation of the baseline distribution, $N$ is the number of interferograms, and SNR is the signal-to-noise ratio. For the double-scatterer case, the CRLB can be written as: \begin{equation} \sigma_{s_q} = c_0 \cdot \sigma_{s_q,0} \end{equation} where $\sigma_{s_q,0}$ represents the CRLB on the elevation estimate of the $q$th scatterer without interference from the others. $c_0$ is the correction factor accounting for the interference of closely located scatterers \cite{bib:zhu2012super}. It is nearly independent of $N$ and SNR and can be written as: \begin{equation} c_0 = \max \left \{ \sqrt{\dfrac{40\kappa^{-2}(1-\kappa/3)}{9-6(3-2\kappa) \cos(2 \Delta \varphi)+(3-2\kappa)^2}}, 1 \right \} \end{equation} where $\Delta \varphi$ is the phase difference of the two scatterers and $\kappa$ is the normalized distance between the two scatterers (defined in the next section). Since $\Delta \varphi$ is a random variable, an approximate formulation of $c_0$ can be calculated by integrating the variances over $\Delta \varphi$: \begin{equation} c_0 = \max \left \{ 2.57(\kappa^{-1.5} - 0.11)^2 + 0.62, 1 \right \} \end{equation} A note on the baseline distribution is worth mentioning: Eqs. (21)-(24) hold for large stacks.
In micro-stacks with only 3 - 5 acquisitions, the baseline distribution may be unfavorable, even if the baseline spread $\sigma_b$ is acceptable. For example, if two baselines were very similar, the information content would be reduced. It is therefore desirable to have baselines that are as uniformly distributed as possible. \begin{figure*} \centering \subfloat[]{\includegraphics[width=0.95\textwidth]{Tomo1_crop2_marked.png}} \hfil \subfloat[]{\includegraphics[width=0.95\textwidth]{munich_tdm_view1_crop_marked.png}} \caption{Visual comparison of NL-TomoSAR point clouds and TanDEM-X DEM over Munich, Germany. Color code: 565 m (blue) – 596 m (red), scene size: 15 km $\times$ 9 km, north = top. The violet bounding box indicates the region of interest (ROI) over the area of the European bureau of patent and the white bounding box indicates the ROI near Munich central station. (a) Point clouds generated by NL-TomoSAR with five interferograms. (b) TanDEM-X DEM.} \label{fig:comp_dem_tomosar_large} \end{figure*} \subsection{Monte Carlo Simulations} In this section, we compare different spectral estimators using simulated data. Two cases were considered. The first case considers a single scatterer, in order to explore the effect of $N$ and SNR on the estimation accuracy for micro-stacks and the performance of the different estimators. The second case considers double scatterers, to investigate the estimation accuracy and the super-resolution power of the different estimators. The inherent (Rayleigh) elevation resolution $\rho_s$ is inversely proportional to the maximal elevation aperture $\Delta b$ \cite{bib:zhu2012super}: \begin{equation} \rho_s = \dfrac{\lambda r}{2\Delta b} \end{equation} The normalized distance is defined as \begin{equation} \kappa = \dfrac{s}{\rho_s} \end{equation} For the first test case, only one scatterer is placed at $s = 0$, and the SNR is in the range between 0 and 30 dB.
For each $N \cdot \textrm{SNR}$ value, 100 different baseline distributions were generated. We carried out a Monte Carlo simulation with 10,000 realizations for each baseline distribution. Afterwards, the CRLB was evaluated by averaging over the 100 different baseline distributions. Fig. \ref{fig:simulation_single_scatterer} (a) shows a performance comparison between SVD and CS on simulated data with five acquisitions for a single scatterer. The x-axis presents $N \cdot \mathrm{SNR}$ in dB; the y-axis is the normalized accuracy $\sigma_s / \rho_s$. As one can see, both approaches have similar estimation accuracy: they asymptotically approach the CRLB and coincide with it when $N \cdot \mathrm{SNR}$ is large. More interesting is the case when $N \cdot \textrm{SNR}$ is fixed, leaving $N$ as a variable. Fig. \ref{fig:simulation_single_scatterer} (b) presents the estimation accuracy of SVD with $N = 3,4,5$. It shows that the estimation error for $N=3$ is the smallest, which indicates that SNR carries more weight than $N$ for the estimation accuracy when $N$ is very small. With the 3-D reconstruction accuracy of the single scatterer analyzed, we switch to the double-scatterer case. In the simulation, the elevation of one scatterer is fixed at 0, and the normalized elevation of the other scatterer is increased from 0.1 to 1.5, in order to mimic the layover of a ground layer and a facade layer. The number of acquisitions is set to $N = 3-5$, as in the first simulation. The SNR is set to 10 dB, since the SNR of TanDEM-X bistatic data is usually higher than this value in urban areas \cite{bib:zhu2012sparse}. The Monte Carlo simulation result is shown in Fig. \ref{fig:simulation_double_scatterers}. The x-axis represents the true normalized elevation distance $\kappa$ between the simulated facade and ground layers; the y-axis the estimated normalized elevation distance $\hat{\kappa}$. The two solid lines in Fig.
\ref{fig:simulation_double_scatterers} (a) (b) (c) represent the true positions of the building facade and the ground, respectively. The dashed lines indicate the true position plus and minus the CRLB. The blue bars and dots show the standard deviation and the mean of the estimated elevation of the facade scatterers, whereas the red ones represent those of the ground scatterers. The green dots indicate that the detection rate of double scatterers is below 5\% and denote the corresponding single-scatterer estimate. Fig. \ref{fig:simulation_double_scatterers} (b) and (c) show the results estimated by SVD and CS, respectively. As one can see, the SVD result has a larger bias and a slightly bigger standard deviation than CS. Compared to SVD, CS gives better results in terms of both estimation accuracy and super-resolution power: SVD has hardly any super-resolution power and can only distinguish double scatterers separated by at least one Rayleigh resolution unit $\rho_s$, whereas CS can resolve separations down to 0.6 $\rho_s$. \section{Practical Demonstration} \subsection{Data Description} We make use of a stack of five co-registered TanDEM-X bistatic interferograms to evaluate the proposed algorithm. The dataset covers Munich, Germany, with a slant-range resolution of 1.8 m and an azimuth resolution of 3.3 m. The images were acquired from July 2016 to April 2017. The most pertinent parameters of the TanDEM-X bistatic stripmap acquisitions of Munich are listed in Table \ref{tab:realData_char} and Table \ref{tab:realData_char1}. All preprocessing steps, like deramping, are standard and known from bistatic forest tomography; interested readers are referred to \cite{bib:reigber2000first}.
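Before looking at the results, it is instructive to plug the stack parameters into the single-scatterer CRLB of Eq. (\ref{eq:crlb}); the SNR values below are assumed, not measured:

```python
import numpy as np

lam, r = 0.031, 698e3                     # wavelength and range (Table I)
b = np.array([184.40, 171.92, 32.30, -2.78, 9.30])   # baselines (Table II)
N = b.size
sigma_b = np.std(b)                       # baseline spread, about 82 m

def crlb_elevation(snr_db):
    """Single-scatterer elevation CRLB sigma_s [m] for this stack."""
    snr = 10.0 ** (snr_db / 10.0)
    return lam * r / (4.0 * np.pi * sigma_b * np.sqrt(2.0 * snr * N))

for snr_db in (5, 10, 15):
    print(f"SNR = {snr_db:2d} dB: sigma_s = {crlb_elevation(snr_db):.2f} m")
```

At 10 dB SNR this gives an elevation standard deviation of about 2.1 m, i.e. roughly 1.6 m in height after multiplication by $\sin\theta$, which is consistent with the overall 1.96 m standard deviation reported in Section V.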
\begin{table}[!ht] \begin{center} \caption{Parameters of the TanDEM-X Stripmap Acquisitions of Munich} \label{tab:realData_char} \begin{tabular}{lll} \toprule Name & Symbol & Value\\ \midrule Distance from the scene center & $r$ & 698 km \\ \rule{0pt}{4ex}Wavelength & $\lambda$ & 3.1 cm\\ \rule{0pt}{4ex}Incidence angle at scene center & $\theta$ & $50.4^{\circ}$\\ \rule{0pt}{4ex}Maximal elevation aperture & $\Delta b$ & 187.18 m\\ \rule{0pt}{4ex}Number of interferograms & $N$ & 5\\ \bottomrule \end{tabular} \end{center} \end{table} \begin{table}[!ht] \begin{center} \caption{Detailed Information of the TanDEM-X Stripmap Acquisitions of the Used Dataset} \label{tab:realData_char1} \begin{tabular}{cccc} \toprule No. & Date & Baseline [m] & Height Ambiguity [m/cycle]\\ \midrule 1 & 2016-07-25 & 184.40 & 50.30 \\ 2 & 2016-09-07 & 171.92 & 54.01 \\ 3 & 2017-02-19 & 32.30 & 286.03 \\ 4 & 2017-04-26 & -2.78 & -8710.99 \\ 5 & 2017-07-01 & 9.30 & 1073.03 \\ \bottomrule \end{tabular} \end{center} \end{table} \subsection{Visual Comparison with TanDEM-X raw DEM} In this work, the TanDEM-X raw DEM of the test area, formed from two TanDEM-X bistatic acquisitions using the Integrated TanDEM-X Processor (ITP), is adopted for visual comparison with the TomoSAR point clouds. \begin{figure}[!ht] \centering \subfloat[]{\includegraphics[width=0.24\textwidth]{pa_nltomosar_1.png}} \hfil \subfloat[]{\includegraphics[width=0.24\textwidth]{pa_dem_1.png}} \caption{Visual comparison of NL-TomoSAR point clouds and TanDEM-X DEM, close-up 3-D view over the area of the European bureau of patent. (a) TomoSAR point clouds. (b) TanDEM-X DEM.} \label{fig:comp_dem_tomosar_patentamt} \end{figure} \begin{figure}[!ht] \centering \subfloat[]{\includegraphics[width=0.5\textwidth]{munich_tdm_nl_view3.png}} \hfil \subfloat[]{\includegraphics[width=0.5\textwidth]{munich_tdm_view3.png}} \caption{Visual comparison of NL-TomoSAR point clouds and TanDEM-X DEM, close-up 3-D view over the area of Munich central station.
(a) TomoSAR point clouds. (b) TanDEM-X DEM.} \label{fig:comp_dem_tomosar_small} \end{figure} \begin{figure*} \centering \subfloat[]{\includegraphics[width=0.3\textwidth]{b1.jpg}} \hfil \subfloat[]{\includegraphics[width=0.3\textwidth]{b2.jpg}} \hfil \subfloat[]{\includegraphics[width=0.3\textwidth]{b3.jpg}} \vfil \subfloat[]{\includegraphics[width=0.3\textwidth]{b4.jpg}} \hfil \subfloat[]{\includegraphics[width=0.3\textwidth]{b5.jpg}} \hfil \subfloat[]{\includegraphics[width=0.3\textwidth]{b6.jpg}} \vfil \subfloat[]{\includegraphics[width=0.3\textwidth]{b7.jpg}} \hfil \subfloat[]{\includegraphics[width=0.3\textwidth]{b8.jpg}} \hfil \subfloat[]{\includegraphics[width=0.3\textwidth]{b9.jpg}} \caption{ Optical images of the nine test sites for the quantitative comparison of NL-TomoSAR point clouds and TanDEM-X DEM. (a) Munich central station. (b) European bureau of patent. (c) Technical University of Munich. (d) Railway signal light stand. (e) Train repair garage. (f) Residential building between two bridges. (g) Munich University of Applied Sciences. (h) Residential building near Lowenbrau beer company. (i) Karstadt (shopping mall).} \label{fig:optical_comp9} \end{figure*} A top view of the reconstructed TomoSAR point cloud is shown in Fig. \ref{fig:comp_dem_tomosar_large} (a). The black regions in the figure are areas where the pixels are not coherent. The corresponding area of the TanDEM-X raw DEM is presented in Fig. \ref{fig:comp_dem_tomosar_large} (b) as a comparison. It is clear that the TomoSAR point cloud preserves more detailed building structures, and the road layer is also better represented in the TomoSAR result. In Fig. \ref{fig:comp_dem_tomosar_large} (b), the flat ground surface is well reconstructed, but for complex or high-rise buildings the accuracy is compromised. Consider, for instance, the building of the European bureau of patent in the bottom right (red color) along the Isar river. A close view of this building can be seen in Fig.
\ref{fig:comp_dem_tomosar_patentamt}. Due to the complex building structure, as well as the multilooking processing, the TanDEM-X raw DEM merges several buildings together and exhibits lower accuracy in the building heights. As another example, Fig. \ref{fig:comp_dem_tomosar_small} shows the visual comparison over the area around Munich central station. It is clear that the NL-TomoSAR result shows more detailed structures, such as the bridge, the central station, and the roads. \section{Quantitative Validation} In this section, we quantitatively compare the TomoSAR point clouds with the TanDEM-X raw DEM, as well as with a much more precise LiDAR reference. The LiDAR dataset of Munich is provided by the Bavarian State Office for Survey and Geoinformation with ten centimeter accuracy \cite{bib:lidar2017munich}. Since the TomoSAR point cloud is expressed with respect to a reference point chosen during the TomoSAR processing, its location is not given in a geographic coordinate system. We therefore coregistered the TomoSAR point cloud with the DEM and the LiDAR point cloud. In addition, in order to compare the point clouds with the DEM, we rasterize the two point clouds. These preparatory steps are briefly explained in this section. \subsection{Geocoding} Since the result of TomoSAR inversion is a 3-D point cloud in range-azimuth coordinates, the first step is to transform the result to Universal Transverse Mercator (UTM) coordinates with the range-Doppler approach \cite{bib:schwabisch1998fast}. \begin{table*} \centering \caption{Statistics of the quantitative comparison of nine test structures. The first column shows the number of each structure. The second column indicates the source of each result, i.e., T (TomoSAR), L (LiDAR), D (DEM). The third and fourth column groups present the statistics (min, max, standard deviation and mean) of the sample points at the top and bottom layers.
The fifth column gives the relative height of each structure, calculated as the mean value of the top layer minus the mean value of the bottom layer. The sixth column shows the absolute difference of these relative heights between the TomoSAR point cloud and the LiDAR data, and between the TanDEM-X raw DEM and the LiDAR data.} \label{tab:statistics_comp9} \begin{tabular}{c|c|cccc|cccc|c|c} \toprule \toprule \multirow{2}{*}{Structures} & \multirow{2}{*}{Sources} & \multicolumn{4}{c|}{Top} & \multicolumn{4}{c|}{Bottom} & \multirow{2}{*}{Height} & \multirow{2}{*}{Absolute Height Difference} \\ \cmidrule(lr){3-6} \cmidrule(lr){7-10} \multirow{2}{*}{} & & Min & Max & Std & Mean & Min & Max & Std & Mean & \\ \midrule \multirow{3}{*}{Structure 1} & T & -3.76 & 2.59 & 1.22 & -0.75 & -19.30 & -12.87 & 1.15 & -15.26 & 14.51 & \textbf{0.69}\\ & L & - & - & - & 539.01 & - & - & - & 525.19 & 13.82 & \textbf{-}\\ & D & 583.19 & 597.58 & 2.32 & 587.91 & 566.16 & 570.15 & 2.01 & 568.06 & 19.84 & \textbf{6.02}\\ \midrule \multirow{3}{*}{Structure 2} & T & 20.39 & 22.62 & 0.56 & 21.39 & -27.85 & -22.62 & 1.18 & -25.30 & 46.70 & \textbf{0.75}\\ & L & - & - & - & 559.09 & - & - & - & 513.14 & 45.95 & \textbf{-}\\ & D & 598.57 & 642.43 & 8.35 & 624.99 & 556.61 & 583.46 & 4.45 & 574.83 & 50.16 & \textbf{4.21}\\ \midrule \multirow{3}{*}{Structure 3} & T & 13.84 & 17.04 & 0.97 & 15.32 & -25.52 & -21.56 & 1.09 & -23.67 & 38.49 & \textbf{0.90}\\ & L & - & - & - & 552.97 & - & - & - & 515.38 & 37.59 & \textbf{-} \\ & D & 594.31 & 599.13 & 2.12 & 596.15 & 562.03 & 569.28 & 1.60 & 565.18 & 30.97 & \textbf{6.62}\\ \midrule \multirow{3}{*}{Structure 4} & T & -4.61 & -1.90 & 0.74 & -2.91 & -13.84 & -10.35 & 0.83 & -12.05 & 9.14 & \textbf{0.67}\\ & L & - & - & - & 535.04 & - & - & - & 526.57 & 8.47 & \textbf{-}\\ & D & 582.41 & 584.94 & 0.84 & 584.06 & 572.76 & 574.96 & 0.48 & 573.42 & 10.64 & \textbf{2.17}\\ \midrule \multirow{3}{*}{Structure 5} & T & -3.95 & -0.96 & 0.63 & -2.44 & -15.94 & -13.67 & 0.54 & -14.61 &
12.17 & \textbf{0.96}\\ & L & - & - & - & 535.62 & - & - & - & 524.41 & 11.21 & \textbf{-}\\ & D & 583.26 & 587.26 & 0.96 & 584.77 & 572.71 & 578.98 & 1.22 & 575.58 & 9.19 & \textbf{2.02}\\ \midrule \multirow{3}{*}{Structure 6} & T & 10.42 & 13.47 & 0.49 & 12.11 & -16.05 & -14.31 & 0.82 & -15.61 & 27.72 & \textbf{0.67}\\ & L & - & - & - & 551.73 & - & - & - & 523.34 & 28.39 & \textbf{-}\\ & D & 587.23 & 594.29 & 2.12 & 589.76 & 567.22 & 570.82 & 1.32 & 569.36 & 20.4 & \textbf{7.99}\\ \midrule \multirow{3}{*}{Structure 7} & T & 2.71 & 6.65 & 0.87 & 4.99 & -27.57 & -21.32 & 1.41 & -24.25 & 29.24 & \textbf{0.60}\\ & L & - & - & - & 548.01 & - & - & - & 519.37 & 28.64 & \textbf{-}\\ & D & 574.71 & 597.30 & 5.21 & 588.27 & 563.98 & 571.78 & 2.26 & 568.09 & 20.18 & \textbf{8.46}\\ \midrule \multirow{3}{*}{Structure 8} & T & 0.06 & 6.42 & 1.34 & 4.06 & -20.96 & -20.27 & 0.18 & -20.69 & 24.75 & \textbf{0.94} \\ & L & - & - & - & 542.96 & - & - & - & 517.27 & 25.69 & \textbf{-}\\ & D & 584.73 & 598.09 & 3.36 & 590.40 & 566.26 & 574.70 & 2.62 & 570.12 & 20.28 & \textbf{5.41} \\ \midrule \multirow{3}{*}{Structure 9} & T & -7.53 & -6.73 & 0.16 & -7.41 & -23.13 & -22.57 & 0.29 & -22.85 & 15.44 & \textbf{0.89}\\ & L & - & - & - & 530.39 & - & - & - & 515.84 & 14.55 & \textbf{-}\\ & D & 580.67 & 581.22 & 0.11 & 580.97 & 567.14 & 573.42 & 1.56 & 569.3 & 11.67 & \textbf{2.88}\\ \bottomrule \bottomrule \end{tabular} \end{table*} \subsection{Coregistration of different point clouds} When the TomoSAR point cloud is transformed to UTM coordinates, its position may differ from the ground truth since the height of the reference point is unknown. Hence, the alignment of the different point clouds is necessary. The most popular 3-D point cloud registration algorithm is the iterative closest point (ICP) approach \cite{bib:zhang1994iterative}. The performance of ICP depends on the initial alignment.
Hence, a coarse alignment is applied before ICP, which consists of three steps: (1) an edge image is extracted by an edge detector, such as the Sobel operator \cite{bib:sobel2017isotropic}; (2) the two edge images are coregistered horizontally using their cross-correlation; (3) the vertical coregistration uses the cross-correlation of the two height histograms. After the coarse alignment, ICP can be applied for the fine alignment \cite{bib:wang2017fusing}. \subsection{Object-based raster data generation} A direct comparison of TomoSAR and LiDAR points is not feasible, as both the center positions and the footprints of two corresponding points (one TomoSAR and one LiDAR point) differ. Consequently, to compare the two datasets, an object-based raster needs to be generated using geographic information system (GIS) data. \subsection{Comparison of individual structures} In order to evaluate the estimation accuracy, nine test sites with high average SNR have been chosen for an individual quantitative comparison. Fig. \ref{fig:optical_comp9} shows the optical images of the nine test sites for the quantitative comparison of the NL-TomoSAR point clouds and the TanDEM-X DEM. They are (1) Munich central station, (2) the European Patent Office, (3) the Technical University of Munich, (4) a railway signal light stand near Hirschgarten, (5) a train repair garage near Hirschgarten, (6) a residential building between two bridges (Hackerbr{\"u}cke and Donnersbergerbr{\"u}cke), (7) Munich University of Applied Sciences, (8) a residential building near the L{\"o}wenbr{\"a}u brewery, and (9) Karstadt (a shopping mall). The summary of the results is shown in Tab. \ref{tab:statistics_comp9}. From Tab. \ref{tab:statistics_comp9} we can see that the height differences between the TomoSAR results and the LiDAR data are within one meter, whereas the height differences between the TanDEM-X DEM product and the LiDAR data vary from about 2 m to 8.5 m. 
The standard deviations show a similar behavior: up to 1.4 m for NL-TomoSAR and up to 8.4 m for the TanDEM-X DEM. \subsection{Average accuracy} In order to assess the overall accuracy at city scale, we compared all 36,499 buildings in the area with the LiDAR point cloud. 38.7\% of the buildings are within 1 m accuracy, and 62.8\% are within 2 m accuracy. A detailed distribution of the accuracy is listed in Tab. \ref{tab:statistics_whole}. However, the two datasets (TanDEM-X CoSSC and LiDAR) were acquired at different times, and it is almost certain that changes occurred during this period. Therefore, in order to obtain a more realistic assessment, we truncated the distribution of the height differences at $\pm \textrm{15 m}$. The truncated histogram can be seen in Fig. \ref{fig:comp_munich_hist}. 34,054 buildings remain after the truncation. Their overall standard deviation is 1.96 m. \begin{table}[!ht] \begin{center} \caption{Statistics of quantitative comparison of the whole city} \label{tab:statistics_whole} \begin{tabular}{cc} \toprule Percentage of buildings & Estimation accuracy \\ \midrule 38.7\% & within 1 m \\ 62.8\% & within 2 m \\ 93.3\% & within 15 m \\ \bottomrule \end{tabular} \end{center} \end{table} \begin{figure}[!ht] \centering \includegraphics[width=0.5\textwidth]{comp_munich_hist_rb_r1} \caption{Histogram of height differences of structures in the whole Munich area.} \label{fig:comp_munich_hist} \end{figure} \section{Conclusion} A new SAR tomographic inversion framework tailored for a very limited number of measurements is proposed in this paper. A systematic investigation of the estimation accuracy of TomoSAR with micro-stacks is carried out using simulated data. Our experiments show that SVD- and CS-based methods have almost identical performance on the estimation of single scatterers, and that the SNR plays a more important role than $N$ for the estimation accuracy when $N$ is small. 
For the estimation of double scatterers, the CS-based approach outperforms the other spectral estimators. Experiments using TanDEM-X bistatic data show that a relative height accuracy of 2 m can be achieved at large scale. This demonstrates that the proposed framework is a promising solution for high-quality, large-scale 3-D urban mapping.
\section{Introduction} Nearly one year ago, the Event Horizon Telescope (EHT) Collaboration released the first image of the supermassive black hole located at the center of the M87 galaxy \cite{Akiyama1,Akiyama2,Akiyama3,Akiyama4,Akiyama5,Akiyama6}. This fruitful outcome reveals a fine structure near the black hole horizon and opens up a new window to test the strong-gravity regime. The result indicates that the black hole shadow has a diameter of 42$\pm$3 $\mu$as. Modeling M87* with a Kerr geometry, the observations were found to be well consistent with the prediction of general relativity. However, due to the finite resolution, there is still room for some modified gravities. Future observations, such as the Next Generation Very Large Array \cite{Hughes}, the Thirty Meter Telescope \cite{Sanders}, and the BlackHoleCam \cite{Goddi}, will offer us a good opportunity to peek into the regime of strong gravity and to distinguish between different modified gravities. Meanwhile, more knowledge and information on modified gravities and quantum gravity will be revealed. The EHT observation has restimulated the study of black hole shadows. As we know, when photons emitted from a source fly past a black hole, there are three possible outcomes: they are absorbed by the black hole, they are deflected by the black hole and escape to infinity, or they circle the black hole loop after loop. The photons that escape from the black hole illuminate the sky of an observer, while the photons absorbed by the black hole leave a dark zone for the observer. This dark zone is the black hole shadow. As early as 1966, Synge studied the observed angular radius of a Schwarzschild black hole \cite{Synge}. Later, a formula describing the shadow size was given by Luminet \cite{Luminet}. Now it is generally known that the shadow of a spherically symmetric black hole is round. 
For a rotating black hole, however, the shadow is elongated in the direction of the rotation axis due to the frame-dragging effect \cite{Bardeen,Chandrasekhar}. The size and distortion of the shadow have a close relation with the spacetime geometry. Thus, by constructing different observables, a number of papers have examined how these observables depend on the parameters of the black hole spacetime \cite{Hioki,Amarilla2,Johannsen,Ghasemi,Bambi,Amarilla,Stuchlik,Amarilla13,Nedkova, Wei,Tsukamoto,Bambi3,Atamurotov,Mann,WWei,FAtamurotov2,AmirAhmedov,Songbai,Balendra, Tsukamoto2,Jiliang,Rajibul,hou,Cunha,Cunha2,Cunha3,Tsupko,Perlick,Kocherlakota, WangXu,WeiLiu,WeiLiu2,Younsi,cBambi,Akashaa,Abdujabbarov,Shaikh,Chenw,Freese, Konoplyab,Vagnozzib,Zhub,Banerjeeb,Lua,Fengb,Renb,Guob,Zhuc}. There are other related works \cite{Toth,Davoudiasl,Bar,Tian3,Kumarg,Allahyari,Rummel,Kumarw,Narang} that aim to constrain the parameters of black holes or other compact objects from the astronomical observations of M87*. Moreover, such observations are also expected to cast deep insight into modified gravities. In the past few decades, different modified gravity theories have been proposed in attempts to solve fundamental questions, such as quantum gravity and the singularity problem. Among them, Gauss-Bonnet (GB) gravity, which includes higher-curvature corrections, is one of the most promising approaches. It has long been known that, in $d$-dimensional spacetime with $d>4$, there exist static and spherically symmetric black hole solutions different from those of general relativity. In four dimensions, however, no such distinct black hole solution exists, because the GB term is a total derivative and thus has no contribution to the gravitational dynamics. Since the dimension of the observed spacetime is four, it is extremely hard to test the nature of GB gravity through astronomical observations. In Refs. 
\cite{Tomozawa,Cognola}, the authors considered GB gravity in four dimensions by rescaling the GB coupling parameter $\alpha\rightarrow\frac{\alpha}{d-4}$. Very recently, Glavan and Lin \cite{Glavan} reconsidered this issue and proposed a generally covariant modified gravity in four dimensions, in which only the massless graviton propagates. It can also bypass Lovelock's theorem and avoid the Ostrogradsky instability. Taking the limit $d\rightarrow4$, the GB term makes a nontrivial contribution to the gravitational dynamics. A nontrivial and novel four-dimensional static and spherically symmetric black hole solution was then discovered. Such a black hole, in particular, offers us a promising test bed for probing the nature of GB gravity. Subsequently, the quasinormal modes of scalar, electromagnetic, and gravitational perturbations were calculated in \cite{Zinhailo}, where the results show that the damping rate is more sensitive to the GB coupling parameter than the real part of the frequency. A dynamical eikonal instability occurs for larger values of the GB coupling parameter. The shadow cast by the spherically symmetric black hole was examined in Refs. \cite{Zinhailo,Guoli}. The shadow size exhibits a close relation with the coupling parameter. In addition, the radii of the innermost stable circular orbit, the black hole horizon, and the photon sphere are all decreasing functions of the GB coupling parameter \cite{Guoli}. This black hole solution was generalized to the charged case \cite{Fernandes}. The cosmological and black hole solutions arising from this gravity were also discussed in Ref. \cite{Casalino}. Although this novel gravity admits a nontrivial contribution in four dimensions, there are works questioning whether this gravity makes sense in general. Various pathologies were pointed out in Refs.~\cite{LuPang,Ai,Gurses,Mahapatra,Hennigarq,Tian,Arrechea,Aoki,LiuLiuLiu}. For example, in Ref. 
\cite{Gurses}, the authors showed that such a gravity does not admit a description in terms of a covariantly conserved rank-2 tensor in four dimensions. Other references argued that the dimensional regularization procedure is ill-defined in a general spacetime. However, some regularized 4D EGB theories were presented in Refs.~\cite{Fernandes2,Lu1,Hennigar1}. The regularization procedures include introducing an extra GB term constructed from a conformal metric~\cite{Fernandes2,Hennigar1} and compactification of the $D$-dimensional EGB theory~\cite{Lu1}. The resulting gravity has an extra scalar degree of freedom, which allows the field equations to be written in a 4D form. Generally, it is believed that the regularization can be applied to maximally symmetric or spherically symmetric 4D spacetimes, while it may be problematic in the axially symmetric case \cite{Hennigar1}. This approach suggests that the GB gravity belongs to the family of Horndeski gravity. Following Ref.~\cite{Hennigarq}, the treatment was also found to be applicable in spacetimes with lower symmetry, for example the cylindrically symmetric spacetime \cite{LiuLiuLiu}. However, it is worth checking whether this also works for axially symmetric spacetimes. Nevertheless, it is still worthwhile to investigate the novel properties of the black hole solution in this gravity. It is generally believed that almost all astrophysical black holes have spin, so it is worthwhile to study the particular properties of the rotating black hole, which will provide us with opportunities to test the nature of GB gravity using astronomical observations, especially the observations of M87* by the EHT Collaboration. In this paper, we mainly focus on studying the shadow cast by the GB black hole and on constraining the GB coupling parameter. So we first generalize the black hole to a rotating one by using the Newman-Janis (NJ) algorithm~\cite{Janis}. 
Although it was found in Ref.~\cite{Hansen} that the NJ trick is not generally applicable in higher-curvature theories, the solution might describe a black hole with matter fields such as fluids~\cite{Azregainou}. Therefore, it is worthwhile to study the rotating black hole in this GB gravity. Then, by modeling M87* with this rotating GB black hole, we constrain the GB coupling parameter via the observations of the EHT Collaboration. The result reveals that a negative GB coupling parameter is more favored. Another study on the shadow of Einstein-dilaton-Gauss-Bonnet black holes can be found in Ref. \cite{Cunhah}. The paper is organized as follows. In Sec. \ref{foursbh}, we first present the nonrotating GB black hole solution, and then extend it to its rotating counterpart by the NJ algorithm. The different regions of the spacetime in the parameter space are also displayed. Null geodesics and circular photon orbits are given in Sec. \ref{shadows}. In Sec. \ref{observa}, the shadow shapes are exhibited. Based on them, we construct several observables and obtain their behaviors with the spin and the GB coupling parameter, which gives us a preliminary picture of the four-dimensional GB black hole. Compared with the Kerr black hole, positive coupling shrinks the shadow, while negative coupling enlarges it. With these results, we then constrain the GB coupling parameter by calculating the angular diameter of the shadow via the observation of M87* in Sec. \ref{M87}. Finally, the conclusions and discussions are presented in Sec. \ref{Conclusions}. \section{Four-dimensional Gauss-Bonnet black hole} \label{foursbh} In this section, we will focus on the properties of the four-dimensional nonrotating and rotating black holes. \subsection{Non-rotating black hole} \label{nospin} In GB gravity, it is well known that there are static and spherically symmetric black hole solutions in spacetimes with $d\geq5$; for examples, see Refs. 
\cite{Boulware,Wiltshire,caii,Nojiria,Cvetica}. However, in four-dimensional spacetime, the GB term is a total derivative, and thus it has no contribution to the gravitational dynamics. Only recently, a four-dimensional nontrivial black hole solution was discovered by Glavan and Lin \cite{Glavan}. They first rescaled the GB coupling parameter $\alpha\rightarrow\alpha/(d-4)$, then took the limit $d\rightarrow4$, and finally found a static and spherically symmetric black hole solution \cite{Glavan} \begin{eqnarray} ds^2&=&f(r)dt^2-\frac{dr^2}{f(r)}-r^2(d\theta^2+\sin^2\theta d\phi^2),\label{GBme}\\ f(r)&=&1+\frac{r^2}{2\alpha}\left(1-\sqrt{1+\frac{8\alpha M}{r^3}}\right).\label{meme} \end{eqnarray} This black hole solution exactly coincides with that obtained in a gravity with conformal anomaly \cite{caicao,cai3}. Here $M$ is the black hole mass. This solution can be obtained from the following action \begin{eqnarray} S&=&\int\sqrt{-g}d^4x\left(R+\alpha L_{GB}\right),\\ L_{GB}&=&R_{\mu\nu\rho\sigma}R^{\mu\nu\rho\sigma}-4R_{\mu\nu}R^{\mu\nu}+R^2, \end{eqnarray} where we have taken $16\pi G=1$. This solution can bypass Lovelock's theorem and avoid the Ostrogradsky instability. When $r\rightarrow0$ and $r\rightarrow\infty$, we have \begin{eqnarray} f(r\rightarrow0)&=&1-\sqrt{\frac{2M}{\alpha}}r^{\frac{1}{2}}+\frac{1}{2\alpha}r^2 +\mathcal{O}\left(r^{\frac{7}{2}}\right),\\ f(r\rightarrow\infty)&=&1-\frac{2M}{r}+\frac{4\alpha M^2}{r^4}+\mathcal{O}\left(\frac{1}{r^7}\right). \end{eqnarray} So this metric is asymptotically flat. As we know, GB gravity arises in the low-energy limit of heterotic string theory. The GB coupling parameter $\alpha$ has the dimension of length squared and plays the role of the inverse string tension. The black hole horizons can be obtained by solving $f(r)=0$, which gives \begin{equation} r_\pm=M\pm\sqrt{M^2-\alpha}. 
\end{equation} For positive $\alpha$, we have two horizons for $\alpha/M^2<1$, one degenerate horizon (corresponding to an extremal black hole) for $\alpha/M^2=1$, and no horizon for $\alpha/M^2>1$. Although $\alpha$ acts as the inverse string tension and should be positive, the solution (\ref{meme}) allows the existence of a negative $\alpha$. This might extend the conventional GB gravity, so it is interesting to examine the properties of this extended GB gravity with a negative $\alpha$, and we expect some interesting properties could be revealed. For negative $\alpha$, it was argued in Ref. \cite{Glavan} that no real solution exists at short radial distances, where $r^3<{-8\alpha M}$. However, it was noted that in the range $-8\leq\alpha/M^2<0$, the singular short radial distances are always hidden inside the outer horizon $r_+$ \cite{Guoli}, which therefore provides a well-behaved external solution. Therefore, we consider the black hole solution in the region $-8\leq\alpha/M^2<1$ in this paper. \subsection{Rotating black hole} \label{spin} In this subsection, we adopt the NJ algorithm \cite{Janis} to generate the rotating GB black hole solution from (\ref{GBme}), following the approach of Ref. \cite{Azregainou}. First, we introduce the Eddington-Finkelstein coordinates ($u$, $r$, $\theta$, $\phi$) with \begin{equation} du=dt-\frac{dr}{f(r)}. \end{equation} Then the metric of the nonrotating GB black hole becomes \begin{equation} ds^2=f(r)du^2+2dudr-r^2d\theta^2-r^2\sin^2\theta d\phi^2.\label{sur} \end{equation} Further, the metric can be expressed as \begin{equation} g^{ab}=l^an^b+l^bn^a-m^a\bar{m}^b-m^b\bar{m}^a, \end{equation} with the null tetrads given by \cite{Azregainou} \begin{eqnarray} l^a&=&\delta^a_r,\\ n^a&=&\delta^a_u-\frac{f(r)}{2}\delta^a_r,\\ m^a&=&\frac{1}{\sqrt{2}r}\left(\delta^a_\theta+\frac{i}{\sin\theta}\delta^a_\phi\right). 
\end{eqnarray} It is easy to find that these null tetrads satisfy the following relations: \begin{eqnarray} l^al_a=n^an_a=m^am_a=\bar{m}^a\bar{m}_a=0,\\ l^am_a=l^a\bar{m}_a=n^am_a=n^a\bar{m}_a=0,\\ l^an_a=-m^a\bar{m}_a=1. \end{eqnarray} Now, we perform the complex coordinate transformation in the ($u$, $r$)-plane following the NJ algorithm \begin{eqnarray} &&u'\rightarrow u-ia\cos\theta,\nonumber\\ &&r'\rightarrow r+ia\cos\theta,\label{rp} \end{eqnarray} with $a$ the spin parameter of the black hole. The next step of the NJ algorithm would be to complexify the radial coordinate $r$. However, this is not necessary: as shown in Ref. \cite{Azregainou}, the complexification can be dropped by considering that $\delta^\mu_\nu$ transforms as a vector under (\ref{rp}). At the same time, the metric functions of (\ref{sur}) transform to new undetermined ones \begin{eqnarray} &&f(r)\rightarrow F(r, a, \theta),\\ &&r^2\rightarrow H(r, a, \theta). \end{eqnarray} After this transformation, the null tetrads become \begin{eqnarray} l^a&=&\delta^a_r,\\ n^a&=&\delta^a_u-\frac{F}{2}\delta^a_r,\\ m^a&=&\frac{1}{\sqrt{2H}}\left((\delta^a_u-\delta^a_r)ia\sin\theta+\delta^a_\theta +\frac{i}{\sin\theta}\delta^a_\phi\right). \end{eqnarray} Making use of the new null tetrads, the rotating metric in the Eddington-Finkelstein coordinates is given by \begin{eqnarray} ds^2&=&Fdu^2+2dudr+2a\sin^2\theta(1-F)dud\phi-2a\sin^2\theta drd\phi\nonumber\\ &&-Hd\theta^2-\sin^2\theta\left(H+a^2\sin^2\theta(1-F)\right)d\phi^2. \end{eqnarray} To bring these coordinates back to the Boyer-Lindquist ones and obtain the rotating GB black hole, we introduce a global coordinate transformation \begin{eqnarray} du&=&dt+\lambda(r)dr,\\ d\phi&=&d\phi'+\chi(r)dr, \end{eqnarray} with \cite{Azregainou} \begin{eqnarray} \lambda(r)&=&-\frac{a^2+r^2}{a^2+r^2f(r)},\\ \chi(r)&=&-\frac{a}{a^2+r^2f(r)}. 
\end{eqnarray} Finally, we choose \begin{eqnarray} F&=&\frac{(r^2f(r)+a^2\cos^2\theta)}{H},\\ H&=&r^2+a^2\cos^2\theta. \end{eqnarray} Then the rotating black hole metric reads \begin{eqnarray} ds^2=-\frac{\Delta}{\rho^2}(dt-a\sin^2\theta d\phi)^2+\frac{\rho^2}{\Delta}dr^2+\rho^2d\theta^2 +\frac{\sin^2\theta}{\rho^2}\left(adt-(r^2+a^2)d\phi\right)^2.\label{Romet} \end{eqnarray} Note that we have changed the signature of the metric according to convention. For the four-dimensional rotating GB black hole, the metric functions are \begin{eqnarray} \rho^2&=&r^2+a^2\cos^2\theta,\\ \Delta&=&r^2+a^2+\frac{r^4}{2\alpha}\left(1-\sqrt{1+\frac{8\alpha M}{r^3}}\right).\label{DDm} \end{eqnarray} It is worth pointing out that, a couple of days after we submitted this paper to arXiv, another paper appeared concerning the same rotating black hole and its shadow \cite{Kumar2}. We have set $16\pi G=1$ in the action, so the two black hole solutions are the same. Here we would like to give a comment on the above black hole solution. Since we cannot write out the field equations, it is extremely hard to directly check the solution (\ref{Romet}). On the other hand, one can check the trace of the field equations, $R=-\frac{\alpha}{2}L_{GB}$, where the rescaling $\alpha\rightarrow\alpha/(d-4)$ has been done. It is easy to find that the trace equation holds for $\theta=\pi/2$, because the spacetime has a higher symmetry on the equatorial plane, while the case behaves differently away from this plane. After the NJ algorithm, we argue that some matter fields, such as fluids or scalar fields, are introduced into the GB action~\cite{Azregainou}. So the solution obtained here is not a GB vacuum solution. On the other hand, as we know, in GR the Kerr solution can be obtained from the Schwarzschild one by using the NJ algorithm, and both black holes are vacuum solutions. 
However, when applying the NJ algorithm to the four-dimensional GB black hole, the rotating one is no longer a vacuum solution. This may be related to the pathological behaviour of the novel four-dimensional GB gravity, in which only highly symmetric spacetimes make sense; for spacetimes with lower symmetry, some matter fields must be included. Furthermore, given the gravitational field equations, one could calculate the energy-momentum tensor. However, for this four-dimensional GB gravity, one cannot do this since there are no explicit field equations. This could be resolved following Refs. \cite{Lu1,Fernandes2,Hennigar1}, where regularized 4D EGB theories were constructed and the gravitational field equations were given. When the spin $a=0$, this metric reduces to the static and spherically symmetric one (\ref{GBme}). On the other hand, in the small-$\alpha$ limit, we have \begin{equation} \Delta(\alpha\rightarrow0)=\Delta_{Kerr}+\frac{4M^2}{r^2}\alpha+\mathcal{O}(\alpha^2), \end{equation} where $\Delta_{Kerr}=r^2-2Mr+a^2$ for the Kerr black hole. It is clear that the GB coupling parameter $\alpha$ modifies the Kerr solution. The horizons of the black hole can be obtained by solving $\Delta=0$. For $0\leq\alpha/M^2\leq1$, the situation is similar to the Kerr case: there can be two horizons, one horizon, or no horizon. There exists a maximal value of the spin $a$ for a given $\alpha$, beyond which only a naked singularity is present. For negative $\alpha$, only one positive real root of $\Delta=0$ is found, which indicates that there exists only one black hole horizon. Also, similar to the nonrotating black hole, the solution only exists for \begin{equation} r\geq r_0=2(-\alpha M)^{\frac{1}{3}}. 
\end{equation} In order to guarantee the existence of the horizon and $r_+\geq r_0$, one should have $\Delta(r_0)\leq0$, which requires that the black hole spin satisfy the following relation \begin{equation} a^2/M^2\leq 4(-\alpha/M^2)^{\frac{1}{3}}\left(2-(-\alpha/M^2)^{\frac{1}{3}}\right). \end{equation} At $\alpha=-M^2$, the spin approaches its maximum $2M$. In Fig. \ref{ppPT}, we show the different regions of the spacetime in the $a/M$-$\alpha/M^2$ plane. Regions I and II correspond to black holes with two horizons and one horizon, respectively. Region III is for the naked singularity. In region IV, by contrast, the spacetime has no horizon and no real solution at short radial distances. Therefore, we only consider the black hole regions I and II. Although the rotating black hole is not a vacuum solution in this GB gravity, it is still worthwhile to examine its gravitational effects, which might reveal some interesting properties of this four-dimensional GB gravity. \begin{figure} \center{ \includegraphics[width=9cm]{Aalpha_1.pdf}} \caption{Regions I and II are black holes with two and one horizon, respectively. Region III is for the naked singularity. In region IV, the spacetime has neither a horizon nor a real solution at short radial distances.}\label{ppPT} \end{figure} \section{Geodesics and circular photon orbits} \label{shadows} In this section, we investigate the geodesics in the background of (\ref{Romet}). We will study the circular photon orbits of the black hole, which are a key ingredient for examining the shadows cast by the four-dimensional GB black hole. The geodesics of a particle moving in this background can be obtained by solving the geodesic equation. Alternatively, one can adopt the Hamilton-Jacobi approach. 
The Hamilton-Jacobi equation describing the particle is \begin{equation} \frac{\partial S}{\partial\lambda}=-\frac{1}{2}g^{\mu\nu}\frac{\partial S}{\partial x^{\mu}}\frac{\partial S}{\partial x^{\nu}},\label{sequation} \end{equation} where $\lambda$ is the affine parameter. For this black hole background, there are two Killing fields, $\partial_t$ and $\partial_\phi$, which give us two constants, the particle energy $E$ and the orbital angular momentum $l$, along each geodesic \begin{eqnarray} -E&=&g_{t\mu}\dot{x}^{\mu},\\ l&=&g_{\phi\mu}\dot{x}^{\mu}. \end{eqnarray} Then the Jacobi action can be separated as \begin{equation} S=\frac{1}{2}\mu^2\lambda-Et+l\phi+S_{r}(r)+S_\theta(\theta),\label{jaction} \end{equation} where $\mu$ is the rest mass of the particle. The functions $S_{r}(r)$ and $S_{\theta}(\theta)$ depend only on $r$ and $\theta$, respectively. Substituting the Jacobi action (\ref{jaction}) into the Hamilton-Jacobi equation (\ref{sequation}), one obtains \begin{eqnarray} S_r(r)=\int^r\frac{\sqrt{\mathcal{R}(r)}}{\Delta(r)}dr,\\ S_\theta(\theta)=\int^\theta\sqrt{\Theta(\theta)}d\theta, \end{eqnarray} with \begin{eqnarray} \mathcal{R}&=&\Big[(r^2+a^2)E-al\Big]^2-\Delta\Big[\mu^2r^2+\mathcal{K}+(l-aE)^2\Big],\\ \Theta&=&\mathcal{K}+\cos^2\theta\left(a^2(E^2-\mu^2)-l^2\sin^{-2}\theta\right), \end{eqnarray} where the Carter constant $\mathcal{K}$, related to the Killing-Yano tensor field, is another constant of the geodesics. 
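The separation can be made explicit: substituting the Jacobi action (\ref{jaction}) into Eq. (\ref{sequation}) and multiplying by $\rho^2$, the radial and angular parts decouple as
\begin{eqnarray}
&&\frac{1}{\Delta}\Big[(r^2+a^2)E-al\Big]^2-\Delta\left(\frac{dS_r}{dr}\right)^2-\mu^2r^2-(l-aE)^2\nonumber\\
&&\qquad=\left(\frac{dS_\theta}{d\theta}\right)^2-\cos^2\theta\left(a^2(E^2-\mu^2)-l^2\sin^{-2}\theta\right)=\mathcal{K},
\end{eqnarray}
where the first line depends only on $r$ and the middle expression only on $\theta$, so both must equal a constant $\mathcal{K}$. Together with $\Delta^2(dS_r/dr)^2=\mathcal{R}$ and $(dS_\theta/d\theta)^2=\Theta$, this reproduces the expressions for $\mathcal{R}$ and $\Theta$ given above.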
Further combining $g_{\mu\nu}\dot{x}^{\mu}\dot{x}^{\nu}=-\mu^2$, we can obtain the following equations of motion for a particle in the background of the rotating GB black hole \begin{eqnarray} \rho^{2}\frac{dt}{d\lambda}&=&a(l-aE\sin^{2}\theta) +\frac{r^{2}+a^{2}}{\Delta}\Big(E(r^{2}+a^{2})-al\Big),\label{rhot}\\ \rho^{2}\frac{dr}{d\lambda}&=&\pm\sqrt{\mathcal{R}},\label{Rad}\\ \rho^{2}\frac{d\theta}{d\lambda}&=&\pm\sqrt{\Theta},\\ \rho^{2}\frac{d\phi}{d\lambda} &=&(l\csc^{2}\theta-aE)+\frac{a}{\Delta}\Big(E(r^{2}+a^{2})-al\Big).\label{rhophi} \end{eqnarray} Now we focus on the circular photon orbits by analyzing the radial motion. Since we consider the motion of photons, we take $\mu^2=0$ in the following. The radial motion (\ref{Rad}) can be reexpressed as \begin{equation} \left(\rho^{2}\frac{dr}{d\lambda}\right)^2+V_{eff}=0. \end{equation} The effective potential reads \begin{equation} V_{eff}/E^2=-\Big[(r^2+a^2)-a \xi\Big]^2+\Delta\Big[\eta+(\xi-a)^2\Big], \end{equation} where $\xi=l/E$ and $\eta=\mathcal{K}/E^2$, and the function $\Delta$ is given in Eq. (\ref{DDm}). The unstable circular photon orbit satisfies the following conditions \begin{equation} V_{eff}=0,\quad \frac{\partial V_{eff}}{\partial r}=0, \quad \frac{\partial^2 V_{eff}}{\partial r^2}<0. \label{qqq} \end{equation} The third condition ensures that the orbit is unstable. Solving them, we obtain \begin{eqnarray} \xi&=&\frac{\left(a^2+r^2\right) \Delta'-4 \Delta r}{a \Delta'},\label{xx1}\\ \eta&=&\frac{r^2 \left(16 \Delta \left(a^2-\Delta\right)-r^2 \Delta'^2+8 \Delta r \Delta'\right)}{a^2 \Delta'^2},\label{xx2} \end{eqnarray} where the prime denotes the derivative with respect to $r$. Inserting them into the instability condition in (\ref{qqq}), we find that the radius of the unstable orbit must satisfy \begin{eqnarray} r+2\frac{\Delta}{\Delta'^2}(\Delta'-r\Delta'')>0,\label{ccdc} \end{eqnarray} where we have used $\Delta>0$ outside the horizon. 
It is easy to check that this condition holds for the Schwarzschild black hole, where $a=\alpha=0$ and the photon sphere radius is $r=3M$. This can also be numerically checked for the rotating GB black hole. \section{Black hole shadows and observables} \label{observa} In this section, we study the shadows cast by the nonrotating and rotating GB black holes. Before performing the study, it is worthwhile pointing out that, in our case, all light sources are located at infinity and distributed uniformly in all directions. The observer is likewise located at infinity. In order to describe the shadow under this assumption, one needs to introduce two celestial coordinates \cite{Bardeen} \begin{eqnarray} X&=&\lim_{r\rightarrow \infty} \bigg(-r^{2}\sin\theta\frac{d\phi}{dr} \bigg|_{\theta\rightarrow \theta_{0}}\bigg) =-\xi\csc\theta_{0},\label{alpha}\\ Y&=&\lim_{r\rightarrow \infty} \bigg(r^{2}\frac{d\theta}{dr}\bigg|_{\theta\rightarrow \theta_{0}}\bigg) =\pm\sqrt{\eta+a^{2}\cos^{2}\theta_{0}-\xi^{2}\cot^{2}\theta_{0}},\label{beta} \end{eqnarray} where the equations of motion (\ref{rhot})-(\ref{rhophi}) are used, and $\theta_0$ is the inclination angle of the observer. If the observer is located on the equatorial plane, these celestial coordinates simplify to \begin{eqnarray} X&=&-\xi,\\ Y&=&\pm\sqrt{\eta}. \end{eqnarray} The shadow can be obtained by producing a parametric plot in the $X$-$Y$ celestial plane consistent with Eqs. (\ref{xx1}) and (\ref{xx2}), where the parameter governing the plot is $r$. This region is not illuminated by the photon sources. The boundary of the shadow is determined by the radius of the circular photon orbits. \subsection{Nonrotating black hole shadows} Here we first consider the nonrotating black hole case, i.e., $a=0$, which is described by the metric (\ref{GBme}). Because of the spherical symmetry of the black hole, the inclination angle of the observer can be set to $\theta_0=\pi/2$ without loss of generality. 
For this nonrotating black hole with $a=0$, the unstable orbit is just the photon sphere of the black hole. Solving the first two conditions in (\ref{qqq}), one can easily obtain the photon sphere radius $r_{\text{ps}}$, which is \cite{Guoli} \begin{equation} r_{\text{ps}}=2\sqrt{3}M\cos\left(\frac{1}{3}\arccos(-\frac{4\alpha}{3\sqrt{3}M^2})\right).\label{rs} \end{equation} When $\alpha=0$, this gives $r_{\text{ps}}=3M$, which is just the result for the Schwarzschild black hole. When $\alpha$ takes its lower and upper bounds, i.e., $-8M^2$ and $M^2$, we have $r_{\text{ps}}=4.7428M$ and $2.3723M$, respectively. The relation (\ref{rs}) shows that $r_{\text{ps}}$ decreases with $\alpha$. Moreover, it is easy to check that the condition (\ref{ccdc}) is satisfied. The black hole shadow is round in this case. Employing the photon sphere radius $r_{\text{ps}}$, we can find the radius of the black hole shadow. After a simple calculation, we obtain the equation describing the boundary of the black hole shadow, which is given by \begin{eqnarray} X^2+Y^2=R_s^2, \end{eqnarray} where $R_s$ denotes the radius of the shadow \begin{eqnarray} R_s=\sqrt{\frac{2 \alpha r_{\text{ps}}^2}{2 \alpha -\sqrt{8 \alpha r_{\text{ps}}+r_{\text{ps}}^4}+r_{\text{ps}}^2}}. \end{eqnarray} In Fig. \ref{PA0xy}, we show the shadows of the nonrotating GB black holes with $\alpha/M^2$=-8, -2, 0, 0.5, and 1 from outside to inside. Obviously, the shadow shrinks with increasing $\alpha$. The radius $R_s$ of the shadow is also plotted as a function of $\alpha$ in Fig. \ref{PA0rsa}. We can see that a positive $\alpha$ shrinks the shadow and a negative one enlarges it. In the following subsection, we will show that the influence of the black hole spin on the size of the shadow is very tiny, and that the shadow size mainly depends on the GB coupling $\alpha$. 
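The values quoted above are easy to reproduce numerically. The following sketch (our own illustrative code, in units $M=1$) evaluates $r_{\text{ps}}$ and $R_s$; complex arithmetic is used because the argument of $\arccos$ in Eq. (\ref{rs}) exceeds unity for $\alpha<0$, in which case the expression reduces to a real hyperbolic cosine.

```python
import cmath
import math

SQRT3 = math.sqrt(3.0)

def r_ps(alpha):
    """Photon sphere radius of the nonrotating GB black hole (M = 1).
    cmath.acos handles alpha < 0, where -4*alpha/(3*sqrt(3)) > 1; the
    result is real for -8 <= alpha <= 1."""
    z = cmath.acos(-4.0 * alpha / (3.0 * SQRT3))
    return (2.0 * SQRT3 * cmath.cos(z / 3.0)).real

def shadow_radius(alpha):
    """Shadow radius R_s evaluated at r = r_ps (M = 1).
    Valid for alpha != 0; use a small alpha for the GR limit,
    where R_s -> 3*sqrt(3)."""
    r = r_ps(alpha)
    return math.sqrt(2.0 * alpha * r**2 /
                     (2.0 * alpha - math.sqrt(8.0 * alpha * r + r**4) + r**2))
```

For $\alpha\rightarrow0$ this reproduces the Schwarzschild values $r_{\text{ps}}=3M$ and $R_s=3\sqrt{3}M\approx5.196M$, and $R_s$ decreases monotonically with $\alpha$, as stated above.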
\begin{figure} \center{\subfigure[]{\label{PA0xy} \includegraphics[width=5cm]{A0xy_2a.pdf}} \subfigure[]{\label{PA0rsa} \includegraphics[width=7cm]{A0rsa_2b.pdf}}} \caption{(a) Shadows cast by the nonrotating GB black holes with $\alpha/M^2$=-8, -2, 0, 0.5, and 1 from outside to inside. (b) The radius of the shadows as a function of $\alpha$.}\label{ppA0rsa} \end{figure} \subsection{Rotating black hole shadows} When the black hole spin is included, the shape of the shadow behaves quite differently. The critical photons moving from two different sides of the black hole have different values of $\xi$ and $\eta$. This effect causes the black hole shadow to be elongated in the direction of the spin axis. So for a rotating black hole, the shadow shape is not round but distorted. In this subsection, we focus on the shadow shapes cast by the rotating GB black holes. \begin{figure} \center{\subfigure[$\theta_0=\frac{\pi}{2}$, $a/M$=0.5]{\label{PAxy9005} \includegraphics[width=7cm]{Axy9005_3a.pdf}} \subfigure[$\theta_0=\frac{\pi}{2}$, $a/M$=0.9]{\label{PAxy9009} \includegraphics[width=7cm]{Axy9009_3b.pdf}} \subfigure[$\theta_0=\frac{\pi}{6}$, $a/M$=0.5]{\label{PAxy3005} \includegraphics[width=7cm]{Axy3005_3c.pdf}} \subfigure[$\theta_0=\frac{\pi}{6}$, $a/M$=0.9]{\label{PAxy3009} \includegraphics[width=7cm]{Axy3009_3d.pdf}}} \caption{Shadows of the rotating GB black holes. In the left panels, $\alpha/M^2$=-1.0, 0.2, 0.4, and 0.5157 from outside to inside. In the right panels, $\alpha/M^2$=-1, 0.01, 0.04, and 0.0648 from outside to inside.}\label{ppAxy3009} \end{figure} Making use of (\ref{alpha}) and (\ref{beta}), we plot the shadows of the rotating GB black holes in Fig. \ref{ppAxy3009}. All these figures show that the shadow shrinks with increasing $\alpha$. Moreover, when the black hole spin approaches its maximal value, the shadows get more deformed. 
However, when the inclination angle of the observer decreases to $\theta_0=\pi/6$, the shadows get less deformed as the spin approaches the maximal value. As a brief summary, we can conclude that the black hole shadow size mainly depends on $\alpha$, while its distortion depends on the black hole spin $a$. \begin{figure} \center{ \includegraphics[width=9cm]{Schematic_4.pdf}} \caption{Schematic picture of the black hole shadow with nonvanishing spin.}\label{ppschematic} \end{figure} In order to extract the black hole parameters by fitting the observed data, observables play a key role. The size and the distortion are two important aspects of a shadow, so most of the observables are constructed from them. For clarity, a schematic picture of the black hole shadow with nonvanishing spin is given in Fig. \ref{ppschematic}. There are four characteristic points: the right point ($X_r$, 0), the left point ($X_l$, 0), the top point ($X_t$, $Y_t$), and the bottom point ($X_b$, $Y_b$) of the shape. Considering the symmetry, one has $X_t=X_b$ and $Y_t=-Y_b$. As suggested in Ref. \cite{Hioki}, the size of the shadow can be approximately measured by the radius $R_s$ of the reference circle, which is just the circle passing through the right, top, and bottom points of the shadow. Its center C is also on the $X$ axis. The reference circle cuts the $X$ axis at ($\tilde{X}_l$, 0). For a shadow cast by a nonrotating black hole, we have $\tilde{X}_l=X_l$. However, the spin separates these two points, and the shadow shape gets distorted. Another observable, $\delta_s$, was also given in Ref. \cite{Hioki} to measure the distortion of the shadow with respect to the reference circle. In the following, we will study these observables as functions of the GB coupling parameter $\alpha$. 
From the geometry of the shadow, the observables $R_s$ and $\delta_s$ can be expressed in terms of the coordinates of these characteristic points as \begin{eqnarray} R_s&=&\frac{(X_t-X_r)^2+Y_t^2}{2(X_r-X_t)},\label{rrrs}\\ \delta_s&=&\frac{d_s}{R_s}=\frac{X_l-\tilde{X}_l}{R_s}. \end{eqnarray} Recently, the ratio $k_s$ of the two diameters $\Delta Y$ and $\Delta X$ has attracted much attention. It is a new observable that can be fitted with the data of M87*. In terms of these coordinates, the ratio reads \begin{equation} k_s=\frac{\Delta Y}{\Delta X}=\frac{Y_t-Y_b}{X_r-X_l}=\frac{2Y_t}{(2-\delta_s)R_s}, \end{equation} where we have used the relation $\tilde{X}_l=X_r-2R_s$. So it is clear that these observables are not independent of each other. As shown in Fig. \ref{ppAxy3009}, we can see that if the black hole spin is not very close to its maximum, the shadow is almost round. This holds even better when the observer leaves the equatorial plane. Adopting this result, one has $Y_t\approx R_s$, and thus $k_s\approx2/(2-\delta_s)$. Moreover, since $\Delta Y\geq\Delta X$, we have $k_s\geq1$. For black hole spin $a/M$=0.1, 0.3, 0.5, and 0.9, we calculate these observables for $\theta_0$=$\pi/2$ and $\pi/6$. The results indicate that the influence of the spin is mainly on the maximum value of $\alpha$, while its influence on the shadow size $R_s$ is very tiny. So we will not show them here. The behaviors of the observables $\delta_s$ and $k_s$ are exhibited in Fig. \ref{ppOds90}. From the figures, we find that both $\delta_s$ and $k_s$ increase with $\alpha$ and $a$, while they decrease with $\theta_0$. Further comparing with the Kerr black hole case, a positive $\alpha$ increases $\delta_s$ and $k_s$, while a negative one decreases them. For example, when $a/M$=0.9, the Kerr black hole has $\delta_s$=13.87\% and $k_s$=1.07 for $\theta_0=\pi/2$, while the rotating GB black hole can achieve $\delta_s$=23.61\% and $k_s$=1.13, respectively. 
One can expect these values to get larger for higher black hole spin. In summary, we obtain the following results: 1) the shadow size $R_s$ mainly depends on the GB coupling parameter $\alpha$; a positive $\alpha$ decreases the size, while a negative one increases it. 2) When the black hole approaches its extremal case, the shadow will have larger values of $\delta_s$ and $k_s$. We believe these results will provide information on how to constrain the black hole spin $a$ and the GB coupling parameter $\alpha$. \begin{figure} \center{\subfigure[$\theta_0=\frac{\pi}{2}$]{\label{POds90} \includegraphics[width=7cm]{Ods90_5a.pdf}} \subfigure[$\theta_0=\frac{\pi}{2}$]{\label{POks90} \includegraphics[width=7cm]{Oks90_5b.pdf}} \subfigure[$\theta_0=\frac{\pi}{6}$]{\label{POds30} \includegraphics[width=7cm]{Ods30_5c.pdf}} \subfigure[$\theta_0=\frac{\pi}{6}$]{\label{POks30} \includegraphics[width=7cm]{Oks90_5d.pdf}}} \caption{Observables $\delta_s$ and $k_s$ for the rotating GB black hole shadows. The spin is set as $a/M$=0.1 (black solid curve), 0.3 (dashed purple curve), 0.5 (dot-dashed red curve), and 0.9 (dotted blue curve) from bottom to top.}\label{ppOds90} \end{figure} Before ending this section, we would like to note that we have restricted our attention to regions I and II, where the black hole always has at least one horizon. Region III corresponds to a naked singularity. Similarly, a naked singularity can also cast a shadow in the sky of an observer. However, its shadow is significantly different from the black hole shadow. Generally, the shadow of a naked singularity will not be a two-dimensional dark zone but a one-dimensional open dark arc. In a realistic scenario, the neighborhood of the arc will also be darkened, and thus the naked singularity will produce a dark lunate shape. On the other hand, if the spin does not exceed the extremal value by too much, the shape might be similar to that of an extremal black hole. 
Surely, there is no evidence that a naked singularity, which would violate the cosmic censorship conjecture, can exist in our universe. Nevertheless, the shadow cast by a naked singularity is still worth considering. However, we only consider the black hole shadow in this paper and leave the shadow of the naked singularity as a valuable issue for the future. \section{Shadow of M87* and Gauss-Bonnet coupling} \label{M87} In this section, we would like to use the result of the EHT Collaboration to constrain the GB coupling with the observation of M87*. For M87*, the observed shadow has an angular diameter of 42 $\pm$3 $\mu$as. The observation of the jet indicates that the inclination angle is about 17$^\circ$ \cite{Walker}. Based on stellar population measurements, the distance $D$ of M87* from us was estimated to be $D$=16.8$\pm$0.8 Mpc \cite{Blakeslee,Bird,Cantiello}. Stellar dynamics and gas dynamics studies showed that the mass of M87* is about $6.2^{+1.1}_{-0.5}\times10^9$ $M_{\odot}$ \cite{Gebhardt} and $3.5^{+0.9}_{-0.3}\times10^9$ $M_{\odot}$ \cite{Walsh}, respectively. Meanwhile, the EHT Collaboration reported that the mass of M87* is (6.5$\pm$0.7)$\times10^9$ $M_{\odot}$. Their result also implies that the absolute value of the dimensionless black hole spin $a/M$ is in the range (0.5, 0.94). For simplicity, we adopt the following data: $D$=16.8 Mpc, and $M$=6.2$\times10^9$ $M_{\odot}$ or 6.5$\times10^9$ $M_{\odot}$. The inclination angle is chosen to be 17$^\circ$. According to the Blandford-Znajek mechanism, the jet of M87* is powered by the black hole spin. Thus we suppose that the inclination angle equals the jet angle, and we assume that this also holds for the four-dimensional GB black hole. As we showed above, the rotating black hole solution only obeys the vacuum field equations on the equatorial plane; however, when matter fields are included, the field equations could be satisfied. So the choice of an inclination angle deviating from $\pi/2$ may be appropriate. 
Then we examine the average angular diameter $D_s=2R_s$, with $R_s$ given by (\ref{rrrs}), to fit the shadow size 42 $\pm$3 $\mu$as; a possible 10\% offset is also considered. First, we take $M$=6.2$\times10^9$ $M_{\odot}$ from the stellar dynamics, and show the contours of the angular diameter $D_s$ of M87* in the $a/M$-$\alpha/M^2$ plane in Fig. \ref{ppFitaP1}. From Fig. \ref{PFitaN7}, we see that $\alpha/M^2$ is roughly in the range (-4.5, -1) for the observed shadow diameter 42 $\pm$3 $\mu$as. Moreover, even if the 10\% offset of the diameter is taken into account, which reduces the corresponding angular diameter to 37.8 $\mu$as, the associated GB coupling is $\alpha/M^2$=-0.28 and -0.62 for $|a|/M$=0.5 and 0.9, respectively. Therefore, the observation favors negative values of the GB coupling $\alpha$. For clarity, we also show the contours for $\alpha/M^2\in$ (0, 1) in Fig. \ref{PFitaP1a}. It is evident that in the positive range of $\alpha$, the diameter is about 34$\sim$37.5 $\mu$as, which obviously lies outside the observed range. When taking the mass of M87* to be $M$=6.5$\times10^9$ $M_{\odot}$, as given by the EHT Collaboration, we plot the corresponding contours in Fig. \ref{ppFitaP1B}. For $D_s\in(39, 45)\mu$as, the coupling again falls in the negative-$\alpha$ range. When taking into account the 10\% offset of the diameter, such that $D_s=37.8\mu$as, $\alpha/M^2$ takes the value 0.26 for $a/M$=0.5. However, when $a/M>$0.83, $\alpha$ becomes negative, e.g., $\alpha/M^2$=-0.05 for $a/M$=0.94. In summary, combined with the mass estimated from the stellar dynamics or by the EHT Collaboration, the observation favors $\alpha/M^2\in$(-4.5, 0) when considering the black hole spin $a/M\in$(0.5, 0.94). Therefore, modeling M87* with a rotating GB black hole, the astronomical observation favors a negative GB coupling constant. 
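The conversion from a shadow radius $R_s$ (in units of $GM/c^2$) to an angular diameter in $\mu$as, which underlies the contours above, can be sketched as follows (constants are rounded; a Schwarzschild-like check with $R_s=3\sqrt{3}\,M$ and the EHT mass gives the familiar $\sim40\,\mu$as scale):

```python
import math

G = 6.674e-11            # m^3 kg^-1 s^-2
C = 2.998e8              # m s^-1
M_SUN = 1.989e30         # kg
MPC = 3.086e22           # m
MUAS_PER_RAD = 180.0 / math.pi * 3600.0 * 1e6

def angular_diameter_muas(Rs_over_M, mass_msun, dist_mpc):
    """Angular shadow diameter D_s = 2 R_s in micro-arcseconds, with R_s
    measured in units of the gravitational radius G M / c^2."""
    rg = G * mass_msun * M_SUN / C**2                 # gravitational radius
    theta = 2.0 * Rs_over_M * rg / (dist_mpc * MPC)   # small-angle formula
    return theta * MUAS_PER_RAD
```

With $M$=6.5$\times10^9$ $M_{\odot}$ and $D$=16.8 Mpc, the Schwarzschild value $R_s=3\sqrt{3}\,M$ yields roughly 39.7 $\mu$as, close to the lower edge of the observed 42 $\pm$3 $\mu$as band.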
\begin{figure} \center{\subfigure[]{\label{PFitaN7} \includegraphics[width=7cm]{FitaN7_6a.pdf}} \subfigure[]{\label{PFitaP1a} \includegraphics[width=7cm]{FitaP1_6b.pdf}}} \caption{Contours of the angular diameter of M87* in the $a/M$-$\alpha/M^2$ plane with $D$=16.8 Mpc, $M=6.2\times10^9$ $M_{\odot}$. (a) $\alpha/M^2\in$ (-7, 1). (b) $\alpha/M^2\in$ (0, 1).}\label{ppFitaP1} \end{figure} \begin{figure} \center{\subfigure[]{\label{PFitaN7B} \includegraphics[width=7cm]{FitaN7B_7a.pdf}} \subfigure[]{\label{PFitaP1aB} \includegraphics[width=7cm]{FitaP1B_7b.pdf}}} \caption{Contours of the angular diameter of M87* in the $a/M$-$\alpha/M^2$ plane with $D$=16.8 Mpc, $M=6.5\times10^9$ $M_{\odot}$. (a) $\alpha/M^2\in$ (-7, 1). (b) $\alpha/M^2\in$ (0, 1).}\label{ppFitaP1B} \end{figure} \section{Conclusions and discussions} \label{Conclusions} In this paper, we first constructed a four-dimensional rotating GB black hole by using the NJ algorithm. Then, we studied the geodesics of a particle moving in this background. Employing the null geodesics, we investigated the shadows cast by the nonrotating and rotating black holes. Finally, by combining with the observation of M87*, we constrained the possible range of the GB coupling parameter. For the nonrotating black hole with a positive GB coupling parameter $\alpha$, the property of its horizon is similar to that of the charged black hole: black holes with two horizons and naked singularities are separated by the extremal black hole with $\alpha=M^2$. For the negative $\alpha$ case, it was found that at short radial distances the spacetime has no real solution. However, if this range is hidden behind the horizon, the solution appears as a well-behaved external solution. Thus, the study can be extended to $\alpha/M^2$=-8. For the rotating black hole case, the horizon will be deformed by the black hole spin, but the major property is still unchanged. As shown above, we have $-8\leq\alpha/M^2\leq1$ and $-2\leq a/M\leq2$. 
The particular details were given in Fig. \ref{ppPT}. The geodesics of a particle moving in the GB black hole background were solved following the Hamilton-Jacobi approach, and the result was found to be Kerr-like. Based on the null geodesics, the shadow in the celestial coordinates was displayed. We observed that the shadow size mainly depends on $\alpha$: a positive $\alpha$ decreases the size while a negative one increases it. On the other hand, the distortion mainly depends on the black hole spin and the extremal bound. The distortion $\delta_s$ and the ratio $k_s$ of the two diameters were also studied. Both of them increase with $a$ and $\alpha$. Compared with the Kerr black hole case, $\delta_s$ and $k_s$ are increased by a positive $\alpha$ and decreased by a negative one. These results reveal the particular imprint of the GB coupling on the shadow shape. Finally, we modeled M87* with this rotating GB black hole, and used the observation to constrain the possible range of the GB coupling parameter. Taking the inclination angle $\theta_0=17^\circ$ from the observation of the jet and the distance $D=16.8$ Mpc from stellar population measurements, we plotted the contours of the angular diameter of the rotating GB black hole with $M$=6.2$\times10^9$ $M_{\odot}$ and 6.5$\times10^9$ $M_{\odot}$ from the stellar dynamics and the EHT Collaboration, respectively. Comparing with the angular diameter 42 $\pm$3 $\mu$as of M87*, our result supports a negative GB parameter $\alpha$. Even if the 10\% offset of the diameter is taken into account, the most probable range still falls in the negative-$\alpha$ region. Although the EHT Collaboration modeled M87* with the Kerr black hole and confirmed that the observation supports GR, it also leaves us a possible window to test modified gravity, due to the resolution of the observation and the unknown mass of the accretion disk of M87*. 
Combining these two different mass estimates of M87*, our results favor a GB coupling in the negative range $\alpha/M^2\in$(-4.5, 0), or taking very small positive values. Since the four-dimensional GB black hole solution has only just been discovered, it is worthwhile to constrain the GB coupling parameter with other astronomical observations. This also provides us a promising way to understand GB gravity in four dimensions. Note that the shadow was also studied in Ref. \cite{Kumar2}, which appeared after ours, for the same black hole solution. They calculated the area and oblateness of the shadow and found that the rotating black hole is consistent with M87* within a finite parameter space. Here we investigated the distortion and oblateness of the shadow, and then fitted the angular diameter with the M87* data. It is worth mentioning that, in our approach, we also considered negative GB coupling parameters. Although there is no real solution at short distances, this region is always hidden behind the outer horizon. Therefore, such a parameter range is also worth examining. \section*{Acknowledgements} We would like to thank Dr. M. Guo for useful discussions about the rotating black hole. This work was supported by the National Natural Science Foundation of China (Grants No. 11675064 and No. 11875151) and the Fundamental Research Funds for the Central Universities (Grants No. lzujbky-2019-ct06).
2108.08477
\section{Introduction} For decades, LEGO\textregistered\, bricks have been a staple of entertainment for children and adults alike, offering the ability to construct anything one can imagine from simple building blocks. For all but the most exceptional LEGO\textregistered\, engineers, however, dreams quickly outgrow skills, and constructing the complex objects around them becomes too great a challenge. LEGO\textregistered\, bricks are extraordinarily flexible by nature and have been assembled into intricate and fantastical structures in many cases; simplifying the process of constructing the more complex designs is an important step to maintain appeal for amateur builders and attract a new generation of LEGO\textregistered\, enthusiasts. To make these creative possibilities accessible to all, we develop an end-to-end approach for producing LEGO\textregistered\,-type brick 3D models directly from 2D images. Our work has three sequential components: it (i) converts a 2D image to a latent representation, (ii) decodes the latent representation to a 3D voxel model, and (iii) applies an algorithm to transform the voxelized model to 3D LEGO\textregistered\, bricks. As such, our work represents the first complete approach that allows users to generate real LEGO\textregistered\, sets from 2D images in a single pipeline. A high-level demonstration of the full Image2LEGO\textregistered\, pipeline is presented in Figure \ref{fig:airplane_intro}, where a gray-scale 2D photograph of an airplane is converted to a 3D LEGO\textregistered\, model, and the corresponding instructions and brick parts list are used to construct a physical LEGO\textregistered\, airplane build. We tackle the issues specific to constructing high-resolution real 3D LEGO\textregistered\, models, such as color and hollow structures. 
Our key contributions are: \begin{itemize} \item A pipeline that combines creating a 3D model from a 2D image with an algorithm for mapping this 3D model to a set of LEGO\textregistered\,-compatible bricks, to provide this new Image2LEGO\textregistered\, application, \item An evaluation by examples and analysis to show how and when this pipeline works. \end{itemize} Though we focus in this paper on our novel approach for multi-class object-image-to-LEGO\textregistered\, construction, the same approach can be extended to other creative applications by leveraging previous image-to-model work. For instance, generating LEGO\textregistered\, models from pictures of one's face is already an application of interest, but current work is limited to the generation of 2D LEGO\textregistered\, mosaics from images, such as that shown to the left of Figure \ref{fig:cp_build} (generated by the commercial product called LEGO\textregistered\, Mosaic Maker \cite{legomosaicmaker}). However, we extend the Image2LEGO\textregistered\, pipeline to include the pre-trained Volumetric Regression Network (VRN) for single-image 3D reconstruction of faces \cite{jackson2017vrn}. As shown in Figure \ref{fig:cp_build}, in contrast to the 2D mosaic, our approach generates a 3D LEGO\textregistered\, face from a single 2D image. Moreover, other learned tools may be appended or prepended to the base pipeline to develop more imaginative tools. For instance, by prepending the VRN with a sketch-to-face model \cite{chenDeepFaceDrawing2020}, we develop a tool that directly converts an imagined drawing into a LEGO\textregistered\, model, as demonstrated in Figure \ref{fig:sketch_example}, offering nearly limitless creative possibilities. In \S\ref{sec:caption2lego}, we demonstrate another extension, where we apply the Image2LEGO\textregistered\, pipeline with DALL-E~\cite{Ramesh2021} outputs to create a tool that automatically converts captions to LEGO\textregistered\, models. 
\subsection{Related Work} \paragraph{Image-to-3D} \begin{figure*}[!ht] \centering \includegraphics[width=\textwidth]{cp_build.png} \caption{Center-Left: input image; Left: 2D Lego Mosaic output; Right: 3D LEGO\textregistered\, Model output and corresponding build process.} \label{fig:cp_build} \end{figure*} 3D model construction from 2D images of objects is an active research area \cite{Fu2021,Kniaz2020,Yu2021}. For example, Lim et al. \cite{Lim2013} demonstrate an algorithm for modeling fine-pose of objects within captured 2D images and matching them to a set of 3D models. GAN-based approaches \cite{pan20202d, hu2021self} for 3D reconstruction demonstrate high quality outputs and have recently been extended to allow control over the output. Girdhar et al. \cite{Girdhar2016} develop vector representations of 3D objects that are predictable from 2D images, and Stigeborn \cite{Stigeborn2018} develops machine learning methods for automatic generation of 3D models through octree-based pruning. The methods described in any of these previous works could feasibly be utilized for producing the desired 3D models for the goals of this work, though the end goal of a LEGO\textregistered\, model has not been previously considered and requires adaptations, which we explore in this work (e.g. image to flexible-resolution voxelized 3D models), to create models optimal for creative LEGO\textregistered\, engineering. \paragraph{3D-to-LEGO\textregistered\,} The challenge of converting voxelized 3D models into LEGO\textregistered\, designs has been previously explored as well. Silva et al. \cite{Silva2009} demonstrate real-time conversion of surface meshes to voxels to LEGOs\textregistered\,, and Lambrecht \cite{Lambrecht2006} describes methods for high-detail LEGO\textregistered\, representations of triangle mesh boundaries. However, a gap has remained between 3D model generation from images and LEGO\textregistered\, generation from 3D models. 
The goal of our work is to bridge this gap by developing a complete Image2LEGO\textregistered\, pipeline, allowing anyone to create custom LEGO\textregistered\, models from 2D images. The problem of LEGO\textregistered\, generation from images adds an additional goal to just 3D model generation, namely that it is important to have flexibility in the output resolution. Additionally, the latent space should have some flexibility so as to generate unseen structures from new input images. The former is useful in providing users with LEGO\textregistered\, designs of different scales and resolutions, to better achieve varying levels of difficulty, availability of material resources, and cognitive effort. For instance, small renditions of an object may be useful as fine elements in a greater scene, while larger renditions may serve as independent LEGO\textregistered\, models. The latter feature of a generalizable latent space allows users to generate new LEGO\textregistered\, sets that are associated with new captured images. This work represents the first effort to combine these approaches, using an octree-structured autoencoder in the image-to-model pipeline. We evaluate its ability to perform this task on new images in several examples. \begin{figure} \centering \includegraphics[width=\columnwidth]{sketch_example.png} \caption{Left: input sketch; Center: 2D face image; Right: 3D face LEGO\textregistered\,.} \label{fig:sketch_example} \end{figure} \section{Methods} Since the VRN, which we use for faces, is pre-trained, the majority of this section is focused on the construction and training of the TL-Octree network (applied to object images). 
This network is inspired by the Octree Generating Network \cite{Tatarchenko2017}, which consists of a convolutional block in an `octree' structure that sequentially up- (or down-) samples the resolution of a 3D model by a factor of two in each spatial dimension, and the TL-Embedding network \cite{Girdhar2016}, which accomplishes image-to-3D reconstruction by simultaneously training an autoencoder to compress the 3D model representation into a latent vector and an image encoder to predict the latent representation of a model from a single 2D image. Our model combines these two approaches, as shown in Figure \ref{fig:architecture}. Specifically, an autoencoder is trained on multiple classes of 3D models, and the latent representations of this autoencoder are taken as the targets for a separate 2D image encoder. At testing time, the `encoder' portion of the autoencoder is discarded, such that an image is fed through the image encoder and the `decoder' of the autoencoder to reconstruct the 3D model. \subsection{Dataset} The TL-Octree network is trained and tested on the ModelNet40 dataset~\cite{Wu2015}, a collection of 40 categories of 3D meshes composed of 8,672 models for training and 2,217 models for testing. The surfaces of these meshes are sampled to generate point clouds, and these point clouds are subsequently converted to voxels at a resolution of $32^3$. To obtain paired 2D images, we render the 3D models in a set pose using Pyrender \cite{pyrender}. We focus on model reconstruction from a single, set pose here to demonstrate a simple working example of our Image2LEGO\textregistered\, pipeline, but note that the pipeline may be readily extended to 3D reconstruction from different or multiple poses by incorporating ideas from previous work in this area \cite{Choy2016}. \subsection{3D-Model Autoencoder} We begin by training an autoencoder on the voxelized 3D models. The decoder of this network is adapted from the Octree Generating Network \cite{Tatarchenko2017}. 
It consists of multiple blocks, each of which upsamples the resolution by a factor of two in each dimension (i.e. an octree structure) and changes the number of channels, beginning with the latent representation, parameterized as a 256-dimensional vector (i.e. a single voxel with 256 channels). Below, we discuss a brief hyperparameter optimization campaign to determine the number of channels in each subsequent layer after the latent representation. The resolution upsampling is accomplished by generative convolution transpose layers, which efficiently parse and prune the sparse structure of the input tensor to minimize memory usage and runtime \cite{gwak2020generative}. Our encoder mirrors the decoder, replacing each generative convolution transpose with a conventional stride-2 convolutional layer. In both the encoder and decoder, each convolutional layer (or generative convolutional layer) is followed by a batch normalization layer to reduce overfitting, and an Exponential Linear Unit (ELU) activation. This autoencoder architecture is shown in Figure \ref{fig:architecture}A. \begin{figure*}[!ht] \centering \includegraphics[width=\textwidth]{full_architecture.png} \caption{Our TL-Octree network, consisting of a 3D model autoencoder and an image encoder. At train time, we optimize the autoencoder weights for reconstruction of the voxelized 3D models. We additionally minimize the mean squared error between the latent representations of this encoder and those produced by the image encoder from a corresponding 2D image. At test time, we compose the image encoder with the 3D decoder network to produce a voxelized 3D model from an image. Finally, we algorithmically produce a matching LEGO\textregistered\, approximation.} \label{fig:architecture} \end{figure*} We implement the layers and sparse tensor storage for voxelized objects using the Minkowski Engine \cite{Choy2019}, which implements sparse tensor objects and sparse tensor layers. 
Critically, employing this sparse tensor framework makes our network generalizable to higher 3D model resolutions without encountering severe runtime or memory restrictions, whereas other models that represent the 3D objects densely are limited to low resolutions. For a single example object, the loss is computed by summing the cross-entropy between the sigmoid activation of the decoded object at every decoding step (i.e. at resolutions $4^3$, $8^3$, $16^3$, and $32^3$) and the corresponding resolution discretization of the input object: \begin{equation} \mathcal{L}_{AE} = -\sum_{r=1}^{N}\sum_{n=1}^{N_r}\left[y_n\ln\sigma(x_n) + (1 - y_n)\ln(1 - \sigma(x_n))\right] \end{equation} where $N$ is the total number of resolutions output by the model (in this case, $N = 4$), $N_r$ is the number of voxels in the output of an object at resolution $r$, $x_n$ is the pre-activated output at a specific voxel location, $\sigma$ is the sigmoid activation, and $y_n$ is the corresponding target voxel status (1 for filled, 0 for unfilled). Thus, from the latent representation we may recover voxelized objects of four different resolutions while only training the network a single time, meeting the design criterion of resolution-flexibility set forth previously. We train this network using stochastic gradient descent with a mini-batch size of 16. We evaluated our training procedure with different sized networks to examine how this affected downstream performance. These results are plotted in Figure \ref{fig:optimization}, for decoders with the following number of channels on each layer: $[256, 384, 256, 96, 1]$, $[64, 128, 64, 32, 1]$, and $[96, 128, 96, 64, 1]$, chosen to be similar to the network size of the original TL-Embedding network \cite{Girdhar2016}. We observe that the largest network overfits to the training data, resulting in worse test performance. 
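The multi-resolution loss $\mathcal{L}_{AE}$ can be written compactly. The sketch below (names are ours) uses a dense NumPy stand-in rather than the Minkowski Engine's sparse tensors; `outputs` holds pre-activation logits and `targets` the 0/1 occupancy grids at resolutions $4^3$ through $32^3$:

```python
import numpy as np

def octree_bce_loss(outputs, targets):
    """Sum of voxel-wise binary cross-entropies over all decoding
    resolutions, matching L_AE in the text (N = 4 resolutions)."""
    eps = 1e-12  # numerical safety inside the logarithms
    total = 0.0
    for x, y in zip(outputs, targets):
        s = 1.0 / (1.0 + np.exp(-np.asarray(x, dtype=float)))  # sigmoid
        y = np.asarray(y, dtype=float)
        total -= np.sum(y * np.log(s + eps) + (1.0 - y) * np.log(1.0 - s + eps))
    return total
```

For zero logits every voxel contributes $-\ln(1/2)$ regardless of its target, which gives a quick sanity check of the implementation.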
While performance is similar between the two reduced-size networks, the folding chair reproduction in the medium-sized network is qualitatively slightly improved (the smallest network appears to introduce rough-edge artifacts). As such, we use the $[96, 128, 96, 64, 1]$ network. \subsection{2D Image Encoder} Next, we train a separate encoder network to predict the 256-dimension latent representations learned from our 3D autoencoder using the rendered single 2D images of the objects. Our encoder design is based on AlexNet \cite{Krizhevsky2012}, a seminal network in image classification. Starting from single-channel gray-scale images with resolution $128^2$, convolutional layers increase the number of channels to 96, 256, 384, and 256, while intermittent max pooling layers decrease the resolution by a factor of two each time. Finally, fully-connected layers convert the 256-channel $4^2$ tensor to a 256-dimension feature vector. For optimization, we compute the mean-squared loss with the corresponding autoencoder latents: $\mathcal{L}_{enc} = \frac{1}{N}\sum_{n=1}^{N}(y_n - x_n)^2$, where $x_n$ represents the predicted latent feature on dimension $n$, $y_n$ represents the target latent feature, and $N=256$ is the latent dimension. This network is trained using the Adam optimizer with a mini-batch size of 128 for consistency with prior work. The fully trained image encoder and 3D autoencoder implement the first two steps of our Image2LEGO\textregistered\, pipeline. As depicted in Figure \ref{fig:architecture}, the image encoder takes an input image and produces a feature vector, which is fed to the 3D decoder network. As such, at test time, we do not need the 3D encoder network. \begin{figure*}[!t] \centering \includegraphics[width=\textwidth]{optimization.png} \caption{Depiction of Autoencoder training and validation loss as a function of iteration number for different channel numbers. 
An input image and reconstructed output for each architecture is depicted for visual interpretation of the loss.} \label{fig:optimization} \end{figure*} \subsection{Voxel to LEGO\textregistered\, Conversion} The final step is to convert voxelized 3D models to LEGO\textregistered\, builds. This is done deterministically with an algorithm adapted from the ColouredVoxels2Lego project \cite{Marsden2019}. This algorithm first converts each voxel to a 1 by 1 LEGO\textregistered\, brick, and then iterates through each layer in the $z$-coordinate of the model to find optimal groups of bricks to be combined into larger cuboid bricks. The output of this software is a written LDraw file, which may be visualized in software such as LeoCAD~\footnote{\href{https://www.leocad.org/}{https://www.leocad.org/}}. For colored voxels, the original implementation of this algorithm matched each voxel to a LEGO\textregistered\, brick of existing color by finding the LEGO\textregistered\, color that minimizes the Euclidean distance to the voxel color in RGB space. However, for models that include color, we have adapted the handling of color to limit excessive variations in the face of a continuous color gamut in the 3D models, which will be discussed in the following section. By appending this adapted algorithm to the output of the 3D model decoder, we obtain our full Image2LEGO\textregistered\, pipeline. \subsection{Color Quantization} In certain applications of the Image2LEGO\textregistered\, pipeline, such as for LEGO\textregistered\, models of faces, color is a critical feature. Although color is not handled in the TL-Octree network, which focuses on shape reconstruction, it is required for the LEGO\textregistered\, reconstruction of faces using the VRN. 
The 3D objects generated by the VRN contain a continuous gamut of colors, which, when fed to the original implementation of the voxels-to-LEGO\textregistered\, algorithm, may produce patchiness due to the discrete and rather limited gamut of LEGO\textregistered\, brick colors. To increase the uniformity of color in the LEGO\textregistered\, products, we prepend the voxels-to-LEGO\textregistered\, algorithm with a $k$-means clustering algorithm, which determines from the span of colors in the 3D object the $k$ predominant colors and subsequently assigns each voxel to the nearest predominant color in RGB space (with a Euclidean distance metric). These $k$ predominant colors are then mapped to the nearest LEGO\textregistered\, brick color. For the examples shown in this work, $k = 4$, which was qualitatively found to sufficiently distinguish important features of the face (e.g. eyes, nose, mouth, facial hair) without producing excessive variations in color across the skin. \subsection{Hollow Shell vs. Full 3D Model} \begin{figure} \centering \includegraphics[width=\columnwidth]{cp_legov.png} \caption{A sample of LEGO\textregistered\, build instruction steps with an image of a face. Top: The internal structure is hollow, which leads to practical build challenges. Bottom: To mitigate this, we modify the original algorithm to fill in the structure with additional bricks.} \label{fig:3dface-shell} \end{figure} For 3D faces, our original models were hollow, resembling a shell, as shown in Figure \ref{fig:3dface-shell} for the LEGO\textregistered\, model of a 3D face. We found that a hollow structure is difficult to physically build, since the model often collapses inwards when built on a LEGO\textregistered\, surface. We therefore modified our original algorithm to fill in the 3D structure with bricks, rather than generating a hollow shell.
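The color-quantization step described above is simple to sketch in code. The following is a minimal illustration only: it uses a plain Lloyd's-iteration $k$-means (not any particular library) and a tiny hypothetical brick palette, whereas the real LEGO\textregistered\, palette is much larger.

```python
import numpy as np

# Hypothetical 4-color brick palette for illustration; the actual
# LEGO color catalog is far larger.
LEGO_PALETTE = np.array([
    [255, 255, 255],   # white
    [0, 0, 0],         # black
    [200, 0, 0],       # red-ish
    [240, 200, 150],   # tan-ish
], dtype=float)

def kmeans_colors(colors, k=4, iters=20, seed=0):
    """Cluster RGB rows of `colors` into k predominant colors (Lloyd's algorithm)."""
    colors = np.asarray(colors, dtype=float)
    rng = np.random.default_rng(seed)
    # Initialize centers from k distinct data points.
    centers = colors[rng.choice(len(colors), size=k, replace=False)]
    labels = np.zeros(len(colors), dtype=int)
    for _ in range(iters):
        # Assign each voxel color to its nearest center (Euclidean in RGB).
        d = np.linalg.norm(colors[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each non-empty center to the mean of its assigned colors.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = colors[labels == j].mean(axis=0)
    return centers, labels

def snap_to_palette(centers, palette=LEGO_PALETTE):
    """Map each predominant color to the nearest palette (brick) color."""
    d = np.linalg.norm(centers[:, None, :] - palette[None, :, :], axis=2)
    return palette[d.argmin(axis=1)]
```

With $k=2$ and voxel colors clustered near white and black, the two recovered centers snap to the white and black palette entries, mirroring the $k=4$ face example in the text.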
\section{Results} We make our models and code for training and running our pipeline available on the project web page.\footnote{Project~page:~\href{https://krlennon.github.io/image2lego/}{https://krlennon.github.io/image2lego/}} \subsection{Learned Model Performance} The autoencoder was evaluated quantitatively using the Jaccard index, or intersection-over-union. Intersection-over-union (IoU) loss is a well-established loss for 3D object detection\cite{10.1007/978-3-030-58565-5_28}, image segmentation\cite{Rezatofighi_2019_CVPR}, and pixelwise prediction \cite{pmlr-v139-yu21e}, with the advantage of scale invariance. We calculate the test performance of the autoencoder using the IoU score between the model and reconstruction: $\mathrm{IoU} = \frac{|X \cap Y|}{|X \cup Y|}$, where $X$ represents the set of filled voxels in the prediction, $Y$ the set of filled voxels in the target, and $|\cdot|$ the cardinality of a set. These scores are listed in Table \ref{tab:iou} for the chair and airplane categories of the ModelNet40 data set at each resolution. The results show high-fidelity reconstructions, with accuracy greater than 70\% up to $16^3$ resolution. This performance is comparable to that in previous studies \cite{Tatarchenko2017}, with the slightly lower reconstruction accuracy likely attributable to the fact that our model has been trained on many object classes, whereas many previous studies focus on single classes. The accuracy falls at $32^3$, which may be attributed to the decreased ability of the autoencoder to capture details on the object surface, compared to general features of the object shape and size. We note that the Image-to-Model IoU scores are consistently within $\sim 0.2$--$0.3$ of the reconstructions, with performance decreases due primarily to the more limited representational capacity of a 2D image compared to a 3D model.
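The IoU score defined above is a one-line computation on boolean occupancy grids; the helper below is a minimal sketch (not the evaluation code used for the paper), with the convention that two empty grids score a perfect 1.

```python
import numpy as np

def voxel_iou(pred, target):
    """Intersection-over-union of two equal-shape boolean occupancy grids."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        # Both grids empty: treat as perfect agreement.
        return 1.0
    return float(np.logical_and(pred, target).sum()) / float(union)
```

For example, two $4^3$ grids whose filled halves overlap in a single 16-voxel slab out of a 48-voxel union give an IoU of $1/3$.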
\begin{table*}[!t] \centering \begin{tabular}{c|cccc|cccc} & \multicolumn{4}{c}{Autoencoder IoU} & \multicolumn{4}{c}{Image-to-Model IoU} \\ Resolution & $4^3$ & $8^3$ & $16^3$ & $32^3$ & $4^3$ & $8^3$ & $16^3$ & $32^3$ \\\hline Chair & 0.972 & 0.870 & 0.700 & 0.465 & 0.796 & 0.633 & 0.455 & 0.240 \\ Airplane & 0.985 & 0.868 & 0.705 & 0.439 & 0.749 & 0.528 & 0.431 & 0.228 \\\hline \end{tabular} \vspace{0.2cm} \caption{Intersection-over-union scores for the voxelized 3D model reconstructions at each resolution by the 3D autoencoder and the full image-to-model pipeline (the image encoder prepended to the decoder of the 3D autoencoder).} \label{tab:iou} \end{table*} Two particular challenges evident in the 3D reconstruction outputs are the model's limited ability to reconstruct fine depth details, and its limited ability to anticipate the shape of occluded parts of the model in the rendered images. These challenges are typical of 3D model reconstruction from single images \cite{Choy2016}. The former is typically reflected in rough surface features, and the latter is reflected in missing voxels, for example in the partially or totally occluded back legs of chairs. However, we note that these challenges may be remedied by extending the Image2LEGO\textregistered\, pipeline to include 3D model reconstruction from multiple images of a single object, which we leave to future work. \subsection{Image2LEGO\textregistered\, Examples} \begin{figure*}[!t] \centering \includegraphics[width=0.9\textwidth]{results_a.png} \caption{Example results: an input image that our model renders as voxels, from which we then produce a LEGO\textregistered\, version (target voxelized object shown at right).} \label{fig:chair} \end{figure*} \begin{figure*}[!t] \centering \includegraphics[width=\textwidth]{results_b.png} \caption{Example results: variable-resolution LEGO\textregistered\, arrangements from one given gray-scale image of an airplane.
The mid-resolution airplane is physically built using real LEGO\textregistered\, bricks, as shown on the right.} \label{fig:plane} \end{figure*} Figure \ref{fig:chair} demonstrates the steps of the completed pipeline for a chair from the test data set. The output model at maximum resolution is qualitatively similar to the target model -- specifically, it shares the same structure of legs, arms, and chair back, as well as similar size and aspect ratio. The reconstruction is clearly imperfect, as demonstrated by the surface roughness. Upon close examination, one notices that the final LEGO\textregistered\, model consists of bricks of various shapes, as a result of the conversion of voxels to LEGO\textregistered\, bricks. The inclusion of many brick shapes leads both to a more interesting building experience and to enhanced structural stability of the LEGO\textregistered\, model. However, the LEGO\textregistered\, structure has not been quantitatively evaluated for stability, and therefore may not in all cases be physically realizable without further modifications. Future work might consider modifying current algorithms with stability-inducing heuristics, as previously explored in other works studying 3D model to LEGO\textregistered\, conversion \cite{luo2015legolization}. To demonstrate another important feature of our pipeline, we use a single image to obtain LEGO\textregistered\, models of various sizes. Figure \ref{fig:plane} demonstrates the multiple-scale resolution capabilities of our model for an example airplane image, also from the test data set. LEGO\textregistered\, models are generated at $8^3$, $16^3$, and $32^3$ resolution (the $4^3$ model for this example is too low-resolution to be identified as a plane, so it is omitted here). At all resolutions higher than $4^3$, the plane-like structure is evident. These resolutions are able to meet varying user needs.
For instance, a smaller model may be useful as a part of a larger LEGO\textregistered\, scene, while the larger models may be useful for stand-alone sets. At the lower two resolutions, the model is fully connected and represents a physically realizable build. To this end, we have assembled a list of brick pieces needed to construct the $16^3$ LEGO\textregistered\, airplane (a total of 40 bricks), and automatically generated a set of building instructions in LeoCAD. We next demonstrate the capabilities of our model in the real world, with user-supplied images. Figure \ref{fig:chair-intro} presents the full pipeline for a real photograph captured with one author's smartphone, which, to our knowledge, represents the first ever example of a LEGO\textregistered\, model produced directly from a photograph, through a combination of background matting \cite{sengupta2020background,lin2021real} and our Image2LEGO\textregistered\, method. This is a significant step forward for hopeful hobbyists, and we expect that it could lead to an easily implementable and highly accurate program once trained with a larger and more varied data set in the future. \begin{figure*}[!t] \centering \includegraphics[width=\textwidth]{chair_example.png} \caption{The full pipeline of our model: we input a photograph and apply background matting first, as a preprocessing step. Our model then takes this as input and produces a 3D voxel grid. We then supply this to an algorithm that maps it to a LEGO\textregistered\, structure. The output resolution is flexible, offering opportunities to construct plausible LEGO\textregistered\, models at different levels of effort, expertise, and availability of bricks for construction.} \label{fig:chair-intro} \end{figure*} \begin{figure*}[!t] \centering \includegraphics[width=0.9\textwidth]{dalle_example.png} \caption{An example representation of the capabilities for combining our system with DALL-E to produce LEGO\textregistered\, models from written descriptions. 
The DALL-E model produces an image of a chair based on a description provided by the user, following which the image of the chair is converted to a 3D LEGO\textregistered\, model by our program. } \label{fig:dalle} \end{figure*} \subsection{Caption2LEGO\textregistered\,} \label{sec:caption2lego} As discussed in the Introduction, the Image2LEGO\textregistered\, pipeline is amenable to various extensions, be they different adaptations of the image-to-3D model architecture or pre- or post-processing techniques that may enhance the input or output space of the pipeline. One example that we consider here is to add an initial caption-to-image conversion step, accomplished by the DALL-E model~\cite{Ramesh2021}, which produces images from arbitrary written descriptions. This extension of the pipeline dramatically expands the creative space for users: LEGO\textregistered\, designs are no longer limited to objects that can be easily photographed, but extend to any fictional or real object that can be described in words. We present an example in Fig.~\ref{fig:dalle}, with the input caption: ``An armchair in the style of a gourd'' -- an object that may or may not exist to be photographed in the real world, but which DALL-E can produce as a photorealistic rendering. For all examples, see our project page. \section{Conclusion} We present a pipeline for producing 3D LEGO\textregistered\, models from 2D images. This work represents a significant advancement in the quest to generate physical LEGO\textregistered\, models from single 2D images. Our newly-trained TL-Octree structured network is able to construct the defining geometric features of multiple classes of objects from two-dimensional input images. We also demonstrate the pipeline extension to other, pre-trained or differently structured networks, such as the VRN for 3D face reconstruction, which generates colorful LEGO\textregistered\, models of a variety of object classes.
In the future, these capabilities may be further improved with an expanded data set, which may include multiple object poses to mitigate the current limitations related to occluded spaces and fine depth features, and by incorporating additional features in the pipeline to make builds more usable and sophisticated (e.g. ensuring stable structures and expanding the catalog of LEGO\textregistered\, piece shapes beyond cuboid bricks). We believe this work will greatly improve the accessibility of creative, personalized LEGO\textregistered\, creations for builders of all skill levels and ages. {\small \bibliographystyle{ieee_fullname}
\section{Introduction and Motivational Use Case} As part of the Fermilab HEPCloud project \cite{Holzman2017-short}, we have been constructing an intelligent decision support system (IDSS). HEPCloud is rapidly becoming the primary system for provisioning compute resources for all Fermilab-affiliated experiments. This provisioning is responsible for managing time allocations and monetary budget usage. It spans facilities including High Performance Computing centers, such as Cori at the National Energy Research Scientific Computing Center, and commercial clouds, such as Google Compute Engine and Amazon Web Services. Our IDSS, the Decision Engine (DE) \cite{decision-engine-short}, automates requests for computing resource allocations across all participating experiments and affiliated facilities. An overall goal of the DE is to use both administration-defined and management-defined policies to create resource scheduling requests on behalf of the HEPCloud facility. The DE is responsible for ensuring that \textit{policies} are applied in a reliable, traceable and consistent manner. The policies that are carried out ultimately result in resource requests, and ensure that these requests match incoming job requirements. In order to reliably meet peak demands, Fermilab must plan to provision enough resources to cover any forecasted peak. Provisioning for the peak, however, can be cost ineffective, since some resources may be underutilized during non-peak periods---even when resource sharing (enabled by HEP grids) is accounted for. Conversely, scientific productivity will suffer if demand is underestimated (for example, by provisioning for a statistic such as the median demand), since there is a long lead time to significantly increase the use of local or remote resources. HEPCloud intends to mitigate these problems by intelligently extending the current Fermilab compute facility to execute jobs submitted by scientists on a diverse set of resources. The DE is a key component in this process.
It provides the necessary real-time infrastructure to efficiently operate in an era of diverse resource needs, competitive cloud resource providers, and HPC facilities. A typical workflow executed by a workload management system (WMS) in provisioning computing resources for facility expansion is shown in figures \ref{fig:channel_1} through \ref{fig:channel_3}. This multi-stage decision making can alternatively be combined into a single decision-making stage, but at the cost of adding complexity to the system. During each stage, the WMS periodically executes the information-gathering phase (first row of each of these diagrams), the decision-making phase (decision block) and the result-publication phase (publish block). First, the WMS queries different systems and services to identify computational jobs in various job queues that are in need of compute resources. Based on the job and resource manifests, the WMS shortlists candidate resources eligible to run these jobs. During the second stage, the WMS uses these shortlisted resources, their price-performance metrics, their costing information, and their current occupancy and state. It ranks them based on a given criterion, such as the figure of merit (cost--benefit) in this case. During the final stage, the WMS applies administration-defined and management-defined policies to generate resource requests that are used by a provisioner to expand the facility.
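The staged gather/decide/publish pattern above can be sketched as a chain of passes over a shared state. The data shapes and stage logic below are illustrative only, not the actual HEPCloud or WMS interfaces:

```python
# A minimal sketch of the multi-stage provisioning workflow: each stage
# gathers inputs, makes a decision, and publishes its result for the next
# stage. Field names ("jobs", "resources", "cores", "cost") are hypothetical.

def stage(gather, decide, publish):
    """Build one periodic pass: gather inputs, decide, publish the result."""
    def run(state):
        facts = gather(state)
        decision = decide(facts)
        return publish(state, decision)
    return run

# Stage 1: match queued jobs against the resource manifest.
match_resources = stage(
    gather=lambda s: [(j, [r for r in s["resources"] if r["cores"] >= j["cores"]])
                      for j in s["jobs"]],
    decide=lambda pairs: {j["name"]: rs for j, rs in pairs if rs},
    publish=lambda s, d: {**s, "candidates": d},
)

# Stage 2: rank candidates by a cost figure of merit (cheapest first).
rank_resources = stage(
    gather=lambda s: s["candidates"],
    decide=lambda c: {job: sorted(rs, key=lambda r: r["cost"])
                      for job, rs in c.items()},
    publish=lambda s, d: {**s, "ranked": d},
)
```

A third stage applying the provisioning policies would consume `ranked` in the same way; collapsing all of this into one function is possible but, as noted above, concentrates the complexity in a single place.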
\begin{figure}[h] \centering \includegraphics[width=0.69\columnwidth]{images/ccgrid-workflow-channel1} \caption{\label{fig:channel_1}Determine resources available to run jobs.} \centering \includegraphics[width=0.82\columnwidth]{images/ccgrid-workflow-channel2} \caption{\label{fig:channel_2} Select best resources for jobs.} \centering \includegraphics[width=.72\columnwidth]{images/ccgrid-workflow-channel3} \caption{\label{fig:channel_3}Generate resource requests.} \end{figure} \section{Architectural Overview} \begin{figure}[h] \centering \includegraphics[width=.38\textwidth]{images/HEPCloud_DE_Design_to_Runtime_jbk} \caption{\label{fig:design} Overview of the Decision Engine architecture.} \end{figure} The primary drivers of the design were: \begin{inparaenum}[(1)] \item the need for a framework that enforces the processing stages defined and implemented by the program, and which provides for the injection of user-supplied code and expert knowledge; \item the need for a configuration and assembly system that instantiates the appropriate user-supplied code, and that provides the necessary context-dependent information to realize different parameterizations of that code; and \item a means to manage the data being processed and the varying timescales for the relevance and validity of those data. \end{inparaenum} The DE is a system that can manage and run algorithms of varying complexity for the purpose of requesting resources for computing jobs. The DE defines a \emph{Decision Channel} as a grouping of tasks that generate a \emph{decision}. A decision consists of a recommendation of one or more actions that should be executed (such as allocation of computing resources), actions that are directly executed (such as updating of monitoring systems), or both. The modularity provided by Decision Channels allows the DE to manage decision making processes as distinct units and allows different algorithms to be developed and tested independently by different domain experts. 
Each Decision Channel task, implemented as a Python class, contains several \emph{modules}, each of which adheres to a common protocol. We define four module types: \emph{Source}, \emph{Transform}, \emph{Logic Engine}, and \emph{Publisher}. A Decision Channel minimally consists of one of each of these kinds of modules as shown in Figure~\ref{fig:design}. Each module adheres to a specific \textit{contract} that governs how the modules connect. For example, each module (except Sources) expresses the names of all the data products the module consumes, and (except for Publishers) the names of all the data products the module produces. Each Source is scheduled periodically by the framework and is responsible for communicating with an external system to gather data that acts as input to the decision making process. A Transform module contains algorithms to convert input data into new data. A Transform consumes one or more data products (produced by one or more Sources, Transforms, or both) within a Decision Channel, and produces one or more new data products. The Logic Engine is a rule-based forward-chaining inference engine that operates on facts. Each fact has a name and an expression that evaluates to a boolean. The value of a fact is the value of the expression. Expressions can access and operate on data produced by Source and Transform modules. A rule consists of a condition composed of references to facts and boolean operations on their values. Actions are triggered when the rule evaluates to boolean ``True''. Logic Engine rules can produce new facts that evaluate to the result of the rule's boolean expression. This fact can be used by subsequent rules. In this manner, rules can be developed separately as blocks and chained together. Publishers consume data products produced by Sources and Transforms. They use remotely exposed APIs to publish the data products to the external systems. 
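The four module types and the Logic Engine's fact/rule evaluation can be sketched as follows. Class names, method signatures, and the rule representation are illustrative only; the actual Decision Engine API differs.

```python
# Minimal sketch of the module contract: Sources declare what they produce,
# Transforms what they consume and produce, Publishers what they consume.

class Source:
    produces = ()
    def acquire(self):              # gather data from an external system
        raise NotImplementedError

class Transform:
    consumes, produces = (), ()
    def transform(self, data):      # data: dict of product name -> value
        raise NotImplementedError

class Publisher:
    consumes = ()
    def publish(self, data):        # push data products to an external system
        raise NotImplementedError

class LogicEngine:
    """Facts are named boolean expressions over the data products; each rule
    pairs a condition over fact values with the name of a derived fact."""
    def __init__(self, facts, rules):
        self.facts = facts          # name -> callable(data) -> bool
        self.rules = rules          # name -> (callable(values) -> bool, derived)
    def evaluate(self, data):
        values = {name: expr(data) for name, expr in self.facts.items()}
        # Forward chaining: each rule may assert a new fact usable by
        # later rules in the chain.
        for name, (condition, derived) in self.rules.items():
            values[derived] = condition(values)
        return values
```

A rule such as "provision when the queue is long and the budget is not exhausted" then derives a new fact (e.g. `should_provision`) that a Publisher-facing rule can consume.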
\section*{Acknowledgments} This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. \bibliographystyle{IEEEtran}
\section{Introduction} Let $p$ be a prime and $q=p^n$. A map $f:\F_q \to \F_q$ is called differentially $d$-uniform (abbreviated $d$-uniform), if \[ d=\max_{a\neq 0,b\in\F_q} |\{x\in\F_q: f(x+a)-f(x)=b\}|. \] A 1-uniform map $f:\F_q \to \F_q$ is called planar; that is, $f$ is planar if $f(x+a)-f(x)$ is a permutation for every $a\in\F_q^*$. Planar maps exist if and only if $q$ is odd. A map $f$ is called almost perfect nonlinear (APN) if $f$ is 2-uniform. Observe that if $q$ is even, then the equation $f(x+a)+f(x)=b$ always has an even number of solutions, since $x$ solves it if and only if $x+a$ does so. In particular, there are no 1-uniform maps for $q$ even, and APN maps have the smallest possible uniformity on binary fields. APN maps, and more generally maps in characteristic $2$ with low uniformity, are an important research object in cryptography, mainly because they provide good resistance to differential attacks when used as an S-box of a block cipher. For a thorough survey detailing the importance of such maps for cryptography, we refer to~\cite{nybergsurvey}. Moreover, maps with low uniformity are intimately connected to certain codes~\cite{cczpaper,carletdingcodes}. Planar maps can be used for the construction of various structures in combinatorics and algebra, for example difference sets, projective planes and semifields~\cite{pottsurvey}. \\ \noindent A celebrated result of Ding and Yuan obtained in \cite{ding2006diffset} shows that image sets of planar maps yield skew Hadamard difference sets which are inequivalent to the Paley–Hada\-mard difference sets. This disproved a longstanding conjecture on the classification of skew Hadamard difference sets and motivated an interest in a better understanding of image sets of planar maps; see for example \cite{coulter2011number,kyureghyan2008some,weng2007pseudo}. The image sets of $d$-uniform maps with $d>1$ can be used to construct optimal combinatorial objects too, as shown in \cite{carlet2016quadratic}.
However, the case $d>1$ is less studied than $d=1$, although even the case $d=1$ is far from being completely understood. Here we extend some of the results on the image sets of planar maps with $d=1$ to cover a general $d$. The behavior of the image sets of $d$-uniform maps and the proofs are more complex for $d>1$. This is simply explained by the fact that the preimage distribution of a difference map $f_a: x\mapsto f(x+a)-f(x)$ is not unique and more difficult to control when $d>1$. Smaller values of $d$ are easier to handle.\\ \noindent In this paper we obtain a lower bound for the image size of $d$-uniform maps, which is sharp for several classes of $d$-uniform maps. However, there are cases where we expect that our bound can be improved. We prove several results on the preimage distribution of $d$-uniform maps. We observe that some classes of $d$-uniform Dembowski-Ostrom polynomials are uniquely characterized by the size of their image set. Further, we consider in more detail the case $d=2$, that is, APN maps on binary fields. For an APN map $f:\F_{2^n} \to \F_{2^n}$ the lower bound is \[ |\Image(f)|\geq \begin{cases} \frac{2^n+1}{3} & n \text{ is odd,} \\ \frac{2^n+2}{3} & n \text{ is even}. \end{cases} \] The first published proof for this bound appears in \cite[Lemma 5]{carlet-heuser-picek}, where a lower bound on the differential uniformity via the image set size is presented. Since the study of image sets of APN maps was not a goal of \cite{carlet-heuser-picek}, the lower bound in it remained unnoticed by most researchers on APN maps. A systematic study of the image sets of APN maps originated in \cite{czerwinski2020minimal}. Besides the lower bound, \cite{czerwinski2020minimal} presents several properties and examples of the image sets of APN maps. In this paper we develop the study of image sets of APN maps further. Our results indicate that the APN maps with minimal image size play a major role in understanding fundamental properties of APN maps.
We believe that a deeper analysis of the image sets of APN maps is an interesting research direction which will allow progress on several current challenges on APN maps. \\ \noindent Presently, the only known primary families of APN maps are monomials $x\mapsto x^k$ or Dembowski-Ostrom polynomials. These maps serve as a basis for the handful of known secondary constructions of APN maps \cite{nybergsurvey,pottsurvey}. Whereas the image sets of monomial maps are multiplicative subgroups extended with the zero element, and are thus uniquely determined by $\gcd(q-1,k)$, the behavior of the image sets of Dembowski-Ostrom polynomials is complex and not yet well understood. {Results from \cite{carlet2016quadratic} imply that if $n$ is even, then APN Dembowski-Ostrom polynomials of shape $f(x^3)$ have the minimal size $(2^n+2)/3$. For an odd $n$, the image size of APN Dembowski-Ostrom polynomials with exponents divisible by $3$ (as integers) is not unique. We present such families with image sizes $2^n, 2^{n-1}$ and $5\cdot2^{n-3}$ in this paper. }\\ \noindent At the beginning of our studies we were quite certain that having an image set of minimal size is a rare property, and that not many of the known APN maps would satisfy it. Despite our intuition, we found that the APN maps constructed by Zhou-Pott and their generalizations suggested by G\"olo\u{g}lu are such maps. This is quite remarkable since these families contain a large number of inequivalent maps, as shown in \cite{kaspers}.\\ \noindent {The set of component-wise plateaued maps includes quadratic maps, and hence also Dembowski-Ostrom polynomials. For $n$ even, component-wise plateaued maps with a certain preimage distribution have a very special Walsh spectrum. For APN maps this result implies that almost-$3$-to-$1$ component-wise plateaued maps have the classical Walsh spectrum, as observed in \cite{carletplateaued}.
This combined with the knowledge on the behavior of image sets explains why several important families of APN maps have the classical Walsh spectrum.} For $n$ odd we find a direct connection between the image set of an almost bent map and the number of its balanced component functions. As a consequence, we show that any almost bent map has a balanced component function. We conclude our paper with an upper bound on the image size of non-bijective almost bent maps and component-wise plateaued APN maps. To our knowledge these are the only currently known non-trivial upper bounds on the image size of APN maps.\\ \section{Images of $d$-uniform functions}\label{general} In this section we extend some of the results from \cite{coulter2011number,kyureghyan2008some,weng2007pseudo} on the image sets of planar maps with $d=1$ to cover a general $d$. The behavior of the image sets of $d$-uniform maps and the proofs are more complex for $d>1$. This is simply explained by the fact that the preimage distribution of a difference map $f_a: x\mapsto f(x+a)-f(x)$ is not unique and more difficult to control when $d>1$. \\ \noindent Let $\Image(f)$ be the image set of a map $f:\F_q \to \F_q$. For $r\geq 1$ we denote by $M_r(f)$ the number of $y\in\F_q$ with exactly $r$ preimages. Further, let $N(f)$ denote the number of pairs $(x,y)\in\F_q^2$, such that $f(x)=f(y)$. Note $N(f)\geq q$ and $N(f) =q$ exactly when $f$ is a permutation on $\F_q$. Let $m$ be the degree of the map $f$, that is the degree of its polynomial representation of degree not exceeding $q-1$. Then $M_r(f) = 0$ for every $r>m$. The following identities follow directly from the definition of $M_r(f)$ and $N(f)$ \begin{align} \sum_{r=1}^m M_r(f) &= |\Image(f)| \label{eq_Mr}\\ \sum_{r=1}^m rM_r(f) &= q \label{eq_rMr}\\ \sum_{r=1}^m r^2M_r(f) &= N(f) \label{eq_r2Mr}. 
\end{align} The quantities $M_r$ and $N(f)$ appear naturally when studying the image sets of maps on finite fields, see for example \cite{carletdingnonlinearities,coulter2011number,kyureghyan2008some}. A map $f:\F_q \to \F_q$ is called $k$-to-1, if every element in the image of $f$ has exactly $k$ preimages, that is if $M_r(f) =0$ for any $0<r\neq k$. \begin{lemma}\label{Lem_Cauchy_Schwarz} Any map $f:\F_q\to\F_q$ fulfills \[|\Image(f)|\geq\frac{q^2}{N(f)},\] with equality if and only if $f$ is $k$-to-1. \end{lemma} \begin{proof} It follows from the Cauchy-Schwarz inequality with (\ref{eq_Mr}),~(\ref{eq_rMr}) and (\ref{eq_r2Mr}) that \[ q^2=\left(\sum_{r=1}^m r M_r(f)\right)^2 \leq \left(\sum_{r=1}^m r^2M_r(f)\right)\left(\sum_{r=1}^m M_r(f)\right)=N(f)|\Image(f)|. \] The equality above holds if and only if there is a $k \in \R$ such that $r\sqrt{M_r(f)}=k\sqrt{M_r(f)}$ for all $1\leq r\leq m$, that is $M_k(f)=|\Image(f)|$ and $M_r(f)=0$ for $r\neq k$. \qed \end{proof} The following proof is an adaptation to arbitrary $d$ of \cite[Lemma 2]{kyureghyan2008some}, where planar functions with $d=1$ were considered. \begin{lemma}\label{Lem_Nfg} Let $f:\F_q\to\F_q$ be $d$-uniform. Then \[ N(f) \leq q+d\cdot t_0(f), \] where $t_0(f)$ is the number of elements $a\ne 0$ in $\F_q$ for which $f(x+a)-f(x)=0$ has a solution $x$ in $\F_q$. The equality holds if and only if each of these $t_0(f)$ equations has exactly $d$ solutions. \end{lemma} \begin{proof} Note that \begin{align*} N(f) &= |\{(u, v)\in\F_q^2: f(u) = f(v)\}| \\ &= |\{(u, v)\in\F_q^2: f(u)-f(v) = 0\}| \\ &= |\{(a, v)\in\F_q^2: f(v+a)-f(v) = 0\}|. \end{align*} For $a=0$ every pair $(0,v)$ with $v\in\F_q$ contributes to $N(f)$. If $a\neq 0$, then $f(v+a)-f(v)=0$ has at most $d$ solutions because $f$ is $d$-uniform. Therefore \[ N(f)\leq q+ d\cdot t_0(f). \] \qed \end{proof} Observe that for a planar map $N(f) = 2q-1$, since $f(v+a)-f(v)=0$ has a unique solution for every non-zero $a$.
Generalizing this, a map $f:\F_q \to \F_q$ is called zero-difference $d$-balanced if the equation $f(x+a)-f(x)=0$ has exactly $d$ solutions for every non-zero $a$, see \cite{carlet2016quadratic}. Hence $N(f) = q + (q-1)d = (d+1)q-d$ for a zero-difference $d$-balanced map. \begin{corollary}\label{cor:Nf} Let $f:\F_q\to\F_q$ be $d$-uniform. Then \[ N(f) \leq (d+1)q-d. \] The equality holds if and only if $f$ is zero-difference $d$-balanced. \end{corollary} \begin{proof} The statement follows from Lemma \ref{Lem_Nfg} and $t_0(f) \leq q-1$. \qed \end{proof} \begin{remark} Note that several of the results in this paper hold for any map $f$ with $N(f)\leq (d+1)q-d$, and not only for $d$-uniform ones. Some of our proofs can easily be adapted if $N(f)=kq\pm \varepsilon$ is known. \end{remark} \begin{theorem}\label{thm:Image_Size1} Let $f:\F_q\to\F_q$ be $d$-uniform. Then \[ |\Image(f)|\geq \left\lceil\frac{q}{d+1}\right\rceil. \] \end{theorem} \begin{proof} With Lemma~\ref{Lem_Cauchy_Schwarz} and Corollary~\ref{cor:Nf} it follows that \begin{align}\label{equality} |\Image(f)|&\geq \left\lceil\frac{q^2}{N(f)}\right\rceil \geq \left\lceil\frac{q^2}{(d+1)q-d}\right\rceil \geq \left\lceil\frac{q}{d+1}\right\rceil. \end{align} \qed \end{proof} The proof of Theorem~\ref{thm:Image_Size1} shows that the gap between $|\Image(f)|$ and $\left\lceil\frac{q}{d+1}\right\rceil$ is small, when $f$ is close to being $k$-to-1 and $N(f)$ is about $(d+1)q-d$. Furthermore, the bound in Theorem~\ref{thm:Image_Size1} is sharp; if $d+1$ is a divisor of $q-1$, then the map $m(x)= x^{d+1}$ reaches the lower bound of Theorem~\ref{thm:Image_Size1} and it is $d$-uniform. To see that $m(x)$ is indeed $d$-uniform observe that for any non-zero $a$ the difference map $m(x+a)-m(x) = (x+a)^{d+1}-x^{d+1}$ has degree $d$ and if $\omega \ne 1$ with $\omega ^{d+1}=1$ then $x:= (\omega -1)^{-1}$ satisfies $(x+1)^{d+1}-x^{d+1} =0$. 
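The sharpness example above is easy to verify by brute force on a small prime field. The sketch below checks, for $q=7$ and $d=2$ (so that $d+1=3$ divides $q-1=6$), that $m(x)=x^{3}$ is $2$-uniform and attains $|\Image(m)| = \lceil q/(d+1)\rceil = 3$; it also tallies the preimage counts $M_r$ and the quantity $N(f)$ from the identities above.

```python
from collections import Counter

def preimage_stats(f, q):
    """Return (M, image_size, N): M[r] = number of values with exactly r
    preimages, and N = #{(x, y) : f(x) = f(y)} over the prime field F_q."""
    counts = Counter(f(x) % q for x in range(q))
    M = Counter(counts.values())
    N = sum(c * c for c in counts.values())
    return M, len(counts), N

def differential_uniformity(f, q):
    """max over a != 0 and b of the number of x with f(x+a) - f(x) = b (mod q)."""
    best = 0
    for a in range(1, q):
        diff = Counter((f(x + a) - f(x)) % q for x in range(q))
        best = max(best, max(diff.values()))
    return best

q, d = 7, 2
m = lambda x: x ** (d + 1)
M, image_size, N = preimage_stats(m, q)
```

Here $\Image(m)=\{0,1,6\}$ with $M_1=1$ and $M_3=2$, so $\sum_r M_r = 3$, $\sum_r rM_r = 7 = q$, and $\sum_r r^2M_r = N(m) = 19 = (d+1)q-d$, i.e. this map also attains equality in Corollary~\ref{cor:Nf}.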
\\ Theorem~\ref{thm:Image_Size2} extends \cite[Theorem 2]{kyureghyan2008some} to cover an arbitrary $d$. Besides giving a different proof for Theorem~\ref{thm:Image_Size1}, it additionally provides information on the possible preimage distribution of a $d$-uniform map with minimal image set. For a map $f:\F_q\to\F_q$ and $S\subseteq\F_q$, $a\in\F_q$, we denote by $f^{-1}(S)$ the preimage of $S$ under $f$ and by $\omega(a)$ the size of $f^{-1}(\{a\})$. \begin{theorem}\label{thm:Image_Size2} Let $f:\F_q\to\F_q$ be $d$-uniform. Then \[ |\Image(f)|\geq \left\lceil\frac{q}{d+1}\right\rceil. \] If \[ |\Image(f)| = \left\lceil\frac{q}{d+1}\right\rceil = \frac{q+\varepsilon}{d+1} \] with $1\leq\varepsilon\leq d$, then \begin{equation}\label{eq_min_image_set1} \sum_{y\in \Image(f)}(\omega(y)-(d+1))^2 \leq (d+1)(\varepsilon-1) + 1. \end{equation} \end{theorem} \begin{proof} By Corollary~\ref{cor:Nf} \[ \sum_{y\in \Image(f)}(\omega(y))^2 = \sum_{y\in\F_q} (\omega(y))^2 = N(f) \leq (d+1)q-d. \] Since \[\sum_{y\in \Image(f)}\omega(y) = q,\] we get \begin{align*} 0 &\leq \sum_{y\in \Image(f)}(\omega(y)-(d+1))^2 = \sum_{y\in \Image(f)}((\omega(y))^2-2(d+1)\omega(y)+(d+1)^2) \\ &= N(f)-2(d+1)q+(d+1)^2|\Image(f)| \leq (d+1)q-d-2(d+1)q+(d+1)^2|\Image(f)| \\ &= -(d+1)q-d+(d+1)^2|\Image(f)|. \end{align*} Hence \[ (d+1)^2|\Image(f)| \geq (d+1)q+d \] and \[ |\Image(f)|\geq \left\lceil\frac{(d+1)q+d}{(d+1)^2}\right\rceil = \left\lceil\frac{q}{d+1}+\frac{d}{(d+1)^2}\right\rceil \geq \left\lceil\frac{q}{d+1}\right\rceil. \] Now let \[|\Image(f)| = \left\lceil\frac{q}{d+1}\right\rceil = \frac{q+\varepsilon}{d+1}\] with $1\leq\varepsilon\leq d$. Then \begin{align*} \sum_{y\in \Image(f)}(\omega(y)-(d+1))^2 &\leq -(d+1)q-d+(d+1)^2\frac{q+\varepsilon}{d+1}\\ &= \varepsilon d -d + \varepsilon=(d+1)(\varepsilon-1) + 1.
\end{align*} \qed \end{proof} Later we use the following observation: Let $f$ be a $d$-uniform map and \[|\Image(f)| = \frac{q+\varepsilon}{d+1}.\] Define \[ D = \{y\in \Image(f): \omega(y)\neq d+1\}. \] Then we have \[ q = \sum_{y\in \Image(f)} \omega(y) = \sum_{y\in \Image(f)\setminus D} \omega(y) + \sum_{y\in D} \omega(y) = \left(\frac{q+\varepsilon}{d+1}-|D|\right)(d+1) + \sum_{y\in D} \omega(y), \] implying \begin{equation}\label{eq_min_image_set2} \sum_{y\in D}\omega(y) = |D|(d+1)-\varepsilon. \end{equation} The following theorem is a generalization of \cite[Theorem 1]{coulter2011number} and it provides information on the possible preimage distribution of a $d$-uniform map. \begin{theorem}\label{th:2.7} Let $f:\F_q \to \F_q$ be $d$-uniform. Then \begin{equation} \label{th-1case} \sum_{r=1}^d r(d+1-r)M_r(f)\geq d \end{equation} and \begin{equation} \label{th-2case} \sum_{r=1}^{d+1}r(d+2-r)M_r(f) \geq q+d. \end{equation} { The equality in \eqref{th-1case} holds if and only if $N(f)=(d+1)q-d$ and $M_r(f)=0$ for all $r \geq d+2$; and the equality in \eqref{th-2case} holds if and only if $N(f)=(d+1)q-d$ and $M_r(f)=0$ for $r>d+2$. The latter case reduces to \[ \sum_{r=1}^d r(d+1-r)M_r(f) = (d+2)M_{d+2}(f)+d. \] } \end{theorem} \begin{proof} Let $m$ be the degree of $f$. With Corollary~\ref{cor:Nf} we have $N(f)\leq(d+1)q-d$. Using \eqref{eq_rMr} and \eqref{eq_r2Mr} we get \[ \sum_{r=1}^m r^2M_r(f) = N(f) \leq (d+1)q-d = (d+1)\sum_{r=1}^m rM_r(f) - d, \] so that \begin{equation}\label{eq1} \sum_{r=1}^{d+1} (r(d+1)-r^2)M_r(f) -d\geq \sum_{r=d+2}^m (r^2-(d+1)r)M_r(f) \end{equation} As the right hand side is non-negative, we have in particular \[ \sum_{r=1}^d r(d+1-r)M_r(f)\geq d, \] with equality if and only if $M_r(f)=0$ for all $r \geq d+2$ and $N(f) = (d+1)q-d$. 
Note that for $r\geq d+2$ it holds that $r^2-(d+1)r\geq r$, so that \eqref{eq1} turns into \begin{equation}\label{eq2} \sum_{r=1}^{d+1} r(d+1-r)M_r(f)\geq \sum_{r=d+2}^m (r^2-(d+1)r)M_r(f)+d\geq \sum_{r=d+2}^m rM_r(f) + d. \end{equation} Adding $\sum_{r=1}^{d+1}rM_r(f)$ on both sides of \eqref{eq2} and using \eqref{eq_rMr} gives \[ \sum_{r=1}^{d+1} r(d+2-r)M_r(f) \geq \sum_{r=1}^m rM_r(f) +d = q+d. \] For equality to hold, we need equality in \eqref{eq2}. The first equality in \eqref{eq2} holds if and only if $N(f)=(d+1)q-d$, and the second equality holds if and only if \[ \sum_{r=d+2}^m (r^2-(d+1)r)M_r(f) = \sum_{r=d+2}^m rM_r(f), \] that is, $M_r(f)=0$ for $r> d+2$. In that case \[ \sum_{r=1}^d r(d+1-r)M_r(f) = (d+2)M_{d+2}(f)+d. \] \qed \end{proof} \section{Dembowski-Ostrom $d$-uniform polynomials} A polynomial/map $f \in \F_q[x]$ is called Dembowski-Ostrom (DO) if it can be written as \[ f(x) = \sum_{i,j=0}^{n-1}a_{ij}x^{p^i+p^j} \] when $q$ is odd and \[ f(x) = \sum_{\substack{i,j=0\\i\neq j}}^{n-1}a_{ij}x^{2^i+2^j} \] when $q$ is even. Note that $x^2$ is a DO polynomial for any odd $q$, but not for even $q$. Maps obtained as the sum of a DO map with an $\F_p$-affine one are called quadratic. Let $k$ be a divisor of $q-1$. We call a map $f$ $k$-divisible if it can be written as $f(x)=f'(x^k)$ for a suitable $f'$. Observe that $f$ is $k$-divisible if and only if $f(x)=f(\omega x)$ for all $x\in \F_q$ and all $\omega \in \F_q^*$ whose order divides $k$. Further, we call a map $f$ almost-$k$-to-1\footnote{Note that in many papers such maps are called just $k$-to-1. However, we use the terminology almost-$k$-to-1 to avoid confusion with the $k$-to-1 maps considered in Section \ref{general}.} if there is a unique element in $\Image(f)$ with exactly 1 preimage and all other images have exactly $k$ preimages. Note that if $f(x)$ is $d$-uniform, then so is $f(x+c)+u$ for arbitrary $c,u\in\F_q$.
Hence we may without loss of generality assume that $f(0)=0$ and that 0 is the unique element with exactly one preimage, when considering the $d$-uniform property of an almost-$k$-to-$1$ map $f$.\\ For a non-zero $a\in \F_q$ we define $$D_a(f):= \{f(x+a)-f(x) : x\in \F_q\},$$ which we call a differential set of $f$ in direction $a$. It is well-known and easy to see that the differential sets of quadratic maps are $\F_p$-affine subspaces. The following result can be partly deduced from Proposition 3 and Corollary 1 and their proofs in \cite{carlet2016quadratic}. We include its proof for the convenience of the reader. \begin{lemma}\label{Lem_hyperplane} Let $q=p^n$ with $p$ prime, $d+1$ be a divisor of $q-1$ and $f:\F_q \to \F_q$ be a $(d+1)$-divisible DO polynomial which is almost-$(d+1)$-to-1. Then \begin{itemize} \item[(a)] $f$ is zero-difference $d$-balanced; \item[(b)] $f$ is $d$-uniform and all its differential sets are $\F_p$-linear subspaces; \item[(c)] $d=p^i$ for some $i\geq 0$. \end{itemize} \end{lemma} \begin{proof} First we prove statements (a) and (b): Since $f$ is a DO polynomial, it is $d$-uniform provided it is zero-difference $d$-balanced. First we show that for any non-zero $a$ the equation $f_a(x) = f(x+a)-f(x) =0$ has a solution (equivalently, $D_a(f)$ is a subspace). Indeed, let $1\neq\omega\in\F_q$ be a zero of $x^{d+1}-1$ and set $x=(\omega-1)^{-1}a$. This $x$ fulfills $x+a = \omega x$, and hence $f_a(x) = f(\omega x)-f(x) =0$. Since there are $d$ such $\omega$, each yielding a distinct solution $x$, the equation $f_a(x)=0$ has at least $d$ solutions. On the other hand, since $f$ is $(d+1)$-divisible and almost-$(d+1)$-to-1, the equation $f(x+a)=f(x)$ is fulfilled if and only if $x+a = \omega x$ for an element $\omega$ satisfying $\omega^{d+1}=1$. This implies that a solution $x$ must be given by $a(\omega-1)^{-1}$. Hence there are at most $d$ solutions of $f_a(x)=0$. The statement in (c) follows from (b).
Indeed, the differential sets of $f$ are linear subspaces of size $p^n/d$, and hence $d=p^i$ for some $i \geq 0$. \qed \end{proof} The following result also holds if $f$ is not a DO polynomial: \begin{theorem}\label{Thm_d+1to1_if_d-uniform} Let $d+1$ be a divisor of $q-1$ and $f:\F_q\to\F_q$ be $(d+1)$-divisible and $d$-uniform. Then $f$ is almost-$(d+1)$-to-1. \end{theorem} \begin{proof} As $f$ is $(d+1)$-divisible, we have $|\Image(f)|\leq \frac{q+d}{d+1}$. By Theorem~\ref{thm:Image_Size1} we have $|\Image(f)|\geq \frac{q+d}{d+1}$ and therefore $|\Image(f)|= \frac{q+d}{d+1}$ and $f$ is almost-$(d+1)$-to-1. \qed \end{proof} A fascinating property of DO planar polynomials proved in \cite{coulter2011number,weng2012further} is: A DO polynomial is planar if and only if it is almost-2-to-1. Observe that for an odd $q$ a DO polynomial is always $2$-divisible. Corollary 1 in \cite{carlet2016quadratic} proves an analog of this result for the $d$-uniform case; Theorem \ref{th_divisible} is a reformulation of it using the terminology introduced in this paper. \begin{theorem}\label{th_divisible} Let $d+1$ be a divisor of $q-1$. A $(d+1)$-divisible DO polynomial $f$ is $d$-uniform if and only if $f$ is almost-$(d+1)$-to-1. \end{theorem} \begin{proof} It follows directly from Theorem~\ref{Thm_d+1to1_if_d-uniform} and Lemma~\ref{Lem_hyperplane}. \qed \end{proof} Note that if $d=p^i$ and $d+1=p^i+1$ is a divisor of $q-1$, the map $x\mapsto x^{d+1}$ is a $d$-uniform DO polynomial that is almost-$(d+1)$-to-1. \\ \section{Image sets of APN maps of binary finite fields} In the following sections we study the image sets of APN maps on binary fields. Such maps are of particular interest because of their applications in cryptography and combinatorics. \\ \noindent {The first systematic study of the image sets of APN maps originated in \cite{czerwinski2020minimal}, where it is shown that the image set of an APN map on $\F_{2^n}$ contains at least $\lceil (2^n+1)/3 \rceil$ elements.
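Both the bound and its sharpness for $n$ even can be checked by brute force for small $n$. A Python sketch (a sanity check only; helper names are ours) computes the differential uniformity and image size of the Gold monomial $x\mapsto x^3$ on $\F_{2^4}$ and $\F_{2^3}$:

```python
from collections import Counter

def gf_mul(a, b, mod_poly, n):
    """Multiplication in F_{2^n} = F_2[x]/(mod_poly); elements are bit masks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    for i in range(2 * n - 2, n - 1, -1):
        if (r >> i) & 1:
            r ^= mod_poly << (i - n)
    return r

def cube_stats(mod_poly, n):
    """(differential uniformity, image size) of x -> x^3 on F_{2^n}."""
    q = 1 << n
    f = lambda x: gf_mul(gf_mul(x, x, mod_poly, n), x, mod_poly, n)
    uni = max(
        max(Counter(f(x ^ a) ^ f(x) for x in range(q)).values())
        for a in range(1, q)
    )
    return uni, len({f(x) for x in range(q)})

even_case = cube_stats(0b10011, 4)   # F_{2^4} = F_2[x]/(x^4 + x + 1)
odd_case = cube_stats(0b1011, 3)     # F_{2^3} = F_2[x]/(x^3 + x + 1)
```

For $n=4$ the image size is $6=(2^4+2)/3$, so the lower bound is attained; for $n=3$ the map is an APN permutation.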
This lower bound is proved by methods of linear programming in \cite{czerwinski2020minimal}, which is a novel approach for studying image sets of maps on finite fields. The APN monomials have the image size $(2^n+2)/3$ for $n$ even, showing that the lower bound is sharp for $n$ even. Several numerical results presented in \cite{czerwinski2020minimal} suggest that the minimal image size of APN maps is much larger, probably around $2^{n-1}$, if $n$ is odd.} {The preprint \cite{carlet2020} remarks that the lower bound on the image sets of APN maps appears (in an equivalent form) already in \cite[Lemma 5]{carlet-heuser-picek}, where a lower bound on the differential uniformity via image set size is presented. The arguments proving Lemma 5 in \cite{carlet-heuser-picek} are similar to the ones we presented for Lemma \ref{Lem_Cauchy_Schwarz} and Lemma \ref{Lem_Nfg}. These are more or less standard for studying image sets of maps with special additive properties on finite sets, see \cite{kyureghyan2008some,weng2007pseudo}.} \\ {The results of Section \ref{general} yield, besides the lower bound on the image size, also the possible preimage distributions of APN maps meeting it, see Theorem \ref{thm:APN_Lower_Bound}. For the APN maps on $\F_{2^n}$ Theorem~\ref{th:2.7} reduces to:} \begin{corollary}\label{cor:apn-1-2} Let $f:\F_{2^n} \to \F_{2^n}$ be APN. Then \begin{itemize} \item[(a)] \[ M_1(f)+M_2(f)\geq 1, \] and hence there is at least one element with exactly 1 or 2 preimages. For $n$ even, the inequality is sharp if and only if $f$ is almost-$3$-to-$1$. For $n$ odd, the inequality is sharp if and only if there is a unique element in $\Image(f)$ with exactly two preimages and the remaining elements have exactly three preimages. \item[(b)] \[ 3M_1(f) + 4M_2(f) + 3M_3(f) \geq 2^n+2. \] The equality holds if and only if $N(f)=3\cdot 2^n-2$ and $M_r(f)=0$ for $r>4$, in which case \[ M_1(f)+M_2(f) = 2M_4(f)+1.
\] \end{itemize} \end{corollary} \begin{proof} {The inequalities as well as the equality case in (b) follow directly from Theorem~\ref{th:2.7}. Let $M_1(f)+M_2(f) = 1$. Then $M_r(f) = 0$ for every $r\geq 4$ by Theorem~\ref{th:2.7}. To complete the proof note that the value of $2^n \bmod 3$ forces $(M_1(f),M_2(f)) =(1,0)$ resp. $(M_1(f),M_2(f)) =(0,1)$ depending on the parity of $n$. \qed} \end{proof} {The observation that an APN map must have at least one element with exactly 1 or 2 preimages was already made in \cite{czerwinski2020minimal}.}\\ \noindent The next theorem is a consequence of Theorem~\ref{thm:Image_Size1} and Theorem~\ref{thm:Image_Size2}. Corollary~\ref{cor:apn-1-2} along with identity \eqref{eq_min_image_set2} and inequality \eqref{eq_min_image_set1} yields the possible preimage distributions of an APN map meeting the lower bound. \begin{theorem}\label{thm:APN_Lower_Bound} Let $f:\F_{2^n} \to \F_{2^n} $ be APN. Then \[ |\Image(f)|\geq \begin{cases} \frac{2^n+1}{3} & n \text{ is odd,} \\ \frac{2^n+2}{3} & n \text{ is even}. \end{cases} \] If $n$ is odd and \[|\Image(f)| = \frac{2^n+1}{3},\] then $\omega(y_0)=2$ for one element $y_0\in \Image(f)$ and $\omega(y)=3$ for $y\in \Image(f)\setminus\{y_0\}$. If $n$ is even and \[|\Image(f)| = \frac{2^n+2}{3},\] then one of the following cases must occur: \begin{enumerate} \item $\omega(y_0)=1$ for one element $y_0\in \Image(f)$ and $\omega(y)=3$ for all $y\in \Image(f)\setminus\{y_0\}$, that is, $f$ is almost-3-to-1. \item $\omega(y_i)=2$ for two elements $y_0, y_1\in \Image(f)$ and $\omega(y)=3$ for all $y\in \Image(f)\setminus\{y_0, y_1\}$. \item $\omega(y_i)=2$ for three elements $y_0, y_1, y_2\in \Image(f)$, $\omega(y_3)=4$ for a unique $y_3\in \Image(f)\setminus\{y_0, y_1, y_2\}$ and $\omega(y)=3$ for all $y\in \Image(f)\setminus\{y_0, \ldots, y_3\}$. \end{enumerate} \end{theorem} { \begin{proof} Theorem~\ref{thm:Image_Size2} immediately yields the lower bound.
We now apply \eqref{eq_min_image_set1} and \eqref{eq_min_image_set2} to prove the statements on the preimage distribution. Set $D = \{y\in \Image(f): \omega(y)\neq 3\}$. If $n$ is odd, by \eqref{eq_min_image_set1} we get \[\sum_{y\in D}(\omega(y)-3)^2 \leq 1.\] Hence there is at most one $y_0 \in \Image(f)$ such that $\omega(y_0)\neq 3$ and it must satisfy $\omega(y_0) \in\{2,4\}$. Corollary \ref{cor:apn-1-2} completes the proof. Let $n$ be even. Then from \eqref{eq_min_image_set1} and \eqref{eq_min_image_set2} we get \[\sum_{y\in D}(\omega(y)-3)^2 \leq 4\] and \[\sum_{y\in D}\omega(y) = 3|D|-2.\] Clearly, if $|D|=1$, then $f$ is almost-$3$-to-$1$. If $|D|=2$, then $\omega(y)=2$ for every $y \in D$. Note that $|D|=3$ is not possible, since $\omega(y)\in \{2,4\}$ for all $y \in D$, contradicting the second equation since $3|D|-2$ is odd in this case. If $|D|=4$, we have again $\omega(y)\in \{2,4\}$ for all $y \in D$ and the only solution to the second equation is $\omega(y)=2$ for $3$ elements and $\omega(y)=4$ for one element. $|D|>4$ violates the first equation, so we have exhausted all possibilities. \qed \end{proof}} The APN monomials meet the lower bound for $n$ even; we present several further such families later in this paper. All these examples of APN maps are almost-3-to-1. We believe that cases 2. and 3. for $n$ even never occur. \\ \noindent { {\bf Open Problem:} Let $n$ be even and $f:\F_{2^n} \to \F_{2^n} $ be an APN map with $|\Image(f)| = (2^n+2)/3$. Can $f$ have the preimage distribution described in case 2. or 3. of Theorem \ref{thm:APN_Lower_Bound}\,?}\\ \noindent Numerical results suggest that there are no APN maps meeting the lower bound for $n$ odd. We show later in this paper that the image sizes of almost bent maps never attain the lower bound in Theorem~\ref{thm:APN_Lower_Bound}.
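For the Gold map $x\mapsto x^3$ on $\F_{2^4}$ the preimage distribution of case 1. and the equality cases of Corollary~\ref{cor:apn-1-2} can be observed directly. A Python sketch (a sanity check only; helper names are ours):

```python
from collections import Counter

N_BITS, MOD = 4, 0b10011           # F_{2^4} = F_2[x]/(x^4 + x + 1)
Q = 1 << N_BITS

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    for i in range(2 * N_BITS - 2, N_BITS - 1, -1):
        if (r >> i) & 1:
            r ^= MOD << (i - N_BITS)
    return r

f = lambda x: gf_mul(gf_mul(x, x), x)        # x^3

omega = Counter(f(x) for x in range(Q))      # preimage sizes omega(y)
M = Counter(omega.values())                  # M_r(f)

image_size = len(omega)
part_a = M[1] + M[2]                         # Corollary (a): >= 1
part_b = 3 * M[1] + 4 * M[2] + 3 * M[3]      # Corollary (b): >= 2^n + 2
```

The computation confirms the almost-3-to-1 distribution $M_1=1$, $M_3=5$, image size $(2^4+2)/3=6$, and equality in both parts of Corollary~\ref{cor:apn-1-2}.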
The APN maps with the smallest image sizes that we found are \begin{itemize} \item[-] for $n=7$ the map $x \mapsto x^3+x^{64}+x^{16}+x^4$ with the image size $57=2^6-7$; \item[-] for $n=11$ the map $x \mapsto x^3 +x^{256} $ with the image size $1013=2^{10}-11$. \end{itemize} In \cite{czerwinski2020minimal} it is shown that the APN binomial $b(x)= x^3+x^4$ is $2$-to-1 if $n$ is odd. This binomial is studied in \cite{kyureg-mueller-wang}: For an even $n$ the image set of $b(x)=x^3+x^4$ satisfies $M_1(b) = 2(2^n-1)/3, ~ M_2(b)=1$ and $M_4(b)=(2^n-4)/12$, and hence $|\Image(b)|= 3\cdot 2^{n-2}$.\\ \noindent The lower bound in Theorem~\ref{thm:APN_Lower_Bound} can be used to prove several structural results for APN maps. For example, it gives an easy proof of the following well-known property of monomial APN maps. \begin{corollary} Let $q=2^n$ and $f(x)=x^k$ be APN on $\F_{q}$. Then $\gcd(k, q-1)=1$ if $n$ is odd and $\gcd(k, q-1)=3$ if $n$ is even. \end{corollary} \begin{proof} Since \[|\Image(f)\setminus\{0\}| = \frac{q-1}{\gcd(k, q-1)},\] Theorem~\ref{thm:APN_Lower_Bound} forces $\gcd(k, q-1)\leq 3$. For $n$ odd we get $\gcd(k, q-1)=1$. Now let $n$ be even and $\gcd(k, q-1)=1$. Then $f$ is an APN permutation on all subfields of $\F_q$. In particular, it must be an APN permutation on $\F_4$. It is easy to check that such a permutation does not exist. Hence $\gcd(k, q-1)=3$. \qed \end{proof} The following characterization for APN $3$-divisible DO polynomials is a direct consequence of Theorem~\ref{th_divisible}: \begin{corollary}\label{th_iff} Let $n$ be even and $f:\F_{2^n}\to \F_{2^n}$ be a 3-divisible DO polynomial. Then $f$ is APN if and only if $f$ is almost-3-to-1. \end{corollary} We take a closer look at 3-divisible APN maps in the next section.\\ Next we observe that Zhou-Pott APN quadratic maps constructed in \cite{zhou2013new} provide examples of almost-3-to-1 APN maps which are not 3-divisible.
\begin{theorem} \label{thm:zhoupott31} Let $m, i\geq 2$ be even, $\gcd(k, m)=1$ and $\alpha\in\F_{2^m}$ not a cube. \begin{itemize} \item[(a)] Then $f:\F_{2^m}\times\F_{2^m}\to\F_{2^m}\times\F_{2^m}$ given by \begin{equation} f(x,y) = (x^{2^k+1}+\alpha y^{(2^k+1)2^i}, xy) \label{eq:zhoupott} \end{equation} is almost-3-to-1. More precisely, $f(x,y) = f(u,v)$ if and only if $(x,y)=(\omega u, \omega^2 v)$ with $\omega \in \F_4^*$. { The map on $\F_{2^{2m}}$ corresponding to $f(x,y)$ has a univariate representation which is not a DO polynomial.} \item[(b)] Then $g:\F_{2^m}\times\F_{2^m}\to\F_{2^m}\times\F_{2^m}$ given by \begin{equation} g(x,y) = (x^{2^k+1}+\alpha y^{(2^k+1)}, xy^{2^{m-i}}) \label{eq:zhoupott2} \end{equation} is almost-3-to-1. {The map on $\F_{2^{2m}}$ corresponding to $g(x,y)$ has a univariate representation which is a DO polynomial that is not 3-divisible.} \end{itemize} \end{theorem} \begin{proof} Note that $\gcd(2^{2m}-1, 2^k+1) =3$ and 3 is a divisor of both $2^m-1$ and $2^i-1$.\\ (a) Let $(x,y),(u,v)\in\F_{2^m}\times\F_{2^m}$ with $f(x,y)=f(u,v)$. Then we have \begin{align*} x^{2^k+1}+\alpha y^{(2^k+1)2^i} &= u^{2^k+1}+\alpha v^{(2^k+1)2^i} \\ xy &= uv. \end{align*} First suppose $v=0$; then $xy=uv=0$, and hence $x=0$ or $y=0$. For $x=0$, we get $$ \alpha y^{(2^k+1)2^i} = u^{2^k+1}, $$ which forces $y=u=0$, since $\alpha$ is a non-cube. For $x, u \ne 0$ and $y=0$ we get $$ x^{2^k+1} = u^{2^k+1}, $$ which is satisfied if and only if $x = \omega u$ with $\omega \in \F_4^*$. Now let $v\ne 0$. Setting $u=\frac{xy}{v}$ and rearranging the first equation we get \[ x^{2^k+1}+\left(\frac{xy}{v}\right)^{2^k+1} = \alpha (y^{2^k+1}+v^{2^k+1})^{2^i} \] or equivalently \[ x^{2^k+1}\left(1+\left(\frac{y}{v}\right)^{2^k+1}\right) = \alpha v^{(2^k+1)2^i}\left(1+\left(\frac{y}{v}\right)^{2^k+1}\right)^{2^i}.
\] If $1+\left(\frac{y}{v}\right)^{2^k+1}\neq 0$, we can divide by it and obtain \begin{equation} x^{2^k+1} = \alpha v^{(2^k+1)2^i}\left(1+\left(\frac{y}{v}\right)^{2^k+1}\right)^{2^i-1}. \label{eq:cube_eq} \end{equation} Note that \eqref{eq:cube_eq} has no solution, since $x^{2^k+1}$, $ v^{(2^k+1)2^i}$ and $\left(1+\left(\frac{y}{v}\right)^{2^k+1}\right)^{2^i-1}$ are all cubes and $\alpha$ is not a cube. Finally observe that $\left(\frac{y}{v}\right)^{2^k+1}= 1$ holds if and only if $y =\omega v$ with $\omega \in \F_4^*$.\\ Next we show that the map on $\F_{2^{2m}}$ corresponding to $f(x,y)$ is not given by a univariate DO polynomial. Let $(u_1, u_2)$ be a basis of $\F_{2^{2m}}$ over $\F_{2^m}$ and $(v_1, v_2)$ its dual basis. Then an element $z$ of $\F_{2^{2m}}$ has the representation $(v_1z+\overline{v_1z})u_1 + (v_2z+\overline{v_2z})u_2$, where $\overline{a} = a^{2^m}$. Thus we get \begin{eqnarray*} f(z) & = &f(v_1z+\overline{v_1z}, v_2z+\overline{v_2z}) \\ &= &\left((v_1z+\overline{v_1z})^{2^k+1}+\alpha (v_2z+\overline{v_2z})^{(2^k+1)2^i}\right)u_1 \\ &+& (v_1z+\overline{v_1z})\cdot (v_2z+\overline{v_2z}) u_2 \\ & = & \ldots +( (v_1\overline{v_2} + \overline{v_1}v_2)z^{2^m+1}+ v_1v_2z^2 + \overline{v_1v_2}z^{2\cdot2^m})u_2. \end{eqnarray*} Since $k\ne 0$, there will be no term $z^2$ in the summand for $u_1$, and hence the above polynomial contains a non-zero term with $z^2$, showing that $f(z)$ is not a DO polynomial.\\ (b) Note that $g(x,y)$ is obtained from $f(x,y)$ by the linear bijective transformation $(x,y) \mapsto (x, y^{2^{m-i}})$. In particular, $g(x,y)$ is almost-3-to-1, too. Next we describe the univariate representation of the map on $\F_{2^{2m}}$ corresponding to $g(x,y)$. Again let $(u_1, u_2)$ be a basis of $\F_{2^{2m}}$ over $\F_{2^m}$ and $(v_1, v_2)$ its dual basis.
Then we get \begin{eqnarray*} g(z) & = &g(v_1z+\overline{v_1z}, v_2z+\overline{v_2z}) \\ &= &\left((v_1z+\overline{v_1z})^{2^k+1}+\alpha (v_2z+\overline{v_2z})^{(2^k+1)}\right)u_1 \\ &+& (v_1z+\overline{v_1z})\cdot (v_2z+\overline{v_2z})^{2^{m-i}} u_2, \end{eqnarray*} which is a DO polynomial. Finally, note that $g(x,y) \ne g(\omega x, \omega y)$ for $\omega \in \F_4 \setminus \F_2$, and hence the DO polynomial $g(z)$ is not 3-divisible. \qed \end{proof} \begin{remark} { Observe that our proof of Theorem \ref{thm:zhoupott31} does not use the property that $f(x,y)$ is APN. Hence Theorems \ref{thm:zhoupott31} and \ref{thm:walsh_spectrum_three_to_one} prove that the maps $f(x,y)$ and $g(x,y)$ are APN. } \end{remark} {In~\cite{carlet2016quadratic} a map $f:\F_q \to \F_q$ is called $\delta$-vanishing if for any non-zero $a$ the equation $f(x+a)-f(x) =0$ has $t_a$ solutions, where $ 0 < t_a \leq \delta$. Note that any zero-difference $d$-balanced map is $d$-vanishing. Problem 1 in~\cite{carlet2016quadratic} asks whether any quadratic $\delta$-vanishing map must be $d$-divisible with an appropriate $d$. Theorem \ref{thm:zhoupott31} shows that the answer to this problem is negative. Indeed, the Zhou-Pott maps $f(x,y)$ and $g(x,y)$ are quadratic APN maps which are not $3$-divisible. These maps are almost-3-to-1, hence $N(f)=N(g)=3q-2$ (with $q=2^{2m}$), and thus by Corollary \ref{cor:Nf} they are zero-difference $2$-balanced. } \\ \noindent { The next sufficient condition for an APN map to be almost-3-to-1 follows immediately from the lower bound in Theorem \ref{thm:APN_Lower_Bound}. \begin{proposition}\label{prop:apn_3to1} Let $n$ be even and $f:\F_{2^n}\to \F_{2^n}$ be an APN map satisfying \begin{itemize} \item $f(0)=0$, \item every $y \in \Image(f)\setminus \{0\}$ has at least $3$ preimages. \end{itemize} Then $f$ is almost-3-to-1. \end{proposition} \qed The Zhou-Pott construction of APN maps was recently generalized in~\cite{farukapn}.
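The collision structure established in the proof of Theorem~\ref{thm:zhoupott31} can be observed numerically in the smallest admissible case $m=i=2$, $k=1$. A Python sketch (a sanity check only; helper names are ours) verifies that $f(x,y)$ is almost-3-to-1 on $\F_4\times\F_4$:

```python
from collections import Counter

# F_4 = F_2[x]/(x^2 + x + 1); elements 0..3 as bit masks
def mul4(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    if (r >> 2) & 1:            # reduce the single possible overflow bit
        r ^= 0b111
    return r

def pow4(x, e):
    r = 1
    for _ in range(e):
        r = mul4(r, x)
    return r

i, k = 2, 1
alpha = 0b10                    # a generator of F_4^*; not a cube, since x^3 = 1 on F_4^*

def f(x, y):                    # f(x, y) = (x^{2^k+1} + alpha*y^{(2^k+1)2^i}, x*y)
    return (pow4(x, 2**k + 1) ^ mul4(alpha, pow4(y, (2**k + 1) * 2**i)),
            mul4(x, y))

omega = Counter(f(x, y) for x in range(4) for y in range(4))
```

The computation yields one image with a single preimage and five images with three preimages each, i.e.\ the almost-3-to-1 distribution on the $16$ pairs.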
Next we use Proposition \ref{prop:apn_3to1} to show that the APN maps of this construction are almost-$3$-to-$1$ too. \begin{theorem}\label{apn-faruk} Define the following maps on $\F_{2^m} \times \F_{2^m}$ \[f_1(x,y)=(x^{2^k+1}+xy^{2^k}+y^{2^k+1},x^{2^{2k}+1}+x^{2^{2k}}y+y^{2^{2k}+1})\] where $\gcd(3k,m)=1$ and \[f_2(x,y)=(x^{2^k+1}+xy^{2^k}+y^{2^k+1},x^{2^{3k}}y+xy^{2^{3k}}),\] where $\gcd(3k,m)=1$ and $m$ is odd. Then $f_1$ and $f_2$ are almost-$3$-to-$1$ APN maps. \end{theorem} \begin{proof} The APN property of these maps (under the given conditions) was proven in \cite{farukapn}. For the rest, we check the conditions of Proposition~\ref{prop:apn_3to1}. The first condition is clearly satisfied in both cases. We start with $f_1$: Direct computations show that \begin{align*} f_1(y,x+y)&= \\ & (y^{2^k+1}+y(x+y)^{2^k}+(x+y)^{2^k+1},y^{2^{2k}+1}+y^{2^{2k}}(x+y)+(x+y)^{2^{2k}+1}) \\ &=(x^{2^k+1}+xy^{2^k}+y^{2^k+1},x^{2^{2k}+1}+x^{2^{2k}}y+y^{2^{2k}+1})=f_1(x,y). \end{align*} A similar calculation yields $f_1(x,y)=f_1(x+y,x)$. Thus every $y \in \Image(f_1)\setminus \{0\}$ has at least three preimages. Both conditions of Proposition~\ref{prop:apn_3to1} are satisfied for $f_1$, completing the proof for $f_1$. Consider now $f_2$: Similarly to the first case, we have \begin{align*} f_2(y,x+y)&=(y^{2^k+1}+y(x+y)^{2^k}+(x+y)^{2^k+1},y^{2^{3k}}(x+y)+y(x+y)^{2^{3k}}) \\ &=(x^{2^k+1}+xy^{2^k}+y^{2^k+1},x^{2^{3k}}y+xy^{2^{3k}})=f_2(x,y), \end{align*} and similarly $f_2(x,y)=f_2(x+y,x)$, so both conditions of Proposition~\ref{prop:apn_3to1} are again satisfied. \qed \end{proof} A further large family of inequivalent almost-$3$-to-$1$ APN maps has been found by Faruk G\"olo\u{g}lu and the first author, and will be published soon.} \\ \noindent A natural question is whether every quadratic APN map of $\F_{2^n}$ with even $n$ is EA-equivalent to an almost-$3$-to-$1$ map. The answer is negative. 
By Theorem~\ref{thm:walsh_spectrum_three_to_one}, all almost-3-to-1 quadratic APN maps have the classical Walsh spectrum. Hence the EA-classes of quadratic APN maps with non-classical Walsh spectra do not contain an almost-$3$-to-$1$ map. \section{3-divisible APN maps} Observe that by Corollary~\ref{th_iff}, every APN DO polynomial $f'(x^3)$ on $\F_{2^n}$, $n$ even, is an example with the preimage distribution described in Case 1. of Theorem~\ref{thm:APN_Lower_Bound}. Prominent examples of such APN maps are $x\mapsto x^3$ and $x\mapsto x^3 +\Tr(x^9)$. These maps are APN for any $n$. If $n$ is odd, then $x\mapsto x^3$ is a permutation and $x\mapsto x^3 +\Tr(x^9)$ is 2-to-1, as we will see later in this section. Next we present an interesting observation which could be helpful for performing numerical searches as well as theoretical studies of 3-divisible APN DO polynomials. In particular it could be used for classifying exceptional APN 3-divisible DO polynomials. \begin{theorem}\label{th_subfield_permutation} Let $n=2^im$ with $i\geq 1$ and $m\geq 3$ odd. Suppose $f:\F_{2^n}\to \F_{2^n}$ is a 3-divisible APN DO polynomial with coefficients in the subfield $\F_{2^m}$, that is, $f \in \F_{2^m}[x]$. Then $f$ is an APN permutation on the subfield $\F_{2^m}$. \end{theorem} \begin{proof} Since the coefficients of $f$ are from $\F_{2^m}$, it defines an APN map on this subfield. By Corollary~\ref{th_iff}, $f$ is almost-3-to-1 on $\F_{2^n}$. Moreover, $f(x)=f(\omega x)=f(\omega^2 x)$ for $\omega \in \F_4\setminus\F_2$. The statement now follows from the fact that $\F_4$ is not contained in $\F_{2^m}$. \qed \end{proof} The substitution of $x^3$ in a polynomial of shape $f'(x) = L_1(x) + L_2(x^3)$, where $L_1, L_2$ are linearized polynomials, results in a DO polynomial $f(x) = f'(x^3) = L_1(x^3) + L_2(x^9)$. Hence for even $n$ by Corollary~\ref{th_iff} such a map is APN if and only if it is almost-3-to-1.
In particular, any permutation of shape $L_1(x) + L_2(x^3)$ directly yields an APN DO polynomial if $n$ is even. Observe that $x^3$ and $ x^3+ \Tr(x^9)$ are of this type too. These and further APN DO polynomials $L_1(x^3) + L_2(x^9)$ are studied in \cite{budaghyan2009constructing,budaghyan2009construction}. Corollary~\ref{th_iff} suggests a unified approach for understanding such APN maps. Results from \cite{charpin2008class} can be used to construct and explain permutations of shape $f'(x) = L_1(x) + L_2(x^3)$. For example, Theorem 6 in \cite{charpin2008class} with $s=3$ and $L(x) = x^2+\alpha x$ yields the following family of APN 3-divisible DO polynomials. \begin{theorem}\label{th:apn_chk} Let $\alpha, \beta, \gamma$ be non-zero elements in $\F_{2^n}$ with $n$ even. Further let $\gamma \not\in \{ x^2+\alpha x ~|~ x \in \F_{2^n}\}$ and $\Tr(\beta \alpha)=1$. Then $$ f'(x) = x^2 + \alpha x +\gamma \Tr(\alpha^{-3}x^3+\beta x) $$ is a permutation on $\F_{2^n}$ and consequently $$ f(x) = f'(x^3) = x^6 + \alpha x^3 +\gamma \Tr(\alpha^{-3}x^9+\beta x^3) $$ is APN. \end{theorem} \qed An APN map constructed in Theorem~\ref{th:apn_chk} is affine equivalent to one of the form $x^3 + \alpha\Tr(\alpha^{-3}x^9)$ studied in \cite{budaghyan2009construction}. Indeed, the map $f'(x)$ can be reduced to $$ f'(x) = x^2 + \alpha x + \gamma \Tr(\beta x) + \gamma \Tr(\alpha^{-3}x^3) = L_1(x)+\gamma \Tr(\alpha^{-3}x^3), $$ where $L_1$ is linear over $\F_2$. By \cite[Theorem 5]{charpin2008class}, the map $L_1$ is bijective. Then $L_1^{-1}$ composed with $f'(x)$ yields $$ L_1^{-1} \circ f'(x) = x + \Tr(\alpha^{-3}x^3)L_1^{-1}(\gamma), $$ and thus $$ L_1^{-1} \circ f'(x^3) = x^3 + \Tr(\alpha^{-3}x^9)L_1^{-1}(\gamma) = x^3+\alpha \Tr(\alpha^{-3}x^9), $$ where for the last equality we used $L_1(\alpha) = \gamma$.
Note that this reduction remains true for $n$ odd too, showing that the examples of Theorem~\ref{th:apn_chk} are APN for any $n$.\\ As we mentioned earlier, the polynomials $x^3$ and $x^3+\Tr(x^9)$ define APN maps on $\F_{2^n}$ for every $n\geq 1$. For $n$ odd, the first map is a permutation and the second is 2-to-1, as the following result shows. \begin{proposition} Let $n$ be odd and $a \in \F_{2^n}$ non-zero. Then the APN map $x \mapsto x^3 + a^{-1}\Tr(a^3x^9)$ is 2-to-1. \end{proposition} \begin{proof} This follows from Theorem 3 in \cite{charpin-kyureg-ffa}, since $a^{-1}$ is a $1$-linear structure of $\Tr(a^3x^3)$ for $n$ odd, which can be easily checked by direct calculations. \qed \end{proof} We believe that if $n$ is odd, then $2^{n-1}$ is the minimal possible image size of an APN DO polynomial of shape $L_1(x^3) +L_2(x^9)$. However, an analog of Corollary~\ref{th_iff} is not true for the size $2^{n-1}$. There are such DO polynomials with image size $2^{n-1}$ which are not APN. For example, if $n$ is odd, the DO polynomial $(x^2 + x)\circ x^3$ is 2-to-1, but not APN.\\ \noindent Next we observe that for $n$ odd there are APN DO polynomials of shape $L_1(x^3)+L_2(x^9)$ which are neither bijective nor have image size $2^{n-1}$. For a divisor $t$ of $n$ we denote by {$\Tr_{2^n/2^t}(x)$} the trace map from $\F_{2^n}$ into the subfield $\F_{2^t}$, that is {\[ \Tr_{2^n/2^t}(x) = \sum_{k=0}^{n/t-1}x^{\left({2^t}\right)^k}. \]} In \cite{budaghyan2009construction}, it is shown that for any non-zero $a \in \F_{2^{3m}}$, $m$ arbitrary, the DO polynomials $$ f_1(x) = f'_1(x^3) = x^3+a^{-1}\Tr_{2^{3m}/2^3}(a^3x^9+a^6x^{18}) $$ and $$ f_2(x) = f'_2(x^3) = x^3+a^{-1}(\Tr_{2^{3m}/2^3}(a^3x^9+a^6x^{18}))^2 $$ define APN maps on $\F_{2^{3m}}$. Moreover, the maps $f'_1$ and $f'_2$ are bijective when $m$ is even. For $m$ odd, the image sets of these maps contain $5\cdot 2^{3m-3}$ elements, as Propositions~\ref{prop:image-new1} and \ref{prop:image-new2} show.
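The smallest instances of both observations can be verified by brute force. A Python sketch (a sanity check only; helper names are ours) checks on $\F_8$ that $x^3+\Tr(x^9)$ is 2-to-1 (the 2-to-1 proposition above with $a=1$, $n=3$) and that for $m=1$, $a=1$, where $\Tr_{2^3/2^3}$ is the identity, the map $x^3+x^9+x^{18}$ has $M_1=2^{3m-1}=4$, $M_4=2^{3m-3}=1$ and image size $5\cdot 2^{3m-3}=5$:

```python
from collections import Counter

N_BITS, MOD = 3, 0b1011            # F_8 = F_2[x]/(x^3 + x + 1)
Q = 1 << N_BITS

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    for i in range(2 * N_BITS - 2, N_BITS - 1, -1):
        if (r >> i) & 1:
            r ^= MOD << (i - N_BITS)
    return r

def gf_pow(x, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, x)
    return r

def tr(x):                          # absolute trace: x + x^2 + x^4
    t, y = 0, x
    for _ in range(N_BITS):
        t ^= y
        y = gf_mul(y, y)
    return t

# the 2-to-1 map of the proposition above, with a = 1
g = lambda x: gf_pow(x, 3) ^ tr(gf_pow(x, 9))
omega_g = Counter(g(x) for x in range(Q))

# the map f_1 with m = 1, a = 1: Tr_{2^3/2^3} is the identity
h = lambda x: gf_pow(x, 3) ^ gf_pow(x, 9) ^ gf_pow(x, 18)
omega_h = Counter(h(x) for x in range(Q))
```

Both preimage distributions come out exactly as stated.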
\begin{proposition}\label{prop:image-new1} Let $m$ be an odd integer and $a\in\F_{2^{3m}}^*$ be arbitrary. Then the APN map $f:\F_{2^{3m}} \to \F_{2^{3m}}$ given by \[ f(x) = x^3+a^{-1}\Tr_{2^{3m}/2^3}(a^3x^9+a^6x^{18}) \] satisfies $M_1(f) = 2^{3m-1}$, $M_4(f)=2^{3m-3}$. In particular $|\Image(f)|=5\cdot 2^{3m-3}$. \end{proposition} \begin{proof} We consider the equation $f(x)=f(y)$ on $\F_{2^{3m}}$. Since $x\mapsto x^3$ is a permutation on $\F_{2^n}$ with $n$ odd, it is sufficient to look at $f'(x)=f'(y)$, where \[ f'(x) = x+a^{-1}\Tr_{2^{3m}/2^3}(a^3x^3+a^6x^6), \] and $f(x)=f'(x^3)$. Suppose $f'(x)=f'(y)$. Then \[ x+a^{-1}\Tr_{2^{3m}/2^3}(a^3x^3+a^6x^6) = y+a^{-1}\Tr_{2^{3m}/2^3}(a^3y^3+a^6y^6) \] or equivalently, \begin{equation}\label{eq:x+y} \Tr_{2^{3m}/2^3}(a^3x^3+a^6x^6 + a^3y^3+a^6y^6) = a(x+y). \end{equation} In particular, $f'(x) = f'(y)$ only if $a(x+y) \in \F_8$. Let $z=x+y$. Taking the absolute trace on both sides of (\ref{eq:x+y}), we get \[ \Tr_{2^3/2}(az)=\Tr_{2^{3m}/2}(a^3x^3+(a^3x^3)^2 + a^3(x+z)^3+(a^3(x+z)^3)^2) = 0. \] Let $\beta\in\F_8$ with $\beta^3=\beta+1$, then \[ \Tr_{2^3/2}(\beta)=\Tr_{2^3/2}(\beta^2)=\Tr_{2^3/2}(\beta^4)=0, \] so that \[ az\in\{0, \beta, \beta^2, \beta^4\}. \] If $az=0$ we have $z=0$ and $x=y$. So let $z=a^{-1}\beta^k$ with $k\in\{1,2,4\}$. Note that $x\mapsto x^k$ is a linear permutation on $\F_{2^{3m}}$. We have \begin{align*} & a^3x^3+a^6x^6+a^3(x+z)^3+a^6(x+z)^6 \\ &=a^3x^3+a^6x^6+a^3(x^3+x^2z+xz^2+z^3)+a^6(x^6+x^4z^2+x^2z^4+z^6)\\ &=a^3x^2z+a^3xz^2+a^3z^3+a^6x^4z^2+a^6x^2z^4+a^6z^6 \\ &=a^2x^2\beta^k+ax\beta^{2k}+\beta^{3k}+a^4x^4\beta^{2k}+a^2x^2\beta^{4k}+\beta^{6k} \\ &=(\beta^{3k}+\beta^{6k})+ax\beta^{2k}+a^2x^2(\beta^k+\beta^{4k})+a^4x^4\beta^{2k}. \end{align*} As $\beta^3=\beta+1$, we get $\beta^6=\beta^2+1$ and therefore $\beta^3+\beta^6=\beta^2+\beta=\beta^4$. 
Further $\beta+\beta^4=\beta^2$, so that \begin{equation}\label{eq2imageproof} a^3x^3+a^6x^6+a^3(x+z)^3+a^6(x+z)^6 =\beta^{4k}+\beta^{2k}(ax+(ax)^2+(ax)^4). \end{equation} We now need to ensure that \eqref{eq:x+y} holds. Using \eqref{eq2imageproof} and $m$ odd this turns into \begin{align*} \beta^k &=\Tr_{2^{3m}/2^3}(\beta^{4k}+\beta^{2k}(ax+(ax)^2+(ax)^4)) \\ &=\beta^{4k}+\beta^{2k}\Tr_{2^{3m}/2^3}(ax+(ax)^2+(ax)^4) = \beta^{4k}+\beta^{2k}\Tr_{2^{3m}/2}(ax) \\ &=(\beta^4+\beta^2\Tr_{2^{3m}/2}(ax))^k. \end{align*} Using again that $x\mapsto x^k$ is a permutation and that $\beta^4=\beta^2+\beta$ we obtain \[ \beta^2(\Tr_{2^{3m}/2}(ax)+1)=0, \] which has a solution $x$ if and only if $\Tr_{2^{3m}/2}(ax)=1$. Concluding, we have $f'(x)=f'(x+z)$ if and only if $\Tr_{2^{3m}/2}(ax)=1$ and $z\in\{0, a^{-1}\beta, a^{-1}\beta^2, a^{-1}\beta^4\}$. Since there are $2^{3m-1}$ elements $x$ with $\Tr_{2^{3m}/2}(ax)=1$, we get $M_4(f)=2^{3m-3}$. The map $f'$ is injective on the hyperplane $\{x \in \F_{2^{3m}} ~|~ \Tr_{2^{3m}/2}(ax)=0 \}$, yielding $M_1(f)=2^{3m-1}$. \qed \end{proof} The proof of the next result is almost identical to that of Proposition~\ref{prop:image-new1}: \begin{proposition} \label{prop:image-new2} Let $m$ be an odd integer and $a\in\F_{2^{3m}}^*$ be arbitrary. Then the APN map $f:\F_{2^{3m}} \to \F_{2^{3m}}$ given by \[ f(x) = x^3+a^{-1}\Tr_{2^{3m}/2^3}(a^3x^9+a^6x^{18})^2 \] satisfies $M_1(f) = 2^{3m-1}$, $M_4(f)=2^{3m-3}$. In particular $|\Image(f)|=5\cdot 2^{3m-3}$. \end{proposition} \section{Relations between the image sets of APN maps and their Walsh spectrum} Let $f \colon \F_{2^n} \rightarrow \F_{2^n}$. The Boolean functions $f_\lambda(x) = \Tr(\lambda f(x))$ with $\lambda \in \F_{2^n}^*$ are called the component functions of $f$. We call $f_\lambda$ a balanced component of $f$ if it takes the values $0$ and $1$ equally often, that is, both $2^{n-1}$ times.
The Walsh transform of $f$ is defined by \[W_f(b,a)=\sum_{x \in \F_{2^n}}(-1)^{\Tr(bf(x)+ax)} \in \mathbb{Z},\] where $a,b \in \F_{2^n}, b\ne 0$. The multiset $\{*W_f(b,a) \colon b \in \F_{2^n}^*, a \in \F_{2^n}*\}$ is called the Walsh spectrum of $f$ and $\{*|W_f(b,a)| \colon b \in \F_{2^n}^*, a \in \F_{2^n}*\}$ is called the extended Walsh spectrum of $f$. \begin{definition} Let $f \colon \F_{2^n} \rightarrow \F_{2^n}$, $n$ even. We say that the map $f$ has the classical Walsh spectrum if $$|W_f(b,a)| \in \{0,2^{n/2},2^{(n+2)/2}\}$$ for any $b \in \F_{2^n}^*, a \in \F_{2^n}$ and the extended Walsh spectrum of $f$ contains the values $0,2^{n/2},2^{(n+2)/2}$ precisely $(2^n-1)\cdot 2^{n-2}$-times, $(2/3)(2^n-1)(2^n)$-times and $(1/3)(2^n-1)(2^{n-2})$-times, respectively. \end{definition} Most of the known APN maps in even dimension have the classical Walsh spectrum, including the monomial APN maps with Gold and Kasami exponents. There are APN maps with non-classical Walsh spectra, see e.g. \cite{beierle-leander2020} for such examples. \begin{definition} Let $f \colon \F_{2^n} \rightarrow \F_{2^n}$. A component function $f_\lambda$ is called plateaued with amplitude $t$ for an integer $t \geq 0$ if $W_f(\lambda,a) \in \{0,\pm 2^{\frac{n+t}{2}}\}$ for all $a \in \F_{2^n}$. If all component functions of $f$ are plateaued, $f$ is called component-wise plateaued. For $n$ odd the map $f$ is called almost bent if all its components $f_\lambda$ are plateaued with $t=1$. For $n$ even, a plateaued component with $t=0$ is called a bent component. \end{definition} It is well known that an almost bent map is necessarily APN, and there are APN maps, which are not almost bent. Quadratic maps are always component-wise plateaued. Also a crooked map, which is defined by the property that all its differential sets are affine hyperplanes, is component-wise plateaued \cite{bending-crooked,kyureg-crooked}. 
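For a concrete instance of the classical Walsh spectrum, one can tabulate $W_f$ for the Gold map $x\mapsto x^3$ on $\F_{2^4}$ by brute force; the counts defined above specialize to $60$, $160$ and $20$ occurrences of $|W_f|=0$, $2^{n/2}=4$ and $2^{(n+2)/2}=8$, respectively. The sketch below is purely illustrative; the representation of $\F_{2^4}$ modulo the primitive polynomial $x^4+x+1$ is our choice.

```python
from collections import Counter

MOD, N = 0b10011, 4  # GF(2^4) as binary polynomials modulo x^4 + x + 1

def mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> N:
            a ^= MOD
        b >>= 1
    return r

def tr(x):
    # Absolute trace Tr_{2^4/2}(x) = x + x^2 + x^4 + x^8; the result is 0 or 1.
    t = x
    for _ in range(N - 1):
        x = mul(x, x)
        t ^= x
    return t

f = [mul(mul(x, x), x) for x in range(1 << N)]  # the cube map

# Extended Walsh spectrum: |W_f(b, a)| over all b != 0 and all a.
spectrum = Counter()
for b in range(1, 1 << N):
    for a in range(1 << N):
        w = sum((-1) ** (tr(mul(b, f[x])) ^ tr(mul(a, x))) for x in range(1 << N))
        spectrum[abs(w)] += 1
print(dict(spectrum))
```

The ten $b$'s with $|W_f(b,a)|=4$ for every $a$ are exactly the $(2/3)(2^4-1)$ bent components, and the remaining five components are plateaued with amplitude $t=2$.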
Properties of component functions of crooked maps are studied in \cite{charpin-crooked}. Further examples of component-wise plateaued maps can be found in \cite{carletplateaued}.\\ \noindent The next result gives a sufficient condition for a map to be APN in terms of its component functions. \begin{proposition} [{\cite[Corollary 3]{berger2006almost}}]\label{prop:bentcomponents} Let $n$ be even and $f:\F_{2^n}\to \F_{2^n}$ be a component-wise plateaued map. If $f$ has $(2/3)(2^n-1)$ bent components and $(1/3)(2^n-1)$ components with amplitude $t=2$ then $f$ is APN. \end{proposition} The Parseval equation states $$ \sum_{a\in\F_{2^n}}W_f(b,a)^2 =2^{2n} $$ for any $b \in \F_{2^n}$. It implies that a component-wise plateaued map $f$ with $(2/3)(2^n-1)$ bent components and $(1/3)(2^n-1)$ plateaued components with amplitude $t=2$ always has the classical Walsh spectrum.\\ { To prove the next theorem we use the following well-known lemma. \begin{lemma}\label{lem:gcds} Let $i,r \in \mathbb{N}$ be arbitrary. Then \begin{itemize} \item $\gcd(2^i-1,2^r+1) = \begin{cases} 2^{\gcd(i,r)} +1 & \text{if } i/\gcd(i,r) \text{ is even} \\ 1 & \text{else.} \end{cases}$ \item $\gcd(2^i+1,2^r+1) = \begin{cases} 2^{\gcd(i,r)} +1 & \text{if } i/\gcd(i,r) \text{ and } r/\gcd(i,r)\text{ are odd} \\ 1 & \text{else.} \end{cases}$ \end{itemize} \end{lemma} \qed The next theorem shows that almost-$(2^r+1)$-to-$1$ component-wise plateaued maps have a very special Walsh spectrum. The key step in its proof is the fact that the components of such maps have weights divisible by ${2^r+1}$.
This, together with Lemma \ref{lem:gcds} and some basic identities for Walsh values, allows us to control the Walsh spectrum of $f$.} {Our proof is an adaptation of the proof of Theorem 2 from \cite{carlet2016quadratic}, where $(p^r+1)$-divisible quadratic maps over finite fields of arbitrary characteristic $p$ are considered.} { \begin{theorem}\label{thm:k-to-1} Let $n=2rm$ and $f:\F_{2^n}\to \F_{2^n}$ be an almost-$(2^r+1)$-to-$1$ component-wise plateaued map with $f(0)=0$ and $\omega(0)=1$, i.e.\ $0$ is the unique element with precisely one preimage. Then $f$ has $(2^r/(2^r+1))\cdot(2^n-1)$ bent components and $(2^n-1)/(2^r+1)$ components with amplitude $t=2r$. Moreover, \[W_f(b,0) \in \{(-1)^{m}2^{rm},(-1)^{m+1}2^{r(m+1)}\}\] for any $b \in \F_{2^n}^*$. \end{theorem} \begin{proof} Let $b\in \F_{2^n}^*$ be arbitrary. Since $f$ is component-wise plateaued, $W_f(b,0)$ takes the values $0$ or $\pm 2^{rm+s}$ with $s\geq 0$. Note that since $f$ is almost-$(2^r+1)$-to-$1$ with $f(0)=0$ and $\omega(0)=1$, the value $|\{x \in \F_{2^n}^* \colon \Tr(bf(x))=c\}|$ is divisible by $2^r+1$ for any $c \in \F_2$. Thus \begin{eqnarray*} W_f(b,0) &=& |\{x \in \F_{2^n}^* \colon \Tr(bf(x))=0\}|-|\{x \in \F_{2^n}^* \colon \Tr(bf(x))=1\}|+1 \\ & \equiv & 1 \pmod {2^r+1}. \end{eqnarray*} This shows in particular that $W_f(b,0) \neq 0$. Further, by Lemma~\ref{lem:gcds}, $2^{rm+s} \equiv 1 \pmod{2^r+1}$ if and only if $r|s$ and $m+(s/r)$ is even. Similarly, $-2^{rm+s} \equiv 1 \pmod{2^r+1}$ if and only if $r|s$ and $m+(s/r)$ is odd. Hence $W_f(b,0) =(-1)^{m+k}2^{r(m+k)}$ for a suitable $k\geq 0$. Define $$N_k=|\{b \in \F_{2^n}^* \colon |W_f(b,0)|=2^{r(m+k)}\}|$$ for an integer $k \geq 0$.
Since $f(x)=0$ holds only for $x=0$, we have \[\sum_{b\in \F_{2^n}} W_f(b,0) = 2^n,\] which directly implies \[\sum_{b\in \F_{2^n}^*} W_f(b,0) = 0.\] Substituting the possible values for $W_f(b,0)$ in the above equation, we get \begin{equation*} 2^{rm}(N_0-2^{r}N_1+2^{2r}N_2-2^{3r}N_3+\dots )= 0, \end{equation*} implying \begin{equation} \label{eq:1} N_0-2^{r}N_1+2^{2r}N_2-2^{3r}N_3+\dots= 0. \end{equation} Now since $f$ is almost-$(2^r+1)$-to-$1$, for every fixed non-zero $x$ there are exactly $(2^r+1)$ elements $a \in \F_{2^n}$ satisfying $f(x)+f(x+a)=0$, and for $x=0$ only $a=0$ solves it. Thus we get \begin{align*} \sum_{b \in \F_{2^n}} (W_f(b,0))^2 &= \sum_{b \in \F_{2^n}} \sum_{x \in \F_{2^n}}\sum_{a \in \F_{2^n}} (-1)^{\Tr(b(f(x)+f(x+a)))} \\ &=2^n((2^r+1) \cdot (2^n-1)+1)=2^{2n+r}+2^{2n}-2^{n+r}. \end{align*} In particular, \begin{equation*} \sum_{b \in \F_{2^n}^*} (W_f(b,0))^2 = 2^{2n+r}-2^{n+r}. \end{equation*} Again, substituting the possible values for $W_f(b,0)$, we get \begin{equation*} 2^n(N_0+2^{2r}N_1+2^{4r}N_2+2^{6r}N_3+ \dots )= 2^{2n+r}-2^{n+r}, \end{equation*} which immediately leads to \begin{equation} \label{eq:2} N_0+2^{2r}N_1+2^{4r}N_2+2^{6r}N_3+ \dots = 2^{n+r}-2^r. \end{equation} Clearly, we also have \begin{equation}\label{eq:3} N_0+N_1+N_2+\dots = 2^n-1. \end{equation} Adding Eq.~\eqref{eq:1} $(2^r-1)$-times to Eq.~\eqref{eq:2}, we get \begin{equation} \label{eq:4} 2^rN_0+2^rN_1+(2^{2r}(2^r-1)+2^{4r})N_2+(2^{6r}-(2^r-1)2^{3r})N_3+\dots = 2^{n+r}-2^r. \end{equation} Observe that all coefficients in Eq.~\eqref{eq:4} are positive. Now, subtracting Eq.~\eqref{eq:3} $2^r$-times from Eq.~\eqref{eq:4} yields \begin{equation*} (2^{2r}(2^r-1)+2^{4r}-2^r)N_2+(2^{6r}-(2^r-1)2^{3r}-2^r)N_3+\dots = 0. \end{equation*} Here, all coefficients are again positive, so we conclude $N_2=N_3=\dots = 0$. From Eq.~\eqref{eq:1} and Eq.~\eqref{eq:3} we then immediately deduce that $N_0 =(2^r/(2^r+1))(2^n-1)$ and $N_1 = (2^n-1)/(2^r+1)$. 
\qed \end{proof} } Note that the conditions $f(0)=0$ and $\omega(0)=1$ are not restrictive when we consider the extended Walsh spectrum: Indeed, otherwise we consider $f(x+c)+d$ with suitable $c,d \in \F_{2^n}$, which is also component-wise plateaued and has the same extended Walsh spectrum as $f$: \begin{align*} W_{f(x+c)+d}(b,a) &= \sum_{x \in \F_{2^n}}(-1)^{\Tr(b(f(x+c)+d)+ax)} \\ &= (-1)^{\Tr(bd)}\sum_{x \in \F_{2^n}}(-1)^{\Tr(bf(x)+a(x+c))} \\ & = (-1)^{\Tr(bd+ac)}W_{f}(b,a). \end{align*} \\ \noindent { The two boundary cases $m=1$ and $r=1$ of Theorem~\ref{thm:k-to-1} lead to interesting extremal statements. For $m=1$, we get that a component-wise plateaued almost-$(2^{n/2}+1)$-to-$1$ map on $\F_{2^n}$ has $2^n-2^{n/2}$ bent components, which is the maximum number of bent components that a map on $\F_{2^n}$ can have~\cite{maxbent}. For $m=1$ the following characterization holds as well: \begin{proposition} Let $r\in \mathbb{N}$, $n=2r$ and $f:\F_{2^n}\rightarrow \F_{2^n}$ be an almost-$(2^r+1)$-to-$1$ map. Then $f$ is component-wise plateaued if and only if it has $2^n-2^{n/2}$ bent components. \end{proposition} \begin{proof} One direction is covered by Theorem~\ref{thm:k-to-1}. Assume that $f$ has $2^n-2^{n/2}$ bent components. As mentioned before, we can assume without loss of generality that $f(0)=0$ and $\omega(0)=1$. By the proof of Theorem~\ref{thm:k-to-1} we have $W_f(b,0) \equiv 1 \pmod{2^{n/2}+1}$ for all $b \in \F_{2^n}^*$. Hence if $b$ defines a bent component, then $W_f(b,0)=-2^{n/2}$. Thus \begin{align*} 0&=\sum_{b\in \F_{2^n}^*} W_f(b,0) \\ &=-(2^n-2^{n/2})2^{n/2}+\sum_{b \text{ not bent}} W_f(b,0), \end{align*} implying $\sum_{b \text{ not bent}} W_f(b,0)=2^{n}(2^{n/2}-1)$. The sum has $2^{n/2}-1$ terms and each term is at most $2^n$, so necessarily $W_f(b,0)=2^n$ for every $b \in\F_{2^n}^*$ that does not define a bent component.
Then, by Parseval's equation, $W_f(b,a)=0$ for these $b$ and every $a \in \F_{2^n}^*$, so these components are also plateaued. \qed \end{proof}} The case $r=1$ of Theorem~\ref{thm:k-to-1} shows that almost-3-to-1 component-wise plateaued maps are APN and they have the classical Walsh spectrum: \begin{theorem}\label{thm:walsh_spectrum_three_to_one} Let $n=2m$ and $f:\F_{2^n}\to \F_{2^n}$ be an almost-$3$-to-$1$ component-wise plateaued map. Then $f$ is an APN map with the classical Walsh spectrum. Moreover, if $f(0)=0$ and $\omega(0)=1$, i.e. $0$ is the only element with precisely one preimage, then \[W_f(b,0) \in \{(-1)^{m}2^m,(-1)^{m+1}2^{m+1}\}\] for any $b \in \F_{2^n}^*$. \end{theorem} \begin{proof} The result follows from Theorem~\ref{thm:k-to-1} for $r=1$ and Proposition~\ref{prop:bentcomponents}. \qed \end{proof} \noindent {In \cite[Corollaries 10 and 11]{carletplateaued} it is proven that if $n$ is even, then all almost-3-to-1 plateaued maps of $\F_{2^n}$ have the same Walsh spectrum as the cube function $x\mapsto x^3$. This is exactly the statement of Theorem \ref{thm:walsh_spectrum_three_to_one} as well. The precise Walsh spectrum of the cube function was determined by Carlitz~\cite{carlitzgold} via a refined evaluation of certain Gauss sums. The proof of Theorem \ref{thm:k-to-1} yields an elementary proof of Carlitz's result on the value of the cubic exponential sum $S(b)=\sum_{x \in \mathbb{F}_{2^n}} (-1)^{\Tr(bx^3)}$. The latter is the key step for obtaining the Walsh spectrum of the cube function in \cite{carlitzgold}.} \\ \noindent It is well known that quadratic (not necessarily APN) maps as well as crooked maps are component-wise plateaued. Hence Theorem~\ref{thm:walsh_spectrum_three_to_one} implies \begin{corollary}\label{cor:crooked-walsh} Let $n=2m$ and $f:\F_{2^n}\to \F_{2^n}$. \begin{itemize} \item[(a)] If $f$ is an almost-$3$-to-$1$ crooked map, then $f$ has the classical Walsh spectrum.
\item[(b)] If $f$ is an almost-$3$-to-$1$ quadratic map, then it is APN with the classical Walsh spectrum. \end{itemize} \end{corollary} Note that Corollaries~\ref{th_iff} and \ref{cor:crooked-walsh} confirm Conjecture 1 stated in \cite{villa2019apn}, that all APN maps of the form $f(x)=L_1(x^3)+L_2(x^9)$ in even dimension have the classical Walsh spectrum. Theorem~\ref{thm:walsh_spectrum_three_to_one} combined with Theorems~\ref{thm:zhoupott31} and \ref{apn-faruk} yields that the Zhou-Pott and G\"olo\u{g}lu APN maps have the classical Walsh spectrum. This is shown for the Zhou-Pott and further maps in \cite{anbar-walsh}, using Bezout's theorem on intersection points of two projective plane curves. Not all maps considered in \cite{anbar-walsh} are almost-3-to-1.\\ \noindent By Corollary \ref{cor:crooked-walsh} the EA-class of a quadratic APN map with non-classical Walsh spectrum does not contain an almost-3-to-1 map. The following related question is still open:\\ \noindent {{\bf Open Problem:} Let $n$ be even. Is there an APN DO map $f:\F_{2^n}\to \F_{2^n}$ with the classical Walsh spectrum such that $f+l$ is not almost-$3$-to-$1$ for any $\F_2$-linear map $l$ (equivalently, such that there is no almost-$3$-to-$1$ map in the EA-class of $f$)\,?} \\ \noindent Almost-$3$-to-$1$ APN maps with non-classical Walsh spectra exist; an example is the Dobbertin map $x \mapsto x^d$ on $\F_{2^n}$, where $n=5g$ with $g$ even (so $10\mid n$) and $d=2^{4g}+2^{3g}+2^{2g}+2^{g}-1$~\cite{ccddobbertin}.\\ \noindent We conclude this section with some observations on almost bent maps on $\F_{2^n}$ with $n$ odd. We use them in the next section to give an upper bound on the image set of such maps. The next lemma describes a direct connection between $N(f)$ and the number $N_0$ of balanced component functions of almost bent maps. \begin{lemma} \label{lem:AB} Let $n$ be odd and $f:\F_{2^n} \to \F_{2^n}$ be almost bent.
Set \begin{align*} N_0&=|\{b \in \F_{2^n}^* \colon W_f(b,0)=0\}|\\ N_+ &= |\{b \in \F_{2^n}^* \colon W_f(b,0)=2^{(n+1)/2}\}|\\ N_- &= |\{b \in \F_{2^n}^* \colon W_f(b,0)=-2^{(n+1)/2}\}|. \end{align*} Then these three values are determined by $N(f)$ and $\omega(0)$ in the following way: \begin{align*} N_0&=2^n-1+2^{n-1}-N(f)/2\\ N_+ &= N(f)/4 -2^{n-2}+2^{(n-3)/2}(\omega (0)-1)\\ N_- &= N(f)/4 -2^{n-2}-2^{(n-3)/2}(\omega (0)-1). \end{align*} \end{lemma} \begin{proof} Clearly, we have \begin{equation} \label{eq:ab_1} N_0+N_++N_- = 2^n-1. \end{equation} Further, we have \begin{align*} \sum_{b \in \F_{2^n}} (W_f(b,0))^2 &= \sum_{b \in \F_{2^n}} \sum_{x \in \F_{2^n}}\sum_{y \in \F_{2^n}} (-1)^{\Tr(b(f(x)+f(y)))} \\ &=2^n N(f), \end{align*} which implies \begin{equation*} \sum_{b \in \F_{2^n}^*} (W_f(b,0))^2 = 2^n N(f) - 2^{2n}. \end{equation*} Rewriting this equation yields \begin{equation*} 2^{n+1}(N_++N_-) = 2^n N(f) - 2^{2n} \end{equation*} or, equivalently, \begin{equation} N_++N_-= N(f)/2 - 2^{n-1}. \label{eq:ab_2} \end{equation} Moreover, we have \[\sum_{b\in \F_{2^n}} W_f(b,0) = 2^n\cdot \omega (0),\] which directly implies \[\sum_{b\in \F_{2^n}^*} W_f(b,0) = 2^n\cdot (\omega (0)-1).\] This yields \begin{equation*} 2^{(n+1)/2}(N_+-N_-) = 2^n\cdot (\omega (0)-1), \end{equation*} and thus \begin{equation}\label{eq:ab_3} N_+-N_- = 2^{(n-1)/2}\cdot (\omega (0)-1). \end{equation} Subtracting Eq.~\eqref{eq:ab_2} from Eq.~\eqref{eq:ab_1} yields \begin{equation*} N_0 = 2^n+2^{n-1}-N(f)/2-1. \end{equation*} Similarly, adding Eq.~\eqref{eq:ab_2} and Eq.~\eqref{eq:ab_3}, we get that $N(f)$ must be divisible by $4$ and that \begin{equation*} N_+ = N(f)/4-2^{n-2}+2^{(n-3)/2}(\omega (0)-1). \end{equation*} The value of $N_-$ then follows immediately from Eq.~\eqref{eq:ab_1}. \qed \end{proof} Lemma~\ref{lem:AB} directly implies \begin{corollary}\label{cor:AB} Let $n$ be odd and $f:\F_{2^n} \to \F_{2^n}$ be almost bent. Then \begin{itemize} \item[(a)] $N(f)$ is divisible by $4$.
\item[(b)] The number of balanced component functions of $f$ is odd. In particular, every almost bent function has at least one balanced component function. \item[(c)] $N(f) \leq 3\cdot 2^n-4$ and $f$ is not zero-difference 2-balanced. \item[(d)] {\[|\Image(f)| > \frac{2^n+1}{3}.\]} \end{itemize} \end{corollary} \begin{proof} Statement (a) holds since $N_+$ and $N_-$ in Lemma~\ref{lem:AB} are integers. Then (b) is a direct consequence of (a) and Lemma~\ref{lem:AB}. Using Corollary~\ref{cor:Nf} and (a), we get that $N(f) \leq 3\cdot 2^n-4$ and hence $f$ is not zero-difference 2-balanced. {Theorem \ref{thm:APN_Lower_Bound} together with (c) implies (d), since if the lower bound were attained then necessarily $N(f)=3\cdot 2^n-2$. } \qed \end{proof} \begin{remark} Recall that any crooked map is almost bent if $n$ is odd. Property (c) in Corollary~\ref{cor:AB} implies that at least one differential set of a crooked map on $\F_{2^n}$ with $n$ odd is the complement of a hyperplane. Equivalently, for $n$ odd there is no crooked map such that all its differential sets are hyperplanes. In contrast, if $n$ is odd there are bijective crooked maps, for which necessarily all differential sets are complements of hyperplanes. Interestingly, this property is the other way around if $n$ is even. Then crooked maps for which all differential sets are hyperplanes do exist (for instance, $x\mapsto x^3$ as observed in \cite{kyureg-crooked}). But there are no crooked maps with all their differential sets being complements of hyperplanes. The latter is a consequence of the non-existence of bijective crooked maps in even dimension \cite{kyureg-crooked}. \end{remark} \section{Upper bounds on the image sets of APN maps} In the previous sections we used the value $N(f)$ to obtain a lower bound for the image size of some special maps. In~\cite{coultersenger}, information on $N(f)$ was used to prove an \emph{upper} bound on the image size of maps, most notably for planar maps.
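Before turning to the upper bounds, the quantities of Lemma~\ref{lem:AB} can be illustrated on a small example outside the paper's argument: the Gold map $x\mapsto x^3$ on $\F_{2^5}$ is almost bent, and since $\gcd(3,2^5-1)=1$ it is a permutation, so $N(f)=2^5=32$ and $\omega(0)=1$. The lemma then forces $N_0=31$ (odd, as in Corollary~\ref{cor:AB}(b)) and $N_+=N_-=0$. The sketch below uses the primitive trinomial $x^5+x^2+1$, a choice of ours.

```python
from collections import Counter

MOD, N = 0b100101, 5  # GF(2^5) as binary polynomials modulo x^5 + x^2 + 1

def mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a >> N:
            a ^= MOD
        b >>= 1
    return r

def tr(x):
    # Absolute trace Tr_{2^5/2}(x); the result is 0 or 1.
    t = x
    for _ in range(N - 1):
        x = mul(x, x)
        t ^= x
    return t

q = 1 << N
f = [mul(mul(x, x), x) for x in range(q)]  # the cube map

# N(f) = #{(x, y) : f(x) = f(y)}; here f is bijective, so N(f) = 2^n.
Nf = sum(c * c for c in Counter(f).values())

# W_f(b, 0) for all b != 0; N_0 counts the balanced components.
walsh0 = [sum((-1) ** tr(mul(b, f[x])) for x in range(q)) for b in range(1, q)]
N0 = walsh0.count(0)
print(Nf, N0, len(set(f)))
```

Consistently with Corollary~\ref{cor:AB}, $N(f)$ is divisible by $4$, $N_0$ is odd, all values $W_f(b,0)$ lie in $\{0,\pm 2^{(n+1)/2}\}$, and the image size exceeds $(2^n+1)/3$.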
\begin{theorem}[{\cite[Theorem 2]{coultersenger}}] \label{thm:upperbound} Let $f\colon \F_{2^n} \rightarrow \F_{2^n}$. Then \[|\Image(f)| \leq 2^n-\frac{2N(f)-2^{n+1}}{1+\sqrt{4N(f)-2^{n+2}+1}}=2^n-\frac{1}{2}(\sqrt{4N(f)-2^{n+2}+1}-1). \] \end{theorem} \qed \medskip \noindent The equality \[\frac{2N(f)-2^{n+1}}{1+\sqrt{4N(f)-2^{n+2}+1}} = \frac{1}{2}(\sqrt{4N(f)-2^{n+2}+1}-1)\] is not mentioned in~\cite{coultersenger}, but it can easily be verified by expanding the fraction with $1-\sqrt{4N(f)-2^{n+2}+1}$.\\ If $n$ is even, Theorem \ref{thm:upperbound} implies in particular an upper bound on the image size of a map depending on the number of its bent components, as observed in Theorem \ref{thm:bentupperbound}. For odd $n$, Lemma~\ref{lem:AB} yields an upper bound for the image size of almost bent maps. Since almost bent maps, in contrast to planar ones, can be permutations, the bound is more involved. \begin{theorem} \label{thm:ABupperbound} Let $f \colon \F_{2^n} \rightarrow \F_{2^n}$ be an almost bent map. Set $k = \max\{\omega(a) | a\in\F_{2^n}\}$, i.e.\ there exists an element $c \in \F_{2^n}$ with $k$ preimages under $f$ and there is no element with more than $k$ preimages. Then \[|\Image(f)| \leq 2^n - \frac{k-1}{k}2^{(n+1)/2}.\] In particular, if $f$ is not a permutation, then \begin{equation} |\Image(f)| \leq 2^n - 2^{(n-1)/2}. \label{eq:ABupperbound} \end{equation} \end{theorem} \begin{proof} By Eqs.~\eqref{eq_rMr} and \eqref{eq_r2Mr}, \[N(f)-2^n = \sum_{r=1}^k r(r-1)M_r \leq k \sum_{r=1}^k (r-1)M_r,\] implying \begin{equation} \sum_{r=1}^k (r-1)M_r \geq \frac{N(f)-2^n}{k}. \label{eq:imagesizeupper} \end{equation} Set $f'(x) = f(x) - c$ with $\omega(c)=k$. Clearly, $f'$ is also almost bent and it satisfies $N(f) = N(f')$ and $|\Image(f)| = |\Image(f')|$, and additionally for $f'$ we have $\omega(0) =k$. We apply Lemma~\ref{lem:AB} to $f'$.
Then \[0 \leq N_- = N(f)/4 -2^{n-2}-(k-1)2^{(n-3)/2}, \] which leads to \[N(f) - 2^n \geq (k-1)2^{(n+1)/2}.\] Then, using Eq.~\eqref{eq:imagesizeupper}, \begin{align*} |\Image(f)| &= \sum_{r=1}^k M_r = \sum_{r=1}^k r M_r - \sum_{r=1}^k (r-1) M_r \\ &= 2^n - \sum_{r=1}^k (r-1) M_r \leq 2^n- \frac{N(f)-2^n}{k} \leq 2^n - \frac{k-1}{k}2^{(n+1)/2}. \end{align*} If $f$ is not a permutation, then $k>1$ and $\frac{k-1}{k}\geq 1/2$, completing the proof. \qed \end{proof} \begin{remark} \begin{enumerate} \item From Theorem~\ref{thm:ABupperbound}, it is clear that almost bent maps that satisfy the bound in \eqref{eq:ABupperbound} with equality must satisfy $\max\{\omega(a) | a\in\F_{2^n}\}=2$. For such a map we then necessarily have $M_1(f)= 2^n-2^{(n+1)/2}$ and $M_2(f)=2^{(n-1)/2}$. However we believe that the bound is not sharp. \item {The bound of Theorem~\ref{thm:ABupperbound} is similar in style to the well-known general upper bound on the image size of maps by Wan~\cite{wanbound}, stating that if $f \colon \F_{2^n} \rightarrow \F_{2^n}$ is not bijective then \[|\Image(f)| \leq 2^n-\frac{2^n-1}{d},\] where $d$ is the degree of $f$. Another bound similar to Wan's bound is: If $f \colon \F_{2^n} \rightarrow \F_{2^n}$ is not bijective and has index $l$ then \[|\Image(f)| \leq 2^n-\frac{2^n-1}{l},\] see \cite{wang-value-set} for more details and the definition of the index of maps. For almost bent maps with known small degree or index, these upper bounds are stronger than the one in Theorem~\ref{thm:ABupperbound}.} \end{enumerate} \end{remark} In even dimension, it is well known that maps with bent component functions cannot be permutations since bent functions are never balanced. Using Theorem~\ref{thm:upperbound}, we present an upper bound on the image size of a map depending on the number of bent component functions. 
This also yields an upper bound for the image size of component-wise plateaued APN maps in even dimension, since such maps always have many bent component functions. \begin{theorem} \label{thm:bentupperbound} Let $n$ be even and $f\colon \F_{2^n} \rightarrow \F_{2^n}$ be a map with $t$ bent component functions. Then $N(f) \geq t+2^n$ and \[|\Image(f)| \leq 2^n-\frac{1}{2}(\sqrt{4t+1}-1). \] \end{theorem} \begin{proof} We use again the relation \[2^n N(f) = \sum_{b \in \F_{2^n}} W_f(b,0)^2 = 2^{2n}+\sum_{b \in \F_{2^n}^*} W_f(b,0)^2.\] If $x \mapsto \Tr(bf(x))$ is bent, then $W_f(b,0)^2 = 2^n$, so \[2^nN(f) \geq 2^{2n}+t \cdot 2^n,\] implying $N(f) \geq t +2^n$. The rest follows from Theorem~\ref{thm:upperbound}. \qed \end{proof} Theorems~\ref{thm:ABupperbound} and~\ref{thm:bentupperbound} yield an upper bound on the image size of component-wise plateaued APN maps. \begin{theorem} Let $f:\F_{2^n} \to \F_{2^n}$ be a component-wise plateaued APN map, and non-bijective if $n$ is odd. Then \[|\Image(f)| \leq \begin{cases} 2^n - 2^{(n-1)/2} & n \text{ is odd,} \\ 2^n-\frac{1}{2}(\sqrt{\frac{8}{3}(2^n-1)+1}-1)< 2^n-\sqrt{\frac{2}{3}(2^n-1)}+1/2 & n \text{ is even}. \end{cases} \] \end{theorem} \begin{proof} The statement for $n$ odd follows from Theorem~\ref{thm:ABupperbound}, since every component-wise plateaued APN map is almost bent. The upper bound for $n$ even is a direct consequence of Theorem \ref{thm:bentupperbound} and the fact that a component-wise plateaued APN map has at least $(2/3)(2^n-1)$ bent component functions \cite[Corollary 3]{berger2006almost}. \qed \end{proof} \begin{acknowledgement} { We thank our colleagues for all the comments which helped us improve the presentation of this paper. Special thanks go to Zeying Wang for pointing out an inaccuracy in an earlier version of Theorem \ref{th:2.7} and consequently in Corollary \ref{cor:apn-1-2}. We thank Steven Wang for bringing references \cite{wanbound,wang-value-set} to our attention.
} \end{acknowledgement}
\section{Introduction}\label{sec:intro} The question of when the random graph~$G(n,p)$ becomes hamiltonian is well understood. P\'osa~\cite{Posa} and Korshunov~\cite{Kor76, Kor77} proved that the hamiltonicity threshold is $\log n/n$, Koml\'os and Szemer\'edi~\cite{KomSzem} determined an exact formula for the probability of the existence of a Hamilton cycle, and Bollob\'as~\cite{Boll} established an even more powerful hitting time result. The first polynomial time randomised algorithms for finding Hamilton cycles in $G(n,p)$ were developed by Angluin and Valiant~\cite{AngVal} and Shamir~\cite{Shamir}. Finally, Bollob\'as, Fenner and Frieze~\cite{BFF} gave a deterministic polynomial time algorithm whose success probability matches the probabilities established by Koml\'os and Szemer\'edi. For random hypergraphs much less is known. The random $r$-uniform hypergraph $\mathcal{G}^{(r)}(n,p)$ on vertex set~$[n]$ is generated by including each hyperedge from $\binom{[n]}{r}$ independently with probability $p=p(n)$. First, Frieze~\cite{Frieze} considered loose Hamilton cycles in random $3$-uniform hy\-per\-graphs. The \emph{loose $r$-uniform cycle} on vertex set $[n]$, which requires $(r-1)\mid n$, has edges $\{i+1,\dots,i+r\}$ precisely for all $i=k(r-1)$ with $k\in\NATS$, where we calculate modulo~$n$. Frieze showed that the threshold for a loose Hamilton cycle in $\mathcal{G}^{(3)}(n,p)$ is $\Theta(\log n/n^2)$. Dudek and Frieze~\cite{DudFriLoose} extended this to $r$-uniform hypergraphs with $r\ge 4$, where the threshold is $\tilde{\Theta}(\log n/n^{r-1})$. Both results require that $n$ is divisible by $2(r-1)$ (a condition which was recently removed by Dudek, Frieze, Loh and Speiss~\cite{DFLS12}) and rely on the deep Johansson-Kahn-Vu theorem~\cite{JKV}, which makes their proofs non-constructive. Tight Hamilton cycles, on the other hand, were first considered in connection with packings.
The \emph{tight $r$-uniform cycle} on vertex set $[n]$ has edges $\{i+1,\dots,i+r\}$ for all $i$, calculated modulo~$n$. Frieze, Krivelevich and Loh~\cite{FriKriLoh} proved that if $p\gg (\log^{21}n/n)^{1/16}$ and~$4$ divides~$n$ then most edges of $\mathcal{G}^{(3)}(n,p)$ can be covered by edge disjoint tight Hamilton cycles. Further packing results were obtained by Frieze and Krivelevich~\cite{FriKri11} and by Bal and Frieze~\cite{BalFri12}, but the probability range is far from best possible. Subsequently, Dudek and Frieze~\cite{DudFriTight} used a second moment argument to show that the threshold for a tight Hamilton cycle in $\mathcal{G}^{(r)}(n,p)$ is sharp and equals $e/n$ for each $r\ge 4$; for $r=3$ they showed that $\mathcal{G}^{(3)}(n,p)$ contains a tight Hamilton cycle when $p=\omega(n)/n$ for any $\omega(n)$ that goes to infinity. Since their method is non-constructive, they asked for an algorithm to find a tight Hamilton cycle in a random hypergraph. In this paper we present a randomised algorithm for this problem when $p$ is slightly larger than in their result. \begin{theorem}\label{thm:main} For each integer $r\ge 3$ and $0<\eps<1/(4r)$ there is a randomised polynomial time algorithm which for any $n^{-1+\eps}<p\le 1$ a.a.s. finds a tight Hamilton cycle in the random $r$-uniform hypergraph $\mathcal{G}^{(r)}(n,p)$. \end{theorem} The probability referred to in Theorem~\ref{thm:main} is with respect to the random bits used by the algorithm as well as by $\mathcal{G}^{(r)}(n,p)$. The running time of the algorithm in the above theorem is polynomial in~$n$, where the degree of the polynomial depends on~$\eps$. \smallskip \paragraph{\bf Organisation.} We first provide some notation and a brief sketch of our proof, formulate the main lemmas and prove Theorem~\ref{thm:main} in Section~\ref{sec:proof}. In Sections~\ref{sec:connect} and~\ref{sec:reserve} we prove the main lemmas, and in Section~\ref{sec:conclude} we end with some remarks and open problems.
\section{Lemmas and proof of Theorem~\ref{thm:main}} \label{sec:proof} \subsection{Notation} An $s$-\emph{tuple} $(u_1,\dots,u_s)$ of vertices is an ordered set of vertices. We often denote tuples by bold symbols, and occasionally also omit the brackets and write $\tpl{u}=u_1,\dots,u_s$. Additionally, we may also use a tuple as a set and write for example, if~$S$ is a set, $S\cup\tpl{u}:=S\cup\{u_i\colon i\in[s]\}$. The \emph{reverse} of the $s$-tuple $\tpl{u}$ is the $s$-tuple $(u_s,\dots,u_1)$. In an $r$-uniform hypergraph~$\mathcal{G}$ the tuple $P=(u_1,\dots,u_\ell)$ forms a \emph{tight path} if $\{u_{i+1},\dots,u_{i+r}\}$ is an edge for every $0\le i\le \ell-r$. For any $s\in[\ell]$ we say that~$P$ \emph{starts} with the $s$-tuple $(u_1,\dots,u_s)=:\tpl{v}$ and \emph{ends} with the $s$-tuple $(u_{\ell-(s-1)},\dots,u_\ell)=:\tpl{w}$. We also call~$\tpl{v}$ the \emph{start $s$-tuple} of $P$, $\tpl{w}$ the \emph{end $s$-tuple} of $P$, and~$P$ a $\tpl{v}-\tpl{w}$ path. The \emph{interior} of~$P$ is formed by all its vertices but its start and end $(r-1)$-tuples. Note that the interior of $P$ is not empty if and only if $\ell>2(r-1)$. For a hypergraph~$\mathcal{H}$ we define the \emph{$1$-density} of~$\mathcal{H}$ to be $d^{(1)}(\mathcal{H}):=e(\mathcal{H})/\big( v(\mathcal{H}) -1\big)$ if $v(\mathcal{H})>1$, and $d^{(1)}(\mathcal{H}):=0$ if $v(\mathcal{H})=1$. We set \begin{equation*} m^{(1)}(\mathcal{H}):=\max\{d^{(1)}(\mathcal{H}') \colon \mathcal{H}'\subset \mathcal{H}\} \,. \end{equation*} We denote the $r$-uniform tight cycle on~$\ell$ vertices by~$\Crl$. Observe that $m^{(1)}(\Crl)=\ell/(\ell-1)$. \subsection{Outline of the proof} A simple greedy strategy shows that for $p=n^{\eps-1}$ it is easy to find a tight path (and similarly a tight cycle) in~$\mathcal{G}^{(r)}(n,p)$ which covers all but at most $n^{1-\frac12\eps}$ of its vertices. Incorporating these few remaining vertices is where the difficulty lies. 
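The greedy strategy just mentioned is straightforward to simulate. The following sketch is purely illustrative (it uses $r=3$ and small fixed parameters rather than the asymptotic regime, and all names are ours): it samples $\mathcal{G}^{(3)}(n,p)$ and extends a tight path as long as some unused vertex completes an edge with the last two path vertices.

```python
import random
from itertools import combinations

def greedy_tight_path(n, p, r=3, seed=0):
    """Sample G^(r)(n, p) and greedily grow a tight path until stuck."""
    rng = random.Random(seed)
    edges = {e for e in combinations(range(n), r) if rng.random() < p}
    start = min(edges)            # seed the path with an arbitrary (first) edge
    path, used = list(start), set(start)
    while True:
        tail = tuple(path[-(r - 1):])
        ext = next((v for v in range(n)
                    if v not in used and tuple(sorted(tail + (v,))) in edges), None)
        if ext is None:           # stuck: no unused vertex extends the path
            return path, edges
        path.append(ext)
        used.add(ext)

path, edges = greedy_tight_path(n=60, p=0.3)
# Every r consecutive vertices of the constructed path form an edge.
assert all(tuple(sorted(path[i:i + 3])) in edges for i in range(len(path) - 2))
print(len(path), "of 60 vertices covered")
```

In such simulations the path typically covers most vertices before getting stuck; the small leftover set is exactly the difficulty described above.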
To overcome this difficulty we apply the following strategy, which we call the \emph{reservoir method}. We first construct a tight path $P$ of length linear in $n$ which contains a vertex set $W^*$, called the \emph{reservoir}, such that for any $W\subseteq W^*$ there is a tight path on $V(P)\setminus W$ whose end $(r-1)$-tuples are the same as those of $P$. In a second step we use the mentioned greedy strategy to extend~$P$ to an almost spanning tight path $P'$, with a leftover set $L$. The advantage we have gained now is that we are permitted to reuse the vertices in $W^*$: we will show that, by using a subset~$W$ of vertices from~$W^*$ to incorporate the vertices from~$L$, we can extend the almost spanning tight path to a spanning tight cycle~$C$. More precisely, we shall delete~$W$ from~$P'$ (observe that, by construction of $P$, the hypergraph induced on $V(P)\setminus W$ contains a tight path with the same ends) and use precisely all vertices of~$W$ to connect the vertices of~$L$ to construct $C$. We remark that our method has similarities, in spirit, with the absorbing method for proving extremal results for large structures in dense hypergraphs (see, e.g., R\"odl, Ruci\'nski and Szemer\'edi~\cite{RRS}). The techniques used to deal with multi-round exposure in our algorithm are similar to those used by Frieze in~\cite{Fri88}. Moreover, a method very similar to ours was used independently by K\"uhn and Osthus~\cite{KOPosa} to find bounds on the threshold for the appearance of the square of a Hamilton cycle in a random graph. \subsection{Lemmas} We shall rely on the following lemmas. We state these lemmas together with an outline of how they are used, and then give the details of the proof of Theorem~\ref{thm:main}.
Our first lemma asserts that there are hypergraphs~$\mathcal{H}^*$ with density arbitrarily close to~$1$ which have a spanning tight path and a vertex~$w^*$ such that deleting~$w^*$ from~$\mathcal{H}^*$ leaves a spanning tight path with the same start and end $(r-1)$-tuples. \begin{lemma}[Reservoir lemma]\label{lem:reserve} For all $r\ge 2$ and $0<\eps<1/(6r)$, there exist an $r$-uniform hypergraph $\mathcal{H}^*=\mathcal{H}^*(r,\eps)$ on less than $16/\eps^2$ vertices, a vertex~$w^*$, and two disjoint $(r-1)$-tuples $\tpl{u}=(u_1,\ldots,u_{r-1})$ and $\tpl{v}=(v_1,\ldots,v_{r-1})$ such that \begin{enumerate}[label=\rom] \item\label{lem:reserve:1} $m^{(1)}(\mathcal{H}^*)\le 1+\eps$, \item\label{lem:reserve:2} $\mathcal{H}^*$ has a tight Hamilton $\tpl{u}-\tpl{v}$ path, and \item\label{lem:reserve:3} $\mathcal{H}^*-w^*$ has a tight Hamilton $\tpl{u}-\tpl{v}$ path. \end{enumerate} \end{lemma} We provide a proof of Lemma~\ref{lem:reserve} in Section~\ref{sec:reserve}. We also call the hypergraph~$\mathcal{H}^*$ asserted by this lemma the \emph{reservoir graph} and the vertex~$w^*$ the \emph{reservoir vertex}, since they will provide the reservoir mentioned in the outline, as follows. If we can find many disjoint copies of $\mathcal{H}^*$ in $\mathcal{G}^{(r)}(n,p)$, and if we can connect these copies of~$\mathcal{H}^*$ to form a tight path, then the set~$W^*$ of reservoir vertices~$w^*$ from these $\mathcal{H}^*$-copies forms such a reservoir. In order to find many disjoint $\mathcal{H}^*$-copies, we use the following standard theorem. \begin{theorem}[see, e.g., {\cite[Theorem 4.9]{JaLuRu:Book}}] \label{thm:disjcopy} For every $r$-uniform hy\-per\-graph~$\mathcal{H}$ there are constants $\nu>0$ and $C\in\NATS$ such that if $p\ge C n^{-1/m^{(1)}(\mathcal{H})}$, then $\Gr(n,p)$ a.a.s.\ contains $\nu n$ vertex disjoint copies of $\mathcal{H}$. \end{theorem} For connecting the $\mathcal{H}^*$-copies into a long tight path~$P$ we use the next lemma.
\begin{lemma}[Connection lemma]\label{lem:connect} Given $r\ge 3$, $0<\eps<1/(4r)$ and $\delta>0$, there exists $\eta>0$ such that there is a (deterministic) polynomial time algorithm~$\mathcal{A}$ which on inputs $\mathcal{G}=\mathcal{G}^{(r)}(n,p)$ with $p= n^{-1+\eps}$ a.a.s.\ does the following. Let $1\le k\le \eta n$, let $X$ be any subset of $[n]$ of size at least $\delta n$. Let $\tpl{u}^{(1)},\ldots,\tpl{u}^{(k)}$, $\tpl{v}^{(1)},\ldots,\tpl{v}^{(k)}$ be any $2k$ pairwise disjoint $(r-1)$-tuples in~$[n]$. Then~$\mathcal{A}$ finds in~$\mathcal{G}$ a collection of vertex disjoint tight paths $P_i$, $1\le i\le k$, of length at most $\ell:=(r-1)/\eps+2$, such that $P_i$ is a $\tpl{u}^{(i)}-\tpl{v}^{(i)}$ path all of whose interior vertices are in $X$. \end{lemma} We prove this lemma in Section~\ref{sec:connect}. In fact, we will also make use of this lemma after extending $P$ to a maximal tight path $P'$ in order to extend $P'$ (reusing vertices of the reservoir $W^*$) to cover the leftover vertices $L$. It is for this reason that we require the lemma to work with a set $X$ which can be quite small. \subsection{Proof of the main theorem} Our goal is to describe an algorithm which a.a.s.\ constructs a tight Hamilton cycle in the $r$-uniform random hypergraph $\Gr(n,q)$ in five steps. For convenience we replace $\Gr(n,p)$ in Theorem~\ref{thm:main} by $\Gr(n,q)$. We make use of a $5$-round exposure of the random hypergraph, that is, each of the five algorithm steps will individually a.a.s.\ succeed on an $r$-uniform random hypergraph with edge probability somewhat smaller than~$q$. % Observe, however, that for the algorithm the input graph is given at once, and not as the union of five graphs. Therefore, in a preprocessing step the algorithm will first split the (random) input hypergraph into five (independent random) hypergraphs. The only probabilistic component of our algorithm is in the preprocessing step. Our five algorithm steps will then be as follows. 
Firstly, we apply Theorem~\ref{thm:disjcopy} in order to find $cn$ vertex disjoint copies of the reservoir graph $\mathcal{H}^*$ from Lemma~\ref{lem:reserve}. Secondly, we use the connection lemma, Lemma~\ref{lem:connect}, to connect the $\mathcal{H}^*$ copies to a tight path $P$ of length $c'n$ which contains a set $W^*$ of linearly many reservoir vertices. Thirdly, we greedily extend $P$ until we get a tight path $P'$ on at least $n-n^{1-\eps'/2}$ vertices. In the fourth and fifth step we use~$W^*$ and Lemma~\ref{lem:connect} to connect the remaining vertices to the path constructed so far and to close the path into a cycle. For technical reasons it will be convenient to assume that the edge probability in each of the last four steps is exactly $q'=n^{-1+\eps'}$ for some $\eps'$. We therefore split our random input hypergraph into five independent random hypergraphs, of which the first has edge probability~$q''\ge q'$ and the remaining four have edge probability $q'$. \begin{proof}[Proof of Theorem~\ref{thm:main}] {\sl Constants:} Given $r\ge 3$ and $0<\eps<1/(4r)$, set $\eps':=\eps/2$. Suppose in the following that~$n$ is sufficiently large and define $q'=n^{-1+\eps'}$. Now let $q>n^{-1+\eps}$ be given and observe that $q\ge 5 q' \ge 1-(1-q')^5$. Finally, let $q''\in(0,1]$ be such that \begin{equation}\label{eq:main:p} 1-q=(1-q'')(1-q')^4 \end{equation} and note that since $q\ge 1-(1-q')^5$, we have $q''\ge q'$. Let $\eta_1>0$ be the constant given by Lemma~\ref{lem:connect} with input $r$, $\eps'$ and $\delta=1/2$. Let $\mathcal{H}^*=\mathcal{H}^*(r,\eps'/2)$ be the $r$-uniform reservoir hypergraph given by Lemma~\ref{lem:reserve} and $n^*:=v(\mathcal{H}^*)$. Let $\nu>0$ be the constant given by Theorem~\ref{thm:disjcopy} with input $\mathcal{H}^*$. We set \begin{equation}\label{eq:defc} c:=\min\Big(\frac{1}{2n^*},\frac{\nu}{n^*},\eta_1\Big)\,.
\end{equation} Finally, let $\eta_2>0$ and $\ell_2$ be the constants given by Lemma~\ref{lem:connect} with input $r$, $\eps'$ and $\delta=c/2$. \smallskip {\sl Preprocessing:} We shall use a randomised procedure to split the input graph~$\mathcal{G}$, which is distributed according to $\Gr(n,q)$, into five hypergraphs $\mathcal{G}_1,\ldots,\mathcal{G}_5$, such that $\mathcal{G}_1$ is distributed according to $\Gr(n,q'')$ and $\mathcal{G}_2,\dots,\mathcal{G}_5$ are distributed according to $\Gr(n,q')$, where the choice of parameters is possible by~\eqref{eq:main:p}. Moreover, these five random hypergraphs are mutually independent. Our randomised procedure takes a copy $\mathcal{G}$ of $\Gr(n,q)$ and colours its edges as follows. It colours each edge~$e$ of~$\mathcal{G}$ independently with a non-empty subset~$c$ of $[5]$ such that \begin{equation*} \Pr(e\text{ receives colour }c)=\begin{cases} q'^{|c|}(1-q')^{4-|c|}(1-q'')/q & \text{ if } 1\notin c\\ q'^{|c|-1}(1-q')^{5-|c|}q''/q& \text{ if } 1\in c \,. \end{cases} \end{equation*} Then, for each $i\in[5]$, we let $\mathcal{G}_i$ be the hypergraph with those edges whose colour contains~$i$. To justify that this randomised procedure has the desired effect, let us consider the following second random experiment. We take five independent random hypergraphs, $\mathcal{G}_1=\Gr(n,q'')$ and four copies $\mathcal{G}_2,\ldots,\mathcal{G}_5$ of $\Gr(n,q')$, and form an $r$-uniform hypergraph on $n$ vertices whose edge set is the union of the edge sets of $\mathcal{G}_1,\ldots,\mathcal{G}_5$, where each edge receives as its colour the subset of $[5]$ identifying those of $\mathcal{G}_1,\ldots,\mathcal{G}_5$ containing it. Observe that we simply obtain $\Gr(n,q)$ when we ignore the colours in this union. It is straightforward to check that the two experiments yield identical probability measures on the space of $n$-vertex coloured hypergraphs.
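Indeed, summing the colour probabilities over all non-empty $c\subseteq[5]$, and grouping according to whether $1\in c$, shows by the binomial theorem that they sum to one: \begin{equation*}\begin{split} \sum_{\emptyset\neq c\subseteq[5]}\Pr(e\text{ receives colour }c) &= \frac{q''}{q}\sum_{k=0}^{4}\binom4k q'^k(1-q')^{4-k} + \frac{1-q''}{q}\sum_{k=1}^{4}\binom4k q'^k(1-q')^{4-k} \\ &= \frac{q''}{q}+\frac{(1-q'')\big(1-(1-q')^4\big)}{q} = \frac{1-(1-q'')(1-q')^4}{q} = 1\,, \end{split}\end{equation*} where the last equality is precisely~\eqref{eq:main:p}. Moreover, the first sum shows that an edge~$e$ of~$\mathcal{G}$ receives a colour containing~$1$ with probability $q''/q$, so that $e$ lies in $\mathcal{G}_1$ with probability $q\cdot q''/q=q''$, as required.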
It follows that any algorithm which with some probability finds a tight Hamilton cycle when presented with the five hypergraphs $\mathcal{G}_i$ of the first experiment succeeds with the same probability when presented with five hypergraphs obtained from the second experiment. \smallskip {\sl Step 1:} The first main step of our algorithm finds $cn$ vertex disjoint copies of the reservoir graph $\mathcal{H}^*$ in $\mathcal{G}_1$. To this end we would like to apply Theorem~\ref{thm:disjcopy}, hence we need to check its preconditions. We require that $q''\ge Cn^{-1/m^{(1)}(\mathcal{H}^*)}$ for some large $C$. By Lemma~\ref{lem:reserve} we have $m^{(1)}(\mathcal{H}^*)\le 1+\frac12\eps'$, and $1/(1+\frac12\eps')>1-\eps'$. It follows that for all sufficiently large $n$ we have $q'=n^{-1+\eps'}\ge Cn^{-1/m^{(1)}(\mathcal{H}^*)}$, and so the same holds for $q''$ since $q''\ge q'$. By Theorem~\ref{thm:disjcopy} and~\eqref{eq:defc}, a.a.s.\ $\mathcal{G}_1$ contains at least $\nu n\ge n^*\cdot cn$ vertex disjoint copies of $\mathcal{H}^*$. Hence we can algorithmically \emph{find} a subset of at least $cn$ of them as follows. We search the vertex subsets of size~$n^*$ of~$\mathcal{G}_1$. Whenever we find a subset that induces a copy of $\mathcal{H}^*$ and does not share vertices with a previously chosen $\mathcal{H}^*$-copy, we choose it. Clearly, we can do this until we have chosen $cn$ vertex disjoint copies $\mathcal{H}_1,\ldots,\mathcal{H}_{cn}$ of~$\mathcal{H}^*$. This requires running time $O\big(n^{n^*}\big)$, where $n^*\le 256\eps^{-2}$ does not depend on $n$. \smallskip {\sl Step 2:} The second step consists of using $\mathcal{G}_2$ and Lemma~\ref{lem:connect} with input $r,\eps'$ and $\delta=1/2$ to connect the $cn$ vertex disjoint reservoir graphs into one tight path. Let $W^*$ consist of the $cn$ reservoir vertices, one in each of $\mathcal{H}_1,\ldots,\mathcal{H}_{cn}$. By~\eqref{eq:defc}, $\mathcal{H}_1,\ldots,\mathcal{H}_{cn}$ cover at most $n/2$ vertices.
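Indeed, both counting claims in this step are immediate from the choice of~$c$ in~\eqref{eq:defc}: \begin{equation*} n^*\cdot cn\le n^*\cdot\frac{\nu}{n^*}\cdot n=\nu n \qquad\text{and}\qquad n^*\cdot cn\le n^*\cdot\frac{1}{2n^*}\cdot n=\frac{n}{2}\,, \end{equation*} since $c\le\nu/n^*$ guarantees sufficiently many disjoint copies, while $c\le 1/(2n^*)$ bounds the total number of vertices covered by $\mathcal{H}_1,\ldots,\mathcal{H}_{cn}$.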
By Lemma~\ref{lem:connect} applied with $X=[n]\setminus\big(\bigcup_{i\in[cn]} V(\mathcal{H}_i)\big)$ there is a polynomial time algorithm which a.a.s.\ for each $1\le i\le cn-1$ finds a tight path in $\mathcal{G}_2$ connecting the end $(r-1)$-tuple of $\mathcal{H}_i$ with the start $(r-1)$-tuple of $\mathcal{H}_{i+1}$, where these tight paths are disjoint and have their interior in~$X$. This yields a tight path $P$ in $\mathcal{G}_1\cup\mathcal{G}_2$ containing all of the $\mathcal{H}_i$ with the following property. For any $W\subset W^*$, if we remove $W$ from $P$, then we obtain (using the additional edges of the $\mathcal{H}_i$) a tight path $P(W)$ whose start and end $(r-1)$-tuples are the same as those of $P$ (see Lemma~\ref{lem:reserve}\ref{lem:reserve:3}). \smallskip {\sl Step 3:} In the third step we use $\mathcal{G}_3$ to greedily extend~$P$ to a tight path $P'$ covering all but at most $n^{1-\frac12\eps'}$ vertices. Let $P_0=P$ and do the following for each $i\ge 0$. Let $\tpl{e}_i$ be the end $(r-1)$-tuple of~$P_i$. If there is an edge $\tpl{e}_iv_i$ in $\mathcal{G}_3$ for some $v_i\in[n]\setminus V(P_i)$, then append~$v_i$ to~$P_i$ to obtain the tight path $P_{i+1}$. If no such edge exists, then halt. Observe that in step~$i$ of this procedure, it suffices to reveal the edges $\tpl{e}_iw$ with $w\in[n]\setminus V(P_i)$. Hence, by the principle of deferred decisions, the probability that $v_i$ does not exist is at most $(1-q')^{n-|P_i|}$. So, as long as $|P_i|\le n-n^{1-\frac12\eps'}$ this probability is at most $\exp(-q'n^{1-\frac12\eps'})\le \exp(-n^{\frac12\eps'})$. We take the union bound over all (at most $n$) $i$ to infer that this procedure a.a.s.\ indeed terminates with a tight path $P'$ with $|P'|\ge n-n^{1-\frac12\eps'}$ which contains~$P$. \smallskip {\sl Step 4:} Now let $L'$ be the set of vertices not covered by $P'$. Let~$L$ be obtained from $L'$ by adding at most $r-2$ vertices of $W^*$, such that $|L|$ is divisible by $r-1$.
Let $Y_1,\ldots,Y_t$ be a partition of $L$ into $|L|/(r-1)$ tuples of size $r-1$. Let $Y_0$ be the reverse of the start $(r-1)$-tuple of $P'$, and $Y_{t+1}$ be the reverse of its end $(r-1)$-tuple. In the fourth step, we use $\mathcal{G}_4$ and Lemma~\ref{lem:connect} with input $r$, $\eps'$ and $\delta=\frac12c$ to find for each $0\le i\le \frac12t$ a tight path between $Y_{2i}$ and $Y_{2i+1}$ of length at most $\ell_2$ using only vertices in $W^*\setminus L$, such that these paths are pairwise disjoint. This is possible since $|W^*\setminus L|\ge \frac12cn$ and since $t\le|L|\le n^{1-\frac12\eps'}+r-2$ implies $\frac{t}2+1\le n^{1-\frac13\eps'} \le \eta_2 n$ for~$n$ sufficiently large. Let~$W^{**}$ be the set of at least $cn-(\frac{t}2+1)\ell_2\ge cn-n^{1-\frac13\eps'}\ell_2\ge \frac23cn$ vertices in~$W^*$ not used in this step. \smallskip {\sl Step 5:} Similarly, in the fifth step, we use $\mathcal{G}_5$ and Lemma~\ref{lem:connect}, with input $r$, $\eps'$ and $\delta=c/2$, to find for each $0\le i\le \frac12(t-1)$ a tight path between $Y_{2i+1}$ and $Y_{2i+2}$ of length at most $\ell_2$ using only vertices in $W^{**}\setminus L$, such that these paths are pairwise disjoint. Again, $|W^{**}\setminus L|\ge \frac12cn$ and $\frac{t}2+1\le \eta_2 n$ for $n$ sufficiently large. Thus Lemma~\ref{lem:connect} guarantees that this step also a.a.s.\ succeeds and that the tight paths can be found in polynomial time. \smallskip But now we are done: Let~$W$ be the vertices of~$W^*$ used in steps~4 and~5. By definition of~$W^*$ we can delete the vertices of~$W$ from~$P'$ and obtain a tight path $P'(W)$ through the remaining vertices of~$P'$ (using additional edges of the reservoir graphs) and with the same start and end $(r-1)$-tuples. Then $P'(W)$ together with the connections constructed in steps~$4$ and~$5$ (which incorporated all vertices of~$L$) forms a tight Hamilton cycle in~$\mathcal{G}$.
\end{proof} \begin{remark}\label{rem:complexity} We note that the only non-deterministic part of the algorithm presented in the above proof concerns the partition of the edges of the input graph into five random subsets at the beginning. The algorithm in the connection lemma (Lemma~\ref{lem:connect}) is polynomial time, where the power of the polynomial is independent of~$\eps$. The same is (obviously) true for the greedy procedure of step~3. Finding many vertex disjoint reservoir graphs in step~1, however, we can only do in time~$n^{256\eps^{-2}}$. \end{remark} \section{Proof of the connection lemma} \label{sec:connect} {\bf Preliminaries.} For a binomially distributed random variable~$X$ and a constant~$\gamma$ with $0<\gamma\le 3/2$ we will use the following Chernoff bound, which can be found, e.g., in~\cite[Corollary~2.3]{JaLuRu:Book}: \begin{equation}\label{eq:chernoff} \mathbb{P}\big(\, |X-\mathbb{E} X|\ge\gamma \mathbb{E} X\big)\le 2\exp(-\gamma^2\mathbb{E} X/3) \,. \end{equation} In addition we apply the following consequence of Janson's inequality (see, for example,~\cite[Theorem~2.18]{JaLuRu:Book}): Let~$\mathcal{E}$ be a finite set and~$\mathcal{P}$ be a family of non-empty subsets of~$\mathcal{E}$. Now consider the random experiment where each $e\in\mathcal{E}$ is chosen independently with probability~$p$, and define for each~$P\in\mathcal{P}$ the indicator variable $I_P$ of the event that each element of~$P$ gets chosen. Set~$X=\sum_{P\in\mathcal{P}} I_P$ and $\Delta=\frac12\sum_{P\neq P',P\cap P'\neq\emptyset}\mathbb{E}(I_PI_{P'})$. Then \begin{equation}\label{eq:Janson} \mathbb{P}(X=0)\le\exp(\Delta - \mathbb{E} X) \,. \end{equation} For $e\in\binom{n}{r}$ we say that we \emph{expose the $r$-set~$e$} in $\mathcal{G}^{(r)}(n,p)$, if we perform (only) the random experiment of including~$e$ in~$\mathcal{G}^{(r)}$ with probability~$p$ (recall that $p:=n^{-1+\eps}$). If this random experiment includes~$e$ then we say that \emph{$e$ appears}.
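To illustrate how the Chernoff bound~\eqref{eq:chernoff} will typically be applied below, consider exposing $m\ge\delta n/(8r)$ $r$-sets, each of which appears independently with probability $p=n^{-1+\eps}$. The number~$X$ of appearing $r$-sets is binomially distributed with $\mathbb{E} X=mp\ge\delta n^{\eps}/(8r)$, so~\eqref{eq:chernoff} with $\gamma=1/2$ yields \begin{equation*} \mathbb{P}\big(\,|X-\mathbb{E} X|\ge\tfrac12\mathbb{E} X\big)\le 2\exp(-\mathbb{E} X/12)\le 2\exp\Big(-\frac{\delta n^{\eps}}{96r}\Big)\,. \end{equation*} Hence each such exposure deviates from its mean by a factor of more than $1/2$ with probability exponentially small in~$n^{\eps}$, and a union bound over polynomially many exposures still yields an a.a.s.\ statement.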
Clearly, we can iteratively generate (a subgraph of) $\mathcal{G}^{(r)}(n,p)$ by exposing $r$-sets, as long as we do not expose any $r$-set twice. For a tuple~$\tpl{u}$ of at most $r-1$ vertices in~$[n]$ we say that we \emph{expose the $r$-sets at~$\tpl{u}$}, if we expose all $r$-sets $e\in\binom{n}{r}$ with $\tpl{u}\subset e$. Similarly, we expose~$\mathcal{H}\subset\binom{n}{r}$ if we expose all $r$-sets $e\in\mathcal{H}$. In our algorithm we use the following structure. A \emph{fan} $\mathcal{F}(\tpl{u})$ in an $r$-uniform hypergraph~$\mathcal{H}$ is a set $\{P_1,\dots,P_t\}$ of tight paths in~$\mathcal{H}$ which all have length either~$\ell$ or~$\ell-1$, start in the same $(r-1)$-tuple $\tpl{u}$, and satisfy the following condition. For any set $S$ of at least $r/2$ vertices, let $\{P_j\}_{j\in I}$ be the collection of tight paths in which the set $S$ appears as a consecutive interval. Then the paths $\{P_j\}_{j\in I}$ also coincide between~$\tpl{u}$ and the interval~$S$. The tuple~$\tpl{u}$ is also called the \emph{root} of~$\mathcal{F}(\tpl{u})$. Moreover, $\ell$ is the \emph{length} of~$\mathcal{F}(\tpl{u})$, and~$t$ its \emph{width}. The set of \emph{leaves} $L\big(\mathcal{F}(\tpl{u})\big)$ of~$\mathcal{F}(\tpl{u})$ is the set of $(r-1)$-tuples $\tpl{u}'$ such that some path in~$\mathcal{F}(\tpl{u})$ ends in~$\tpl{u}'$. For intuition, observe that in the graph case $r=2$, a fan is simply a rooted tree all of whose leaves are at distance either $\ell$ or $\ell-1$ from the root. For $r\ge 3$, a fan is a more complicated structure. \smallskip {\bf Idea.} We shall consecutively build the $\tpl{u}^{(i)}-\tpl{v}^{(i)}$ paths~$P_i$ in the set~$X$, starting with~$P_1$. The construction of the path~$P_i$ we call \emph{phase~$i$}, and the strategy in this phase is as follows. We shall first expose all the hyperedges at~$\tpl{u}^{(i)}$, excluding a set of `used' vertices~$U$ (such as the vertices not in $X$, or those in any~$\tpl{u}^{(i')}$ or~$\tpl{v}^{(i')}$).
The edges $\{\tpl{u}^{(i)},c\}$ appearing in this process form possible starting edges for a path connecting $\tpl{u}^{(i)}$ and $\tpl{v}^{(i)}$. For each such (one-edge) path~$P$ we next consider the end $(r-1)$-tuple of~$P$ and expose all edges at this tuple, excluding edges that were exposed earlier and used vertices (where now we count vertices in $P$ as used). And so on. In this way we obtain a (consecutively growing) fan~$\mathcal{F}(\tpl{u}^{(i)})$ with root~$\tpl{u}^{(i)}$. While growing this fan we shall also insist that no $j$-tuple of vertices with $j<r$ is used too often. We stop when the fan has width $n^{(r-1)/2-\eps/2}$. We will show that with high probability the fan then has only constant depth. Then we similarly construct a fan~$\mathcal{F}(\tpl{v}^{(i)})$ of width $n^{(r-1)/2-\eps/2}$ with root~$\tpl{v}^{(i)}$ (again avoiding used vertices and exposed edges). In a last step, for each leaf~$\tpl{\tilde u}^{(i)}$ of~$\mathcal{F}(\tpl{u}^{(i)})$ and each leaf $\tpl{\tilde v}^{(i)}$ of $\mathcal{F}(\tpl{v}^{(i)})$ we expose all $\tpl{\tilde u}^{(i)}-\tpl{\tilde v}^{(i)}$ paths of length $2(r-1)$, avoiding exposed edges. We shall show that with high probability at least one of these paths appears (and the fans $\mathcal{F}(\tpl{u}^{(i)})$ and $\mathcal{F}(\tpl{v}^{(i)})$ can be constructed), and hence we have successfully constructed~$P_i$. We shall also show that in phase~$i$ we expose only much less than a $1/n$ fraction of the $r$-sets in $X$. Hence it is plausible that we can avoid these exposed $r$-sets in future phases. We note that this last statement makes use of the fact that $r\ge 3$: our connection algorithm does not work for $2$-graphs. \begin{proof}[Proof of Lemma~\ref{lem:connect}] {\sl Setup:} Given $r\ge 3$, $\delta>0$ and $0<\eps<1/(4r)$, we set \begin{equation}\label{eq:setxis} \xi':=\delta/(48r^2)\,,\quad\xi:=(\xi')^r/(r^2(r-1)!)\quad\text{and} \quad\eta=\delta/(16r)\,.
\end{equation} Without loss of generality we will assume $|X|=\delta n$: this simplifies our calculations. In the algorithm described below, we maintain various auxiliary sets. We have a set $U$ of \emph{used vertices}, which contains all vertices in the sets $\tpl{u}^{(i)}$ and $\tpl{v}^{(i)}$, and in previously constructed connecting paths. In phase~$i$ we maintain additionally a (non-uniform) multihypergraph $U_i$ of \emph{used sets}, which keeps track of the number of times we have so far used a vertex, or pair of vertices, et cetera, consecutively in some path of the fan currently under construction. Actually, it will greatly simplify the analysis if any such used set can only appear in a unique order on these paths. Hence we choose the following setup. We arbitrarily fix an equipartition \[X=Y_1\dcup\cdots\dcup Y_{2r}\dcup Y'_1\dcup\cdots\dcup Y'_{2r}\,,\] and set $Y:=Y_1\dcup\cdots\dcup Y_{2r}$ and $Y':=Y'_1\dcup\cdots\dcup Y'_{2r}$. We shall construct the fan~$\mathcal{F}(\tpl{u}^{(i)})$ with root $\tpl{u}^{(i)}$ in $Y_1,\ldots,Y_{2r}$, taking successive levels of the fan from successive sets (in cyclic order), and similarly~$\mathcal{F}(\tpl{v}^{(i)})$ in $Y'_1,\ldots,Y'_{2r}$. Further, we maintain an $r$-uniform \emph{expos\'e hypergraph} $H$, which keeps track of the $r$-sets which we have exposed. We let $H_i$ be the hypergraph with the edges of~$H$ at the beginning of phase~$i$. We define hypergraphs $D_i^{(1)},\ldots,D_i^{(r-1)}$ of \emph{dangerous sets} for phase~$i$ as follows: \begin{subequations} \begin{align} \label{eq:connectr:dangerr-1} D_i^{(r-1)}&:=\Big\{\tpl{x}\in\tbinom{X}{r-1}\colon \deg_{H_i}(\tpl{x})\ge\xi n\Big\}\,,&&\text{and}\\ \label{eq:connectr:dangerj} D_i^{(j)}&:=\Big\{\tpl{x}\in\tbinom{X}{j}\colon \deg_{D_i^{(j+1)}}(\tpl{x})\ge\xi n\Big\}\,,&&\text{$r-2\ge j\ge 1$}\,. \end{align} \end{subequations} We will not use any set in any $D_i^{(j)}$ consecutively in a path in the fans constructed in phase~$i$. 
Given two vertex-disjoint $(r-1)$-sets $\tpl{u}$ and $\tpl{v}$, we say that the path $(\tpl{u},\tpl{v})$ of length $2r-2$ is \emph{blocked} by the expos\'e hypergraph $H$ if some set of $r$ consecutive vertices of the $(2r-2)$-tuple $(\tpl{u},\tpl{v})$ lies in $H$. When constructing the fan $\mathcal{F}\big(\tpl{v}^{(i)}\big)$ with root $\tpl{v}^{(i)}$, we need to ensure that not too many of its leaves form, together with leaves of the previously constructed fan $\mathcal{F}\big(\tpl{u}^{(i)}\big)$, paths blocked by~$H$. For this purpose we define hypergraphs~$\tilde{D}_i^{(j)}$ of temporarily dangerous sets in phase $i$ as follows. We call an $(r-1)$-set $\tpl{y}$ in $Y'$ \emph{temporarily dangerous} if there are at least $\xi'\big|L\big(\mathcal{F}(\tpl{u}^{(i)})\big)\big|$ leaves $\tpl{x}$ of $\mathcal{F}(\tpl{u}^{(i)})$ such that $\{\tpl{x},\tpl{y}\}$ is blocked by $H_i$. We define \begin{subequations} \begin{align} \label{eq:connectr:dangerir-1} \tilde{D}_i^{(r-1)}&:=\Big\{\tpl{y}\in\tbinom{Y'}{r-1}\colon \tpl{y}\text{ is temporarily dangerous} \Big\}\,,\quad\text{and} \\ \label{eq:connectr:dangerij} \tilde{D}_i^{(j)}&:=\Big\{\tpl{y}\in\tbinom{Y'}{j}\colon \deg_{\tilde{D}_i^{(j+1)}}(\tpl{y})\ge\xi' n\Big\}\,, \quad \text{for }r-2\ge j\ge 1\,. \end{align} \end{subequations} Summarising, we do not want to append a vertex~$c\in X\setminus U$ to the end $(r-1)$-tuple $\tpl{a}$ of a path in one of our fans, if for $\tpl{a}$ or for any end $(j-1)$-tuple $\tpl{a}_{j-1}$ of~$\tpl{a}$ with $j\in[r-1]$ we have \begin{enumerate}[label=\rom] \item\label{item:Bad:H} $\{\tpl{a},c\}$ is in~$H$, \item\label{item:Bad:D} $\{\tpl{a}_{j-1},c\}$ is an edge of $D_i^{(j)}$ or of $\tilde{D}_i^{(j)}$, or \item\label{item:Bad:U} $\{\tpl{a}_{j-1},c\}$ has multiplicity greater than $\xi^{r-j}n^{(r-1)/2-j(1-\eps)}$ in $U_i$.
\end{enumerate} Hence we define the set~$B(\tpl{a})$ of \emph{bad vertices} for $\tpl{a}$ to be the set of vertices in $X\setminus U$ for which at least one of these conditions applies. \smallskip {\sl Algorithm:} The desired paths $P_i$ will be constructed using Algorithm~\ref{alg:connect}. This algorithm constructs for each $i\in[k]$ two fans $\mathcal{F}(\tpl{u}^{(i)})$ and $\mathcal{F}(\tpl{v}^{(i)})$, using Algorithm~\ref{alg:fan} as a subroutine. \begin{algorithm}[t] \caption{Connect each pair $\tpl{u}^{(i)}$, $\tpl{v}^{(i)}$ with a path~$P_i$} \label{alg:connect} $U:=\bigcup_{i\in[k]}\{\tpl{u}^{(i)},\tpl{v}^{(i)}\}$ ; \quad $H:=\emptyset$ \; \ForEach{$i\in[k]$}{ \lnl{step:fanu}construct the fan~$\mathcal{F}(\tpl{u}^{(i)})$ in $Y_1\dcup\ldots\dcup Y_{2r}$ \; \lnl{step:fanv}construct the fan~$\mathcal{F}(\tpl{v}^{(i)})$ in $Y'_1\dcup\ldots\dcup Y'_{2r}$ \; let $L:=L(\tpl{u}^{(i)})$ be the leaves of~$\mathcal{F}(\tpl{u}^{(i)})$ \; let $L':=L(\tpl{v}^{(i)})$ be the leaves of~$\mathcal{F}(\tpl{v}^{(i)})$ reversed \; $\mathcal{P}:=$ all $L-L'$-paths of length~$2r-2$ not blocked by~$H$ \; \lnl{step:exposepaths}expose all edges which are in some $P\in\mathcal{P}$ \; \uIf{one of these paths~$\tpl{\tilde u}^{(i)},\tpl{\tilde v}^{(i)}$ appears}{ $P(\tpl{u^{(i)}}):=\text{the path in $\mathcal{F}(\tpl{u}^{(i)})$ ending with $\tpl{\tilde u}^{(i)}$}$ \; $P(\tpl{v^{(i)}}):=\text{reversal of the path in $\mathcal{F}(\tpl{v}^{(i)})$ ending with $\tpl{\tilde v}^{(i)}$}$ \; $P_i:= P(\tpl{u^{(i)}}),\tpl{\tilde u}^{(i)},\tpl{\tilde v}^{(i)}, P(\tpl{v^{(i)}})$ \; } \lnl{step:failure}\lElse{ halt with \Failure \; } \lnl{step:U}$U:=U\cup V(P_i)$ \; \lnl{step:Hconnect}\lForEach{$\tpl{x}\in L(\tpl{u}^{(i)}),\tpl{y}\in L(\tpl{v}^{(i)})$}{ $H:=H\cup\binom{\tpl{x}\,\cup\,\tpl{y}}{r}$ \; } } \end{algorithm} \begin{algorithm}[t] \caption{Construct the fan $\mathcal{F}(\tpl{u}^{(i)})$} \label{alg:fan} $\mathcal{F}(\tpl{u}^{(i)}):=\{\tpl{u}^{(i)}\}$ ; \quad $U_i:=\emptyset$ ; \quad $t:=1$ \; 
\RepeatForever{}{ $\mathcal{P}:=\mathcal{F}(\tpl{u}^{(i)})$ \; \lnl{step:forP} \ForEach{path $P\in\mathcal{P}$}{ let~$\tpl{a}$ be the end $(r-1)$-tuple of~$P$ \; \lnl{step:exposeedges} expose all edges $\{\tpl{a},c\}$ with $c\in C':=Y_t\setminus(V(P)\cup U\cup B(\tpl{a}))$ \; \lnl{step:C} $C:=\{c \colon \{\tpl{a},c\} \text{ appears in previous step}\}$ \; \lnl{step:CheckC} \lIf{\NOT $\delta n^\eps/(16r)\le |C|\le\delta n^{\eps}/(2r)$}{halt with \Failure \;} \lnl{step:F} $\mathcal{F}(\tpl{u}^{(i)}) := \big( \mathcal{F}(\tpl{u}^{(i)})\setminus\{P\}\big)\cup \big\{ (P,c) \colon c\in C \big\}$ \; \lnl{step:Qj} $\tpl{a}_j:=$ last $j$ vertices of $P$ for $j\in[r-2]$ \; \lnl{step:Ui} $U_i:=U_i \cup C \cup \bigcup_{c\in C}\big\{\{\tpl{a}_j,c\}\colon j\in[r-2]\big\}$ \; \lnl{step:Hfan} $H:=H\cup\big\{\{\tpl{a},c\}\colon c\in C'\big\}$ \; \lnl{step:whileF} \lIf{$|\mathcal{F}(\tpl{u}^{(i)})|\ge n^{(r-1)/2-\eps/2}$}{\Return\ \;} } $t:=(t \mod 2r) + 1$ \; } \end{algorithm} It is clear that the running time (whether the algorithm succeeds or fails) is polynomial: Steps~\ref{step:CheckC} and~\ref{step:whileF} guarantee that in one call, Algorithm~\ref{alg:fan} runs at most $n^{(r-1)/2}$ times through its repeat loop. Our analysis will show that a.a.s.\ the algorithm indeed succeeds. \smallskip Before we proceed with the analysis, let us remind the reader that $H$ denotes the set of $r$-sets exposed so far, $H_i$ consists of the hyperedges of $H$ before the start of phase $i$, $U$ is the set of already used vertices, and $U_i$ is the auxiliary multihypergraph which is maintained through phase $i$ and records those $j$-tuples ($j\in[r-1]$) that were used for constructing the fan $\mathcal{F}(\tpl{u}^{(i)})$ ($\mathcal{F}(\tpl{v}^{(i)})$ resp.). \smallskip {\sl Analysis:} First, we claim that the algorithm is valid in that it does not try to expose any $r$-set twice.
To see this, we need to check that at steps~\ref{step:exposepaths} and~\ref{step:exposeedges}, we do not attempt to re-expose an already exposed $r$-set. Since we do not expose any $r$-set in $H$ at either step (by the definition of $B(\tpl{a})$), it is enough to check that after either step, all exposed $r$-sets are added to $H$ before the next visit to either step. This takes place in steps~\ref{step:Hconnect} and~\ref{step:Hfan}. In order to show that the algorithm succeeds, we need to show that the following hold with sufficiently high probability for each $i\in[k]$. \begin{enumerate}[label=\itm{A\arabic{*}}, start=1] \item\label{item:connect:A2} Algorithm~\ref{alg:fan} successfully builds the fans~$\mathcal{F}(\tpl{u}^{(i)})$ and~$\mathcal{F}(\tpl{v}^{(i)})$, that is, the condition in step~\ref{step:whileF} eventually becomes true, and the condition in step~\ref{step:CheckC} never becomes true. \item\label{item:connect:A3} If this is the case, then Algorithm~\ref{alg:connect} successfully constructs~$P_i$, that is, one of the paths exposed in step~\ref{step:exposepaths} appears. \item\label{item:connect:A4} If this is the case, then~$P_i$ is of length at most~$s=\frac{r-1}{\eps}$, that is, the fans~$\mathcal{F}(\tpl{u}^{(i)})$ and~$\mathcal{F}(\tpl{v}^{(i)})$ have length at most~$s/2$. \end{enumerate} It is straightforward to see that~\ref{item:connect:A4} holds. Indeed, if Algorithm~\ref{alg:fan} succeeds in step~$i$, then in the last repetition of the for-loop creating $\mathcal{F}(\tpl{u}^{(i)})$, the width of $\mathcal{F}(\tpl{u}^{(i)})$ finally exceeds $n^{(r-1)/2-\eps/2}$. 
Since by step~\ref{step:CheckC} at most $|C|\le \delta n^\eps/(2r)<n^{(r-1)/2-\eps/2}$ paths are added to $\mathcal{F}(\tpl{u}^{(i)})$ in this last for-loop (and the same holds for $\mathcal{F}(\tpl{v}^{(i)})$), we obtain for the width of $\mathcal{F}(\tpl{u}^{(i)})$ and $\mathcal{F}(\tpl{v}^{(i)})$ (which equals the number of their leaves) that \begin{equation}\label{eq:connect:L} n^{(r-1)/2-\eps/2}\le \big|L(\tpl{u}^{(i)})\big|,\big|L(\tpl{v}^{(i)})\big|\le 2n^{(r-1)/2-\eps/2}\,. \end{equation} Now observe that by step~\ref{step:CheckC} the fan $\mathcal{F}(\tpl{u}^{(i)})$ (and similarly~$\mathcal{F}(\tpl{v}^{(i)})$) has width at least $\big(\delta n^{\eps}/(16r)\big)^{s_i}$, where $s_i$ is the length of~$\mathcal{F}(\tpl{u}^{(i)})$. For $s_i\ge(r-1)/(2\eps)$ this would imply $\big|L(\tpl{u}^{(i)})\big|\ge \big(\delta n^{\eps}/(16r)\big)^{(r-1)/(2\eps)}>n^{(r-1)/2-\eps/2}$, contradicting~\eqref{eq:connect:L}. Hence we have~\ref{item:connect:A4}. For proving~\ref{item:connect:A2} and~\ref{item:connect:A3}, we first show bounds on various quantities during the running of the algorithm. For a set $\tpl{a}$ in the multiset $U_i$ with $i\in[k]$, we write $\text{mult}_{U_i}(\tpl{a})$ for the multiplicity of~$\tpl{a}$ in~$U_i$. \begin{claim}\label{cl:connect:F} If phase~$i$ and all phases before succeed, then the following hold throughout phase~$i$. \begin{enumerate}[label=\abc] \item\label{item:connect:U} $|U|\le k\big(s+2(r-1)\big)\le 2kr/\eps$. \item\label{item:connect:Ui} For each $j\in[r-1]$ and each $j$-set $\tpl{a}\in U_i$ we have \[\text{mult}_{U_i}(\tpl{a})\le\xi^{r-j}n^{((r-1)/2)-j(1-\eps)}+1\,.\] \item\label{item:connect:Uij} For each $j\in[r-1]$ and each $(j-1)$-set $\tpl{a}$ in $[n]$, for all but $\xi n$ vertices $c\in X$ we have \[\text{mult}_{U_i}(\{\tpl{a},c\})\le\xi^{r-j} n^{((r-1)/2)-j(1-\eps)}\,.\] \item\label{item:connect:H} $e(H_{i+1})\le 2^{2r+1}(i+1)n^{r-1-\eps/2}$. 
\item\label{item:connect:J} At step~\ref{step:exposeedges} in Algorithm~\ref{alg:fan}, we have $|Y_t\setminus(V(P)\cup U\cup B(\tpl{a}))|\ge\delta n/(8r)$. \end{enumerate} \end{claim} Observe that for $j\ge r/2$ Claim~\ref{cl:connect:F}\ref{item:connect:Ui} implies that we always have $\text{mult}_{U_i}(\tpl{a})\le 1$ for any $j$-tuple used in any $\mathcal{F}(\tpl{u}^{(i)})$ or $\mathcal{F}(\tpl{v}^{(i)})$. This shows that $\mathcal{F}(\tpl{u}^{(i)})$ and $\mathcal{F}(\tpl{v}^{(i)})$ are indeed fans, as we claim. \begin{claimproof}[Proof of Claim~\ref{cl:connect:F}] We first prove~\ref{item:connect:U}. The set $U$ contains the $2k(r-1)$ vertices of the $k$ pairs of $(r-1)$-tuples which we wish to connect, together with all the vertices of the paths thus far constructed. Since by~\ref{item:connect:A4} these paths are of length at most~$s$, it follows that $|U|\le 2k(r-1)+(i-1)s\le k\big(s+2(r-1)\big)$. \smallskip To see that~\ref{item:connect:Ui} holds, observe that $j$-sets are added to~$U_i$ only at step~\ref{step:Ui}, and at this point the sets added are distinct: two sets either contain different members of $Y_t$, or they are of different sizes. Moreover, they are added only if their multiplicity in $U_i$ is at most $\xi^{r-j} n^{(r-1)/2-j(1-\eps)}$ by~\ref{item:Bad:U} in the definition of $B(\tpl{a})$. \smallskip For~\ref{item:connect:Uij} we proceed by induction on~$j$. First consider the case~$j=1$. Observe that $c\in X$ is added to~$U_i$ in step~\ref{step:Ui} only if it is added at the end of a path~$P$. Since step~\ref{step:CheckC} guarantees that each fan grows by a factor of at least~$2$ in each iteration, we have \begin{equation*} \sum_{c\in X} \text{mult}_{U_i}(c) \le 2\big(|L(\tpl{u}^{(i)})|+|L(\tpl{v}^{(i)})|\big) \leByRef{eq:connect:L} 8n^{(r-1)/2-\eps/2}<\xi^rn^{(r-1)/2}\,.
\end{equation*} We conclude that there are at most \begin{equation*} \frac{\xi^rn^{(r-1)/2}} {\xi^{r-1}n^{((r-1)/2)-1+\eps}} = \xi n^{1-\eps} \end{equation*} vertices $c\in X$ with $\text{mult}_{U_i}(c)>\xi^{r-1}n^{((r-1)/2)-1+\eps}$. Now assume that~\ref{item:connect:Uij} holds for $j-1$ and let~$\tpl{a}$ be a $(j-1)$-set in $[n]$. Similarly as before, for $c\in X$ the set $\{\tpl{a},c\}$ is in $U_i$ with multiplicity equal to the number of times that~$\tpl{a}$ has appeared as the end of a path~$P$ in one of the two fans constructed in this phase and the path $(P,c)$ was subsequently added to the fan in step~\ref{step:F}. Since we did not previously halt in step~\ref{step:CheckC}, for any $P$ there are at most $\delta n^\eps/(2r)\le\frac12 n^{\eps}$ vertices $c\in X$ such that $(P,c)$ is added in this way. Thus we have \begin{equation}\label{eq:connect:mult} \sum_{c\in X} \text{mult}_{U_i}(\tpl{a},c)\le \text{mult}_{U_i}(\tpl{a})\cdot \tfrac{1}{2} n^{\eps}\,. \end{equation} By~\ref{item:connect:Ui} we know in addition that \[\text{mult}_{U_i}(\tpl{a})\le\xi^{r-j+1}n^{((r-1)/2)-(j-1)(1-\eps)}+1\,.\] Note that if this bound is less than~$2$ then \eqref{eq:connect:mult} directly implies that there are at most~$\xi n$ vertices~$c$ with $\text{mult}_{U_i}(\tpl{a},c)\ge 1$ and we are done. Hence we may assume $\text{mult}_{U_i}(\tpl{a})\le2\xi^{r-j+1}n^{((r-1)/2)-(j-1)(1-\eps)}$. This together with~\eqref{eq:connect:mult} also implies that there are at most \begin{equation*} \frac{2\xi^{r-j+1}n^{((r-1)/2)-(j-1)(1-\eps)} \cdot \frac12 n^{\eps}}{\xi^{r-j}n^{((r-1)/2)-j(1-\eps)}} =\xi n \end{equation*} vertices $c\in X$ with $\text{mult}_{U_i}(\tpl{a},c)\ge\xi^{r-j}n^{((r-1)/2)-j(1-\eps)}$, as desired. \smallskip For the remaining parts of the claim, we proceed by induction on the phase~$i\in[k]$. So assume that the claim holds at the end of the $(i-1)$st phase. We next prove~\ref{item:connect:H}. 
At the end of phase~$i$, the hypergraph $H$ contains all the $r$-sets which it had at the end of phase $i-1$, together with all those added in phase~$i$. Now consider the construction of one fan in phase~$i$, say of $\mathcal{F}(\tpl{u}^{(i)})$. Since we did not halt in step~\ref{step:CheckC}, the width of the fan grows exponentially, more than doubling at each step. Thus we can bound the total number of iterations of the for-loop by the number $|L(\tpl{u}^{(i)})|$ of leaves of this fan (cf.\ step~\ref{step:F}). In each of these iterations, we exposed $|Y_t\setminus(P\cup U\cup B(\tpl{a}))|<n$ of the $r$-sets and added them to~$H$. Hence, while constructing $\mathcal{F}(\tpl{u}^{(i)})$ (and similarly for $\mathcal{F}(\tpl{v}^{(i)})$), we added at most $|L(\tpl{u}^{(i)})|n$ new $r$-sets to~$H$. The only other step where we add $r$-tuples to~$H$ is step~\ref{step:Hconnect}. In this step, for each pair of leaves of $\mathcal{F}(\tpl{u}^{(i)})$ and $\mathcal{F}(\tpl{v}^{(i)})$, we add $\binom{2r-2}{r}$ new $r$-sets to $H$. Using the induction hypothesis we thus conclude that at the end of phase~$i$ we have \begin{equation*}\begin{split} e(H_{i+1}) &\le e(H_i) + \big(|L(\tpl{u}^{(i)})|+|L(\tpl{v}^{(i)})|\big)n + \tbinom{2r-2}{r} |L(\tpl{u}^{(i)})|\cdot|L(\tpl{v}^{(i)})| \\ & \leByRef{eq:connect:L} 2^{2r+1} i\cdot n^{r-1-\eps/2} + 4 n^{(r+1)/2-\eps/2} \cdot n + \tbinom{2r-2}{r} 4n^{r-1-\eps} \\ & \le 2^{2r+1}(i+1)n^{r-1-\eps/2} \,, \end{split}\end{equation*} where for the final inequality we use the fact that $(r+1)/2\le r-1$, which holds since $r\ge 3$. This is the only step in the analysis where we use $r\ge 3$, but this analysis is reasonably tight: the algorithm does fail for $r=2$. \smallskip Last we prove~\ref{item:connect:J}, for which we additionally proceed by induction on the number~$f$ of iterations through the for-loop of Algorithm~\ref{alg:fan} done in the $i$th phase so far. 
So we assume that the claim holds at the end of the $(i-1)$st phase and after $f-1$ iterations. Let $P$ be the path considered in iteration~$f$ of this for-loop, and $\tpl{a}$ the $(r-1)$-tuple ending $P$. We would like to estimate the size of $B(\tpl{a})\cap Y_t$. Keep in mind in the following analysis that for $j\in[r-1]$ the hypergraph~$D_i^j$ does not change during phase~$i$, by definition. Similarly, $\tilde D_i^j$ does not change once the fan $\mathcal{F}(\tpl{u}_i)$ is constructed. Now let us first assess the effect of~\ref{item:Bad:H} of the definition of $B(\tpl{a})$. Since~$\tpl{a}$ is the end of a path constructed by Algorithm~\ref{alg:fan}, step~\ref{step:exposeedges} implies that the last vertex~$b$ of~$\tpl{a}$ is not contained in $B(\tpl{b})$ where~$\tpl{b}$ is the end $(r-1)$-tuple of $P-b$. From~\ref{item:Bad:D} in the definition of $B(\tpl{b})$ we conclude that $\tpl{a}\notin D_i^{(r-1)}$. Thus, by the definition of $D_i^{(r-1)}$ in~\eqref{eq:connectr:dangerr-1}, the number of edges in $H_i$ containing $\tpl{a}$ is smaller than $\xi n$. But how many edges $\{\tpl{a},c\}$ with $c\in Y_t$ did phase~$i$ add to $H$ so far? By~\ref{item:connect:Ui} the set~$\tpl{a}$ has multiplicity at most $\xi n^{(r-1)(2\eps-1)/2}+1<2$ in~$U_i$. It follows that since the start of phase~$i$ only one edge containing~$\tpl{a}$ was added to~$H$ in step~\ref{step:Hfan}: the end $r$-tuple $\tpl{a}_r$ of~$P$. However, since~$\tpl{a}_r$ contains no vertices of~$Y_t$ because the algorithm takes successive levels of the fan in successive~$Y_{t'}$ (or~$Y'_{t'}$), we conclude that the current phase did not add any additional edges $\{\tpl{a},c\}$ to~$H$ with $c\in Y_t$. Now let us estimate the number of vertices~$c\in Y_t$ which~\ref{item:Bad:D} of the definition of $B(\tpl{a})$ forbids. First, we need to consider the case $j=1$, and show that the number of vertices in $D_i^{(1)}$ is at most $\xi n$. 
Suppose not, and observe that by definition~\eqref{eq:connectr:dangerj}, each vertex in $D_i^{(1)}$ extends to at least $\xi n$ pairs in $D_i^{(2)}$, and so on, where at the final step each constructed member of $D_i^{(r-1)}$ extends to at least $\xi n$ members of $H_i$. We can construct any given member of $H_i$ in at most $r!$ ways, so we conclude that $e(H_i)\ge(\xi n)^r/r!$, which (for sufficiently large $n$) contradicts part~\ref{item:connect:H}. Next, again for the case $j=1$, we need to show that further there are at most $\xi' n$ vertices in $\tilde{D}_i^{(1)}$. Again, suppose not: then as above this implies that the number of pairs of $(r-1)$-tuples $(\tpl{x},\tpl{y})$ with $\tpl{x}\in L\big(\tpl{u}^{(i)}\big)$ and $\tpl{y}$ contained in $Y'_1\cup\ldots\cup Y'_{2r}$ is at least \begin{equation}\label{eq:connect:tempdang} \xi'\Big|L\big(\tpl{u}^{(i)} \big)\Big|\cdot(\xi'n)^ { r-1 } /(r-1)! \geByRef{eq:setxis}r^2\xi n^{r-1}\Big|L\big(\tpl{u}^{(i)}\big)\Big|\,. \end{equation} However, by construction of $\mathcal{F}(\tpl{u}^{(i)})$, for each $j\in[r-1]$ and each $\tpl{x}\in L\big(\tpl{u}^{(i)}\big)$, we have the property that the last $j$ vertices of $\tpl{x}$ are not in $D_i^{(j)}$. We claim that this implies that the number of $(r-1)$-tuples $\tpl{y}$ contained in $Y'_1\cup\ldots\cup Y'_{2r}$ such that $(\tpl{x},\tpl{y})$ is blocked by $H_i$, is at most $(r-1)^2\xi n^{r-1}$, which is a contradiction to~\eqref{eq:connect:tempdang}. To see this, consider the following property P of tuples $\tpl{y}$. For each $r-1\ge j\ge 1$ and each $1\le k\le r-j$, the tuple consisting of the last $j$ vertices of $\tpl{x}$ followed by the first $k$ vertices of $\tpl{y}$ is not in $D_i^{(j+k)}$ (if $j+k<r$) and not in $H_i$ (if $j+k=r$). If $\tpl{y}$ has property P, then clearly the pair $(\tpl{x},\tpl{y})$ is not blocked by $H_i$. On the other hand, if $\tpl{y}$ does not have P, then there is a smallest $k$ for which P fails. 
By definition of the sets $D_i^{(j+k)}$, for a fixed~$j$ given the first $k-1$ vertices of $\tpl{y}$ there are at most~$\xi n$ choices for the $k$-th vertex of $\tpl{y}$. Hence, in total, given the first $k-1$ vertices of $\tpl{y}$ there are at most $(r-1)\xi n$ choices for the $k$-th vertex of $\tpl{y}$. Thus the number of $(r-1)$-tuples $\tpl{y}$ which do not have P is at most $(r-1)^2\xi n^{r-1}$ as desired. Now given $2\le j\le r-2$, let $\tpl{a}_{j-1}$ be the set consisting of the last $j-1$ vertices of $\tpl{a}$. By construction, $\tpl{a}_{j-1}$ is in neither $D_i^{(j-1)}$ nor in $\tilde{D}_i^{(j-1)}$. It follows from the definition of these sets in~\eqref{eq:connectr:dangerj} and~\eqref{eq:connectr:dangerij} that there are at most $\xi n$ vertices $c$ such that $\{\tpl{a}_{j-1},c\}\in D_i^{(j)}$, and at most $\xi' n$ such that $\{\tpl{a}_{j-1},c\}\in \tilde{D}_i^{(j)}$. Together with the case $j=1$, this gives at most $(r-2)(\xi+\xi')n$ forbidden vertices $c\in Y_t$. Finally, for~\ref{item:Bad:U}, observe that by part~\ref{item:connect:Ui}, for each $j\in[r-2]$ there are at most $\xi n$ vertices $c\in X$ with $\text{mult}_{U_i}\big(\{\tpl{a}_{j-1},c\}\big)>\xi^{r-j} n^{\frac{r-1}2-j(1-\eps)}$. Hence, in total, $B(\tpl{a})\cap Y_t$ contains at most \[\xi n + (r-2)(\xi+\xi')n+(r-2)\xi n \leByRef{eq:setxis} \frac{\delta n}{4r}\] vertices. Moreover, it follows from~\ref{item:connect:A4} that $|P|\le r/\eps$, and from~\ref{item:connect:U} that $|U|\le 2kr/\eps$. Since we have $|Y_t|=\delta n/(2r)$, we conclude that \[|Y_t\setminus(P\cup U\cup B(\tpl{a}))|\ge \frac{\delta n}{2r} - \frac{r}{\eps} - \frac{2kr}{\eps} -\frac{\delta n}{4r} \ge\frac{\delta n}{8r}\] \end{claimproof} Now we can use a Chernoff bound to show that a.a.s.\ Algorithm~\ref{alg:fan} does not fail in step~\ref{step:CheckC}. 
\begin{claim}\label{cl:connect:checkC} At any given visit to step~\ref{step:CheckC}, Algorithm~\ref{alg:fan} halts with probability at most $2\exp\big(-\delta n^\eps/(96r)\big)$. \end{claim} \begin{claimproof} By Claim~\ref{cl:connect:F}\ref{item:connect:J}, we have \[\delta n/(8r)\le |Y_t\setminus(P\cup U\cup B(\tpl{a}))|\le |Y_t|=\delta n/(4r)\,.\] Since $C$ is a $p$-random subset of $Y_t\setminus(P\cup U\cup B(\tpl{a}))$ with $p=n^{-1+\eps}$, we obtain $\delta n^{\eps}/(8r)\le\mathbb{E}|C|\le \delta n^\eps/(4r)$. Using the Chernoff bound~\eqref{eq:chernoff} with $\gamma=1/2$, we conclude that $\delta n^{\eps}/(16r)\le |C|\le \delta n^\eps/(2r)$ with probability at least $1-2\exp\big(-\delta n^\eps/(96r)\big)$. \end{claimproof} We would like to show that also a.a.s.\ Algorithm~\ref{alg:connect} does not fail in step~\ref{step:failure}. Since the events considered in this step are not mutually independent, we use Janson's inequality for this purpose. \begin{claim}\label{cl:connect:failure} At any given visit to step~\ref{step:failure}, Algorithm~\ref{alg:connect} halts with probability at most $\exp(-n^{(r-2)\eps}/4)$. \end{claim} \begin{claimproof} Let~$\mathcal{E}=\bigcup\mathcal{P}$ be the family of $r$-sets exposed in step~\ref{step:exposepaths} in this iteration of the foreach-loop. For each~$P\in\mathcal{P}$ let $I_P$ be the indicator variable for the event that the path~$P$ appears, which occurs with probability $\tilde p=p^{r-1}$. Then $X=\sum_{P\in\mathcal{P}} I_P$ is the random variable counting the number of $L-L'$-paths appearing in this iteration. We would like to use Janson's inequality~\eqref{eq:Janson} to show that $X>0$ with high probability, in which case Algorithm~\ref{alg:connect} does not halt in step~\ref{step:failure}. To this end we first bound $\mathbb{E} X$, for which we need to estimate $|\mathcal{P}|$. 
Firstly, since $\mathcal{F}(\tpl{u}^{(i)})$ and $\mathcal{F}(\tpl{v}^{(i)})$ are disjoint fans, no vertex is in a leaf both of $\mathcal{F}(\tpl{u}^{(i)})$ and of $\mathcal{F}(\tpl{v}^{(i)})$, and in particular $L(\tpl{u}^{(i)})$ and $L(\tpl{v}^{(i)})$ are disjoint. Now let $\tpl{\tilde v}$ be any $(r-1)$-tuple in $L(\tpl{v}^{(i)})$. By construction $\tpl{\tilde v}$ is not in $\tilde{D}_i^{(r-1)}$ (see step~\ref{step:exposeedges} and the definition of $B(\tpl{a})$). By~\eqref{eq:connectr:dangerir-1} the path $(\tpl{\tilde u},\tpl{\tilde v})$ is therefore blocked by~$H$ for at most $\xi'\big|L(\tpl{u}^{(i)})\big|$ tuples $\tpl{\tilde{u}}\in L(\tpl{u}^{(i)})$. Thus we have \begin{equation*} |\mathcal{P}| \ge \big|L(\tpl{v}^{(i)})\big|\cdot (1-\xi')\big|L(\tpl{u}^{(i)})\big| \geByRef{eq:connect:L} (1-\xi')n^{r-1-\eps}\,, \end{equation*} which gives \begin{equation}\label{eq:connect:E} \mathbb{E} X=|\mathcal{P}|\tilde p \ge (1-\xi')n^{r-1-\eps}n^{(\eps-1)(r-1)}= (1-\xi')n^{(r-2)\eps}\,. \end{equation} Next we would like to estimate $\mathbb{E}(I_PI_{P'})$ for two distinct paths $P=(\tpl{\tilde u},\tpl{\tilde v})$ and $P'=(\tpl{\tilde u}',\tpl{\tilde v}')$ which share at least one edge. In this case, in particular, either $\tpl{\tilde u}$ and $\tpl{\tilde u}'$ have the same end $r/2$-tuple, or $\tpl{\tilde v}$ and $\tpl{\tilde v}'$ have the same start $r/2$-tuple. Without loss of generality assume the former and suppose that $\tpl{\tilde v}$ and $\tpl{\tilde v}'$ match in the start $j$-tuple, but not in the $(j+1)$st vertex. Clearly $1\le j$, and since $\mathcal{F}(\tpl{u}^{(i)})$ and $\mathcal{F}(\tpl{v}^{(i)})$ are fans we have $\tpl{\tilde u}=\tpl{\tilde u}'$ and $j<r/2$. Hence~$P$ and~$P'$ share precisely an interval of length $r-1+j$, and thus~$j$ edges. Therefore $\mathbb{E}(I_PI_{P'})\le p^{2r-2-j}$.
In addition, the discussion above shows that for a fixed path $P=(\tpl{\tilde u},\tpl{\tilde v})$, the number $N_{P,j}$ of paths~$P'=(\tpl{\tilde u}',\tpl{\tilde v}')$ such that $P$ and $P'$ share $j$ edges, is at most the number of choices of a leaf $\tpl{\tilde v}'\in L(\tpl{v}^{(i)})$ such that $\tpl{\tilde v}'$ only has the end $(r-1-j)$-tuple $\tpl{v}$ different from $\tpl{\tilde v}$, plus the number of choices of a leaf $\tpl{\tilde u}'\in L(\tpl{u}^{(i)})$ such that $\tpl{\tilde u}'$ only has the start $(r-1-j)$-tuple $\tpl{u}$ different from $\tpl{\tilde u}$. By Claim~\ref{cl:connect:F}\ref{item:connect:Ui} the start $j$-tuple of $\tpl{\tilde v}$ and the end $j$-tuple of $\tpl{\tilde u}$ have multiplicity in~$U_i$ at most $n^{(r-1)/2-j(1-\eps)}+1$. By step~\ref{step:Ui} this implies that there are at most $n^{(r-1)/2-j(1-\eps)}$ choices for $\tpl{u}$ and for $\tpl{v}$, and hence $N_{P,j}\le 2 n^{(r-1)/2-j(1-\eps)}$. With this we are ready to estimate \begin{equation*} \Delta=\sum_{P\neq P',\,P\cap P'\neq\emptyset}\mathbb{E}(I_PI_{P'}) =\sum_P\sum_{1\le j<r/2}\Big(\sum_{|P'\cap P|=j} \mathbb{E}(I_PI_{P'})\Big), \end{equation*} where $P,P'\in\mathcal{P}$. We have \begin{equation*} \begin{split} \Delta&\le\sum_P\sum_{1\le j<r/2} N_{P,j} \cdot p^{2r-2-j} \\ &\le|L(\tpl{u}^{(i)})||L(\tpl{v}^{(i)})| \sum_{1\le j<r/2} 2 n^{(r-1)/2-j(1-\eps)} p^{2r-2-j}, \end{split} \end{equation*} which, by~\eqref{eq:connect:L}, is at most \begin{multline*} n^{(r-1)-\eps} \sum_{1\le j<r/2} 2 n^{(r-1)/2-j(1-\eps)} n^{(\eps-1)(2r-2-j)}\\ \le n^{(r-1)-\eps} \cdot r \cdot n^{-\frac32(r-1)+2\eps(r-1)}<1.
\end{multline*} Hence, inequalities~\eqref{eq:Janson} and~\eqref{eq:connect:E} imply that $\mathbb{P}(X=0)\le \exp(\Delta-\mathbb{E} X)\le \exp(-n^{(r-2)\eps}/4)$, and thus Algorithm~\ref{alg:connect} fails with at most this probability in this visit to step~\ref{step:failure}. \end{claimproof} Since Algorithm~\ref{alg:connect} visits step~\ref{step:failure} at most $k\le n$ times, we can use Claim~\ref{cl:connect:failure} and a union bound to infer that~\ref{item:connect:A3} holds with probability at least $1-n\cdot\exp\big(-n^{(r-2)\eps}/4\big)\ge 1-\frac12\exp\big(-\delta n^\eps/(100 r)\big)$. Similarly, step~\ref{step:CheckC} of Algorithm~\ref{alg:fan} is called at most once per leaf in any of the at most $2k$ constructed fans, which is at most $2k\cdot 2n^{(r-1)/2-\eps/2}\le n^r$ times by~\eqref{eq:connect:L}. It follows from Claim~\ref{cl:connect:checkC} that~\ref{item:connect:A2} holds with probability at least $1-n^r \cdot 2\exp\big(-\delta n^\eps/(96r)\big)\ge 1- \frac12\exp\big(-\delta n^\eps/(100r)\big)$. Summarising, we showed that Algorithm~\ref{alg:connect} constructs the~$k$ desired tight paths of length at most~$\ell$ with probability at least $1-\exp\big(-\delta n^\eps/(100r)\big)$. \end{proof} \section{Proof of the reservoir lemma} \label{sec:reserve} In this section we prove Lemma~\ref{lem:reserve}. \begin{proof}[Proof of Lemma~\ref{lem:reserve}] Choose $\ell:=\big\lceil 1/\big(2(r-1)\eps\big) \big\rceil+2$. Our strategy will be as follows. We will start by defining an auxiliary $r$-uniform hypergraph~$\Drl$ with $2(r-1)(2\ell-1)+1$ vertices and as many edges, which implies \begin{equation}\label{eq:do:D} d^{(1)}(\Drl)=1+\frac{1}{2(r-1)(2\ell-1)} \le 1 + \eps \,. \end{equation} After defining~$\Drl$ we shall construct a graph~$\mathcal{H}^*$ which satisfies~\ref{lem:reserve:2} and~\ref{lem:reserve:3} and is such that $\Drl\subset\mathcal{H}^*$ and $\Drl$~has maximum $1$-density among all subhypergraphs of~$\mathcal{H}^*$.
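As a sanity check of the equality in~\eqref{eq:do:D} (using the $1$-density $d^{(1)}(H)=e(H)/\big(v(H)-1\big)$, the convention consistent with the computation of $d^{(1)}(\mathcal{H}^*)$ below), write $m:=2(r-1)(2\ell-1)+1$ for the common number of vertices and edges of~$\Drl$:
\begin{equation*}
d^{(1)}(\Drl)=\frac{e(\Drl)}{v(\Drl)-1}=\frac{m}{m-1}=1+\frac{1}{m-1}=1+\frac{1}{2(r-1)(2\ell-1)}\,.
\end{equation*}
The inequality in~\eqref{eq:do:D} then follows from the choice of~$\ell$, since $2(r-1)(2\ell-1)\ge 4(r-1)(\ell-1)\ge 2/\eps\ge 1/\eps$.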
\smallskip The vertex set of $\Drl$ is \begin{equation*} V(\Drl):=U\cup V\cup\bigcup_{i\in[\ell-1]} A_i \cup \bigcup_{i\in[\ell-2]} B_i\,, \end{equation*} where $U:=(u_1,\dots,u_{r-1},w^*,u_r,\dots,u_{2(r-1)})$, $V:=(v'_1,\dots,v'_{2(r-1)})$, $A_i:=(a^{(i)}_1,\dots, a^{(i)}_{2(r-1)})$ for $i\in[\ell-1]$, and $B_i:=(b^{(i)}_1,\dots, b^{(i)}_{2(r-1)})$ for $i\in[\ell-2]$ are ordered sets of vertices. The edge set of $\Drl$ contains exactly the edges of the tight paths determined by~$U$, by~$V$, by~$A_i$ for each $i\in[\ell-1]$, by~$B_i$ for each $i\in[\ell-2]$, as well as by the following vertex sequences: \begin{align*} & \tilde U_A:= (u_{1},\dots,u_{r-1},a^{(1)}_{r-1},\dots,a^{(1)}_{1}) \,, \\ & \tilde V_A:=(a^{(\ell-1)}_{2(r-1)},\dots,a^{(\ell-1)}_{r},v'_{r},\dots,v'_{2(r-1)}) \,, \\ & \tilde U_B:=(u_{2(r-1)},\dots,u_{r},b^{(1)}_{r-1},\dots,b^{(1)}_{1}) \,, \\ & \tilde V_B:=(b^{(\ell-2)}_{2(r-1)},\dots,b^{(\ell-2)}_{r},v'_{r-1},\dots,v'_{1}) \,, \end{align*} and \begin{align*} & \tilde A_{i,i+1}:=(a^{(i)}_{2(r-1)},\dots,a^{(i)}_{r},a^{(i+1)}_{r-1},\dots,a^{(i+1)}_{1}) && \quad\text{for all $i\in[\ell-2]$} \,,\\ & \tilde B_{i,i+1}:=(b^{(i)}_{2(r-1)},\dots,b^{(i)}_{r},b^{(i+1)}_{r-1},\dots,b^{(i+1)}_{1}) && \quad\text{for all $i\in[\ell-3]$} \,. \end{align*} It is not difficult to check that~$\Drl$ has exactly $2(r-1)(2\ell-1)+1$ vertices and edges as claimed. \begin{figure}[t] \begin{center} \psfrag{U}{\scalebox{1.3}{$U$}} \psfrag{V}{\scalebox{1.3}{$V$}} \psfrag{A1}{\scalebox{1.3}{$A_1$}} \psfrag{A2}{\scalebox{1.3}{$A_2$}} \psfrag{B1}{\scalebox{1.3}{$B_1$}} \psfrag{w}{\scalebox{0.8}{$w^*$}} \includegraphics[scale=.85]{path.eps} \caption{$\mathcal{H}^*$ for $r=3$ and $\ell=3$. The vertices of~$\Drl$ are drawn bigger than the vertices newly inserted in~$\mathcal{H}^*$. The continuous line indicates the tight Hamilton path in~$\mathcal{H}^*$ from~\eqref{eq:reserve:path}, the dashed line the tight Hamilton path in $\mathcal{H}^*-w^*$.
}\label{fig:F} \end{center} \end{figure} In order to obtain~$\mathcal{H}^*$ from~$\Drl$ we first let $v_i:=v'_{(r-1)+i}$ for each $i\in[r-1]$. Then we insert \[k:=3(r-1)^2(2\ell-1)\] new vertices `between' each of the following pairs of vertex sets in $\Drl$: $U$ and~$A_1$, $A_{\ell-1}$ and~$V$, $A_i$ and~$B_i$ for each $i\in[\ell-2]$, $B_i$ and~$A_{i+1}$ for each $i\in[\ell-2]$. We let $I(X,Y)$ denote the ordered set of vertices inserted `between' the sets~$X$ and~$Y$ in this process (where we choose any ordering). In addition, we add to this graph the tight Hamilton path \begin{multline}\label{eq:reserve:path} U, I(U,A_1), A_1, I(A_1,B_1), B_1, I(B_1,A_2), A_2,\dots \\ \dots, B_{\ell-2}, I(B_{\ell-2},A_{\ell-1}),A_{\ell-1},I(A_{\ell-1},V),V \end{multline} running from $\tpl{u}$ to $\tpl{v}$ (which uses some edges already present in~$\Drl$). The resulting hypergraph is~$\mathcal{H}^*$ (see also Figure~\ref{fig:F}). By construction $v(\mathcal{H}^*)=2(r-1)(2\ell-1)+1+2k(\ell-1)$ which by definition of~$k$ is smaller than $16(r-1)^2\ell^2\le 16\eps^{-2}$, and $e(\mathcal{H}^*)=2(r-1)(2\ell-1)+1+2(k+r-1)(\ell-1)$. Since $\ell>1$ this implies \begin{equation}\label{eq:do:F} \begin{split} d^{(1)}(\mathcal{H}^*) &= 1+\frac{2(r-1)(\ell-1)+1}{2(r-1)(2\ell-1)+2k(\ell-1)} \\ & < 1+\frac{2(r-1)(\ell-1)+1}{2k(\ell-1)} \le 1+\frac{1}{2(r-1)(2\ell-1)} \\ & \eqByRef{eq:do:D} d^{(1)}(\Drl) \,. \end{split} \end{equation} By~\eqref{eq:reserve:path} the hypergraph~$\mathcal{H}^*$ satisfies~\ref{lem:reserve:2}. We define $\tilde I(Y,X)$ to be the reversal of $I(X,Y)$. It can be checked that~$\mathcal{H}^*$ also contains the tight path \begin{multline*} \tilde U_A, \tilde I(A_1, U), \tilde U_B,\tilde I(B_1,A_1), \tilde A_{1,2},\tilde I(A_2,B_1), \tilde B_{1,2},\tilde I(B_2,A_2), \tilde A_{2,3}, \dots \\ \dots, \tilde A_{\ell-2,\ell-1}, \tilde I(A_{\ell-1},B_{\ell-2}), \tilde V_B, \tilde I(V,A_{\ell-1}), \tilde V_A \,. 
\end{multline*} This is a tight path from $\tpl{u}$ to $\tpl{v}$ running through all vertices of~$\mathcal{H}^*$ but~$w^*$, and so~$\mathcal{H}^*$ also satisfies~\ref{lem:reserve:3}. It remains to show that~$\Drl$ has maximal $1$-density among all subgraphs of~$\mathcal{H}^*$. Suppose that $\mathcal{H}$ is a subgraph of $\mathcal{H}^*$ with maximal $1$-density. It follows that $\mathcal{H}$ is an induced subgraph of $\mathcal{H}^*$, and that we have $d^{(1)}(\mathcal{H})\ge d^{(1)}(\Drl)>1$. It follows that $\mathcal{H}$ cannot contain any vertex of degree one (otherwise we could delete it and increase the $1$-density). In particular, this means that if $I(X,Y)$ is any of the sets of $k$ vertices which form a tight path in $\mathcal{H}^*$ and which are not present in $\Drl$, then either every vertex of $I(X,Y)$ is in $\mathcal{H}$, or none are. Similarly, by the definition of $k$ we have $k\cdot d^{(1)}(\Drl)>k+(r-1)$ and so $\mathcal{H}$ cannot contain any $k$ vertices meeting only $k+r-1$ edges. Accordingly $\mathcal{H}$ cannot contain $I(X,Y)$. It follows that $\mathcal{H}$ must be a subgraph of $\Drl$. It is straightforward to check that if any of the vertices \begin{equation*} \begin{split} S:= \{u_2,\ldots,u_{2(r-1)-1},&w^*,v_2,\ldots,v_{2(r-1)-1},\\ &a_2^{(i)},\ldots, a_{2(r-1)-1}^{(i)},b_2^{(i)},\ldots,b_{2(r-1)-1}^{(i)}\} \end{split} \end{equation*} of $\Drl$ is removed from $\Drl$, then we obtain a graph which can be decomposed by successively removing vertices of degree at most one (i.e. it is $1$-degenerate) and which therefore has $1$-density at most $1$. It follows that $S\subset V(\mathcal{H})$. Now let $x$ be any vertex of $\Drl$ which is not in $\mathcal{H}$. Since $x\not\in S$ we have $\deg_{\Drl}(x)=2$, and both edges containing $x$ have all their remaining vertices in $S$. 
Thus we have \[d^{(1)}\big(\Drl\big[V(\mathcal{H})\cup\{x\}\big]\big)\ge\min\big(d^{(1)}(\mathcal{H}),2\big)\] and since $d^{(1)}(\Drl)<2$, we conclude that $d^{(1)}(\mathcal{H})\le d^{(1)}(\Drl)$ as required. \end{proof} \section{Concluding remarks} \label{sec:conclude} \paragraph{\bf Graphs} We remark that our approach does not work (as such) in the case $r=2$, even for the sub-optimal edge probability $n^{\eps-1}$. For this case, in the proof of the connection lemma, Lemma~\ref{lem:connect}, when growing a fan we would have to reveal more than $n^{1-\eps}$ edges at a vertex~$a$ in each iteration of the foreach-loop in Algorithm~\ref{alg:fan}. In the construction of one fan we would have to repeat this operation at least $n^{(1/2)-2\eps}$ times: only then could we hope for the fan to have $n^{(1/2)-\eps}$ leaves, which we need in order to get a connection between two such fans at least in expectation. But then we would have revealed at least $n^{1-\eps}\cdot n^{(1/2)-2\eps}=n^{(3/2)-3\eps}$ edges to obtain a single connection. Hence, we cannot obtain a linear number of connections in this way, as required by our strategy. \smallskip \paragraph{\bf Vertex disjoint cycles} It is easy to modify our approach to show the following theorem. \begin{theorem}\label{thm:factor} For every integer $r\ge 3$ and for every $\eps,\delta>0$ the following holds. Suppose that $n_1,\dots,n_\ell$ are integers, each at least $2r/\eps$, whose sum is at most $n$, and $n_1\ge\delta n$. Then for any $n^{-1+\eps}< p=p(n)\le 1$, the random $r$-uniform hypergraph $\mathcal{G}^{(r)}(n,p)$ contains a collection of vertex disjoint tight cycles of lengths $n_1,\ldots,n_\ell$ with probability tending to one as $n$ tends to infinity. \end{theorem} A proof sketch is as follows. We refer to the steps used in the proof of Theorem~\ref{thm:main}. First, we would run step~$1$ as before, except that we would find reservoir graphs covering only at most $\eps\delta n/(8r)$ vertices.
Step $2$ remains unchanged. In an extra step (requiring an extra round of exposure) we would then greedily create a collection of vertex disjoint tight paths of lengths slightly shorter than $n_2,\ldots,n_\ell$, and in another extra step use Lemma~\ref{lem:connect} to connect these paths into tight cycles of lengths $n_2,\ldots,n_\ell$. Here we require that the connecting paths always have a precisely specified length. As written, Lemma~\ref{lem:connect} does not guarantee this (the output paths have lengths differing by at most two, since the paths in each fan can differ in length by one) but it is easy to modify the lemma to obtain this (we would simply extend each of the shorter fan paths by one vertex while avoiding dangerous sets). The remainder of the proof can remain almost unchanged. We extend the reservoir path greedily to cover most of the remaining vertices. Then we apply Lemma~\ref{lem:connect} twice to cover all the leftover vertices and complete a cycle, which then has length $n_1$ as desired. (The only difference is that some of our constants will need to be adapted slightly.) Again, for fixed $r$, $\eps$ and $\delta$ we obtain a randomised polynomial time algorithm from this proof. Note that the condition that the cycles should not be too short cannot be completely removed: in order to have linearly many cycles of length $g$ with high probability, we require that linearly many such cycles exist in expectation. This expectation is of the order $n^gp^g$, which is in $o(n)$ if $p=o\big(n^{-(g-1)/g}\big)$. \smallskip \paragraph{\bf Derandomisation} Our approach to Theorem~\ref{thm:main} yields a randomised algorithm. However, we only actually use the power of randomness in order to preprocess our input hypergraph and `simulate' multi-round exposure. This motivates the following question. \begin{question} For a constructive proof which uses multi-round exposure, how can one obtain a \emph{deterministic} algorithm?
\end{question} Replacing the randomised preprocessing step with a deterministic splitting of the edges of the complete $r$-uniform hypergraph into disjoint dense quasirandom subgraphs might be a promising strategy here. Multi-round exposure is a very common technique in probabilistic combinatorics. Hence this question might be of interest for other problems as well. \smallskip \paragraph{\bf Resilience.} A very active recent development in the theory of random graphs is the concept of resilience: under which conditions can one transfer a classical extremal theorem to the random graph setting? Lee and Sudakov~\cite{LS}, improving on previous work of Sudakov and Vu~\cite{SV}, showed that Dirac's theorem can be transferred to random graphs almost as sparse as at the threshold for hamiltonicity. More precisely, they proved that for each $\eps>0$, if $p\ge C\log n/n$ for some constant $C=C(\eps)$, then almost surely the random graph $G=G(n,p)$ has the following property: every spanning subgraph of $G$ with minimum degree at least $(\tfrac{1}{2}+\eps)pn$ contains a Hamilton cycle. It would be interesting to prove a corresponding result for tight Hamilton cycles in subgraphs of random hypergraphs. It is unlikely that the Second Moment Method will be of help here. Our methods, however, might be robust enough to provide some assistance. \section{Acknowledgements} We would like to thank Klas Markstr\"om for suggesting Theorem~\ref{thm:factor}. \bibliographystyle{amsplain_yk}
\section{Introduction} \label{sec:intro} In strategic recommendation (SR) systems, the goal is to learn a strategy that sequentially selects recommendations with the highest long-term acceptance by each visiting user of a retail website, a business, or a user interactive system in general. These systems are in their infancy in the industry and in need of practical solutions to some fundamental research challenges. At Adobe Research, we have been implementing such SR systems for various use-cases, including points of interest recommendations, tutorial recommendations, next step guidance in multi-media editing software, and ad recommendation for optimizing lifetime value. Most recommendation systems today use supervised learning or contextual bandit algorithms. These algorithms assume that the visits are i.i.d.~and do not discriminate between a visit and a user, i.e.,~each visit is considered as a new user that has been sampled i.i.d.~from the population of the business's users. As a result, these algorithms are myopic and do not try to optimize the long-term effect of the recommendations on the users. Click-through rate (CTR) is a suitable metric to evaluate the performance of such greedy algorithms. Despite their success, these methods are becoming insufficient as users tend to establish longer and longer-term relationships with their websites (by going back to them). This increase in {\em returning users} further violates the main assumption underlying supervised learning and bandit algorithms, i.e.,~that there is no difference between a visit and a user. This is the main motivation for the SR systems that we examine in this paper. Reinforcement learning (RL) algorithms that aim to optimize the long-term performance of the system (often formulated as the expected sum of rewards/costs) seem to be suitable candidates for SR systems.
The nature of these algorithms allows them to take into account all the available knowledge about the user in order to select an offer or recommendation that maximizes the total number of times she will click or accept the recommendation over multiple visits, also known as the user's lifetime value (LTV). Unlike myopic approaches, RL algorithms differentiate between a visit and a user, and consider all the visits of a user (in chronological order) as a system trajectory. Thus, they model the user, and not their visits, as i.i.d.~samples from the population of the users of the website. This means that although we may evaluate the performance of the RL algorithms using CTR, this is not the metric that they optimize, and thus, it would be more appropriate to evaluate them based on the expected total number of clicks per user (over the user's trajectory), a metric we call LTV. This long-term approach to SR systems allows us to make decisions that are better than the short-sighted decisions made by the greedy algorithms. Such decisions might propose an offer that is considered as a loss to the business in the short term, but increases the user loyalty and engagement in the long term. Using RL for LTV optimization is still in its infancy. Related work has experimented with toy examples and has appeared mostly in marketing venues \cite{Pfeifer2000,Jonker2004,giuliano@marketing07}. An approach directly related to ours first appeared in \cite{Pednault:2002:SCD:775047.775086}, where the authors used public data of an email charity campaign, batch RL algorithms, and heuristic simulators for evaluation, and showed that RL policies produce better results than myopic ones. Another one is \cite{Silver2013}, which proposed an on-line RL system that learns concurrently from multiple customers. The system was trained and tested on a simulator. A recent approach uses RL to optimize LTV for slate recommendations \cite{DBLP:journals/corr/abs-1905-12767}.
It addresses the problem of how to decompose the LTV of a slate into a tractable function of its component item-wise LTVs. Unlike most previous work, we address many more challenges that arise when dealing with real data. These challenges, which hinder the widespread application of RL technology to SR systems, include: \begin{itemize} \item{\bf High confidence off-policy evaluation} refers to the problem of evaluating the performance of an SR system with high confidence before costly A/B testing or deployment. \item{\bf Safe deployment} refers to the problem of deploying a policy without creating disruption relative to the previously running policy. For example, we should never deploy a policy that will have a worse LTV than the previous one. \item{\bf Non-stationarity} refers to the fact that the real world is non-stationary. In RL and Markov decision processes it is usually assumed that the transition dynamics and reward are stationary over time. This is often violated in the marketing world, where trends and seasonality are always at play. \item{\bf Learning from passive data} refers to the fact that there is usually an abundance of sequential data or events that have been collected without a recommendation system in place. For example, websites record the sequence of products and pages a user views. Usually in RL, data is in the form of sequences of states, actions and rewards. The question is how we can leverage passive data that do not contain actions to create a recommendation system that recommends the next page or product. \item{\bf Recommendation acceptance factors} refers to the problem of understanding recommendation acceptance more deeply than simply predicting clicks. For example, a person might have a low propensity to listen due to an inattentive disposition.
A classic problem is `recommendation fatigue', where people may quickly stop paying attention to recommendations such as ads and promotional emails if they are presented too often. \item{\bf Resource constraints in multi-user systems} refers to the problems of constraints created in multi-user recommendation systems. For example, if multiple users in a theme park are offered the same strategy for touring the park, it could overcrowd various attractions. Or, if a store offers the same deal to all users, it might deplete a specific resource. \item{\bf Large action spaces} refers to the problem of having too many recommendations. Netflix, for example, employs thousands of movie recommendations. This is particularly challenging for SR systems that make a sequence of decisions, since the search space grows exponentially with the planning horizon (the number of decisions made in a sequence). \item{\bf Dynamic actions} refers to the problem where the set of recommendations may change over time. This is a classic problem in marketing, where the offers made at a retail shop could very well be slowly changing over time. Another example is movie recommendation, in businesses such as Netflix, where the catalogue of movies evolves over time. \end{itemize} In this paper we address all of the above research challenges. We summarize in chronological order our work in making SR systems practical for the real world. In Section~\ref{sec:hope} we present a method for evaluating SR systems off-line with high confidence. In Section~\ref{sec:par} we present a practical reinforcement learning (RL) algorithm for implementing an SR system with an application to ad offers. In Section~\ref{sec:safety} we present an algorithm for safely deploying an SR system. In Section~\ref{sec:nonstationary} we tackle the problem of non-stationarity.
The technologies in Sections \ref{sec:hope}, \ref{sec:par}, \ref{sec:safety} and \ref{sec:nonstationary} were built chronologically, with high confidence off-policy evaluation leveraged across all of them. In Section~\ref{sec:passive} we address the problem of bootstrapping an SR system from passive sequential data that do not contain past recommendations. In Section~\ref{sec:acceptance} we examine recommendation acceptance factors, such as the `propensity to listen' and `recommendation fatigue'. In Section~\ref{sec:constraints} we describe a solution that can optimize for resource constraints in multi-user SR systems. Sections \ref{sec:passive}, \ref{sec:acceptance} and \ref{sec:constraints} were built chronologically, with the bootstrapping from passive data used across all of them. In Section~\ref{sec:large-actions} we describe a solution to the large action space problem in SR systems. In Section~\ref{sec:dynaimic-actions} we describe a solution to the dynamic action problem, where the available actions can vary over time. Sections \ref{sec:large-actions} and \ref{sec:dynaimic-actions} were built chronologically and share the same action embedding technology. Finally, in Section~\ref{sec:bias-aware} we argue that the next generation of recommendation systems needs to incorporate human cognitive biases. \section{Preliminaries} In this section, we present the general set of notations, which will be useful throughout the paper. In cases where problem-specific notations are required, we introduce them in the respective section. We model SR systems as \textit{Markov decision processes} (MDPs) \citep{Sutton1998}. An MDP is represented by a tuple, $\mathcal{M} = (\mathcal{S},\mathcal{A},\mathcal{P},\mathcal{R}, \gamma, d_0)$. $\mathcal{S}$ is the set of all possible states, called the state set, and $\mathcal{A}$ is a finite set of actions, called the action set.
The random variables $S_t \in \mathcal S$, $A_t \in \mathcal A$, and $R_t$ denote the state, action, and reward at time $t$. The first state comes from an initial distribution, $d_0$. The reward discounting parameter is given by $\gamma \in [0,1)$. $\mathcal{P}$ is the state transition function. We denote by $s_t$ the feature vector describing a user's $t^\text{th}$ visit with the system and by $a_t$ the $t^\text{th}$ recommendation shown to the user, and refer to them as a \textit{state} and an \textit{action}. The rewards are assumed to be non-negative. The reward $r_t$ is $1$ if the user accepts the recommendation $a_t$ and $0$, otherwise. We assume that the users interact at most $T$ times. We write $\tau\coloneqq \{s_1, a_1, r_1, s_2, a_2, r_2, \hdots, s_{T}, a_{T}, r_{T}\}$ to denote the history of visits with one user, and we call $\tau$ a \textit{trajectory}. The \textit{return} of a trajectory is the discounted sum of rewards, $R(\tau)\coloneqq\sum_{t=1}^T\gamma^{t-1}r_t$. A \textit{policy} $\pi$ is used to determine the probability of showing each recommendation. Let $\pi(a \vert s)$ denote the probability of taking action $a$ in state $s$, regardless of the time step $t$. The goal is to find a policy that maximizes the expected total number of recommendation acceptances per user: $\rho(\pi) \coloneqq \mathbb E[R(\tau) \vert \pi].$ Our historical data is a set of trajectories, one per user. Formally, $\mathcal D$ is the historical data containing $n$ trajectories $\{\tau_i\}_{i=1}^n$, each labeled with the \textit{behavior policy} $\pi_i$ that produced it. \section{High Confidence Off-policy Evaluation} \label{sec:hope} One of the first challenges in building SR systems is evaluating their performance before costly A/B testing and deployment. Unlike classical machine learning systems, an SR system is more complicated to evaluate because recommendations can affect how a user responds to all future recommendations.
In this section we summarize a \textit{high confidence off-policy evaluation} (HCOPE) method \cite{DBLP:conf/aaai/ThomasTG15}, which can inform the business manager of the performance of the SR system with some guarantee, before the system is deployed. We denote the policy to be evaluated as the \textit{evaluation policy} $\pi_e$. HCOPE is a family of methods that use the historical data $\mathcal D$ in order to compute a $1-\delta$-confidence lower bound on the expected performance of the evaluation policy $\pi_e$~\cite{DBLP:conf/aaai/ThomasTG15}. In this section, we explain three different approaches to HCOPE. All these approaches are based on importance sampling. The {\em importance sampling estimator} \begin{equation} \label{eq:ISE} \hat \rho(\pi_e \vert \tau_i, \pi_i) \coloneqq \underbrace{R(\tau_i)}_{\text{return}} \underbrace{\prod_{t=1}^T \frac{\pi_e(a_t^{\tau_i} \vert s_t^{\tau_i})}{\pi_i(a_t^{\tau_i} \vert s_t^{\tau_i})}}_{\text{importance weight}}, \end{equation} is an unbiased estimator of $\rho(\pi_e)$ if $\tau_i$ is generated using policy $\pi_i$ \cite{Precup2000} and the support of $\pi_e$ is a subset of the support of $\pi_i$, where $a_t^{\tau_i}$ and $s_t^{\tau_i}$ denote the action and state at time $t$ in trajectory $\tau_i$, respectively. Although the importance sampling estimator is conceptually easier to understand, in most of our applications we use the {\em per-step importance sampling estimator} \begin{equation} \label{eq:PEISE} \hat \rho(\pi_e \vert \tau_i, \pi_i) \coloneqq \sum_{t=1}^T\gamma^{t-1}r_t\left(\prod_{j=1}^t\frac{\pi_e(a_j^{\tau_i} \vert s_j^{\tau_i})}{\pi_i(a_j^{\tau_i} \vert s_j^{\tau_i})}\right), \end{equation} where the term in the parenthesis is the importance weight for the reward generated at time $t$. This estimator has a lower variance than~\eqref{eq:ISE}, and remains unbiased.
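Both estimators can be written in a few lines. This sketch encodes a trajectory as a list of (state, action, reward) tuples and takes the evaluation and behavior policies as probability functions; all names are illustrative, not from the paper:

```python
def importance_sampling(tau, pi_e, pi_b, gamma):
    """Ordinary IS estimate: the full-trajectory importance weight
    times the discounted return R(tau)."""
    weight = 1.0
    for s, a, _ in tau:
        weight *= pi_e(a, s) / pi_b(a, s)
    ret = sum(gamma ** t * r for t, (_, _, r) in enumerate(tau))
    return weight * ret


def per_step_importance_sampling(tau, pi_e, pi_b, gamma):
    """Per-step IS estimate: each reward is weighted only by the
    ratios up to its own time step; lower variance, still unbiased."""
    total, weight = 0.0, 1.0
    for t, (s, a, r) in enumerate(tau):
        weight *= pi_e(a, s) / pi_b(a, s)
        total += gamma ** t * r * weight
    return total
```

When the evaluation policy equals the behavior policy, all ratios are 1 and both estimates reduce to the plain discounted return.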
For brevity, we describe the approaches to HCOPE in terms of a set of non-negative independent random variables, $\mathbf X=\{X_i\}_{i=1}^n$ (note that the importance weighted returns are non-negative because the rewards are never negative, since in our applications the reward is $1$ when the user accepts a recommendation and $0$ otherwise). For our applications, we will use $X_i = \hat \rho(\pi_e \vert \tau_i, \pi_i)$, where $\hat \rho(\pi_e \vert \tau_i, \pi_i)$ is computed either by~\eqref{eq:ISE} or~\eqref{eq:PEISE}. The three approaches that we will use are: \noindent{\bf 1. Concentration Inequality:} Here we use the concentration inequality (CI) in~\cite{DBLP:conf/aaai/ThomasTG15} and call it the {\em CI approach}. We write $\rho_-^\text{CI}(\mathbf X, \delta)$ to denote the $1-\delta$ confidence lower-bound produced by their method. The benefit of this method is that it provides a true high-confidence lower-bound, i.e.,~it makes no false assumption or approximation, and so we refer to it as \textit{safe}. However, as it makes no assumptions, bounds obtained using CI tend to be overly conservative, as shown in Figure \ref{fig:boundComparison}. \noindent{\bf 2. Student's $t$-test:} One way to tighten the lower-bound produced by the CI approach is to introduce a false but reasonable assumption. Specifically, we leverage the central limit theorem, which says that $\hat X \coloneqq \frac{1}{n} \sum_{i=1}^n X_i$ is approximately normally distributed if $n$ is large. Under the assumption that $\hat X$ is normally distributed, we may apply the one-tailed Student's $t$-test to produce $\rho_-^\text{TT}(\mathbf X, \delta)$, a $1-\delta$ confidence lower-bound on $\mathbb{E}[\hat X]$, which in our application is a $1-\delta$ confidence lower-bound on $\rho(\pi_e)$.
Unlike the other two approaches, this approach, which we call {\em TT}, requires little space to be formally defined, and so we present its formal specification: \begin{gather} \hat X \coloneqq \frac{1}{n} \sum_{i=1}^n X_i,\quad\quad \hat \sigma \coloneqq \sqrt{\frac{1}{n-1}\sum_{i=1}^n \left (X_i - \hat X \right )^2},\\ \rho_-^\text{TT}(\mathbf X, \delta) \coloneqq \hat X - \frac{\hat \sigma}{\sqrt{n}}t_{1-\delta, n-1}, \end{gather} where $t_{1-\delta, \nu}$ denotes the inverse of the cumulative distribution function of the Student's $t$ distribution with $\nu$ degrees of freedom, evaluated at probability $1-\delta$ (i.e.,~function $\text{tinv}(1-\delta,\nu)$ in {\sc Matlab}). Because $\rho_-^\text{TT}$ is based on a false (albeit reasonable) assumption, we refer to it as \textit{semi-safe}. Although the {\em TT approach} produces tighter lower-bounds than CI, it still tends to be overly conservative for our application, as shown in Figure \ref{fig:boundComparison}. More discussion can be found in \cite{DBLP:conf/aaai/ThomasTG15}. \noindent{\bf 3. Bias Corrected and Accelerated Bootstrap:} One way to correct for the overly-conservative nature of TT is to use bootstrapping to estimate the true distribution of $\hat X$, and to then assume that this estimate is the true distribution of $\hat X$. The most popular such approach is \textit{Bias Corrected and accelerated} (BCa) bootstrap \cite{Efron1987}. We write $\rho_-^\text{BCa}(\mathbf X, \delta)$ to denote the lower-bound produced by BCa, whose pseudocode can be found in~\cite{DBLP:conf/icml/ThomasTG15}. While the bounds produced by BCa are reliable, like TT it may have error rates larger than $\delta$, and it is thus \textit{semi-safe}. An illustrative example is provided in Figure \ref{fig:boundComparison}.
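The TT bound above, and a BCa bound along the lines of approach 3, can be sketched with SciPy. This is an illustrative sketch, not the paper's implementation: the BCa variant uses SciPy's generic BCa bootstrap and takes the low endpoint of a two-sided $(1-2\delta)$ interval as a one-sided $1-\delta$ lower bound.

```python
import numpy as np
from scipy import stats


def ttest_lower_bound(X, delta):
    """rho_-^TT(X, delta): one-sided Student's t lower bound,
    mean - (sigma_hat / sqrt(n)) * t_{1-delta, n-1}."""
    X = np.asarray(X, dtype=float)
    n = X.size
    return X.mean() - X.std(ddof=1) / np.sqrt(n) * stats.t.ppf(1 - delta, df=n - 1)


def bca_lower_bound(X, delta, seed=None):
    """Semi-safe BCa sketch: low end of a two-sided (1 - 2*delta)
    BCa interval, used as a one-sided 1-delta lower bound."""
    res = stats.bootstrap((np.asarray(X, dtype=float),), np.mean,
                          confidence_level=1 - 2 * delta,
                          method='BCa', random_state=seed)
    return res.confidence_interval.low
```

`stats.t.ppf` plays the role of the {\sc Matlab} `tinv` function mentioned above.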
\begin{figure} \centering \includegraphics[width=0.5\columnwidth]{figs/boundComparison.pdf} \caption{Empirical error rates when estimating a $95\%$ confidence lower-bound on the mean of a gamma distribution (shape parameter $k=2$ and scale parameter $\theta=50$) using $\rho_-^\dagger$, where the legend specifies the value of $\dagger$. This gamma distribution has a heavy upper-tail similar to that of importance weighted returns. The logarithmically scaled horizontal axis is the number of samples used to compute the lower bound (from $20$ to $2000$) and the vertical axis is the mean empirical error rate over $100,\!000$ trials. Note that CI is overly conservative, with zero error in all the trials (it is on the $x$-axis). The $t$-test is initially conservative, but approaches the allowed $5\%$ error rate as the number of samples increases. BCa remains around the correct $5\%$ error rate regardless of the number of samples. } \label{fig:boundComparison} \end{figure} For SR systems, where ensuring the quality of a system before deployment is critical, these three methods provide viable options for obtaining performance guarantees using only historical data. \section{Personalized Ad Recommendation Systems for Life-Time Value Optimization with Guarantees} \label{sec:par} The next question is how to compute a good SR policy. In this section we demonstrate how to compute an SR policy for personalized ad recommendation (PAR) systems using reinforcement learning (RL). RL algorithms take into account the long-term effect of actions, and thus, are more suitable than myopic techniques, such as contextual bandits, for modern PAR systems in which the number of returning visitors is rapidly growing.
However, while myopic techniques have been well-studied in PAR systems, the RL approach is still in its infancy, mainly due to two fundamental challenges: how to compute a good RL strategy and how to evaluate a solution using historical data to ensure its `safety' before deployment. In this section, we use the family of off-policy evaluation techniques with statistical guarantees presented in Section~\ref{sec:hope} to tackle both of these challenges. We apply these methods to a real PAR problem, both for evaluating the final performance and for optimizing the parameters of the RL algorithm. Our results show that an RL algorithm equipped with these off-policy evaluation techniques outperforms the myopic approaches. Our results give fundamental insights on the difference between the click through rate (CTR) and life-time value (LTV) metrics for evaluating the performance of a PAR algorithm \cite{TheocharousTG15}. \subsection{CTR versus LTV} Any personalized ad recommendation (PAR) policy could be evaluated for its greedy/myopic or long-term performance. For greedy performance, click through rate (CTR) is a reasonable metric, while life-time value (LTV) seems to be the right choice for long-term performance. These two metrics are formally defined as \begin{equation} \text{CTR }=\frac{\text{Total $\#$ of Clicks}}{\text{Total $\#$ of {\bf Visits}}} \times 100, \label{eq:ctr} \end{equation} \begin{equation} \text{LTV}= \frac{\text{Total $\#$ of Clicks}}{\text{Total $\#$ of {\bf Visitors}}} \times 100. \label{eq:ltv} \end{equation} CTR is a well-established metric in digital advertising and can be estimated from historical data (off-policy) in unbiased (inverse propensity scoring;~\cite{lihong@www10}) and biased (see e.g.,~\cite{StrehlLLK10}) ways. The reason that we use LTV is that CTR is not a good metric for evaluating long-term performance and could lead to misleading conclusions. 
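The two metrics can be computed directly from per-visitor click logs. In this sketch each inner list holds one visitor's click indicators, one entry per visit (the data are made up for illustration):

```python
def ctr(trajectories):
    """CTR = total clicks / total visits * 100."""
    clicks = sum(sum(t) for t in trajectories)
    visits = sum(len(t) for t in trajectories)
    return 100.0 * clicks / visits


def ltv(trajectories):
    """LTV = total clicks / total visitors * 100; each trajectory
    is one visitor's sequence of click indicators."""
    clicks = sum(sum(t) for t in trajectories)
    return 100.0 * clicks / len(trajectories)


# Greedy policy: each of 4 users visits once, half of them click.
greedy = [[1], [0], [1], [0]]
# Long-term policy: users return, and more clicks accrue per visitor.
longterm = [[0, 0, 1], [0, 1], [0, 0, 1, 1], [0]]
print(ctr(greedy), ltv(greedy))      # 50.0 50.0
print(ctr(longterm), ltv(longterm))  # 40.0 100.0
```

The toy numbers mirror the point of this section: the long-term policy has a lower CTR but a higher LTV, because the denominator of LTV is visitors rather than visits.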
Imagine a greedy advertising strategy at a website that directly displays an ad related to the final product that a user could buy. For example, it could be the BMW website and an ad that offers a discount to the user if she buys a car. Users who are presented with such an offer would either take it right away or move away from the website. Now imagine another marketing strategy that aims to transition the user down a sales funnel before presenting her with the discount. For example, at the BMW website one could first be presented with an attractive finance offer and a great service department deal before the final discount is presented. Such a long-term strategy would involve more visits from the user and would eventually produce more clicks per user and more purchases. The crucial insight here is that the policy can change the number of times that a user will be shown an advertisement---the length of a trajectory depends on the actions that are chosen. A visualization of this concept is presented in Figure~\ref{fig:ltv-ctr}. \begin{figure}[h] \centering \includegraphics[height=2.1in]{figs/ctr-ltv.pdf} \caption{The circles indicate user visits. The black circles indicate clicks. Policy~1~is greedy and users do not return. Policy~2~optimizes for the long run, users come back multiple times, and click towards the end. Even though Policy~2~has a lower CTR than Policy~1, it results in more revenue, as captured by the higher LTV. Hence, LTV is potentially a better metric than CTR for evaluating ad recommendation policies.} \label{fig:ltv-ctr} \end{figure} \subsection{Ad Recommendation Algorithms} For greedy optimization, we used a random forest (RF) algorithm~\cite{Statistics01randomforests} to learn a mapping from features to actions. RF is a state-of-the-art ensemble learning method for regression and classification, which is relatively robust to overfitting and is often used in industry for big data problems.
The system is trained using an RF for each of the offers/actions to predict the immediate reward. During execution, we use an $\epsilon$-greedy strategy, where we choose the offer whose RF has the highest predicted value with probability $1-\epsilon$, and each of the remaining offers with probability $\epsilon/(|A|-1)$. For LTV optimization, we used the Fitted Q Iteration (FQI)~\cite{Ernst05tree-basedbatch} algorithm with an RF function approximator, which allows us to handle high-dimensional continuous and discrete variables. When an arbitrary function approximator is used in the FQI algorithm, it does not converge monotonically, but rather oscillates during training iterations. To alleviate the oscillation problem of FQI and for better feature selection, we used our high confidence off-policy evaluation (HCOPE) framework within the training loop. The loop keeps track of the best FQI result according to a validation data set (see Algorithm~\ref{alg:ltv}). For both algorithms we start with three data sets: $X_\text{train}$, $X_\text{val}$, and $X_\text{test}$. Each one is made of complete user trajectories, and a user appears in only one of them. The $X_\text{val}$ and $X_\text{test}$ sets contain users that have been served by the random policy. The greedy approach proceeds by first doing feature selection on $X_\text{train}$, training a random forest, turning the policy into $\epsilon$-greedy on $X_\text{test}$, and then evaluating that policy using the off-policy evaluation techniques. The LTV approach starts from the random forest model of the greedy approach. It then computes labels as shown in step 6 of the LTV optimization algorithm (Algorithm~\ref{alg:ltv}). It does feature selection, trains a random forest model, and then turns the policy into $\epsilon$-greedy on the $X_\text{val}$ data set. The policy is tested using the importance weighted returns according to Equation~\ref{eq:PEISE}.
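The FQI update in step 6 can be sketched with scikit-learn's `RandomForestRegressor` standing in for the RF models; the transition encoding, hyperparameters, and function names are illustrative, not the paper's implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor


def fitted_q_iteration(transitions, n_actions, gamma=0.9, iters=5):
    """FQI sketch: one RF per action, refit each iteration on the
    bootstrapped targets y = r + gamma * max_a' Q(s', a').
    transitions is a list of (s, a, r, s_next, done) tuples."""
    S = np.array([t[0] for t in transitions], dtype=float)
    A = np.array([t[1] for t in transitions])
    R = np.array([t[2] for t in transitions], dtype=float)
    S2 = np.array([t[3] for t in transitions], dtype=float)
    done = np.array([t[4] for t in transitions])
    models = None
    for _ in range(iters):
        if models is None:
            y = R.copy()  # first pass: immediate-reward (greedy) targets
        else:
            q_next = np.column_stack([m.predict(S2) for m in models])
            y = R + gamma * (1 - done) * q_next.max(axis=1)
        models = [RandomForestRegressor(n_estimators=20, random_state=0)
                  .fit(S[A == a], y[A == a]) for a in range(n_actions)]
    return models
```

In the full algorithm, each iteration's $\epsilon$-greedy policy would additionally be scored on the validation set with the importance weighted returns, keeping the best iterate.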
LTV optimization loops over a fixed number of iterations and keeps track of the best performing policy, which is finally evaluated on $X_\text{test}$. The final outputs are `risk plots', which are graphs that show the lower-bound of the expected sum of discounted rewards of the policy for different confidence values. \begin{algorithm} \begin{algorithmic}[1] \STATE $\pi_b = \text{randomPolicy}$ \STATE $Q =$ {\sc rf.Greedy}$(\mathbf X_\text{train}, \mathbf X_\text{test}, \delta) $ \COMMENT{start with greedy value function} \FOR{$i=1$ {\bf to} $K$ } \STATE $r=\mathbf X_\text{train}(\text{reward}) $ \COMMENT{use recurrent visits} \STATE $x=\mathbf X_\text{train}(\text{features})$ \STATE $y= r_{t} + \gamma \max_{a \in A }Q_a(x_{t+1})$ \STATE $\bar{x}= \text{informationGain}(x,y)$ \COMMENT{feature selection} \STATE $Q_a = \text{randomForest}(\bar{x}, y)$ \COMMENT{for each action} \STATE $\pi_e = \text{epsilonGreedy}( Q, \mathbf X_\text{val} )$ \STATE $W = \hat \rho(\pi_e |\mathbf X_\text{val}, \pi_b)$ \COMMENT{importance weighted returns} \STATE $\text{currBound} = \rho_-^\dagger(W, \delta)$ \IF{$\text{currBound} > \text{prevBound}$} \STATE $\text{prevBound}= \text{currBound}$ \STATE $Q_\text{best} = Q$ \ENDIF \ENDFOR \STATE $\pi_e = \text{epsilonGreedy}( Q_\text{best}, \mathbf X_\text{test} )$ \STATE $W = \hat \rho(\pi_e |\mathbf X_\text{test}, \pi_b)$ \STATE \Return $ \rho_-^\dagger(W, \delta)$ \COMMENT{lower bound} \end{algorithmic} \caption{{\sc LtvOptimization}$(\mathbf X_\text{train}, \mathbf X_\text{val}, \mathbf X_\text{test}, \delta, K, \gamma, \epsilon)$: compute an LTV strategy using $\mathbf X_\text{train}$, and predict the $1-\delta$ lower bound on the test data $\mathbf X_\text{test}$.} \label{alg:ltv} \end{algorithm} \subsection{Experiments} For our experiments we used two data sets from the banking industry. When users visit the bank website, they are shown one of a finite number of offers.
The reward is 1 when a user clicks on the offer and 0, otherwise. For data set 1, we collected data from a particular campaign of a bank for a month; the campaign had 7 offers and approximately $200,\!000$ visits. About $20,\!000$ of the visits were produced by a random strategy. For data set 2, we collected data from a different bank for a campaign that had 12 offers and $4,\!000,\!000$ visits, out of which $250,\!000$ were produced by a random strategy. When a user visits the bank website for the first time, she is assigned either to a random strategy or a targeting strategy for the rest of the campaign life-time. We split the random strategy data into a test set and a validation set. We used the targeting data for training to optimize the greedy and LTV strategies. We used aggressive feature selection for the greedy strategy and selected $20\%$ of the features. For LTV, the feature selection had to be even more aggressive, due to the fact that the fraction of recurring visits is approximately $5\%$. We used information gain for the feature selection module~\cite{Cheng:2012:FSE:2399970.2399981}. With our algorithms we produce performance results both for the CTR and LTV metrics. To produce results for CTR, we assumed that each visit is a unique visitor. We performed various experiments to understand the different elements and parameters of our algorithms. For all experiments we set $\gamma=0.9$ and $\epsilon=0.1$. \paragraph{Experiment 1: How do LTV and CTR compare?} For this experiment we show that every strategy has both a CTR and an LTV metric, as shown in Figure~\ref{fig:ctr-ltv} (Left). In general, the LTV metric gives higher numbers than the CTR metric. Estimating the LTV metric, however, gets harder as the trajectories get longer and as the mismatch with the behavior policy gets larger. In this experiment the policy we evaluated was the random policy, which is the same as the behavior policy, so in effect the importance weights were eliminated. \begin{figure}[ht!]
\centering \includegraphics[width=0.45\textwidth]{figs/ctr-ltv-random-eps-converted-to.pdf}\includegraphics[width=0.45\textwidth]{figs/bounds-eps-converted-to.pdf} \caption{ (Left) This figure shows the bounds and empirical importance weighted returns for the random strategy. It shows that every strategy has both a CTR and an LTV metric. This was done for data set 1. (Right) This figure shows a comparison of the three different bounds. It was done for data set 2.} \label{fig:ctr-ltv} \end{figure} \paragraph{Experiment 2: How do the three bounds differ?} In this experiment we compared the three different lower-bound estimation methods, as shown in Figure~\ref{fig:ctr-ltv} (Right). We observed that the bound for the $t$-test is tighter than that for CI, but it makes the false assumption that the importance weighted returns are normally distributed. We observed that the bound for BCa has higher confidence than the $t$-test approach for the same performance. The BCa bound does not make a Gaussian assumption, but still makes the false assumption that the distribution of future empirical returns will be the same as what has been observed in the past. \paragraph{Experiment 3: When should each of the two optimization algorithms be used?} In this experiment we observed that the {\sc GreedyOptimization} algorithm performs the best under the CTR metric and the {\sc LtvOptimization} algorithm performs the best under the LTV metric, as expected; see Figure \ref{fig:ctr-tt}. The same claim holds for data set 2. \begin{figure}[ht!] \centering \includegraphics[width=0.45\textwidth]{figs/ctr-tt-eps-converted-to.pdf} \includegraphics[width=0.45\textwidth]{figs/ltv-tt-eps-converted-to.pdf} \caption{(Left) This figure compares the CTR bounds of the Greedy versus the LTV optimization. It was done for data set 1, but similar graphs exist for data set 2.
(Right) This figure compares the LTV bounds of the Greedy versus the LTV optimization. It was done for data set 1, but similar graphs exist for data set 2.} \label{fig:ctr-tt} \end{figure} \paragraph{Experiment 4: What is the effect of $\epsilon$?} One of the limitations of our algorithm is that it requires stochastic policies. The closer the new policy is to the behavior policy, the easier it is to estimate its performance. Therefore, we approximate our policies with $\epsilon$-greedy and use the random data for the behavior policy. The larger the $\epsilon$, the easier it is to get an accurate estimate of the performance of a new policy, but at the same time we would be estimating the performance of a sub-optimal policy, which has moved closer to the random policy; see Figure~\ref{fig:epsilon}. Therefore, when using the bounds to compare two policies, such as Greedy vs.~LTV, one should use the same $\epsilon$. \begin{figure}[h] \centering \includegraphics[height=1.9in]{figs/epsilon-eps-converted-to.pdf} \caption{The figure shows that as $\epsilon$ gets larger the policy moves towards the random policy. The performance of random policies is easy to estimate since they match the behavior policy exactly. Thus $\epsilon$ should be kept the same when comparing two policies. This experiment was done on data set 2 and shows the bounds and empirical mean importance weighted returns (vertical line) for the LTV policy. The bound used here was the CI.} \label{fig:epsilon} \end{figure} \section{Safe Deployment} \label{sec:safety} In the previous sections we described how to compute an SR policy in combination with high confidence off-policy evaluation for deployment with some guarantees. In the real world the deployment may need to happen incrementally, where at fixed intervals of time we would like to update the current SR policy in a safe manner.
In this section we present a batch reinforcement learning (RL) algorithm that provides probabilistic guarantees about the quality of each policy that it proposes, and which has no hyperparameter that requires expert tuning. Specifically, the user may select any performance lower-bound, $\rho_-$, and confidence level, $\delta$, and our algorithm will ensure that the probability that it returns a policy with performance below $\rho_-$ is at most $\delta$. We then propose an incremental algorithm that executes our policy improvement algorithm repeatedly to generate multiple policy improvements. We show the viability of our approach with a digital marketing application that uses real world data \cite{DBLP:conf/icml/ThomasTG15}. \subsection{Problem Formulation} \label{problem} Given a (user specified) lower-bound, $\rho_-$, on the performance and a confidence level, $\delta$, we call an RL algorithm \textit{safe} if it ensures that the probability that a policy with performance less than $\rho_-$ will be proposed is at most $\delta$. The only assumption that a safe algorithm may make is that the underlying environment is a POMDP. Moreover, we require that the safety guarantee must hold regardless of how any hyperparameters are tuned. We call an RL algorithm \textit{semi-safe} if it would be safe, except that it makes a false but reasonable assumption. Semi-safe algorithms are of particular interest when the assumption that the environment is a POMDP is significantly stronger than any (other) false assumption made by the algorithm, e.g.,~that the sample mean of the importance weighted returns is normally distributed when using many trajectories. We call a policy, $\pi$, (as opposed to an algorithm) safe if we can ensure that $\rho(\pi) \geq \rho_-$ with confidence $1-\delta$. Note that ``a policy is safe'' is a statement about our belief concerning that policy given the observed data, and not a statement about the policy itself.
If there are many policies that might be deemed safe, then the policy improvement mechanism should return the one that is expected to perform the best, i.e., \begin{equation} \pi' \in \arg \max_{\text{safe }\pi} g(\pi \vert \mathcal D), \label{eq:g} \end{equation} where $g(\pi \vert \mathcal D)\in \mathbb R$ is a prediction of $\rho(\pi)$ computed from $\mathcal D$. We use a lower-variance, but biased, alternative to ordinary importance sampling, called \textit{weighted} importance sampling~\cite{Precup2000}, for $g$, i.e.,~ $$ g(\pi \vert \mathcal D) \coloneqq \frac{\sum_{i=1}^{\lvert \mathcal D \rvert} \hat \rho(\pi \vert \tau_i^\mathcal D, \pi_i^\mathcal D) }{ \sum_{i=1}^{\lvert \mathcal D \rvert} \hat w(\tau_i^\mathcal D, \pi, \pi_i^\mathcal D)}, $$ where $\hat w(\tau_i^\mathcal D, \pi, \pi_i^\mathcal D)$ denotes the importance weight of trajectory $\tau_i^\mathcal D$, i.e.,~the product of action probability ratios in~\eqref{eq:ISE}. Note that even though Eq.~\eqref{eq:g} uses $g$, our safety guarantee is uncompromising---it uses the true (unknown and often unknowable) expected return, $\rho(\pi)$. In the following sections, we present batch and incremental policy improvement algorithms that are safe when they use the CI approach to HCOPE and semi-safe when they use the $t$-test or BCa approaches. Our algorithms have no hyperparameters that require expert tuning. In the following, we use the $\dagger$ symbol as a placeholder for either CI, TT, or BCa. We also overload the symbol $\rho_-^\dagger$ so that it can take as input a policy, $\pi$, and a set of trajectories, $\mathcal D$, in place of $\mathbf X$, as follows: \begin{equation} \rho_-^\dagger (\pi, \mathcal D, \delta, m) \coloneqq \rho_-^\dagger \Big ( \underbrace{\bigcup_{i=1}^{\lvert \mathcal D \rvert} \left \{ \hat \rho \left (\pi \vert \tau_i^{\mathcal D}, \pi_i^{\mathcal D} \right ) \right \}}_{\mathbf X},\delta, m \Big ). \end{equation} For example, $\rho_-^\text{BCa}(\pi, \mathcal D, \delta, m)$ is a prediction made using the data set $\mathcal D$ of what the $1-\delta$ confidence lower-bound on $\rho(\pi)$ would be, if computed from $m$ trajectories by BCa.
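The weighted importance sampling estimator $g$ can be sketched as follows. Each trajectory is encoded as a list of (state, action, reward, behavior-probability) tuples; names and encoding are illustrative:

```python
def weighted_is(trajectories, pi_e, gamma=0.9):
    """Weighted importance sampling g(pi | D): the sum of
    importance-weighted returns normalized by the sum of the
    trajectory importance weights (biased, lower variance)."""
    num, den = 0.0, 0.0
    for tau in trajectories:
        weight, ret = 1.0, 0.0
        for t, (s, a, r, behavior_prob) in enumerate(tau):
            weight *= pi_e(a, s) / behavior_prob
            ret += gamma ** t * r
        num += weight * ret
        den += weight
    return num / den
```

Normalizing by the sum of the weights rather than by the number of trajectories is what trades a small bias for the variance reduction noted above.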
\subsection{Safe Policy Improvement} \label{policyImprovement} Our proposed batch (semi-)safe policy improvement algorithm, {\sc PolicyImprovement}$^\dagger_\ddagger$, takes as input a set of trajectories labeled with the policies that generated them, $\mathcal D$, a performance lower bound, $\rho_-$, and a confidence level, $\delta$, and outputs either a new policy or {\sc No Solution Found (NSF)}. The meaning of the $\ddagger$ subscript will be described later. When we use $\mathcal D$ to both search the space of policies and perform safety tests, we must be careful to avoid the \textit{multiple comparisons problem} \cite{Benjamin1995}. To make this important problem clear, consider what would happen if our search of policy space included only two policies, and used all of $\mathcal D$ to test both of them for safety. If at least one is deemed safe, then we return it. HCOPE methods can incorrectly label a policy as safe with probability at most $\delta$. However, the system we have described will make an error whenever either policy is incorrectly labeled as safe, which means its error rate can be as large as $2\delta$. In practice the search of policy space should include many more than just two policies, which would further increase the error rate. We avoid the multiple comparisons problem by setting aside data that is only used for a \textit{single} safety test that determines whether or not a policy will be returned. Specifically, we first partition the data into a small training set, $\mathcal D_\text{train}$, and a larger test set, $\mathcal D_\text{test}$. The training set is used to search for which single policy, called the \textit{candidate policy}, $\pi_c$, should be tested for safety using the test set. This policy improvement method, {\sc PolicyImprovement}$^\dagger_\ddagger$, is reported in Algorithm~\ref{alg:PolicyImprovement}. 
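The overall control flow just described can be sketched in a few lines; the split ratio and the callable signatures are illustrative, and the bound and candidate-search routines are passed in rather than implemented here:

```python
def policy_improvement(D, delta, rho_minus, lower_bound, get_candidate):
    """Sketch of PolicyImprovement: search for a candidate on a small
    training split, then run exactly ONE safety test on the held-out
    split, avoiding the multiple comparisons problem."""
    n_train = len(D) // 5                      # 1/5 train, 4/5 test
    D_train, D_test = D[:n_train], D[n_train:]
    pi_c = get_candidate(D_train, delta, rho_minus, len(D_test))
    if lower_bound(pi_c, D_test, delta, len(D_test)) >= rho_minus:
        return pi_c
    return None                                # No Solution Found (NSF)
```

The key design point is that $\mathcal D_\text{test}$ is touched exactly once, by the single safety test that decides whether the candidate is returned.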
To simplify later pseudocode, {\sc PolicyImprovement}$^\dagger_\ddagger$ assumes that the trajectories have already been partitioned into $\mathcal D_\text{train}$ and $\mathcal D_\text{test}$. In practice, we place $1/5$ of the trajectories in the training set and the remainder in the test set. Also, note that {\sc PolicyImprovement}$^\dagger_\ddagger$ can use the safe concentration inequality approach, $\dagger=$ CI, or the semi-safe $t$-test or BCa approaches, $\dagger \in \{$ TT, BCa$\}$. {\sc PolicyImprovement}$^\dagger_\ddagger$ is presented in a top-down manner in Algorithm \ref{alg:PolicyImprovement}, and makes use of the {\sc GetCandidatePolicy}$^\dagger_\ddagger(\mathcal D, \delta, \rho_-, m)$ method, which searches for a candidate policy. The input $m$ specifies the number of trajectories that will be used during the subsequent safety test. Although {\sc GetCandidatePolicy}$^\dagger_\ddagger$ could be any batch RL algorithm, like LSPI or FQI \citep{Lagoudakis2001,Ernst05tree-basedbatch}, we propose an approach that leverages our knowledge that the candidate policy must pass a safety test. We will present two versions of {\sc GetCandidatePolicy}$^\dagger_\ddagger$, which we differentiate using the subscript $\ddagger$, which may stand for None or $k$-fold. Before presenting these two methods, we define an objective function $f^\dagger$ as: $$ f^\dagger(\pi, \mathcal D,\delta,\rho_-,m)\coloneqq \begin{cases} g(\pi \vert \mathcal D)&\hspace{-1.25cm}\mbox{if } \;\rho_-^\dagger \left (\pi, \mathcal D, \delta, m \right ) \geq \rho_-, \\ \rho_-^\dagger \left (\pi, \mathcal D, \delta, m \right ) &\mbox{\hspace{0.8cm}otherwise.} \end{cases} $$ Intuitively, $f^\dagger$ returns the predicted performance of $\pi$ if the predicted lower-bound on $\rho(\pi)$ is at least $\rho_-$, and the predicted lower-bound on $\rho(\pi)$, otherwise. Consider {\sc GetCandidatePolicy}$^\dagger_\text{None}$, which is presented in Algorithm \ref{alg:CandidateNone}.
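The piecewise objective $f^\dagger$ can be sketched directly; the prediction $g$ and the bound $\rho_-^\dagger$ are passed in as callables, and all names are illustrative:

```python
def f_dagger(pi, D, delta, rho_minus, m, g, lower_bound):
    """f^dagger: return the performance prediction g(pi | D) when pi
    is predicted to pass the safety test, otherwise the predicted
    lower bound, so the search is still pushed toward passing."""
    lb = lower_bound(pi, D, delta, m)
    return g(pi, D) if lb >= rho_minus else lb
```

Maximizing this objective therefore prefers high predicted performance among policies expected to pass, and otherwise climbs toward the safety threshold.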
{\sc GetCandidatePolicy}$^\dagger_\text{None}$ uses all of the available training data to search for the policy that is predicted to perform the best, subject to it also being predicted to pass the safety test. That is, if no policy is found that is predicted to pass the safety test, it returns the policy, $\pi$, that it predicts will have the highest lower bound on $\rho(\pi)$. If policies are found that are predicted to pass the safety test, it returns one that is predicted to perform the best (according to $g$). The benefits of this approach are its simplicity and that it works well when there is an abundance of data. However, when there are few trajectories in $\mathcal D$ (e.g.,~cold start), this approach has a tendency to overfit---it finds a policy that it predicts will perform exceptionally well and will easily pass the safety test, but that actually fails the subsequent safety test in {\sc PolicyImprovement}$^\dagger_\text{None}$. We call this method $\ddagger$ = None because it does not use any methods to avoid overfitting. \begin{algorithm} \small \begin{algorithmic}[1] \STATE $\pi_c \gets $ {\sc GetCandidatePolicy}$^\dagger_\ddagger(\mathcal D_\text{train}, \delta, \rho_-, \lvert \mathcal D_\text{test} \rvert )$ \STATE {\bf if }$\rho_-^\dagger \left (\pi_c, \mathcal D_\text{test}, \delta, \lvert \mathcal D_\text{test} \rvert \right ) \geq \rho_-$ {\bf then return }$\pi_c$ \STATE {\bf return }{\sc NSF} \end{algorithmic} \caption{\small {\sc PolicyImprovement}$^\dagger_\ddagger(\mathcal D_\text{train}, \mathcal D_\text{test}, \delta, \rho_-)$ Either returns {\sc No Solution Found (NSF)} or a \mbox{\text{(semi-)}}safe policy. 
Here $\dagger$ can denote either CI, TT, or BCa.} \label{alg:PolicyImprovement} \end{algorithm} \begin{algorithm} \small \begin{algorithmic}[1] \STATE {\bf return }$ \arg \max_\pi f^\dagger(\pi, \mathcal D, \delta, \rho_-, m) $ \end{algorithmic} \caption{\small {\sc GetCandidatePolicy}$_\text{None}^\dagger(\mathcal D, \delta, \rho_-, m)$ Searches for the candidate policy, but does nothing to mitigate overfitting.} \label{alg:CandidateNone} \end{algorithm} In machine learning, it is common to introduce a regularization term, $\alpha \lVert w \rVert$, into the objective function in order to prevent overfitting. Here $w$ is the model's weight vector, $\lVert \cdot \rVert$ is some measure of the complexity of the model (often the $L_1$ or squared $L_2$-norm), and $\alpha$ is a parameter that is tuned using a model selection method like cross-validation. This term penalizes solutions that are too complex, since they are likely to be overfitting the training data. Here we use the same intuition: we control the complexity of the solution policy using a regularization parameter, $\alpha$, that is optimized using $k$-fold cross-validation. Just as the squared $L_2$-norm relates the complexity of a weight vector to its squared distance from the zero vector, we define the complexity of a policy to be some notion of its distance from the initial policy, $\pi_0$. In order to allow for an intuitive meaning of $\alpha$, rather than adding a regularization term to our objective function, $f^\dagger(\cdot, \mathcal D_\text{train}, \delta, \rho_-, \vert \mathcal D_\text{test} \vert )$, we directly constrain the set of policies that we search over to have limited complexity. We achieve this by only searching the space of mixed policies $\mu_{\alpha, \pi_0, \pi}$, where $\mu_{\alpha, \pi_0, \pi}(a|s) \coloneqq \alpha \pi(a|s) + (1 - \alpha)\pi_0(a|s)$. 
Here, $\alpha$ is the fixed regularization parameter, $\pi_0(a|s)$ is the fixed initial policy, and we search the space of all possible $\pi$. Consider, for example, what happens to the probability of action $a$ in state $s$ when $\alpha=0.5$. If $\pi_0(a|s)=0.4$, then for any $\pi$, we have that $\mu_{\alpha, \pi_0, \pi}(a\vert s) \in [0.2,0.7]$. That is, the mixed policy can only move $50\%$ of the way towards being deterministic (in either direction). In general, the mixed policy can change the probability of an action by no more than $100\alpha\%$ of the way towards being deterministic. So, using mixed policies results in our searches of policy space being constrained to some \textit{feasible set} centered around the initial policy, where $\alpha$ scales the size of this feasible set. While small values of $\alpha$ can effectively eliminate overfitting by precluding the mixed policy from moving far away from the initial policy, they also limit the quality of the best mixed policy in the feasible set. It is therefore important that $\alpha$ be chosen to balance the tradeoff between overfitting and limiting the quality of solutions that remain in the feasible set. Just as in machine learning, we use $k$-fold cross-validation to automatically select $\alpha$. This approach is provided in Algorithm~\ref{alg:CandidateKFold}, where {\sc CrossValidate}$^\dagger(\alpha, \mathcal D, \delta, \rho_-, m)$ uses $k$-fold cross-validation to predict the value of $f^\dagger(\pi, \mathcal D_\text{test}, \delta, \rho_-, \lvert \mathcal D_\text{test} \rvert)$ if $\pi$ were to be optimized using $\mathcal D_\text{train}$ and regularization parameter $\alpha$. {\sc CrossValidate}$^\dagger$ is reported in Algorithm~\ref{alg:CrossValidate}. In our implementations we use $k=\min\{20, \frac{1}{2}\lvert \mathcal D \rvert \}$ folds. 
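The feasible-set effect is easy to verify numerically; a short sketch of the mixed policy (function name ours):

```python
def mixed_policy_prob(alpha, pi0_prob, pi_prob):
    """Action probability of the mixed policy:
    mu_{alpha, pi0, pi}(a|s) = alpha * pi(a|s) + (1 - alpha) * pi0(a|s)."""
    return alpha * pi_prob + (1.0 - alpha) * pi0_prob

# The example from the text: alpha = 0.5 and pi0(a|s) = 0.4.  Sweeping
# pi(a|s) over its full range [0, 1] confines the mixed probability to
# [0.2, 0.7]: halfway towards deterministic in either direction.
lo = mixed_policy_prob(0.5, 0.4, 0.0)
hi = mixed_policy_prob(0.5, 0.4, 1.0)
```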
\begin{algorithm} \small \begin{algorithmic}[1] \STATE {\small $\alpha^\star \gets \arg \max_{\alpha\in[0,1]}\text{\sc CrossValidate}^\dagger(\alpha, \mathcal D, \delta, \rho_-, m)$} \STATE $ \pi^\star \gets \arg \max_\pi f^\dagger(\mu_{\alpha^\star, \pi_0,\pi}, \mathcal D, \delta, \rho_-, m)$ \STATE {\bf return }$\mu_{\alpha^\star,\pi_0,\pi^\star}$ \end{algorithmic} \caption{\small{\sc GetCandidatePolicy}$_\text{$k$-fold}^\dagger(\mathcal D, \delta, \rho_-, m)$ Searches for the candidate policy using $k$-fold cross-validation to avoid overfitting.} \label{alg:CandidateKFold} \end{algorithm} \begin{algorithm} \small \begin{algorithmic}[1] \STATE Partition $\mathcal D$ into $k$ subsets, $\mathcal D_1, \hdots, \mathcal D_k$, of approximately the same size. \STATE result $\gets 0$ \FOR{$i=1$ {\bf to} $k$} \STATE $\widehat {\mathcal D} \gets \bigcup_{j \neq i} \mathcal D_j$ \STATE $\pi^\star \gets \arg \max_\pi f^\dagger(\mu_{\alpha, \pi_0,\pi}, \widehat{\mathcal D}, \delta, \rho_-, m)$ \STATE result $\gets$ result $ + f^\dagger(\mu_{\alpha, \pi_0,\pi^\star}, \mathcal D_i, \delta, \rho_-, m)$ \ENDFOR \STATE {\bf return }result$/k$ \end{algorithmic} \caption{\small{\sc CrossValidate}$^\dagger(\alpha, \mathcal D, \delta, \rho_-, m)$} \label{alg:CrossValidate} \end{algorithm} \subsection{Daedalus} \label{Daedalus} The {\sc PolicyImprovement}$^\dagger_\ddagger$ algorithm is a batch method that can be applied to an existing data set, $\mathcal D$. However, it can also be used in an incremental manner by executing new safe policies whenever they are found. The user might choose to change $\rho_-$ at each iteration, e.g., to reflect an estimate of the performance of the best policy found so far or the most recently proposed policy. However, for simplicity in our pseudocode and experiments, we assume that the user fixes $\rho_-$ as an estimate of the performance of the initial policy. 
This scheme for selecting $\rho_-$ is appropriate when trying to convince a user to deploy an RL algorithm to tune a currently fixed initial policy, since it guarantees with high confidence that it will not decrease performance. Our algorithm maintains a list, $\mathcal C$, of the policies that it has deemed safe. When generating new trajectories, it always uses the policy in $\mathcal C$ that is expected to perform best. $\mathcal C$ is initialized to include a single initial policy, $\pi_0$, which is the same as the baseline policy used by {\sc GetCandidatePolicy}$_\text{$k$-fold}^\dagger$. This online safe learning algorithm is presented in Algorithm \ref{alg:Daedalus}.\footnote{If trajectories are available \textit{a priori}, then $\mathcal D_\text{train}, \mathcal D_\text{test},$ and $\mathcal C$ can be initialized accordingly.} It takes as input an additional constant, $\beta$, which denotes the number of trajectories to be generated by each policy. If $\beta$ is not already specified by the application, it should be selected to be as small as possible, while allowing {\sc Daedalus}$^\dagger_\ddagger$ to execute within the available time. We name this algorithm {\sc Daedalus}$^\dagger_\ddagger$ after the mythological character who promoted safety when he encouraged Icarus to use caution. 
\begin{algorithm} \small \begin{algorithmic}[1] \STATE $\mathcal C \gets \{\pi_0\}$ \STATE $\mathcal D_\text{train} \gets \mathcal D_\text{test} \gets \emptyset$ \WHILE {{\bf true}} \STATE $\widehat{\mathcal D} \gets \mathcal D_\text{train}$ \STATE $\pi^\star \gets \arg \max_{\pi \in \mathcal C} g(\pi \vert \widehat{\mathcal D})$ \STATE Generate $\beta$ trajectories using $\pi^\star$ and append $\lceil \beta/5 \rceil$ to $\mathcal D_\text{train}$ and the rest to $\mathcal D_\text{test}$ \STATE $\pi_c \gets ${\sc PolicyImprovement}$^\dagger_\ddagger(\mathcal D_\text{train}, \mathcal D_\text{test}, \delta, \rho_-)$ \STATE $\widehat{\mathcal D} \gets \mathcal D_\text{train}$ \IF{$\pi_c \neq$ {\sc NSF} {\bf and } $g(\pi_c \vert \widehat{\mathcal D}) > \max_{\pi \in \mathcal C} g(\pi \vert \widehat{\mathcal D})$} \STATE $\mathcal C \gets \mathcal C \cup \{\pi_c\}$ \STATE $\mathcal D_\text{test} \gets \emptyset$ \ENDIF \ENDWHILE \end{algorithmic} \caption{\small{\sc Daedalus}$^\dagger_\ddagger(\pi_0, \delta, \rho_-, \beta)$ Incremental policy improvement algorithm.} \label{alg:Daedalus} \end{algorithm} The benefits of $\ddagger=k$-fold are greatest when only a few trajectories are available, since then {\sc GetCandidatePolicy}$^\dagger_\text{None}$ is prone to overfitting. When there is a lot of data, overfitting is less of a concern, and so the additional computational complexity of $k$-fold cross-validation is not justified. In our implementations of {\sc Daedalus}$^\dagger_{k\text{-fold}}$, we therefore only use $\ddagger = {k\text{-fold}}$ until the first policy is successfully added to $\mathcal C$, and $\ddagger$ = None thereafter. This provides the early benefits of $k$-fold cross-validation without incurring its full computational complexity. The {\sc Daedalus}$^\dagger_\ddagger$ algorithm ensures safety with each newly proposed policy. 
That is, during each iteration of the while-loop, the probability that a new policy, $\pi$, where $\rho(\pi) < \rho_-$, is added to $\mathcal C$ is at most $\delta$. The multiple comparisons problem is not relevant here because this guarantee is per-iteration. Over multiple iterations of the while-loop, however, the guarantee degrades: the probability that at least one policy, $\pi$, where $\rho(\pi) < \rho_-$, is added to $\mathcal C$ over $k$ iterations is at most $\min\{1,k\delta\}$. We define {\sc Daedalus2}$^\dagger_\ddagger$ to be {\sc Daedalus}$^\dagger_\ddagger$ but with line 11 removed. The multiple hypothesis testing problem does \emph{not} affect {\sc Daedalus2$^\dagger_\ddagger$} more than {\sc Daedalus}$^\dagger_\ddagger$, since the safety guarantee is per-iteration. However, a more subtle problem is introduced: the importance-weighted returns from the trajectories in the testing set, $\hat \rho(\pi_c \vert \tau_i^{\mathcal D_\text{test}}, \pi_i^{\mathcal D_\text{test}})$, are not necessarily unbiased estimates of $\rho(\pi_c)$. This happens because the policy, $\pi_c$, is computed in part from the trajectories in $\mathcal D_\text{test}$ that are used to test it for safety. This dependence is depicted in Figure \ref{fig:badInfluence}. We also modify {\sc Daedalus2}$^\dagger_\ddagger$ by changing lines 4 and 8 to $\widehat{\mathcal D} \gets \mathcal D_\text{train} \cup \mathcal D_\text{test}$, which introduces an additional minor dependence of $\pi_c$ on the trajectories in $\mathcal D_\text{test}$. \begin{figure}[ht] \centering \includegraphics[width=0.8\columnwidth]{figs/badInfluence.pdf} \caption{This diagram depicts influences as {\sc Daedalus2$^\dagger_\ddagger$} runs. First, $\pi_0$ is used to generate sets of trajectories, $\mathcal D_\text{train}^1$ and $\mathcal D_\text{test}^1$, where superscripts denote the iteration. Next, $\mathcal D_\text{train}^1$ is used to select the candidate policy, $\pi_c^1$. 
Next, $\pi_c^1$ is tested for safety using the trajectories in $\mathcal D_\text{test}^1$ (this safety test occurs on line 2 of {\sc PolicyImprovement}$^\dagger_\ddagger$). The result of the safety test influences which policy, $\pi_1$, will be executed next. These policies are then used to produce $\mathcal D_\text{train}^2$ and $\mathcal D_\text{test}^2$ as before. Next, both $\mathcal D_\text{train}^1$ and $\mathcal D_\text{train}^2$ are used to select the candidate policy, $\pi_c^2$. This policy is then tested for safety using the trajectories in $\mathcal D_\text{test}^1$ and $\mathcal D_\text{test}^2$. The result of this test influences which policy, $\pi_2$, will be executed next, and the process continues. Notice that $\mathcal D_\text{test}^1$ is used when testing $\pi_c^2$ for safety (as indicated by the dashed blue line) even though it also influences $\pi_c^2$ (as indicated by the dotted red path). This is akin to performing an experiment, using the collected data ($\mathcal D_\text{test}^1$) to select a hypothesis ($\pi_c^2$ is safe), and then using that same data to test the hypothesis. {\sc Daedalus}$^\dagger_\ddagger$ does not have this problem because the dashed blue line is not present.} \label{fig:badInfluence} \end{figure} Although our theoretical analysis applies to {\sc Daedalus}$^\dagger_\ddagger$, we propose the use of {\sc Daedalus2}$^\dagger_\ddagger$ because the trajectories in $\mathcal D_\text{test}^i$ have only a small ability to bias the choice of which policy is tested for safety in the future ($\pi_c^j$, where $j > i$) towards a policy that $\mathcal D_\text{test}^i$ will deem safe. Moreover, the benefits of {\sc Daedalus2}$^\dagger_\ddagger$ over {\sc Daedalus}$^\dagger_\ddagger$ are significant---the set of trajectories used in the safety tests increases in size with each iteration, as opposed to always being of size $\beta$. 
So, in practice, we expect the over-conservativeness of $\rho_-^\text{CI}$ to far outweigh the error introduced by {\sc Daedalus2}$^\dagger_\ddagger$. Notice that {\sc Daedalus2}$^\text{CI}_\ddagger$ is safe (not just semi-safe) if we consider its execution up until the first change of the policy, since then the trajectories are always generated by $\pi_0$, which is not influenced by any of the testing data. \subsection{Empirical Analysis} \label{caseStudies} \paragraph{Case Study:} For our case study we used real data, captured with permission from the website of a Fortune 50 company that receives hundreds of thousands of visitors per day and which uses Adobe Target, to train a simulator using a proprietary in-house system identification tool at Adobe. The simulator produces a vector of $31$ real-valued features that provide a compressed representation of all of the available information about a user. The advertisements are clustered into two high-level classes that the agent must select between. After the agent selects an advertisement, the user either clicks (reward of $+1$) or does not click (reward of $0$) and the feature vector describing the user is updated. Although this greedy approach has been successful, as we discussed in Section \ref{sec:par}, it does not necessarily maximize the total number of clicks from each user over his or her lifetime. Therefore, we consider a full reinforcement learning solution for this problem. We selected $T= 20$ and $\gamma =1$. This is a particularly challenging problem because the reward signal is sparse. If each action is always selected with probability $0.5$, only about $0.38\%$ of the transitions are rewarding, since users usually do not click on the advertisements. This means that most trajectories provide no feedback. Also, whether a user clicks or not is close to random, so returns have relatively high variance. 
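To see how sparse this feedback is, note that under a simplifying i.i.d. assumption (ours, for illustration only) a per-transition click probability of roughly $0.38\%$ over $T=20$ steps leaves the vast majority of trajectories with a return of zero:

```python
T = 20                 # horizon from the text
p_click = 0.0038       # approximate per-transition click probability

# Probability that an entire trajectory produces no clicks at all,
# assuming (for illustration) independent transitions.
p_zero_return = (1.0 - p_click) ** T

# Expected number of clicks per trajectory under the same assumption.
expected_clicks = T * p_click
```

Roughly $93\%$ of trajectories then carry no reward signal at all, which is why returns are high-variance relative to their mean.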
We generated data using an initial baseline policy and then evaluated a new policy proposed by an in-house reinforcement learning algorithm. In order to avoid the large costs associated with deployment of a bad policy, in this application it is imperative that new policies proposed by RL algorithms are ensured to be safe before deployment. \paragraph{Results:} In our experiments, we selected $\rho_-$ to be an empirical estimate of the performance of the initial policy and $\delta = 0.05$. We used CMA-ES \cite{Hansen2006} to solve all $\arg \max_\pi$, where $\pi$ was parameterized by a vector of policy parameters using linear softmax action selection \cite{Sutton1998} with the Fourier basis \cite{Konidaris2011}. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{figs/DigitalMarketing.pdf} \includegraphics[width=0.15\textwidth]{figs/legend.pdf} \caption{Performance of {\sc Daedalus2}$^\dagger_\ddagger$ on the digital marketing domain. The legend specifies $\ddagger, \dagger$. } \label{fig:digitalMarketing} \end{figure} For our problem domain, we executed {\sc Daedalus2}$^\dagger_\ddagger$ with $\dagger \in \{$CI, TT, BCa$\}$ and $\ddagger \in \{$None, $k$-fold$\}$. Ideally, we would use $\beta=1$ for all domains. However, as $\beta$ decreases, the runtime increases. We selected $\beta \in \{50, 100, 500\}$ for the digital marketing domain. $\beta$ increases with the number of trajectories so that the plot can span the number of trajectories required by the CI approach without requiring too many calls to the computationally demanding {\sc PolicyImprovement}$^\text{BCa}_{k\text{-fold}}$ method. We did not tune $\beta$ for these experiments---it was set solely to limit the runtime. The performance of {\sc Daedalus2}$^\dagger_\ddagger$ on the digital marketing domain is provided in Figure \ref{fig:digitalMarketing}. 
The expected normalized returns in Figure \ref{fig:digitalMarketing} are computed using $20,\!000$ Monte Carlo rollouts. The curves are also averaged over $10$ trials, with standard error bars provided when they do not cause too much clutter. First, consider the different values for $\dagger$. As expected, the CI approaches (solid curves) are the most conservative, and therefore require the most trajectories in order to guarantee improvement. The BCa approaches (dashed lines) perform the best, and are able to provide high-confidence guarantees of improvement with as few as $50$ trajectories. The TT approaches (dotted lines) perform in between the CI and BCa approaches, as expected (since the $t$-test tends to produce overly conservative lower bounds for distributions with heavy upper tails). Next, consider the different values of $\ddagger$. Using $k$-fold cross-validation provides an early boost in performance by limiting overfitting when there are few trajectories in the training set. Although the results are not shown, we experimented with using $\ddagger=k$-fold for the entire runtime (rather than just until the first policy improvement), but found that while it did increase the runtime significantly, it did not produce much improvement. \section{Non-Stationarity} \label{sec:nonstationary} In the previous sections we made the critical assumption that the domain can be modeled as a POMDP. However, real-world problems are often non-stationary. In this section we consider the problem of evaluating an SR policy off-line without assuming stationary transition and reward functions. We argue that off-policy policy evaluation for non-stationary MDPs can be phrased as a time series prediction problem, which results in predictive methods that can anticipate changes before they happen. 
We therefore propose a synthesis of existing off-policy policy evaluation methods with existing time series prediction methods, which we show results in a drastic reduction of mean squared error when evaluating policies using a real digital marketing data set~\cite{DBLP:conf/aaai/ThomasTGDB17}. \subsection{Motivating Example} \label{subsec:motivatehotel} In digital marketing applications, when a person visits the website of a company, she is often shown a list of current promotions. In order for the display of these promotions to be effective, it must be properly targeted based on the known information about the person (e.g., her interests, past travel behavior, or income). The problem now reduces to automatically deciding which promotion (sometimes called a \textit{campaign}) to show to the visitor of a website. As we have described in Section \ref{sec:par}, the system's goal is to determine how to select actions (select promotions to display) based on the available observations (the known information of the visitor) such that the reward is maximized (the number of clicks is maximized). Let $\rho(\pi_e,\iota)$ be the performance of the policy $\pi_e$ in episode $\iota$. In the bandit setting $\rho(\pi_e,\iota)$ is the expected number of clicks \emph{per visit}, called the \textit{click-through rate} (CTR), while in the reinforcement learning setting it is the expected number of clicks \emph{per user}, called the \textit{lifetime value} (LTV). In order to determine how much of a problem non-stationarity really is, we collected data from the website of one of Adobe's Test and Target customers: the website of a large company in the hotel and entertainment industry. We then used a proprietary policy search algorithm custom designed for digital marketing to generate a new policy for the customer. 
We then collected $n\approx 300,\!000$ new episodes of data, which we used as $D$, to compute $\operatorname{OPE}(\pi_e,\iota|D)$ for all $\iota \in \{0,\dotsc,n-1\}$ using ordinary importance sampling. Figure \ref{fig:Hotel} summarizes the resulting data. \begin{figure}% \centering \includegraphics[width=0.35\columnwidth]{figs/hotelMovingAvg.pdf}% \caption{Plot of $\operatorname{OPE}(\pi_e, \iota | D)$ for various $\iota$, on the real-world digital marketing data from a large company in the hotel and entertainment industry. This data spans several days. Since the raw data has high variance, we bin the data into bins that each span one hour. Notice that the performance of the policy drops from an initial CTR of $0.1$ down to a near-zero CTR near the middle of the data set.}% \label{fig:Hotel}% \end{figure} In this data it is evident that there is significant non-stationarity---the CTR varied drastically over the span of the plot. This is also not just an artifact of high variance: using Student's $t$-test we can conclude that the expected return during the first $100,\!000$ and subsequent $60,\!000$ episodes was different with $p=1.6\times 10^{-33}$. This is compelling evidence that we cannot ignore non-stationarity in our users' data when providing predictions of the expected future performance of our digital marketing algorithms, and is compelling real-world motivation for developing non-stationary off-policy policy evaluation algorithms. \subsection{Nonstationary Off-Policy Policy Evaluation (NOPE)} \begin{figure}[h]% \centering \includegraphics[width=0.35\columnwidth]{figs/example.pdf}% \caption{This illustration depicts an example of how the existing standard OPE methods produce \textit{reactive} behavior, and is hand-drawn to provide intuition. Here the dotted blue line depicts $\rho(\pi_e, \iota)$ for various $\iota$. The black dots denote $\operatorname{OPE}(\pi_e,\iota|D)$ for various $\iota$. 
Notice that each $\operatorname{OPE}(\pi_e,\iota|D)$ is a decent estimate of $\rho(\pi_e,\iota)$, which changes with $\iota$. Our goal is to estimate $\rho(\pi_e, n)$---the performance of the policy during the \textit{next} episode. That is, our goal is to predict the vertical position of the green circle. However, by averaging the OPE estimates, we get the red circle, which is a reasonable prediction of performance in the past. As more data arrives ($n$ increases) the predictions will decrease, but will always remain behind the target value of $\rho(\pi_e,n)$.}% \label{fig:example}% \end{figure} \textit{Non-stationary Off-Policy Policy Evaluation} (NOPE) is simply OPE for non-stationary MDPs. In this setting, the goal is to use the available data $D$ to estimate $\rho(\pi_e, n)$---the performance of $\pi_e$ during the \textit{next episode} (the $n^\text{th}$ episode). Notice that we have not made assumptions about how the transition and reward functions of the non-stationary MDP change. For some applications, they may drift slowly, making $\rho(\pi_e,\iota)$ change slowly with $\iota$. For example, this sort of drift may occur due to mechanical wear in a robot. For other applications, $\rho(\pi_e,\iota)$ may be fixed for some number of episodes, and then make a large jump. For example, this sort of jump may occur in digital marketing applications \citep{TheocharousTG15} due to media coverage of a relevant topic rapidly changing public opinion of a product. In yet other applications, the environment may include both large jumps and smooth drift. Notice that NOPE can range from trivial to completely intractable. If the MDP has few states and actions, changes slowly between episodes, and the evaluation policy is similar to the behavior policy, then we should be able to get accurate off-policy estimates. 
On the other extreme, if for each episode the MDP's transition and reward functions are drawn randomly (or adversarially) from a wide distribution, then producing accurate estimates of $\rho(\pi_e,n)$ may be intractable. \subsection{Predictive Off-Policy Evaluation using Time Series Methods} The primary insight in this section, in retrospect, is obvious: \textbf{NOPE is a time series prediction problem.} Figure \ref{fig:example} provides an illustration of the idea. Let $x_\iota = \iota$ and $y_\iota = \operatorname{OPE}(\pi_e, \iota | D)$ for $\iota \in \{0,\dotsc,n-1\}$. This makes $x$ an array of $n$ times (each episode corresponds to one unit of time) and $y$ an array of the corresponding $n$ observations. Our goal is to predict the expected value of the next point in this time series, which will occur at $x_n=n$. Pseudocode for this \textit{time series prediction} (TSP) approach is given in Algorithm \ref{alg:TSP_POPE}. \begin{algorithm}[] \caption{Time Series Prediction (TSP)} \label{alg:TSP_POPE} \begin{algorithmic}[1] \STATE {\bfseries Input:} Evaluation policy, $\pi_e$, historical data, $D\coloneqq (H^\iota, \pi^\iota)_{\iota=0}^{n-1}$, and a time-series prediction algorithm (and its hyper-parameters). \vspace{.1cm} \STATE Create arrays $x$ and $y$, both of length $n$. \FOR{$\iota=0$ {\bfseries to} $n-1$} \STATE $x_\iota \gets \iota$ \STATE $y_\iota \gets \operatorname{OPE}(\pi_e, \iota | D)$ \ENDFOR \STATE Train a time-series prediction algorithm on $x,y$. \STATE {\bfseries return} the time-series prediction algorithm's prediction for time $n$. \end{algorithmic} \end{algorithm} When considering using time-series prediction methods for off-policy policy evaluation, it is important that we establish that the underlying process is actually nonstationary. 
One popular method for determining whether a process is stationary or nonstationary is to report the sample \emph{autocorrelation function} (ACF): $$ \operatorname{ACF}_h \coloneqq \frac{\mathbf{E}[(X_{t+h}-\mu)(X_t- \mu)]}{\mathbf{E}[(X_t-\mu)^2]}, $$ where $h$ is a parameter called the \textit{lag} (which is selected by the researcher), $X_t$ is the time series, and $\mu$ is the mean of the time series. For a stationary time series, the ACF will drop to zero relatively quickly, while the ACF of nonstationary data decreases slowly. ARIMA models are models of time series data that can capture many different sources of non-stationarity. The time series prediction algorithm that we use in our experiments is the R \texttt{forecast} package for fitting ARIMA models \cite{Hyndman2008}. \subsection{Empirical Studies} In this section we show that, despite the lack of theoretical results about using TSP for NOPE, it performs remarkably well on real data. Because our experiments use real-world data, we do not know ground truth---we have $\operatorname{OPE}(\pi_e,\iota|D)$ for a series of $\iota$, but we do not know $\rho(\pi_e,\iota)$ for any $\iota$. This makes evaluating our methods challenging---we cannot, for example, compute the true error or mean squared error of estimates. We therefore estimate the mean error and mean squared error directly from the data as follows. For each $\iota \in \{1,\dotsc,n-1\}$ we compute each method's output, $\hat y_\iota$, given all of the previous data, $D_{\iota-1}\coloneqq (H^{\hat \iota}, \pi^{\hat \iota})_{\hat \iota=0}^{\iota-1}$. We then compute the observed next value, $y_\iota = \operatorname{OPE}(\pi_e,\iota | D_\iota)$. From these, we compute the squared error, $(\hat y_\iota - y_\iota)^2$, and we report the mean squared error over all $\iota$. 
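Both diagnostics, the sample ACF and the walk-forward error estimate, can be sketched in a few lines of Python. The last-value forecaster below is a naive random-walk baseline standing in for the fitted ARIMA model, and all names are ours:

```python
def sample_acf(x, h):
    """Sample autocorrelation at lag h: the lag-h autocovariance of the
    series divided by its variance."""
    n = len(x)
    mu = sum(x) / n
    num = sum((x[t + h] - mu) * (x[t] - mu) for t in range(n - h))
    den = sum((x[t] - mu) ** 2 for t in range(n))
    return num / den

def rolling_mse(y, predictor):
    """Walk-forward evaluation: at each step iota, predict y[iota] from
    y[0:iota] only, then score against the observed value."""
    errs = [(predictor(y[:i]) - y[i]) ** 2 for i in range(1, len(y))]
    return sum(errs) / len(errs)

def standard_predictor(history):
    """The standard OPE approach: average all past estimates."""
    return sum(history) / len(history)

def last_value_predictor(history):
    """Naive random-walk (ARIMA(0,1,0)) forecast: repeat the last value."""
    return history[-1]
```

On a drifting series, even this last-value forecaster beats the running mean, which is the qualitative effect the experiments below measure with full ARIMA models.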
We perform this experiment using both the current standard OPE approach, which computes the sample mean of performance over all the available data, and using our new time series prediction approach. Notice that this scheme is not perfect. Even if an estimator perfectly predicts $\rho(\pi_e,\iota)$ for every $\iota$, it will be reported as having non-zero mean squared error. This is due to the high variance of $\operatorname{OPE}$, which gets conflated with the variance of $\hat y$ in our estimate of mean squared error. Although this means that the mean squared errors that we report are not good estimates of the mean squared error of the estimators, $\hat y$, the variance-conflation problem impacts all methods nearly equally. So, in the absence of ground truth knowledge, the reported mean squared error values are a reasonable measure of how accurate the methods are relative to each other. The domain we consider is digital marketing using the data from the large company in the hotel and entertainment industry as described in Section \ref{subsec:motivatehotel}. We refer to this domain as the \textit{Hotel} domain. For this domain, and all others, we used ordinary importance sampling for $\operatorname{OPE}$. Recall that the performance of the evaluation policy appears to drop initially---the probability of a user clicking decays from a remarkably high $10\%$ down to a near-zero probability---before it rises back to close to its starting level. Recall also that using a two-sided Student's $t$-test we found that the true mean during the first $100,\!000$ trajectories was different from the true mean during the subsequent $60,\!000$ trajectories with $p=1.6\times 10^{-33}$, so the non-stationarity that we see is likely not noise. We collected additional data from the website of a large company in the financial industry, and used the same proprietary policy improvement algorithm to find a new policy that we might consider deploying for this customer. 
There appears to be less long-term non-stationarity in this data, and a two-sided Student's $t$-test did not detect a difference between the early and late performance of the evaluation policy. We refer to this data as the \textit{Bank} domain. \subsection{Results} \begin{figure}% \centering \includegraphics[width=0.35\columnwidth]{figs/acf-hotel-ctr.pdf}% \includegraphics[width=0.35\columnwidth]{figs/hotel-ctr.pdf}% \\ \includegraphics[width=0.35\columnwidth]{figs/acf-bank-ctr.pdf}% \includegraphics[width=0.35\columnwidth]{figs/bank-ctr.pdf}% \caption{(Top) Hotel domain. The left plot shows the autocorrelation for the time series, where it is evident that the signal is nonstationary. The right plot compares the TSP approach with the standard approach. TSP outperforms the standard approach, since the series is nonstationary. The time series was aggregated at the hour level. (Bottom) Bank domain. The left plot shows the autocorrelation for the time series, where it is evident that the signal is stationary. The right plot compares the TSP approach with the standard approach. They both perform the same, since the series is stationary. The time series was aggregated at the hour level.}% \label{fig:hotel-ctr}% \end{figure} We applied our TSP algorithm for NOPE, described in Algorithm \ref{alg:TSP_POPE}, to the hotel and bank data sets. The plots in Figure \ref{fig:hotel-ctr} all take the same form: the plots on the left are autocorrelation plots that show whether or not there appears to be non-stationarity in the data. As a rule of thumb, if the ACF values are within the dotted blue lines, then there is not sufficient evidence to conclude that there is non-stationarity. However, if the ACF values lie outside the dotted blue lines, it suggests that there is non-stationarity. The plots on the right depict the expected return (which is the expected CTR for the hotel and bank data sets) as predicted by several different methods. 
The black curves are the target values---the observed mean OPE estimate over a small time interval. For each episode number, our goal is to compute the value of the black curve given all of the previous values of the black curve. The blue curve does this using the standard method, which simply averages the previous black points. The red curve is our newly proposed method, which uses ARIMA to predict the next point on the black curve---to predict the performance of the evaluation policy during the next episode. Above the plots we report the sample \textit{root mean squared error} (RMSE) for our method, \textit{tsp}, and the standard method, \textit{standard}. Consider the results on the hotel data set, which are depicted in Figure \ref{fig:hotel-ctr} (Top). The red curve (our method) tracks the binned values (black curve) much better than the blue curve (standard method). Also, the sample RMSE of our method is $0.025$, which is lower than the standard method's RMSE of $0.036$. This suggests that treating the problem as a time series prediction problem results in more accurate estimates. Finally, consider the results on the bank data set, which are depicted in Figure \ref{fig:hotel-ctr} (Bottom). The auto-correlation plot suggests that there is not much non-stationarity in this data set. This lets us examine another interesting question about our method: does it break down when the environment happens to be (approximately) stationary? The results suggest that it does not---our method achieves the same RMSE as the standard method, and the blue and red curves are visually quite similar. An interesting research question is whether our high-confidence policy evaluation and improvement algorithms can be extended to non-stationary MDPs. However, following the TSP algorithm, estimating the performance of a policy with high confidence in a non-stationary MDP reduces to high-confidence time series forecasting, which in complete generality is infeasible.
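To make the comparison concrete, the one-step-ahead evaluation underlying these results can be sketched with a running-mean baseline against a simple autoregressive forecaster (a minimal stand-in for ARIMA, applied to synthetic drifting data rather than the proprietary logs):

```python
import math

def fit_ar1(series):
    # Ordinary least squares fit of y_t = a + b * y_{t-1}.
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    b = cov / var if var > 0 else 0.0
    return my - b * mx, b

def one_step_rmse(series, predict, burn_in=10):
    # Rolling one-step-ahead RMSE of a predictor over the tail of the series.
    errs = [(predict(series[:t]) - series[t]) ** 2
            for t in range(burn_in, len(series))]
    return math.sqrt(sum(errs) / len(errs))

def standard(history):
    # Standard OPE estimate: the running mean of all past interval estimates.
    return sum(history) / len(history)

def tsp(history):
    # Time-series treatment: forecast the next interval estimate with AR(1).
    a, b = fit_ar1(history)
    return a + b * history[-1]

# A drifting per-interval click rate, mimicking nonstationary OPE estimates.
series = [0.10 - 0.0004 * t for t in range(200)]
rmse_standard = one_step_rmse(series, standard)
rmse_tsp = one_step_rmse(series, tsp)
```

On a drifting series the autoregressive forecast tracks the trend while the running mean lags behind it, mirroring the Hotel-domain result; on a stationary series the two estimates coincide.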
An open research direction is to leverage domain-specific structure of the problem and identify conditions under which this problem can be made feasible. \section{Learning from Passive Data} \label{sec:passive} Constructing SR systems is particularly challenging due to the cold start problem. Fortunately, in many real-world problems there is an abundance of sequential data, which are usually `passive' in that they do not include past recommendations. In this section we propose a practical approach that learns from passive data. We use a scalar parameterization that turns a passive model into an active one, and posterior sampling for reinforcement learning (PSRL) to learn the correct parameter value. Here we summarize our work from \cite{DBLP:conf/iui/TheocharousVW17,DBLP:conf/nips/TheocharousWAV18}. The idea is to first learn a model from passive data that predicts the next activity given the history of activities. This can be thought of as the `no-recommendation' or passive model. To create actions for recommending the various activities, we can perturb the passive model. Each perturbed model increases the probability of following the recommendations by a different amount. This leads to a set of models, each one with a different `propensity to listen'. In effect, the single `propensity to listen' parameter is used to turn a passive model into a set of active models. When there are multiple models, one can use online algorithms, such as posterior sampling for reinforcement learning (PSRL), to identify the best model for a new user \cite{Strens:2000:BFR:645529.658114,NIPS2013_5185}. In fact, we used the deterministic schedule PSRL (DS-PSRL) algorithm, and showed in \cite{DBLP:conf/nips/TheocharousWAV18} that our parameterization satisfies its assumptions. The overall solution is shown in Figure \ref{fig:passive-data-solution}.
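As a concrete sketch, this perturbation can be written in a few lines, treating the passive model's next-activity distribution as a dictionary (a minimal illustration; the exact exponent-based form we used is Equation \ref{eq:poi-dynamics} in the Action Creation subsection):

```python
def perturbed_next_prob(passive, a, theta):
    """Boost the recommended activity `a` by raising its passive probability
    to the power 1/theta (theta >= 1), rescaling the rest to renormalize.

    passive: dict mapping each activity to P(activity | history), with
    P(a | history) < 1 so the normalizer is well defined.
    """
    boosted = passive[a] ** (1.0 / theta)
    # Scale the remaining probability mass so the distribution sums to one.
    z = sum(p for s, p in passive.items() if s != a) / (1.0 - boosted)
    return {s: boosted if s == a else p / z for s, p in passive.items()}
```

With $\theta = 1$ the passive model is recovered unchanged; larger $\theta$ models users with a higher propensity to listen.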
\begin{figure*}[ht] \centering \includegraphics[width=1.0\textwidth]{figs/architecture} \caption[ ]{The first 4 steps are done offline and are used to create and solve a discrete set of MDPs, one for each value of $\theta$. Step 5 implements the DS-PSRL algorithm of this paper.} \label{fig:passive-data-solution} \end{figure*} \subsection{Sequence Modeling} The first step in the solution is to model past sequences of activities. Because the number of activities is usually finite and discrete, and because the activity a person does next depends on the history of activities done, we chose to model activity sequences using probabilistic suffix trees (PST). PSTs are a compact way of modeling the most frequent suffixes, or histories, of a discrete alphabet $S$ (e.g., a set of activities). The nodes of a PST represent suffixes (or histories). Each node is associated with a probability distribution for observing every symbol in the alphabet, given the node suffix \citep{pst-R-package}. Given a PST model one can easily estimate the probability of the next symbol $s=s_{t+1}$ given the history of symbols $X=(s_1, s_2 \dots s_t)$ as $P(s | X)$. An example PST is shown in Figure \ref{fig:pst}. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{figs/pst} \caption{An example probabilistic suffix tree. The circles represent suffixes. In this tree, the suffixes are \{(1), (3,1), (4,1), (2,4,1), (4)\}. The rectangles show the probability of observing the next symbol given the suffix. The alphabet in this example is \{1,2,3,4\}.} \label{fig:pst} \end{figure} The log likelihood of a set of sequences can easily be computed as $\log(\mathcal{L})=\sum_{s \in S} \log(P(s|X))$, where $s$ ranges over all the symbols appearing in the data and $X$ is the longest suffix (node) available in the tree for each symbol. For our implementation we learned PSTs using the {\it pstree} algorithm from \citep{pst-R-package}.
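A toy dictionary-based PST suffices to illustrate the next-symbol lookup and the log likelihood above (a hypothetical minimal implementation, not the {\it pstree} package):

```python
import math

def next_symbol_prob(pst, history, symbol):
    """P(symbol | history): back off to the longest suffix stored in the tree.

    pst: dict mapping suffix tuples (most recent symbol last) to next-symbol
    probability distributions; the root suffix () must be present.
    """
    for start in range(len(history) + 1):
        suffix = tuple(history[start:])
        if suffix in pst:
            return pst[suffix].get(symbol, 0.0)
    raise KeyError("PST has no root node")

def log_likelihood(pst, sequence):
    # Sum of log P(s_t | s_1 ... s_{t-1}) over the whole sequence.
    return sum(math.log(next_symbol_prob(pst, sequence[:t], s))
               for t, s in enumerate(sequence))
```

For example, with `pst = {(): {1: 0.5, 2: 0.5}, (1,): {1: 0.9, 2: 0.1}}`, the history $(1)$ predicts symbol $1$ with probability $0.9$, while the history $(2)$ backs off to the root distribution.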
The {\it pstree} algorithm can take as input multiple parameters, such as the depth of the tree, the minimum number of occurrences of a suffix, and parametrized tree pruning methods. To select the best set of parameters we perform model selection using the modified Akaike Information Criterion (AICc), ${\mathrm {AICc}}=2k-2\log(\mathcal{L})+{\frac {2k(k+1)}{n-k-1}}$, where $\log(\mathcal{L})$ is the log likelihood as defined earlier and $k$ is the number of parameters \citep{akaike-1974}. \subsection{Action Creation} The second step involves the creation of action models for representing various personas. An easy way to create such a parameterization is to perturb the passive dynamics of the crowd PST (a global PST learned from all the data). Each perturbed model increases the probability of listening to the recommendation by a different amount. While there could be many functions to increase the transition probabilities, in our implementation we did it as follows: \begin{equation} \label{eq:poi-dynamics} P(s|X,a,\theta) = \begin{cases} P(s|X) ^{1/ \theta}, & \text{if } a = s\\ P(s|X)/z(\theta), & \text{otherwise} \end{cases} \end{equation} where $s$ is an activity, $X=(s_1, s_2 \dots s_t)$ a history of activities, and $z(\theta)=\frac{\sum_{s \neq a} P(s|X)} {1-P(s=a|X) ^{1/ \theta}}$ is a normalizing factor. \subsection{Markov Decision Processes}\label{sec:psrl-mdp} The third step is to create MDPs from $P(s|X,a,\theta)$ and compute their policies in the fourth step. It is straightforward to use the PST to compute an MDP model, where the states/contexts are all the nodes of the PST.
If we denote $x$ to be a suffix available in the tree, then we can compute the probability of transitioning from every node to every other node by finding the resulting suffixes in the tree for every additional symbol that an action can produce: $$p(x'|x,a,\theta)=\sum_{s \in S} \one{x'=\text{pst.suffix}(x,s)} p(s|x,a,\theta),$$ where $\text{pst.suffix}(x,s)$ is the longest suffix in the PST of suffix $x$ concatenated with symbol $s$. We set the reward $r(x,a)=f(x,a)$, where $f$ is a function of the suffix history and the recommendation. This gives us a finite and practically small state space. We can use the classic {\it policy iteration} algorithm to compute the optimal policies and value functions $V_\theta^*(x)$. \subsection{Posterior Sampling for Reinforcement Learning}\label{sec:psrl} The fifth step is to use online learning to identify the true user parameter. For this we used a posterior sampling for reinforcement learning (PSRL) algorithm called deterministic schedule PSRL (DS-PSRL) \cite{DBLP:conf/nips/TheocharousWAV18}. The DS-PSRL algorithm, shown in Figure~\ref{alg:lazy}, changes the policy in an exponentially rare fashion: if the length of the current episode is $L$, the next episode has length $2L$. This switching schedule ensures that the total number of policy switches is $O(\log T)$. \begin{figure}[ht] \begin{center} \framebox{\parbox{8cm}{ \begin{algorithmic} \STATE {\bf Inputs}: $P_1$, the prior distribution of $\theta_*$. \STATE $L \leftarrow 1$. \FOR{$t\gets 1,2,\dots$} \IF{$t = L $} \STATE Sample $\widetilde\theta_{t}\sim P_t$. \STATE $L \leftarrow 2L$. \ELSE \STATE $\widetilde\theta_{t} \leftarrow \widetilde\theta_{t-1}$. \ENDIF \STATE Calculate near-optimal action $a_t \leftarrow \pi^*(x_t, \widetilde\theta_t)$. \STATE Execute action $a_t$ and observe the new state $x_{t+1}$. \STATE Update $P_t$ with $(x_t,a_t,x_{t+1})$ to obtain $P_{t+1}$.
\ENDFOR \end{algorithmic} }} \end{center} \caption{The DS-PSRL algorithm with a deterministic schedule of policy updates.} \label{alg:lazy} \end{figure} The algorithm makes three assumptions. First, it assumes that the MDP is weakly communicating. This is a standard assumption, and under it the optimal average loss satisfies the Bellman equation. Second, it assumes that the dynamics are parametrized by a scalar parameter and satisfy a smoothness condition. \begin{ass}[Lipschitz Dynamics] \label{ass:lipschitz} There exists a constant $C$ such that for any state $x$ and action $a$ and parameters $\theta,\theta'\in \Theta \subseteq \Re$, \[ \norm{P(.|x,a,\theta) - P(.|x,a,\theta')}_1 \le C \abs{\theta-\theta'} \;. \] \end{ass} Third, it makes a concentrating posterior assumption, which states that the variance of the difference between the true parameter and the sampled parameter gets smaller as more samples are gathered. \begin{ass}[Concentrating Posterior] \label{ass:concentrating} Let $N_{j}$ be one plus the number of steps in the first $j$ episodes. Let $\widetilde\theta_{j}$ be sampled from the posterior at the current episode $j$. Then there exists a constant $C'$ such that \[ \max_{j} \EE{ N_{j-1} \abs{\theta_{*} - \widetilde\theta_{j}}^2 } \le C' \log T \;. \] \end{ass} Assumption~\ref{ass:concentrating} simply says that the variance of the posterior decreases given more data. In other words, we assume that the problem is learnable and not a degenerate case. Assumption~\ref{ass:concentrating} was shown to hold for two general categories of problems: finite MDPs and linearly parametrized problems with Gaussian noise \cite{Abbasi-Yadkori-Szepesvari-2015}. Under these assumptions, the following theorem can be proven~\cite{DBLP:conf/nips/TheocharousWAV18}.
\begin{thm} \label{thm:main} Under Assumptions~\ref{ass:lipschitz} and \ref{ass:concentrating}, the regret of the DS-PSRL algorithm is bounded as \[ R_T = \widetilde{O}(C \sqrt{C' T}), \] where the $\widetilde{O}$ notation hides logarithmic factors. \end{thm} Notice that the regret bound in Theorem~\ref{thm:main} does not directly depend on $S$ or $A$. Moreover, notice that the regret bound is smaller if the Lipschitz constant $C$ is smaller or the posterior concentrates faster (i.e., $C'$ is smaller). \subsection{Satisfying the Assumptions} Here we summarize how the parameterization in Equation \ref{eq:poi-dynamics} satisfies Assumptions \ref{ass:lipschitz} and \ref{ass:concentrating}. \paragraph{Lipschitz Dynamics} We can show that the dynamics are Lipschitz continuous: \begin{lemma} \label{lemma:poi-lipschitz} (Lipschitz Continuity) Assume the dynamics are given by Equation \ref{eq:poi-dynamics}. Then for all $\theta, \theta' \geq 1$ and all $X$ and $a$, we have \[ \| P(\cdot|X,a,\theta) - P(\cdot|X,a,\theta') \|_1 \leq \frac{2}{e} |\theta -\theta'|. \] \end{lemma} \paragraph{Concentrating Posterior} We can also show that Assumption~\ref{ass:concentrating} holds. Specifically, we can show that under mild technical conditions, we have \[ \max_j \EE{ N_{j-1} \abs{\theta_{*} - \widetilde\theta_{j}}^2 } =O(1). \] Please refer to~\cite{DBLP:conf/nips/TheocharousWAV18} for the proofs. \section{Optimizing for Recommendation Acceptance} \label{sec:acceptance} Accepting recommendations needs deeper consideration than simply predicting the click-through probability of an offer. In this section we examine two acceptance factors: the `propensity to listen' and `recommendation fatigue'. The `propensity to listen' is a byproduct of the passive data solution shown in Section~\ref{sec:passive}. `Recommendation fatigue' is the problem of people quickly ceasing to pay attention to recommendations, such as ads, if they are presented too often.
The ability of RL algorithms to solve delayed-reward problems gives a natural solution to this fundamental marketing problem. For example, if the decision each day were whether or not to recommend some product, with the final goal of a purchase at some point in time, then RL would naturally optimize the right sending schedule and thus avoid fatigue. In this section we present experimental results for a Point-of-Interest (POI) recommendation system that addresses both the `propensity to listen' and the `recommendation fatigue' problems \cite{DBLP:conf/iui/TheocharousVW17}. We experimented with a points-of-interest domain, using the Yahoo! Flickr Creative Commons 100M (YFCC100M) dataset~\cite{Thomee:2016:YND:2886013.2812802}, which consists of 100M Flickr photos and videos. This dataset also comprises meta-information about the photos, such as the date/time taken, geo-location coordinates, and the accuracy of these geo-location coordinates. The geo-location accuracy ranges from world level (least accurate) to street level (most accurate). We used location sequences that were mapped to POIs near Melbourne, Australia\footnote{The data and pre-processing algorithms are publicly available at https://github.com/arongdari/flickr-photo}. After preprocessing and removing loops, we had 7246 trajectories and 88 POIs. We trained a PST using the data and performed various experiments to test the ability of our algorithm to quickly optimize the cumulative reward for a given user. We used $\theta=\{1,10,20\}$ and ran experiments assuming the true user to be any of those $\theta$. For the reward, we used a signal in $[0,1]$ indicating the frequency/desirability of the POIs. We computed the frequency from the data. The action space was a recommendation for each POI (88 POIs), plus a null action. All actions but the null action had a cost of $0.2$ of the reward.
Recommending a POI that was already seen (e.g., in the current suffix) had an additional cost of $0.4$ of the reward. This was done to reduce the number of recommendations, i.e., to address the fatigue factor. We compared DS-PSRL with greedy policies. Greedy policies do not solve the underlying MDP but rather choose the action with the maximum immediate reward, which is equivalent to classic Thompson sampling for contextual bandits. PSRL can likewise be thought of as Thompson sampling for MDPs. We also compared with the optimal solution, which is the one that knows the true model from the beginning. Our experiments are shown in Tables \ref{tab:reward} and \ref{tab:fatigue} and Figure \ref{fig:exp}. DS-PSRL quickly learns to optimize the average reward. At the same time it produces more reward than the greedy approach, while minimizing the fatigue factor. \begin{table}[!htb] \vspace{0pt}\centering% \begin{tabular}{r r l} & {\small \textbf{GREEDY}} & {\small \textbf{MDP}} \\ \toprule $*$ & 0.45 & 0.5 \\ TS & 0.32 & 0.42 \\ \bottomrule \\ \end{tabular} \caption{Average reward comparisons between Thompson sampling and the optimal policy in hindsight, denoted by $*$. The column labels indicate the type of policy being used.} \label{tab:reward} \end{table} \begin{table}[!htb] \vspace{0pt}\centering% \begin{tabular}{r r l} Time & {\small \textbf{GREEDY}} & {\small \textbf{MDP}} \\ \toprule 1 & 72 & 72 \\ 2 & 71 & 51 \\ 3 & 72 & 0 \\ 4 & 71 & 72 \\ 5 & 72 & 51 \\ 6 & 71 & 0 \\ \bottomrule \\ \end{tabular} \caption{The actions taken by each algorithm for the first 6 recommendations, for a recommendation cost of 0.3. Thompson sampling for MDPs with a deterministic switching schedule (DS-PSRL) does not give a recommendation at every step, and yet achieves higher reward.
In a way, it solves the recommendation fatigue problem.}\label{tab:fatigue} \end{table} \begin{figure} \centering \includegraphics[width=0.6\textwidth]{figs/exp3} \caption[ ]{The dotted lines show the performance assuming we knew the true $\theta$. DS-PSRL, denoted by solid lines, learns quickly for each different true $\theta$.} \label{fig:exp} \end{figure} \section{Capacity-aware Sequential Recommendations} \label{sec:constraints} So far we have considered recommendation systems that treat each user individually, ignoring the collective effects of recommendations. However, ignoring the collective effects can result in diminished utility for the user, for example through overcrowding at the high-value points of interest~(POI) considered in Section~\ref{sec:acceptance}. In this section we summarize a solution that can optimize for both latent factors and resource constraints in SR systems~\cite{DBLP:conf/atal/NijsTVWS18}. To incorporate collective effects in recommendation systems, we extend our model to a multi-agent system with global \emph{capacity constraints}, representing, for example, the maximum capacity for visitors at a POI. Furthermore, to handle latent factors, we model each user as a \emph{partially observable} decision problem, where the hidden state factor represents the user's latent interests. An optimal decision policy for this partially observable problem chooses recommendations that find the best possible trade-off between exploration and exploitation. Unfortunately, both global constraints and partial observability make finding the optimal policy intractable in general. However, we show that the structure of this problem can be exploited through a novel belief-space sampling algorithm, which bounds the size of the state space by a limit on the regret incurred from switching from the partially observable model to the most likely fully observable model.
We show how to decouple constraint satisfaction from sequential recommendation policies, resulting in algorithms which issue recommendations to thousands of agents while respecting constraints. \subsection{Model of capacity-aware sequential recommendation problem} \label{sec:constraints:model} While PSRL~(Section~\ref{sec:psrl}) eventually converges to the optimal policy, it will never select actions which are not part of the optimal policy for any~$\text{MDP}_{\theta}$, even if this action would immediately reveal the true parameters~$\theta_{\ast}$ to the learner. In order to reason about such information-gathering actions, a recommender should explicitly consider the decision-theoretic value of information~\cite{Howard1966}. To do so, we follow~\cite{Chades2012} in modeling such a hidden-model MDP as a Mixed-Observability MDP (MOMDP). The state space of a MOMDP model factors into a fully observable factor~$x\in X$ and a partially observable factor~$y\in Y$, each with its own transition function,~$T_X(x'\mid x,y,a)$ and $T_Y(y' \mid x,y,a,x')$. An observation function~$\Omega(o \mid a, y')$ exists to inform the decision maker about transitions of the hidden factor. However, in addition to the observations, the decision maker also conditions its policy~$\pi(t,x,o)$ on the observable factor~$x$. Given a parametric MDP~$\langle \Theta, S, A, \bar{R}, \bar{T}, h \rangle$ over a \emph{finite} set of user types~$\Theta$, for example as generated in Section~\ref{sec:psrl-mdp}, we derive an equivalent MOMDP~$\langle X, Y, A, O, T_X, T_Y, R, \Omega, h \rangle$ having elements \begin{equation}% \begin{aligned}% X &= S,\: Y = \Theta, & R(s,\theta,a) &= R_\theta(s,a), & T_X(s' \mid s, \theta, a) &= T_\theta(s' \mid s, a), \\ O &= \{ o_{\textsc{null}} \}, & \Omega(o_{\textsc{null}} \mid a,\theta') &= 1, & T_Y(\theta' \mid s,\theta,a,s') &= {\begin{cases} 1 & \text{if } \theta = \theta', \\ 0 & \text{otherwise}.
\end{cases}} \\ \end{aligned}% \end{equation}% The model uses the latent factor $Y$ to represent the agent's type, selecting the type-specific transition and reward functions based on its instantiation. Because a user's type does not change over the plan horizon, the model is a `stationary' Mixed-Observability MDP~\cite{Martin2017a}. The observation function~$\Omega$ is uninformative, meaning that there is no direct way to infer a user's type. Intuitively, this means the recommender can only learn a user's type by observing state transitions. This gives a recommender model for a single user~$i$, out of a total of $n$~users. To model the global capacities at different points of interest, we employ a consumption function~$C$ and limit vector~$L$ defined over $m$~POIs. The consumption of resource type~$r$ is defined using the function~$C_{r} : S \times A \rightarrow \{ 0, 1 \}$, where 1 indicates that the user is present at~$r$. The limit function~$L_r$ gives POI~$r$'s peak capacity. The optimal (joint) recommender policy satisfies the constraints \emph{in expectation}, optimizing \begin{equation} \max_\pi \mathbb{E} \bigl[ V^{\pi} \bigr] \text{, subject to } \mathbb{E} \bigl[ C^{\pi}_{r,t} \bigr] \leq L_r \quad \forall t, r. \end{equation} For multi-agent problems of reasonable size, directly optimizing this joint policy is infeasible. For such models, Column Generation~(CG;~\cite{Gilmore1961}) has proven to be an effective algorithm~\cite{deNijs2017,Walraven2018,Yost2000}. Agent planning problems are decoupled by augmenting the optimality criterion of the (single-agent) planning problem with a Lagrangian term pricing the expected resource consumption cost $\mathbb{E}[C^{\pi_i}_{r,t}]$, i.e., \begin{equation}\label{eq:cg:plan} \argmax_{\pi_i} {\Bigl( \mathbb{E}[V^{\pi_i}] - \sum_{t,r} \lambda_{t,r} \mathbb{E}[C^{\pi_i}_{r,t}] \Bigr)}\quad\forall i. \end{equation} This routine is used to compute a new policy~$\pi_i$ to be added to the set~$Z_i$ of potential policies of agent~$i$.
These sets form the search space of the CG LP, which optimizes the current best \emph{joint} mix of policies subject to constraints, by solving: \begin{equation}\label{eq:cg:solve} \begin{aligned} \max_{x_{i,j}}\:\:& \sum_{i=1}^n \sum_{\pi_{i,j} \in Z_i} x_{i,j}\,\mathbb{E}[V^{\pi_{i,j}}], \\ \text{s.t.}\:\: & \sum_{i=1}^n \sum_{\pi_{i,j} \in Z_i} x_{i,j}\,\mathbb{E}[C^{\pi_{i,j}}_{r,t}] \leq L_r & \forall r, t, \\ & \sum_{\mathclap{\pi_{i,j} \in Z_i}} x_{i,j} = 1,\text{ and } x_{i,j} \geq 0 &\forall i, j.\\ \end{aligned} \end{equation} Solving this LP results in: 1) a probability distribution over policies, having agents follow a policy with $\Pr(\pi_i = \pi_{i,j})=x_{i,j}$, and 2) a new set of dual prices~$\lambda'_{t,r}$ to use in the subsequent iteration. This routine stops once $\lambda=\lambda'$, at which point a global optimum has been found. \subsection{Bounded belief state space planning} Unfortunately, in every iteration of column generation, we need to find $n$ optimal policies satisfying Equation~\eqref{eq:cg:plan}, which in itself has PSPACE complexity for MOMDPs~\cite{Papadimitriou1987}. Therefore, we propose a heuristic algorithm exploiting the structure of our problems: bounded belief state space planning (Alg.~\ref{alg:boundtree}). To plan for partially observable MDP models it is convenient to reason over belief states~\citep{Kaelbling1998}. In our case, a belief state~$b$ records a probability distribution over the possible types~$\Theta$, with $b(\theta)$ indicating how likely the agent is to be of type~$\theta$. Given a belief state~$b$, the action taken~$a$, and the observation received~$o$, the subsequent belief state~$b'(\theta)$ can be derived by applying Bayes' theorem. In principle, this belief-state model can be used to compute the optimal policy, but the exponential size of $B$ prohibits this. Therefore, approximation algorithms generally focus on a subset of the space~$B'$.
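Because the observation channel is uninformative, this Bayes update reduces to reweighting the type belief by the type-specific transition probabilities; a minimal sketch (with hypothetical names):

```python
def update_belief(b, s, a, s_next, T):
    """Posterior over user types after observing the transition (s, a, s').

    b: dict mapping each type to its probability.
    T[theta](s, a, s_next) = T_theta(s' | s, a), the type-specific dynamics.
    With an uninformative observation function, only the state transition
    carries information about the hidden type.
    """
    post = {theta: p * T[theta](s, a, s_next) for theta, p in b.items()}
    z = sum(post.values())  # probability of the observed transition
    return {theta: p / z for theta, p in post.items()}
```

For instance, if type A moves `right' with probability $0.9$ and type B with probability $0.1$, a single observed `right' transition shifts a uniform belief to $(0.9, 0.1)$.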
\begin{algorithm}[tb] \begin{algorithmic}[1] \STATE Given parametric MDP~$\langle \Theta, S, A, \bar{R}, \bar{T}, h \rangle$ and approximate belief space~$B'$\label{algline:sample} \STATE Plan $\pi_j^{\ast}$ for all $j$ \label{algline:planmdp} \STATE Compute $V_{\theta_i,\pi_j^{\ast}}$ for all $i$, $j$ \label{algline:evalmdp} \STATE Create policy $\pi[b]$ \FOR{time $t = h \to 1$} \FOR{belief point $b \in B'(t)$} \STATE $V[b] = -\infty$ \FOR{action $a \in A$} \STATE $Q[b,a] = R(b,a)$ \FOR{observed next state $s' \in S$} \STATE $b' = \text{updateBelief}(b, a, s')$ \IF{$b' \in B'$} \STATE $Q[b,a] = Q[b,a] + \Pr(s' \mid b, a) \cdot V[b']$ \ELSE \label{algline:mdpex1} \STATE $j = \argmax_j Q\bigl[b', \pi^{\ast}_j\bigr]$ \label{algline:mdpex2} \STATE $\pi[b'] = \pi_j^{\ast}$\label{ch6:algline:minregretpi} \STATE $Q[b,a] = Q[b,a] + \Pr(s' \mid b, a) \cdot \bar{V}\bigl[ b' \bigr]$ \label{algline:mdpex3} \ENDIF \ENDFOR \IF{$Q[b,a] > V[b]$} \STATE $V[b] = Q[b,a]$ \STATE $\pi[b] = a$\label{ch6:algline:bestaction} \ENDIF \ENDFOR \ENDFOR \ENDFOR \STATE \Return $\langle \pi, V[b] \rangle$ \label{ch6:algline:brreturn} \end{algorithmic} \caption{Bounded belief state space planning~\cite{DBLP:conf/atal/NijsTVWS18}.} \label{alg:boundtree} \end{algorithm} When computing a policy~$\pi$ for a truncated belief space $B'$ we have to be careful that we compute unbiased consumption expectations~$\mathbb{E}[C_\pi]$, to guarantee feasibility of the Column Generation solution. This can be achieved if we know the exact expected consumption of the policy at each `missing' belief point \emph{not} in $B'$. For corners of the belief space, where $b(\theta_i) = 1$ (and $b(\theta_j) = 0$ for $i \neq j$), the fact that agent types are stationary ensures that the optimal continuation is the optimal policy for the $\text{MDP}_{\theta_i}$. If we use the same policy in a non-corner belief, policy~$\pi^{\ast}_i$ may instead be applied on a different $\text{MDP}_{\theta_j}$, with probability~$b(\theta_j)$. 
In general, the expected value of choosing policy~$\pi^{\ast}_i$ in belief point~$\langle t,s,b \rangle$ is \begin{equation} Q\bigl[\langle t,s,b \rangle, \pi^{\ast}_i\bigr] = \sum_{j = 1}^{|\Theta|} \Bigl( b(\theta_j) \cdot V^{\theta_j}_{\pi^{\ast}_i}[t,s] \Bigr). \end{equation} For belief points close to corner~$i$, policy~$\pi^{\ast}_i$ will be the optimal policy with high probability. If we take care to construct~$B'$ such that truncated points are close to corners, we can limit our search to the optimal policies of each type, \begin{equation}\label{eqn:ch6:approxpol} \bar{V}\bigl[\langle t,s,b \rangle\bigr] = \max_{\theta_i \in \Theta} Q\bigl[\langle t,s,b \rangle, \pi^{\ast}_i\bigr]. \end{equation} When we apply policy~$\pi^{\ast}_i$ in a belief point that is not a corner, we incur regret proportional to the amount of value lost from getting the type wrong. Policy~$\pi_i^{\ast}$ applied to $\text{MDP}_{\theta_j}$ obtains expected value~$V^{\theta_j}_{\pi_i^{\ast}} \leq V^{\theta_j}_{\pi_j^{\ast}}$ by definition of optimality. Thus, the use of policy $\pi_i^{\ast}$ in belief point $b$ incurs a regret of \begin{equation}\label{eq:regret} \textsc{regret}(b) = \min_i \textsc{regret}(b, i) = \min_i \sum_{j = 1}^{|\Theta|} \biggl( b(\theta_j) \cdot \Bigl( V^{\theta_j}_{\pi_j^{\ast}} - V^{\theta_j}_{\pi_i^{\ast}} \Bigr) \biggr). \end{equation} This regret function can serve as a scoring rule for belief points worth considering in belief space $B'$. Let~$\mathrm{P}(b)$ stand for the probability of belief point~$b$, then we generate all subsequent belief points from initial belief~$b_0$ that meet a threshold (for hyper-parameters minimum probability~$p$ and shape~$\alpha$): \begin{equation}\label{eq:boundedregret} b \in B' \text{ if } \textsc{regret}(b) > \bigl( e^{-\alpha(\mathrm{P}(b)-p)} - e^{-\alpha(1-p)} \bigr) \cdot \textsc{regret}(b_0). 
\end{equation} Algorithm~\ref{alg:boundtree} starts by computing the optimal MDP policy~$\pi_j^{\ast}$ for each type~$\theta_j$, followed by determining the exact expected values~$V^{\theta_i}_{\pi_j^{\ast}}$ of applying these policies to all different user types $\theta_i$. The remainder of the algorithm computes expected values at each belief point in the regret-truncated space~$B'$, according to the typical dynamic programming recursion. However, in the case of a missing point~$b'$, the best MDP policy~$\pi_{j}^{\ast}$ is instead selected (line~\ref{algline:mdpex2}), and the expected value of using this MDP policy is computed according to the belief state. The resulting policy~$\pi$ thus consists of two stages: the maximally valued action stored in~$\pi[b]$ is selected, unless $b \notin B'$, at which point MDP policy~$\pi_j^{\ast}$ replaces~$\pi$ for the remaining steps. \subsection{Empirical evaluation of scalability versus quality} By bounding the exponential growth of the state space, Algorithm~\ref{alg:boundtree} trades off solution quality for scalability. To assess this trade-off, we perform an experiment on the POI recommendation problem introduced in Section~\ref{sec:acceptance}. We compare with the highly scalable PSRL on the one hand, and the state-of-the-art mixed-observability MDP planner SARSOP~\cite{Kurniawata2008} on the other. We consider a problem consisting of 5~POIs, 3~user types, 50~users, and PST depth~1. For this experiment we measure the quality of the computed policy as the mean over 1,000~simulations per instance, solving~$5$~instances per setting of the horizon. We consider two settings: the regular single-recommendation case, and a dual-recommendation case where the recommender is allowed to give an alternative to the main recommendation, which may provide more opportunities to gather information in each step.
\begin{figure}[hb] \centering \begin{tikzpicture}[scale=0.96,transform shape,every node/.style={inner sep=0}] \node at (0,0) {\includegraphics{./figs/dual-summary}}; \node[anchor=base] at ( -3.55, 3.55) {\small\textbf{Single} recommendation}; \node[anchor=base] at ( 1.10, 3.55) {\small\textbf{Dual} recommendations}; \node[anchor=base] at ( -3.35, -3.8) {Horizon ($h$)}; \node[anchor=base] at ( 1.30, -3.8) {Horizon ($h$)}; \node[anchor=base,rotate=90] at ( -6.38, 1.82) {Mean value}; \node[anchor=base,rotate=90] at ( -6.38, -1.45) {Runtime (m)}; \end{tikzpicture} \caption{Solution quality and planning time of the different sequential recommendation planners, as a function of the horizon.} \label{fig:performance} \end{figure} Figure~\ref{fig:performance} presents the results. The top row presents the observed mean reward, while the bottom row presents the required planning time in minutes. We observe that for our constrained finite-horizon problems, SARSOP quickly becomes intractable, even when the discount factor is set very low. However, by not optimizing for information value, PSRL obtains significantly lower lifetime value. Our algorithm finds policies which do maximize information value, while at the same time remaining tractable through its effective bounding condition on state space growth. We note that its runtime stops increasing significantly beyond~$h=20$, as a result of the bounded growth of the state space. \section{Large Action Spaces} \label{sec:large-actions} In many real-world recommendation systems the number of actions can be prohibitively large. Netflix, for example, must choose among a few thousand movies to recommend. For SR systems the difficulty is even more severe, since the search space grows exponentially with the planning horizon. In this section we show how to learn action embeddings for action generalization.
Most model-free reinforcement learning methods leverage state representations (embeddings) for generalization, but either ignore structure in the space of actions or assume the structure is provided \emph{a priori}. We show how a policy can be decomposed into a component that acts in a low-dimensional space of action representations and a component that transforms these representations into actual actions. These representations improve generalization over large, finite action sets by allowing the agent to infer the outcomes of actions similar to actions already taken. We provide an algorithm to both learn and use action representations and provide conditions for its convergence. The efficacy of the proposed method is demonstrated on large-scale real-world problems \cite{DBLP:conf/icml/ChandakTKJT19}. \begin{figure}[h] \centering \includegraphics[scale=0.17]{figs/execution_new.png} \quad\quad\quad\quad\quad \includegraphics[width=0.45\textwidth]{figs/representation_space_2.png} \caption{ % (Left) The structure of the proposed overall policy, $\pi_o$, consisting of $f$ and $\pi_i$, that learns action representations to generalize over large action sets. (Right) Illustration of the probability induced for three actions by the probability density of $\pi_i(e|s)$ on a $1$-D embedding space. The $x$-axis represents the embedding, $e$, and the $y$-axis represents the probability. The colored regions represent the mapping $a=f(e)$, where each color is associated with a specific action. } \label{Fig:execution} \end{figure} \subsection{Generalization over Actions} The benefits of capturing the structure in the underlying state space of MDPs is a well understood and a widely used concept in RL. State representations allow the policy to generalize across states. Similarly, there often exists additional structure in the space of actions that can be leveraged. 
We hypothesize that exploiting this structure can enable quick generalization across actions, thereby making learning with large action sets feasible. To bridge the gap, we introduce an action representation space, $\mathcal{E} \subseteq \mathbb{R}^{ d}$, and consider a factorized policy, $\pi_o$, parameterized by an embedding-to-action mapping function, $f \colon \mathcal{E} \to \mathcal{A}$, and an internal policy, $\pi_i \colon \mathcal{S} \times \mathcal{E} \to [0,1]$, such that the distribution of $A_t$ given $S_t$ is characterized by \begin{equation} E_t \sim \pi_i(\cdot |S_t), \hspace{2cm} A_t = f(E_t). \label{eqn:decomposed-policy} \end{equation} Here, $\pi_i$ is used to sample $E_t \in \mathcal{E}$, and the function $f$ deterministically maps this representation to an action in the set $\mathcal{A}$. Both these components together form an \textit{overall policy}, $\pi_o$. Figure \ref{Fig:execution} (Right) illustrates the probability of each action under such a parameterization. With a slight abuse of notation, we use $f^{-1}(a)$ as a one-to-many function that denotes the set of representations that are mapped to the action $a$ by the function $f$, i.e., $f^{-1}(a) \coloneqq \{e\in\mathcal E:f(e)=a\}$. In the following sections we discuss the existence of an optimal policy $\pi_o^*$ and the learning procedure for $\pi_o$. To elucidate the steps involved, we split it into four parts. First, we show that there exist $f$ and $\pi_i$ such that $\pi_o$ is an optimal policy. Then we present the supervised learning process for the function $f$ when $\pi_i$ is fixed. Next we give the policy gradient learning process for $\pi_i$ when $f$ is fixed. Finally, we combine these methods to learn $f$ and $\pi_i$ simultaneously. \subsection{Existence of $\pi_i$ and $f$ to Represent an Optimal Policy} In this section, we aim to establish a condition under which $\pi_o$ can represent an optimal policy. 
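To make the decomposition concrete, the sampling scheme in \eqref{eqn:decomposed-policy} can be sketched in a few lines of Python. Everything below is illustrative rather than part of the method's specification: the nearest-neighbour choice of $f$, the Gaussian internal policy, and all dimensions are assumptions made only for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 100 discrete actions, each identified by a 2-D embedding.
n_actions, d = 100, 2
action_embeddings = rng.normal(size=(n_actions, d))

def f(e):
    """Deterministic embedding-to-action map f: E -> A.
    Here: the action whose stored embedding is nearest to e."""
    return int(np.argmin(np.linalg.norm(action_embeddings - e, axis=1)))

def sample_internal(state):
    """Stand-in for sampling E_t ~ pi_i(.|s): a state-dependent Gaussian."""
    mean = np.tanh(state[:d])            # any bounded, state-dependent mean
    return mean + 0.1 * rng.normal(size=d)

# Overall policy pi_o: sample E_t ~ pi_i(.|S_t), then set A_t = f(E_t).
state = rng.normal(size=4)
e_t = sample_internal(state)
a_t = f(e_t)
```

Note that $f$ is deterministic, so all stochasticity of $\pi_o$ comes from $\pi_i$, in line with the deterministic-mapping assumption made below.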
Consequently, we then define the optimal set of $\pi_o$ and $\pi_i$ using the proposed parameterization. To establish the main results, we begin with the necessary assumptions. The characteristics of the actions can be naturally associated with how they influence state transitions. In order to learn a representation for actions that captures this structure, we consider a standard Markov property, often used for learning probabilistic graphical models \cite{ghahramani2001introduction}, and assume that the transition information can be sufficiently encoded to infer the action that was executed. \begin{ass} \label{ass:A1} Given an embedding $E_t$, $A_t$ is conditionally independent of $S_t$ and $S_{t+1}$: {\small $ P(A_t|S_t,S_{t+1}) =\!\! \int_{\mathcal{E}} \!\!\!P(A_t|E_t=e) P(E_t=e|S_t,S_{t+1})\,\mathrm{d}e. $ } \end{ass} \begin{ass} \label{ass:A2} Given the embedding $E_t$, the action $A_t$ is deterministic and is represented by a function $f:\mathcal E \to \mathcal A$, i.e., $\exists a \text{ such that } P(A_t=a|E_t=e)=1$. \end{ass} % % We now establish a necessary condition under which our proposed policy can represent an optimal policy. % This condition will also be useful later when deriving learning rules. % \begin{lemma} \label{lemma:bellman} Under Assumptions \eqref{ass:A1}--\eqref{ass:A2}, which together define a function $f$, for all $\pi$, there exists a $\pi_i$ such that \begin{align} v^\pi(s) = \sum_{a \in \mathcal{A}} \int_{f^{-1}(a)} \pi_i(e|s) q^\pi(s, a)\, \mathrm{d}e. \label{eqn:lemma-1} \end{align} \end{lemma} % The proof is available in \cite{DBLP:conf/icml/ChandakTKJT19}. % % Following Lemma \ref{lemma:bellman}, we use $\pi_i$ and $f$ to define the overall policy as \begin{align} \pi_o(a|s) &\coloneqq \int_{f^{-1}(a)}\pi_i(e|s)\,\mathrm{d}e.
\label{eqn:optimal-policy} \end{align} \begin{thm} Under Assumptions \eqref{ass:A1}--\eqref{ass:A2}, which together define a function $f$, there exists an overall policy, $\pi_o$, such that $v^{\pi_o}=v^{\star}$. \label{thm:optimal-overall-policy} \end{thm} \begin{proof} This follows directly from Lemma \ref{lemma:bellman}. % Because the state and action sets are finite, the rewards are bounded, and $\gamma \in [0,1)$, there exists at least one optimal policy. % For any optimal policy $\pi^\star$, the corresponding state-value and state-action-value functions are the unique $v^\star$ and $q^\star$, respectively. % By Lemma \ref{lemma:bellman} there exist $f$ and $\pi_i$ such that \begin{align} v^\star(s)&= \sum_{a \in \mathcal{A}} \int_{f^{-1}(a)}\pi_i(e|s) q^\star(s,a)\,\mathrm{d}e. \label{eqn:optimal-overall-policy} \end{align} % Therefore, there exist $\pi_i$ and $f$ such that the resulting $\pi_o$ has the state-value function $v^{\pi_o}=v^{\star}$, and hence it represents an optimal policy. \end{proof} % Note that Theorem \ref{thm:optimal-overall-policy} establishes the existence of an optimal overall policy based on equivalence of the state-value function, but does \emph{not} ensure that all optimal policies can be represented by an overall policy. % Using \eqref{eqn:optimal-overall-policy}, we define $\Pi_o^\star \coloneqq \{\pi_o : v^{\pi_o}=v^\star\}$. % Correspondingly, we define the set of \textit{optimal internal policies} as $\Pi_i^\star \coloneqq \{\pi_i : \exists \pi_o^\star \in \Pi_o^\star,\exists f, \pi_o^\star(a|s) = \int_{f^{-1}(a)}\pi_i(e|s)\,\mathrm{d}e \}$. % % % % \subsection{Supervised Learning of $f$ for a Fixed $\pi_i$} \label{section:learn-f} % Theorem \ref{thm:optimal-overall-policy} shows that there exist $\pi_i$ and a function $f$, which helps in predicting the action responsible for the transition from $S_t$ to $S_{t+1}$, such that the corresponding overall policy is optimal. % % However, such a function, $f$, may not be known \emph{a priori}.
% In this section, we present a method to estimate $f$ using data collected from interactions with the environment. By Assumptions \eqref{ass:A1}--\eqref{ass:A2}, $P(A_t|S_t,S_{t+1})$ can be written in terms of $f$ and $P(E_t|S_t,S_{t+1})$. % We propose searching for an estimator, $\hat f$, of $f$ and an estimator, $\hat g(E_t|S_t,S_{t+1})$, of $P(E_t|S_t,S_{t+1})$ such that a reconstruction of $P(A_t|S_t,S_{t+1})$ is accurate. % Let this estimate of $P(A_t|S_t,S_{t+1})$ based on $\hat f$ and $\hat g$ be { \begin{equation} \small \hat P(A_t|S_t,S_{t+1}) = \int_\mathcal{E} \!\! \hat f (A_t|E_t\!=\!e) \hat g(E_t\!=\!e|S_t,S_{t+1})\,\mathrm{d}e. \label{eqn:action-rep-estimator} \end{equation}\textnormal{} } % % One way to measure the difference between $P(A_t|S_t,S_{t+1})$ and $\hat P(A_t|S_t,S_{t+1})$ is the expected (over states from the on-policy distribution) Kullback-Leibler (KL) divergence \begin{align} \text{KL}(P(A_t|S_t,S_{t+1}) || \hat P(A_t|S_t,S_{t+1}))=& -\mathbf{E} \left [\sum_{a \in \mathcal{A}} P(a|S_t,S_{t+1}) \ln \left ( \frac{\hat P(a|S_t,S_{t+1})}{P(a|S_t,S_{t+1}) } \right ) \right ] \\ =& -\mathbf{E} \left [ \ln \left ( \frac{\hat P(A_t|S_t,S_{t+1})}{P(A_t|S_t,S_{t+1})} \right ) \right ]. \label{eqn:KL-sample} \end{align} % Since the observed transition tuples, $(S_t,A_t,S_{t+1})$, contain the action responsible for the given $S_t$ to $S_{t+1}$ transition, % an on-policy sample estimate of the KL divergence can be computed readily using \eqref{eqn:KL-sample}. % We adopt the following loss function based on the KL divergence between $P(A_t|S_t,S_{t+1})$ and $\hat P(A_t|S_t,S_{t+1})$: \begin{align} \mathcal{L}(\hat f, \hat g) &= - \mathbf{E}\left [ \ln \left (\hat P(A_t|S_t,S_{t+1}) \right )\right ], \label{Eqn:self-supervised-loss} \end{align} where the denominator in \eqref{eqn:KL-sample} is not included in \eqref{Eqn:self-supervised-loss} because it does not depend on $\hat f$ or $\hat g$.
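A minimal numpy sketch of the estimator \eqref{eqn:action-rep-estimator} and the loss \eqref{Eqn:self-supervised-loss}, under two simplifying assumptions that are ours rather than the method's: $\hat g$ returns a point estimate of $E_t$ (collapsing the integral to a single evaluation), and $\hat f$ scores actions by a softmax over inner products with a learned embedding table.

```python
import numpy as np

rng = np.random.default_rng(1)
n_actions, d, d_state = 8, 3, 4

# Illustrative parameters: a linear map for g-hat, an embedding table for f-hat.
W_g = rng.normal(scale=0.1, size=(2 * d_state, d))
action_emb = rng.normal(scale=0.1, size=(n_actions, d))

def g_hat(s, s_next):
    """Point estimate of E_t from the transition (a degenerate density)."""
    return np.concatenate([s, s_next]) @ W_g

def f_hat(e):
    """Estimate of P(A_t | E_t = e): softmax over embedding inner products."""
    logits = action_emb @ e
    z = np.exp(logits - logits.max())
    return z / z.sum()

def loss(batch):
    """Sample estimate of the loss: mean negative log-likelihood of the
    observed action under the reconstructed distribution."""
    return float(np.mean([-np.log(f_hat(g_hat(s, s_next))[a])
                          for s, a, s_next in batch]))

batch = [(rng.normal(size=d_state), int(rng.integers(n_actions)),
          rng.normal(size=d_state)) for _ in range(16)]
```

Minimizing this loss by stochastic gradient descent over `W_g` and `action_emb` would play the role of the supervised procedure; note that rewards never enter the computation.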
% If $\hat f$ and $\hat g$ are parameterized, their parameters can be learned by minimizing the loss function, $\mathcal{L}$, using a supervised learning procedure. % \begin{figure}[t] \centering \includegraphics[width=0.65\textwidth]{figs/learning_step.png} \caption{% (a) Given a state transition tuple, functions $g$ and $f$ are used to estimate the action taken. % The red arrow denotes the gradients of the supervised loss \eqref{Eqn:self-supervised-loss} for learning the parameters of these functions. % (b) During execution, an internal policy, $\pi_i$, can be used to first select an action representation, $e$. % The function $f$, obtained from previous learning procedure, then transforms this representation to an action. % The blue arrow represents the internal policy gradients \eqref{Eqn:internal_gradient} obtained using Lemma \ref{prop:local-policy-gradient} to update $\pi_i$. } \label{Fig:architecture-graph} \end{figure} A computational graph for this model is shown in Figure \ref{Fig:architecture-graph}. % % Note that, while $\hat f$ will be used for $f$ in an overall policy, $\hat g$ is only used to find $\hat f$, and will not serve an additional purpose. As this supervised learning process only requires estimating $P(A_t|S_t,S_{t+1})$, % it does not require (or depend on) the rewards. % This partially mitigates the problems due to sparse and stochastic rewards, since an alternative informative supervised signal is always available. % This is advantageous for making the action representation component of the overall policy learn quickly and with low variance updates. \subsection{Learning $\pi_i$ For a Fixed $f$} \label{section:learn-internal-policy} % A common method for learning a policy parameterized with weights $\theta$ is to optimize the discounted start-state objective function, $ J(\theta) := \sum_{s \in \mathcal{S}} d_0(s) v^\pi(s). 
$ % For a policy with weights $\theta$, the expected performance of the policy can be improved by ascending the \emph{policy gradient}, $\frac{\partial J(\theta)}{\partial \theta}$. % Let the state-value function associated with the internal policy, $\pi_i$, be $v^{\pi_i}(s) = \mathbf{E}[\sum_{t=0}^{\infty}\gamma^tR_{t} |s, \pi_i, f]$, and the state-action value function $q^{\pi_i}(s,e) = \mathbf{E}[\sum_{t=0}^{\infty}\gamma^t R_{t} |s, e, \pi_i, f]$. % We then define the performance function for $\pi_i$ as: % \begin{align} J_i(\theta) := \sum_{s \in \mathcal{S}} d_0(s) v^{\pi_i}(s) \label{Eqn:internal-Performance-function}. \end{align} % Viewing the embeddings as the actions of the agent with policy $\pi_i$, the policy gradient theorem \cite{sutton2000policy} states that the unbiased \cite{thomas2014bias} gradient of \eqref{Eqn:internal-Performance-function} is % \begin{align} \frac{\partial J_i(\theta)}{\partial \theta} = \sum_{t=0}^{\infty}\mathbf{E}\left [ \gamma^t \int_\mathcal{E} q^{\pi_i}(S_t, e) \frac{\partial}{\partial \theta} \pi_i(e|S_t) \, \mathrm{d}e\right ], \label{Eqn:internal_gradient} \end{align} % % where the expectation is over states from $d^\pi$, as defined in \cite{sutton2000policy} (which is not a true distribution, since it is not normalized). % The parameters of the internal policy can be learned by iteratively updating them in the direction of $\partial J_i(\theta) /\partial \theta$. Since there are no special constraints on the policy $\pi_i$, any policy gradient algorithm designed for continuous control, such as DPG \cite{silver2014deterministic}, PPO \cite{schulman2017proximal}, or NAC \cite{bhatnagar2009natural}, can be used out of the box.
However, note that the performance function associated with the overall policy, $\pi_o$ (consisting of function $f$ and the internal policy parameterized with weights $\theta$), is: % \begin{align} J_o(\theta,f) = \sum_{s \in \mathcal{S}} d_0(s) v^{\pi_o}(s) \label{Eqn:Performance-function}. \end{align} The ultimate requirement is the improvement of this overall performance function, $J_o(\theta,f)$, and not just $J_i(\theta)$. % So, how useful is it to update the internal policy, $\pi_i$, by following the gradient of its own performance function? The following lemma answers this question. \begin{lemma} For all deterministic functions, $f$, which map each point, $e \in \mathbb{R}^{ d}$, in the representation space to an action, $a \in \mathcal{A}$, the expected updates to $\theta$ based on $\frac{\partial J_i(\theta)}{\partial \theta}$ are equivalent to updates based on $\frac{\partial J_o(\theta,f)}{\partial \theta}$. % That is, % \begin{align*} \frac{\partial J_o(\theta,f)}{\partial \theta} = \frac{\partial J_i(\theta)}{\partial \theta}. \end{align*} \label{prop:local-policy-gradient} \end{lemma} % The proof is available in \cite{DBLP:conf/icml/ChandakTKJT19}. % The chosen parameterization for the policy has this special property, which allows $\pi_i$ to be learned using its internal policy gradient. % Since this gradient update does not require computing the value of any $\pi_o(a|s)$ explicitly, the potentially intractable computation of $f^{-1}$ in \eqref{eqn:optimal-policy} required for $\pi_o$ can be avoided. % Instead, $\partial J_i(\theta) / \partial \theta$ can be used directly to update the parameters of the internal policy while still optimizing the overall policy's performance, $J_o(\theta,f)$. 
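Because of Lemma \ref{prop:local-policy-gradient}, updating $\pi_i$ with an ordinary score-function estimator also ascends the overall objective. Below is a toy REINFORCE-style sketch of one such update; the state-independent Gaussian $\pi_i$ and the synthetic reward are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 2                              # embedding dimension
theta = np.ones(d)                 # mean of a fixed-variance Gaussian pi_i
sigma, alpha, gamma = 0.5, 0.05, 0.99

def grad_log_pi(e):
    """Score function d/d theta of log pi_i(e) for the Gaussian policy."""
    return (e - theta) / sigma**2

# Roll out one episode in embedding space with a toy reward that prefers
# embeddings near the origin, then take one gradient ascent step on J_i.
embeddings = [theta + sigma * rng.normal(size=d) for _ in range(20)]
rewards = [-float(np.sum(e ** 2)) for e in embeddings]
returns = [sum(gamma ** k * r for k, r in enumerate(rewards[t:]))
           for t in range(len(rewards))]
grad = np.mean([G * grad_log_pi(e)
                for G, e in zip(returns, embeddings)], axis=0)
theta_new = theta + alpha * grad
```

Any off-the-shelf continuous-control policy gradient method could replace this hand-rolled update; by the lemma, the resulting ascent direction for $J_i$ is also an ascent direction for $J_o$.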
\subsection{Learning $\pi_i$ and $f$ Simultaneously} \label{section:learn-simultaneously} Since the supervised learning procedure for $f$ does not require rewards, a few initial trajectories can contain enough information to begin learning a useful action representation. % As more data becomes available it can be used for fine-tuning and improving the action representations. % \subsubsection{Algorithm} % We call our algorithm \textbf{p}olicy \textbf{g}radients with \textbf{r}epresentations for \textbf{a}ctions (PG-RA). % % % PG-RA first initializes the parameters of the action representation component by sampling a few trajectories using a random policy and minimizing the supervised loss defined in \eqref{Eqn:self-supervised-loss}. % If additional information is known about the actions, as assumed in prior work \cite{dulac2015deep}, it can also be considered when initializing the action representations. % Optionally, once these action representations are initialized, they can be kept fixed. % In Algorithm \ref{Alg:1}, Lines $2$--$9$ illustrate the online update procedure for all of the parameters involved. % Each time step in the episode is denoted by $t$. % For each step, an action representation is sampled and is then mapped to an action by $\hat f$. % Having executed this action in the environment, the observed reward is then used to update the internal policy, $\pi_i$, using \textit{any} policy gradient algorithm. % Depending on the policy gradient algorithm, if a critic is used then semi-gradients of the TD-error are used to update the parameters of the critic. % In other cases, like in REINFORCE \cite{williams1992simple} where there is no critic, this step can be ignored. % The observed transition is then used in Line $9$ to update the parameters of $\hat f$ and $\hat g$ so as to minimize the supervised learning loss \eqref{Eqn:self-supervised-loss}. % In our experiments, Line $9$ uses a stochastic gradient update.
% \IncMargin{1em} \begin{algorithm2e}[t] Initialize action representations \\%memory buffer $\Omega$ \\ \For {$episode = 0,1,2...$}{ \For {$t = 0,1,2...$} { Sample action embedding, $E_t$, from $\pi_i(\cdot|S_t) $ \\ $A_t = \hat f(E_t)$\\ Execute $A_t$ and observe $S_{t+1}, R_{t}$ \\ Update $\pi_i$ using \textit{any} policy gradient algorithm\\ % Update critic (if any) to minimize TD error\\ Update $\hat f$ and $\hat g$ to minimize $\mathcal L$ defined in \eqref{Eqn:self-supervised-loss} } } \caption{Policy Gradient with Representations for Action (PG-RA)} \label{Alg:1} \end{algorithm2e} \DecMargin{1em} \subsubsection{PG-RA Convergence} If the action representations are held fixed while learning the internal policy, then as a consequence of Lemma \ref{prop:local-policy-gradient}, convergence of our algorithm directly follows from previous two-timescale results \cite{borkar1997actor,bhatnagar2009natural}. % Here we show that learning both $\pi_i$ and $f$ simultaneously using our PG-RA algorithm can also be shown to converge by using a three-timescale analysis. % Similar to prior work \cite{bhatnagar2009natural,degris2012off,konda2000actor}, for analysis of the updates to the parameters, $\theta \in \mathbb{R}^{d_\theta}$, of the internal policy, $\pi_i$, we use a projection operator $\Gamma : \mathbb{R}^{d_\theta} \rightarrow \mathbb{R}^{d_\theta}$ that projects any $x \in \mathbb{R}^{d_\theta}$ to a compact set $\mathcal{C}\subset \mathbb R^{d_\theta}$. % We then define an associated vector field operator, $\hat \Gamma$, that projects any gradients leading outside the compact region, $\mathcal{C}$, back to $\mathcal{C}$. % % Practically, however, we do not project the iterates to a constraint region as they are seen to remain bounded (without projection). 
% Formally, we make the following assumptions. % \begin{ass} \label{ass:differentiable} For any state--action-representation pair $(s,e)$, the internal policy, $\pi_i(e|s)$, is continuously differentiable in the parameter $\theta$. \end{ass} \begin{ass} \label{ass:projection} The updates to the parameters, $\theta \in \mathbb{R}^{d_\theta}$, of the internal policy, $\pi_i$, include a projection operator $\Gamma : \mathbb{R}^{d_\theta} \rightarrow \mathbb{R}^{d_\theta}$ that projects any $x \in \mathbb{R}^{d_\theta}$ to a compact set $\mathcal{C} = \{x|c_i(x) \leq 0, i=1,...,n\} \subset \mathbb{R}^{d_\theta}$, where $c_i(\cdot), i=1,...,n$ are real-valued, continuously differentiable functions on $\mathbb{R}^{d_\theta}$ that represent the constraints specifying the compact region. For each $x$ on the boundary of $\mathcal C$, the gradients of the active $c_i$ are considered to be linearly independent. \end{ass} % \begin{ass} \label{ass:param-bounded} The iterates $\omega_t$ and $\phi_t$ satisfy $\underset{t}{\mathrm{sup}} \ (|| \omega_t||) < \infty$ and $\underset{t}{\mathrm{sup}} \ (|| \phi_t||) < \infty$. \end{ass} % \begin{thm} \label{thm:convergence} Under Assumptions \eqref{ass:A1}--\eqref{ass:param-bounded}, the internal policy parameters $\theta_t$ converge to $\mathcal{\hat Z} = \left\{x \in \mathcal{C}|\hat \Gamma\left(\frac{\partial J_i(x)}{\partial \theta}\right)=0\right \}$ as $t \rightarrow \infty$, with probability one. \end{thm} \begin{proof} (Outline) We consider three learning rate sequences, such that the update recursion for the internal policy is on the slowest timescale, the critic's update recursion is on the fastest, and the action representation module's update recursion is on an intermediate timescale. % With this construction, we leverage the three-timescale analysis technique \cite{borkar2009stochastic} and prove convergence. % The complete proof is available in \cite{DBLP:conf/icml/ChandakTKJT19}.
\end{proof} \subsection{Experimental Analysis} We evaluate our proposed algorithms on the following domains. % \paragraph{Maze: } As a proof-of-concept, we constructed a continuous-state maze environment where the state comprises the coordinates of the agent's current location. % The agent has $n$ equally spaced actuators (each actuator moves the agent in the direction the actuator is pointing towards) around it, and it can choose whether each actuator should be on or off. % Therefore, the size of the action set is exponential in the number of actuators, that is, $|\mathcal{A}| = 2^n$. % The net outcome of an action is the vectorial summation of the displacements associated with the selected actuators. % The agent receives a small penalty for each time step, and a reward of $100$ upon reaching the goal position. % To make the problem more challenging, random noise was added to the action $10\%$ of the time and the maximum episode length was $150$ steps. % % This environment is a useful test bed as it requires solving a long-horizon task in an MDP with a large action set and a single goal reward. % Further, we know the Cartesian representation for each of the actions, and can thereby use it to visualize the learned representation, as shown in Figure \ref{Fig:emb}. % % \paragraph{Real-world recommender systems: } % We consider two real-world applications of recommender systems that require decision making over \textit{multiple time steps}. % First, a web-based video-tutorial platform, which has a recommendation engine that suggests a series of tutorial videos on various software. % The aim is to meaningfully engage the users in learning how to use the software and to convert novice users into experts in their respective areas of interest. % The tutorial suggestion at each time step is made from a large pool of available tutorial videos on several software products. % The second application is professional multi-media editing software.
% Modern multimedia editing software often contains many tools that can be used to manipulate the media, and this wealth of options can be overwhelming for users. % In this domain, an agent suggests which of the available tools the user may want to use next. % The objective is to increase user productivity and assist users in achieving their end goal. % For both of these applications, an existing log of users' click-stream data was used to create an $n$-gram based MDP model for user behavior \cite{shani2005mdp}. % In the tutorial recommendation task, user activity was observed over a three-month period. % Sequences of user visits were aggregated to obtain over $29$ million clicks. Similarly, over a month-long period, sequential usage patterns of the tools in the multi-media editing software were collected to obtain a total of over $1.75$ billion user clicks. % Tutorials and tools that had fewer than $100$ clicks in total were discarded. % The remaining $1498$ tutorials and $1843$ tools for the web-based tutorial platform and the multi-media software, respectively, were used to create the action set for the MDP model. % The MDP had a continuous state space, where each state consisted of the feature descriptors associated with each item (tutorial or tool) in the current $n$-gram. % Rewards were chosen based on a surrogate measure for the difficulty level of tutorials and the popularity of final outcomes of user visits in the multi-media editing software, respectively. % Since such data is sparse, only $5\%$ of the items had rewards associated with them, and the maximum reward for any item was $100$. % Often the problem of recommendation is formulated as a contextual bandit or collaborative filtering problem, but as shown in \cite{TheocharousTG15}, these approaches fail to capture the long-term value of the prediction.
% Solving this problem for a longer time horizon with a large number of actions (tutorials/tools) makes this real-life problem a useful and a challenging domain for RL algorithms. % \subsection{Results} \subsubsection*{Visualizing the learned action representations } \begin{figure*}[ht] \centering \includegraphics[ height=2.8cm, width=4cm]{figs/maze.png} \includegraphics[ height=3.2cm, width=4cm]{figs/12_true.png} \includegraphics[ height=3.2cm, width=4cm]{figs/12_learnt.png} \caption{ (a) The maze environment. % The star denotes the goal state, the red dot corresponds to the agent and the arrows around it are the $12$ actuators. % Each action corresponds to a unique combination of these actuators. % Therefore, in total $2^{12}$ actions are possible. % (b) 2-D representations for the displacements in the Cartesian co-ordinates caused by each action, and (c) learned action embeddings. % In both (b) and (c), each action is colored based on the displacement ($\Delta x$, $\Delta y$) it produces. % That is, with the color \lbrack R= $\Delta x$, G=$\Delta y$, B=$0.5$\rbrack, where $\Delta x$ and $\Delta y$ are normalized to $[0,1]$ before coloring. % Cartesian actions are plotted on co-ordinates ($\Delta x$, $\Delta y$), and learned ones are on the coordinates in the embedding space. % Smoother color transition of the learned representation is better as it corresponds to preservation of the \textit{relative} underlying structure. % The `squashing' of the learned embeddings is an artifact of a non-linearity applied to bound its range. } \label{Fig:emb} \end{figure*} To understand the internal working of our proposed algorithm, we present visualizations of the learned action representations on the maze domain. A pictorial illustration of the environment is provided in Figure \ref{Fig:emb}. Here, the underlying structure in the set of actions is related to the displacements in the Cartesian coordinates. 
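For reference, the Cartesian ground-truth representation compared against in Figure \ref{Fig:emb} follows directly from the maze definition: an action's net displacement is the vector sum of the displacements of its selected actuators. A small sketch, under the assumption (ours, for illustration) that the $n$ actuators produce unit-length displacements equally spaced around the circle:

```python
import math

def displacement(action, n):
    """Net (dx, dy) of one maze action: the vector sum of the unit
    displacements of the actuators switched on in the n-bit index."""
    on = [i for i in range(n) if (action >> i) & 1]
    dx = sum(math.cos(2 * math.pi * i / n) for i in on)
    dy = sum(math.sin(2 * math.pi * i / n) for i in on)
    return dx, dy

n = 12                      # 12 actuators -> 2**12 = 4096 distinct actions
# Opposite actuators cancel, producing the small central displacements,
# while a lone actuator yields the maximal displacement in its direction.
```

For instance, switching on actuators 0 and 6 (which point in opposite directions) yields a net displacement of essentially zero.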
This provides an intuitive base case against which we can compare our results. In Figure \ref{Fig:emb}, we provide a comparison between the action representations learned using our algorithm and the underlying Cartesian representation of the actions. It can be seen that the proposed method extracts useful structure in the action space. Actions which correspond to settings where the actuators on opposite sides of the agent are selected result in relatively small displacements of the agent. These are the ones in the center of the plot. In contrast, the maximum displacement in any direction is caused by selecting only the actuators facing in that particular direction. Actions corresponding to those are at the edge of the representation space. The smooth color transition indicates that not only the magnitude but also the direction of displacement is represented. Therefore, the learned representations efficiently preserve the relative transition information among all the actions. % To make the exploration step tractable in the internal policy, $\pi_i$, we bound the representation space along each dimension to the range [$-1,1$] using a \textit{Tanh} non-linearity. This results in `squashing' of these representations around the edge of this range. \subsubsection*{Performance Improvement } \begin{figure*}[t] \centering \includegraphics[width=0.29\textwidth]{figs/grid4_perf.png} \hfill \includegraphics[width=0.29\textwidth]{figs/grid8_perf.png} \hfill \includegraphics[width=0.29\textwidth]{figs/grid12_perf.png} \\ \includegraphics[width=0.29\textwidth]{figs/helpx_perf.png} \hspace{20pt} \includegraphics[width=0.29\textwidth]{figs/highbeam_perf.png} \caption{(Top) Results on the Maze domain with $2^4, 2^8,$ and $2^{12}$ actions, respectively. (Bottom) Results on a) the Tutorial MDP and b) the Software MDP. AC-RA and DPG-RA are the variants of the PG-RA algorithm that use actor-critic (AC) and DPG, respectively.
The shaded regions correspond to one standard deviation and were obtained using $10$ trials.} \label{Fig:performance-plots} \end{figure*} % The plots in Figure \ref{Fig:performance-plots} for the Maze domain show how the performance of the standard actor-critic (AC) method deteriorates as the number of actions increases, even though the goal remains the same. However, with the addition of an action representation module it is able to capture the underlying structure in the action space and consistently performs well across all settings. Similarly, for both the tutorial and the software MDPs, standard AC methods fail to reason over longer time horizons under such an overwhelming number of actions, mostly choosing one-step actions that have high returns. In comparison, instances of our proposed algorithm are not only able to achieve significantly higher return, up to $2\times$ and $3\times$ in the respective tasks, but they also do so much more quickly. These results reinforce our claim that learning action representations allows implicit generalization of feedback to other actions embedded in proximity to the executed action. Further, under the PG-RA algorithm, only a fraction of the total parameters, the ones in the internal policy, are learned using the high-variance policy gradient updates. % The other set of parameters, associated with the action representations, is learned by a supervised learning procedure. % This reduces the variance of updates significantly, thereby making the PG-RA algorithms learn a better policy faster. % % This is evident from the plots in Figure \ref{Fig:performance-plots}. % These advantages allow the internal policy, $\pi_i$, to quickly approximate an optimal policy without succumbing to the curse of large action sets. \section{Dynamic Actions} \label{sec:dynaimic-actions} Besides the large number of actions, in many real-world sequential decision making problems the number of available actions (decisions) can vary over time.
While problems like catastrophic forgetting, changing transition dynamics, and changing reward functions have been well studied in the lifelong learning literature, the setting where the action set changes remains unaddressed. In this section, we present an algorithm that autonomously adapts to an action set whose size changes over time. To tackle this open problem, we break it into two problems that can be solved iteratively: inferring the underlying, unknown structure in the space of actions, and optimizing a policy that leverages this structure. We demonstrate the efficacy of this approach on large-scale real-world lifelong learning problems \cite{DBLP:journals/corr/abs-1906-01770}. \subsection{Lifelong Markov Decision Process} \label{sec:lmdp} % MDPs, the standard formalization of decision making problems, are not flexible enough to encompass lifelong learning problems wherein the action set size changes over time. % In this section we extend the standard MDP framework to model this setting. In real-world problems where the set of possible actions changes, there is often underlying structure in the set of all possible actions (those that are available, and those that may become available). % For example, tutorial videos can be described by feature vectors that encode their topic, difficulty, length, and other attributes; % in robot control tasks, primitive locomotion actions like left, right, up, and down could be encoded by their change to the Cartesian coordinates of the robot, etc. % Critically, we will not assume that the agent knows this structure, merely that it exists. % If actions are viewed from this perspective, then the set of all possible actions (those that are available at one point in time, and those that might become available at any time in the future) can be viewed as a vector space, $\mathcal E \subseteq \mathbb R^d$.
% % \begin{figure}[t] \centering \includegraphics[width=0.7\textwidth]{figs/observed.png} \caption{ Illustration of a \emph{lifelong MDP} where $\mathcal M_0$ is the base MDP. For every change $k$, $\mathcal M_k$ builds upon $\mathcal M_{k-1}$ by including the newly available set of actions $\mathcal A_k$. The internal structure in the space of actions is hidden and only a set of discrete actions is observed. } \label{Fig:CL-MDP} \end{figure} To formalize the lifelong MDP, we first introduce the necessary variables that govern when and how new actions are added. % We denote the episode number using $\tau$. % Let $I_\tau \in \{0, 1\}$ be a random variable that indicates whether a new set of actions is added or not at the start of episode $\tau$, and let frequency $\mathcal F: \mathbb N \rightarrow [0, 1]$ be the associated probability distribution over episode count, such that $\Pr(I_\tau=1) = \mathcal F(\tau)$. % % % Let $U_\tau \in 2^{\mathcal E}$ be the random variable corresponding to the set of actions that is added before the start of episode $\tau$. % When $I_\tau=1$, we assume that $U_\tau\neq\emptyset$, and when $I_\tau=0$, we assume that $U_\tau = \emptyset$. % Let $\mathcal D_\tau$ be the distribution of $U_\tau$ when $I_\tau=1$, i.e., $U_\tau \sim \mathcal D_\tau$ if $I_\tau=1$. % We use $\mathcal D$ to denote the set $\{\mathcal D_\tau\}$ consisting of these distributions. Such a formulation using $I_\tau$ and $\mathcal D_\tau$ provides fine control over when and how new actions can be incorporated. % This allows modeling a large class of problems where both the distribution over the type of incorporated actions as well as the intervals between successive changes might be irregular. % Often we will not require the exact episode number $\tau$ but instead require $k$, which denotes the number of times the action set is changed.
% Since we do not assume that the agent knows the structure associated with the action, we instead provide actions to the agent as a set of discrete entities, $\mathcal A_k$. % To this end, we define $\phi$ to be a map relating the underlying structure of the new actions to the observed set of discrete actions $\mathcal A_k$ for all $k$, i.e., if the set of actions added is $u_k$, then $\mathcal A_k=\{ \phi(e_i) | e_i \in u_k\}$. % Naturally, for most problems of interest, neither the underlying structure $\mathcal E$, nor the set of distributions $\mathcal D$, nor the frequency of updates $\mathcal F$, nor the relation $\phi$ is known---the agent only has access to the observed set of discrete actions. % We now define the \textit{lifelong Markov decision process} (L-MDP) as $\mathscr{L} = (\mathcal M_0, \mathcal E, \mathcal D, \mathcal F)$, which extends a \textit{base} MDP $\mathcal M_0 = (\mathcal{S}, \mathcal{A},\mathcal{P},\mathcal{R}, \gamma, d_0)$. % % $\mathcal{S}$ is the set of all possible states that the agent can be in, called the state set. $\mathcal A$ is the discrete set of actions available to the agent, and for $\mathcal M_0$ we define this set to be empty, i.e., $\mathcal A=\emptyset$. % When the set of available actions changes and the agent observes a new set of discrete actions, $\mathcal A_k$, then $\mathcal M_{k-1}$ transitions to $\mathcal M_k$, such that $\mathcal A$ in $\mathcal M_k$ is the set union of $\mathcal A$ in $\mathcal M_{k-1}$ and $\mathcal A_k$. % Apart from the available actions, other aspects of the L-MDP remain the same throughout. % An illustration of the framework is provided in Figure \ref{Fig:CL-MDP}. % % We use $S_t \in \mathcal S$, $A_t \in \mathcal A$, and $R_t \in \mathbb R$ as random variables for denoting the state, action and reward at time $t \in \{0,1,\dotsc\}$ within each episode. 
% % The first state, $S_0$, comes from an initial distribution, $d_0$, and the reward function $\mathcal R$ is defined to be only dependent on the state such that $\mathcal R(s)=\mathbf{E}[R_t|S_t=s]$ for all $s \in \mathcal S$. % We assume that $R_t \in [-R_\text{max},R_\text{max}]$ for some finite $R_\text{max}$. % The reward discounting parameter is given by $\gamma \in [0,1)$. % % $\mathcal{P}$ is the state transition function, such that for all $s,a,s',t$, the function $ \mathcal{P}(s,a,s')$ denotes the transition probability $ P(s'| s, e)$, where $a = \phi(e)$.\footnote{For notational ease, (a) we overload symbol $P$ for representing both probability mass and density; % (b) we assume that the state set is finite, however, our primary results extend to MDPs with continuous states. } % In the most general case, new actions could be completely arbitrary and have no relation to the ones seen before. % In such cases, there is very little hope of lifelong learning by leveraging past experience. % To make the problem more feasible, we resort to a notion of \textit{smoothness} between actions. % Formally, we assume that transition probabilities in an L-MDP are $\rho-$Lipschitz in the structure of actions, i.e., $\exists \rho > 0$ such that % \begin{equation} \forall s, s', e_i, e_j \hspace{5pt} \lVert P(s'| s,e_i) - P(s'| s,e_j) \rVert_1 \leq \rho \lVert e_i - e_j\rVert_1. \label{eqn:lipschitz} \end{equation} % For any given MDP $\mathcal{M}_k$ in $\mathscr L$, an agent's goal is to find a policy, $\pi_k$, that maximizes the expected sum of discounted future rewards. % For any policy $\pi_k$, the corresponding state value function is $v^{\pi_k}(s) = \mathbf{E}[\sum_{t=0}^{\infty}\gamma^t R_{t} |s, \pi_k]$. \subsection{Blessing of Changing Action Sets} \label{sec:blessing} Finding an optimal policy when the set of possible actions is large is difficult due to the curse of dimensionality. 
In the L-MDP setting this problem might appear to be exacerbated, as an agent must additionally adapt to the changing levels of possible performance as new actions become available. This raises the natural question: \textit{as new actions become available, how much does the performance of an optimal policy change?} If it fluctuates significantly, can a lifelong learning agent succeed by continuously adapting its policy, or is it better to learn from scratch with every change to the action set? To answer this question, consider an optimal policy, $\pi^*_k$, for MDP $\mathcal M_k$, i.e., an optimal policy when considering only policies that use actions that are available during the $k^\text{th}$ episode. We now quantify how sub-optimal $\pi^*_k$ is relative to the performance of a hypothetical policy, $\mu^*$, that acts optimally given access to all possible actions. \begin{thm} \label{thm:1} In an L-MDP, let $\epsilon_k$ denote the maximum distance in the underlying structure of the closest pair of available actions, i.e., $\epsilon_k \coloneqq \underset{a_i \in \mathcal A}{\text{sup}} ~ \underset{a_j \in \mathcal A}{\text{inf}} \lVert e_i - e_j \rVert_1$, then \begin{align} v^{\mu^*}(s_0) -v^{\pi^*_k}(s_0) &\leq \frac{\gamma \rho \epsilon_k}{(1 - \gamma)^2} R_{\text{max}}. \label{eqn:thm1} \end{align} \end{thm} The proof is available in \cite{DBLP:journals/corr/abs-1906-01770}. With a bound on the maximum possible sub-optimality, Theorem \ref{thm:1} presents an important connection between achievable performances, the nature of the underlying structure in the action space, and a property of the available actions in any given $\mathcal M_k$. Using this, we can make the following conclusion. \begin{cor} \label{cor:1} Let $\mathcal Y\subseteq \mathcal E$ be the smallest closed set such that $P(U_k \in 2^\mathcal Y)=1$. We refer to $\mathcal Y$ as the element-wise-support of $U_k$.
If $\,\,\forall k$, the element-wise-support of $U_k$ in an L-MDP is $\mathcal E$, then as $k \rightarrow \infty$ the sub-optimality vanishes. That is, $$\lim_{k \rightarrow \infty} v^{\mu^*}(s_0) -v^{\pi^*_k}(s_0) \rightarrow 0. $$ \end{cor} Through Corollary \ref{cor:1}, we can now establish that the change in optimal performance will eventually converge to zero as new actions are repeatedly added. An intuitive way to observe this result would be to notice that every new action that becomes available indirectly provides more information about the underlying, unknown, structure of $\mathcal E$. However, in the limit, as the size of the available action set increases, the information provided by each new action vanishes and thus performance saturates. Certainly, in practice, we can never have $k \rightarrow \infty$, but this result is still advantageous. Even when the underlying structure $\mathcal E$, the set of distributions $\mathcal D$, the change frequency $\mathcal F$, and the mapping relation $\phi$ are all \textit{unknown}, it establishes the fact that the difference between the best performances in \textit{successive changes} will remain bounded and will not fluctuate arbitrarily. This opens up new possibilities for developing algorithms that do not need to start from scratch after new actions are added, but rather can build upon their past experiences using updates to their existing policies that efficiently leverage estimates of the structure of $\mathcal E$ to adapt to new actions. \subsection{Learning with Changing Action Sets} \label{sec:learning} Theorem \ref{thm:1} characterizes what \textit{can be} achieved in principle; however, it does not specify \textit{how} to achieve it---how to find $\pi_k^*$ efficiently. Any parameterized policy, $\pi$, that acts directly in the space of observed actions suffers from one key practical drawback in the L-MDP setting.
That is, the parameterization is deeply coupled with the number of actions that are available: not only is the meaning of each parameter coupled with the number of actions, but often the number of parameters that the policy has is dependent on the number of possible actions. This makes it unclear how the policy should be adapted when additional actions become available. A trivial solution would be to ignore the newly available actions and continue only using the previously available actions. However, this is clearly myopic, and will prevent the agent from achieving the better long-term returns that might be possible using the new actions. To address this parameterization problem, instead of having the policy, $\pi$, act directly in the observed action space, $\mathcal A$, we propose an approach wherein the agent reasons about the underlying structure of the problem in a way that makes its policy parameterization invariant to the number of actions that are available. To do so, we split the policy parameterization into two components. The first component corresponds to the state conditional policy responsible for making the decisions, $\beta : \mathcal S \times \hat{\mathcal E} \rightarrow [0, 1]$, where $\hat {\mathcal E} \subseteq \mathbb{R}^d$. The second component corresponds to $\hat \phi : \hat{ \mathcal E} \times \mathcal A \rightarrow [0,1]$, an estimator of the relation $\phi$, which is used to map the output of $\beta$ to an action in the set of available actions. That is, an $E_t \in \hat{\mathcal E}$ is sampled from $\beta(S_t, \cdot)$ and then $ \hat \phi(E_t)$ is used to obtain the action $A_t$. Together, $\beta$ and $\hat \phi$ form a complete policy, and $\hat {\mathcal E}$ corresponds to the inferred structure in action space.
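A minimal sketch of this two-component parameterization, assuming a linear Gaussian $\beta$ and a softmax $\hat\phi$ over a table of learned action embeddings. These architectural choices, and all names below, are hypothetical simplifications, not the paper's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
d, state_dim = 2, 3  # latent structure dim and state dim (made up)

class TwoComponentPolicy:
    """Sketch of the beta / phi-hat split.

    beta's parameters depend only on the state and latent dimensions, so
    they are invariant to how many discrete actions exist; only phi-hat's
    table of action embeddings grows when new actions become available.
    """
    def __init__(self):
        self.W = rng.normal(size=(d, state_dim))  # beta: state -> mean of e
        self.action_emb = np.empty((0, d))        # phi-hat, initially no actions

    def add_actions(self, new_emb):
        # Adapting to new actions touches only phi-hat's parameters.
        self.action_emb = np.vstack([self.action_emb, new_emb])

    def act(self, state):
        e = self.W @ state + 0.1 * rng.normal(size=d)  # E_t ~ beta(S_t, .)
        logits = self.action_emb @ e                   # phi-hat: softmax over A
        p = np.exp(logits - logits.max())
        p /= p.sum()
        return int(rng.choice(len(p), p=p))            # A_t

policy = TwoComponentPolicy()
policy.add_actions(rng.normal(size=(5, d)))  # a first batch of actions arrives
a = policy.act(np.ones(state_dim))           # an index into the 5 actions
```

Note that calling `add_actions` again leaves `policy.W` (the parameters of $\beta$) untouched, which is the invariance the text describes.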
One of the prime benefits of estimating $\phi$ with $\hat \phi$ is that it makes the parameterization of $\beta$ invariant to the cardinality of the action set---changing the number of available actions does not require changing the number of parameters of $\beta$. Instead, only the parameterization of $\hat \phi$, the estimator of the underlying structure in action space, must be modified when new actions become available. We show next that the update to the parameters of $\hat \phi$ can be performed using \emph{supervised learning} methods that are independent of the reward signal and thus typically more efficient than RL methods. % While our proposed parameterization of the policy using both $\beta$ and $\hat \phi$ has the advantages described above, the performance of $\beta$ is now constrained by the quality of $\hat \phi$, as in the end $\hat \phi$ is responsible for selecting an action from $\mathcal A$. Ideally we want $\hat \phi$ to be such that it lets $\beta$ be both: (a) invariant to the cardinality of the action set for practical reasons and (b) as expressive as a policy, $\pi$, explicitly parameterized for the currently available actions. Similar trade-offs have been considered in the context of learning optimal state-embeddings for representing sub-goals in hierarchical RL \citep{nachum2018near}. For our lifelong learning setting, we build upon their method to efficiently estimate $\hat \phi$ in a way that provides bounded sub-optimality. Specifically, we make use of an additional \textit{inverse dynamics} function, $\varphi$, that takes as input two states, $s$ and $s'$, and produces as output a prediction of which $e \in \mathcal E$ caused the transition from $s$ to $s'$. Since the agent does not know $\phi$, when it observes a transition from $s$ to $s'$ via action $a$, it does \emph{not} know which $e$ caused this transition. So, we cannot train $\varphi$ to make good predictions using the actual action, $e$, that caused the transition. 
Instead, we use $\hat \phi$ to transform the prediction of $\varphi$ from $e \in \mathcal E$ to $a \in \mathcal A$, and train both $\varphi$ and $\hat \phi$ so that this process accurately predicts which action, $a$, caused the transition from $s$ to $s'$. Moreover, rather than viewing $\varphi$ as a deterministic function mapping states $s$ and $s'$ to predictions $e$, we define $\varphi$ to be a \textit{distribution} over $\mathcal E$ given two states, $s$ and $s'$. For any given $\mathcal M_k$ in L-MDP $\mathscr L$, let $\beta_k$ and $\hat \phi_k$ denote the two components of the overall policy and let $\pi_k^{**}$ denote the best overall policy that can be represented using some fixed $\hat \phi_k$. The following theorem bounds the sub-optimality of $\pi_k^{**}$. \begin{thm} \label{thm:2} For an L-MDP $\mathcal M_k$, if there exists a $\varphi : S \times S \times \hat{\mathcal E} \rightarrow [0, 1] $ and $\hat \phi_k : \hat {\mathcal E} \times \mathcal A \rightarrow [0, 1]$ such that \begin{align} \footnotesize \sup_{s \in \mathcal S, a \in \mathcal A} \text{KL}\Big(P(S_{t+1}|S_t=s, A_t=a) \Vert P(S_{t+1}|S_t=s, A_t=\hat A) \Big) &\leq \delta_k^2/2, \label{eqn:lemma1} \end{align} where $\hat A \sim \hat \phi_k(\cdot|\hat E)$ and $\hat E \sim \varphi(\cdot | S_t, S_{t+1})$, then \begin{align*} v^{\mu^*}(s_0) -v^{\pi_k^{**}}(s_0) &\leq \frac{\gamma \left( \rho \epsilon_k + \delta_k \right)}{(1 - \gamma)^2} R_{\text{max}}. \end{align*} \end{thm} See \cite{DBLP:journals/corr/abs-1906-01770} for the proof. By quantifying the impact $\hat \phi$ has on the sub-optimality of achievable performance, Theorem \ref{thm:2} provides the necessary constraints for estimating $\hat \phi$. At a high level, Equation \eqref{eqn:lemma1} ensures that $\hat \phi$ is such that it can be used to generate an action corresponding to any $s$ to $s'$ transition.
This allows $\beta$ to leverage $\hat \phi$ and choose the required action that induces the state transition needed for maximizing performance. Thereby, following \eqref{eqn:lemma1}, sub-optimality would be minimized if $\hat \phi$ and $\varphi$ are optimized to reduce the supremum of the KL divergence over all $s$ and $a$. In practice, however, the agent does not have access to all possible states; rather, it has access to a limited set of samples collected from interactions with the environment. Therefore, instead of the supremum, we propose minimizing the average over all $s$ and $a$ from a set of observed transitions, \begin{align} \!\!\! \mathcal L(\hat \phi, \varphi) \!\!&\coloneqq \!\! \sum_{s \in \mathcal S}\! \sum_{a \in \mathcal A_k} P(s, a) \text{KL}\left(P(s'|s, a) \Vert P(s'|s, \hat a) \right) . \label{eqn:exp_kl} \end{align} Equation \eqref{eqn:exp_kl} suggests that $\mathcal L(\hat \phi, \varphi)$ would be minimized when $\hat a$ equals $a$, but using \eqref{eqn:exp_kl} directly in its current form is inefficient as it requires computing the KL over all probable $s' \in \mathcal S$ for a given $s$ and $a$. To make it practical, we make use of the following property. \begin{prop} \label{prop:lb} For some constant C, $- \mathcal L(\hat \phi, \varphi)$ is lower bounded by \begin{align} \sum_{s \in \mathcal S} \sum_{a \in \mathcal A_k}\sum_{s' \in \mathcal S} P(s, a, s') \Bigg ( \mathbf{E}\left[\log \hat \phi(\hat a|\hat e) \middle | \hat e \sim \varphi(\cdot|s,s') \right] - \text{KL}\Big(\varphi(\hat e|s, s') \Big \Vert P(\hat e|s, s') \Big) \Bigg) + C. ~~~~~\label{eqn:beta_vae} \end{align} \end{prop} As minimizing $\mathcal L(\hat \phi, \varphi)$ is equivalent to maximizing $- \mathcal L(\hat \phi, \varphi)$, we consider maximizing the lower bound obtained from Property \ref{prop:lb}. In this form, it is now practical to optimize \eqref{eqn:beta_vae} just by using the observed $(s, a, s')$ samples.
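As a concrete sketch, the per-transition lower bound can be estimated from samples, here assuming a Gaussian $\varphi$ with a standard-normal prior and a softmax $\hat\phi$ over action embeddings. These modeling choices are illustrative simplifications, not the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 2          # dimension of the latent action structure E-hat
n_actions = 4  # currently available discrete actions

# Hypothetical learned action embeddings (the table behind phi-hat).
action_emb = rng.normal(size=(n_actions, d))

def phi_hat(e):
    """Softmax distribution over available actions given a latent e."""
    logits = action_emb @ e
    z = np.exp(logits - logits.max())
    return z / z.sum()

def kl_to_std_normal(mu, sigma):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), in closed form."""
    return 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - 2.0 * np.log(sigma))

def neg_lower_bound(mu, sigma, a, lam=1.0, n_samples=32):
    """Monte-Carlo estimate of the negated lower bound for one (s, a, s').

    (mu, sigma) parameterize varphi(. | s, s'); a is the observed action.
    """
    recon = 0.0
    for _ in range(n_samples):
        e = mu + sigma * rng.normal(size=d)  # reparameterization trick
        recon += np.log(phi_hat(e)[a])       # log phi-hat(a | e)
    recon /= n_samples
    return -(recon - lam * kl_to_std_normal(mu, sigma))
```

Minimizing `neg_lower_bound` over the parameters producing `(mu, sigma)` and over `action_emb` corresponds to maximizing the bound in \eqref{eqn:beta_vae}, using only observed transitions and no reward signal.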
As this form is similar to the objective of a variational auto-encoder, the inner expectation can be efficiently optimized using the reparameterization trick \citep{kingma2013auto}. $P(\hat e | s, s')$ is the prior on $\hat e$, and we treat it as a hyper-parameter that allows the KL to be computed in closed form. Importantly, note that this optimization procedure only requires individual transitions, $s,a,s'$, and is independent of the reward signal. Hence, at its core, it is a supervised learning procedure. This means that learning good parameters for $\hat \phi$ tends to require far fewer samples than optimizing $\beta$ (which is an RL problem). This is beneficial for our approach because $\hat \phi$, the component of the policy where new parameters need to be added when new actions become available, can be updated efficiently. As both $\beta$ and $\varphi$ are invariant to action cardinality, they do not require new parameters when new actions become available. Additional implementation-level details are available in Appendix F. \subsection{Algorithm} \label{sec:algo} \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{figs/LAICA.png} \caption{An illustration of a typical performance curve for a lifelong learning agent. % The point $(a)$ corresponds to the performance of the current policy in $\mathcal M_k$. % The point $(b)$ corresponds to the performance drop resulting as a consequence of adding new actions. % We call the phase between (a) and (b) the adaptation phase, which aims at minimizing this drop when adapting to a new set of actions. % The point $(c)$ corresponds to the improved performance in $\mathcal M_{k+1}$ obtained by optimizing the policy to leverage the new set of available actions. % $\mu^*$ represents the best performance of the hypothetical policy which has access to the entire structure in the action space.
} \label{Fig:LAICA} \end{figure} % When a new set of actions, $\mathcal A_{k+1}$, becomes available, the agent should leverage the existing knowledge and quickly adapt to the new action set. Therefore, during every change in $\mathcal M_k$, the current best components of the policy, $\beta_{k-1}^*$ and $\hat \phi_{k-1}^*$, in $\mathcal M_{k-1}$ are carried over, i.e., $\beta_k \coloneqq \beta_{k-1}^*$ and $\hat \phi_k \coloneqq \hat \phi_{k-1}^*$. % For lifelong learning, the following property illustrates a way to organize the learning procedure so as to minimize the sub-optimality in each $\mathcal M_k$, for all $k$. % \begin{prop} (Lifelong Adaptation and Improvement)\label{prop:LAICA} In an L-MDP, let $\Delta$ denote the difference of performance between $v^{\mu^*}$ and the best achievable using our policy parameterization; then the overall sub-optimality can be expressed as, \begin{align} v^{\mu^*}(s_0) - v_{_{\mathcal M_1}}^{\beta_1 \hat \phi_1}(s_0) =& \underbrace{\sum_{k=1}^{\infty}\left(v_{_{\mathcal M_k}}^{\beta_{k} \hat \phi_k^*}(s_0) - v_{_{\mathcal M_{k}}}^{\beta_{k} \hat\phi_{k}}(s_0) \right)}_{\text{Adaptation}} + \underbrace{\sum_{k=1}^{\infty}\left(v_{_{\mathcal M_k}}^{\beta_k^* \hat\phi_k^*}(s_0) - v_{_{\mathcal M_k}}^{\beta_k \hat\phi_k^*}(s_0) \right)}_{\text{Policy Improvement}} + \Delta, \label{eqn:adapt_improve} \end{align} where $\mathcal M_k$ is used in the subscript to emphasize the respective MDP in $\mathscr L$. \end{prop} Property \ref{prop:LAICA} illustrates a way to understand the impact of $\beta$ and $\hat \phi$ by splitting the learning process into an adaptation phase and a policy improvement phase. These two iterative phases are the crux of our algorithm for solving an L-MDP $\mathscr L$. Based on this principle, we call our algorithm LAICA: \textit{lifelong adaptation and improvement for changing actions}.
Whenever new actions become available, adaptation is prone to cause a performance drop as the agent has no information about when to use the new actions, and so its initial uses of the new actions may be at inappropriate times. Following Property \ref{prop:lb}, we update $\hat \phi$ so as to efficiently infer the underlying structure and minimize this drop. That is, for every $\mathcal M_k$, $ \hat\phi_k$ is first adapted to $ \hat \phi_k^*$ in the adaptation phase by adding more parameters for the new set of actions and then optimizing \eqref{eqn:beta_vae}. After that, $ \hat \phi_k^*$ is fixed and $\beta_k$ is improved towards $\beta_k^*$ in the policy improvement phase, by updating the parameters of $\beta_k$ using the policy gradient theorem \citep{sutton2000policy}. These two procedures are performed sequentially whenever $\mathcal M_{k-1}$ transitions to $\mathcal M_k$, for all $k$, in an L-MDP $\mathscr L$. An illustration of the procedure is presented in Figure \ref{Fig:LAICA}. A step-by-step pseudo-code for the LAICA algorithm is available in Algorithm 1. The crux of the algorithm is based on the iterative adapt and improve procedure obtained from Property \ref{prop:LAICA}. We begin by initializing the parameters for $\beta_{0}^*, \hat \phi_{0}^*$ and $\varphi_{0}^*$. In Lines $3$ to $5$, for every change in the set of available actions, instead of re-initializing from scratch, the previous best estimates for $\beta, \hat\phi$ and $\varphi$ are carried forward to build upon existing knowledge. As $\beta$ and $\varphi$ are invariant to the cardinality of the available set of actions, no new parameters are required for them. In Line $6$ we add new parameters in the function $\hat \phi$ to deal with the new set of available actions. To minimize the adaptation drop, we make use of Property \ref{prop:lb}. 
Let $\mathcal L^{\text{lb}}$ denote the lower bound for $\mathcal L$, such that, $$ \mathcal L^{\text{lb}}(\hat \phi, \varphi) \coloneqq \mathbf{E}\left[\log \hat \phi(\hat A_t|\hat E_t) \middle | \varphi(\hat E_t|S_t,S_{t+1}) \right] - \lambda \text{KL}\left(\varphi(\hat E_t|S_t, S_{t+1}) \middle \Vert P(\hat E_t|S_t, S_{t+1}) \right).$$ Note that, following the literature on variational auto-encoders, we have generalized \eqref{eqn:beta_vae} to use a Lagrangian $\lambda$ to weight the importance of the KL-divergence penalty \citep{higgins2017beta}. When $\lambda = 1$, it degenerates to \eqref{eqn:beta_vae}. We set the prior $P(\hat e|s, s')$ to be an isotropic normal distribution, which also allows the KL to be computed in closed form \citep{kingma2013auto}. In Lines $7$ to $11$ of Algorithm 1, random actions from the available set of actions are executed and their corresponding transitions are collected in a buffer. Samples from this buffer are then used to maximize the lower bound objective $\mathcal L^{\text{lb}}$ and adapt the parameters of $\hat \phi$ and $\varphi$. The optimized $\hat \phi^*$ is then kept fixed during policy improvement. Lines $16$--$22$ correspond to the standard policy gradient approach for improving the performance of a policy. In our case, the policy $\beta$ first outputs a vector $\hat e$ which gets mapped by $\hat \phi^*$ to an action. The observed transition is then used to compute the policy gradient \citep{sutton2000policy} for updating the parameters of $\beta$ towards $\beta^*$. If a critic is used for computing the policy gradients, then it is also subsequently updated by minimizing the TD error \citep{Sutton1998}. % This iterative process of adaptation and policy improvement continues for every change in the action set size. \IncMargin{1em} \begin{algorithm2e} \label{Alg:laica} \caption{Lifelong Adaptation and Improvement for Changing Actions (LAICA)} \textbf{Initialize} $\beta_{0}^*, \hat \phi_{0}^*, \varphi_{0}^*$.
\\ \For {\text{change} $k = 1,2...$} { $\beta_k \leftarrow \beta_{k-1}^*$ \tikzmark{top} \\ $\varphi_k \leftarrow \varphi_{k-1}^*$ \\ $\hat \phi_k \leftarrow \hat \phi_{k-1}^*$ \\ Add parameters in $\hat \phi_k$ for new actions ~~~~~~~~\tikzmark{right} \tikzmark{bottom} \\ \vspace{10pt} Buffer $\mathbb{B} = \{\}$ \tikzmark{top2} \\ \For {$episode = 0,1,2...$} { \For {$t = 0,1,2...$} { Execute random $a_t$ and observe $s_{t+1}$ \\ Add transition to $\mathbb{B}$ } } \For {$iteration = 0,1,2...$} { Sample batch $b \sim \mathbb{B}$ \\ Update $\hat \phi_k$ and $\varphi_k$ by maximizing $\mathcal L^{\text{lb}}(\hat \phi_k, \varphi_k)$ for $b$ ~~~~~~~~~~\tikzmark{right2} } \tikzmark{bottom2} \\ \vspace{8pt} \tikzmark{top3} \For {$episode = 0,1,2...$}{ \For {$t = 0,1,2...$} { Sample $\hat e_t \sim \beta_k(\cdot|s_t) $ \\ Map $\hat e_t$ to an action $a_t$ using $ \hat \phi_k^*(\hat e_t)$ \\ Execute $a_t$ and observe $s_{t+1}, r_{t}$ \\ Update $\beta_k$ using any policy gradient algorithm ~~~~~~~~~~~~\tikzmark{right3}\\ Update critic by minimizing TD error. \tikzmark{bottom3} } } } \nonl \AddNote{top}{bottom}{right}{Reuse past knowledge.} \AddNote{top2}{bottom2}{right2}{Adapt \\ $\hat \phi_k$ to $\hat \phi_k^*$ } \AddNote{top3}{bottom3}{right3}{Improve \\ $\beta_k$ to $\beta_k^*$} \end{algorithm2e} \DecMargin{1em} \subsection{Empirical Analysis} \label{sec:empirical} In this section, we aim to empirically compare the following methods: \begin{itemize} \item Baseline(1): The policy is re-initialized and the agent learns from scratch after every change. \item Baseline(2): New parameters corresponding to new actions are added/stacked to the existing policy (and previously learned parameters are carried forward as-is). \item LAICA(1): The proposed approach that leverages the structure in the action space. To act in the continuous space of the inferred structure, we use DPG \citep{silver2014deterministic} to optimize $\beta$.
\item LAICA(2): A variant of LAICA which uses an actor-critic \citep{Sutton1998} to optimize $\beta$. \end{itemize} To demonstrate the effectiveness of our proposed method(s) on lifelong learning problems, we consider a maze environment and two domains corresponding to real-world applications, all with a large set of changing actions. % For each of these domains, the total number of actions was randomly split into five equal sets. % Initially, the agent only had the actions available in the first set, and after every change the next set of actions was additionally made available. % In the following paragraphs we briefly outline the domains. \paragraph{Case Study: Real-World Recommender Systems. } % We consider the following two real-world applications of large-scale recommender systems that require decision making over multiple time steps and where the number of possible decisions varies over the lifetime of the system. % % \begin{itemize} \item A web-based video-tutorial platform that has a recommendation engine to suggest a series of tutorial videos. The aim is to meaningfully engage the users in a learning activity. % % In total, $1498$ tutorials were considered for recommendation. \item A professional multi-media editing software, where sequences of tools inside the software need to be recommended. The aim is to increase user productivity and assist users in quickly achieving their end goal. In total, $1843$ tools were considered for recommendation. \end{itemize} For both of these applications, existing logs of users' click-stream data were used to create an $n$-gram based MDP model for user behavior \citep{shani2005mdp}. % Sequences of user interaction were aggregated to obtain over $29$ million and $1.75$ billion user clicks for the tutorial recommendation and the tool recommendation tasks, respectively.
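A minimal sketch of the $n$-gram MDP construction described above, with toy click streams in place of the real logs (the item names and data are made up for illustration):

```python
from collections import Counter, defaultdict

def ngram_mdp(click_streams, n=2):
    """Estimate an n-gram transition model from click logs.

    A state is the tuple of the last (n-1) clicked items; transition
    probabilities are the empirical frequencies of the next click.
    """
    counts = defaultdict(Counter)
    for stream in click_streams:
        for i in range(n - 1, len(stream)):
            state = tuple(stream[i - n + 1:i])
            counts[state][stream[i]] += 1
    # Normalize counts into conditional probabilities P(next | state).
    return {s: {a: c / sum(ctr.values()) for a, c in ctr.items()}
            for s, ctr in counts.items()}

# Two toy sessions on a hypothetical editing tool.
logs = [["intro", "crop", "export"], ["intro", "crop", "filter"]]
model = ngram_mdp(logs, n=2)
# model[("crop",)] -> {"export": 0.5, "filter": 0.5}
```

In the actual domains, each state would additionally carry the feature descriptors of the items in the current $n$-gram, yielding the continuous state space described next.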
% The MDP had a continuous state space, where each state consisted of the feature descriptors associated with each item (tutorial or tool) in the current $n$-gram. % \subsection{Results} \begin{figure*}[t] \centering \includegraphics[width=0.8\textwidth]{figs/Software_perf_stderr.png}\\ \includegraphics[width=0.8\textwidth]{figs/Tutorial_perf_stderr.png} % \caption{Lifelong learning experiments with a changing set of actions in the recommender system domains. % The learning curves correspond to the running mean of the best performing setting for each of the algorithms. The shaded regions correspond to the standard error obtained using $10$ trials. % Vertical dotted bars indicate when the set of actions was changed. % % } \label{Fig:CL-experiments2} \end{figure*} % % The plots in Figure \ref{Fig:CL-experiments2} present the evaluations on the domains considered. % The advantage of LAICA over Baseline(1) can be attributed to its policy parameterization. % The decision making component of the policy, $\beta$, being invariant to the action cardinality, can be readily leveraged after every change without having to be re-initialized. % This demonstrates that efficiently re-using past knowledge can improve data efficiency over the approach that learns from scratch every time. % Compared to Baseline(2), which also does not start from scratch and reuses the existing policy, we notice that the variants of the LAICA algorithm still perform favorably. % As evident from the plots in Figure \ref{Fig:CL-experiments2}, while Baseline(2) does a good job of preserving the existing policy, it fails to efficiently capture the benefit of new actions. While the policy parameters in both LAICA and Baseline(2) are improved using policy gradients, the superior performance of LAICA can be attributed to the adaptation procedure incorporated in LAICA, which aims at efficiently inferring the underlying structure in the space of actions.
% Overall, LAICA(2) performs almost twice as well as both the baselines on all of the tasks considered. Note that even before the first addition of the new set of actions, the proposed method performs better than the baselines. % This can be attributed to the fact that the proposed method efficiently leverages the underlying structure in the action set and thus learns faster. % Similar observations have been made previously \citep{dulac2015deep,he2015deep,bajpai2018transfer,naturalGuy}. \section{Cognitive Bias Aware} \label{sec:bias-aware} While SRs are beginning to find their way into academia and industry, other recommendation technologies such as collaborative filtering \cite{Netflix} and contextual bandits \cite{Li10} have been the mainstream methods. There are hundreds of papers every year in top-tier machine learning conferences advancing the state of the art in collaborative filtering and bandit technologies. Nonetheless, none of these systems truly understands people; rather, they naively optimize expected-utility metrics such as click-through rate. People do not perceive expected values in a rational fashion \cite{Tversky92}. In fact, people have evolved with various cognitive biases to simplify information processing. Cognitive biases are systematic patterns of deviation from norm or rationality in judgment. They are rules of thumb that help people make sense of the world and reach decisions with relative speed. Some of these biases include the perception of risk, collective effects and long-term decision making. In this final section we argue that the next generation of recommendation systems needs to incorporate human cognitive biases. We would like to build recommendation and personalization systems that are aware of these biases, and we would like to do so in a fashion that is a win-win for both the marketer and the consumer. Such technology is not studied in academia and does not exist in the industry \cite{DBLP:conf/um/TheocharousHMS19}.
It has become a well-established result that humans do not reason by maximizing their expected utility, and yet most work in artificial intelligence and machine learning continues to be based on idealized mathematical models of decisions \cite{Machina87,Ellsberg61,VonNeumann44}. Many studies show that, given two logically equivalent choices, people will prefer one over the other based on how the information is presented to them, even if the choices made violate the principle of maximizing expected utility \cite{Allais53}. The Allais paradox is a classic example that demonstrates this type of inconsistency in the choices people make when presented with two different gambling strategies. It was found that people often preferred a gambling strategy with a lower expected utility but with more certain positive gains over a strategy whose expected utility is higher at the cost of more uncertainty. Furthermore, there are a number of biases that have been explored in the context of marketing and eCommerce that influence the manner in which consumers make purchasing decisions. An example of such a bias is the \emph{decoy effect}, which refers to the phenomenon whereby consumers flip their preference for one item over another when presented with a third item with certain characteristics. As another example, consider how fatigue and novelty bias can be incorporated into a movie or book recommendation system. While a recommendation system may have identified a user's preference for a particular genre of movies or books, novelty bias, a well-studied phenomenon in behavioral psychology, suggests that novelty in options might actually yield uplift in system results if modeled appropriately.

\subsection{Cognitive Biases}
Next, we list different types of biases that have a direct impact on decision making and that we envision will be explicitly modelled in future personalization systems.
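The certainty-seeking preference described above can be sketched numerically. The following is a minimal illustration with hypothetical payoffs (the numbers and the concave utility $u(x)=\sqrt{x}$ are our own choices for illustration, not taken from the cited studies): a risk-averse agent prefers a certain \$$50$ over a gamble whose expected value is higher.

```python
# Hypothetical two-gamble example (illustrative numbers, not from the cited
# studies): an agent with concave utility u(x) = sqrt(x) prefers a certain
# $50 over a 50% shot at $120, even though the gamble has higher expected value.
import math

def expected(outcomes, f=lambda x: x):
    """Expectation of f(payoff) over (probability, payoff) pairs."""
    return sum(p * f(x) for p, x in outcomes)

certain = [(1.0, 50.0)]              # gamble A: $50 for sure
risky = [(0.5, 120.0), (0.5, 0.0)]   # gamble B: 50% chance of $120, else nothing

# Expected value favors the risky gamble ...
assert expected(risky) > expected(certain)        # 60 > 50
# ... but expected *utility* with a concave u favors the certain one,
# matching the certainty-seeking behavior described in the text.
u = math.sqrt
assert expected(certain, u) > expected(risky, u)  # sqrt(50) > 0.5*sqrt(120)
```

A full prospect-theoretic treatment would additionally reweight the probabilities themselves; the sketch above only captures the risk-aversion component.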
\subsubsection{Loss, Risk and Ambiguity}
Biases related to loss, risk and ambiguity are some of the most well-studied from a modeling perspective and are relevant to any eCommerce recommendation. These biases could be incorporated by modeling the degree of certainty in experiencing satisfaction from the purchase of a product, or alternatively by modeling the risk of regret associated with the purchase. In \cite{Tversky92}, specific curves describing the degree of loss aversion are derived for gambling examples. From a behavioral science perspective these biases can be described as follows:
\begin{itemize}
\item \textbf{Loss (Regret) Aversion}: Motivated by the tendency to avoid the possibility of experiencing regret after making a choice with a negative outcome, loss aversion refers to the asymmetry between the affinity and the aversion to the same amount of gain and loss, respectively. In other words, this bias refers to the phenomenon whereby a person has a stronger preference for not losing something than for winning that same thing. That is, losing \$$100$ results in a greater loss of satisfaction than the gain in satisfaction caused by winning \$$100$.
\item \textbf{Risk Aversion}: This refers to the tendency to prefer a certain positive outcome over a risky one, despite the chance of the latter being more profitable (in expectation) than the certain one.
\item \textbf{Ambiguity Aversion}: This phenomenon in decision making refers to a general preference for known risks over unknown ones. The difference between risk and ambiguity aversion is subtle: ambiguity aversion refers to the aversion to not having information about the degree of risk involved \cite{Baron94,Ellsberg61}.
\end{itemize}
\subsubsection{Collective Effects}
Another set of biases we believe are important to model relates to how the value of recommended items is likely to be perceived in the context of other items or recommendations.
These comparative differences are important when recommendations are considered in sequence or when collectively surfaced, such as multiple ads on the same web page. These include:
\begin{itemize}
\item \textbf{Contrast Effect}: An object is perceived differently when shown in isolation than when shown in contrast to other objects. For example, a \$$5$ object might seem inexpensive next to an \$$11$ object but expensive next to an object priced at \$$1$ \cite{Plous93}.
\item \textbf{Decoy Effect}: This effect is common in consumer and marketing scenarios, whereby a consumer's preference between two items reverses when they are presented with a third option that is clearly inferior to one of the two items and only partially inferior to the second. The decoy effect can be viewed as a special case of the contrast effect described above.
\item \textbf{Distinction Bias}: This refers to the situation where two items are viewed as more distinct (from each other) when viewed together than when viewed separately.
\item \textbf{Framing Effect}: This refers to the effect on decision making of the manner in which something is presented to a user. That is, the same information presented in different ways can lead to different decision outcomes.
\end{itemize}
\subsubsection{Long Term Decision Making}
Finally, we consider biases that might arise in different parts of the decision making process over time. These biases will all play a role in optimizing future recommendation systems for long-term user value. Some examples include:
\begin{itemize}
\item \textbf{Choice Supportive Bias}: Also referred to as post-purchase rationalization, this bias refers to the tendency to remember past choices as justifiable and in some cases retroactively ascribe a positive sentiment to them.
\cite{Mather00}
\item \textbf{Anchoring (Confirmation) Bias}: Paying more attention to information that supports one's opinions while ignoring or downplaying another set of information. This type of bias includes resorting to preconceived opinions when encountering something new.
\item \textbf{Hyperbolic Discounting Effect}: This bias refers to the tendency to prefer immediate payoffs over those that occur later in time. As an example related to our application of recommendation systems, considering a consumer's preference to receive an object sooner rather than at a later time (for instance, due to shipping time) can have a direct impact on item preference.
\item \textbf{Decision Fatigue}: Although not an explicit cognitive bias, decision fatigue is a phenomenon worth exploring, as it refers to the manner in which decision making deviates from the expected norm as a result of fatigue induced by long periods of decision making.
\item \textbf{Selective Perception}: This refers to the tendency for expectation to bias one's perception of certain phenomena \cite{Chandler11}.
\end{itemize}
\subsection{The Future of Personalization}
In this section, we have argued that the next generation of personalization systems should be designed by explicitly modeling people's cognitive biases. Such a redesign will require developing algorithms that have a more realistic model of how humans perceive the world and will be able to exploit human deviations from perfect rationality.
\begin{figure}[h]
\centering
\quad\input{figs/ValueFun5}
\caption{A piecewise non-linear model of human perception of gains and losses.
Algorithms that optimize expectation view the gain of \$100 and the loss of \$100 as equal, but humans do not.}
\label{fig:prospect_value}
\end{figure}
Recent academic papers have shown that it is possible to model human biases by incorporating behavioral science frameworks, such as prospect theory~\cite{Kahneman79,Tversky92}, into reinforcement learning algorithms \cite{Csaba16}. Traditional work in reinforcement learning is based on maximizing long-term expected utility. Prospect theory will require redesigning reinforcement learning models to reflect human nonlinear processing of probability estimates. These models incorporate the cognitive bias of loss aversion using a theory~\cite{Kahneman79,Tversky92} that models perceived loss as asymmetric to gain, as illustrated in Figure~\ref{fig:prospect_value}. This captures the human perspective in which a potential gain of \$100 is valued less than avoiding a potential loss of \$100. Algorithms that simply maximize expectation would treat these two outcomes equally. We envision that future personalization algorithms will incorporate such models for a wide variety of human cognitive biases, adjusting the steepness of the curves and the value of the inflection points accordingly for each person based on their overall character, their immediate context and the history of their interactions with the system. Unlike contextual bandits, which cannot differentiate between a `visit' and a `user', the next generation of personalization systems will consider a sequence of recent events and interactions with the system when making a recommendation, and will thus be able to incorporate surprise and novelty into the recommendation sequence according to the user's modelled profile. We also argue that the cognitive bias model will be useful in deciding how best to present a recommendation to the user. For example, a person with a high familiarity bias might only want to invest in a stock that she knows, and be less likely to keep a more diverse portfolio.
A recommendation system that had an explicit model of this bias could present the diversified portfolio in a way that makes it appear more familiar, for example by mentioning the user's friends who had similar diversified portfolios. We expect that these next-generation personalization algorithms may initially require a high density of data, but this dependence on data may be ameliorated as we move beyond modelling based solely on click streams and exploit other data sources available in sensor-rich environments. In particular, we envision curated experiences such as visits to theme parks, cruise ships, or hotel chains with loyalty programs. In such environments rich data is available: activity preferences, shop purchases, facility utilization and user queries could all contribute to training a sufficiently effective model. Designing an SR system that understands how people actually reason has a huge potential to retain users and to market products far more effectively than a system that mathematically optimizes some convenient but non-human-like objective function. Understanding how people actually make decisions will help us match them with products that will truly make them happy and keep them engaged with the system for the long term.

\section{Summary and Conclusions}
In this paper we demonstrated, through various real-world use cases, the advantages of strategic recommendation systems and their implementation using RL. To make RL practical, we addressed many fundamental challenges, such as evaluating policies offline with high confidence, safe deployment, non-stationarity, building systems from passive data that do not contain past recommendations, resource-constrained optimization in multi-user systems, and scaling to large and dynamic action spaces. Finally, we presented ideas for what we believe will be the next generation of SRs, which would truly understand people by incorporating human cognitive biases.
\bibliographystyle{spmpsci}
\section{Introduction}
Graphene is a two-dimensional allotrope of carbon, arranged as a honeycomb lattice with a $C_{3v}\otimes Z_2$ symmetry \cite{Wallace_47} that determines its remarkable physical properties \cite{Munoz_16,Peres_10,Goerbig_11,Novoselov_05}. In particular, the electronic spectrum arising from an atomistic tight-binding model displays two non-equivalent points $K_{+},\,K_{-}$ where the conduction and valence bands touch, and in whose vicinity the dispersion relation is approximately linear. This leads to an effective, low-energy continuum model in which the electronic properties of the material are well captured by those of relativistic Dirac fermions in 2D. Among the plethora of physical consequences of this fact that have already been predicted and measured \cite{Munoz_16,Peres_10,Goerbig_11,Novoselov_05,Vozmediano_10,DasSarma_11,Novoselov_04}, we note an interesting experiment that measures the optical transparency of single- and few-layer graphene~\cite{Nair}. The transparency is a physical property determined by the optical conductivity, i.e. the linear response to an electromagnetic field, in the zero-frequency limit. A variety of experiments confirm~\cite{Nair,WASSEI201052,Ma2013,Mak24082010,shou} that the measured transmittance is indeed compatible with the effective single-particle model of relativistic Dirac fermions in graphene. A number of theoretical works have exploited this fact to calculate the light absorption rate in graphene from a ``relativistic'' quantum electrodynamics perspective~\cite{FV-2012,mariel,FV-2016,FV-2011B,FV-2011,Fial-2011,david,saul,Merthe}. An interesting question that remains open is to what extent this effective model, which arises from a microscopic tight-binding model involving only nearest-neighbor hopping, is valid for the description of this optical property.
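For orientation, the scale of the effect is set by the fine-structure constant: the Dirac-fermion picture predicts a universal absorption of $\pi\alpha\approx 2.3\%$ of incident light per graphene layer, i.e. a transmittance of about $97.7\%$. A quick numerical check, using only the standard value of $\alpha$ (and independent of the next-to-nearest-neighbor corrections developed below):

```python
# Back-of-the-envelope check (standard values, independent of this paper's
# model): the Dirac-fermion picture predicts a universal optical absorption
# of pi * alpha per graphene layer, with alpha the fine-structure constant.
import math

alpha = 1 / 137.035999          # fine-structure constant
absorption = math.pi * alpha    # fraction of light absorbed per layer
transmittance = 1 - absorption

assert abs(absorption - 0.0229) < 1e-3     # ~2.3% absorbed
assert abs(transmittance - 0.977) < 1e-3   # ~97.7% transmitted
```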
In this article, we explore the contribution to the optical conductivity arising from the next-to-nearest-neighbor coupling in the atomistic Hamiltonian, included as a quadratic correction to the kinetic energy operator within the continuum effective model for graphene. Such a model has been considered in Ref.~\cite{GNAQ} to fully account for the Anomalous Integer Quantum Hall Effect in this material, and the underlying wave equation is referred to in the literature as the Second Order Dirac Equation~\cite{second}. For our purposes, let us recall that within linear response theory, general Kubo relations allow one to express the transport coefficients in terms of retarded correlators~\cite{Wen}, which for a pair of observables $\hat{O}_1$, $\hat{O}_2$ are defined by ($\zeta = \pm$ for Bosons and Fermions, respectively)
\begin{eqnarray}
C_{O_1,O_2}^{R}(t-t') &=& -i\theta(t - t')\langle [ \hat{O}_1(t),\hat{O}_2(t')]_{-\zeta} \rangle\nonumber \\
&=& -i\theta(t - t')\langle \hat{O}_1(t)\hat{O}_2(t') \rangle +i\theta(t - t')\zeta\langle \hat{O}_2(t')\hat{O}_1(t) \rangle.
\label{eq_C1}
\end{eqnarray}
These retarded correlators differ from the usual time-ordered ones that, by construction, are obtained via functional differentiation of the standard generating functional constructed from a path-integral formulation in quantum field theory. This rather technical inconvenience can be overcome by connecting the different propagators using a Lehmann representation, or alternatively by working in the Matsubara formalism at finite temperature and performing analytic continuation a posteriori~\cite{Wen}. There is, however, a third and more direct alternative, which is to express the generating functional on the contour time path (CTP), also known as the Keldysh formalism in the condensed matter literature~\cite{Rammer,Stefanucci}.
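As a concrete illustration of how a retarded correlator behaves, consider a toy single-mode example (our own choice for illustration, not the graphene model): the retarded propagator $G^{R}(t) = -i\,\theta(t)\,e^{-i\epsilon t}$ of a free mode of energy $\epsilon$ Fourier-transforms, with the usual regulator $e^{-\eta t}$, to $1/(\omega - \epsilon + i\eta)$. This can be checked numerically:

```python
# Toy numerical check (a single free mode of energy eps, not the graphene
# model): G_R(t) = -i * theta(t) * exp(-i*eps*t) Fourier transforms, with the
# usual exp(-eta*t) regulator, to the analytic form 1/(omega - eps + i*eta).
import numpy as np

eps, eta, omega = 1.5, 0.05, 2.0

t = np.linspace(0.0, 400.0, 400_001)     # theta(t) restricts support to t >= 0
integrand = (np.exp(1j * omega * t) * np.exp(-eta * t)
             * (-1j) * np.exp(-1j * eps * t))

dt = t[1] - t[0]                         # trapezoidal rule, written out by hand
G_omega = (integrand.sum() - 0.5 * (integrand[0] + integrand[-1])) * dt

analytic = 1.0 / (omega - eps + 1j * eta)
assert abs(G_omega - analytic) < 1e-3    # numerics agree with the pole formula
```

The $i\eta$ prescription in the denominator is exactly what distinguishes the retarded propagator from its time-ordered and advanced counterparts discussed below.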
In this work, we choose the CTP formalism to explicitly calculate the polarization tensor as a retarded correlator of the current operators, which provides the correct definition of the optical conductivity within linear response theory. With these ideas in mind, we have organized the remainder of this article as follows: In Sect.~\ref{Ec-Mov}, we present the details of the model. In Sect.~\ref{CTP} we present the Keldysh formalism used to calculate the current-current correlator, and in Sect.~\ref{vacpol} we obtain the optical conductivity from the vacuum polarization tensor. We discuss our findings in Sect.~\ref{conclusions}. Some calculational details are presented in an Appendix.

\section{Lagrangian, conserved current and generating functional}\label{Ec-Mov}
\begin{figure}[tbp]
\centering
\epsfig{file=grafeno,width=0.5\columnwidth}
\caption{(Color online) Sketch of the crystal structure of graphene. The honeycomb array is described in terms of two overlapping triangular sublattices.}
\label{fig1a}
\end{figure}
Graphene consists of a one-atom-thick membrane of tightly packed carbon atoms in a honeycomb array. Its crystal structure, sketched in Fig.~\ref{fig1a}, is described in terms of two overlapping triangular (Bravais) sublattices, such that for a given atom belonging to either of these sublattices, its nearest neighbors belong to the other sublattice, its next-to-nearest neighbors to the original sublattice, and so on. The band structure in the next-to-nearest-neighbor approximation is of the form
\begin{equation}
E_\pm(\mathbf{k})=\pm t\sqrt{f(\mathbf{k})}-t'[f(\mathbf{k})-3],
\end{equation}
where $t$ and $t'$ are the nearest and next-to-nearest-neighbor hopping parameters and
\begin{equation}
f(\mathbf{k})=3+4\cos\left( \frac{3k_x a}{2}\right)\cos\left( \frac{\sqrt{3}k_y a}{2}\right)+2\cos(\sqrt{3}k_ya)\;,
\end{equation}
where $a\simeq 1.42$~\AA\ is the interatomic distance. The points $K_+$ and $K_-$ at which $f(K_\pm)=0$ define the so-called Dirac points.
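The vanishing of $f(\mathbf{k})$ at the Dirac points can be checked directly. The following is a small numerical sketch, using the standard coordinates $K_{\pm} = \left(2\pi/3a,\ \pm 2\pi/3\sqrt{3}a\right)$ for this lattice orientation (a conventional choice consistent with the form of $f(\mathbf{k})$ above):

```python
# Numerical sketch: f(k) vanishes at the Dirac points, so the nearest-neighbor
# band +-t*sqrt(f(k)) is gapless there.  We use the standard coordinates
# K_pm = (2*pi/(3a), +-2*pi/(3*sqrt(3)*a)) for this lattice orientation.
import math

a = 1.42  # interatomic distance in angstroms

def f(kx, ky):
    return (3.0
            + 4.0 * math.cos(1.5 * kx * a) * math.cos(0.5 * math.sqrt(3) * ky * a)
            + 2.0 * math.cos(math.sqrt(3) * ky * a))

K_plus = (2 * math.pi / (3 * a), 2 * math.pi / (3 * math.sqrt(3) * a))
K_minus = (2 * math.pi / (3 * a), -2 * math.pi / (3 * math.sqrt(3) * a))

assert abs(f(*K_plus)) < 1e-12    # band touching at K_+
assert abs(f(*K_minus)) < 1e-12   # ... and at the non-equivalent point K_-
assert f(0.0, 0.0) == 9.0         # away from K_pm, e.g. f(Gamma) = 3 + 4 + 2
```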
Around $K_+$,
\begin{equation}
E_\pm(\mathbf{k}+K_+)=\pm t\left[\frac{3}{2}a |\mathbf{k}|-\frac{3}{8}a^2 \mathbf{k}^2 \sin(3\vartheta) \right]+t'\left[-\frac{9}{4}a^2\mathbf{k}^2+3 \right]+{\cal O}(|\mathbf{k}|^3)\;,\label{nnn}
\end{equation}
with $\tan(\vartheta)=k_y/k_x$. Around $K_-$, one merely has to replace $\vartheta\to -\vartheta$ in Eq.~(\ref{nnn}). The isotropic portion of this model was first considered in Ref.~\cite{GNAQ} as a natural framework to explain the Anomalous Integer Quantum Hall Effect in graphene. The anisotropic term in that work was treated perturbatively and shown not to contribute to the energy spectrum at first order. In the presence of electromagnetic interactions, the model is described by the Lagrangian~\cite{GNAQ}
\begin{eqnarray}
\mathcal{L}&:=&\frac{i}{2}\left[\psi^\dagger \, \partial_t \psi - \partial_t\psi^\dagger\, \psi\right]+ \psi^\dagger e A_0 \psi \nonumber\\
&&-\frac{1}{2m} \left\{ \left[\left(\mathbf{p}-e\mathbf{A}+\theta\boldsymbol \sigma\right)\psi\right]^\dagger \cdot \left[\left(\mathbf{p}-e\mathbf{A}+\theta\boldsymbol \sigma\right)\psi\right]-2\theta^2\psi^\dagger\psi\right\}\nonumber\\
&=& \frac{i}{2}\left[\psi^\dagger \, \partial_t \psi - \partial_t\psi^\dagger\, \psi\right] -\frac{1}{2m} \left\{\boldsymbol \nabla\psi^\dagger \cdot \boldsymbol \nabla \psi + i \boldsymbol \nabla\psi^\dagger \cdot \left(-e\mathbf{A}+\theta\boldsymbol \sigma\right)\psi \right.\nonumber\\
&&\left. -i \psi^\dagger \left(-e\mathbf{A}+\theta\boldsymbol \sigma\right) \cdot\boldsymbol \nabla\psi+ \psi^\dagger \left[\left(-e\mathbf{A}+\theta\boldsymbol \sigma\right)^2- 2 \theta^2 \right]\psi \right\} \,,\label{L1}
\end{eqnarray}
where $\theta=m v_F$.
Here, $\psi^\dagger$ and $\psi$ are regarded as independent fields, whose equations of motion are derived from the variation of the action with respect to these fields, namely,
\begin{eqnarray}
0 &=& \frac{\partial \mathcal{L}}{\partial \psi^\dagger}-\partial_t \left( \frac{\partial\mathcal{L}}{\partial \left(\partial_t \psi^\dagger\right)}\right) - \boldsymbol \nabla\cdot \left( \frac{\partial\mathcal{L}}{\partial \left(\boldsymbol \nabla \psi^\dagger\right)}\right)\nonumber\\
&=& i \partial_t \psi- \frac{1}{2m} \left[\left(\mathbf{p}-e\mathbf{A}+\theta\boldsymbol \sigma\right)^2 -2\theta^2\right] \psi\,,\label{L2}
\end{eqnarray}
and similarly for $\psi$.
\medskip
The Lagrangian in Eq.~(\ref{L1}) remains invariant under the local change in the dynamical variables and the external electromagnetic field
\begin{equation}\label{L4}
\begin{array}{c}
\displaystyle \psi(x)\rightarrow e^{i e \alpha(x)}\psi(x) \quad \Rightarrow \quad \delta \psi(x) = i e \alpha(x) \psi(x)\,, \\ \\
\displaystyle \psi^\dagger(x)\rightarrow \psi^\dagger(x) e^{-i e \alpha(x)} \quad \Rightarrow \quad \delta \psi^\dagger(x) = -i e \alpha(x) \psi^\dagger(x)\,, \\ \\
\displaystyle A_\mu(x) \rightarrow A_\mu(x) + \partial_\mu \alpha(x)\,,
\end{array}
\end{equation}
that is, it has a $U(1)$ gauge symmetry. Noether's theorem leads to the existence of the locally conserved current
\begin{equation}\label{L5}
\alpha j^\mu := - \delta \psi^\dagger \left( \frac{\partial\mathcal{L}}{\partial \left(\partial_\mu \psi^\dagger\right)}\right) - \left( \frac{\partial\mathcal{L}}{\partial \left(\partial_\mu \psi\right)}\right) \delta\psi\,.
\end{equation}
The corresponding charge density is
\begin{equation}\label{L6}
j^0 = e \, \psi^\dagger \psi
\end{equation}
and the current density is
\begin{equation}\label{L7}
j^k=\frac{e}{2m}\left\{ i \left(\partial_k \psi^\dagger\, \psi - \psi^\dagger \, \partial_k\psi \right) + 2 \psi^\dagger\left(-e {A_k}+\theta\sigma_k\right)\psi\right\}\,.
\end{equation}
It is straightforward to verify, from the equations of motion, that $j^{\mu}$ is conserved,
\begin{equation}\label{L8}
\partial_\mu j^\mu=\partial_t j^0 - \boldsymbol \nabla \cdot \mathbf{j}=0\,.
\end{equation}
Notice also that we can write
\begin{equation}\label{JderivGamma}
j^\mu(x)= \frac{\delta}{\delta A_\mu(x)} \int \mathcal{L}(y) \, d^3 y\,.
\end{equation}
With these ingredients, we can formulate the corresponding current-current correlator.

\section{Generating functional in the contour time path}\label{CTP}
We seek to calculate the polarization tensor, defined as a retarded current-current correlator that, in linear response, determines the optical conductivity. For that purpose, we choose to represent the field theory described in the previous section on the contour time path (CTP)~\cite{Rammer,Stefanucci}. Let us define the contour $\gamma = \gamma_{-}\oplus\gamma_{+}$, where $\gamma_{-}$ represents the time-ordered branch and $\gamma_{+}$ the anti-time-ordered branch, as depicted in Fig.~\ref{fig1}. We therefore define a contour evolution parameter $\tau\in\gamma$, such that
\begin{eqnarray}
\tau = \left\{\begin{array}{cc} t_{-},&\tau\in\gamma_{-}\;,\\ t_{+},&\tau\in\gamma_{+}\;. \end{array}\right.
\end{eqnarray}
\begin{figure}[tbp]
\centering
\epsfig{file=CTP_Fig,width=0.3\columnwidth}
\caption{(Color online) The contour $\gamma = \gamma_{-}\oplus\gamma_{+}$ is depicted in the figure. The double folding of the time axis is displayed by showing that two points $t_{-}\in\gamma_{-}$ and $t_{+}\in\gamma_{+}$, located in the time-ordered and anti-time-ordered branches of the contour, respectively, always correspond to the same chronological time instant $t$.}
\label{fig1}
\end{figure}
Also notice that, as depicted in Fig.~\ref{fig1}, both $t_{+}$ and $t_{-}$ have a unique correspondence to a given chronological instant of time $t\in {\rm I\!R}$.
Correspondingly, for operators and fields defined with their time arguments along the CTP, we have the definitions
\begin{eqnarray}
\psi(\mathbf{x},\tau) = \left\{\begin{array}{cc} \psi(\mathbf{x},t_{-}) \equiv \psi_{-}(\mathbf{x},t),&\tau\in\gamma_{-}\;,\\ \psi(\mathbf{x},t_{+}) \equiv \psi_{+}(\mathbf{x},t),&\tau\in\gamma_{+}\;. \end{array} \right.
\end{eqnarray}
Then, the generating functional of (current) Green's functions of this two-dimensional system, defined on the CTP, reads
\begin{equation}\label{eq_CTP_gen}
Z_{\gamma}[A]=e^{i \Gamma_{\gamma}[A]}:=\int \mathcal{D} \psi^\dagger(\mathbf{x},\tau) \mathcal{D}\psi(\mathbf{x},\tau) \, e^{\displaystyle i \int_{\gamma} d\tau \int d^2 \mathbf{x}\, \mathcal{L}[\psi^\dagger(\mathbf{x},\tau),\psi(\mathbf{x},\tau)]}\,,
\end{equation}
where $\Gamma_{\gamma}[A]$ is the effective contribution to the action for the electromagnetic field. The path integral on the CTP induces by construction the contour ordering between the fields, defined by the operation $\mathcal{T}$ between two operators $\hat{O}_1(\tau)$ and $\hat{O}_2(\tau)$ in the Heisenberg picture ($\zeta = \pm$ for Bosons/Fermions, respectively)
\begin{eqnarray}
\langle \mathcal{T} \hat{O}_1(\tau_1)\hat{O}_2(\tau_2) \rangle &=& \theta(\tau_1 - \tau_2)\langle \hat{O}_1(\tau_1)\hat{O}_2(\tau_2) \rangle \nonumber\\
&& + \zeta \theta(\tau_2 - \tau_1)\langle \hat{O}_2(\tau_2)\hat{O}_1(\tau_1)\rangle.
\end{eqnarray}
Here, we have defined the contour Heaviside function as
\begin{eqnarray}
\theta(\tau_1 - \tau_2) = \left\{ \begin{array}{cc} 1,&\tau_1 >_{c} \tau_2\;,\\ 0,&\tau_2 >_{c} \tau_1\;, \end{array} \right.
\end{eqnarray}
with the symbol $>_{c}$ indicating the relation ``later than in the contour''. In general physical situations, where the sources and external fields do not break time-reversal invariance, $\psi_{-}(x,t) = \psi_{+}(x,t)$, and the CTP becomes just a useful trick to express all the different correlators at once.
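The bookkeeping behind the contour Heaviside function can be made explicit with a small sketch (our own encoding, for illustration only): a contour time is a pair $(t,\,\text{branch})$, every point on $\gamma_{-}$ is contour-earlier than every point on $\gamma_{+}$, chronological order holds on $\gamma_{-}$, and reversed chronological order holds on $\gamma_{+}$.

```python
# Illustrative encoding of the Keldysh contour gamma = gamma_- followed by
# gamma_+: a contour time is (t, branch).  The contour runs forward in t along
# the '-' branch first, then backward in t along the '+' branch, so:
#   * any '-' time is contour-earlier than any '+' time,
#   * on '-', larger t means contour-later,
#   * on '+', smaller t means contour-later.
def contour_later(tau1, tau2):
    """True if tau1 >_c tau2 ('later than in the contour')."""
    t1, b1 = tau1
    t2, b2 = tau2
    if b1 != b2:
        return b1 == '+'    # the '+' branch comes entirely after the '-' branch
    if b1 == '-':
        return t1 > t2      # chronological (time) order on gamma_-
    return t1 < t2          # anti-chronological order on gamma_+

def contour_theta(tau1, tau2):
    """Contour Heaviside function theta(tau1 - tau2)."""
    return 1 if contour_later(tau1, tau2) else 0

# For t1 < t2 chronologically, the four branch placements reproduce the four
# orderings that generate the Delta_{alpha beta} components below:
assert contour_theta((2.0, '-'), (1.0, '-')) == 1   # time-ordered branch
assert contour_theta((1.0, '+'), (2.0, '+')) == 1   # anti-time-ordered branch
assert contour_theta((1.0, '+'), (2.0, '-')) == 1   # '+' is always contour-later
assert contour_theta((2.0, '-'), (1.0, '+')) == 0
```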
Consider for instance the contour-ordered correlator between two fields, \begin{eqnarray} \Delta(\mathbf{x}_1,\tau_1;\mathbf{x}_2,\tau_2) &\equiv& -i\langle \mathcal{T} \psi(\mathbf{x}_1,\tau_1) \psi^{\dagger}(\mathbf{x}_2,\tau_2) \rangle \nonumber\\ &=& \theta(\tau_1 - \tau_2) (-i)\langle \psi(\mathbf{x}_1,\tau_1) \psi^{\dagger}(\mathbf{x}_2,\tau_2) \rangle\nonumber\\ && + \zeta \theta(\tau_2 - \tau_1) (-i)\langle \psi^{\dagger}(\mathbf{x}_2,\tau_2) \psi(\mathbf{x}_1,\tau_1) \rangle\;. \end{eqnarray} This single definition, depending on the location of the parameters $\tau_1,\,\,\tau_2\,\in\gamma$, generates at once four different propagators: \begin{eqnarray} \Delta_{--}(\mathbf{x}_1,t_1;\mathbf{x}_2,t_2) &=& -i\langle \mathcal{T} \psi(\mathbf{x}_1, t_{1-}) \psi^{\dagger}(\mathbf{x}_2,t_{2-}) \rangle \nonumber\\ &=& -i\langle \mathcal{T} \psi_{-}(\mathbf{x}_1, t_{1}) \psi_{-}^{\dagger}(\mathbf{x}_2,t_{2}) \rangle\nonumber\\ &=& -i\langle \hat{T} \psi_{-}(\mathbf{x}_1, t_{1}) \psi_{-}^{\dagger}(\mathbf{x}_2,t_{2}) \rangle\nonumber\\ &=& -i\langle \hat{T} \psi(\mathbf{x}_1, t_{1}) \psi^{\dagger}(\mathbf{x}_2,t_{2}) \rangle\;,\\ \Delta_{-+}(\mathbf{x}_1,t_1;\mathbf{x}_2,t_2) &=& -i\langle \mathcal{T} \psi(\mathbf{x}_1, t_{1-}) \psi^{\dagger}(\mathbf{x}_2,t_{2+}) \rangle \nonumber\\ &=& -i\langle \mathcal{T} \psi_{-}(\mathbf{x}_1, t_{1}) \psi_{+}^{\dagger}(\mathbf{x}_2,t_{2}) \rangle\nonumber\\ &=& -i\zeta \langle \psi_{+}^{\dagger}(\mathbf{x}_2,t_{2}) \psi_{-}(\mathbf{x}_1, t_{1}) \rangle\nonumber\\ &=& -i\zeta \langle \psi^{\dagger}(\mathbf{x}_2,t_{2}) \psi(\mathbf{x}_1, t_{1})\rangle\;, \\ \Delta_{+-}(\mathbf{x}_1,t_1;\mathbf{x}_2,t_2) &=& -i\langle \mathcal{T} \psi(\mathbf{x}_1, t_{1+}) \psi^{\dagger}(\mathbf{x}_2,t_{2-}) \rangle \nonumber\\ &=& -i\langle \mathcal{T} \psi_{+}(\mathbf{x}_1, t_{1}) \psi_{-}^{\dagger}(\mathbf{x}_2,t_{2}) \rangle\nonumber\\ &=& -i\langle \psi_{+}(\mathbf{x}_1, t_{1}) \psi_{-}^{\dagger}(\mathbf{x}_2,t_{2}) \rangle\nonumber\\ &=& 
-i\langle \psi(\mathbf{x}_1, t_{1}) \psi^{\dagger}(\mathbf{x}_2,t_{2}) \rangle\;,\\ \Delta_{++}(\mathbf{x}_1,t_1;\mathbf{x}_2,t_2) &=& -i\langle \mathcal{T} \psi(\mathbf{x}_1, t_{1+}) \psi^{\dagger}(\mathbf{x}_2,t_{2+}) \rangle\nonumber\\ &=& -i\langle \mathcal{T} \psi_{+}(\mathbf{x}_1, t_{1}) \psi_{+}^{\dagger}(\mathbf{x}_2,t_{2}) \rangle\nonumber\\ &=& -i\langle \tilde{T}\psi_{+}(\mathbf{x}_1, t_{1}) \psi_{+}^{\dagger}(\mathbf{x}_2,t_{2}) \rangle\nonumber\\ &=& -i\langle \tilde{T} \psi(\mathbf{x}_1, t_{1}) \psi^{\dagger}(\mathbf{x}_2,t_{2}) \rangle\;. \end{eqnarray} Here, we have defined the usual time-order $\hat{T}$ and anti-time-order $\tilde{T}$ operators. Notice that not all correlators are independent, since they satisfy \begin{eqnarray} \Delta_{+-}(x,y) + \Delta_{-+}(x,y) = \Delta_{--}(x,y) + \Delta_{++}(x,y). \end{eqnarray} It is customary to organize the correlators above in the matrix form \begin{eqnarray} \Delta(x,y) = \left[\begin{array}{cc} \Delta_{--}(x,y) & \Delta_{-+}(x,y)\\\Delta_{+-}(x,y) & \Delta_{++}(x,y) \end{array} \right]\;. 
\end{eqnarray}
Using the definitions above, the retarded and advanced correlators can be expressed as linear combinations of the previous ones
\begin{eqnarray}
\Delta^{A}(\mathbf{x}_1,t_1;\mathbf{x}_2,t_2) &=& i\theta(t_2 - t_1)\langle \left[ \psi(\mathbf{x}_1,t_1),\psi^{\dagger}(\mathbf{x}_2,t_2) \right]_{-\zeta}\rangle \nonumber\\
&=& \Delta_{--}(\mathbf{x}_1,t_1;\mathbf{x}_2,t_2) - \Delta_{+-}(\mathbf{x}_1,t_1;\mathbf{x}_2,t_2)\nonumber\\
&=& \Delta_{-+}(\mathbf{x}_1,t_1;\mathbf{x}_2,t_2) - \Delta_{++}(\mathbf{x}_1,t_1;\mathbf{x}_2,t_2)\;,\\
\Delta^{R}(\mathbf{x}_1,t_1;\mathbf{x}_2,t_2) &=& -i\theta(t_1 - t_2)\langle \left[ \psi(\mathbf{x}_1,t_1),\psi^{\dagger}(\mathbf{x}_2,t_2) \right]_{-\zeta}\rangle \nonumber\\
&=& -i\theta(t_1 - t_2)\Bigg( \langle \psi(\mathbf{x}_1,t_1)\psi^{\dagger}(\mathbf{x}_2,t_2)\rangle \nonumber\\
&&-\zeta \langle \psi^{\dagger}(\mathbf{x}_2,t_2)\psi(\mathbf{x}_1,t_1) \rangle\Bigg)\nonumber\\
&=& \Delta_{--}(\mathbf{x}_1,t_1;\mathbf{x}_2,t_2) - \Delta_{-+}(\mathbf{x}_1,t_1;\mathbf{x}_2,t_2)\nonumber\\
&=& \Delta_{+-}(\mathbf{x}_1,t_1;\mathbf{x}_2,t_2) - \Delta_{++}(\mathbf{x}_1,t_1;\mathbf{x}_2,t_2)\;.
\label{eq_Delta_retarded}
\end{eqnarray}
From the CTP generating functional defined in Eq.~(\ref{eq_CTP_gen}), it is possible to generate the average current components
\begin{eqnarray}
-i \frac{\delta\log Z_{\gamma}[A]}{\delta A_\mu(x)}&=& \frac{1}{Z_{\gamma}[A]} \int \mathcal{D} \psi^\dagger \mathcal{D}\psi \, e^{\displaystyle i \int_{\gamma} d^3 y \mathcal{L}(y)} j^\mu(x)\nonumber\\
&=& \left\langle j^\mu(x) \right\rangle =:J^\mu[A](x)\,,\label{L10}
\end{eqnarray}
while the second functional derivative gives the current-current correlation function,
\begin{eqnarray}
(-i)^2 \frac{\delta^2\log Z_{\gamma}[A]}{\delta A_\mu(x) \delta A_\nu(y)}&=& -i \frac{\delta J^\mu[A](x)}{\delta A_\nu(y)}\nonumber\\
&=&-i \left\langle \frac{\delta j^\mu(x)}{\delta A_\nu(y)} \right\rangle + \left\langle \mathcal{T} j^\mu(x) j^\nu(y) \right\rangle\nonumber\\
&&- \left\langle j^\mu(x) \right\rangle \left\langle j^\nu(y) \right\rangle\,,\label{correl-js}
\end{eqnarray}
where the first term is the \emph{diamagnetic contribution} \cite{Altland-Simons}
\begin{equation}\label{diamagnetic-term}
\begin{array}{c}
\displaystyle \left\langle \frac{\delta j^\mu(x)}{\delta A_\nu(y)} \right\rangle= \delta^{\mu k} \delta^{\nu}_k \left( - \frac{e^2}{m}\right) \left\langle \psi^\dagger(x) \psi(x)\right\rangle \delta^{(3)} \left( x-y \right) \\ \\
\displaystyle = - \frac{e}{m}\, \delta^{\mu k} \delta^{\nu}_k \left\langle j^0(x) \right\rangle \delta^{(3)}\left( x-y \right)\,,
\end{array}
\end{equation}
and the others are the \emph{paramagnetic} ones. We take the currents in normal order with respect to the fermionic field, so that $J^\mu[A=0]=0$. The \emph{linear response} of the system to the external electromagnetic field is described by the second derivative in Eq.~(\ref{correl-js}) evaluated at $A_\mu=0$~\cite{Altland-Simons},
\begin{eqnarray}
K^{\mu\nu}(x,y) &=& \left.
(-i)^2 \frac{\delta^2\log Z_{\gamma}[A]}{\delta A_\mu(x) \delta A_\nu(y)} \right|_{A=0} = K^{\nu\mu}(y,x) \nonumber\\
&=&\left\langle \mathcal{T} j^\mu(x) j^\nu(y) \right\rangle_0\,. \label{Kmunu}
\end{eqnarray}
Then, the density response is
\begin{eqnarray}
K^{00}(x,y) &=& \left\langle \mathcal{T} j^0(x) j^0(y) \right\rangle_0\nonumber\\
&=& e^2 \left\langle \mathcal{T}\psi^\dagger(x) \psi(x)\,\psi^\dagger(y) \psi(y) \right\rangle_0\,.\label{K00}
\end{eqnarray}
The spatial components of the current are given by
\begin{eqnarray}
j^{k}(x)\Bigg|_{A=0} &=& \frac{e}{2m}\left\{ i\partial_{k}\psi^{\dagger}(x) \psi(x) - i\psi^{\dagger}(x) \partial_{k}\psi(x) + 2\theta \psi^{\dagger}(x)\sigma_{k}\psi(x) \right\}\nonumber\\
&=& \psi^{\dagger}(x)\left( \frac{e}{2m}\left\{ -i\overleftrightarrow{\partial}_{k} + 2\theta \sigma_k \right\} \right)\psi(x)\nonumber\\
&\equiv& \psi^{\dagger}_a(x)\hat{D}_{ab}^{k}\psi_b(x)\;.
\end{eqnarray}
Here, we have defined the differential operators
\begin{eqnarray}
\hat{D}_{ab}^{k} = \frac{e}{2m}\left\{-i\overleftrightarrow{\partial}_{k} \delta_{ab}+ 2\theta \left[\sigma_k\right]_{ab} \right\}\;.
\end{eqnarray}
Applying Wick's theorem on the CTP to the definition of the current correlator (correlators associated with disconnected diagrams vanish):
\begin{eqnarray}
\langle \mathcal{T} j^{k}(x) j^{l} (y)\rangle &=& \langle \mathcal{T} \psi_{a}^{\dagger}(x)\hat{D}_{ab}^k \psi_{b}(x) \psi_c^{\dagger}(y)\hat{D}_{cd}^{l}\psi_d(y)\rangle\nonumber\\
&=& -\hat{D}_{ab}^k \hat{D}_{cd}^{l} \langle \mathcal{T} \psi_{b}(x) \psi_c^{\dagger}(y)\rangle \langle \mathcal{T} \psi_{d}(y) \psi_a^{\dagger}(x)\rangle\;.
\end{eqnarray} The previous relation allows us to define the corresponding components of the polarization tensor in the CTP contour indices $\alpha,\beta = \pm$, \begin{eqnarray} K^{k l}_{\alpha\beta}(x,y) &=& \langle \mathcal{T} j^{k}_{\alpha}(x) j^{l}_{\beta} (y)\rangle\nonumber\\ &=& -\hat{D}_{ab}^k \hat{D}_{cd}^{l} \Delta_{bc}^{\alpha\beta}(x,y)\Delta_{da}^{\beta\alpha}(y,x)\;. \end{eqnarray} The retarded component of the polarization tensor is obtained following the general prescription explained in Eq.(\ref{eq_Delta_retarded}), \begin{eqnarray} K_{R}^{kl}(x,y) &=& K_{--}^{kl}(x,y) - K_{-+}^{kl}(x,y)\nonumber\\ &=& \hat{D}_{ab}^k \hat{D}_{cd}^{l} \left\{ \Delta_{bc}^{--}(x,y)\Delta_{da}^{--}(y,x) - \Delta_{bc}^{-+}(x,y)\Delta_{da}^{+-}(y,x) \right\}\nonumber\\ &=& \hat{D}_{ab}^k \hat{D}_{cd}^{l} \Bigg\{ \Delta_{bc}^{F}(x,y)\Delta_{da}^{F}(y,x) \nonumber\\ &&- \left( \Delta_{bc}^{F}(x,y) - \Delta_{bc}^{R}(x,y) \right)\left( \Delta_{da}^{F}(y,x) - \Delta_{da}^{A}(y,x) \right) \Bigg\}\nonumber\\ &=& \hat{D}_{ab}^k \hat{D}_{cd}^{l} \Bigg\{ \Delta_{bc}^{F}(x,y) \Delta_{da}^{A}(y,x) + \Delta_{bc}^{R}(x,y) \Delta_{da}^{F}(y,x) \nonumber\\ &&- \Delta_{bc}^{R}(x,y) \Delta_{da}^{A}(y,x) \Bigg\}\;. \end{eqnarray} In terms of Fourier transforms, \begin{equation}\label{psi-de-p} \psi(x)= \frac{1}{\left( 2 \pi\right)^{3/2}} \int d^3p \, e^{-i p\cdot x} \tilde{\psi}(p)\,, \qquad \psi^\dagger(x)= \frac{1}{\left( 2 \pi\right)^{3/2}} \int d^3p \, e^{i p\cdot x} \tilde{\psi}^\dagger(p)\,, \end{equation} we have \begin{eqnarray} \Delta_{ab}^{\alpha\beta}(x,y) \equiv \Delta_{ab}^{\alpha\beta}(x-y) = \int\frac{d^3 p }{(2\pi)^3} e^{i(x-y)\cdot p}\tilde{\Delta}_{ab}^{\alpha\beta}(p). 
\end{eqnarray} Here, the different propagators for the Hamiltonian model considered are, in Fourier space (F: Feynman, R: Retarded, A: Advanced), \begin{eqnarray} \tilde{\Delta}^{F}(p) &=& \tilde{\Delta}_{--}(p)= i\frac{p_0 - \frac{\mathbf{p}^2}{2m} + v_F \mathbf{p}\cdot\vec{\sigma}}{\left( p_0 - \frac{\mathbf{p}^2}{2m}\right)^2 - v_F^2\mathbf{p}^2 + i\epsilon'}\nonumber\\ &&= i\frac{p_0 - \frac{\mathbf{p}^2}{2m} + v_F \mathbf{p}\cdot\vec{\sigma}}{\left( p_0 + i\epsilon - \frac{\mathbf{p}^2}{2m} - v_F|\mathbf{p}|\right)\left( p_0 - i\epsilon - \frac{\mathbf{p}^2}{2m} + v_F|\mathbf{p}|\right)}\;, \\ \tilde{\Delta}^{R}(p) &=& i\frac{p_0 - \frac{\mathbf{p}^2}{2m} + v_F \mathbf{p}\cdot\vec{\sigma} }{\left( p_0+ i\epsilon - \frac{\mathbf{p}^2}{2m}\right)^2 - v_F^2\mathbf{p}^2 }\;,\\ \tilde{\Delta}^{A}(p) &=& i\frac{ p_0 - \frac{\mathbf{p}^2}{2m} + v_F \mathbf{p}\cdot\vec{\sigma}}{\left( p_0 - i\epsilon - \frac{\mathbf{p}^2}{2m}\right)^2 - v_F^2\mathbf{p}^2 }\;. \label{eq_Deltas_Fourier} \end{eqnarray} In particular, for the linear response theory, we need the retarded component of the polarization tensor \begin{eqnarray} K_{R}^{\mu\nu}(x-y) = \int\frac{d^3 p}{(2\pi)^3} e^{i(x-y)\cdot p}\, \Pi_{R}^{\mu\nu}(p)\;.
\end{eqnarray} Here, the Fourier transform of the retarded component is given by \begin{eqnarray} \Pi_R^{kl}(p) = \Pi_{FA}^{kl}(p) + \Pi_{RF}^{kl}(p) - \Pi_{RA}^{kl}(p)\;, \label{eq_PiR} \end{eqnarray} where the different terms are defined by \begin{eqnarray} \Pi^{kl}_{FA}(p) &=& \frac{e^2}{4 m^2}\int\frac{d^3 q}{(2\pi)^3}\Gamma_{ab}^k(p+2q) \tilde{\Delta}_{bc}^{F}(p+q) \Gamma_{cd}^l(p+2q) \tilde{\Delta}_{da}^{A}(q)\;,\nonumber\\ \Pi^{kl}_{RF}(p) &=& \frac{e^2}{4 m^2}\int\frac{d^3 q}{(2\pi)^3} \Gamma_{ab}^k(p+2q) \tilde{\Delta}_{bc}^{R}(p+q) \Gamma_{cd}^l(p+2q) \tilde{\Delta}_{da}^{F}(q)\;,\nonumber\\ \Pi^{kl}_{RA}(p) &=& \frac{e^2}{4 m^2}\int\frac{d^3 q}{(2\pi)^3} \Gamma_{ab}^k(p+2q) \tilde{\Delta}_{bc}^{R}(p+q) \Gamma_{cd}^l(p+2q) \tilde{\Delta}_{da}^{A}(q)\;, \label{eq_PiRAF} \end{eqnarray} with the symbol \begin{equation} \Gamma_{ab}^k(p+2q)= \left[ \delta_{ab}(p + 2q)_k + 2\theta\left[\sigma_k\right]_{ab}\right], \end{equation} and a similar expression for $\Gamma_{cd}^l(p+2q)$. Below we obtain the polarization tensor explicitly. \section{The polarization tensor}\label{vacpol} The polarization tensor $\Pi^{kl}(p)$ contains the information about the in-plane conductivity of this two-dimensional system, and also about the transmission of light through it \cite{FV-2016,Altland-Simons}. We are interested in the consequences of the application of harmonic homogeneous electric fields which, in the temporal gauge, are related to the vector potential by $E_k=-\partial A_k/\partial t=i \omega A_k$. Since the conductivity is determined by the linear relation between the current and the applied electric field, $J_k=\sigma_{kl} E_l$, from Eqs.\ \eqref{L10}, \eqref{Kmunu} and \eqref{eq_PiR} we can write the conductivity as a function of the frequency as \cite{FV-2016,Altland-Simons} \begin{equation}\label{sigma-pi} \sigma_{kl}(\omega)=\left. \frac{\Pi_{kl}^{R}(p)}{ p_0} \right|_{p \rightarrow (\omega, \mathbf{0})}\,.
\end{equation} In the following we evaluate $\Pi_{kl}^{R}(\omega,\mathbf{0})$ from Eq.~(\ref{eq_PiR}), which requires the evaluation of the three integrals defined in Eq.~(\ref{eq_PiRAF}). Let us start with $\Pi_{kl}^{FA}(p)$, \begin{eqnarray} \Pi_{kl}^{FA}(p) &=&\frac{e^2}{4 m^2}\nonumber\\ &&\hspace{-18mm} \times\int\frac{d^3 q}{(2\pi)^3}{\rm{Tr}}\left\{\left[ p_k + 2q_k + 2\theta \sigma_k \right]\Delta^{F}(p+q) \left[ p_l + 2q_l + 2\theta\sigma_l\right]\Delta^{A}(q)\right\}. \end{eqnarray} Specializing this expression to the case $p = \left(\omega,\mathbf{0}\right)$, we write \begin{eqnarray} \Pi_{kl}^{FA}(\omega,\mathbf{0}) = \frac{e^2}{4 m^2}\int\frac{d^3 q}{(2\pi)^3}\frac{{\rm{Tr}}\{A\}}{B^{FA}} \label{eq_PiFA1} \end{eqnarray} with \begin{eqnarray} A&=&\left[ 2q_k + 2\theta \sigma_k \right]\left[ \omega + q_0 - \frac{\mathbf{q}^2}{2m} + v_F \mathbf{q}\cdot\vec{\sigma}\right] \left[ 2q_l + 2\theta\sigma_l\right]\nonumber\\ &&\times\left[ q_0 - \frac{\mathbf{q}^2}{2m} + v_F \mathbf{q}\cdot\vec{\sigma}\right]\;, \nonumber\\ B^{FA}&=&\left( \omega + q_0 + i\epsilon - \frac{\mathbf{q}^2}{2m} - v_F|\mathbf{q}|\right)\left( \omega + q_0 - i\epsilon - \frac{\mathbf{q}^2}{2m} + v_F|\mathbf{q}|\right)\nonumber\\ &&\times\left(\left( q_0 - i\epsilon - \frac{\mathbf{q}^2}{2m}\right)^2 - v_F^2\mathbf{q}^2\right)\;.\label{PiFA_den} \end{eqnarray} By writing $q_1=Q \cos\varphi, q_2=Q \sin\varphi$, and noticing that the denominator is independent of $\varphi$, it is straightforward to obtain for the trace in the numerator, integrated over $\varphi$, \begin{eqnarray}\label{trazas-integ} \int_0^{2\pi} {\rm Tr} \left\{ A \right\} d\varphi &=& -8\pi\left( Q^2 + 2 m^2 v_F^2\right)q_0^2 \nonumber\\ &&+ 8 \frac{\pi}{m} \left( Q^4 -\omega \left(m Q^2 + 2m^3 v_F^2 \right) - 2 m^2 Q^2 v_F^2\right)q_0\nonumber\\ && + 2 \frac{\pi}{ m^2} Q^2\left(2 m \omega \left(Q^2 -2m^2 v_F^2\right) + 2 m^2 Q^2 v_F^2 - Q^4 \right), \end{eqnarray} for $k,l=1,1$ or $2,2$, and a vanishing result for
$k,l=1,2$ or $2,1$. Since the previous result is a quadratic polynomial in $q_0$, and the denominator in Eq.~\eqref{PiFA_den} is a quartic expression in the integration variable, the integral over $q_0$ can be done on the complex plane, taking into account the position of the simple poles of the integrand with respect to the real axis. With a dimensional regularization of the resulting integral (dimension $d=2-s$), as described in detail in the Appendix, one finds \begin{eqnarray} \Pi_{11}^{FA}(\omega,\mathbf{0}) = \frac{e^2}{4 m^2}\left\{ i\frac{m^2\omega}{4\pi s} - i\frac{m^2\omega}{4\pi}\log\left[ -\frac{(\omega + 2 i \epsilon)}{2 m v_F}\right]\right\}\;. \label{eq_PiFA} \end{eqnarray} A similar procedure, as described in the Appendix, leads to the corresponding expressions for the other two pieces of the retarded polarization tensor \begin{eqnarray} \Pi_{11}^{RA}(\omega,\mathbf{0}) &=& \frac{e^2}{4 m^2}\Bigg\{ i\frac{m^2\omega}{2\pi s} - i\frac{m^2\omega}{4\pi}\log\left[ -\frac{(\omega + 2 i \epsilon)}{2 m v_F}\right]\nonumber\\ &&- i\frac{m^2\omega}{4\pi}\log\left[ \frac{(\omega + 2 i \epsilon)}{2 m v_F}\right]\Bigg\}\;, \label{eq_PiRA} \\ \Pi_{11}^{RF}(\omega,\mathbf{0}) &=& \frac{e^2}{4 m^2}\left\{ i\frac{m^2\omega}{4\pi s} - i\frac{m^2\omega}{4\pi}\log\left[ -\frac{(\omega + 2 i \epsilon)}{2 m v_F}\right]\right\}\;. \label{eq_PiRF} \end{eqnarray} We notice that the three separate parts above, which together yield the retarded polarization tensor, display a pole at $s=0$. However, when added together according to Eq.~(\ref{eq_PiR}), the poles exactly cancel to yield a finite result \begin{eqnarray} \Pi_{11}^{R}(\omega,\mathbf{0}) &=& \lim_{\epsilon\rightarrow 0^{+}}\left[\Pi_{11}^{RF}(\omega,\mathbf{0}) + \Pi_{11}^{FA}(\omega,\mathbf{0}) - \Pi_{11}^{RA}(\omega,\mathbf{0})\right]\nonumber\\ &=& \frac{e^2}{4 m^2}\frac{m^2\omega}{4} = \frac{e^2\omega}{16}\;.
\end{eqnarray} The result above must be multiplied by a factor of 2 due to the spin degeneracy, and another factor of 2 due to valley degeneracy. Thus, the final result of the optical conductivity is \begin{eqnarray} \sigma_{11} = 2\cdot 2\, \frac{\Pi_{11}^{R}(\omega,\mathbf{0})}{\omega} = \frac{e^2}{4}\;. \end{eqnarray} Remarkably, this is the same result that is obtained in the usual linear dispersion approximation. Therefore, we conclude that the optical conductivity, and hence the transparency of graphene, are not affected by next-to-nearest-neighbor contributions to the tight-binding microscopic model, which translate into a quadratic correction to the kinetic energy, as considered in this work. \section{Conclusions}\label{conclusions} Among the many outstanding properties of graphene which can be described within the Dirac limit, its optical transparency is entirely explained in terms of the fine structure constant. A natural question is to ask the extent to which this picture deviates from the experimental measurements. In this regard, in this article we considered the next-to-nearest-neighbor contribution which, in the continuum, corrects the kinetic term with a quadratic contribution. Introducing the CTP formalism, we calculated the linear response current-current correlator from which the optical conductivity is derived. Within this formalism, it is straightforward to obtain the retarded part of the polarization tensor after a dimensional regularization of the involved integrals. Remarkably, and somewhat unexpectedly, we found the conductivity of the Dirac limit to be robust against quadratic corrections. This encouraging result opens the possibility of testing deviations from the Dirac limit in graphene in other physical phenomena. These questions are currently under scrutiny and results will be reported elsewhere.
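As a purely numerical illustration of the transparency statement (my own cross-check, not part of the derivation above): restoring units, $\sigma = e^2/4$ corresponds to the universal sheet conductivity $e^2/4\hbar$, for which the standard thin-film formulas give an absorption $\pi\alpha$ and a transmittance $(1+\pi\alpha/2)^{-2}$:

```python
import math

# Fine-structure constant (approximate CODATA value; an input assumption, not from the text)
alpha = 1.0 / 137.035999

# Universal absorption of a single graphene layer: pi * alpha, about 2.3%
absorption = math.pi * alpha

# Transmittance from the thin-film formula with sheet conductivity e^2/(4*hbar)
T = (1.0 + math.pi * alpha / 2.0) ** (-2)

print(f"absorption ~ {absorption:.4f}")  # ~ 0.0229
print(f"transmittance ~ {T:.4f}")        # ~ 0.9775
```

To leading order in $\alpha$ the two numbers are complementary, $1-T \simeq \pi\alpha$, which is the celebrated $\sim 97.7\%$ transparency.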
\section*{Appendix: Regularization of the momentum integrals}\label{app} In this Appendix, we present in detail the dimensional regularization method used to calculate the momentum integrals defined in the main text. Let us consider the term in Eq.~(\ref{eq_PiFA1}). After taking the trace and performing the angular integral as shown in Eq.~(\ref{trazas-integ}), we have to evaluate \begin{eqnarray} \Pi_{11}^{FA}(\omega,\mathbf{0}) &=& \frac{e^2}{4 m^2}\int_{0}^{\infty} \frac{dQ}{(2\pi)^3} Q \int_{-\infty}^{\infty} \frac{dq_0}{B^{FA}} \Bigg\{ -8\pi\left( Q^2 + 2 m^2 v_F^2\right)q_0^2\nonumber\\ &&+ 8\frac{\pi}{m} \left( Q^4 -\omega \left(m Q^2 + 2m^3 v_F^2 \right) - 2 m^2 Q^2 v_F^2\right)q_0 \nonumber\\ &&+ 2 \frac{\pi}{m^2} Q^2\left(2 m \omega \left(Q^2 -2m^2 v_F^2\right) + 2 m^2 Q^2 v_F^2 - Q^4 \right) \Bigg\}\;, \end{eqnarray} with $B^{FA}$ given in Eq.~(\ref{PiFA_den}) with $|\mathbf{q}|=Q$. Clearly, on the $q_0$-plane, the integrand has three poles on the positive imaginary plane at $q_{0}^{(1,2)} = i\epsilon + \frac{Q^2}{2m} \pm v_F Q$, $q_{0}^{(3)} = i\epsilon-\omega + \frac{Q^2}{2m} - v_F Q$, and a single pole on the negative imaginary plane at $q_{0}^{(4)} = -i\epsilon - \omega + \frac{Q^2}{2m} + v_F Q$. We evaluate the $q_0$ integral by means of the residue theorem, closing the contour on the lower plane. The result of this procedure can be expressed as \begin{eqnarray} \Pi_{11}^{FA}(\omega,\mathbf{0}) = \frac{e^2}{4 m^2}\int_{0}^{\infty} \frac{dQ}{(2\pi)^3} \frac{Q \, P^{FA}(Q,\omega)}{\left(\omega + 2 i \epsilon\right)\left(\omega - 2 v_F Q + 2 i \epsilon\right)\left(i v_F Q + \epsilon \right)}.
\end{eqnarray} Here, we have defined the numerator as the polynomial function \begin{eqnarray} P^{FA}(Q,\omega) &=& \left(-16 \pi ^2 m^2 v_F^4+16 \pi ^2 m \omega v_F^2+32 i \pi ^2 m v_F^2 \epsilon -8 i \pi ^2 \omega \epsilon +8 \pi ^2 \epsilon ^2\right)\nonumber\\ &&\hspace{-5mm}+Q \left(16 \pi ^2 m^2 \omega v_F^3+32 i \pi ^2 m^2 v_F^3 \epsilon \right)\nonumber\\ &&\hspace{-5mm}+ Q^2 \left(16 \pi ^2 m^2 v_F^2 \epsilon ^2-16 i \pi ^2 m^2 \omega v_F^2 \epsilon \right)\nonumber\\ &&\hspace{-5mm}+Q^3 \left(-32 \pi ^2 m v_F^3+8 \pi ^2 \omega v_F+16 i \pi ^2 v_F \epsilon \right)-16 Q^4 \left(\pi ^2 v_F^2\right). \end{eqnarray} By simply counting powers in numerator and denominator, it is clear that the remaining integral is divergent and needs regularization. For this purpose, we first perform a partial fraction decomposition of the denominator as follows \begin{eqnarray} \frac{1}{\left(\omega + 2 i \epsilon\right)\left(\omega - 2 v_F Q + 2 i \epsilon\right)\left(i v_F Q + \epsilon \right)} &=& \frac{1}{2 i\omega v_F^2\left(\omega + 2 i \epsilon\right)}\nonumber\\ &&\times\left(\frac{1}{Q - Q_1} - \frac{1}{Q - Q_2} \right)\;,\label{partial} \end{eqnarray} with $Q_1 = i \epsilon/v_F$, $Q_2 = \left( \omega + 2 i \epsilon\right)/(2 v_F)$. After this, the integral splits into two contributions \begin{eqnarray} \Pi_{11}^{FA}(\omega,\mathbf{0}) &=& \frac{e^2}{4 m^2}\frac{1}{2 i \omega v_F^2\left(\omega + 2 i \epsilon\right)}\Bigg\{ \int_{0}^{\infty} \frac{dQ}{(2\pi)^3} Q \frac{P^{FA}(Q,\omega)}{Q - Q_1} \nonumber\\ &&- \int_{0}^{\infty} \frac{dQ}{(2\pi)^3} Q \frac{P^{FA}(Q,\omega)}{Q - Q_2} \Bigg\}\;. \end{eqnarray} For each integral (i.e. $Q_j = Q_1, Q_2$ respectively), let us analyze the asymptotic behavior of the integrand at large momentum values, say for $Q > Q^{*}$, with $Q^{*}$ an arbitrary but large momentum scale. 
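Incidentally, the elementary identity behind the decomposition in Eq.~(\ref{partial}) can be checked numerically; the sketch below uses arbitrary illustrative values of $\omega$, $v_F$ and $\epsilon$ (they are not taken from the text):

```python
# Numerical check of the elementary identity underlying the partial fraction
# decomposition:  1/((Q - Q1)(Q - Q2)) = (1/(Q1 - Q2)) * (1/(Q - Q1) - 1/(Q - Q2)).
# Parameter values are arbitrary and purely illustrative.
omega, vF, eps = 1.3, 0.8, 1e-3

Q1 = 1j * eps / vF                  # root coming from the (i vF Q + eps) factor
Q2 = (omega + 2j * eps) / (2 * vF)  # root from the (omega - 2 vF Q + 2 i eps) factor

for Q in (0.1, 0.7 + 0.2j, 5.0, -2.3 + 1.0j):
    lhs = 1.0 / ((Q - Q1) * (Q - Q2))
    rhs = (1.0 / (Q1 - Q2)) * (1.0 / (Q - Q1) - 1.0 / (Q - Q2))
    assert abs(lhs - rhs) < 1e-12 * abs(lhs)
print("partial-fraction identity verified")
```

Note that $Q_1 - Q_2 = -\omega/(2 v_F)$, which is where the prefactor $1/(2i\omega v_F^2(\omega+2i\epsilon))$ of Eq.~(\ref{partial}) collects its $\omega$ dependence.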
In this regime, \begin{eqnarray} \frac{P^{FA}(Q,\omega)}{Q - Q_j} &\sim& \frac{8 \pi ^2 Q_j}{Q^2} \Bigg(Q_j^2 \left(-2 m^2 v_F^4+2 m v_F^2 (\omega+2 i \epsilon )+\epsilon (\epsilon -i \omega)\right) \nonumber\\ &&+2 m^2 Q_j v_F^3 (\omega+2 i \epsilon )-2 i m^2 \omega v_F^2 \epsilon +2 m^2 v_F^2 \epsilon ^2\nonumber\\ &&+Q_j^3 v_F \left(-4 m v_F^2+\omega+2 i \epsilon \right)-2 Q_j^4 v_F^2\Bigg)\nonumber\\ &&+8 \pi ^2 Q \Bigg(-2 m^2 v_F^4 +\omega \left(2 m v_F^2+Q_j v_F-i \epsilon \right)-4 m Q_j v_F^3\nonumber\\ &&+4 i m v_F^2 \epsilon -2 Q_j^2 v_F^2+2 i Q_j v_F \epsilon +\epsilon ^2\Bigg)\nonumber\\ &&+\frac{8 \pi ^2}{Q} \Bigg(Q_j^2 \left(-2 m^2 v_F^4+2 m v_F^2 (\omega+2 i \epsilon )+\epsilon (\epsilon -i \omega)\right) \nonumber\\ &&+2 m^2 Q_j v_F^3 (\omega+2 i \epsilon ) -2 i m^2 \omega v_F^2 \epsilon +2 m^2 v_F^2 \epsilon ^2\nonumber\\ &&+Q_j^3 v_F \left(-4 m v_F^2+\omega+2 i \epsilon \right)-2 Q_j^4 v_F^2\Bigg)\nonumber\\ &&+8 \pi ^2 \Bigg(Q_j \left(-2 m^2 v_F^4 +2 m v_F^2 (\omega+2 i \epsilon )+\epsilon (\epsilon -i \omega)\right)\nonumber\\ &&+2 m^2 \omega v_F^3+4 i m^2 v_F^3 \epsilon +Q_j^2 v_F \left(-4 m v_F^2+\omega+2 i \epsilon \right)\nonumber\\ &&-2 Q_j^3 v_F^2\Bigg) +8 \pi ^2 Q^2 v_F \left(-4 m v_F^2+\omega-2 Q_j v_F+2 i \epsilon \right)\nonumber\\ &&-16 \pi ^2 Q^3 v_F^2 + O[Q^{-3}] \nonumber\\ &\equiv &P_{FA}^{Asymp}(Q,\omega,Q_j) + O[Q^{-3}]\;, \end{eqnarray} where we have defined $P_{FA}^{Asymp}(Q,\omega,Q_j)$ as the polynomial obtained by truncating the asymptotic expansion above up to $O[Q^{-3}]$, for $Q > Q^{*}$.
Therefore, using this expansion, we regularize each of the integrals using the prescription ($d = 2 - s$) \begin{eqnarray} \int_{0}^{\infty} \frac{dQ}{(2\pi)^3} Q \frac{P^{FA}(Q,\omega)}{Q - Q_j} &\rightarrow& \int_{0}^{Q^{*}} \frac{dQ}{(2\pi)^3} Q \frac{P^{FA}(Q,\omega)}{Q - Q_j}\nonumber\\ & +& \int_{Q^{*}}^{\infty} \frac{dQ}{(2\pi)^3} Q \left[\frac{P^{FA}(Q,\omega)}{Q - Q_j} - P_{FA}^{Asymp}(Q,\omega,Q_j)\right]\nonumber\\ & +& \int_{Q^{*}}^{\infty} \frac{dQ}{(2\pi)^3} m^{s}Q^{1-s} P_{FA}^{Asymp}(Q,\omega,Q_j)\;. \end{eqnarray} After lengthy but straightforward algebra, we obtain in the limit $\epsilon \rightarrow 0^{+}$ \begin{eqnarray} \Pi_{11}^{FA}(\omega,\mathbf{0}) = \frac{e^2}{4 m^2}\left\{ i\frac{m^2\omega}{4\pi s} - i\frac{m^2\omega}{4\pi}\log\left[ -\frac{(\omega + 2 i \epsilon)}{2 m v_F}\right]\right\}\;. \end{eqnarray} Let us now consider the expression for $\Pi_{11}^{RF}(\omega,\mathbf{0})$, as obtained after calculating the trace and angular integration according to Eq.(\ref{trazas-integ}) \begin{eqnarray} \Pi_{11}^{RF}(\omega,\mathbf{0}) &=& \frac{e^2}{4 m^2}\int_{0}^{\infty} \frac{dQ}{(2\pi)^3} Q \int_{-\infty}^{\infty} \frac{dq_0}{B^{RF}} \Bigg\{ -8\pi\left( Q^2 + 2 m^2 v_f^2\right)q_0^2 \nonumber\\ &&+ 8 \frac{\pi}{m} \left( Q^4 -\omega \left(m Q^2 + 2m^3 v_f^2 \right) - 2 m^2 Q^2 v_f^2\right)q_0\nonumber\\ && + 2 \frac{\pi}{m^2} Q^2\left(2 m \omega \left(Q^2 -2m^2 v_f^2\right) + 2 m^2 Q^2 v_f^2 - Q^4 \right) \Bigg\}\;, \end{eqnarray} with \begin{eqnarray} B^{RF}&=& \left(q_0 + i\epsilon - \frac{Q^2}{2m} - v_FQ\right \left( q_0 - i\epsilon - \frac{Q^2}{2m} + v_F Q\right)\nonumber\\ &&\times\left(\left(\omega+ q_0 + i\epsilon - \frac{Q^2}{2m}\right)^2 - v_F^2Q^2\right)\;. 
\end{eqnarray} In this case, on the $q_0$-plane the integrand has three poles on the negative imaginary plane, $q_{0}^{(1,2)} = -i\epsilon -\omega + Q^{2}/(2m) \pm v_F Q$, $q_{0}^{(3)} = -i\epsilon + Q^2/(2m) + v_F Q$, and a single pole on the positive imaginary plane at $q_{0}^{(4)} = i\epsilon + \frac{Q^2}{2m} - v_F Q$. Therefore, we calculate the integral over $q_0$ using the residue theorem, by choosing a contour that closes on the upper complex plane. Thus, \begin{eqnarray} \Pi_{11}^{RF}(\omega,\mathbf{0}) &=& \frac{e^2}{4 m^2}\nonumber\\ &&\hspace{-5mm}\times\int_{0}^{\infty} \frac{dQ}{(2\pi)^3} \frac{Q \, P^{RF}(Q,\omega)}{\left(\omega + 2 i \epsilon\right)\left(\omega - 2 v_F Q + 2 i \epsilon\right)\left(i v_F Q + \epsilon \right)}\;. \end{eqnarray} The numerator of the resulting integrand is defined by the quartic polynomial function \begin{eqnarray} P^{RF}(Q,\omega) &=& \left(16 \pi ^2 m^2 v_F^2 \epsilon ^2-16 i \pi ^2 m^2\omega v_F^2 \epsilon \right)\nonumber\\ &&\hspace{-8mm}+Q \left(16 \pi ^2 m^2\omega v_F^3+32 i \pi ^2 m^2 v_F^3 \epsilon \right)\nonumber\\ &&\hspace{-8mm}+Q^2 (-16 \pi ^2 m^2 v_F^4-16 \pi ^2 m\omega v_F^2-32 i \pi ^2 m v_F^2 \epsilon -8 i \pi ^2\omega \epsilon +8 \pi ^2 \epsilon ^2)\nonumber\\ &&\hspace{-8mm}+Q^3 \left(32 \pi ^2 m v_F^3+8 \pi ^2\omega v_F+16 i \pi ^2 v_F \epsilon \right)-16 Q^4 \left(\pi ^2 v_F^2\right), \end{eqnarray} and then the integral is clearly divergent. A consistent regularization procedure is applied in this case as well. 
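Before proceeding, the three-piece subtraction prescription used throughout this Appendix can be illustrated on a toy logarithmically divergent integral (my own example, not one of the integrals above): for $\int_0^\infty dQ/(Q+1)$, whose integrand behaves as $1/Q$ at large $Q$, splitting at an arbitrary scale $Q^*$ and regulating only the asymptotic tail with $Q^{-s}$ isolates the $1/s$ pole, with the $Q^*$ dependence entering only at $O(s)$:

```python
import math

def regularized(Qstar, s):
    """Toy three-piece prescription for the divergent integral
    I = integral_0^inf dQ/(Q+1), whose integrand behaves as 1/Q at large Q."""
    piece1 = math.log(1.0 + Qstar)          # int_0^{Q*} dQ/(Q+1): finite part
    piece2 = -math.log(1.0 + 1.0 / Qstar)   # int_{Q*}^inf [1/(Q+1) - 1/Q] dQ: subtracted tail
    piece3 = Qstar ** (-s) / s              # int_{Q*}^inf Q^{-1-s} dQ: regulated asymptote
    return piece1 + piece2 + piece3

s = 1e-4
a, b = regularized(5.0, s), regularized(50.0, s)
print(a - 1.0 / s, b - 1.0 / s)  # both tiny: the 1/s pole has been isolated
assert abs(a - 1.0 / s) < 1e-2 and abs(b - 1.0 / s) < 1e-2
assert abs(a - b) < 1e-2         # Q*-dependence enters only at O(s)
```

In the actual computation the finite, $Q^*$-dependent pieces conspire in the same way, which is why the final results quoted here carry no memory of $Q^*$.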
By performing the same partial fraction expansion of the denominator, as in Eq.(\ref{partial}), we find that the integral splits into two pieces ($Q_1 = i\epsilon/v_F$, $Q_2 = (\omega + 2 i\epsilon)/(2 v_F)$) \begin{eqnarray} \Pi_{11}^{RF}(\omega,\mathbf{0}) &=& \frac{e^2}{4 m^2}\frac{1}{2 i \omega v_F^2\left(\omega + 2 i \epsilon\right)}\Bigg\{ \int_{0}^{\infty} \frac{dQ}{(2\pi)^3} Q \frac{P^{RF}(Q,\omega)}{Q - Q_1} \nonumber\\ &&- \int_{0}^{\infty} \frac{dQ}{(2\pi)^3} Q\frac{P^{RF}(Q,\omega)}{Q - Q_2} \Bigg\}\;. \end{eqnarray} For each integral (i.e. $Q_j = Q_1, Q_2$ respectively), we analyze the asymptotic behavior of the integrand at large momentum values, say for $Q > Q^{*}$. In this regime, \begin{eqnarray} \frac{P^{RF}(Q,\omega)}{Q - Q_j} &&\sim 8 \pi^2 \frac{Q_j}{Q^2}\Bigg( -2 i m^2 \omega v_F^2 \epsilon +2 m^2 v_F^2 \epsilon^2 +2 m^2Q_j v_F^3 (\omega+2 i \epsilon ) \nonumber\\ &&+Q_j^2 \left(-2 m^2 v_F^4-2 m v_F^2 (\omega+2 i \epsilon )+\epsilon (\epsilon -i \omega)\right)\nonumber\\ &&+Q_j^3 v_F \left(4 m v_F^2+\omega+2 i \epsilon \right)-2Q_j^4 v_F^2\Bigg)\nonumber\\ &&+\frac{8 \pi ^2}{Q} \Bigg(Q_j^2 \left(-2 m^2 v_F^4-2 m v_F^2 (\omega+2 i \epsilon )+\epsilon (\epsilon -i \omega)\right)\nonumber\\ &&+2 m^2Q_j v_F^3 (\omega+2 i \epsilon )-2 i m^2 \omega v_F^2 \epsilon +2 m^2 v_F^2 \epsilon ^2\nonumber\\ &&+Q_j^3 v_F \left(4 m v_F^2+\omega+2 i \epsilon \right)-2Q_j^4 v_F^2\Bigg)\nonumber\\ &&+8 \pi ^2 \Bigg(Q_j \left(-2 m^2 v_F^4-2 m v_F^2 (\omega+2 i \epsilon )+\epsilon (\epsilon -i \omega)\right)+2 m^2 \omega v_F^3\nonumber\\&&+4 i m^2 v_F^3 \epsilon +Q_j^2 v_F \left(4 m v_F^2+\omega+2 i \epsilon \right)-2Q_j^3 v_F^2\Bigg)\nonumber\\ &&+8 \pi ^2 Q \Bigg(-2 m^2 v_F^4+\omega \left(-2 m v_F^2+Q_j v_F-i \epsilon \right)+4 mQ_j v_F^3\nonumber\\ &&-4 i m v_F^2 \epsilon -2Q_j^2 v_F^2+2 iQ_j v_F \epsilon +\epsilon ^2\Bigg)\nonumber\\ &&+8 \pi ^2 Q^2 v_F \left(4 m v_F^2+\omega-2Q_j v_F+2 i \epsilon \right)-16 \pi ^2 Q^3 v_F^2 + O[Q^{-3}] \nonumber\\ &&\equiv 
P_{RF}^{Asymp}(Q,\omega,Q_j) + O[Q^{-3}], \end{eqnarray} where we have defined $P_{RF}^{Asymp}(Q,\omega,Q_j)$ as the polynomial obtained by truncating the asymptotic expansion above up to $O[Q^{-3}]$, for $Q > Q^{*}$. Therefore, using this expansion, we regularize each of the integrals using the prescription ($d = 2 - s$) \begin{eqnarray} \int_{0}^{\infty} \frac{dQ}{(2\pi)^3} Q \frac{P^{RF}(Q,\omega)}{Q - Q_j} &\rightarrow& \int_{0}^{Q^{*}} \frac{dQ}{(2\pi)^3} Q \frac{P^{RF}(Q,\omega)}{Q - Q_j}\nonumber\\ & +& \int_{Q^{*}}^{\infty} \frac{dQ}{(2\pi)^3} Q \left[\frac{P^{RF}(Q,\omega)}{Q - Q_j} - P_{RF}^{Asymp}(Q,\omega,Q_j)\right]\nonumber\\ & +& \int_{Q^{*}}^{\infty} \frac{dQ}{(2\pi)^3} m^{s}Q^{1-s} P_{RF}^{Asymp}(Q,\omega,Q_j)\;. \end{eqnarray} After straightforward manipulations, we obtain in the limit $\epsilon \rightarrow 0^{+}$ \begin{eqnarray} \Pi_{11}^{RF}(\omega,\mathbf{0}) = \frac{e^2}{4 m^2}\left\{ i\frac{m^2\omega}{4\pi s} - i\frac{m^2\omega}{4\pi}\log\left[ -\frac{(\omega + 2 i \epsilon)}{2 m v_F}\right]\right\}\;. \end{eqnarray} Finally, let us consider the last term in Eq.~(\ref{eq_PiRAF}). After tracing and performing the angular integral, \begin{eqnarray} \Pi_{11}^{RA}(\omega,\mathbf{0}) &=& \frac{e^2}{4 m^2}\int_{0}^{\infty} \frac{dQ}{(2\pi)^3} Q \int_{-\infty}^{\infty} \frac{dq_0}{B^{RA}} \Bigg\{ -8\pi\left( Q^2 + 2 m^2 v_F^2\right)q_0^2 \nonumber\\ &&+ 8\frac{\pi}{m} \left( Q^4 -\omega \left(m Q^2 + 2m^3 v_F^2 \right) - 2 m^2 Q^2 v_F^2\right)q_0 \nonumber\\ && + 2 \frac{\pi}{m^2} Q^2\left(2 m \omega \left(Q^2 -2m^2 v_F^2\right) + 2 m^2 Q^2 v_F^2 - Q^4 \right) \Bigg\}\;, \end{eqnarray} with \begin{eqnarray} B^{RA}&=&\left( \left(\omega + q_0 + i\epsilon - \frac{Q^2}{2m}\right)^2 - v_F^2Q^2\right)\nonumber\\ &&\times \left( \left(q_0 - i\epsilon - \frac{Q^2}{2m}\right)^2 - v_F^2 Q^2\right)\;.
\end{eqnarray} Clearly, on the $q_0$-plane, the integrand has two poles on the positive imaginary plane at $q_{0}^{(1,2)} = i\epsilon + \frac{Q^2}{2m} \pm v_F Q$, and two poles on the negative imaginary plane at $q_{0}^{(3,4)} = -i\epsilon - \omega + \frac{Q^2}{2m} \pm v_F Q$. We evaluate the $q_0$ integral by the residue theorem, closing the contour on the upper plane. Thus, \begin{eqnarray} \Pi_{11}^{RA}(\omega,\mathbf{0}) &=& \frac{e^2}{4 m^2}\nonumber\\ &&\hspace{-5mm}\int_{0}^{\infty} \frac{dQ}{(2\pi)^3} Q \frac{P^{RA}(Q,\omega)}{\left(\omega + 2 i \epsilon\right)\left(\omega - 2 v_F Q + 2 i \epsilon\right)\left(\omega + 2 v_F Q + 2 i \epsilon \right)}\;. \end{eqnarray} The numerator of the resulting integrand is defined by the quartic polynomial function \begin{eqnarray} P^{RA}(Q,\omega) &=& -16 i \pi ^2 \Bigg(\omega^2 \left(2 m^2 v_F^2+Q^2\right)+2 i \omega \epsilon \left(2 m^2 v_F^2+Q^2\right)\nonumber\\ &&-2 \epsilon ^2 \left(2 m^2 v_F^2+Q^2\right)-4 Q^2 v_F^2 \left(m^2 v_F^2+Q^2\right)\Bigg), \end{eqnarray} and hence the divergent integral also needs regularization. As in the former two cases, we first perform a partial fraction decomposition of the denominator, to obtain \begin{eqnarray} \Pi_{11}^{RA}(\omega,\mathbf{0}) &=& \frac{e^2}{4 m^2}\frac{\left( Q_3 - Q_4 \right)^{-1}}{-4 v_F^2\left(\omega + 2 i \epsilon\right)}\Bigg\{ \int_{0}^{\infty} \frac{dQ}{(2\pi)^3} Q \frac{P^{RA}(Q,\omega)}{Q - Q_3}\nonumber\\ && - \int_{0}^{\infty} \frac{dQ}{(2\pi)^3} Q \frac{P^{RA}(Q,\omega)}{Q - Q_4} \Bigg\}, \end{eqnarray} where we have defined $Q_3 = (\omega + 2 i \epsilon)/(2 v_F)$, $Q_4 = - Q_3$. For each integral (i.e. $Q_j = Q_3, Q_4$ respectively), we analyze the asymptotic behavior of the integrand at large momentum values, say for $Q > Q^{*}$.
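As a quick numerical sanity check of the stated pole structure (an illustrative aside of mine, with arbitrary parameter values), one can expand $B^{RA}(q_0)$ from its two quadratic factors, find its roots, and compare them with the four quoted poles:

```python
import numpy as np

# Illustrative parameters (not from the text): a = Q^2/(2m), b = vF*Q
omega, eps, a, b = 1.0, 1e-2, 0.3, 0.5

# B^{RA}(q0) = ((omega + q0 + i eps - a)^2 - b^2) * ((q0 - i eps - a)^2 - b^2),
# written as the product of two quadratic polynomials in q0
f1 = np.array([1.0, 2 * (omega + 1j * eps - a), (omega + 1j * eps - a) ** 2 - b ** 2])
f2 = np.array([1.0, -2 * (1j * eps + a), (1j * eps + a) ** 2 - b ** 2])
roots = np.roots(np.polymul(f1, f2))

# Poles quoted in the text: two in the upper, two in the lower half-plane
expected = [1j * eps + a + b, 1j * eps + a - b,
            -1j * eps - omega + a + b, -1j * eps - omega + a - b]

assert sum(r.imag > 0 for r in roots) == 2
assert sum(r.imag < 0 for r in roots) == 2
for e in expected:
    assert min(abs(r - e) for r in roots) < 1e-8
print("B^{RA} pole structure: 2 poles above, 2 below the real q0 axis")
```

The symmetric 2--2 split of the poles is what makes both contour closures equivalent here, in contrast to the $FA$ and $RF$ cases treated above.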
In this regime, \begin{eqnarray} \frac{P^{RA}(Q,\omega)}{Q - Q_j} &&\sim -16 i \pi ^2 \frac{Q_j}{Q^2} \Bigg(\omega^2 \left(2 m^2 v_F^2+Q_j^2\right)+2 i \omega \epsilon \left(2 m^2 v_F^2+Q_j^2\right) \nonumber\\ && -2 \left(Q_j^2 \left(2 m^2 v_F^4+\epsilon ^2\right)+2 m^2 v_F^2 \epsilon ^2+2 Q_j^4 v_F^2\right)\Bigg)\nonumber\\ &&-16 i \pi ^2 Q \Bigg(-2 \left(2 m^2 v_F^4+2 Q_j^2 v_F^2+\epsilon ^2\right)+\omega^2+2 i \omega \epsilon \Bigg)\nonumber\\ &&-\frac{16 i \pi ^2}{Q} \Bigg(\omega^2 \left(2 m^2 v_F^2+Q_j^2\right)+2 i \omega \epsilon \left(2 m^2 v_F^2+Q_j^2\right)\nonumber\\ && -2 \left(Q_j^2 \left(2 m^2 v_F^4+\epsilon ^2\right)+2 m^2 v_F^2 \epsilon ^2+2 Q_j^4 v_F^2\right)\Bigg)\nonumber\\ &&-16 i \pi ^2 Q_j \Bigg(-2 \left(2 m^2 v_F^4+2 Q_j^2 v_F^2+\epsilon ^2\right)+\omega^2+2 i \omega \epsilon \Bigg)\nonumber\\ &&+64 i \pi ^2 v_F^2 Q^3+64 i \pi ^2 Q_j v_F^2 Q^2 + O[Q^{-3}]\nonumber\\ &&\equiv P_{RA}^{Asymp}(Q,\omega,Q_j) + O[Q^{-3}], \end{eqnarray} where we have defined $P_{RA}^{Asymp}(Q,\omega,Q_j)$ as the polynomial obtained by truncating the asymptotic expansion above up to $O[Q^{-3}]$, for $Q > Q^{*}$. Therefore, using this expansion, we regularize each of the integrals using the prescription ($d = 2 - s$) \begin{eqnarray} \int_{0}^{\infty} \frac{dQ}{(2\pi)^3} Q \frac{P^{RA}(Q,\omega)}{Q - Q_j} &\rightarrow& \int_{0}^{Q^{*}} \frac{dQ}{(2\pi)^3} Q \frac{P^{RA}(Q,\omega)}{Q - Q_j}\nonumber\\ &&+ \int_{Q^{*}}^{\infty} \frac{dQ}{(2\pi)^3} Q \left[\frac{P^{RA}(Q,\omega)}{Q - Q_j} - P_{RA}^{Asymp}(Q,\omega,Q_j)\right]\nonumber\\ && + \int_{Q^{*}}^{\infty} \frac{dQ}{(2\pi)^3} m^{s}Q^{1-s} P_{RA}^{Asymp}(Q,\omega,Q_j)\;. 
\end{eqnarray} After lengthy but straightforward algebra, we obtain in the limit $\epsilon \rightarrow 0^{+}$ \begin{eqnarray} \Pi_{11}^{RA}(\omega,\mathbf{0}) &=& \frac{e^2}{4 m^2}\Bigg\{ i\frac{m^2\omega}{2\pi s} - i\frac{m^2\omega}{4\pi}\log\left[ -\frac{(\omega + 2 i \epsilon)}{2 m v_F}\right]\nonumber\\ &&- i\frac{m^2\omega}{4\pi}\log\left[ \frac{(\omega + 2 i \epsilon)}{2 m v_F}\right]\Bigg\}. \end{eqnarray} Notice that the final result does not depend on the arbitrary scale $Q^*$, as it must. \ack H.F. thanks ANPCyT, CONICET and UNLP, Argentina, for partial support through grants PICT-2014-2304, PIP 2015-688 and Proy. Nro. 11/X748, respectively. M.L. acknowledges support from FONDECYT (Chile) under grants No. 1170107, No. 1150471, No. 1150847 and ConicytPIA/BASAL (Chile) Grant No. FB0821. E. M. acknowledges support from FONDECYT (Chile) under grant No. 1141146. A.R.\ acknowledges support from Consejo Nacional de Ciencia y Tecnolog\'ia (Mexico) under grant 256494 and FONDECYT (Chile) under grant 1150847. H.F.\ and A.R.\ also acknowledge PUC for hospitality.
1911.02820
\section{Introduction} In this paper we are concerned with the Gross-Pitaevskii equation \begin{equation}\label{eq:GP} i \partial_t \Psi=\Delta \Psi+\Psi\left(1-|\Psi|^2\right) \quad \text{on } {\mathbb R}^d \times {\mathbb R} \end{equation} when $d=2$ or $d=3$. Observe that this is nothing but a nonlinear Schr\"{o}dinger equation with a Ginzburg-Landau potential. The Gross-Pitaevskii equation was proposed in 1961 (\cite{gross, pita}) to model a quantum system of bosons in a Bose-Einstein condensate, via a Hartree-Fock approximation (see also \cite{b1, b2, jpr1, jpr2}). It appears also in other contexts, such as the study of dark solitons in nonlinear optics (\cite{k1, k2}). From the point of view of the dynamics, the Cauchy problem for the Gross-Pitaevskii equation was first studied in one space dimension by Zhidkov \cite{Z} and in dimension $d=2,3$ by B\'{e}thuel and Saut \cite{bs} (see also \cite{ge1,ge2, killip}). At least formally, equation \eqref{eq:GP} possesses two invariants, namely: \begin{itemize} \item \emph{Energy:} \[ \mathcal{E}= \int_{{\mathbb R}^d} \frac 12 |\nabla \Psi|^2 +\frac 14 \left(1-|\Psi|^2\right)^2, \] \item \emph{Momentum:} \[ \mathcal{\bf{P}}=\frac 12 \int_{{\mathbb R}^d} \langle i \nabla \Psi, \Psi \rangle, \] where $ \langle f,g \rangle=Re(f)Re(g)+Im(f)Im(g)$. \end{itemize} This paper is focused on the existence of traveling wave solutions to \eqref{eq:GP}, that is, solutions of the form \begin{equation}\label{eq:ansatz} \Psi(x,t)=\psi(x_1-ct, \tilde{x}), \ \ \tilde{x}=(x_2, \dots, x_d) \in {\mathbb R}^{d-1}, \end{equation} where the parameter $c\in {\mathbb R}$ characterizes the speed of the traveling wave. Without loss of generality we will consider $c>0$ throughout the paper. With the ansatz \eqref{eq:ansatz}, the equation for the profile $\psi$ is given by \begin{equation}\label{eq:GPellip} i c \,\partial_{x_1}\psi +\Delta \psi+\left(1-|\psi|^2\right)\psi=0 \ \ \mbox{in } {\mathbb R}^d.
\end{equation} The study of finite energy traveling waves for \eqref{eq:GP} also has implications for the dynamics of the equation. In particular, their presence is an obstruction to scattering of solutions. Scattering of small energy solutions has been proved in \cite{gnt,gnt2} for $d=3$, whereas such a result does not hold in dimension $d=2$. This latter fact may seem surprising for a defocusing Schr\"{o}dinger equation; the reason is that finite energy solutions of \eqref{eq:GP} do not vanish at infinity. Nontrivial finite energy traveling waves in dimension $d=1$ are explicitly known, and they are uniquely given (up to rotation or translation) by the expression $$\psi_c(x)=\sqrt{\frac{2-c^2}{2}} \tanh \left( \frac{\sqrt{2-c^2} }{2}x \right)+i\frac{c}{\sqrt{2}},$$ if $c<\sqrt{2}$. In the literature the function $\psi_0$ is called the black soliton, whereas $\psi_c$ ($c \neq 0$) receives the name of dark soliton. Their orbital and asymptotic stability has been studied, see \cite{bgss, bgs}. \medskip The problem of finding solutions to \eqref{eq:GPellip} in dimension $d\geq 2$ has a long history. In the pioneering work of Jones, Putterman and Roberts (\cite{jpr1, jpr2}), formal calculations and numerical analysis gave rise to a set of conjectures regarding existence, asymptotic behavior and stability of finite energy traveling waves: the so-called Jones-Putterman-Roberts program. In particular, the existence of finite energy traveling waves is expected if and only if $c \in (0, \sqrt{2})$ (the subsonic case). The threshold value $c= \sqrt{2}$ comes from the linearization of the problem around the constant solutions of modulus 1. In a certain sense, those solutions correspond to local minima if $c< \sqrt{2}$. In recent years much progress has been made towards rigorous proofs of those conjectures. Nontrivial finite energy traveling waves for supersonic speed $c>\sqrt{2}$ do not exist, see \cite{gravejat-CMP}.
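Returning for a moment to the explicit one-dimensional solitons above, the profile can be verified directly; the numerical sketch below is my own check, at an arbitrary sample point. A sign-convention caveat: with the profile equation written exactly as in \eqref{eq:GPellip}, the verification goes through with the imaginary part taken as $-c/\sqrt{2}$, i.e. for the complex conjugate of the displayed formula (equivalently, the displayed formula solves the equation with $c$ replaced by $-c$).

```python
import math

# Direct check of the 1D dark-soliton profile at a sample point.
# Convention caveat (see lead-in): we verify  i c psi' + psi'' + (1 - |psi|^2) psi = 0
# for the conjugate profile, with imaginary part -c/sqrt(2).
c, x = 0.7, 0.3                       # subsonic speed and sample point (arbitrary)
A = math.sqrt((2 - c * c) / 2)        # amplitude sqrt((2 - c^2)/2)
k = math.sqrt(2 - c * c) / 2          # inverse width sqrt(2 - c^2)/2

t = math.tanh(k * x)
sech2 = 1.0 - t * t                   # sech^2(kx) = 1 - tanh^2(kx)

psi = A * t - 1j * c / math.sqrt(2)
dpsi = A * k * sech2                  # psi'(x)
d2psi = -2.0 * A * k * k * t * sech2  # psi''(x)

residual = 1j * c * dpsi + d2psi + (1.0 - abs(psi) ** 2) * psi
assert abs(residual) < 1e-12
print("dark-soliton profile verified, |residual| =", abs(residual))
```

The cancellation uses exactly the two algebraic facts $A^2 = 2k^2$ and $k = A/\sqrt{2}$, which also explain why the soliton degenerates as $c \to \sqrt{2}$.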
In dimension $d=2$ this nonexistence result holds also for $c=\sqrt{2}$, see \cite{gravejat-DIA}. For general nonlinearities analogous results have been proved in \cite{Maris-SIAM}. Concerning the asymptotics, for any $d \geq 2$, finite energy solutions of \eqref{eq:GPellip} converge at infinity to a fixed complex number of modulus $1$. By the phase invariance of the problem, we can assume that \begin{equation} \label{limit1} \psi(x) \to 1 \mbox{ as } |x| \to +\infty. \end{equation} A more precise asymptotic description of $\psi$ is indeed available, see \cite{gravejat-AIHP, gravejat-Asymp, gravejat-Adv.Diff}. A very active field of research is the study of the location and dynamics of vortices, namely, the zeroes of the wave function $\psi$. The existence of multi-vortex traveling waves with small speed has been proved in dimension $d=2$, see \cite{ lw, chr2, chr3}. In dimension $3$ there are traveling vortex rings (\cite{lwy}) as well as leapfrogging vortex rings, see \cite{jer2}. \medskip At least formally, the Lagrangian associated to \eqref{eq:GPellip} is defined as: \begin{equation}\label{eq:ac} I^c(\psi)= \mathcal{E}(\psi) - c \mathcal{P}(\psi)= \frac{1}{2} \int_{{\mathbb R}^d} |\nabla \psi|^2 -c\mathcal{P}(\psi)+ \frac 14 \int_{{\mathbb R}^d} \left(1-|\psi|^2\right)^2, \end{equation} where $\mathcal{P}$ is the first component of the momentum $\mathcal{\bf P}$ that, under suitable integrability conditions (and taking into account \eqref{limit1}), can be written as: \begin{equation}\label{eq:MO} \mathcal{P}(\psi):= -\int_{{\mathbb R}^d} \partial_{x_1} (Im \Psi) (Re \Psi-1). \end{equation} A classical approach to prove existence of traveling waves (starting from \cite{jpr1, jpr2}) is a minimization procedure for the energy functional $\mathcal{E}$ under the constraint $\mathcal{P}(\psi)=p$ in a suitable functional space.
This approach has been pursued in a number of papers, see for instance \cite{bgs-cmp, bos} for the Gross-Pitaevskii equation and \cite{chr-mar} for more general nonlinearities. A major difficulty in this strategy is to find a natural definition of the momentum for functions with finite energy, since the integrand in \eqref{eq:MO} might be non-integrable (see \cite{bos}). This approach has the advantage of providing orbital stability of the solutions found (more precisely, of the set of minimizers). As a drawback, the speed $c$ appears as a Lagrange multiplier and is not under control. In particular the possibility of gaps in the subsonic range of velocities cannot be excluded with the constrained minimization approach (see \cite{bgs-cmp}). Let us also mention existence results for small values of $c$, see \cite{bs} in dimension 2 and \cite{chr} in dimension 3, but a complete existence result in the subsonic case remained open for many years. Finally, Mari\c s proved in \cite{Ma} the existence result for any $c \in (0, \sqrt{2})$ in dimension $d \geq 3$. In short, his approach is to minimize $I^c(\psi)$ under a Pohozaev-type constraint. Once this is accomplished, Mari\c s proves that the corresponding Lagrange multiplier is 0, concluding the proof. This approach works also for more general nonlinearities with nonvanishing conditions at infinity, such as the cubic-quintic nonlinearity. As commented in \cite{Ma}, this minimization approach breaks down in dimension 2 because of different scaling properties: the infimum is $0$ and is never attained. One important tool in Mari\c s' argument is the use of the fiber $t \mapsto u_t$, where $u_t(x_1, \tilde{x}) = u(x_1, t \tilde{x})$. For instance, in dimension $d \geq 4$ all solutions correspond to a maximizer of $I^c$ with respect to that fiber. In dimension $3$, $I^c(u_t)$ is independent of $t$ for any solution: the argument needs to be adapted, but the use of the fiber is still essential.
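For the reader's convenience, let us record the (formal) scaling computation behind these facts; the notation anticipates Lemma \ref{lem:poho2} below, and we assume enough decay on $u$ to justify the change of variables. Writing $A(u)=\frac 12 \sum_{j=2}^d \int_{{\mathbb R}^d} |\partial_{x_j} u|^2$ and $B(u)=I^c(u)-A(u)$, one has $$ I^c(u_t)= t^{3-d}\, A(u) + t^{1-d}\, B(u), \qquad t>0.$$ If $u$ is a solution, the identity $(d-3)A(u)+(d-1)B(u)=0$ of Lemma \ref{lem:poho2} yields $$ \frac{d}{dt}\, I^c(u_t)= (d-3)\, A(u)\, t^{-d}\, (1-t^2),$$ so that $t=1$ is a maximum point along the fiber if $d \geq 4$, $I^c(u_t)$ is constant in $t$ if $d=3$, and the monotonicity is reversed if $d=2$.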
Those cases have an analogy in the study of the nonlinear Schr\"{o}dinger equation, see \cite{blions}, \cite{bgk}, respectively. However, in dimension $2$ this approach breaks down, and the fiber $u_t$ seems of no use; $I^c(u_t)$ attains a minimum at $t=1$ for any solution $u$. One of the main motivations of this paper is to deal with the physically relevant $2D$ model, where the existence of finite energy traveling waves in the full subsonic range is still an open problem. Our main result is the following: \begin{thm} \label{teo:almost} There exists a subset $E \subset (0,\sqrt{2})$ of full measure such that, for any $c \in E$, there exists a nontrivial finite energy solution $\psi_c$ of \eqref{eq:GPellip} such that: \begin{enumerate} \item For any $c_0 \in (0, \sqrt{2})$ there exists $\chi=\chi(c_0)>0$ such that $$0 < I^c(\psi_c) \leq \chi \ \mbox{ for all } c \in E, \ c \geq c_0;$$ \item $ind (\psi_c) \leq 1$, where $ind (\psi_c)$ stands for the Morse index of $\psi_c$, that is, $$ \sup \{dim\, Y: \ Y \subset C_0^{\infty}({\mathbb R}^d) \mbox{ vector space, } (I^c)''(\psi_c)(\phi,\phi) <0 \ \forall \, \phi \in Y \} \leq 1.$$ \end{enumerate} \end{thm} The proof deals directly with the Lagrangian $I^c$ and is focused on the search for critical points via min-max arguments. Our proofs use several ingredients: \begin{itemize} \item Several regularization (or relaxation) techniques have been used in the literature to deal with the Gross-Pitaevskii equation (\cite{bs, Ma}). Alternatively, some authors have proposed an approach by approximating domains, like flat tori, see \cite{bgs-cmp, bos}. In this paper we choose the second approach, but we use as approximating domains the slabs: \begin{equation} \label{omegaN} \Omega_N=\left\{(x_1,\tilde{x})\in {\mathbb R}\times {\mathbb R}^{d-1}, \ \ \ -N<x_1<N\right\}, \ N \in {\mathbb N}.
\end{equation} In other words, we first use a mountain-pass argument to address the question of existence of solutions to the problem: \begin{equation}\label{eq:GPstrip} \begin{array}{rcr} i c\partial_{x_1}\psi +\Delta \psi+\left(1-|\psi|^2\right)\psi & = & 0 \ \ \text{ on } \Omega_{N}, \\ \psi & = & 1 \text{ on } \partial \Omega_{N}. \end{array} \end{equation} The boundary condition is motivated by \eqref{limit1}. This approach has several advantages. First, as $\Omega_N$ is bounded in the $x_1$ direction, the Poincar\'{e} inequality holds and we can work on the space $1+H_0^1(\Omega_N)$. As a consequence the momentum given by formula \eqref{eq:MO} is well defined. Secondly, as $\Omega_N$ is invariant under dilations in the $\tilde{x}$ variable, a Pohozaev-type identity holds without boundary terms (see Lemma \ref{lem:poho2}). This allows us to avoid the problem of unfolding choices of tori, as in \cite{bgs-cmp, bos}. \item A second fundamental tool is an energy bound argument via monotonicity, in order to control the energy of (PS) sequences for almost all values of $c$. This idea has been used many times in the literature, starting from \cite{struwe}. The main point here is that we are able to obtain a \emph{uniform bound on the energy for a subsequence of enlarging slabs $\Omega_{k(N)}$}. This is based on a key analytic argument, and it is fundamental in what follows. To the best of our knowledge, this abstract argument is completely new and could be of use in other frameworks where a monotonicity argument is used together with a relaxation procedure. \item The next step is to pass to the limit, and for that we need to deal with the problem of vanishing. Here we rely on arguments of \cite{bgs-cmp}, and we use in an essential way the fact that $\psi_N$ are solutions of \eqref{eq:GPstrip}.
We can also exclude the concentration of solutions near the boundary of $\Omega_N$, since the problem posed in the half-space $$ \begin{array}{rcl}i c\partial_{x_1}\psi +\Delta \psi+\left(1-|\psi|^2\right)\psi & = & 0 \ \ \text{ on } {\mathbb R}^d_+, \\ \psi & = & 1 \ \text{ on } \partial {\mathbb R}^d_+, \end{array} $$ does not admit nontrivial solutions. This is another reason for the choice of $\Omega_N$ as approximating domains (see Remark \ref{remark halfspace}). \item Finally, we use the arguments of \cite{FG} to derive a Morse index bound for the solutions obtained. Roughly speaking, since our solutions come from a mountain-pass argument, their Morse index is at most 1. This will be used in an essential way in the proof of Theorem \ref{teo:all}. \end{itemize} With Theorem \ref{teo:almost} in hand, one could ask whether we can pass to the limit and obtain a nontrivial solution for all values of $c \in (0,\sqrt{2})$. This is relatively easy, see Proposition \ref{ascoli}. The problem here is to show that the limit solution has finite energy. Let us point out that the boundedness of the energy cannot be deduced only by using Pohozaev-type identities, and more delicate arguments are needed. We give two results on this aspect. The only requirement of the next theorem is $d=3$: \begin{thm} \label{teo:3} Assume that $d=3$. Let $c \in (0,\sqrt{2})$, $c_n \in E$, $c_n \to c$, where $E$ is the set given by Theorem \ref{teo:almost}. Let $\psi_n$ be the finite energy solutions with speed $c_n$ given by that theorem. Then there exists $\xi_n \in {\mathbb R}^d$ such that: $$ \psi_n(\cdot - \xi_n) \to \psi \ \mbox{ in } C^k_{loc}({\mathbb R}^d),$$ where $\psi$ is a nontrivial finite energy solution of \eqref{eq:GPellip} with speed $c$. \end{thm} Observe that Theorems \ref{teo:almost} and \ref{teo:3} give an alternative proof of the result of Mari\c s \cite{Ma} for the Gross-Pitaevskii equation.
\medskip Under minor changes, Theorems \ref{teo:almost} and \ref{teo:3} can be adapted to $d \geq 4$: the problem there is the fact that the term $(1-|\psi|^2)^2$ becomes critical or supercritical with respect to the Sobolev embedding. However, since this term has a positive sign in the functional, this issue could be fixed by suitably changing the functional setting or, alternatively, by using a convenient truncation argument. For the sake of brevity we will not do so and restrict ourselves to the relevant spatial dimensions $d=2$ and $d=3$. \medskip Regarding compactness of solutions, the case $d=2$ is, again, more involved. It presents analytical difficulties and also topological obstructions, see Remarks \ref{r1}, \ref{r2}. In dimension 2 we are able to conclude only under some assumptions on the vortex set of the solutions: \begin{thm} \label{teo:all} Take $c \in (0, \sqrt{2})$, $c_n \in E$ with $c_n \to c$ and $\psi_n$ the finite energy solutions with speed $c_n$ given by Theorem \ref{teo:almost}. Assume that \begin{enumerate} \item either $\psi_n$ are vortexless, that is, $\psi_n(x) \neq 0 $ for all $x \in {\mathbb R}^d$, \item or there exist $R>0$, $\delta>0$ such that: \begin{equation} \label{control} \{x \in {\mathbb R}^d: \ \psi_n(x)=0 \} \subset B(0, R) \mbox{ and } |\psi_n(x)| \geq \delta \ \forall \, x \in \partial B(0,R). \end{equation} \end{enumerate} Then there exists $\xi_n \in {\mathbb R}^d$ such that: $$ \psi_n(\cdot - \xi_n) \to \psi \ \mbox{ in } C^k_{loc}({\mathbb R}^d),$$ where $\psi$ is a nontrivial finite energy solution of \eqref{eq:GPellip} with speed $c$. \end{thm} The proofs of both Theorem \ref{teo:3} and Theorem \ref{teo:all} follow similar ideas, which include the following: \begin{itemize} \item A fundamental tool is the use of a lifting, that is, the existence of real functions $\rho_n(x)$, $\theta_n(x)$ such that $\psi_n = \rho_n e^{i \theta_n}$. This is always possible if the solutions are vortexless.
If the solutions present vortices, one needs some information on the location of the vortex set. In Theorem \ref{teo:3} one can show that the vortices are included in a set of disjoint balls, and that the number of balls and their radii are bounded. Generally speaking, a nonvanishing function $\psi$ admits a lifting if its domain is simply connected. Since the complement of a disjoint union of closed balls is simply connected if $d=3$, we can find a lifting outside those balls. In dimension 2 this is no longer true, though, and we can use a lifting only in the complement of one ball, since the total degree of a finite energy solution is 0 (see \cite{gravejat-AIHP}). \item We reason by contradiction, assuming that $\mathcal{E}(\psi_n) \to +\infty $. A Pohozaev-type identity implies that $$ \sum_{k=2}^d \int_{{\mathbb R}^d} |\partial_{x_k} \psi_n|^2 = (d-1) I^{c_n}(\psi_n),$$ and $I^{c_n}(\psi_n)$ is bounded by Theorem \ref{teo:almost}. In our arguments we can pass to a limit (locally) which is a 1-D solution of the Gross-Pitaevskii equation (with finite or infinite energy). The knowledge of those 1-D solutions is essential at this point. For instance, in the proof of Theorem \ref{teo:all} we are able to obtain in the limit a circular solution $\psi(x_1) = \rho_0 e^{i \omega_0 x_1}$, with $\rho_0^2 < \frac{2}{3} (1+c^2/4)$. But it turns out that such a solution has infinite Morse index, and we reach a contradiction. \end{itemize} Under minor changes, it is possible to adapt the results of this paper to an equation with more general nonlinearities, namely: $$ i c\partial_{x_1}\psi +\Delta \psi+ F(|\psi|)\psi=0 \ \ \text{ on } {\mathbb R}^d.$$ Several assumptions on the nonlinearity $F$ would be in order. However, for the sake of brevity and clarity, we have preferred to focus on the prototype model of the Gross-Pitaevskii equation in this paper. The rest of the paper is organized as follows. Section 2 is devoted to the setting of the notation and some preliminary results.
In Section 3 we begin the proof of Theorem \ref{teo:almost} by considering problem \eqref{eq:GPstrip} from a variational point of view. A main issue here is that we are not able to show that (PS) sequences have bounded energy. This problem is solved for almost all values of $c$ via the monotonicity trick of Struwe in Section 4. We are able to find sequences of slabs $\Omega_{k(N)}$ for which those solutions have uniformly bounded energy. In Section 5 we pass to the limit avoiding vanishing or concentration on the boundary, concluding the proof of Theorem \ref{teo:almost}. Sections 6 and 7 are devoted to the proofs of Theorems \ref{teo:3}, \ref{teo:all}, respectively. The appendix deals with the Morse index computation of the 1-D circular solutions of the Gross-Pitaevskii equation, which is needed in the conclusion of Theorem \ref{teo:all}. \bigskip \medskip {\bf Acknowledgements:} The authors wish to thank Rafael Ortega for many discussions on the 1-D solutions of the Gross-Pitaevskii equation, and also for his help in the elaboration of the Appendix. \section{Preliminaries} In this section we collect some well-known properties of solutions of the Gross-Pitaevskii equation. We begin by establishing the notation that we will use throughout the paper. \medskip {\bf Notation:} We denote by $\langle z_1, z_2 \rangle$ the real scalar product of two elements in $\mathbb{C}$, that is, $\langle z_1, z_2 \rangle = Re (z_1 \overline{z_2})$. We denote instead by $\xi_1 \cdot \xi_2$ the real scalar product in ${\mathbb R}^d$, to avoid confusion. We shall use the letter $\psi$ for complex valued functions, and we will denote its real and imaginary parts by $u$ and $v$, respectively, so that $\psi = u + i v$. Moreover, we will write $\rho$ to denote its modulus, that is, $\rho^2 = u^2 + v^2= \langle \psi, \psi \rangle$. We denote the partial derivatives by $\partial_{x_1} \psi$, but sometimes we will use $\psi_{x_1}$ for convenience.
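For instance, with this notation a direct computation gives $$ \langle \psi, i \partial_{x_1} \psi \rangle = Re \big( \psi \, \overline{i \partial_{x_1} \psi} \big) = v\, \partial_{x_1} u - u \,\partial_{x_1} v,$$ an expression that will appear in the quadratic form \eqref{defQ} below.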
\medskip In the next lemma we are concerned with the regularity of solutions and the uniform boundedness of their derivatives. \begin{lem} \label{lem:bound} Any solution $\psi$ of \eqref{eq:GPellip} or \eqref{eq:GPstrip} is of class $C^{\infty}$ and, for any $k \in {\mathbb N}$, there exists $C_k>0$ such that $\| D^k \psi (x) \| \leq C_k$ for any $x$ in the corresponding domain. \end{lem} The above result is well-known. The starting point is the $L^\infty$ estimate: $$ \| \psi \|_{L^\infty} \leq \sqrt{1+c^2/4}.$$ This was proved in \cite{farina} for all entire solutions of \eqref{eq:GPellip} (not only those with finite energy). The argument works equally well for problem \eqref{eq:GPstrip} since the boundary condition is compatible with the $L^\infty$ bound. From this, one can obtain the result via local elliptic regularity estimates. Indeed, the solutions are analytic; see \cite[Theorem 2.1]{bgs-cmp} for more details. \medskip The next lemma gives a Pohozaev identity: \begin{lem} \label{lem:poho} Let $\psi$ be a finite energy solution of \eqref{eq:GPellip}. Then: $$\frac{d-2}{2} \int_{{\mathbb R}^d} |\nabla \psi|^2 -(d-1) c \mathcal{P}(\psi)+ \frac d 4 \int_{{\mathbb R}^d} \left(1-|\psi|^2\right)^2 =0.$$ \end{lem} \begin{proof} See for instance \cite{gravejat-CMP}, or \cite[Lemma 2.5 and following]{bgs-cmp}. \end{proof} The next identity is also of Pohozaev type, but only uses the invariance of the domain under dilations in the $\tilde{x}$ variable: \begin{lem} \label{lem:poho2} Let $\psi$ be a finite energy solution of either \eqref{eq:GPellip} or \eqref{eq:GPstrip}. Then the following identity holds: $$(d-3) A(\psi)+(d-1)B(\psi)=0,$$ where \[ A(\psi)=\frac 12 \sum_{j=2}^d \int |\partial_{x_j} \psi|^2 \] and \[ B(\psi)=\frac{1}{2} \int |\partial_{x_1} \psi|^2 + \frac 14 \int \left(1-|\psi|^2\right)^2 -c\mathcal{P}(\psi). \] Moreover, since $I^c(\psi)=A(\psi)+B(\psi)$ by the definition of the Lagrangian \eqref{eq:ac}, we conclude that \begin{equation}\label{eq:A bounded} I^c(\psi)=\frac{2}{d-1}A(\psi)\geq 0.
\end{equation} Finally, $I^c(\psi)=0$ if and only if $\psi$ is a constant function of modulus 1. \end{lem} \begin{proof} The case of \eqref{eq:GPellip} has actually been proved in \cite[Proposition 5]{gravejat-CMP}, taking into account \cite[Lemma 2.5]{bgs-cmp} (see also \cite[Proposition 4.1]{Maris-SIAM}). The case of the domain $\Omega_N$ is completely analogous and is based on the fact that the dilations $(x_1, \tilde{x}) \mapsto (x_1, \lambda \tilde{x} )$ leave the domain $\Omega_N$ invariant. \end{proof} The following decay estimate has been proved in \cite{gravejat-AIHP}: \begin{lem} \label{lem:decay} Let $\psi$ be a finite energy solution of \eqref{eq:GPellip} satisfying \eqref{limit1}. Then the following asymptotics hold: $$ |v (x) | \leq \frac{K}{1+|x|^{d-1}}, \ \ |u(x)- 1 | \leq \frac{K}{1+|x|^{d}},$$ $$ |\nabla v (x) | \leq \frac{K}{1+|x|^{d}}, \ \ |\nabla u (x) | \leq \frac{K}{1+|x|^{d+1}}.$$ Outside a ball $B(0,R)$ containing all vortices, $\psi$ can be lifted as $\psi = \rho e^{i\theta}$. Then the above decay estimates can be written as: $$ |\theta (x) | \leq \frac{K}{1+|x|^{d-1}}, \ \ | \rho(x) - 1 | \leq \frac{K}{1+|x|^{d}},$$ $$ |\nabla \theta (x) | \leq \frac{K}{1+|x|^{d}}, \ \ |\nabla \rho (x) | \leq \frac{K}{1+|x|^{d+1}}.$$ In particular, the momentum \eqref{eq:MO} is well defined for any finite energy solution of \eqref{eq:GPellip}. \end{lem} We now define the Morse index of a solution of \eqref{eq:GPellip}: \begin{definition} \label{Morse} Let $\psi$ be a solution of \eqref{eq:GPellip} (either with finite or infinite energy). We define its Morse index $ind(\psi)$ as: $$ \sup \{dim\, Y: \ Y \subset C_0^{\infty}({\mathbb R}^d) \mbox{ vector space, } Q(\phi)<0 \ \forall \, \phi \in Y \},$$ where \begin{equation} \label{defQ} Q(\phi)=\int_{{\mathbb R}^d} |\nabla \phi|^2 - c \langle \phi, i \partial_{x_1} \phi \rangle - (1 - |\psi|^2) |\phi|^2 +2 \langle \phi, \psi \rangle^2.
\end{equation} If that set of dimensions is not bounded from above, we will say that the Morse index is $+\infty$. \medskip Observe that, at least formally, $Q(\phi) = (I^c)''(\psi)[\phi, \phi]$, and hence the Morse index is nothing but the maximal dimension for which $(I^c)''(\psi)$ is negative definite. \begin{remark} \label{remark morse} A useful property of the so-defined Morse index is that it does not increase under convergence on compact sets. More specifically, assume that $\psi_n$ is a sequence of solutions of \eqref{eq:GPellip} or \eqref{eq:GPstrip}. Assume also that $ind(\psi_n) \leq m$ and $\psi_n$ converges to $\psi_0$ in $C^1_{loc}$ sense. Then $ind (\psi_0) \leq m$. This property will be essential, in particular, in the proof of Theorem \ref{teo:all}. \end{remark} \end{definition} \section{The variational approach to Problem \eqref{eq:GPstrip}} We first recall the definition of $\Omega_N$ \eqref{omegaN} and observe that in the Sobolev space $H_0^1(\Omega_N)$ the Poincar\'{e} inequality holds: \begin{equation} \label{eq:poincare} \int_{\Omega_N} |\phi|^2 \leq C_N \int_{\Omega_N} |\nabla \phi|^2 \ \ \forall \ \phi \in H_0^1(\Omega_N). \end{equation} If we combine this with the Sobolev inequality, we obtain that \begin{equation} \label{sob} \| \phi \|_{L^p} \leq C_N \| \nabla \phi \|_{L^2}, \ \ \left \{ \begin{array}{ll} p\in [2, 6] & \mbox{if } d=3, \\ \\ p \geq 2 & \mbox{if } d=2. \end{array}\right. \end{equation} Let us define the action functional $I^c_N$ as the Lagrangian $I^c$ defined in \eqref{eq:ac} restricted to the affine space $ 1+H_0^1(\Omega_N)$, that is, $$ I^c_N(\psi):= \mathcal{E}(\psi) - c \mathcal{P}(\psi)= \frac{1}{2} \int_{\Omega_N} |\nabla \psi|^2 -c\mathcal{P}(\psi)+ \frac 14 \int_{\Omega_N} \left(1-|\psi|^2\right)^2. $$ We notice that, thanks to the identities $$\mathcal{P}(u+i v)=- \int_{\Omega_N} (u(x)-1)\partial_{x_1} v(x),$$ $$(1-u^2-v^2)^2=(2(u-1)+(u-1)^2+v^2)^2,$$ the action functional $I^c_N$ is of class $C^2$ on $1+H^1_0(\Omega_N)$.
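For instance, writing $\phi = \psi - 1=(u-1)+iv \in H^1_0(\Omega_N)$, the second identity gives the elementary bound $$ \frac 14 \int_{\Omega_N} \left(1-|\psi|^2\right)^2 \leq C \int_{\Omega_N} (u-1)^2+(u-1)^4+v^4 \leq C' \left( ||\phi||_{H^1_0(\Omega_N)}^2 + ||\phi||_{H^1_0(\Omega_N)}^4 \right), $$ where we have used \eqref{eq:poincare} and \eqref{sob} with $p=4$; in particular, the quartic term is finite on $1+H^1_0(\Omega_N)$.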
Our aim is to prove the existence of critical points of the action functional where the velocity parameter is fixed; these critical points correspond to solutions of \eqref{eq:GPstrip}. Let us point out that $H_0^1(\Omega_N)$ is included in $H_0^1(\Omega_{N'})$ if $N'>N$ (up to extension by $0$). Our strategy is to prove that $I^c_N$ has a mountain-pass geometry on $1+ H_0^1(\Omega_N)$. More precisely, we aim to prove that \begin{equation}\label{gamma} \gamma_N(c):= \inf_{g \in \Gamma(N)} \max_{t\in [0,1]}I^c_N(g(t)) > 0, \end{equation} where \begin{equation}\label{Gamma} \Gamma(N) =\{g \in C([0,1], 1+H_0^1(\Omega_N)): \ g(0)=1, \ g(1)=\psi_0\}, \end{equation} and $\psi_0$ is chosen so that $I^c_N(\psi_0)<0$. \begin{prop} \label{min-max} Given any $c_0 \in (0, \sqrt{2})$, there exist $N_0>0$, $\psi_0 \in 1 + H_0^1(\Omega_{N_0})$ and $\chi(c_0)>0$ such that $\forall N \geq N_0$, $c \in [c_0, \sqrt{2})$: \begin{enumerate} \item[a)] $I^c_N(\psi_0)<0$. \item[b)] $0 < \gamma_N(c) \leq \chi(c_0)$. \end{enumerate} \end{prop} \begin{proof} We can write the action functional $I^c_N(\psi)$ as: $$ I^c_N(\psi) = \int_{\Omega_N} \frac{1}{2} |\nabla u|^2 + \frac{1}{2} |\nabla v|^2 -c (1-u)\partial_{x_1} v +\frac 1 4 (2(u-1)+(u-1)^2+v^2)^2. $$ Moreover we have the elementary inequality $c x y\leq \frac{c^2}{4}x^2+y^2$, so that \begin{eqnarray*} I^c_N(\psi) \geq \displaystyle \int_{\Omega_N} \frac{1}{2} |\nabla u|^2 + \left(\frac{1}{2} -\frac{c^2}{4} \right) |\nabla v|^2 - (u-1)^2 + \frac{(2(u-1)+(u-1)^2+v^2)^2}{4} \\ \geq \displaystyle\int_{\Omega_N} \frac{1}{2} |\nabla u|^2 + \left(\frac{1}{2} -\frac{c^2}{4} \right) |\nabla v|^2 - |u-1|^3 - |u-1|v^2. \qquad \qquad \qquad \quad \ \ \ \end{eqnarray*} By using H\"older's inequality and \eqref{sob}, we obtain: $$ I^c_N(\psi) \geq \left(\frac{1}{2} -\frac{c^2}{4} \right) ||\psi-1||_{H^1_0(\Omega_N)}^2 - K||\psi-1||_{H^1_0(\Omega_N)}^3, $$ and hence $\psi=1$ is a local minimum of the action functional whenever $c^2<2$.
In \cite[Lemma 4.4]{Ma}, a compactly supported function $\phi_0$ is found so that $I^{c_0}(1+\phi_0)<0$. Set $\psi_0=1+\phi_0$; it then suffices to take $N_0$ sufficiently large so that $\Omega_{N_0} \supset supp \, \phi_0$ to obtain a). Finally, define $\gamma_0(t) = 1 + t \phi_0$, which obviously belongs to $\Gamma(N)$ for all $N \geq N_0$. Observe that: $$ I_N^c(\gamma_0(t)) = \mathcal{E}(\gamma_0(t)) - c \, t^2 \mathcal{P}(\psi_0).$$ As commented above, $I_N^{c_0}(\psi_0)<0$, which implies that $\mathcal{P}(\psi_0)>0$. Hence, for all $c \geq c_0$, $$ I_{N}^c(\gamma_0(t)) \leq I_N^{c_0}(\gamma_0(t)) \leq \max_{t \in [0,1]} I_{N_0}^{c_0} ( \gamma_0(t))=:\chi(c_0).$$ As a consequence, $\gamma_N(c) \leq \chi(c_0)$ for all $N \geq N_0$, $c \geq c_0$. \end{proof} It is standard (see for instance \cite{AM, willem}) that the mountain-pass geometry induces the existence of a Palais-Smale sequence at the level $\gamma_N(c)$. Namely, a sequence $\psi_n$ such that $$I_N^c(\psi_n)=\gamma_N(c)+o(1), \ \ \ ||(I_N^c)'(\psi_n)||_{H^{-1}(\Omega_N)}=o(1).$$ It is not clear whether such Palais-Smale sequences are bounded or not; this is one of the main difficulties. The question of the existence of Palais-Smale sequences with bounded energy for almost all values of $c$ will be addressed in the next section. In what follows we show that, if bounded, such sequences give rise to critical points of $I^c_N$. \begin{lem}\label{eq:novan} Let $d=2,3$ and let $\left\{Q_j\right\}$ be a covering of $\Omega_N$ by disjoint unit cubes. If $\psi_n=u_n+ i v_n$ is a bounded vanishing sequence in $1+ H^1_0(\Omega_N)$, i.e. such that $$\sup_j \int_{Q_j} |u_n-1|^p+|v_n|^p \rightarrow 0$$ for some $2\leq p<\infty$ if $d=2$, $2\leq p<6$ if $d=3$, then $$\int_{\Omega_N} |u_n-1|^r+|v_n|^r \rightarrow 0$$ for any $2< r<\infty$ if $d=2$, $2<r<6$ if $d=3.$ \end{lem} \begin{proof} The proof is standard, see e.g.\ \cite[Lemma I.1]{lions2}.
\end{proof} \begin{prop}[Splitting property]\label{prop:12} Let $0 <c<\sqrt{2}$ and let $\psi_n$ be a bounded Palais-Smale sequence at the energy level $\gamma_N(c)$. Then there exist $k$ sequences of points $\left\{y_n^j\right\} \subset \{0\} \times {\mathbb R}^{d-1}$, $1\leq j\leq k,$ with $|y_n^j - y_n^l| \to +\infty$ if $j \neq l$, such that, up to a subsequence, \begin{eqnarray} \psi_n -1 =w_n + \sum_{j=1}^k (\psi^j(\cdot + y_n^j)-1) \text{ with } w_n \rightarrow 0 \text{ in } H^1_{0}(\Omega_N), \\ ||\psi_n-1||_{ H^1_0(\Omega_N)}^2\rightarrow \sum_{j=1}^k ||\psi^j-1||_{H^1_0(\Omega_N)}^2, \qquad \qquad \qquad \label{eq:spl12} \\ I^c_N(\psi_n ) \rightarrow \sum_{j=1}^k I^c_N(\psi^j), \qquad \qquad \qquad \qquad \quad \label{eq:spl3} \end{eqnarray} where $\psi^j$ are nontrivial finite energy solutions to \eqref{eq:GPstrip}. In particular $I^c_N(\psi^j)\leq \gamma_N(c) \leq \chi(c_0)$ and $\mathcal{E}({\psi^j}) \leq \limsup_{n \to +\infty} \, \mathcal{E}(\psi_n)$ for all $j =1, \dots, k$. \end{prop} \begin{proof} In this proof, for the sake of clarity, we drop the dependence on $c$. We first claim that $\psi_n$ is not vanishing. Reasoning by contradiction, by means of Lemma \ref{eq:novan} we have \begin{equation}\label{eq:consvan} \int_{\Omega_N} |u_n-1|^r+|v_n|^r \rightarrow 0 \end{equation} for any $2< r<\infty$ if $d=2$, $2<r<6$ if $d=3.$\\ Then, $$\int_{\Omega_N} (1-u_n^2-v_n^2)^2=\int_{\Omega_N} 4(u_n-1)^2+(u_n-1)^4+v_n^4+$$ $$+\int_{\Omega_N} 4(u_n-1)^3+4(u_n-1)v_n^2+2(u_n-1)^2v_n^2,$$ and it follows by H\"older's inequality and \eqref{eq:consvan} that \begin{equation}\label{eq:crucvan} \int_{\Omega_N} (1-u_n^2-v_n^2)^2 =\int_{\Omega_N} 4(u_n-1)^2 +o(1).
\end{equation} Therefore \begin{eqnarray} &I_N(u_n,v_n)=\frac{1}{2} \displaystyle \int_{\Omega_N} |\nabla u_n|^2 + \frac{1}{2} \int_{\Omega_N} |\nabla v_n|^2 -c \int_{\Omega_N} (1-u_n(x))\partial_{x_1} v_n(x) + \nonumber\\ &+ \displaystyle \int_{\Omega_N} (u_n-1)^2 +o(1). \label{eq:vanen} \end{eqnarray} On the other hand, a direct computation gives $$o(1)=I_N'[\psi_n](1-\psi_n)=\int_{\Omega_N}|\nabla u_n|^2+|\nabla v_n|^2 -2c \int_{\Omega_N} (1-u_n)\partial_{x_1} v_n(x) +$$ $$+\int_{\Omega_N} (1-u_n^2-v_n^2)(u_n(1-u_n)-v_n^2).$$ Arguing as before we notice that $$\int_{\Omega_N} (1-u_n^2-v_n^2)v_n^2=o(1)$$ and, thanks to H\"older's inequality and \eqref{eq:crucvan}, $$\int_{\Omega_N} (1-u_n^2-v_n^2)(u_n(1-u_n))=\int_{\Omega_N} (1-u_n^2-v_n^2)(u_n-1+1)(1-u_n)$$ $$=\int_{\Omega_N} (1-u_n^2-v_n^2)(1-u_n)+o(1)= \int_{\Omega_N} (1-u_n^2)(1-u_n)+o(1) = $$ $$ \int_{\Omega_N} (2(1-u_n)-(1-u_n)^2)(1-u_n)+o(1)= 2||u_n-1||_{L^2(\Omega_N)}^2+o(1).$$ Hence we get \begin{equation}\label{eq:vander} I_N'[\psi_n](1-\psi_n)=\int_{\Omega_N}|\nabla u_n|^2+|\nabla v_n|^2 -2c \int_{\Omega_N} (1-u_n)\partial_{x_1} v_n(x) +2||u_n-1||_{L^2(\Omega_N)}^2+o(1). \end{equation} Taking into account \eqref{eq:vanen} and \eqref{eq:vander}, we conclude $$\gamma_N+o(1)=I_N(\psi_n)-\frac 12 I_N'[\psi_n](1-\psi_n)=o(1),$$ a contradiction. \medskip Once vanishing is excluded, there exists a sequence $y^1_n \in \{0\} \times {\mathbb R}^{d-1}$ and $\psi^1 \in 1 + H_0^1(\Omega_N)$, $\psi^1 \neq 1$, such that $$\psi_n(\cdot + y^1_n) - \psi^1 \rightharpoonup 0 \mbox { in } H_0^1(\Omega_N)$$ up to a subsequence.
From the definition of weak convergence we obtain that $$\frac 12 \int_{\Omega_N}|\nabla \psi_n|^2 -c \mathcal{P}(\psi_n)=\frac 12 \int_{\Omega_N}|\nabla \psi^1|^2 -c \mathcal{P}(\psi^1)+$$ $$+\frac 12 \int_{\Omega_N}|\nabla (\psi_n-\psi^1)|^2 -c \mathcal{P}(1+\psi_n-\psi^1)+o(1).$$ Now we notice that the nonlinear term fulfills the following splitting property: \[ \int_{\Omega_N} \left(1-|\psi_n|^2\right)^2 =\int_{\Omega_N} \left(1-|\psi^1|^2\right)^2 +\int_{\Omega_N} \left(1-|1+\psi_n-\psi^1|^2\right)^2 +o(1). \] The proof of the splitting property is standard. As a consequence the action splits as \begin{equation}\label{eq:split} I_N(\psi_n ) = I_N(\psi^1)+ I_N(1+\psi_n-\psi^1)+o(1). \end{equation} Clearly $\psi^1$ is a weak solution of \eqref{eq:GPstrip}. Now if $\psi_n-\psi^1 \rightarrow 0$ in $H^1_0(\Omega_N)$ the proposition is proved. Let us assume the contrary, i.e. that $z_n^1=\psi_n-\psi^1 \rightharpoonup 0$ and $z_n^1 \nrightarrow 0$ in $H^1_0(\Omega_N)$. We aim to prove that there exist a sequence of points $y^2_n$ and $\psi^2 \in 1 + H_0^1(\Omega_N)$, $\psi^2 \neq 1$, such that $1+z_n^1(\cdot +y^2_n) - \psi^2 \rightharpoonup 0$. Let us argue again by contradiction, assuming that the sequence $1+z_n^1$ vanishes, which means, by Lemma \ref{eq:novan}, that \begin{equation*} \int_{\Omega_N} |u_n-u^1|^r+|v_n-v^1|^r \rightarrow 0 \end{equation*} for any $2< r<\infty$ if $d=2$, $2<r<6$ if $d=3,$ where $\psi^1 = u^1 + i v^1$.
We have $$I_N'[\psi_n](1-\psi_n)=\int_{\Omega_N}|\nabla u_n|^2+|\nabla v_n|^2 -2c \int_{\Omega_N} (1-u_n)\partial_{x_1} v_n(x) +$$ $$+\int_{\Omega_N} (1-u_n^2-v_n^2)(u_n(1-u_n)-v_n^2)=o(1),$$ and $$I_N'[\psi^1](1-\psi^1)=\int_{\Omega_N}|\nabla u^1|^2+|\nabla v^1|^2 -2c \int_{\Omega_N} (1-u^1)\partial_{x_1} v^1(x) +$$ $$+\int_{\Omega_N} (1-(u^1)^2-(v^1)^2)(u^1(1-u^1)-(v^1)^2)=0.$$ Using the splitting property \eqref{eq:split} and the fact that $I_N'[\psi_n](1-\psi_n)-I_N'[\psi^1](1-\psi^1)=o(1)$, we get $$\int_{\Omega_N}|\nabla z_n^1|^2 -2c \int_{\Omega_N} Re(z_n^1)\partial_{x_1}Im( z_n^1(x)) +o(1)=$$ $$\underbrace{\int_{\Omega_N} (1-u_n^2-v_n^2)(u_n(u_n-1)+v_n^2)-\int_{\Omega_N} (1-(u^1)^2-(v^1)^2)(u^1((u^1)-1)+(v^1)^2)}_{=I_4}.$$ Now, using the elementary inequality $c x y\leq \frac{c^2}{4}x^2+y^2$ we have \begin{equation}\label{eq:import2} \int_{\Omega_N}|\nabla Re(z_n^1)|^2+ (1-\frac{c^2}{2})\int_{\Omega_N}|\nabla Im(z_n^1)|^2 \leq 2 \int_{\Omega_N}|Re(z_n^1)|^2+ I_4+o(1). \end{equation} Notice that $$(1-u_n^2-v_n^2)(u_n(1-u_n)-v_n^2)=(1-u_n^2-v_n^2)^2+(1-u_n^2-v_n^2)(u_n-1)$$ and that $$(1-u_n^2-v_n^2)^2=4(u_n-1)^2+(u_n-1)^4+v_n^4+ 4(u_n-1)^3+4(u_n-1)v_n^2+2(u_n-1)^2v_n^2.$$ On the other hand $$(1-u_n^2-v_n^2)(u_n-1)=-2(1-u_n)^2+(1-u_n)^3+v_n^2(1-u_n).$$ Therefore, using the weak convergence $z_n^1 \rightharpoonup 0$ together with the vanishing of $1+z_n^1$, we get $$2 \int_{\Omega_N}|Re(z_n^1)|^2+I_4=o(1),$$ and hence we get a contradiction with \eqref{eq:import2}, since $z_n^1 \nrightarrow 0$ in $H^1_0(\Omega_N)$.\\ We have hence proved the existence of a sequence $y_n^2 \in \{0\} \times {\mathbb R}^{d-1}$ and $\psi^2 \in 1 + H_0^1(\Omega_N)$, $\psi^2 \neq 1$, such that $$1+z_n^1(\cdot +y_n^2)- \psi^2 \rightharpoonup 0.$$ Clearly, $|y_n^1 - y_n^2| \to +\infty$, and $\psi^2$ is also a (weak) solution of \eqref{eq:GPstrip}. Now we can iterate the splitting argument, defining $z_n^2=1+z_n^1(\cdot+y_n^2)-\psi^2$. We aim to show that we can have only a finite number of iterative steps.
\\ We claim that $$\inf_{\psi \in \mathcal{N}}||1-\psi||_{H^1_0(\Omega_N)}>0,$$ where $$\mathcal{N}:=\left\{ \psi \in 1 + H^1_0(\Omega_N), \ \psi\neq 1, \ I_N'[\psi](1-\psi)=0 \right\}.$$ This allows us to conclude, thanks to \eqref{eq:spl12}. In order to prove the claim we notice the identity \begin{eqnarray} & \displaystyle \int_{\Omega_N} (1-u^2-v^2)(u(1-u)-v^2)=\int_{\Omega_N} 2(u-1)^2+3(u-1)^3+ \nonumber\\ &+ \displaystyle \int_{\Omega_N} \left( 3 v^2(u-1)+2(u-1)^2v^2+ (u-1)^4+v^4\right), \end{eqnarray} so that, thanks to the inequality $$ -2c \int_{\Omega_N} (1-u)\partial_{x_1} v(x) \geq -\frac{c^2}{2} \int_{\Omega_N} |\nabla v|^2 - 2 \int_{\Omega_N} (u(x)-1)^2, $$ we obtain \begin{eqnarray} &0=I_N'[\psi](1-\psi)\geq \displaystyle \int_{\Omega_N} |\nabla u|^2 +(1-\frac{c^2}{2}) \displaystyle \int_{\Omega_N} |\nabla v|^2 + \label{eq:kfin}\\ & \displaystyle \int_{\Omega_N} \left(3(u-1)^3+3 v^2(u-1)+2(u-1)^2v^2+ (u-1)^4+v^4\right) \nonumber. \end{eqnarray} From \eqref{eq:kfin} and \eqref{sob} we get $$\alpha||\psi -1||_{H^1_0(\Omega_N)}^3+\beta||\psi -1||_{H^1_0(\Omega_N)}^4\geq (1-\frac{c^2}{2})||\psi - 1||_{ H^1_0(\Omega_N)}^2$$ for some $\alpha, \beta>0$, and hence $\inf_{\psi \in \mathcal{N}}||\psi-1||_{ H^1_0(\Omega_N)}>0$.\\ Finally, recall that by \eqref{eq:A bounded}, $I_N(\psi^j )>0$.
Now, up to space translations, we have $$\gamma_N+o(1)=I_N(\psi_n)=\sum_j I_N(\psi^j(\cdot +y_n^j)) +o(1) \geq I_N(\psi^j) +o(1),$$ and hence $I_N(\psi^j)\leq \gamma_N.$ \end{proof} \section{Uniformly bounded energy solutions in approximating domains} In this section we shall prove the following result: \begin{prop} \label{prop-crucial} There exists a subset $E \subset (0,\sqrt{2})$ of full measure satisfying that, for any $c \in E$, there exists a strictly increasing map $k: {\mathbb N} \to {\mathbb N}$ such that: \begin{enumerate} \item There exists a nontrivial finite energy solution $\psi_N$ of the problem: $$ \begin{array}{rcr} i c\partial_{x_1}\psi_N +\Delta \psi_N+\left(1-|\psi_N|^2\right)\psi_N & = & 0 \ \ \text{ on } \Omega_{k(N)}, \\ \psi_N & = & 1 \text{ on } \partial \Omega_{k(N)}. \end{array} $$ \item $\mathcal{E}(\psi_N) \leq M$ for some positive constant $M=M(c)$ independent of $N \in {\mathbb N}$. \item $I^c_{k(N)}(\psi_N) \leq \gamma_{k(N)}(c)$. \item $ind(\psi_N) \leq 1.$ \end{enumerate} \end{prop} One of the key points here is that in (2) the energy is bounded uniformly in $N$. This will be essential later when passing to the limit as $N \to +\infty$. \medskip In the first subsection we give an abstract result, which is essentially well-known but perhaps not in this specific form. We then apply that result to prove Proposition \ref{prop-crucial}. \subsection{Energy and Morse index bounds} Energy bounds on Palais-Smale sequences via monotonicity (the so-called monotonicity trick) are a tool first devised in \cite{struwe} that has since been applied to a wide variety of problems. Here we need to adapt this argument to obtain uniform bounds in $N$, for a subsequence $k(N)$. Moreover, we will also use Morse index bounds for Palais-Smale sequences, in the spirit of \cite{FG, FG2}. For the sake of completeness, we state and give a proof of a general result in this subsection.
\begin{prop} \label{trick} Let $X$ be a Banach space and $A$, $B: X \to {\mathbb R}$ two $C^1$ functionals. Assume that either $A(\psi) \geq 0$ or $B(\psi) \geq 0 $ for all $\psi \in X$. For any $c \in J \subset {\mathbb R}^+_0$, we define $I^c:X \to {\mathbb R}$, $$I^c(\psi) = A(\psi) - c B(\psi).$$ We assume that there are two points $\psi_0, \psi_1$ in $X$ such that, setting $$ \Gamma = \{g \in C([0,1], X),\ g(0) = \psi_0,\ g(1) = \psi_1\},$$ the following strict inequality holds for all $ c \in J$: $$\gamma(c) = \displaystyle \inf_{g \in \Gamma} \max_{t \in [0,1]} I^c (g(t)) > \max \{I^c(\psi_0),I^c(\psi_1)\}.$$ Then the following assertions hold true: \begin{enumerate} \item If $B \geq 0$, $\gamma$ is decreasing. If instead $A \geq 0$, then the map $\sigma(c) = \frac{\gamma(c)}{c}$ is decreasing. As a consequence, both maps $ \gamma, \ \sigma $ are almost everywhere differentiable. \item Let $c \in J$, $c>0$, be a point of differentiability of $\gamma$. Then, there exists a sequence $\{\psi_n\}$ such that \medskip \begin{enumerate} \item $I^c(\psi_n) \to \gamma(c)$, \item $(I^c)'(\psi_n) \to 0$ in $X^*$, and \item $dist (\psi_n, G_n) \to 0$, where $$G_n=\{ \psi \in X:\ B(\psi) \leq \ - \gamma'(c) + 1/n, \ A(\psi) \leq \gamma(c) -\gamma'(c) c + \frac 1 n\}.$$ \end{enumerate} \item Let us define, for any $\delta>0$, the sets \begin{equation} \label{FGH} \begin{array}{c} F_{\delta} = \{\psi \in X: \ |I^c(\psi) - \gamma(c)|< 2\delta \}, \\ \\ G_{\delta}= \{\psi \in X: \ B(\psi) < -\gamma'(c) + \delta, \ A(\psi) < - c^2 \sigma'(c) + \delta \}, \\ \\ H_\delta =\{\psi \in F_\delta:\ dist(\psi, G_\delta) < 2 \delta \}. \end{array} \end{equation} Let us assume that $A$ and $B$ are uniformly $C^{2, \alpha}$ functionals in $H_\delta$ for some $\delta>0$.
Then in (2) we can choose $\psi_n$ satisfying also that: \medskip \begin{enumerate} \item[d)] There exists a sequence $\delta_n<0$, $\delta_n \to 0$, such that $$ \sup \{\dim \, Y: \ Y \subset X \mbox{ subspace such that } (I^c)''(\psi_n)(\phi,\phi)\leq \delta_n \| \phi\|^2 \ \forall \ \phi \in Y \} \leq 1.$$ \end{enumerate} \end{enumerate} \end{prop} \begin{remark} Observe that, in general, there exist (PS) sequences for $I^c$ for any $c \in J$; see for instance \cite{AM, willem}. The above proposition shows that, for almost all values $c \in J$, there exist (PS) sequences for $I^c$ that also satisfy condition c). This extra condition c) can be useful in order to show convergence of the (PS) sequence. For instance, if either $A$ or $B$ is coercive, Proposition \ref{trick} implies the existence of bounded (PS) sequences, which is an important piece of information in order to derive convergence. This is the result of \cite{jeanjean}. Assertion (3) comes from \cite{FG} and also gives a Morse index bound for the (PS) sequence. The only novelty is that we have assumed uniform $C^{2,\alpha}$ regularity only on the set $H_\delta$. If $A$ or $B$ is coercive, it suffices to have uniform $C^{2,\alpha}$ estimates on bounded sets. \end{remark} \begin{remark} To keep the ideas clear, we have stated the result under a mountain-pass geometric assumption. The same principle holds for other types of min-max arguments. What is essential is that the family $\Gamma$ does not depend on the parameter $c$. \end{remark} \begin{proof} The proof of (1) is immediate. Indeed, if $B \geq 0$, then $I^c(\psi)$ is decreasing in $c$ for every fixed $\psi$. Since the family $\Gamma$ is independent of $c$, we conclude that $\gamma$ is decreasing. If instead $A \geq 0$, then the expression $\frac{I^c(\psi)}{c}$ is decreasing in $c$, and we conclude as before. In either case, the maps $\gamma$, $\sigma$ are differentiable in a set $E \subset J$ of full measure. In order to prove (2), we are largely inspired by \cite{jeanjean}.
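For later reference, observe that the two forms of the bound on $A$ appearing in $G_n$ and in \eqref{FGH} agree: since $\sigma(c) = \gamma(c)/c$, at any point of differentiability
\[
\sigma'(c) = \frac{c\,\gamma'(c) - \gamma(c)}{c^2}, \qquad \mbox{and hence} \qquad -c^2 \sigma'(c) = \gamma(c) - c\,\gamma'(c).
\]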
We first state and prove the following lemma: \begin{lem} Let $c \in E$, $c>0$. Then there exists $g_n \in \Gamma$ such that \begin{enumerate} \item $\max_{t \in [0,1]} I^c (g_n(t)) \to \gamma(c)$. \item There exists $\rho_n >0$, $\rho_n \to 0$, such that for all $t \in [0,1]$ with $I^c(g_n(t)) \geq \gamma(c) - \frac 1 n$ we have: $$ B(g_n(t)) \leq -\gamma'(c) + \rho_n, \ \ A(g_n(t)) \leq - c^2 \sigma'(c) + \rho_n.$$ \end{enumerate} \end{lem} \begin{proof}[Proof of the lemma] Take $c_n\in J$ an increasing sequence converging to $c$. For any $n \in {\mathbb N}$, there exists $g_n \in \Gamma$ such that $\max_{t \in [0,1]} I^{c_n}(g_n(t)) \leq \gamma (c_n) + |c_n-c|^2$. If $B \geq 0$ we have that: $$ \max_{t \in [0,1]} I^{c}(g_n(t)) \leq \max_{t \in [0,1]} I^{c_n}(g_n(t)) \leq \gamma (c_n) + |c_n-c|^2 \to \gamma(c).$$ If instead $A \geq 0$, $$ \max_{t \in [0,1]} I^{c}(g_n(t)) \leq \frac{c}{c_n} \max_{t \in [0,1]} I^{c_n}(g_n(t)) \leq \frac{c}{c_n} (\gamma (c_n) + |c_n-c|^2) \to \gamma(c).$$ We now take $t \in [0,1]$ such that $I^c(g_n(t)) \geq \gamma(c) - |c-c_n|^2$. Then: $$ B(g_n(t)) = \frac{I^{c_n}(g_n(t)) - I^{c}(g_n(t)) }{c-c_n} \leq \frac{ \gamma(c_n) + |c_n-c|^2 - \gamma(c) + |c_n-c|^2 }{c-c_n} \to -\gamma'(c),$$ where the bound is independent of $t$. Moreover, by the first part of the proof $I^c(g_n(t)) \leq \gamma(c) + o(1)$ uniformly in $t$, so that $$ A(g_n(t)) = I^c (g_n(t)) + c B(g_n(t)) \leq \gamma(c) - c \gamma'(c) + o(1) = -c^2 \sigma'(c) + o(1),$$ again uniformly in $t$. It suffices then to take $c_n = c- \frac{1}{\sqrt{n}}.$ \end{proof} Recall now the definitions of $F_\delta$, $G_\delta$ and $H_\delta$ given in \eqref{FGH}. By the previous lemma the set $F_{\delta} \cap G_\delta$ is not empty: indeed, the curves $g_n$ pass through $F_{\delta} \cap G_\delta$ for sufficiently large $n$. Proposition \ref{trick}, (2), is proved if we show that for any $\delta>0$, $$\inf \{ \| (I^c)'(\psi)\|: \ \psi \in H_{\delta} \}=0.$$ We argue by contradiction and assume that there exists $\delta>0$ such that $\inf \{ \| (I^c)'(\psi)\|: \ \psi \in H_{\delta} \} \geq \delta >0$.
A classical deformation argument shows that there exist $\varepsilon>0$ and $\eta \in C([0,1] \times X,\ X)$ such that: \begin{enumerate} \item[i)] $\eta(s, \psi) = \psi$ if $s=0$, $ |I^c(\psi)- \gamma(c)| > 2\varepsilon$ or $dist (\psi, G_\delta) >2 \delta$. \item[ii)] $I^c(\eta(1, \psi)) \leq \gamma(c) - \varepsilon$ for all $\psi \in G_\delta$ with $I^c(\psi) \leq \gamma(c) + \varepsilon$. \item[iii)] $\eta(s, \cdot)$ is a homeomorphism of $X$. \item[iv)] $\|\eta(s,\psi) - \psi \| < \delta$ for all $s \in [0,1]$, $\psi \in X$. \item[v)] $I^c(\eta(s, \psi)) \leq I^c(\psi)$ for all $\psi \in X$. \end{enumerate} The existence of the above deformation can be found in \cite[Lemma 2.3]{willem}, for instance. Actually our notation is compatible with that reference, setting $S= G_\delta$ and taking $\varepsilon = \delta^2/8$, for instance. \bigskip We now take $n$ large enough and the curve $g_n$ given by the lemma. If $I^c(g_n(t)) < \gamma(c) - \frac 1 n$, by v) we have that $I^c(\eta (1, g_n(t))) < \gamma(c) - \frac 1 n$. If, on the contrary, $I^c(g_n(t)) \geq \gamma(c) - \frac 1 n$, we can combine the lemma with ii) to conclude that $I^c(\eta(1, g_n(t))) \leq \gamma(c) - \varepsilon$. As a consequence, $$ \max_t I^c(\eta \circ g_n(t)) < \gamma(c),$$ a contradiction. \medskip For the proof of (3) of Proposition \ref{trick}, we just apply Theorem 1.7 of \cite{FG} to our sequence of paths $g_n$. It is important to observe that the uniform $C^{2,\alpha}$ regularity in \cite{FG} is required only in the set $H_\delta$ defined above (see, on that purpose, Lemma 3.7 of \cite{FG}). \end{proof} \subsection{Proof of Proposition \ref{prop-crucial}} A direct application of the above results to our setting, combined with Proposition \ref{prop:12}, yields the existence of finite energy solutions in any domain $\Omega_N$, for almost all values of $c$. The problem here is that the energy of those solutions could diverge as $N \to +\infty$.
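Indeed, for each fixed $N$, Proposition \ref{trick} produces, for almost every speed, a solution $\psi_N$ satisfying the bound
\[
\mathcal{E}(\psi_N) \leq -c^2 \sigma_N'(c),
\]
but both the exceptional null set of speeds and the quantity $\sigma_N'(c)$ depend on $N$, and there is no a priori reason why $\sigma_N'(c)$ should remain bounded as $N \to +\infty$ for a fixed speed $c$.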
In order to obtain uniform bounds independent of the parameter $N$, we need a more subtle application of Proposition \ref{trick}. \medskip Define: \begin{enumerate} \item $X= 1+ H_0^1(\Omega_N)$, which is an affine Banach space, for which Proposition \ref{trick} also holds; \item $A(\psi)= \mathcal{E}(\psi)$, the energy, which is positive and coercive; \item $B(\psi)= \mathcal{P}(\psi)$, the momentum; \item $J= (c_0, \sqrt{2})$ for a fixed value $c_0>0$. \end{enumerate} For $N\geq N_0$ the functional $I^c_N$ has a min-max geometry (see Proposition \ref{min-max}); recall that $\gamma_N(c)>0$ is the function that associates to a speed $c \in J$ the min-max value of $I^c_N$. Clearly, $\sigma_N(c)= \frac{\gamma_N(c)}{c}$ is decreasing in $c$, as Proposition \ref{trick} shows. By Proposition \ref{trick}, there exists a bounded (PS) sequence in $1+H^1_0(\Omega_N)$ at level $\gamma_N(c)$. Proposition \ref{prop:12} then yields the existence of a solution $\psi_N$ with: $$I_N^c(\psi_N) \leq \gamma_N(c), \ \mathcal{E}(\psi_N)=A(\psi_N) \leq -c^2 \sigma_N'(c).$$ Since $A$ is coercive, here the set $H_\delta$ is uniformly bounded, and $I_N^c$ is clearly uniformly $C^{2,\alpha}$ on bounded sets. By Proposition \ref{trick}, (3), we have that: $$ \sup \{\dim \, Y: \ Y \subset H_0^1(\Omega_N) \mbox{ subspace such that } (I_N^c)''(\psi_N)(\phi,\phi) < 0 \ \forall \ \phi \in Y \} \leq 1.$$ We are now concerned with passing to the limit as $N \to +\infty$. In order to control the energy of the solutions $\psi_N$, we reason as follows. Recall Proposition \ref{min-max}, b), and that $\sigma_N(c)$ is decreasing in $c$; then, for $N\geq N_0$, \begin{equation} \label{prima} \frac{\chi(c_0)}{c_0} \geq \frac{\gamma_N(c_0)}{c_0} \geq \frac{\gamma_N(c_0)}{c_0} -\frac{\gamma_N(c)}{c} \geq \int_{c_0}^c |\sigma_N'(s)| \, ds .\end{equation} Let us now define the sets $$D_{N,M}= \{c \in (c_0, \sqrt{2}): \sigma_N \mbox{ is not differentiable at } c, \mbox{ or } |\sigma_N'(c)|> M \},$$ for all $N$, $M \in {\mathbb N}$, $N \geq N_0$.
Clearly the sets $D_{N,M}$ also depend on $c_0$, but we do not make that dependence explicit in the notation, for the sake of clarity. Since $\sigma_N$ is monotone, it is differentiable almost everywhere; hence, by \eqref{prima} and Chebyshev's inequality, $$|D_{N,M}| \leq \frac{\chi(c_0)}{c_0 M}.$$ The following claim is the key to passing to the limit along enlarging slabs while keeping the energy bounded. \medskip {\bf Claim:} The set $D(c_0)$ defined as: $$ D(c_0) = \displaystyle \cap_{M \in {\mathbb N}} \cup_{N \geq N_0} \cap_{k \geq N} D_{k,M}$$ has $0$ measure. \medskip Indeed, the sets $\cap_{k \geq N} D_{k,M}$ are increasing in $N$, and all of them have measure smaller than $\frac{\chi(c_0)}{c_0 M}$. Hence the same estimate also holds for their union in $N$. Now, $D(c_0)$ is an intersection of sets of measure at most $\frac{\chi(c_0)}{c_0 M}$, $M \in {\mathbb N}$, so that $D(c_0)$ has $0$ measure. \medskip Finally, we can set $$D= \cup_{n=1}^{+\infty} \, D(1/n),$$ which also has $0$ measure, being a countable union of null sets. \bigskip Let us define $E= (0,\sqrt{2}) \setminus D$, and take $c \in E$. We can fix $n \in {\mathbb N}$ such that $c_0=1/n< c$ and $c \notin D(c_0)$. Then, there exist $M(c)$ and a subsequence $k(N)$ such that $|\sigma_{k(N)}'(c)| \leq M(c)$. By Proposition \ref{trick}, for any of these slabs $\Omega_{k(N)}$ there exists a Palais-Smale sequence with bounded energy. According to Proposition \ref{prop:12}, this gives rise to a solution $\psi_{k(N)} \in 1+ H_0^1(\Omega_{k(N)})$ such that: $$I_{k(N)}^c(\psi_{k(N)}) \leq \gamma_{k(N)}(c), \ \mathcal{E}(\psi_{k(N)} ) \leq M(c) c^2.$$ This concludes the proof of Proposition \ref{prop-crucial}. \section{Proof of Theorem \ref{teo:almost}} In view of Proposition \ref{prop-crucial}, we aim to conclude the proof of Theorem \ref{teo:almost} by passing to the limit. This is indeed possible thanks to Lemma \ref{lem:bound}.
However, we need to face two difficulties: vanishing of the solutions (that is, the limit solution is trivial) and concentration near the boundary (that is, the limit solution is defined in a half-space). The purpose of this section is to exclude both scenarios. The next result deals with the question of vanishing and is actually a version of Proposition 2.4 of \cite{bgs-cmp} adapted to problem \eqref{eq:GPstrip}. \begin{prop}\label{lem:lemmacrucslab} Let $\psi$ be a nontrivial finite energy solution of \eqref{eq:GPstrip} with $0<c<\sqrt{2}$. Then $$\|1-|\psi|\|_{L^{\infty}(\Omega_N)}\geq \frac{2}{5}\left(1-\frac{c}{\sqrt{2}}\right).$$ \end{prop} The proof is essentially the same as in \cite{bgs-cmp}, with one difference: when integrating by parts, the authors use the decay estimates of the solutions to avoid contributions from infinity, and those estimates are available only in the case of the whole Euclidean space. Here we use integrability bounds instead. In our argument we will use liftings of the solutions, that is, we write $\psi = \rho e^{i \theta}$. The existence of a lifting is guaranteed, for instance, whenever $|\psi(x)|\neq 0$ for all $x$. \subsection{Liftings for solutions in $\Omega_{N}$ without vortices} We consider here solutions without vortices, i.e. solutions that do not vanish. The energy density is given by the formula \[ e(\rho, \theta)=\frac 12 \left(|\nabla \rho|^2 + |\nabla \theta|^2\rho^2 \right)+\frac 14 \left(1-\rho^2\right)^2 \] and the associated energy is \[ \mathcal{E}(\rho, \theta):=\int_{\Omega_N} e(\rho, \theta). \] Since $\psi=\rho e^{i \theta}$ is a solution of \eqref{eq:GPellip}, the functions $\rho, \theta$ fulfill the following system of equations \begin{equation}\label{eq:GPstripvarrp} \left\{ \begin{aligned} &\frac{c}{2}\partial_{x_1} \rho^2+\nabla \cdot(\rho^2\nabla \theta)=0,\\ & c \rho \partial_{x_1}\theta-\Delta \rho- \rho(1-\rho^2)+\rho|\nabla \theta|^2=0. \end{aligned}\right.
\end{equation} The following pointwise inequality (Lemma 2.3 in \cite{bgs-cmp}), \begin{equation}\label{eq:pointcruc} \left | (\rho^2-1)\partial_{x_1}\theta \right|\leq\frac{\sqrt{2}}{\rho} e(\rho, \theta), \end{equation} which holds for any $C^1$ function that can be written as $\psi=\rho e^{i \theta}$ (not necessarily a solution), is crucial in the sequel. \begin{lem}\label{prop:rhotheta} Let $\psi$ be a vortexless finite energy solution in $\Omega_N$. Then $1-\rho$ and $\theta$ belong to $H^1_0(\Omega_N)$. \end{lem} \begin{proof} Let us notice that for a vortexless finite energy solution \begin{equation} \label{energy-lifting} \mathcal{E}(\psi)=\int_{\Omega_N } e(\rho, \theta) \geq \frac 12 \int_{\Omega_N } \left( |\nabla \rho|^2 + \rho^2 |\nabla \theta|^2 \right), \end{equation} which is finite and implies, by means of the Poincar\'{e} inequality, that $\rho-1\in H^1_0(\Omega_N)$. Since $\nabla \rho$ is bounded in $L^{\infty}$ (by Lemma \ref{lem:bound}), one concludes that $\rho(x) \to 1$ uniformly as $|x| \to +\infty$. Hence, $\rho$ being positive and continuous, $\rho(x) \geq \rho_0 >0$ for all $x \in \Omega_N$. Again by \eqref{energy-lifting}, $\nabla \theta \in L^2(\Omega_N)$. To conclude the proof we shall prove that $\theta=0$ on $\partial \Omega_N$, which allows us to use the Poincar\'{e} inequality. \medskip Since $\psi=1$ on $\partial \Omega_N$, $\theta$ is a constant integer multiple of $2\pi$ on each component of the boundary, and we may normalize $\theta=0$ on $\{x_1=-N\}$. Assume, by contradiction, that $\theta=2 \pi$ on $\{x_1=N\}$ (the case $\theta = 2k\pi$ with $k \neq 0$ is analogous). Writing $u = \rho \cos \theta$ for the real part of $\psi$, for any $\tilde{x} \in {\mathbb R}^{d-1}$ there exists $y \in (-N,N)$ such that $u(y, \tilde x)=0$, since $\theta(\cdot, \tilde x)$ attains the value $\pi/2$. We get $$1=| u(y, \tilde x)-u(-N, \tilde x)|^2=\left| \int_{-N}^{y} \partial_{x_1} u(s,\tilde x)\, ds \right|^2\leq 2N \int_{-N}^{N} \left| \partial_{x_1} u(s,\tilde x)\right|^2 ds.$$ By Fubini we get $$ \int_{\Omega_N} \left| \partial_{x_1} u\right|^2 =\int_{{\mathbb R}^{d-1}}\left( \int_{-N}^{N} \left| \partial_{x_1} u(s,\tilde x)\right|^2 ds\right) d\tilde x \geq \int_{{\mathbb R}^{d-1}} \frac{1}{2N}\, d\tilde x= + \infty,$$ which implies that the energy is infinite, a contradiction. \end{proof} From \eqref{eq:GPstripvarrp} we derive three useful identities that are important in the sequel.
These identities have been established in \cite[Lemmas 2.8, 2.10]{bgs-cmp} for solutions in the whole Euclidean space; here we adapt the arguments to the problem in the domain $\Omega_N$. \begin{lem}\label{prop:stimmoment} Let $\psi$ be a vortexless finite energy solution of \eqref{eq:GPstrip}. Then: \begin{equation}\label{eq:MOwv} \mathcal{P}(\psi)=\frac 12 \int_{\Omega_N}(1-\rho^2)\partial_{x_1}\theta. \end{equation} \begin{equation}\label{eq:MOwv2} c\, \mathcal{P}(\psi)= \int_{\Omega_N}\rho^2|\nabla \theta|^2. \end{equation} \begin{equation}\label{eq:MOwv3} \int_{\Omega_N} \left(2 \rho|\nabla \rho|^2+\rho(1-\rho^2)^2\right)= c \int_{\Omega_N} \rho(1-\rho^2)\partial_{x_1}\theta+ \int_{\Omega_N} \rho(1-\rho^2)| \nabla \theta|^2. \end{equation} \end{lem} \begin{proof} A straightforward computation gives $$\mathcal{P}(\psi)=\frac 12 \int_{\Omega_N}\partial_{x_1}(\rho \sin \theta)-\rho^2\partial_{x_1}\theta=\frac 12 \int_{\Omega_N}\partial_{x_1}(\rho \sin \theta -\theta)+(1-\rho^2)\partial_{x_1}\theta.$$ We will prove that $\int_{\Omega_N}\partial_{x_1}(\rho \sin \theta -\theta)=0$, which yields \eqref{eq:MOwv}. Thanks to Lemma \ref{prop:rhotheta}, $\psi_1=(1-\rho^2)\partial_{x_1}\theta$ is integrable in $\Omega_N$, and hence $\psi_2=\partial_{x_1}(\rho \sin \theta -\theta)$ is integrable as well.
By integration by parts together with Lemma \ref{prop:rhotheta} we get $$\int_{\Omega_{N}} \psi_2 =\int_{\partial \Omega_{N} }\left(\rho \sin \theta -\theta\right)\eta_1 =0.$$ To get \eqref{eq:MOwv2} we multiply the first equation of \eqref{eq:GPstripvarrp} by $\theta$ and we integrate in $\Omega_{N,M}$, defined as $$ \Omega_{N,M}=\left\{x\in {\mathbb R}^d, \ \ \ -N<x_1<N, \ |x_j|<M, \ \ 2 \leq j \leq d \right\} \subset \Omega_N.$$ Integrating by parts we obtain: $$\frac{c}{2}\int_{\Omega_{N,M}} (1-\rho^2)\partial_{x_1}\theta-\int_{\Omega_{N,M}} \rho^2|\nabla \theta|^2 =\int_{\partial \Omega_{N,M}} \theta \left ( \frac{c}{2} (1-\rho^2) \eta_1- \rho^2 \nabla \theta \cdot \eta \right ).$$ Observe that by Lemma \ref{prop:rhotheta} all functions involved in the expression above belong to $L^1(\Omega_N)$, and recall that $\theta=0$ on $\partial \Omega_N$. Then, there exists a sequence $M_n \to +\infty$ such that $$\lim_{n \rightarrow \infty} \int_{\partial \Omega_{N,M_n}} \theta \rho^2 \nabla \theta \cdot \eta=0.$$ This proves \eqref{eq:MOwv2}. Multiplying the second equation of \eqref{eq:GPstripvarrp} by $\rho^2-1$ and integrating over $\Omega_{N,M}$ by parts, we obtain $$\int_{\Omega_{N,M}} \left(2 \rho|\nabla \rho|^2+\rho(1-\rho^2)^2\right)+\int_{\partial \Omega_{N,M}} (1-\rho^2)\nabla \rho \cdot \eta = $$ $$=c \int_{\Omega_{N,M}} \rho(1-\rho^2)\partial_{x_1}\theta + \int_{\Omega_{N,M}} \rho(1-\rho^2)| \nabla \theta|^2.$$ Again by Lemma \ref{prop:rhotheta}, all functions involved in the above expression belong to $L^1(\Omega_N)$. Hence we can find a sequence $M_n \to +\infty$ such that $$\lim_{n \rightarrow \infty} \int_{\partial \Omega_{N,M_n}}(1-\rho^2)\nabla \rho \cdot \eta=0.$$ Passing to the limit, this proves \eqref{eq:MOwv3}. \end{proof} \subsection{Proof of Proposition \ref{lem:lemmacrucslab}} Let us call $\delta=||1-|\psi|||_{L^{\infty}(\Omega_N)}$. If $\delta\geq\frac 12>\frac{2}{5}\left(1-\frac{c}{\sqrt{2}}\right)$ there is nothing to prove.
Let us suppose hence that $\delta<\frac 12$, which implies that $\rho(x) \geq 1 - \delta >\frac 12 $ for any $x \in \Omega_N$. In particular $\psi$ admits a lifting $\psi = \rho e^{i \theta}$. We notice that $$4(1-\delta)\left(\int_{\Omega_N }\frac 12 |\nabla \rho|^2 +\frac 14 \left(1-\rho^2\right)^2\right)\leq\int_{\Omega_N } 2 \rho |\nabla \rho|^2 + \rho \left(1-\rho^2\right)^2,$$ and thanks to \eqref{eq:MOwv3} we get \begin{equation}\label{eq:stimenerg} \int_{\Omega_N}e(\rho, \theta)\leq \frac{1}{4(1-\delta)} \int_{\Omega_N} \rho(1-\rho^2) \left( c \partial_{x_1}\theta + | \nabla \theta|^2 \right)+\frac 12 \int_{\Omega_N}\rho^2|\nabla\theta|^2. \end{equation} The strategy is to estimate the right-hand side of \eqref{eq:stimenerg} using the pointwise bound \eqref{eq:pointcruc}. Thanks to \eqref{eq:MOwv2} and \eqref{eq:pointcruc}, and using $\rho \geq 1-\delta$, we have $$ \frac{c}{4(1-\delta)} \int_{\Omega_N} \rho(1-\rho^2)\partial_{x_1}\theta+\frac 12 \int_{\Omega_N}\rho^2|\nabla \theta|^2\leq\left(\frac{\sqrt{2}c}{4(1-\delta)}+\frac{\sqrt {2}c}{4(1-\delta)} \right)\int_{\Omega_N} e(\rho, \theta),$$ and hence $$ \frac{c}{4(1-\delta)} \int_{\Omega_N} \rho(1-\rho^2)\partial_{x_1}\theta +\frac 12 \int_{\Omega_N}\rho^2|\nabla \theta|^2\leq \frac{c}{\sqrt{2}(1-\delta)}\int_{\Omega_N} e(\rho, \theta).$$ Now we claim that \begin{equation}\label{eq:stimaimazz} \left|\int_{\Omega_N}\rho(1-\rho^2)|\nabla \theta|^2 \right|\leq 6 \delta\int_{\Omega_N} e(\rho, \theta), \end{equation} so that we obtain $$\int_{\Omega_N} e(\rho, \theta) \leq \left(\frac{c}{\sqrt{2}(1-\delta)}+\frac{3\delta}{2(1-\delta)}\right)\int_{\Omega_N} e(\rho, \theta).$$ Since $\psi$ is nontrivial, $\int_{\Omega_N} e(\rho, \theta)>0$, and hence the coefficient on the right-hand side must be at least $1$; an elementary computation shows that $$\frac{c}{\sqrt{2}(1-\delta)}+\frac{3\delta}{2(1-\delta)}\geq 1 \iff \delta \geq \frac{2}{5}\left(1-\frac{c}{\sqrt{2}}\right),$$ which concludes the proof. Now we prove claim \eqref{eq:stimaimazz}.
Notice that $$\left|\int_{\Omega_N}\rho(1-\rho^2)|\nabla \theta|^2 \right| \leq \delta \int_{\Omega_N}\rho(1+\rho)|\nabla \theta|^2.$$ Now, $\rho(1+\rho)\leq 3 \rho^2$ if $\rho\geq \frac 12$, so that, thanks to \eqref{eq:MOwv2} and \eqref{eq:pointcruc}, $$\left|\int_{\Omega_N}\rho(1-\rho^2)|\nabla \theta|^2 \right| \leq 3 \delta \int_{\Omega_N}\rho^2|\nabla \theta|^2 \leq \frac{3 \delta c}{2}\int_{\Omega_N} (1-\rho^2)\partial_{x_1}\theta \leq 3 \sqrt{2} \delta c\int_{\Omega_N} e(\rho, \theta).$$ The proof of the claim ends by noticing that $0<c<\sqrt{2}$, so that $3\sqrt{2}\delta c \leq 6 \delta$. \subsection{Conclusion of the proof of Theorem \ref{teo:almost}} Take $c \in E$, $c_0 \in (0,c)$ and $N_0$ given by Proposition \ref{min-max}. By Proposition \ref{lem:lemmacrucslab}, there exists $\xi_N \in \Omega_{k(N)}$ such that $|\psi_{k(N)}(\xi_N)-1| \nrightarrow 0 $, where $\psi_{k(N)}$ are the solutions given by Proposition \ref{prop-crucial}. By the uniform bounds of Lemma \ref{lem:bound}, we can use the Ascoli-Arzel\`{a} Theorem to obtain in the limit ($C^k$ locally) a nontrivial solution $\psi_c$ of the Gross-Pitaevskii equation. By Fatou's lemma, $\mathcal{E}(\psi_c) $ is finite. Moreover, by Remark \ref{remark morse}, $ind(\psi_c) \leq 1$. Finally, by Fatou's lemma and \eqref{eq:A bounded}, \begin{align*} I(\psi_c)= \frac{1}{d-1} \sum_{j=2}^d \int |\partial_{x_j} \psi_c |^2 \leq \frac{1}{d-1} \liminf_{N \to +\infty} \sum_{j=2}^d \int |\partial_{x_j} \psi_{k(N)} |^2 \\ = \liminf_{N \to +\infty} I_{k(N)} (\psi_{k(N)} ) \leq \liminf_{N \to +\infty} \gamma_{k(N)}(c) \leq \chi(c_0). \end{align*} \medskip If $d (\xi_N, \partial \Omega_{k(N)}) $ is bounded, up to a subsequence the limit solution $\psi_c$ is defined in a half-space $\{x \in {\mathbb R}^d: \ x_1 > - m\}$ or $\{x \in {\mathbb R}^d: \ x_1 < m\}$, for some $m>0$, and $\psi_c=1$ on its boundary. Instead, if $d (\xi_N, \partial \Omega_{k(N)}) $ is unbounded, the solution $\psi_c$ is defined in the whole Euclidean space ${\mathbb R}^d$.
In the next proposition we rule out the first possibility, and this concludes the proof of Theorem \ref{teo:almost}. \begin{prop} \label{prop:half} Let $\psi$ be a finite energy solution of the problem: \begin{equation}\label{eq:half} \begin{array}{rcl}i c\partial_{x_1}\psi +\Delta \psi+\left(1-|\psi|^2\right)\psi & = & 0 \ \ \text{ on } {\mathbb R}^d_+, \\ \psi & = & 1 \ \text{ on } \partial {\mathbb R}^d_+, \end{array} \end{equation} where ${\mathbb R}^d_+ = \{x \in {\mathbb R}^d:\ x_1>0\}.$ Then $\psi=1$. \end{prop} \begin{proof} The proof follows well-known ideas that date back to \cite{el}. If $\psi$ is a finite energy solution, then $\nabla \psi$ and $1-|\psi|^2$ are functions in $L^2({\mathbb R}^d_+)$. Since $\psi$ is in $L^\infty({\mathbb R}^d_+)$ and is a strong solution, standard regularity results allow us to conclude that $D^2 \psi$ belongs to $L^2({\mathbb R}^d_+)$. Hence we can multiply equation \eqref{eq:half} by $\partial_{x_1}\psi$ and integrate by parts, obtaining: $$ c \int_{{\mathbb R}^d_+} \langle (i \partial_{x_1}\psi) , \partial_{x_1}\psi \rangle =0;$$ $$\int_{{\mathbb R}^d_+} \langle \Delta \psi, \partial_{x_1}\psi \rangle = \int_{\partial {\mathbb R}^d_+} \langle (\nabla \psi \cdot \nu ), \partial_{x_1} \psi \rangle - \int_{{\mathbb R}^d_+} \frac{1}{2} \partial_{x_1} \left ( |\nabla \psi|^2 \right ) $$ $$= - \int_{\partial {\mathbb R}^d_+} | \partial_{x_1} \psi|^2 + \frac{1}{2} \int_{\partial {\mathbb R}^d_+} |\partial_{x_1} \psi|^2= -\frac{1}{2} \int_{\partial {\mathbb R}^d_+} |\partial_{x_1} \psi|^2 $$ (here we have used that $\nu = -e_1$ and that the tangential derivatives of $\psi$ vanish on $\partial {\mathbb R}^d_+$, so that $|\nabla \psi| = |\partial_{x_1} \psi|$ there); $$ \int_{{\mathbb R}^d_+} \left(1-|\psi|^2\right)\langle \psi, \partial_{x_1}\psi \rangle = - \frac 1 4 \int_{{\mathbb R}^d_+} \partial_{x_1} \left ( (1- |\psi|^2)^2 \right )=0. $$ These computations imply that: $$\int_{\partial {\mathbb R}^d_+} |\partial_{x_1} \psi|^2 =0.$$ In other words, $\partial_{x_1} \psi=0$ on $\partial {\mathbb R}^d_+$. Since moreover $\psi=1$ on $\partial {\mathbb R}^d_+$, by unique continuation we conclude that $\psi=1$.
\end{proof} \begin{remark} \label{remark halfspace} The proof of Proposition \ref{prop:half} breaks down if we consider the half-space $\{ x \in {\mathbb R}^d:\ x_j >0 \}$ with $j>1$. The reason is that we do not know whether the term $$ \int_{{\mathbb R}^d_+} \langle (i \partial_{x_1}\psi) , \partial_{x_j}\psi \rangle$$ vanishes. This is one of the reasons why we choose slabs as approximating domains, instead of expanding balls, for instance (a choice that would have had advantages from the point of view of compactness). The second reason is that in $\Omega_N$ the Pohozaev-type identity given in Lemma \ref{lem:poho2} does not involve boundary terms. \end{remark} \section{Proof of Theorem \ref{teo:3}} In this section we prove the compactness criterion given in Theorem \ref{teo:3}. We start with the following result, which is independent of the dimension: \begin{prop} \label{ascoli} Let $d=2$ or $3$, $c_n \to c$, $c_n \in E$, where the set $E$ is given by Theorem \ref{teo:almost}. Let $\psi_n$ be the sequence of solutions provided by that theorem. Then there exists $\xi_n \in {\mathbb R}^d$ such that $\psi_n(\cdot - \xi_n)$ converges locally in $C^k$ (up to a subsequence) to a nontrivial solution $\psi_0$ of \eqref{eq:GPellip}. \end{prop} \begin{proof} Let $\psi$ be a finite energy solution of \eqref{eq:GPellip} with $0<c<\sqrt{2}$. Then there exists $\varepsilon= \varepsilon(c) >0$ such that \begin{equation} \label{puff} \|1-|\psi|\|_{L^{\infty}({\mathbb R}^d)}\geq \varepsilon. \end{equation} Statement \eqref{puff} is just Proposition 2.4 of \cite{bgs-cmp}. Compare it with Proposition \ref{lem:lemmacrucslab}, which is nothing but its version for problem \eqref{eq:GPstrip} (with a slight change of the constants). Since $c_n \to c < \sqrt{2}$, the constants $\varepsilon(c_n)$ can be taken bounded away from $0$; hence there exist $\xi_n$ such that $ | 1- |\psi_n(\xi_n)||> \varepsilon$ for some fixed $\varepsilon>0$.
By Lemma \ref{lem:bound} we can use the Ascoli-Arzel\`{a} Theorem to obtain that $\psi_n(\cdot - \xi_n)$ converges locally in $C^k$ to a nontrivial solution $\psi_0$ of \eqref{eq:GPellip}. \end{proof} The main difficulty in concluding the proof of Theorem \ref{teo:3} or \ref{teo:all} is to ensure that $\psi_0$ has finite energy. Let us point out that the boundedness of the energy cannot be deduced only from the Pohozaev identities given in Lemmas \ref{lem:poho}, \ref{lem:poho2}. Observe that since $I_{c_n}(\psi_n )$ is bounded, Lemma \ref{lem:poho2} implies that $$ \sum_{j=2}^3 \int_{{\mathbb R}^3} |\partial_{x_j} \psi_n|^2 = O(1).$$ The idea of the proof is to relate the behavior of $\psi_n$ to that of the 1-D solutions of the Gross-Pitaevskii equation. The next proposition is a first step in this direction (see also Remark \ref{rrr}). \begin{prop} Let $\psi_n$ be solutions of \eqref{eq:GPellip} for $c_n$, $c_n \to c$, such that $I_{c_n}(\psi_n) \leq C$. Then, $$ \int_{{\mathbb R}^3} |\nabla g_n|^2 + \int_{{\mathbb R}^3} |\nabla h_n|^2 =O(1),$$ where \begin{equation} \label{g} g_n= (\partial_{x_1} u_n) v_n - (\partial_{x_1} v_n) u_n - \frac{c_n}{2} (\rho_n^2-1), \end{equation} \begin{equation} \label{h} h_n= \frac 1 2 |\partial_{x_1} \psi_n|^2 - \frac 1 4 (1-\rho_n^2)^2. \end{equation} \end{prop} \begin{remark} \label{rrr} The two quantities defined above correspond to the invariants of the 1-D Gross-Pitaevskii equation. Indeed $h$ represents its Hamiltonian, whereas $g$ is another invariant given by the fact that the problem, after a change of variables, is radially symmetric (see equations \eqref{eq:GP1D}, \eqref{eq:GP1Dbis}). On this aspect, see for instance \cite{bgs-survey}, pages 3-4. \end{remark} \begin{proof} For the sake of clarity we drop the subscript $n$ in the proof of this proposition. We first consider the function $g$, which is an $L^2$ function, but whose $L^2$ norm is not under control.
Observe that equation \eqref{eq:GPellip} implies that $\nabla \cdot G=0$, where \begin{equation} \label{defG} G=(g, u_{x_2} v - v_{x_2} u, u_{x_3} v - v_{x_3} u). \end{equation} Straightforward computations give: $$ curl \, G =\left( \begin{array}{c} \displaystyle 2 u_{x_3} v_{x_2} - 2 v_{x_3} u_{x_2} \\ \displaystyle 2 v_{x_3} u_{x_1} - 2 u_{x_3} v_{x_1} - \frac{c}{2} (\rho^2-1)_{x_3} \\ \displaystyle \displaystyle - 2 v_{x_2} u_{x_1} + 2 u_{x_2} v_{x_1} + \frac{c}{2} (\rho^2-1)_{x_2} \end{array} \right).$$ Observe that by \eqref{eq:A bounded}, the derivatives with respect to $x_2$, $x_3$ are uniformly bounded (with respect to $n$) in $L^2$. Moreover, all factors involved are bounded in $L^\infty$ by Lemma \ref{lem:bound}. As a consequence, $curl G$ is uniformly bounded in $L^2$. Observe now that: $$ G= curl (-\Delta^{-1} curl \, G ),$$ where $\Delta^{-1}$ is given by convolution with the Coulomb potential $\frac{1}{4 \pi |x|}$, see for instance \cite[Subsection 2.4.1]{bertozzi}. By using the Fourier Transform and Plancherel, all partial derivatives of $G$ are uniformly bounded in $L^2$, independently of $n$. This concludes the proof for $g$. \medskip For $h$, the proof follows the same ideas. Let us define the vector field: $$H =\left( h, u_{x_1} u_{x_2} + v_{x_1} v_{x_2}, u_{x_1} u_{x_3} + v_{x_1} v_{x_3} \right).$$ Let us recall here that $|\psi_{x_1}|^2 = u_{x_1}^2 + v_{x_1}^2$. Observe first that $H$ is an $L^2$ vector field, even if its $L^2$ norm could be unbounded as $n \to +\infty$. Taking into account \eqref{eq:GPellip}, straightforward computations give: $$ \nabla \cdot H = u_{x_1 x_2} u_{x_2} + u_{x_1 x_3} u_{x_3} + v_{x_1 x_2} v_{x_2} + v_{x_1 x_3} v_{x_3} $$ which is uniformly bounded in $L^2$ norm, again, by \eqref{eq:A bounded} and Lemma \ref{lem:bound}. 
Moreover, we can compute: $$ curl \, H = \left( \begin{array}{c} \displaystyle u_{x_1 x_2} u_{x_3} + v_{x_1 x_2} v_{x_3} - u_{x_1 x_3} u_{x_2} -v_{x_1 x_3} v_{x_2} \\ \displaystyle - u_{x_1 x_1} u_{x_3} - v_{x_1 x_1} v_{x_3} - (1-\rho^2) (u u_{x_3} + v v_{x_3})\\ \displaystyle u_{x_1 x_1} u_{x_2} + v_{x_1 x_1} v_{x_2} + (1-\rho^2) (u u_{x_2} + v v_{x_2}) \end{array} \right),$$ which is also uniformly bounded in $L^2$ norm. We now recall that: $$ H = \nabla (\Delta^{-1} (\nabla \cdot H)) - curl (\Delta^{-1} \, curl \, H),$$ see again \cite[Subsection 2.4.1]{bertozzi}. By using the Fourier Transform and Plancherel, all partial derivatives of $H$ are uniformly bounded in $L^2$, finishing the proof. \end{proof} \begin{remark} \label{r1} Let us point out that the above result can be easily extended to any dimension. However, in dimension 3 it implies, by the Sobolev inequality, that: \begin{equation} \label{eq:Sob} \int_{{\mathbb R}^3} |g_n|^6 + \int_{{\mathbb R}^3} |h_n|^6 = O(1). \end{equation} In dimension $d>3$ the Sobolev exponent is $\frac{2d}{d-2}$. However, we cannot deduce a similar expression in dimension 2. The lack of a Sobolev inequality in dimension 2 is one of the obstacles for this approach to work also in the planar case. \end{remark} \begin{definition} \label{defS} We define the set $S_n^r= \{ x \in {\mathbb R}^3: \rho_n(x) < r \}$. The behavior of these sets will be important in our arguments. \end{definition} The next lemma is a key ingredient in our proof. \begin{lem} \label{lem:51} Under the assumptions of Theorem \ref{teo:3}, assume that for some $r\in (0, 1)$, $|S_n^{r}| \to +\infty$. Then, there exist $\xi_n \in S_n^r$ and $R_n \to +\infty$ such that: $$\int_{B(\xi_n, R_n)} |g_n |^6 + |h_n |^6 + \sum_{i=2}^3 |\partial_{x_i} \psi_n |^2 \to 0.$$ \end{lem} \begin{proof} Take $x^n_1 \in S_n^r$, and define $R_n = |S_n^r|^{\frac{1}{6}}$.
Observe that $$|B(x^n_1, 2 R_n)| = c_0 |S_n^r|^{\frac{1}{2}}, \ \ c_0 = \frac{32}{3} \pi.$$ As a consequence, for $n$ large enough there exists $x^n_2 \in S_n^r \setminus B(x^n_1, 2 R_n)$. Clearly, $$ B(x^n_1, R_n) \cap B(x^n_2, R_n) = \emptyset \mbox { and } |B(x^n_1, 2 R_n) \cup B(x^n_2, 2 R_n) | \leq 2 c_0 |S_n^r|^{\frac{1}{2}}.$$ We can then choose $x^n_3 \in S_n^r \setminus (B(x^n_1, 2 R_n) \cup B(x^n_2, 2 R_n))$, so that the balls $B(x^n_1, R_n)$, $B(x^n_2, R_n)$, $B(x^n_3, R_n)$ are pairwise disjoint and $$ |B(x^n_1, 2 R_n) \cup B(x^n_2, 2 R_n) \cup B(x^n_3, 2 R_n)| \leq 3 c_0 |S_n^r|^{\frac{1}{2}}.$$ \medskip Iterating, we find $x^n_1, \dots, x^n_{j_n} \in S_n^r$ with: $$ B(x^n_j, R_n) \cap B(x^n_k, R_n) = \emptyset, \ \ j,\ k \in \{1, \dots, j_n \}, \ j \neq k,$$ where $j_n = \left[ \frac{|S_n^r|^{1/2}}{c_0}\right]$ (here $[a]$ denotes the largest integer less than or equal to $a$). Since the balls $B(x^n_k, R_n)$ are disjoint, taking into account \eqref{eq:Sob} and \eqref{eq:A bounded} we can choose $\xi_n = x^n_k$ such that: $$ \int_{B(\xi_n, R_n)} | g_n |^6 + | h_n |^6 + \sum_{i=2}^3 |\partial_{x_i} \psi_n |^2 \leq \frac{C}{j_n} \to 0.$$ \end{proof} The above result will be the key to proving the next proposition, which allows us to have some control on the set of vortices of the solutions. \begin{prop} \label{prop:vortices} Let us fix $r \in (0, c/\sqrt{2})$. Then, there exist $N \in \mathbb{N}$ and $N$ sequences of disjoint closed balls $\overline{B_k^n}= \overline{B}(\xi_k^n, R_k)$ ($k=1, \dots, N$) with $R_k \in (1, N)$ such that $$ S_n^r \subset \cup_{k=1}^N B_k^n.$$ \end{prop} \begin{proof} The proof is divided into several steps: \medskip {\bf Step 1: } $|S_n^r|$ remains bounded. Assume by contradiction that, up to a subsequence, $|S_n^r| \to +\infty$ as $n \to +\infty$. Take $\xi_n \in {\mathbb R}^3$ given by Lemma \ref{lem:51}, and define $\tilde{\psi}_n = \psi_n(\cdot + \xi_n)$. By Lemma \ref{lem:bound} we can use the Ascoli-Arzel\`{a} theorem to conclude that, up to a subsequence, $\tilde{\psi}_n$ converges $C^k$ locally to a solution $\psi$ of \eqref{eq:GPellip}.
By the choice of $\xi_n$, this solution satisfies $\rho(0) \leq r$. Moreover, by Fatou's lemma we have that: $$ \partial_{x_2} \psi =0, \ \partial_{x_3} \psi =0, \ g=0, \ h =0,$$ where $g$ and $h$ are the analogues of \eqref{g}, \eqref{h}, namely: \[ g= u_{x_1} v - v_{x_1} u - \frac{c}{2} (\rho^2-1), \] \[h= \frac 1 2 |\psi_{x_1}|^2 - \frac 1 4 (1-\rho^2)^2. \] As a consequence, $\psi$ is a 1-D solution to the Gross-Pitaevskii equation with $g=0$, $h=0$. But those are precisely the finite energy 1-D travelling waves (see \cite[pages 3, 4]{bgs-survey}); hence, after a rotation, $\psi$ has the explicit expression: $$\psi(x_1)=\sqrt{\frac{2-c^2}{2}} \tanh \left( \frac{\sqrt{2-c^2} }{2}(x_1 + t) \right)+i\frac{c}{\sqrt{2}}, \ t \in {\mathbb R}.$$ Since $|\psi(x_1)|^2 = \frac{c^2}{2} + \frac{2-c^2}{2} \tanh^2 \left( \frac{\sqrt{2-c^2} }{2}(x_1 + t) \right) \geq \frac{c^2}{2}$, this is in contradiction with $|\psi(0)| \leq r < c/\sqrt{2}$, concluding the proof. \medskip {\bf Step 2:} There exists $N \in \mathbb{N}$, $\xi_k^n \in {\mathbb R}^3$ such that: $$ S_n^r \subset \cup_{k=1}^N B(\xi_k^n, 1).$$ Fix $s \in (r, \frac{c}{\sqrt{2}})$, and define $\xi_1^n$ as any point in $S_n^r$. Since $\nabla \rho_n$ is uniformly bounded (Lemma \ref{lem:bound}), there exists $\delta >0$ such that $B(\xi_1^n, \delta) \subset S_n^s$. It suffices to take $$ \delta \leq \frac{s-r}{\sup_{n} \| \nabla \rho_n \|_{L^{\infty}}}.$$ Without loss of generality we can assume that $\delta < 1/2$. Now take as $\xi_2^n$ any point in $S_n^r \setminus B(\xi_1^n, 1)$; again, $B(\xi_2^n, \delta) \subset S_n^s$, and observe that $B(\xi_1^n, \delta) \cap B(\xi_2^n, \delta) = \emptyset$. We continue by taking as $\xi_3^n$ any point in $S_n^r \setminus \left( B(\xi_1^n, 1) \cup B(\xi_2^n, 1) \right)$, if one exists. Since $|S_n^s|$ is bounded by Step 1, this procedure has to finish at a certain point, yielding the claim of this step.
Indeed, we cannot find more than $N$ such points, where $$ N = \left [ \frac{\sup_n |S_n^s|}{\frac 4 3 \pi \delta^3} \right ].$$ Recall that $[a]$ stands for the largest integer less than or equal to $a$. \medskip {\bf Step 3:} Conclusion. By Step 2, we already have $S_n^r$ contained in $N$ balls of radius 1. The problem is that they might not be disjoint. We now perform an aggregation procedure on the balls, which can be described as follows: \medskip Take a closed ball $\overline{B}(x, R_x)$. If it intersects a closed ball $\overline{B}(y,R_y)$, we replace both balls by $\overline{B}(x, R_x+ 2 R_y)$, which contains them both. We then repeat the procedure on the new set of balls. \medskip We apply this procedure iteratively to the balls given in Step 2, and in this way we conclude. \end{proof} The above proposition is the first milestone in our proof: it allows us to control the vortices of the solutions, as they are always contained in a fixed number of disjoint balls of bounded radii. Since ${\mathbb R}^3 \setminus \cup_{k=1}^N \overline{B}_k^n$ is a simply connected open set, we can guarantee the existence of a lifting of $\psi_n$ outside these balls. Being more specific, taking $r= \frac{c}{2}$ (for instance), we can write: $$\psi_n(x) = \rho_n(x) e^{i \theta_n(x)} \ \forall \ x \in {\mathbb R}^3 \setminus \cup_{k=1}^N B_k^n,$$ where the balls $B_k^n$ are given by Proposition \ref{prop:vortices}. Since $\psi_n$ is a solution of \eqref{eq:GPellip}, we have that $\rho_n$, $\theta_n$ satisfy equations \eqref{eq:GPstripvarrp}. \begin{remark} \label{r2} This is a second point in which the requirement $d\geq 3$ is crucial. If $d=2$ we can have liftings of finite energy solutions outside one ball (see \cite[Lemma 15]{gravejat-AIHP}), but this is not possible in the complement of two or more disjoint balls. \end{remark} The next lemma is inspired by \cite[Lemmas 2.8, 2.10]{bgs-cmp}, which are concerned with the case without vortices.
Compare it with the identities \eqref{eq:MOwv}, \eqref{eq:MOwv2} for the vortexless case. \begin{lem} \label{lem:O(1)} Take $c \in (0,\ \sqrt{2})$, $c_n \in E$ with $c_n \to c$ and $\psi_n$ the solutions given by Theorem \ref{teo:almost}. Then \begin{equation}\label{eq:stimmomassa} \mathcal{P}(\psi_n)=\frac 12 \int_{{\mathbb R}^3 \setminus \cup_{k=1}^N B_k^n}(1-\rho_n^2)\partial_{x_1}\theta_n +O(1), \end{equation} \begin{equation}\label{eq:stimmomass2b} c \mathcal{P}(\psi_n)= \int_{{\mathbb R}^3 \setminus \cup_{k=1}^N B_k^n}\rho_n^2|\nabla \theta_n|^2 +O(1), \end{equation} \begin{equation}\label{eq:stimmomass3c} \int_{{\mathbb R}^3 \setminus \cup_{k=1}^N B_k^n}|\nabla \rho_n|^2 =O(1). \end{equation} \end{lem} \begin{proof} First of all, observe that $$\nabla \theta = \frac{u \nabla v-v \nabla u}{\rho^2},$$ so that, since $\rho_n \geq c/2$ outside the balls $B_k^n$ and $\nabla \psi_n$ is uniformly bounded, \begin{equation} \label{boundtheta} |\nabla \theta_n | = O(1) \mbox{ on } \partial B_k^n \Rightarrow |\theta_n(p) - \theta_n(q)| \leq C \ \forall \ p,\ q \in \partial B_{k}^n. \end{equation} This is useful in what follows; observe that we do not know whether $\| \theta_n \|_{L^{\infty}}$ is bounded or not. Direct computation gives $$ \mathcal{P}(\psi_n)=\frac 12 \int_{{\mathbb R}^3 \setminus \cup_{k=1}^N B_k^n}\partial_{x_1}(\rho_n \sin \theta_n)-\rho_n^2\partial_{x_1}\theta_n +\frac 12 \int_{\cup_{k=1}^N B_k^n} \langle i \partial_{x_1} \psi_n, \psi_n-1\rangle, $$ which implies that \begin{equation}\label{eq:stimmomassz}\mathcal{P}(\psi_n)=\frac 12 \int_{{\mathbb R}^3 \setminus \cup_{k=1}^N B_k^n}\partial_{x_1}(\rho_n \sin \theta_n -\theta_n)+(1-\rho_n^2)\partial_{x_1}\theta_n +O(1). \end{equation} In order to get \eqref{eq:stimmomassa} it hence suffices to prove that \begin{equation}\label{eq:stimmomasszz} \int_{{\mathbb R}^3 \setminus \cup_{k=1}^N B_k^n}\partial_{x_1}(\rho_n \sin \theta_n-\theta_n)=O(1). \end{equation} We recall that $ \partial_{x_1} (\rho_n \sin \theta_n - \theta_n)$ is integrable thanks to \eqref{eq:pointcruc} and \eqref{eq:stimmomassz}.
By integration by parts, using the decay estimates at infinity, we get $$\int_{{\mathbb R}^3 \setminus \cup_{k=1}^N B_k^n} \partial_{x_1}(\rho_n \sin \theta_n-\theta_n) =\sum_{k=1}^N \int_{\partial B_k^n} \left(\rho_n \sin \theta_n -\theta_n\right)\eta_1,$$ where $\eta_1$ is the first component of the inward unit normal vector to the spheres $\partial B_k^n$. Relation \eqref{eq:stimmomasszz} follows now from \eqref{boundtheta} together with the fact that $\eta_1$ has zero average on the sphere, which implies that $$\int_{\partial B_k^n} \theta_n\eta_1=\int_{\partial B_k^n} \left(\theta_n-\theta_n(p_0)\right)\eta_1=O(1),$$ where $p_0$ is an arbitrary point on the sphere $\partial B_k^n$.\\ In order to prove \eqref{eq:stimmomass2b} we argue as in Lemma \ref{prop:stimmoment}, i.e. we multiply the first equation of \eqref{eq:GPstripvarrp} by $\theta_n$ and then integrate on ${\mathbb R}^3 \setminus \cup_{k=1}^N B_k^n$. By integration by parts we get $$\frac{c}{2}\int_{{\mathbb R}^3 \setminus \cup_{k=1}^N B_k^n} (1-\rho_n^2)\partial_{x_1}\theta_n -\int_{{\mathbb R}^3 \setminus \cup_{k=1}^N B_k^n} \rho_n^2|\nabla \theta_n|^2 $$$$ = \sum_{k=1}^N \int_{\partial B_k^n} \theta_n \left ( \frac{c}{2} (1-\rho_n^2) \eta_1- \rho_n^2 \nabla \theta_n \cdot \eta \right ).$$ Observe that $G_n(x)= (\frac{c}{2} (1-\rho_n^2), 0, 0 ) - \rho_n^2 \nabla \theta_n$, where $G_n$ is defined in \eqref{defG}. In particular it is defined in the whole Euclidean space and $\nabla \cdot G_n=0$. By integrating by parts in $B_k^n$, we obtain that $$ \int_{\partial B_k^n} \frac{c}{2} (1-\rho_n^2) \eta_1- \rho_n^2 \nabla \theta_n \cdot \eta =0.$$ As a consequence, we can use \eqref{boundtheta} to obtain: $$ \int_{\partial B_k^n} \theta_n \left ( \frac{c}{2} (1-\rho_n^2) \eta_1- \rho_n^2 \nabla \theta_n \cdot \eta \right ) $$$$ = \int_{\partial B_k^n} (\theta_n - \theta_n(p_0)) \left ( \frac{c}{2} (1-\rho_n^2) \eta_1- \rho_n^2 \nabla \theta_n \cdot \eta \right )=O(1).$$ \medskip Now we prove \eqref{eq:stimmomass3c}.
From Lemma \ref{lem:poho} we get $$\frac{1}{2} \int_{{\mathbb R}^3} |\nabla \psi_n|^2 -2 c \mathcal{P}(\psi_n)+ \frac 3 4 \int_{{\mathbb R}^3} \left(1-|\psi_n|^2\right)^2 =0,$$ which implies, thanks to \eqref{eq:stimmomass2b}, $$\frac{1}{2} \int_{{\mathbb R}^3 \setminus \cup_{k=1}^N B_k^n} |\nabla \rho_n|^2 +\frac{3}{2} \int_{{\mathbb R}^3 \setminus \cup_{k=1}^N B_k^n} \rho_n^2|\nabla \theta_n|^2 + \frac 3 4 \int_{{\mathbb R}^3 \setminus \cup_{k=1}^N B_k^n} \left(1-\rho_n^2\right)^2 =$$ $$=3 \int_{{\mathbb R}^3 \setminus \cup_{k=1}^N B_k^n} \rho_n^2|\nabla \theta_n|^2 +O(1)=3c \mathcal{P}(\psi_n)+O(1).$$ As a consequence we get $$3 \left(\mathcal{E}(\psi_n)-c \mathcal{P}(\psi_n)\right)=\int_{{\mathbb R}^3 \setminus \cup_{k=1}^N B_k^n} |\nabla \rho_n|^2+O(1),$$ and hence \eqref{eq:stimmomass3c} follows from the fact that $\mathcal{E}(\psi_n)-c \mathcal{P}(\psi_n)=I(\psi_n)=O(1).$ \end{proof} For the next proposition it is useful to recall Definition \ref{defS}. \begin{prop} \label{prop:ma} Take $c \in (0,\ \sqrt{2})$, $c_n \in E$ with $c_n \to c$ and $\psi_n$ the solutions given by Theorem \ref{teo:almost}. Assume that: \begin{equation} \mathcal{E}(\psi_n) \to + \infty. \end{equation} Then, for any $r \in (\frac{c}{\sqrt{2}}, 1)$, $|S_n^r| \to +\infty$. \end{prop} \begin{proof} We recall the form of the energy for functions $\psi$ given by a lifting $\psi = \rho e^{i \theta}$: $$e(\rho, \theta)= \frac 12 \left(|\nabla \rho|^2 + |\nabla \theta|^2\rho^2 \right)+\frac 14 \left(1-\rho^2\right)^2.$$ The following function represents the Lagrangian in the vortexless case, and is an approximation of the real Lagrangian in view of \eqref{eq:stimmomassa}: $$l(\rho, \theta)= e(\rho, \theta)- \frac{c_n}{2} (1-\rho^2)\partial_{x_1} \theta.$$ Assume by contradiction that $|S_n^r|$ is bounded for some $r > \frac{c}{\sqrt{2}}$.
Observe that: $$ I^{c_n} (\psi_n ) = \int_{{\mathbb R}^3 \setminus \cup_{k=1}^N B_k^n} l(\rho_n, \theta_n) + O(1)= \int_{\{\rho_n \geq r\}} l(\rho_n, \theta_n) + O(1).$$ We now use the inequality $|\frac c 2 (1-\rho_n^2)\partial_{x_1} \theta_n| \leq \frac{(1-\rho_n^2)^2}{4 (1+\varepsilon)} + \frac{c^2}{4} (\partial_{x_1} \theta_n)^2(1+\varepsilon)$ with suitable $\varepsilon>0$ to obtain: $$ \int_{\{\rho_n \geq r\}} l(\rho_n, \theta_n) \geq \int_{\{\rho_n \geq r \} } \frac 1 2 |\nabla \rho_n|^2 + \left[ \frac 1 2 - \frac{c^2 (1+\varepsilon)}{4\rho_n^2} \right] |\nabla \theta_n|^2\rho_n^2 + \frac{\varepsilon}{1+\varepsilon} \frac{(1-\rho_n^2)^2}{4}$$ $$ \geq \varepsilon_0 \int_{\{\rho_n \geq r \}} e(\rho_n, \theta_n) = \varepsilon_0 \, \mathcal{E}(\psi_n) + O(1),$$ for suitable $\varepsilon_0>0$. Then, $$O(1) = I^{c_n}(\psi_n) \geq \varepsilon_0 \, \mathcal{E}(\psi_n) + O(1),$$ a contradiction with the assumption $\mathcal{E}(\psi_n) \to + \infty$. \end{proof} \subsection{Proof of Theorem \ref{teo:3}} With all the results above we can immediately conclude the proof of Theorem \ref{teo:3}. Indeed, by \eqref{eq:stimmomass3c} and the Sobolev inequality, we have that $$ \int_{{\mathbb R}^3} (1-\rho_n)^6 = O(1).$$ If $\mathcal{E}(\psi_n) \to +\infty$, Proposition \ref{prop:ma} implies that $|S_n^r|$ is unbounded for $r > \frac{c}{\sqrt{2}}$; since $(1-\rho_n)^6 \geq (1-r)^6$ on $S_n^r$, this is a contradiction with the above estimate. Hence $\mathcal{E}(\psi_n)$ is bounded. By Fatou's lemma, the solution $\psi_0$ given in Proposition \ref{ascoli} has finite energy, concluding the proof. \section{Proof of Theorem \ref{teo:all}} In this section we prove the compactness criterion given in Theorem \ref{teo:all}. The proof follows some of the ideas of the previous section, but with important differences. As previously, we will be done if we show that $\mathcal{E}(\psi_n)$ is bounded.
By \eqref{control}, we have that $\psi_n \neq 0$ outside $B(0,R)$; as a consequence, $\psi_n$ admits a lifting $\psi_n(x) = \rho_n(x) e^{i \theta_n(x)}$ for all $x \in {\mathbb R}^2 \setminus B(0,R)$. This is a consequence of the fact that $\psi_n$ has finite energy, see \cite[Lemma 15]{gravejat-AIHP}. In the vortexless case, this lifting holds in the whole Euclidean space. The next lemma is a version of Lemma \ref{lem:O(1)}: \begin{lem} \label{jopeta} Take $c \in (0,\ \sqrt{2})$, $c_n \in E$ with $c_n \to c$ and $\psi_n$ the solutions given by Theorem \ref{teo:almost}. Assume also that there exist $R>0$ and $\delta>0$ such that \eqref{control} is satisfied. Then \begin{equation}\label{eq:stimmomass} \mathcal{P}(\psi_n)=\frac 12 \int_{B(0,R)^c}(1-\rho_n^2)\partial_{x_1}\theta_n +O(1), \end{equation} \begin{equation}\label{eq:stimmomass2} c \mathcal{P}(\psi_n)= \int_{B(0,R)^c}\rho_n^2|\nabla \theta_n|^2 +O(1), \end{equation} \begin{equation}\label{eq:stimmomass3} \int_{B(0,R)^c}|\nabla \rho_n|^2 =O(1). \end{equation} \end{lem} \begin{proof} The proof is completely analogous to that of Lemma \ref{lem:O(1)}. Observe that in the vortexless case we have exact identities in \eqref{eq:stimmomass}, \eqref{eq:stimmomass2}, \eqref{eq:stimmomass3}. \end{proof} With Lemma \ref{jopeta} in hand, we can adapt the proof of Proposition \ref{prop:ma} to our setting, obtaining the following result: \begin{prop} \label{prop:ma2} Take $c \in (0,\ \sqrt{2})$, $c_n \in E$ with $c_n \to c$ and $\psi_n$ the solutions given by Theorem \ref{teo:almost}. Assume that: \[ \mathcal{E}(\psi_n) \to + \infty. \] Then, for any $r \in (\frac{c}{\sqrt{2}}, 1)$, $|S_n^r| \to +\infty$. \end{prop} The next result is analogous to Lemma \ref{lem:51}. The only difference is that now we do not know that \eqref{eq:Sob} holds, but instead we have \eqref{eq:stimmomass3}. \begin{lem} \label{lem:61} Under the assumptions of Theorem \ref{teo:all}, assume that for some $r\in (0, 1)$, $|S_n^{r}| \to +\infty$.
Then, there exist $\xi_n \in S_n^r$ and $R_n \to +\infty$ such that: $$\int_{B(\xi_n, R_n)} |\nabla \rho_n |^2 + |\partial_{x_2} \psi_n |^2 \to 0.$$ \end{lem} \begin{proof} The proof is analogous to that of Lemma \ref{lem:51}. \end{proof} \subsection{Proof of Theorem \ref{teo:all}} Assume by contradiction that $\mathcal{E}(\psi_n) \to +\infty$. By Proposition \ref{prop:ma2}, we can apply Lemma \ref{lem:61} to a value $r$ satisfying: $$\frac{c}{\sqrt{2}} < r < \sqrt{ \frac{2}{3} (1+c^2/4)} <1.$$ Notice that this is possible because $c<\sqrt{2}$. Let $\xi_n \in {\mathbb R}^2$ be given by Lemma \ref{lem:61} and define $\tilde{\psi}_n(x)= \psi_n(x- \xi_n)$. Up to a subsequence we have that: $$ \tilde{\psi}_n \to {\psi}_0 \mbox{ in } C^k_{loc}({\mathbb R}^2).$$ Taking into account Remark \ref{remark morse}, $ind(\psi_0) \leq 1$. By Lemma \ref{lem:61}, $\psi_0$ depends only on the $x_1$ variable. Moreover $\nabla \rho_0 =0$, so that $\rho_0 = |{\psi}_0|$ is constant, with $\rho_0 \leq r$. That is, $\psi_0 (x_1)$ is a $1D$ circular solution, $$ \psi_0(x_1) = \rho_0 e^{i \omega (x_1 - t)},$$ where $\omega^2 + c \omega + \rho_0^2=1$. By the choice of $r$, we have that $\rho_0^2 < \frac{2}{3} (1+c^2/4)$. But those solutions have infinite Morse index, as shown in Proposition \ref{appendix} (see Appendix). This contradiction shows that $\mathcal{E}(\psi_n)$ is bounded. By Fatou's lemma, the solution $\psi_0$ given in Proposition \ref{ascoli} has finite energy, concluding the proof. \begin{remark} Let us point out that Theorem \ref{teo:3} does not need the information on the Morse index of the solutions. The main tool there is that $I^{c_n}(\psi_n) = O(1)$. Instead, Theorem \ref{teo:all} requires in an essential way that the Morse index of the solutions obtained is bounded.
\end{remark} \section{Appendix (by Rafael Ortega)} In this appendix we prove the following result: \begin{prop} \label{appendix} Given $t \in {\mathbb R}$, $\omega_0 \in {\mathbb R}$, $\rho_0 >0$ satisfying $\omega_0^2 + c \omega_0 + \rho_0^2=1$, the function $\psi_0(x) = \rho_0 e^{i \omega_0 (x - t)}$ is an (infinite energy) solution of \eqref{eq:GPellip}. Assume also that $\rho_0^2 < \frac{2}{3} (1+c^2/4)$. Then its Morse index, as defined in Definition \ref{Morse}, is infinite. \end{prop} \begin{proof} The problem is autonomous, so we can assume $t=0$. The proof is based on the study of the $1D$ problem: \begin{equation}\label{eq:GP1D} \psi'' + i c \psi' + \left(1-|\psi|^2\right)\psi=0 \ \ \text{ on } {\mathbb R}. \end{equation} By the change of variables $\phi= e^{i x c/2 } \psi$ we pass to the problem: \begin{equation}\label{eq:GP1Dbis} \phi'' +\left(1+ c^2/4 -|\phi|^2\right)\phi=0 \ \ \text{ on } {\mathbb R}. \end{equation} The Morse index of this problem depends on the existence of conjugate points to some solutions of the linearized equation, see for instance \cite[Chapter 5]{gelfand}. The function $\phi(x)= \rho_0 e^{i \omega_1 x}$ is a solution of \eqref{eq:GP1Dbis}, where $\omega_1 = \omega_0 + c/2$. Observe that \begin{equation} \label{relation} \omega_1^2 + \rho_0^2 = 1 + c^2/4. \end{equation} The linearized equation to \eqref{eq:GP1Dbis} around the solution $\phi$ is: \[ \zeta'' + (1+c^2/4) \zeta - 2 \overline{\phi(s)} \phi(s) \zeta - \phi(s)^2 \overline{\zeta}=0. \] We will follow the lines of \cite[Section 21]{SM} to analyze the oscillatory properties of this equation. For $\phi(s) = \rho_0 e^{i \omega_1 s}$, the equation reads: \[ \zeta'' + (1+c^2/4) \zeta - 2 \rho_0^2 \zeta - \rho_0^2 e^{2 i \omega_1 s }\overline{\zeta}=0. \] We now make the change of variable $\zeta = e^{i \omega_1 s} \eta$, to obtain a constant coefficient linear system: \begin{equation} \label{constant} \eta'' + 2 i \omega_1 \eta' - \rho_0^2 \eta - \rho_0^2 \overline{\eta}=0.
\end{equation} If $\rho_0^2 < 2 \omega_1^2$ (which, by \eqref{relation}, reduces to $\rho_0^2 < \frac{2}{3}(1+c^2/4)$) we can find the following explicit solution to \eqref{constant}: $$ \eta(s)= \frac{ \sin \left( s \sqrt{4 \omega_1^2 - 2\rho_0^2} \right) }{\sqrt{4 \omega_1^2 - 2\rho_0^2}} + i \omega_1 \frac{ \cos \left( s \sqrt{4 \omega_1^2 - 2\rho_0^2} \right) -1}{2 \omega_1^2 - \rho_0^2}.$$ Clearly, $\zeta(s)= e^{i \omega_1 s} \eta(s)$ has infinitely many conjugate points $\frac{2 \pi n}{\sqrt{4 \omega_1^2 - 2\rho_0^2}}$, $n \in {\mathbb N}$. Given any interval $I$, the quadratic functional $\tilde{Q}_{1,I}: H_0^1(I, \mathbb{C}) \to {\mathbb R}$, $$\tilde{Q}_{1, I}(\sigma)= \int_{I} |\sigma'|^2 - (1+c^2/4 - |\phi|^2) |\sigma|^2 +2 (\langle \sigma, \phi \rangle)^2,$$ is in the conditions of Section 29.2 of \cite{gelfand}. We can apply \cite[Theorem 3' on page 122]{gelfand} to deduce that $\tilde{Q}_{1, I}$ takes negative values as soon as the length of the interval $I$ is greater than $\frac{2 \pi }{\sqrt{4 \omega_1^2 - 2\rho_0^2}}$. Then we can find infinitely many functions $\sigma_k \in C^{\infty}_0({\mathbb R})$ \emph{ with disjoint support} such that $$\tilde{Q}_1(\sigma_k)= \int_{-\infty}^{\infty} |\sigma_k'|^2 - (1+c^2/4 - |\phi|^2) |\sigma_k|^2 +2 (\langle \sigma_k, \phi \rangle)^2 <0.$$ We now want to pass to the original problem \eqref{eq:GP1D} and estimate its Morse index. In order to do so, define $\tau_k(s)$ by $\sigma_k(s)= e^{ics/2}\tau_k(s)$.
Simple computations give: $$ \sigma_k'(s)= \left( i \frac c 2 \tau_k(s) + \tau_k'(s) \right)e^{ics/2},$$ $$|\sigma_k'(s)|^2 = |\tau_k'(s)|^2 + \frac{c^2}{4} |\tau_k(s)|^2 + c \langle i \tau_k(s), \tau_k'(s) \rangle = |\tau_k'(s)|^2 + \frac{c^2}{4} |\tau_k(s)|^2 - c \langle \tau_k(s), i \tau_k'(s) \rangle.$$ Moreover, $$ \langle \sigma_k(s), \phi(s) \rangle = \langle \tau_k(s), \psi_0(s) \rangle.$$ As a consequence $\tilde{Q}_1(\sigma_k)= Q_1(\tau_k)<0$, where $$Q_1(\tau_k)=\int_{-\infty}^{\infty} |\tau_k'|^2 - c \langle \tau_k, i \tau_k' \rangle - (1 - |\psi_0|^2) |\tau_k|^2 +2 (\langle \tau_k, \psi_0 \rangle)^2.$$ Observe that this is the quadratic form associated to \eqref{eq:GP1D}. Now take a $C_0^\infty$ function $\chi_k: {\mathbb R}^{d-1} \to {\mathbb R}^+$, and let us estimate $Q$ on the function $\iota_k(x)=\chi_k(\tilde{x}) \tau_k(x_1)$, where $Q$ is defined in \eqref{defQ}: $$Q(\iota_k)= Q_1(\tau_k) \int_{{\mathbb R}^{d-1}} \chi_k(\tilde{x})^2 \, d\tilde{x} + \left( \int_{-\infty}^{+ \infty} |\tau_k(x_1)|^2 \, dx_1\right) \left( \int_{{\mathbb R}^{d-1}} |\nabla \chi_k(\tilde{x})|^2 \, d\tilde{x} \right).$$ It now suffices to take $\chi_k$ such that $\int_{{\mathbb R}^{d-1}} \chi_k^2=1$ and $\int_{{\mathbb R}^{d-1}} |\nabla \chi_k|^2 $ is sufficiently small, to conclude that $Q(\iota_k )<0$. \medskip Observe also that $\mathrm{supp} \, \iota_k \cap \mathrm{supp} \, \iota_{k'} = \emptyset$ if $k \neq k'$, since the analogous property holds for $\sigma_k$ and $\tau_k$. Hence, $Q$ is negative definite on the vector space generated by the linearly independent functions $\{\iota_1, \ \dots, \iota_k\}$ for any $k \in {\mathbb N}$, concluding the proof. \end{proof}
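For the reader's convenience, let us verify the explicit solution $\eta$ used in the proof above. Writing $\eta = a + i b$ with $a$, $b$ real-valued, equation \eqref{constant} splits into the system
$$ a'' - 2 \omega_1 b' - 2 \rho_0^2 a = 0, \qquad b'' + 2 \omega_1 a' = 0.$$
Setting $\nu = \sqrt{4 \omega_1^2 - 2 \rho_0^2}$, so that $\nu^2 = 2(2 \omega_1^2 - \rho_0^2)$, the functions $a(s)= \frac{\sin (\nu s)}{\nu}$ and $b(s) = \omega_1 \frac{\cos(\nu s) - 1}{2 \omega_1^2 - \rho_0^2}$ satisfy $b'(s) = -\frac{2 \omega_1}{\nu} \sin(\nu s)$, and hence
$$ a'' - 2 \omega_1 b' - 2 \rho_0^2 a = \frac{\sin(\nu s)}{\nu} \left( - \nu^2 + 4 \omega_1^2 - 2 \rho_0^2 \right) = 0, \qquad b'' + 2 \omega_1 a' = -2 \omega_1 \cos(\nu s) + 2 \omega_1 \cos(\nu s) = 0.$$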
\section{Introduction} \label{intro} This article gives a short review of the main methods used to analyse linear polarization maps. Given the increasingly prolific literature on the subject, not all the articles using each technique are cited, but the generic articles introducing the conceptual ideas behind each procedure are used as references. Additional methods employing wavelet transform analysis techniques and Bayesian inference statistical tools currently in development are not discussed here, and we refer the interested reader to the method investigated by \cite{robitaille2019} and to the IMAGINE consortium project \cite{imagine2018}, respectively. The review focuses on the analysis of maps obtained from the observation of linearly polarized thermal dust emission at submillimeter (submm) wavelengths toward Galactic Molecular Clouds (MCs) and protostellar cores, but the same methods can be used on maps obtained from simulations. Independently of the dust grain alignment mechanism that is considered (see \cite{and2015} for a review on this subject), the main accepted current picture is that of dust grains aligned with their long axes perpendicular to the local magnetic field direction pervading the Interstellar Medium (ISM). Each polarization pseudo-vector displayed in one pixel of a polarization map is therefore an average measurement of the weighted contribution by all dust grains along a given Line-Of-Sight (LOS), in a direction perpendicular to the average magnetic field on the Plane-Of-Sky (POS). From the measurements of the Stokes parameters $I$, $Q$ and $U$, the total fraction of polarization ($p=\frac{\sqrt{Q^2+U^2}}{I}$) is often represented by the length of the pseudo-vector, and the polarization angle (P.A.; $\theta=\frac{1}{2} \arctan \left( \frac{U}{Q} \right)$) by its orientation with respect to a given reference frame.
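As an illustration, the two quantities above follow directly from the Stokes maps; a minimal Python/NumPy sketch (the function names are illustrative), using the two-argument arctangent to resolve the sign ambiguity of $U/Q$:

```python
import numpy as np

def polarization_fraction(I, Q, U):
    """Total polarization fraction p = sqrt(Q^2 + U^2) / I."""
    return np.sqrt(Q**2 + U**2) / I

def polarization_angle(Q, U):
    """Polarization angle theta = 0.5 * arctan(U/Q), in radians.

    np.arctan2 keeps track of the signs of U and Q; the factor 1/2
    reflects the 180-degree ambiguity of polarization pseudo-vectors.
    """
    return 0.5 * np.arctan2(U, Q)

# A pixel with Q = 0 and U > 0: p = U/I and a 45-degree position angle.
I, Q, U = 1.0, 0.0, 0.1
p = polarization_fraction(I, Q, U)
theta_deg = np.degrees(polarization_angle(Q, U))
```

Note that real measurements additionally require debiasing of $p$ and propagation of the uncertainties on $Q$ and $U$, which are omitted here.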
Other representations of $p$ exist in the literature, and maps showing only drapery patterns of the POS magnetic field lines, or of the P.A.s, are more and more common. An example of a submm polarization map is shown in Figure~\ref{fig-1} (left panel). \begin{figure}[h] \begin{center} \includegraphics[scale=3.3]{fig1-a.eps} \includegraphics[scale=0.14,angle=0]{fig1-c.eps} \caption{Left: Figure 1 from \cite{koch2012} showing the Submillimeter Array (SMA) 870 $\mu$m polarization map of the collapsing core W51 e2 obtained by \cite{tang2009}. The thick red segments show the magnetic field orientation after rotation by 90$^\circ$ of the polarization pseudo-vectors (see Introduction) where polarization was detected such that $p/\sigma_{p}>3$. The degree of polarization is not represented in this map. The blue pseudo-vectors show the gradient directions (see Section~\ref{sec-2} and Section~\ref{sec-3}) of the dust emission continuum mapped at 1.3 mm by \cite{lai2001} with the Berkeley-Illinois-Maryland Association (BIMA). Right: Figure 2 from \cite{poidevin2013} showing the ADF of the SCUBA JCMT DR21 850 $\mu$m polarization maps (see Section~\ref{sec-1}).} \label{fig-1} \end{center} \end{figure} \section{Davis-Chandrasekhar and Fermi (DCF) method and Angular Dispersion Function (ADF)} \label{sec-1} The DCF method was introduced in the middle of the twentieth century by \cite{davis1951} and \cite{cf1953}. It was first designed to get estimates of the POS magnetic field strength, $B_{pos}$, assuming a turbulent diffuse ISM. It was applied to polarimetry data obtained on star fields by \cite{hiltner1951} at visible wavelengths. In this regime, the polarization is produced by dichroic absorption of starlight by dust grain layers pervading the diffuse ISM, in a direction perpendicular to the one produced in emission at submm wavelengths by identical polarizing dust grains (see \cite{planck_ir_xxi}).
Inferred from the data is a large-scale uniform magnetic field along the Galactic Plane (GP). The fluctuations around the mean of the distribution of the polarization pseudo-vectors are assumed by \cite{davis1951} and \cite{cf1953} to be produced by Alfv\'en waves that are coupled to the gas such that there is equipartition between kinetic and perturbed magnetic energies. With this model $B_{pos}$ is a function of the ISM gas density $\rho$, of the gas velocity dispersion $\sigma_{v}$, and of the polarization angle dispersion $\sigma_{\theta}$, such that: $B_{pos} = Q \sqrt{4 \pi \rho} \frac{\sigma_{v}}{\sigma_{\theta}}$, where $Q$ is a factor of proportionality. With this method, \cite{cf1953} calculated a diffuse ISM $B_{pos}$ estimate of a few $\mu$G, consistent with those obtained with other independent methods. Later on, in the 1990s, when polarimetry detectors at submm wavelengths started to be sensitive enough, the DCF method was used on submm maps of molecular clouds, i.e. transposed to spatial scales 1000 to 10000 times smaller than the GP scale, in regions with densities two or more orders of magnitude higher than that of the diffuse ISM. The first comprehensive study was done by \cite{gonatas1990} from 10 measurements obtained at 100$\,\mu$m with the Kuiper Airborne Observatory (KAO) in the Orion Nebula, leading to $B_{pos}$ estimates of a few mG. The DCF method has been tested numerically by \cite{ostriker2001} and \cite{dfg2008}, and some refinements to the calculations of $B_{pos}$ have been proposed. A review on the values of $Q$ has been given by \cite{poidevin2013}. \textbf{Improvements to the Method}: Further major improvements to the method were designed by \cite{hildebrand2009}, \cite{houde2009} and \cite{houde2011}.
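Before turning to these refinements, note that in its basic form the DCF estimate is a one-line computation. A hedged Python sketch in CGS units (the default $Q=0.5$ and the example input values are illustrative assumptions, not prescriptions):

```python
import numpy as np

def dcf_bpos(rho, sigma_v, sigma_theta, Q=0.5):
    """Basic DCF plane-of-sky field strength, B_pos = Q*sqrt(4*pi*rho)*sigma_v/sigma_theta.

    rho         : gas mass density [g cm^-3]
    sigma_v     : gas velocity dispersion [cm s^-1]
    sigma_theta : polarization angle dispersion [radians]
    Q           : proportionality factor (see text for a review of its values)
    """
    return Q * np.sqrt(4.0 * np.pi * rho) * sigma_v / sigma_theta

# Illustrative molecular cloud values: n(H2) = 1e4 cm^-3,
# sigma_v = 1 km/s, sigma_theta = 10 degrees.
rho = 2.33 * 1.67e-24 * 1.0e4                     # mean molecular weight 2.33
B_pos = dcf_bpos(rho, 1.0e5, np.radians(10.0))    # of order a few 100 microGauss
```

With these inputs the estimate falls in the $\mu$G-to-mG range quoted in the text.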
The Angular Dispersion Function (ADF) was introduced by \cite{hildebrand2009} to avoid inaccurate estimates of magnetohydrodynamic or turbulent dispersion, as well as to avoid inaccurate estimates of $B_{pos}$, due to the large-scale, non turbulent field structure. The ADF is expressed by $<\Delta \Phi^2(l)>^{1/2} \equiv \left\{ \frac{1}{N(l)} \Sigma_{i=1}^{N(l)}[\Phi(x) - \Phi(x+l)]^2 \right\}^{1/2}$, where $\Phi(x)$ is the angle associated to the projected POS magnetic field vector $B(x)$ at position $x$ in a map. The difference in angle between two points is obtained by $\Delta \Phi(l) \equiv \Phi(x) - \Phi(x+l)$, and is calculated between the $N(l)$ pairs of vectors separated by displacement, or lag, $l$. $<...>$ denotes an average and $l=|l|$. The square of the ADF, a second-order structure function, is also often used (e.g. \cite{dfg2008}). One example of ADF is shown in Figure~\ref{fig-1} (right panel), where the angular dispersion $b$ is the intercept of the fit at $l=0$. Correlations in polarization angles at lags $l$ smaller than the telescope beam ($1.22 \lambda / D$) or than the turbulent correlation length ($\delta$) have to be avoided. The fit is ideally applied on the set of points calculated by taking into account the measurement uncertainties and such that the lag distance $l$ is smaller than the typical length scale $d$ for variations in the large-scale magnetic fields. Once $b$ is estimated from a map, this method also provides an estimate of the turbulent to large-scale magnetic field strength ratio such that $\frac{<B_{t}^{2}>^{1/2}}{B_{pos}}=\frac{b}{\sqrt[]{2-b^2}}$, and the POS strength of the large-scale component is estimated by $B_{pos} \simeq \sqrt[]{8 \pi \rho} \frac{\sigma_{v}}{b}$. The method to take into account the effect of the signal integration through the thickness of the clouds as well as across the area subtended by the telescope beam is fully incorporated by \cite{houde2009}.
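As a concrete illustration, the ADF defined above can be estimated from a map of position angles as follows. This sketch is simplified: it only uses pairs separated by a lag along the two pixel axes, whereas in practice all pairs are binned in annuli of lag $l$; the wrapping of angle differences to $(-90^{\circ}, 90^{\circ}]$ accounts for the $180^{\circ}$ ambiguity of the pseudo-vectors:

```python
import numpy as np

def adf(phi, lag):
    """Angular dispersion function <dPhi^2(l)>^(1/2) at an integer pixel lag.

    phi : 2-D array of position angles in degrees.
    Differences between pairs separated by `lag` pixels along each axis
    are wrapped to (-90, 90] degrees before averaging their squares.
    """
    diffs = []
    for dphi in (phi[:, lag:] - phi[:, :-lag],   # horizontal pairs
                 phi[lag:, :] - phi[:-lag, :]):  # vertical pairs
        wrapped = (dphi + 90.0) % 180.0 - 90.0   # fold to (-90, 90]
        diffs.append(wrapped.ravel())
    d = np.concatenate(diffs)
    return np.sqrt(np.mean(d**2))
```

Fitting the square of this function against $l$, over lags between the beam scale and the large-scale variation length $d$, then yields the intercept $b$ used in the expressions above.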
The authors of \cite{houde2009} also show how to evaluate the turbulent magnetic field correlation length scale from polarization maps obtained with sufficiently high resolution and a high enough spatial sampling rate. Further examples, as well as the application of the technique to interferometry measurements, are given and discussed by \cite{houde2011}. \textbf{Results:} The DCF method has been applied to data from many Galactic MCs obtained on sky patches, including Gould belt MCs (e.g. \cite{coude2019}) and some of the closest low-, intermediate- or high-mass star-forming regions. Estimates of $B_{pos}$ lie typically in the range of a few $\mu$G to a few mG. In OMC-1 the turbulent correlation length is estimated to be $\delta \approx $ 10 mpc (e.g. \cite{houde2009}). Independently of the uncertainties coming from the propagation of the errors, the estimates of $B_{pos}$ and $\delta$ will rely on the choice (or availability) of the gas tracers and of the value of $Q$ used for the calculation of $B_{pos}$ (e.g. see discussion in \cite{pattle2017}). In principle, accurate and reliable results can be obtained with polarization data of sufficient spatial resolution and a high enough spatial sampling rate (\cite{houde2009}). The DCF method has been applied to a large fraction of the sky by \cite{planck_ir_xix} and to the full sky by \cite{planck2018_xii}. Using the Planck Release 3 data, \cite{planck2018_xii} obtained the following relation between the ADF, $S$, and the fraction of polarization, $p$, as a function of the map resolution, $w$: $<S_{p}>=\frac{0^{\circ}.31}{p} \left( \frac{w}{160^{'}} \right)$. The results are displayed in Figure~\ref{fig-2} (top-left panel), and show that down to a resolution of $10'$ the systematic decrease in $p$ with $N_{\rm H}$ is determined mostly by the magnetic-field structure.
At a higher resolution of $2.5'$, using the $500 \mu$m BLASTPol polarization map of Vela C, \cite{fissel2016} discuss the dependence of $p$ on the dust temperature and on $N_{\rm H}$ and show that $p \varpropto N_{\rm H}^{-0.45} S^{-0.60}$, suggesting that dust grain alignment properties may also contribute to the decrease of $p$ in some conditions. \begin{figure}[h] \begin{center} \includegraphics[scale=0.38,angle=90]{fig2.ps} \caption{ Top-Left: Figure 11 from \cite{planck2018_xii} showing the $S \times p$ relation as a function of column density $N_{\rm H}$, where $w$ is the map resolution parameter (see Section~\ref{sec-1}). Bottom-Left: Figure 6 from \cite{fissel2019} showing the PRS $Z_{x}$ corrected for oversampling obtained with several molecular tracers (see Section~\ref{sec-2}). Right: Figure 3 from \cite{koch2012} showing the 90$^{\circ}$ rotated P.A. $\alpha$, and the angle associated with the gradient orientation $\psi$, used to retrieve information on the magnetic field strength and significance (see Section~\ref{sec-3}). } \label{fig-2} \end{center} \end{figure} \section{Histogram of Relative Orientations (HROs) between ISM tracer structures and magnetic field structures} \label{sec-2} This method was designed by \cite{soler2013} and subsequently applied to many molecular cloud regions (e.g. \cite{soler2017}, \cite{planck_ir_xxxv}). The concept of the method relies on the calculation of the gradient of the column density structure provided by a given material tracer (e.g. $N_{\rm H}$) in the pixels of a polarization map. As an illustration, the blue pseudo-vectors displayed in Figure~\ref{fig-1} (left panel) show the gradient orientations obtained from a dust emission continuum map. Once the gradients are quantified, their relative orientation with respect to the POS magnetic field orientations inferred from the P.A.s can be estimated and the HROs can be analysed.
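A simple way to obtain such gradient orientations from a tracer map is via finite differences; a minimal sketch using centred differences (more elaborate derivative kernels are used in practice):

```python
import numpy as np

def gradient_angle(I):
    """Plane-of-sky orientation of the intensity gradient, in degrees.

    Centred finite differences via np.gradient; with row index = y and
    column index = x, the angle is measured from the +x axis.
    """
    gy, gx = np.gradient(I)
    return np.degrees(np.arctan2(gy, gx))
```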
Several methods exist to calculate the gradients in a map, as a function of the morphology structure of the object one is interested in isolating (see e.g. \cite{alina2019} and references therein). HROs can be built by considering the relative orientation of P.A.s with respect to the average orientation of a large-scale structure identified in a map (e.g. \cite{alina2019}). More frequently, HROs are calculated by considering the relative orientation angle $\phi$ between the POS magnetic field $<\widehat{B}_{pos}>$ and a line tangent to the local iso-contour (see \cite{soler2013}, \cite{soler2017}, \cite{fissel2019}), which is equivalent to the angle between the polarization direction $\widehat{E}$ and the intensity gradient $\bigtriangledown I$: $\phi = \arctan (| \bigtriangledown I \times \widehat{E} |, \bigtriangledown I \cdot \widehat{E})$. Statistical measures of HROs have been quantified using the histogram shape parameter (e.g. \cite{soler2013}, \cite{soler2017}), $\zeta=\frac{A_{0}-A_{90}}{A_{0}+A_{90}}$, where $A_{0}$ is a measure of the total number of points in the quartile $0^{\circ} < \phi < 22^{\circ}.5$, and $A_{90}$ a measure of the same quantity in the quartile $67^{\circ}.5 < \phi < 90^{\circ}$. Smoother and more accurate definitions of $\zeta$ have been investigated by \cite{jow2018}. For this, the Rayleigh statistic $Z$ is used to test whether a given set $\{\theta_{i}\}$ of $n$ independent angles is uniformly distributed within the range $[0, 2\pi]$. Using the relation $\theta = 2 \phi$, where $\phi$ is the relative orientation angle discussed above, \cite{jow2018} showed that the Projected Rayleigh Statistic (PRS) $Z_{x} = \sum_{i=1}^{n_{\rm ind}} \cos \theta_{i}/\sqrt{n_{\rm ind}/2}$, where $n_{\rm ind}$ is the number of independent data samples in the map, can be used to test for a preference for perpendicular or parallel alignment between the magnetic field orientations and the iso-contours.
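The relative orientation angle and the PRS above can be sketched as follows (the folding of $\phi$ into $[0^{\circ}, 90^{\circ}]$ is an assumed convention, motivated by the pseudo-vector nature of the polarization data):

```python
import numpy as np

def relative_angle(grad_I, e_hat):
    """phi = arctan(|grad_I x e_hat|, grad_I . e_hat), folded to [0, 90] degrees.

    grad_I, e_hat : arrays of 2-D vectors, last axis = (x, y) components.
    """
    cross = np.abs(grad_I[..., 0] * e_hat[..., 1] - grad_I[..., 1] * e_hat[..., 0])
    dot = np.abs(grad_I[..., 0] * e_hat[..., 0] + grad_I[..., 1] * e_hat[..., 1])
    return np.degrees(np.arctan2(cross, dot))

def prs(phi_deg):
    """Projected Rayleigh statistic Z_x = sum(cos 2*phi_i) / sqrt(n/2)."""
    theta = np.radians(2.0 * np.asarray(phi_deg, dtype=float))
    return np.cos(theta).sum() / np.sqrt(theta.size / 2.0)
```

With these conventions, a perfectly parallel sample of $n$ independent angles gives $Z_{x} = \sqrt{2n}$ and a perfectly perpendicular one gives $Z_{x} = -\sqrt{2n}$.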
The variables $\zeta$ or $Z_{x}$ are often calculated on samples of data lying in different intensity ranges of the material tracer map, and used to explore variations of the relative orientation between the magnetic field structures and the ISM morphological structures as a function of column density or number density. Figure~\ref{fig-2} (bottom-left panel) shows the PRS obtained by \cite{fissel2019} using several molecular tracers. For each tracer, $Z_{x} > 0$ ($< 0$) indicates that the $I$ structures preferentially align parallel (perpendicular) to $<\widehat{B}_{pos}>$. \textbf{Results:} The main trend found from the analysis of HROs derived with lines tangent to local iso-contours is that the POS magnetic field orientations change from mostly parallel (or not clearly defined) to perpendicular with respect to the molecular cloud structures probed with a given material tracer (\cite{soler2017}, \cite{planck_ir_xxxv}, \cite{fissel2019}). This corresponds to a change of orientation from lower to higher column densities $N_{\rm H}$. In Vela C the transition is estimated to occur at a molecular hydrogen number density of approximately $n_{\rm H} \approx 10^{3}\,\rm cm^{-3}$ (\cite{fissel2019}). This local iso-contours approach has been tested with simulations (e.g. \cite{soler2013}). In their study, \cite{alina2019} first perform a component separation analysis of the magnetic fields in the diffuse ISM and in higher column density regions, and use image analysis techniques to extract filaments and clumps embedded in different background column densities, i.e. showing density contrasts that vary with their environment. Their analysis of the HROs obtained between filaments, embedded clumps, and internal and background magnetic field orientations leads to a more complex picture than in other studies. Overall, their results support the possibility of magnetic fields strong enough to influence the formation of molecular clouds and also of their embedded clumps.
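The binned analysis described above can be sketched with the histogram shape parameter $\zeta$ evaluated in equal-count column density bins. This is an illustrative sketch only; angles are assumed in degrees, and the gradient estimation and binning scheme are analysis choices.

```python
import numpy as np

def hro_shape_parameter(phi_deg):
    """zeta = (A0 - A90) / (A0 + A90) for relative-orientation
    angles phi in degrees, each in [0, 90]."""
    phi = np.asarray(phi_deg)
    A0 = np.count_nonzero(phi < 22.5)    # nearly parallel
    A90 = np.count_nonzero(phi > 67.5)   # nearly perpendicular
    return (A0 - A90) / (A0 + A90)

def zeta_vs_column_density(phi_deg, NH, n_bins=5):
    """zeta evaluated in equal-count N_H bins, lowest to highest."""
    order = np.argsort(NH)
    return [hro_shape_parameter(np.asarray(phi_deg)[idx])
            for idx in np.array_split(order, n_bins)]
```

A sign change of $\zeta$ from positive to negative with increasing $N_{\rm H}$ reproduces the parallel-to-perpendicular transition discussed above.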
\section{Polarization-Intensity Gradient Relation (P-IGR)} \label{sec-3} This method was designed by \cite{koch2012} to study star-forming regions. In the case of negligible viscosity and infinite conductivity (the ideal MHD case), the force equation is given by the following expression $\rho \left( \frac{\partial }{\partial t} + \mathbf{v} . \bigtriangledown \right) \mathbf{v} = - \bigtriangledown \left( P + \frac{B^{2}}{8 \pi} \right) - \rho \bigtriangledown \phi + \frac{1}{4 \pi} (\mathbf{B} . \bigtriangledown ) \mathbf{B},$ where $\rho$ and $\textbf{v}$ are the dust density and velocity, respectively, $\textbf{B}$ is the magnetic field, $P$ is the hydrostatic dust pressure, $\phi$ is the gravitational potential resulting from the total mass contained in the region of interest, and $\bigtriangledown$ denotes the gradient. The left-hand side of the equation represents the resulting motion of the dust produced by the right-hand side terms, which are the gradients of the hydrostatic pressure, the magnetic pressure and the gravitational potential, as well as the magnetic field tension (last term). The force equation can be transformed and, under several assumptions, the magnetic field strength can in principle be derived geometrically at each position of a polarization map, expressed as: $B = \sqrt{\frac{\rm sin(\psi)}{\rm sin(\alpha)} ( \bigtriangledown P + \rho \bigtriangledown \phi ) 4 \pi R}$, where $\alpha = {\rm P.A.} - 90^{\circ}$ and $\psi$ is the angle associated with the gradient orientation. Figure~\ref{fig-2} (right panel) illustrates the terms displayed in this equation. The red and blue pseudo-vectors displayed in the figure can be compared to those displayed in Figure~\ref{fig-1} (left panel), where the gradients in intensity are estimated assuming central symmetry towards the brightest pixel in the map; the method can be generalized to arbitrary cloud shapes.
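The geometric expression for $B$ can be evaluated position by position once the angles and gradients have been measured. A schematic sketch in cgs units, with all inputs (angles in radians, pressure and potential gradients, local curvature radius $R$) assumed to be estimated elsewhere from the maps:

```python
import numpy as np

def b_field_pigr(psi, alpha, grad_P, rho_grad_phi, R):
    """P-IGR field-strength estimate (cgs units):
    B = sqrt( (sin psi / sin alpha) * (grad_P + rho_grad_phi) * 4*pi*R ),
    where alpha = P.A. - 90 deg and psi is the gradient-orientation angle."""
    return np.sqrt(np.sin(psi) / np.sin(alpha)
                   * (grad_P + rho_grad_phi) * 4.0 * np.pi * R)
```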
An important outcome of the method is the magnetic field significance: $\Sigma_{B} \equiv \left( \frac{\rm sin(\psi)}{\rm sin(\alpha)}\right)_{local} = \left( \frac{F_{B}}{\mid F_{G}+F_{P} \mid} \right)_{local}$, which gives a direct physical meaning to the factor $\frac{\rm sin(\psi)}{\rm sin(\alpha)}$ derived geometrically from a map. \textbf{Results:} The method provides a quantification of the local significance of the magnetic field force compared to the other forces in a model-independent way. In W51 e2 it allows derivation of the azimuthally averaged radial profile $B (r) \sim r^{-1/2}$ (\cite{koch2012}). The potential of the method is explored in additional works (e.g. \cite{koch2018} and references therein). In their study, \cite{tang2019} propose that in G34 the varying relative importance of magnetic field, gravity and turbulence from large to small scales drives and explains the different fragmentation types seen at subparsec scales (no fragmentation, aligned fragmentation and clustered fragmentation). \section{Conclusions and perspectives} \label{conclusions} The methods discussed above are complementary to each other. If submm polarization maps are available at different resolutions, they are suited to explore the role of magnetic fields, turbulence and gravity on different spatial scales. Their combination is starting to give insights into the interplay between magnetic fields, gravity and turbulence (e.g. \cite{tang2019}), and they are promising tools to shed light on the physics of hub-filament systems (e.g. \cite{andre2019}) detected at subparsec scales in molecular clouds. In addition to these methods, estimates of the mean inclination with respect to the LOS of the large-scale ordered magnetic field, considered as the main factor regulating the mean level of polarization in a map, can help guide the overall interpretation. In this regard, a method first proposed by \cite{poidevin2013} has been further explored by other authors (e.g. \cite{chen2019}). Multi-wavelength submm polarimetry of a region can give insights into dust grain polarization properties and also help to put constraints on the interpretation of the maps. On this matter we refer the reader to \cite{and2015}, \cite{vaillancourt2012}, \cite{shariff2019} and references therein. \textbf{Acknowledgments:} FP acknowledges support from the Spanish Ministerio de Economia y Competitividad (MINECO) under grant numbers ESP2015-65597-C4-4-R and ESP2017-86852-C4-2-R.
\section{Introduction} Bayesian inference of phylogeny has had a great impact on evolutionary biology. It is believed that all species are related through a history of common descent \cite{Huelsenbeck2001}; that is to say, the reason we have such diverse wildlife, including human beings, is evolution. We can represent the process of evolution, and address questions such as the phylogeny of life, with a phylogenetic tree (see Figure \ref{fig:my_label3}). \begin{figure} \centering \includegraphics[width=0.5\textwidth]{figure3.png} \caption{\label{fig:my_label3} The evolutionary phylogenetic tree of the carnivores. The data in \cite{treegene} are used to generate the tree.} \end{figure} As a matter of fact, any study of DNA sequences or protein patterns taken from a species or an individual organism can start with a phylogenetic analysis. Language change, which is often regarded as analogous to biological evolution, can similarly borrow phylogenetic analysis to discover the process by which languages change. It is accepted that all human languages descend from proto-languages, the ancestors of the modern languages. For example, the Germanic languages (like English, German, Dutch), the Romance languages (like Spanish, French and Portuguese) and the Slavic languages (like Russian, Bosnian and Polish) are branches of the Indo-European family, which developed from Proto-Indo-European. Therefore, we can borrow computational methods from biology and evolution and apply them to language phylogeny.\\ Beyond Natural Language Processing (NLP), more and more branches of linguistics are acquiring digital datasets and using computational methods to solve their problems. Historical linguistics in particular has recently seen dramatically increasing digital datasets. The availability of new data, and studies of languages from different language families, have started to challenge the traditional way of carrying out historical linguistics.
The comparative method has been the core method for linguistic reconstruction for the past 200 years, and is based on manually identifying systematic phonetic correspondences between many words in pairs of languages \cite{1}. Because different cultures have different writing systems and different ways of spelling their words, the International Phonetic Alphabet was created to help linguists analyze different languages in parallel. However, there are too few scholars, i.e., historical linguists, to analyze the world's more than 7500 languages \cite{1,2}, including thousands of languages that have not been studied and are facing extinction. Computational methods can therefore help people study such languages faster and offer possible solutions. The task of phylogenetic inference of human languages is composed of two parts: \textbf{cognate set detection} and \textbf{phylogenetic tree construction}. Cognate set detection automatically groups language words with similar or plausibly related evolutionary patterns into one cluster. The phylogenetic tree construction task builds trees given the information from the clusters. In the following, the paper is divided into two main steps: the way I implement cognate detection is discussed in section 2. After that, I use the cluster data to run the phylogenetic inference program and build the relationship trees, as described in section 3. I show and evaluate the results in section 4, and conclude in section 5. \section{Cognate Detection} A great number of algorithms and mechanisms\footnote{All the algorithms discussed here are implemented in LingPy, a Python 3 historical linguistics package, which is used in this project.} for automatic cognate detection in historical linguistics have been proposed and tested by many linguists and computer scientists \cite{1,3,4,8,11}.
In detail, many of these works are very similar to each other and consist of two main stages. In the first stage, words with the same meaning are extracted from the wordlists of different languages, from either the same or different language families, and compared using a distance matrix that quantifies how similar they are. In the second stage, a flat clustering algorithm or a network partitioning algorithm is used to partition all words into cognate sets, taking the matrix of word-pair distances as its basis \cite{3, 4}. However, the methods used to compare the word pairs differ considerably, in that researchers may pre-process their language datasets differently, or even use different algorithms for the comparison and clustering tasks. For example, one might intuitively start automated word comparison by computing distances between words, as with word embeddings in NLP such as GloVe \cite{5}, which capture semantic similarities between two words. In computational historical linguistics, phonetic segments are used instead to calculate how close two words are, because the semantics of a single word does not change as readily as its phonetics. The problem is that, since the program involves data pre-processing, the whole dataset is traversed twice, and the computation becomes a serious burden once the dataset grows beyond about 100 languages. Consequently, people began to look for faster methods. \begin{table}[] \centering \caption{Some examples of consonants in IPA, also used in the experiments.
\label{tab:table1}} \begin{tabular}{||c | c||} \hline Consonant type & Int'l Phonetic Alphabet (IPA)\\ \hline \hline velars & \textbf{k, g, x} \\ dentals & \textbf{t, d, \textbaro}\\ liquids & \textbf{r, l, \textinvscr}\\ nasals & \textbf{n, m, \textltailm, \textscn}\\ \hline \end{tabular} \end{table} \subsection{Consonant Class Matching Algorithm (CCM)} The first linear-time method was proposed by \cite{6}, and later modified by \cite{8}. The algorithm compares word pairs by their \textit{consonant classes}. A consonant class is hereby understood as a rough partitioning of speech sounds into groups that are conveniently used by historical linguists when comparing languages \cite{3}. Table \ref{tab:table1} shows some consonant classes in the International Phonetic Alphabet (IPA) (for more details about IPA, please refer to \cite{7}). After getting the IPA transcriptions of the word pairs from the wordlist, the algorithm determines whether two words are cognate by judging whether their first two consonant classes match. However, since the algorithm only compares the first two consonant classes, its accuracy is relatively low. I see two reasons for this: \textbf{(a)} In linguistics, the number of possible sounds in human languages, excluding artificial languages, runs into the thousands \cite{4}. It is unrealistic to enroll all the sounds into the system: if we enrolled all the sounds in the algorithm to simulate the language change process, we would need a new matrix containing the probabilities of each sound switching to every other sound, which is very time-consuming to compute. \textbf{(b)} Comparing the first two consonant classes is not sufficient to determine whether the two words in a pair are derived from the same cognate word. \subsection{Edit Distance} The Edit Distance approach uses the normalized Levenshtein distance \cite{9}, a concept from information theory.
It measures the difference between two sequences by calculating the minimum number of character edits, such as insertions and deletions, which happen to be two basic kinds of phonetic change. The normalized distance can be used as a probability-like estimate of how likely one word is to change into the other. \subsection{Sound Class Algorithm (SCA)} This algorithm is designed for pairwise and multiple alignment analyses \cite{1}. It not only takes an expanded sound-class system into account, but also considers the prosodic aspects of each word. As a result, it is able to align within morpheme boundaries instead of individual sound segments, provided the morpheme information is available as prior knowledge. \subsection{LexStat} The previous three methods use the same strategy to put words from different languages into clusters, i.e., the UPGMA clustering algorithm, while LexStat uses language-specific scoring schemes derived from a Monte-Carlo permutation of the data \cite{4}. The word pairs from different languages are first aligned and scored, and the Monte-Carlo permutation shuffles the word pairs according to their scores. The scores can be calculated from the frequencies of daily use by native speakers. Thus, a distribution of sound-correspondence frequencies is generated. This distribution is then compared with the attested distribution and converted into a language-specific scoring scheme for all word pairs.\\ Considering both the advantages and disadvantages of the algorithms above, in this project I use a modified method: \textit{sound-class based skip-grams with bipartite networks (BipSkip)}. The whole procedure is quite straightforward and can be divided into three steps. \textbf{First step:} the words and their skip-grams form the two node sets of a bipartite network. The \textbf{second step} is optional and refines the bipartite network.
Before running the program, a threshold can be supplied, which determines whether the program should delete skip-gram nodes linked to fewer word nodes than the threshold itself. In my experiments, even when no threshold was given as a parameter, the algorithm still gave the same answer, but with a longer running time. In the \textbf{last step}, the final bipartite graph is projected onto a monopartite graph and partitioned into cognate sets with the help of graph partitioning algorithms; here I use the Infomap algorithm \cite{10}. For comparison with this method, I also use CCM and SCA for distance measurement in this experiment; the UPGMA algorithm is used for clustering in these two cases. \section{Bayesian Phylogenetic Inference} Methods for Bayesian phylogenetic inference in evolutionary biology and historical linguistics are all based on the following Bayes rule \cite{3, 13}: \[f(\Lambda |X) = \frac{f(X|\Lambda)f(\Lambda ) }{f(X)}\] or \[Pr(Tree |Data) = \frac{Pr(Data|Tree)\times Pr(Tree)}{Pr(Data)}\] where \(f\) denotes a probability density function, $\Lambda$ consists of the tree topology $\tau$, the branch length vector of the tree $T$ and the substitution model parameter $\theta$; $X$ is a data matrix of dimension $N \times K$, whose $N$ rows correspond to the different languages and whose $K$ columns correspond to the cognate sets of a language family. Figure \ref{fig:my_label} shows an example of the matrix. As we can see, the data matrix is a binary matrix with elements $x_{ij}$: $x_{ij} = 1$ means language $i$ can be classified into cognate set $j$, while $x_{ij} = 0$ means language $i$ does not belong to cluster $j$. Based on the shape of the data matrix, to get the parameters ($\Lambda = (\tau, T, \theta)$) for tree generation, we need to sum over the whole matrix and would have to consider all \(\frac{(2N-3)!}{2^{N-2}(N-2)!}\) possible topologies.
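The combinatorial explosion behind this count can be made concrete: the number of distinct rooted binary tree topologies on $N$ labelled leaves, $\frac{(2N-3)!}{2^{N-2}(N-2)!}$, can be computed directly. A small illustrative sketch:

```python
from math import factorial

def n_rooted_topologies(n):
    """Number of distinct rooted binary tree topologies on n labelled
    leaves: (2n-3)! / (2**(n-2) * (n-2)!), valid for n >= 2."""
    return factorial(2 * n - 3) // (2 ** (n - 2) * factorial(n - 2))
```

Already for 10 languages this gives 34,459,425 topologies, which is why exhaustive enumeration is hopeless and sampling methods are needed.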
\begin{figure} \centering \includegraphics[width=0.35\textwidth]{figure1.png} \caption{\label{fig:my_label} Data matrix example} \vspace{-3mm} \end{figure} This method is practical only for small datasets, and the calculation already becomes difficult once more than 5 languages are involved \cite{13}. In addition, although the posterior probability\footnote{The posterior probability represents how likely the tree is to be correct. The tree with the largest posterior probability can be treated as the best estimate of the phylogenetic inference tree.} is very easy and convenient to formulate via the Bayesian formula from the prior probability $Pr(Tree)$ and the likelihood $Pr(Data|Tree)$, computing this probability to obtain the best estimate of the final tree requires the machine to evaluate and compare all possible trees and, for each tree, all combinations of branches with different lengths.\\ The Metropolis-Hastings (MH) algorithm \cite{hastings1970}, one of the Markov Chain Monte Carlo techniques, is a good tool to overcome this computational difficulty: it avoids summing over all of the topologies by sampling from the posterior probability \(f(\Lambda |X)\). The likelihood at each sampled parameter value is calculated by the pruning algorithm. The pruning algorithm, introduced together with the F81 model \cite{Felsenstein1981}, a Markov model of DNA evolution, describes evolving DNA as a string of four discrete states, i.e., G, C, A, T. Fortunately, a language model is very similar to a DNA model in that both are discrete models; in the language case, we simply apply the model to a binary dataset.
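The core accept/reject rule of the MH sampler can be sketched independently of the tree-specific details. This is a generic sketch with a symmetric proposal; `log_post` and `propose` are placeholders for the tree posterior (evaluated via the pruning algorithm) and the tree/branch-length proposal moves.

```python
import math
import random

def mh_step(state, log_post, propose, rng=random):
    """One Metropolis-Hastings step with a symmetric proposal:
    accept the candidate with probability min(1, post(cand)/post(state)),
    computed in log space for numerical stability."""
    cand = propose(state)
    log_ratio = log_post(cand) - log_post(state)
    if math.log(rng.random()) < log_ratio:
        return cand   # accepted
    return state      # rejected: keep the current state
```

Iterating `mh_step` yields a chain whose stationary distribution is the posterior, so the best tree can be estimated from the sampled states.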
\begin{figure} \centering \includegraphics[width=0.5\textwidth]{algorithm.png} \end{figure} \subsection{Markov Chain Monte Carlo (MCMC)} TBA \section{Experiments} \subsection{Settings and Implementation} It is not easy to determine in advance which kind of skip-gram and sound-class system will generate satisfying results, so I design my experiments as follows. \textbf{(1)} For training, I use the datasets proposed by \cite{11}, listed in Table \ref{tab:my_label6}, and compute skip-grams of length 4. Although some languages appear in both the training and testing datasets, such as Chinese in Sino-Tibetan and Austronesian, I manually remove the repeated data from both datasets, and the concepts in the two datasets do not overlap. \textbf{(2)} For the sound classes, I use the SCA sound classes \cite{11}. \textbf{(3)} The threshold in step 2 is set to 0.2 (20\%). \textbf{(4)} The evaluation metric is B-Cubed \cite{12}. The F-score based on the configuration above is \textbf{85.4\%} with connected-components partitioning and \textbf{85.2\%} with the Infomap partitioning algorithm. \begin{table}[] \centering \caption{\label{tab:my_label6} Training data from \cite{11}} \begin{tabular}{||l c c c||} \hline \textbf{Dataset} & \textbf{Concepts} & \textbf{Languages} & \textbf{Cognates} \\ \hline \hline Austronesian & 210 & 20 & 2864 \\ Bai & 110 & 9 & 285 \\ Chinese & 140 & 15 & 1189 \\ Japanese & 200 & 10 & 460 \\ Ob-Ugrian & 110 & 21 & 242 \\ \hline \end{tabular} \end{table} \subsection{Evaluation Methods} TBA \subsection{Results and Discussion} \textbf{Cognate Detection} Table \ref{tab:my_label2} shows the statistics of the test data developed by \cite{1}. The results of the BipSkip approach for cognate detection are shown in Table \ref{tab:my_label3}, along with the results of using SCA and CCM.
As shown in the tables, the BipSkip approach is not the quickest method, although it is more user-friendly. CCM is surprisingly the fastest method, with slightly higher precision than the BipSkip approach, especially on Austronesian and Sino-Tibetan. The Indo-European results stay almost the same across methods; my guess is that the languages in the Indo-European family are more phonetically similar to each other than the languages within the other families are, so the three methods perform almost the same. The SCA algorithm is not recommended here, in that it takes the longest time and the most space, since I need to prepare the expanded sound classes and some morphological features. \begin{table}[h] \centering \caption{\label{tab:my_label2} Test data from \cite{1}} \begin{tabular}{||l c c c||} \hline \textbf{Dataset} & \textbf{Concepts} & \textbf{Languages} & \textbf{Cognates} \\ \hline \hline Austro-Asiatic & 200 & 58 & 1872 \\ Austronesian & 210 & 45 & 3804 \\ Indo-European & 208 & 42 & 2157 \\ Pama-Nyungan & 183 & 67 & 6634 \\ Sino-Tibetan & 110 & 64 & 1402 \\ \hline \end{tabular} \end{table} \begin{table}[h] \centering \caption{\label{tab:my_label3} Three methods to extract cognates on the test data.} \begin{tabular}{||l l l l||} \multicolumn{4}{c}{\textbf{Infomap analysis on test data}}\\ \hline \textbf{Dataset} & \textbf{Precision} & \textbf{Recall} & \textbf{F-score} \\ \hline \hline Austro-Asiatic & 0.69 & 0.78& 0.7323 \\ Austronesian & 0.71 & 0.74 & 0.7233 \\ Indo-European & 0.81 & 0.72 & 0.7616 \\ Pama-Nyungan & 0.53 & 0.69 & 0.6321 \\ Sino-Tibetan & 0.64 & 0.62 & 0.6033 \\ \hline \textsc{Total} & 0.688 & 0.732 & 0.7066\\ \hline \multicolumn{4}{l}{Running Time: 18.45s} \\ \multicolumn{4}{l}{} \\ \multicolumn{4}{c}{\textbf{SCA analysis on test data}}\\ \hline \textbf{Dataset} & \textbf{Precision} & \textbf{Recall} & \textbf{F-score} \\ \hline \hline Austro-Asiatic & 0.72 & 0.80& 0.7601 \\ Austronesian & 0.82 & 0.74 &
0.7748 \\ Indo-European & 0.89 & 0.74 & 0.8063 \\ Pama-Nyungan & 0.59 & 0.85 & 0.6930 \\ Sino-Tibetan & 0.73 & 0.46 & 0.5614 \\ \hline \textsc{Total} & 0.75 & 0.7180 & 0.7191\\ \hline \multicolumn{4}{l}{Running Time: 240.050s} \\ \multicolumn{4}{l}{} \\ \multicolumn{4}{c}{\textbf{CCM analysis on test data}}\\ \hline \textbf{Dataset} & \textbf{Precision} & \textbf{Recall} & \textbf{F-score} \\ \hline \hline Austro-Asiatic & 0.79 & 0.64& 0.7070 \\ Austronesian & 0.88 & 0.58 & 0.6963 \\ Indo-European & 0.89 & 0.64 & 0.7484 \\ Pama-Nyungan & 0.64 & 0.82 & 0.7194 \\ Sino-Tibetan & 0.78 & 0.35 & 0.4831 \\ \hline \textsc{Total} & 0.7960 & 0.6060 & 0.6708\\ \hline \multicolumn{4}{l}{Running Time: 3.247s} \end{tabular} \vspace{-3mm} \end{table} \textbf{Phylogenetic Inference} The results of phylogenetic inference with the modified MH algorithm are shown in Table \ref{tab:my_label5}. I designed a series of experiments, changing the settings, and obtained some good results. I set the initial temperature to $T_{0} = \{10, 20, 30, 40, 50, 60, 70, 80, 90, 100\}$. For each run, I record the number of iterations and the time it takes to finish running the program. Table \ref{tab:my_label4} gives the ground-truth results, tested on gold-standard cognates from Glottolog. It is hard to determine which of the three methods outperforms the other two. Overall, Indo-European takes the shortest time and the fewest iterations, since with all three methods this language family always has the highest accuracy. In addition, it is easier to build phylogenetic trees from the cognate sets produced by the automatic approaches than from the manually annotated standard cognate sets; as the results show, the automatic methods clearly shorten the time of automatic phylogenetic inference.
\begin{table}[h] \centering \caption{\label{tab:my_label4} Results for gold-standard cognates from \cite{3}} \begin{tabular}{|| l c c c c ||} \hline Family & $T_{0}$ & GQD & \#iterations & Time \\ \hline \hline Austro-Asiatic & 90 & 0.058 & 26900 & 476.113 \\ Austronesian & 80 & 0.0389 & 21280 & 123.167 \\ Indo-European & 10 & 0.0135 & 2260 & 16.713 \\ Pama-Nyungan & 10 & 0.061 & 22600 & 605.319 \\ Sino-Tibetan & 50 & 0.0475 & 25700 & 206.952 \\ \hline \end{tabular} \end{table} \begin{table}[] \centering \caption{\label{tab:my_label5} Phylogenetic inference on the cognate sets from the three methods.} \begin{tabular}{||l c c c c||} \multicolumn{4}{c}{\textbf{Test on Infomap cognate}}\\ \hline Family & $T_{0}$ & GQD & \#iterations & Time \\ \hline \hline Austro-Asiatic & 80 & 0.0245& 21248 & 310.403\\ Austronesian & 10 & 0.0875 & 9040 & 82.443\\ Indo-European & 100 & 0.0690 &2710 & 28.691\\ Pama-Nyungan & 70 & 0.0159 & 21120 & 662.447\\ Sino-Tibetan & 80 & 0.3268 & 10640 & 129.903\\ \hline \multicolumn{4}{l}{} \\ \multicolumn{4}{c}{\textbf{Test on SCA cognate}}\\ \hline Family & $T_{0}$ & GQD & \#iterations & Time \\ \hline \hline Austro-Asiatic & 60 & 0.0568& 26523 & 340.605\\ Austronesian & 10 & 0.0356 & 8560 & 74.230\\ Indo-European & 10 & 0.469 &3205 & 35.691\\ Pama-Nyungan & 80 & 0.0658 & 20136 & 302.657\\ Sino-Tibetan & 50 & 0.2033 & 20365 & 263.219\\ \hline \multicolumn{4}{l}{} \\ \multicolumn{4}{c}{\textbf{Test on CCM cognate}}\\ \hline Family & $T_{0}$ & GQD & \#iterations & Time \\ \hline \hline Austro-Asiatic & 100 & 0.056& 23580 & 310.403\\ Austronesian & 80 & 0.0785 & 23650 & 309.653\\ Indo-European & 10 & 0.203 &2510 & 26.723\\ Pama-Nyungan & 70 & 0.0023 & 25640 & 369.002\\ Sino-Tibetan & 50 & 0.0569 & 10520 & 185.367\\ \hline \end{tabular} \end{table}\\ \vspace{-3mm} \section{Conclusion} Admittedly, the results are not outstanding. As we can observe from the data, the accuracy for some of the languages is only slightly over 50\%.
Among the five language families in the testing data, Indo-European has higher accuracy than the other four, owing to the similar phonetic features of its languages. Also, languages from places where native peoples lived and were later conquered by immigrants, for example the islands of the South Pacific, Hawaii, Indonesia, etc., show clearly higher accuracy; they are easy to cluster into cognate sets and to relate to each other. By contrast, the Pama-Nyungan family, whose native languages are mainly spoken on the Australian continent, scores surprisingly lower than the other South Pacific island languages.\\ This experiment shows that these methods can help historical linguists analyze language development and change, but the results are basically not very accurate. How can we solve this problem? The current datasets only include the main phonetic classes; I think it is necessary to incorporate background knowledge of phonetic change, such as the \textbf{G}reat \textbf{V}owel \textbf{S}hift, so that the machine can recognize the probability of a change from phoneme $A$ to phoneme $B$. \FloatBarrier \ifCLASSOPTIONcompsoc
\section{Introduction} Supersymmetric extension of the standard model (SM) is one of the candidates for TeV-scale physics, because supersymmetry (SUSY) can stabilize a large hierarchy. The minimal supersymmetric standard model (MSSM) is quite interesting because of its minimality, and various phenomenological aspects have been studied. However, from a theoretical point of view, it has a problem. The MSSM includes a supersymmetric mass term for the Higgs superfields $H_u$ and $H_d$, i.e. the so-called $\mu$-term, $\mu H_u H_d$, in the superpotential. It must be comparable with the soft SUSY breaking masses in order to successfully realize electroweak symmetry breaking. However, the $\mu$-term and the soft SUSY breaking terms, in general, have origins different from each other. Why are these comparable with each other? That is the so-called $\mu$-problem~\cite{Kim:1983dt}. The next-to-minimal supersymmetric standard model (NMSSM) is an extension of the MSSM by adding a singlet superfield $S$~\cite{Fayet:1974pd} (see Ref.~\cite{Ellwanger:2009dp} for a review). The NMSSM superpotential then contains $\lambda S H_u H_d$. Also, we impose the $Z_3$ symmetry, which forbids dimensionful parameters in the superpotential. Dimensionful parameters appear only as soft SUSY breaking parameters. Thus, the vacuum expectation values (VEVs) of the Higgs and singlet fields are determined by the soft SUSY breaking terms. That is, the $\mu$-problem is solved, and the effective $\mu$-term is generated as $\mu= \lambda \langle S \rangle$. In the NMSSM, the Higgs sector as well as the neutralino sector has a richer structure than in the MSSM, because of the inclusion of the singlet superfield $S$. Also, the NMSSM can raise the SM-like Higgs boson mass. At any rate, heavier superpartner masses such as ${\cal O}(1)-{\cal O}(10)$ TeV may be favorable. We may need fine-tuning to realize a little hierarchy between the electroweak scale and the SUSY breaking scale.
However, such fine-tuning can be improved in a certain mediation mechanism, e.g. in the TeV-scale mirage mediation scenario \cite{Kobayashi:2012ee}.\footnote{ See \cite{Choi:2005hd} for phenomenological aspects of the MSSM in the TeV-scale mirage mediation scenario and \cite{Choi:2004sx,Choi:2005uz,Endo:2005uy} for generic mirage mediation.} The $Z_3$ symmetry is important to forbid dimensionful parameters in the superpotential and to solve the $\mu$-problem. However, it is also problematic. The VEVs of the Higgs scalars and the singlet spontaneously break the $Z_3$ symmetry. In general, when a discrete symmetry is spontaneously broken, domain walls appear. They would dominate the energy density of the Universe and change the standard cosmology drastically. Thus, the exact $Z_3$ symmetry and the resulting stable domain walls are problematic \cite{Zeldovich:1974uw}. See \cite{Abel:1995wk} for the NMSSM case. Here, we assume that the $Z_3$ symmetry is broken explicitly, but that its breaking size is much smaller than the electroweak scale. Then, the domain walls are unstable. They may dominate the energy density of the Universe for a certain period but eventually decay. This has important effects on the thermal history (see e.g. Ref.~\cite{Kadota:2015dza}). In this paper, we study implications of unstable domain walls in the NMSSM. In general, SUSY models have other problems due to the moduli, the gravitino and the lightest superparticle (LSP). For example, in the gravity mediation scenario, the moduli and gravitino masses would be comparable with the masses of the superpartners in the visible sector. When those are of ${\cal O}(1)-{\cal O}(10)$ TeV, they spoil successful big bang nucleosynthesis (BBN); these are the so-called moduli problem and gravitino problem. They could be diluted by the decay of domain walls~\cite{Kawasaki:2004rx}. Furthermore, even if the moduli and gravitino are heavier than the superpartners in the visible sector, that would lead to another problem.
Indeed, in the mirage mediation mechanism~\cite{Choi:2004sx}, the gravitino is heavier by ${\cal O}(8\pi^2)$ than the superpartners in the visible sector, and the modulus is also heavier by ${\cal O}(8\pi^2)$ than the gravitino. In such a case, the moduli decay into gravitinos with a large rate and the gravitino decays into the LSP. This overproduces the LSP non-thermally~\cite{Endo:2006zj}. We need to dilute the moduli, the gravitino and the LSP. Also, in some other scenarios, the LSP, such as a Bino-like neutralino, has a large thermal relic density. The decay of domain walls, which was mentioned above, can produce a large entropy and dilute the moduli and dark matter candidates in the NMSSM. This paper is organized as follows. In section 2, we study the domain wall solution in the NMSSM. In section 3, we study the cosmological evolution of unstable domain walls. In sections 4 and 5, we study implications of the domain wall decay in two scenarios. Section 6 is devoted to conclusion and discussion. \section{Domain wall solution in the NMSSM} \subsection{Domain wall solution in the $Z_3$ symmetric NMSSM} We briefly review a domain wall solution of the Higgs potential in the $Z_3$ symmetric NMSSM.\footnote{ The full scalar potential includes the superpartners of quarks and leptons, and it has several unrealistic vacua. We assume that the SUSY breaking parameters taken in the full potential satisfy the conditions to avoid such unrealistic vacua. (See e.g. Ref.~\cite{Kanehata:2011ei} and references therein.)} We adopt the convention for $H_u$, $H_d$ and $S$ that a superfield and its lowest component are written by the same letter. The superpotential terms including only $H_u$, $H_d$ and $S$ are written as \begin{eqnarray} W_{\rm Higgs}=\lambda S H_u H_d+\frac{\kappa}{3}S^3, \end{eqnarray} where the $Z_3$ symmetry is imposed as mentioned. 
The scalar potential is written as \begin{eqnarray} V_{\rm Higgs}=\sum_{\phi_i=H_u, H_d, S} \left|\frac{\partial W}{\partial \phi_i} \right|^2 + V_D + V_{{\rm soft}}, \end{eqnarray} where $V_D$ is the D-term potential due to $SU(2)\times U(1)_Y$ and $V_{\rm soft}$ denotes the soft SUSY breaking terms, \begin{eqnarray} V_{{\rm soft}} = m_{H_u}^2 |H_u|^2+ m_{H_d}^2 |H_d|^2 + m_S^2 |S|^2 + \left( \frac13 \kappa A_{\kappa}S^3 + \lambda A_{\lambda} H_u H_dS+ h.c. \right). \end{eqnarray} Only the neutral components develop their VEVs, and their scalar potential is written explicitly as \begin{eqnarray} V_{\rm Higgs} &=& \left|\kappa S^2 - \lambda H_u^0H_d^0 \right|^2+m_{H_u}^2 |H_u^0|^2+m_{H_d}^2 |H_d^0|^2+m_S^2 \left|S \right|^2 +\left| \lambda \right|^2\left|S \right|^2(|H_d^0|^2+|H_u^0|^2) \nonumber \\ & & +\frac{ g^2 +g'^2 }{8} \left( |H_u^0|^2-|H_d^0|^2 \right) ^2 +\left( \frac13 \kappa A_{\kappa}S^3-\lambda A_{\lambda} H_u^0 H_d^0 S+ h.c. \right), \end{eqnarray} where $g$ and $g'$ are the $SU(2) $ and $U(1)_Y$ gauge couplings, respectively; the relative sign of the $A_\lambda$ term follows from the $SU(2)$ contraction $H_u H_d = H_u^+ H_d^- - H_u^0 H_d^0$. Here, we assume that all of $\lambda$, $\kappa$, $A_\lambda$ and $A_\kappa$ are real. The potential minima are obtained by analyzing the stationary conditions, \begin{eqnarray} \frac{\partial V_{\rm Higgs}}{\partial H_u^0} = \frac{\partial V_{\rm Higgs}}{\partial H_d^0} = \frac{\partial V_{\rm Higgs}}{\partial S} =0, \end{eqnarray} and these VEVs lead to the successful electroweak symmetry breaking, where the effective $\mu$ term is obtained as $\mu = \lambda \langle S \rangle$. Since the scalar potential has the $Z_3$ symmetry, three vacua are degenerate, \begin{eqnarray} \left(\left<S\right>,\left<H_u^0\right>,\left<H_d^0\right> \right) = \left(v_s e^{2\pi i m/3} ,v_{u} e^{2\pi i m/3} ,v_{d} e^{2\pi i m/3} \right), \end{eqnarray} with $m=0,1,2$, where all of $v_s, v_u$ and $v_d$ are real with $v = \sqrt{v_u^2+v_d^2}\simeq 174$ GeV. One of the three degenerate vacua is selected in the vacuum, and then the $Z_3$ symmetry is broken spontaneously. 
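As a quick consistency check (our own sketch, not part of the paper), one can verify numerically that the neutral scalar potential is invariant under the $Z_3$ rotation $(S,H_u^0,H_d^0)\rightarrow e^{2\pi i/3}(S,H_u^0,H_d^0)$, so that the three vacua are indeed degenerate. All numerical parameter values below are arbitrary illustrations:

```python
import cmath

# Arbitrary (illustrative) real parameters
lam, kap, A_lam, A_kap = 0.7, 0.3, 1.1, 0.9
mHu2, mHd2, mS2 = 0.2, 0.4, 0.1
g2, gp2 = 0.42, 0.12  # g^2 and g'^2

def V(S, Hu, Hd):
    """Neutral scalar potential of the Z_3 symmetric NMSSM."""
    F = abs(kap * S**2 - lam * Hu * Hd)**2
    soft_masses = mHu2 * abs(Hu)**2 + mHd2 * abs(Hd)**2 + mS2 * abs(S)**2
    mixed = lam**2 * abs(S)**2 * (abs(Hu)**2 + abs(Hd)**2)
    D = (g2 + gp2) / 8.0 * (abs(Hu)**2 - abs(Hd)**2)**2
    A_terms = 2.0 * (kap * A_kap * S**3 / 3.0 - lam * A_lam * Hu * Hd * S).real
    return F + soft_masses + mixed + D + A_terms

w = cmath.exp(2j * cmath.pi / 3)  # Z_3 phase
S, Hu, Hd = 1.3 + 0.2j, 0.5 - 0.1j, 0.8 + 0.3j
for m in range(3):
    assert abs(V(w**m * S, w**m * Hu, w**m * Hd) - V(S, Hu, Hd)) < 1e-12
print("V_Higgs is Z_3 invariant")
```

The invariance holds because the cubic terms $S^3$ and $H_u^0 H_d^0 S$ pick up $e^{2\pi i}=1$, while all remaining terms depend only on moduli.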
Then, the domain walls are generated. First, we study the domain wall solution~\cite{Vilenkin:2000jqa}. We fix the field values of the radial directions of $S, H_u$ and $H_d$, and discuss a field equation for the common phase degree of freedom $\phi$, \begin{eqnarray} \left(S,H_u^0,H_d^0\right) = \left(v_s e^{{\mathrm i}\phi} ,v_{u} e^{{\mathrm i}\phi} ,v_{d} e^{{\mathrm i}\phi} \right). \end{eqnarray} The potential of $\phi$ can be obtained from $V_{\rm Higgs}$ as \begin{eqnarray} V_{}(\phi)&=& -2\left(\frac13 \left| \kappa A_{\kappa}v_s^3\right| +\lambda A_{\lambda} v_sv_uv_d\right)\cos (3\phi) + V_0, \end{eqnarray} where $V_0$ denotes the $\phi$-independent terms. The first term is dominant when $A_\kappa \sim A_\lambda$, $\lambda \sim \kappa$ and $v_s^2 \gg v_u v_d$. Also, the kinetic term of $\phi$ is written as \begin{eqnarray} \mathcal{L}_{{\rm kinetic}}(\phi) &=&\eta^2(\partial_{\mu} \phi)(\partial^{\mu} \phi), \end{eqnarray} with $\eta^2 = v_s^2+v_u^2+v_d^2$. For simplicity, we consider a planar domain wall orthogonal to the $z$-axis, $\phi(z)$. Then, the field equation, \begin{eqnarray} \partial_{\mu}\frac{\partial \mathcal{L}_{{\rm kinetic}}}{\partial (\partial_{\mu} \phi)}+\frac{\partial V(\phi)}{\partial \phi}=0, \end{eqnarray} can be written as \begin{eqnarray} \frac{ \mathrm{d}^2 \phi }{ \mathrm{d}z^2 }-\frac{1}{3B^2} \sin (3\phi)&=&0, \end{eqnarray} with \begin{eqnarray} \left( \frac{1}{B}\right)^2 = \frac{9\left(|\frac13 \kappa A_{\kappa}v_s^3|+\lambda A_{\lambda} v_sv_uv_d\right)}{\eta^2}. \label{def:B} \end{eqnarray} The first term in the numerator of the right-hand side of Eq.~(\ref{def:B}) is dominant when $v_s^2 \gg v_uv_d$. We set the boundary condition such that $\phi = 2 \pi n/3$ at $z \rightarrow - \infty$ and $\phi = 2 \pi (n+1)/3$ at $z \rightarrow + \infty$ with $n=0,1,2$. 
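This is a sine-Gordon type equation. As a numerical sketch (our own check, with the width set to $B=1$), one can verify by finite differences that the kink profile $\phi = \frac{4}{3}\arctan(e^{z/B})$, i.e. the $n=0$, $z_0=0$ solution derived in the next step, satisfies the field equation:

```python
import math

B = 1.0  # wall width (z measured in units of B)
phi = lambda z: (4.0 / 3.0) * math.atan(math.exp(z / B))  # n = 0, z0 = 0 kink profile

# Check phi'' - sin(3 phi)/(3 B^2) = 0 by central finite differences
h = 1e-4
for z in (-3.0, -1.0, 0.0, 0.7, 2.5):
    d2 = (phi(z + h) - 2.0 * phi(z) + phi(z - h)) / h**2
    assert abs(d2 - math.sin(3.0 * phi(z)) / (3.0 * B**2)) < 1e-5
print("kink profile satisfies the field equation")
```

The profile also interpolates between the required boundary values, $\phi(-\infty)=0$ and $\phi(+\infty)=2\pi/3$.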
By solving the above field equation with this boundary condition, the domain wall solution is derived as \begin{eqnarray} \phi &=& \frac{2 n \pi}{3}+ \frac43 \arctan\left(\mathrm{e}^{\pm \frac1B(z-z_0)}\right), \end{eqnarray} where $B$ corresponds to the width of the domain wall. Figure 1 shows this solution for $n=0$. Now, we can estimate the domain wall tension \begin{eqnarray} \sigma &=& \int dz \rho_{{\rm wall}}(z)= \int dz \left( \left|\frac{ \mathrm{d}S }{ \mathrm{d}z } \right|^2+ \left|\frac{ \mathrm{d}H_u^0 }{ \mathrm{d}z } \right|^2+ \left|\frac{ \mathrm{d}H_d^0 }{ \mathrm{d}z } \right|^2+V(\phi)\right) \nonumber \\ &=&\frac{16}{9}\frac{\eta^2}{B} . \end{eqnarray} Thus, we can estimate \begin{eqnarray} \sigma \simeq \frac{16}{3\sqrt{3}}v_s^2\sqrt{\kappa A_\kappa v_s} = \frac{16}{3\sqrt{3}}\frac{\mu^2}{\lambda^2}\sqrt{\frac{\kappa}{\lambda} A_\kappa \mu}, \end{eqnarray} for $v_s^2 \gg v_uv_d$. The size of $\mu$ is of the SUSY breaking scale.~\footnote{When $\mu$ is much larger than the electroweak scale, we have the fine-tuning problem to derive the Z-boson mass $m_Z$ from $m_{H_u}^2, \mu,$ and $m_{H_d}^2$. However, in a certain mediation mechanism such as the TeV-scale mirage mediation, the contributions due to $\mu$ and $m_{H_d}^2$ cancel each other in $m_Z$, and $m_Z$ is independent of $\mu$. Without severe fine-tuning, $\mu$ can be larger than the electroweak scale, e.g. $\mu = {\cal O}(1) $TeV~\cite{Kobayashi:2012ee}.} The couplings $\lambda$ and $\kappa$ must be of ${\cal O}(0.1)$ or less at the electroweak scale such that they do not blow up below a high energy scale such as the GUT or Planck scale. Thus, the size of $\sigma^{1/3}$ would be of the SUSY breaking scale or larger. Figure 2 shows an example of $\rho_{DW}(z)$.\\ \begin{figure}[ht] \begin{center} \epsfig{figure=phase.eps, width=10cm,height=8cm,angle=0} \end{center} \caption{The phase of the scalar fields $(S(z), H_u(z), H_d(z))$ of the planar domain wall solution. 
Here we take $n=0$, $z_0=0$, and normalize the z-axis by $1/B$ (Eq.~(\ref{def:B})). } \label{Fig:phase} \end{figure} \begin{figure}[ht] \begin{center} \epsfig{figure=energydensity.eps, width=12cm,height=8cm,angle=0} \end{center} \caption{Spatial configuration of the domain wall energy density for $\lambda=\kappa=0.01$, $A_{\lambda}=A_{\kappa}=10~{\rm TeV}$, $\mu=1~{\rm TeV}$, $\tan \beta =10$. The z-axis is normalized by $1/B$. } \label{Fig:energydensity} \end{figure} \subsection{Decaying domain wall by $Z_3$ breaking} Formed domain walls are stretched by the cosmic expansion and smoothed by interactions with particles in the background thermal plasma. The energy density of domain walls $\rho_{DW}$ and its pressure $p_{DW}$ can be read from the averaged energy-momentum tensor of domain walls. The equation of state of domain walls is given by \begin{eqnarray} p_{DW} = \left(v^2-\frac{2}{3}\right)\rho_{DW} , \end{eqnarray} with $v$ being the averaged velocity of the walls~\cite{KolbTurner}. The dynamics depends on $v$. In one extremal limit, the non-relativistic or static limit with $v=0$, the energy density behaves as \begin{eqnarray} \rho_{DW} \propto a^{-1}, \end{eqnarray} where $a(t)$ is the scale factor of the Universe. Such a domain wall network is sometimes referred to as a ``frustrated domain wall''. A frustrated-domain-wall dominated Universe undergoes accelerating expansion because of $w = p/\rho = -2/3 < -1/3$. On the other hand, for $v^2 \geq 1/3$, where $w \geq -1/3$ is realized, the cosmic expansion is not accelerating. In fact, the dynamics of domain walls has been investigated in many detailed numerical simulations, which show that the domain wall network relaxes at late times to the so-called scaling regime, where the typical length scale $\xi$ of the system stays of the order of the Hubble radius $H^{-1}$~\cite{Press:1989yh,Hindmarsh:2002bq,Garagounis:2002kt, Leite:2011sc,Leite:2012vn,Hiramatsu:2013qaa}. 
Then, the energy density of domain walls also scales as~\cite{Hiramatsu:2013qaa} \begin{eqnarray} \rho_{DW} \simeq \frac{\sigma}{t}. \end{eqnarray} In the scaling solution, the energy density of domain walls decreases more slowly than that of any ``matter'' or radiation.\footnote{In the static limit $v=0$, it decreases even more slowly.} Thus, at some point, the energy density of domain walls dominates that of the Universe. This is the domain wall problem~\cite{Zeldovich:1974uw}. Thus, the stable domain walls in the $Z_3$ symmetric NMSSM are problematic~\cite{Abel:1995wk}. In this paper, we consider a tiny but explicit breaking of the $Z_3$ discrete symmetry, so that domain walls have a long lifetime but finally decay. In fact, the decay of domain walls after domain wall domination has an interesting cosmological implication, namely the dilution of unwanted relics by late-time entropy production~\cite{Kawasaki:2004rx}. Few detailed numerical studies on the dynamics of the domain wall network in a domain-wall dominated Universe have been done. Hence, the domain wall dynamics in a domain-wall dominated Universe after its scaling behavior is uncertain. One likely possibility is that the length scale of the system remains of the order of the Hubble radius, as in the scaling regime, after domain wall domination too. This can be realized for the equation of state $w\simeq -1/3$. Thus, in most of the following analysis, we assume this. On the other hand, there is another possibility that the dynamics after the domination would be frozen, as suggested in Ref.~\cite{Leite:2011sc}, where $\xi \propto a(t)$ and $\rho \propto a(t)^{-1}$ are realized as in the non-relativistic limit. We briefly discuss results for this latter case too. Before closing this subsection, we briefly note some examples of $Z_3$ symmetry breaking in the literature. 
In Ref.~\cite{Panagiotakopoulos:1998yw}, Panagiotakopoulos and Tamvakis proposed adding extra symmetries which consistently allow a tiny enough tadpole term \begin{equation} \Delta V \sim \frac{1}{(16\pi^2)^n} m_{SUSY}^3 (S+S^*), \end{equation} to be induced in the scalar potential, where $m_{SUSY}$ is a soft SUSY breaking mass and $n$ is the number of loops inducing this term; then the degeneracy of the vacua is resolved. Hamaguchi et al.\ proposed another solution by introducing a hidden QCD theory, in which the $Z_3$ symmetry becomes anomalous and is broken by quantum effects~\cite{Hamaguchi:2011nm}. In such a minor extension of the $Z_3$ symmetric NMSSM, the domain walls become unstable. Since the size of the $Z_3$ breaking term is highly model dependent and the main purpose of this paper is to study cosmological effects of a late-time domain wall decay, the decay rate of a domain wall $\Gamma_{DW}$, which also parameterizes the size of the $Z_3$ symmetry breaking, is treated as a free parameter. Throughout this paper, in order to be consistent with successful BBN, we take the domain wall decay temperature $T_d$ to be a few MeV. We note that the lower bound on the reheating temperature from late-decaying objects is about a few MeV~\cite{Kawasaki:2000en,Hannestad:2004px,Ichikawa:2005vw}. \section{Cosmological evolution of unstable domain walls} When the doublet and/or singlet Higgs fields develop their VEVs, domain walls are formed. As mentioned above, after certain dynamics, the domain wall network would relax to the scaling solution. In the scaling regime, the energy density of domain walls is given by \begin{eqnarray} \rho_{DW} \simeq \sigma H. \label{eq:ehoDW:inscaling} \end{eqnarray} \subsection{Matter-dominated era to domain wall dominated era} The first case we consider is that, at the domain wall formation time $H_i^{-1}$, the Universe is dominated by the energy density of a matter component $\rho_M$, such as a long-lived coherently oscillating moduli field. 
In the scaling solution of domain walls, the energy density of domain walls relative to that of the background increases and eventually dominates the Universe. The domain wall energy density becomes equal to that of the matter at $H_{eq}^{-1}$, which is estimated with Eq.~(\ref{eq:ehoDW:inscaling}) as \begin{eqnarray} H_{eq} \simeq \frac{\sigma}{3 M_P^2 } , \label{eq:H-eq:Scaling:MD} \end{eqnarray} where $M_P$ is the reduced Planck mass. The condition that domain walls indeed dominate the Universe before they decay is expressed as \begin{eqnarray} H_{eq} > \Gamma_{DW}. \end{eqnarray} After $H_{eq}$, the domain walls dominate the energy density. At the domain wall decay time $\Gamma_{DW}^{-1}$, the ratio of these energy densities is estimated as \begin{eqnarray} \left. \frac{\rho_M}{\rho_{DW}}\right|_{\Gamma_{DW}} = \left( \frac{\Gamma_{DW}}{H_{eq}}\right), \end{eqnarray} from $a \propto t $, where we assume $\rho_{DW} \propto a^{-2}$ during the domain wall domination between $H_{eq}$ and $\Gamma_{DW}$. After the domain walls decay, the energy density of the matter is diluted as \begin{eqnarray} \frac{\rho_{M}}{s} = \frac{ 3 T_d}{4}\left( \frac{\Gamma_{DW}}{H_{eq}}\right) \simeq \frac{ 3 T_d}{4}\left( \frac{\pi^2 g_*(T_d) T_d^4}{10} \frac{M_P^2}{\sigma^2} \right)^{1/2} , \label{eq:rho/s:Scaling2} \end{eqnarray} for the case that the domain walls decay earlier than the matter does. Here, $g_*$ is the number of relativistic degrees of freedom. \subsection{Radiation-dominated era to domain wall dominated era} Next, we discuss the case that domain walls are formed in the radiation-dominated Universe. Both energy densities become comparable with each other at \begin{eqnarray} H_{eq} \simeq \frac{\sigma}{3 M_P^2}, \label{eq:H-eq:Scaling:RD} \end{eqnarray} since domain walls are in the scaling solution. 
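The late-time entropy production worked out in the next step can be evaluated numerically. As a sketch (our own check, assuming $g_*(T_d)=10$ and dropping the order-one $g_*$ ratio factor), for $\sigma^{1/3}=50$ TeV and $T_d=2$ MeV:

```python
import math

M_P = 2.4e18          # reduced Planck mass in GeV
g_star = 10.0         # assumed g_*(T_d) at T_d ~ MeV
T_d = 2.0e-3          # decay temperature, 2 MeV
sigma = (5.0e4)**3    # tension for sigma^(1/3) = 50 TeV

# Delta ~ [10 sigma^2 / (pi^2 g_* T_d^4 M_P^2)]^(3/4), dropping the g_* ratio factor
ratio = 10.0 * sigma**2 / (math.pi**2 * g_star * T_d**4 * M_P**2)
Delta = ratio**0.75
print(f"Delta = {Delta:.1f}")
```

This gives $\Delta \approx 8$, which matches the $\Delta \simeq 10$ estimate quoted below at the order-one level.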
The entropy density ratio of after- to before-domain wall decay is given by \begin{eqnarray} \Delta = \frac{s_{after}}{s_{before}} \simeq \frac{T_{eq}}{T_d}\left( \frac{H_{eq}}{\Gamma_{DW}}\right) \simeq \left(\frac{10 \sigma^2}{\pi^2 g_*(T_d) T_d^4 M_P^2} \right)^{3/4} \left(\frac{g_*(T_d)}{g_*(T_{eq})}\right)^{1/4}, \label{rad:delta1-2:Scaling2} \end{eqnarray} for $\Delta \gg 1$. We can obtain an entropy production \begin{eqnarray} \Delta \simeq 10 \left( \frac{\sigma^{1/3}}{ 50 \,{\rm TeV}}\right)^{9/2} \left( \frac{2\,{\rm MeV}}{T_d}\right)^3 . \label{rad:delta2:Scaling2} \end{eqnarray} One might think that a tension of $\sigma^{1/3} \sim 100$ TeV looks somewhat too large. However, for instance, in the MSSM-like region of the NMSSM with $\lambda\sim \kappa \ll 1$ and $v_s \gg v$, the domain wall tension \begin{eqnarray} \sigma \simeq \frac{16}{3}\sqrt{\frac{2}{3}} \kappa v_s^3 , \end{eqnarray} can be of such an order with $\lambda\sim \kappa \sim 10^{-2}$ and $v_s \sim 100$ TeV. These result in an effective $\mu$ term and a singlino mass of about $1$ TeV. Figure 3 shows the entropy density ratio of after- to before-domain wall decay for $\lambda=\kappa=0.01$, $T_d=3~{\rm MeV}$. The ratio increases as $\mu$ and $A_{\kappa}$ increase. Such a large late-time entropy production can dilute unwanted relics such as the gravitino, the overproduced LSP, and the axion.\\ \begin{figure}[ht] \begin{center} \epsfig{figure=entropy3.eps, width=8cm,height=8cm,angle=0} \end{center} \caption{ The entropy density ratio $\Delta$ of after- to before-domain wall decay in the radiation-dominated era to domain wall dominated era case for $\lambda=\kappa=0.01$, $T_d=3~{\rm MeV}$. } \label{Fig:entropy} \end{figure} \subsection{Non-relativistic domain wall during the domination} Here, we note the resultant quantities if the domain wall energy density scales as $a^{-1}$ during the domination. 
\subsubsection{Matter-dominated era to domain wall dominated era} At the domain wall decay time $\Gamma_{DW}^{-1}$, the ratio of these energy densities is estimated as \begin{eqnarray} \left. \frac{\rho_M}{\rho_{DW}}\right|_{\Gamma_{DW}} = \left( \frac{\Gamma_{DW}}{H_{eq}}\right)^4 , \end{eqnarray} from $H \propto a^{-1/2}$, where we assume $\rho_{DW} \propto a^{-1}$ during the domain wall domination between $H_{eq}$ and $\Gamma_{DW}$. After the domain walls decay, the energy density of the matter is diluted as \begin{eqnarray} \frac{\rho_{M}}{s} = \frac{ 3 T_d}{4}\left( \frac{\Gamma_{DW}}{H_{eq}}\right)^4 \simeq \frac{ 3 T_d}{4}\left( \frac{\pi^2 g_*(T_d) T_d^4}{10} \frac{M_P^2}{\sigma^2} \right)^2 , \label{eq:rho/s:Scaling} \end{eqnarray} for the case that the domain walls decay earlier than the matter does. \subsubsection{Radiation-dominated era to domain wall dominated era} Assuming $\rho_{DW} \propto a^{-1}$ during the domain wall domination, the entropy density ratio of after- to before-domain wall decay is given by \begin{eqnarray} \Delta = \frac{s_{after}}{s_{before}} \simeq \frac{T_{eq}}{T_d}\left( \frac{H_{eq}}{\Gamma_{DW}}\right)^4 \simeq \left(\frac{10 \sigma^2}{\pi^2 g_*(T_d) T_d^4 M_P^2} \right)^{9/4} \left(\frac{g_*(T_d)}{g_*(T_{eq})}\right)^{1/4}, \label{rad:delta1-2:Scaling} \end{eqnarray} for $\Delta \gg 1$. We can obtain an entropy production \begin{eqnarray} \Delta \simeq 600 \left( \frac{\sigma^{1/3}}{ 50 \,{\rm TeV}}\right)^{27/2} \left( \frac{2\,{\rm MeV}}{T_d}\right)^9 . \label{rad:delta2:Scaling} \end{eqnarray} \section{Cosmological implications} In this section, we study implications of the NMSSM domain wall decay for some relics in several models. \subsection{Thermal relic WIMP LSP such as singlino or sneutrino} WIMPs have been regarded as promising dark matter candidates in our Universe. In the NMSSM, the neutralino is such a candidate~\cite{Ellwanger:2009dp}. 
In a right-handed neutrino extended model, the right-handed sneutrino also becomes a WIMP dark matter candidate~\cite{Cerdeno:2008ep}. Since the WIMP thermal relic abundance is inversely proportional to its thermally averaged annihilation cross section $\langle\sigma v\rangle$ as \begin{equation} \Omega_{WIMP}h^2 \simeq \frac{0.1\, {\rm pb}}{\langle \sigma v \rangle}, \end{equation} too small an annihilation cross section leads to overabundant WIMPs. A Singlino- or Bino-like neutralino, or a right-handed sneutrino with small couplings, is indeed such a case. The domain wall decay produces extra entropy with the dilution factor (\ref{rad:delta1-2:Scaling2}) and could regulate the WIMP relic abundance to be \begin{equation} \Omega_{WIMP}h^2 \, \frac{1}{\Delta} \simeq 0.1 , \end{equation} even for a small annihilation cross section $\langle \sigma v \rangle \ll 1$ pb. \subsection{The moduli problem in the mirage mediation scenario} Mirage mediation models appear free from the cosmological moduli problem because the moduli mass is quite large. However, LSPs nonthermally produced through a decay chain by way of the gravitino are in fact overabundant. Let us examine whether the domain wall decay can dilute those LSPs. The moduli decay before the energy density of domain walls dominates the Universe, because the moduli decay rate \begin{equation} \Gamma_{moduli} \simeq \frac{m_{moduli}^3}{8\pi M_P^2}, \end{equation} is larger than $H_{\rm eq}$ given by Eq.~(\ref{eq:H-eq:Scaling:MD}) in the mirage mediation scenario. At $H\simeq \Gamma_{moduli}$, the moduli decay in a moduli dominated Universe produces gravitinos as \begin{eqnarray} Y_{3/2} = \frac{n_{3/2}}{s } = B_{3/2}\frac{3T_D}{2m_{moduli}} , \end{eqnarray} with the branching ratio of moduli decay into gravitinos $B_{3/2} = {\cal O}(0.01)-{\cal O}(1)$ \cite{Endo:2006zj}, and the Universe becomes radiation dominated. 
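To get a feeling for the numbers (our own sketch; we assume $g_*(T_D)=10$ and a branching ratio $B_{3/2}=0.1$, and use the relation $\Omega_{LSP}h^2 \simeq 4.2\times 10^{8}\,(m_{LSP}T_D/m_{moduli})(B_{3/2}/\Delta)\,{\rm GeV}^{-1}$ given below), one can estimate the dilution factor needed to realize $\Omega_{LSP}h^2 = 0.1$ for the benchmark $m_{LSP}=100$ GeV and $m_{moduli}=1000$ TeV:

```python
import math

M_P = 2.4e18        # reduced Planck mass in GeV
m_moduli = 1.0e6    # 1000 TeV
m_lsp = 100.0       # GeV
B32 = 0.1           # assumed branching ratio into gravitinos
g_star = 10.0       # assumed g_*(T_D)

# Moduli decay rate and decay temperature from 3 M_P^2 Gamma^2 = (pi^2 g_*/30) T_D^4
Gamma = m_moduli**3 / (8.0 * math.pi * M_P**2)
T_D = (90.0 * M_P**2 * Gamma**2 / (math.pi**2 * g_star))**0.25

# Dilution needed for Omega h^2 = 4.2e8 * (m_lsp * T_D / m_moduli) * B32 / Delta = 0.1
Delta_req = 4.2e8 * m_lsp * T_D / m_moduli * B32 / 0.1
print(f"T_D = {T_D*1e3:.0f} MeV, required Delta = {Delta_req:.0f}")
```

This gives $T_D \approx 130$ MeV and a required dilution factor of a few thousand, in line with the large tensions appearing in Figure 4.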
Here $T_D$ is the decay temperature of the moduli field, given by \begin{eqnarray} 3 M_P^2 \Gamma_{moduli}^2 = \frac{\pi^2 g_*(T_D)}{30}T_D^4 . \label{def:TD} \end{eqnarray} The entropy density ratio of after- to before-domain wall decay is given by Eq.~(\ref{rad:delta1-2:Scaling2}). Unstable gravitinos decay into LSPs with $n_{3/2}= n_{LSP}$ due to R-parity conservation. Usually, this leads to an overproduction of LSPs, whose abundance exceeds the dark matter abundance. After the extra entropy production by the domain wall decay, the resultant final LSP abundance becomes \begin{eqnarray} \frac{\rho_{LSP}}{s} \simeq \frac{3 m_{LSP} T_D}{2m_{moduli}}\frac{B_{3/2}}{\Delta} , \end{eqnarray} in other words, \begin{eqnarray} \Omega_{LSP}h^2 \simeq 4.2 \times 10^8 \frac{m_{LSP} T_D}{m_{moduli}}\frac{B_{3/2}}{\Delta} \,{\rm GeV}{}^{-1} . \label{def:delta} \end{eqnarray} In Figure 4, we consider the case that the LSP is the dark matter, and plot the contour $\Omega_{LSP}h^2 = 0.1$ by using Eq.~(\ref{def:delta}). The input parameters are $\lambda=\kappa=0.01$, $T_d=3~{\rm MeV}$, $m_{{\rm LSP}}=100~{\rm GeV}$, $m_{{\rm moduli}}=1000~{\rm TeV}$.\\ \begin{figure}[ht] \begin{center} \epsfig{figure=lsp2.eps, width=8cm,height=8cm,angle=0} \end{center} \caption{The required branching ratio contour to keep $\Omega_{LSP}h^2 = 0.1$ in the mirage mediation scenario for $\lambda=\kappa=0.01$, $T_d=3~{\rm MeV}$, $m_{{\rm LSP}}=100~{\rm GeV}$, $m_{{\rm moduli}}=1000~{\rm TeV}$. Above each curve, the relic abundance is smaller than $\Omega_{LSP}h^2 = 0.1$. } \label{Fig:LSP} \end{figure} \subsection{The decay constant of the QCD axion} Finally, we comment on the QCD axion, $a$, with the decay constant $f_a$. After the QCD phase transition, axions are produced by coherent oscillation, the so-called misalignment mechanism, and they are a good dark matter candidate because their lifetime is much longer than the age of the Universe. The abundance is proportional to $f_a^{7/6}$~\cite{Sikivie:2006ni}. 
The condition $\Omega_{a} \lesssim \Omega_{DM}$ is rewritten as \begin{eqnarray} f_a \lesssim 10^{12} \, {\rm GeV}. \label{bound:fa} \end{eqnarray} A decay constant $f_a$ larger than the bound (\ref{bound:fa}) corresponds to an overproduction of axions. Again, the domain wall decay can dilute the axion abundance for such a larger $f_a$~\cite{Kawasaki:2004rx}. For example, with the dilution (\ref{rad:delta1-2:Scaling2}) by the domain wall decay, the bound on $f_a$ is relaxed as \begin{eqnarray} f_a \lesssim 10^{16} \,{\rm GeV} , \end{eqnarray} for $\sigma^{1/3}=300$ TeV and $T_d=2$ MeV. The GUT-scale axion decay constant is allowed, which is remarkable. In superstring theory, the natural decay constant of the axionic parts of closed string moduli would be of the order of the GUT scale or string scale~\cite{Svrcek:2006yi}.\footnote{Even larger decay constants can be obtained in a certain situation (see e.g. Ref.~\cite{Abe:2014pwa}).} Such stringy axions with larger decay constants can be the QCD axion. \section{Cosmological implications for $\rho_{DW} \propto a^{-1}$ domain walls} In this section, we study implications of the NMSSM domain wall decay with $\rho_{DW} \propto a^{-1}$ during the domination for the moduli problem within the gravity mediation scenario. Now, let us study the dilution of the moduli to avoid the moduli problem. After inflation, the moduli would start to oscillate and dominate the energy density of the Universe. They may decay during or after the BBN and spoil the success of BBN. To avoid such a situation, the energy density of the moduli must satisfy \begin{eqnarray} \frac{\rho_{moduli}}{s} \lesssim c \cdot 3.6 \times 10^{-9}\, {\rm GeV}, \label{eq:moduli-constraint} \end{eqnarray} where $c \sim 10^{-2} - 10^{-4}$ for a 10 TeV moduli mass, depending on the coupling between the moduli and the gauge fields \cite{Asaka:1999xd}. We use $c=10^{-3}$ in the following analysis. 
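As a numerical sketch (our own check, with $g_*(T_d)=10$ as in the text), one can verify that imposing this constraint on the diluted abundance of Eq.~(\ref{eq:rho/s:Scaling}) requires a tension of roughly $\sigma^{1/3} \gtrsim 220$ TeV for $T_d=3$ MeV and $c=10^{-3}$:

```python
import math

M_P = 2.4e18        # reduced Planck mass in GeV
g_star = 10.0       # g_*(T_d), as in the text
T_d = 3.0e-3        # 3 MeV
c = 1.0e-3
sigma = (2.2e5)**3  # trial tension, sigma^(1/3) = 220 TeV

# Diluted moduli abundance for rho_DW ~ a^(-1) during the domination
rho_over_s = 0.75 * T_d * (math.pi**2 * g_star * T_d**4 / 10.0 * M_P**2 / sigma**2)**2
bound = c * 3.6e-9
print(rho_over_s / bound)  # close to 1: the constraint is just saturated at 220 TeV
```

The two sides agree within a factor of order one, so a tension of a few hundred TeV saturates the constraint.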
The decay of domain walls can dilute the moduli density, which is given as \begin{eqnarray} \frac{\rho_{moduli}}{s} \simeq \frac{ 3 T_d}{4}\left( \frac{\pi^2 g_*(T_d) T_d^4}{10} \frac{M_P^2}{\sigma^2} \right)^2, \label{rho/s:Scaling} \end{eqnarray} as derived in Eq.~(\ref{eq:rho/s:Scaling}). It depends only on $T_d$ and the tension $\sigma$, which in turn depends on $\lambda$, $\kappa$, $A_\kappa$ and $\mu$. Imposing the constraint (\ref{eq:moduli-constraint}) on the resultant abundance (\ref{rho/s:Scaling}), we find \begin{eqnarray} \sigma^{1/3} \gtrsim 220 \,{\rm TeV}\left(\frac{10^{-3}}{c}\right)^{1/12} \left(\frac{T_d}{3\, {\rm MeV}}\right)^{3/4} , \end{eqnarray} where $g_*(T_d)=10$ is used. Figure 5 shows the constraint (\ref{eq:moduli-constraint}) with (\ref{rho/s:Scaling}) for $\lambda=\kappa=0.01$, $T_d=3~{\rm MeV}$. The shaded region is excluded by the constraint.\\ \begin{figure}[ht] \begin{center} \epsfig{figure=moduli2.eps, width=8cm,height=8cm,angle=0} \end{center} \caption{The bound on the moduli abundance in the gravity mediation scenario for $\lambda=\kappa=0.01$, $T_d=3~{\rm MeV}$. The yellow region is allowed in the $(\mu, A_{\kappa})$ plane. } \label{Fig:moduli} \end{figure} \section{Conclusion and discussion} We have studied the cosmological implications of unstable domain walls in the NMSSM. The spontaneous breaking of the $Z_3$ discrete symmetry in the NMSSM causes the cosmological domain wall problem. We consider that the $Z_3$ symmetry is slightly but explicitly broken and the domain walls decay with the decay temperature $T_d$. The domain walls easily dominate the energy density of the Universe, and their decay causes a late-time entropy production, depending on the tension $\sigma$ and $T_d$. Such entropy production has significant implications for the thermal history. It can dilute unwanted relics such as moduli, the gravitino, the LSP and the axion. We have shown that a $T_d$ of several MeV dilutes various relics in several scenarios. 
These include the thermal WIMP LSP in the gravity mediation model, the nonthermally produced LSP in mirage mediation, and misalignment-produced cold axions in Peccei-Quinn extended models. If the energy density of the domain wall network decreases as $\rho_{DW}\propto a^{-1}$ during the domain wall domination, the cosmological moduli problem in gravity mediation also might be relaxed. \section*{Acknowledgments} This work was supported in part by the Grant-in-Aid for Scientific Research No.~25400252 and No.~26247042 (T.K.) and on Innovative Areas No.~26105514 (O.S.) from the Ministry of Education, Culture, Sports, Science and Technology in Japan.
\section{Introduction} A \emph{quandle} is an algebraic structure with a set $X$ and a binary operation $*:X \times X \rightarrow X$ satisfying the following axioms: \begin{enumerate} \item (idempotency) $a*a=a$ for any $a \in X,$ \item (invertibility) for each $b \in X$, the map $*_{b}:X \rightarrow X$ given by $*_{b}(x)=x*b$ is invertible (the inverse is denoted by $\overline{*}_{b}$), \item (right self-distributivity) $(a*b)*c=(a*c)*(b*c)$ for any $a,b,c \in X.$ \end{enumerate} A set $X$ with a binary operation $*:X \times X \rightarrow X$ that satisfies the invertibility and right self-distributivity properties is called a \emph{rack}. Rack and quandle theory is motivated by knot theory: the axioms of a quandle correspond to the three Reidemeister moves \cite{Joy, Matv}. When we color an oriented knot diagram by a given magma $(X;*)$, consisting of a finite set $X$ and a binary operation $*:X \times X \rightarrow X$, with the convention of Figure $1.1$, the above conditions $(1),(2),$ and $(3)$ are needed in order that the Reidemeister moves preserve the number of allowed colorings. Quandles can be used for classifying classical knots \cite{Joy, Matv}. \begin{figure}[h] \centerline{{\psfig{figure=crossing-quandle.eps,height=2.3cm}}}\ \\ \centerline{Figure 1.1; Quandle coloring}\ \\ \end{figure} The following are some examples of quandles: \begin{example} \begin{enumerate} \item \emph{Let $X$ be a set with the binary operation $a * b = a$ for any $a,b \in X$. Then $X$ is a quandle, called a \emph{trivial quandle}.} \item \emph{A group $G$ with the conjugation operation $g * h = h^{-1}gh$ forms a quandle structure called a \emph{conjugate quandle}. More generally, any subset of $G$ which is closed under conjugation is a subquandle of the conjugate quandle $G.$} \item \cite{Tak} \emph{If $G$ is an abelian group, we define a quandle called the \emph{Takasaki quandle} (or \emph{kei}) of $G$, denoted by $T(G)$, by taking $a*b=2b-a$. 
In particular, if $G=\mathbb{Z}_{n}$, then we denote $T(\mathbb{Z}_{n})$ by $R_{n}$, called a \emph{dihedral quandle}.} \item \emph{Let $M$ be a module over the Laurent polynomial ring $\mathbb{Z}[t^{\pm1}]$. Then the quandle $M$ with the operation $a * b = ta + (1-t)b$ is said to be an \emph{Alexander quandle}.} \end{enumerate} \end{example} Elementary quandle theory has been developed in a way similar to basic group theory. A function $h:X \rightarrow Y$ between two quandles $(X;\ast)$ and $(Y;\ast^{'})$ is said to be a \emph{quandle homomorphism} if $h(a * b)=h(a)*^{'}h(b)$ for any $a, b \in X$. If a quandle homomorphism is invertible, then we call it a \emph{quandle isomorphism}. A \emph{quandle automorphism} is a quandle isomorphism from a quandle $X$ onto itself. Recall that $*$ has an inverse binary operation $\overline{*}$, that is, $(a*b)\overline{*}b=a=(a\overline{*}b)*b$ for any $a,b \in X.$ A \emph{subquandle} of a quandle $X$ is a subset $S$ of $X$ which is closed under the $*$ and $\overline{*}$ operations. \begin{observation} \label{Observation 1.2} If $(X;*)$ is a finite quandle, then $S \subset X$ is a subquandle if it is closed under the $*$ operation. This is the case because for any $a,b \in X$ the sequence $a*^{k}b$, where $a*^{k}b=(\cdots((a*b)*b)*\cdots)*b$, has to have repetitions, say $a*^{m}b=a*^{n}b$ for $0 \leq m < n,$ so $a*^{n-m}b=a.$ That is, $a\overline{*}b = a*^{n-m-1}b.$ \end{observation} A quandle $X$ is said to be a \emph{quasigroup quandle} (or \emph{Latin quandle}) if it satisfies the quasigroup property, i.e. for any $a,b \in X$, the equation $a * x =b$ has a unique solution $x$. For each $b \in X$, the map $*_{b}:X \rightarrow X$ defined by $*_{b}(x)=x*b$ is a quandle automorphism. 
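As a small illustration (our own sketch, not part of the paper), the quandle axioms and the quasigroup property for the dihedral quandle $R_5$ can be verified by brute force:

```python
n = 5  # dihedral quandle R_5; odd order, so it should be a quasigroup quandle
X = range(n)
op = lambda a, b: (2 * b - a) % n   # Takasaki/dihedral operation a*b = 2b - a

# (1) idempotency
assert all(op(a, a) == a for a in X)
# (2) invertibility: x -> x*b is a bijection for each b
assert all(len({op(x, b) for x in X}) == n for b in X)
# (3) right self-distributivity
assert all(op(op(a, b), c) == op(op(a, c), op(b, c))
           for a in X for b in X for c in X)
# quasigroup (Latin) property: a*x = b has a unique solution x
assert all(len({op(a, x) for x in X}) == n for a in X)
print("R_5 is a quasigroup quandle")
```

Replacing $n=5$ by an even value makes the last assertion fail, matching the statement that dihedral quandles of odd order are quasigroup quandles.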
The subgroup of the quandle automorphism group $\textrm{Aut}(X)$ of a quandle $X$ generated by $*_{b}$ for every $b \in X$ is called the \emph{quandle inner automorphism group}, denoted by $\textrm{Inn}(X).$ If the action of $\textrm{Inn}(X)$ on a quandle $X$ is transitive\footnote{It means that for any $a$ and $b$ in a quandle $X$ there are $x_{1},\ldots, x_{n} \in X$ so that $(\cdots((a*^{\varepsilon_{1}}x_{1})*^{\varepsilon_{2}}x_{2})*^{\varepsilon_{3}}\cdots)*^{\varepsilon_{n}}x_{n}=b$ where $*^{\varepsilon}=*$ or $\overline{*}.$ For a finite quandle $X$, by Observation \ref{Observation 1.2}, we can find $y_{1},\ldots, y_{k} \in X$ so that $(\cdots((a*y_{1})*y_{2})*\cdots)*y_{k}=b.$}, then we call $X$ a \emph{connected quandle}. Note that if a quandle $X$ has the quasigroup property, then it is a connected quandle. The converse, however, does not hold in general (see Example \ref{Example 1.3} $(3)$). \begin{example} \label{Example 1.3} \begin{enumerate} \item \emph{Dihedral quandles $R_{n}$ of odd order are quasigroup quandles.} \item \emph{An Alexander quandle $M$ is a quasigroup quandle if and only if $1-t$ is invertible.} \item \emph{The quandle QS(6) (or the Rig quandle\footnote{L. Vendramin obtained a list of all connected quandles of order less than $48$ by using GAP, and they can be found in the GAP package Rig \cite{Ven}. There are $431$ connected quandles of order less than $36$ \cite{CSV}. These quandles are called \emph{Rig quandles}, and we denote the $i$-th quandle of order $n$ in the list of Rig quandles by $Q(n,i).$ } $Q(6,2)$), which is the orbit of $4$-cycles in the symmetric group $S_{4}$ of order $24$ with the conjugation operation, is a connected quandle but not a quasigroup quandle (see \cite{CKS-1} and \cite{CKS-2}).} \end{enumerate} \end{example} Rack homology theory was introduced by Fenn, Rourke, and Sanderson \cite{FRS}, and it was modified by Carter, Jelsovsky, Kamada, Langford, and Saito \cite{CJKLS} to define a (quandle) cocycle knot invariant. 
We recall the definition of rack and quandle homology based on \cite{CKS-2}, starting from the rack chain complex. \begin{definition} \emph{Let $C_{n}^{R}(X)$ be the free abelian group generated by $n$-tuples $(x_{1}, \ldots ,x_{n})$ of elements of a rack $X$, i.e. $C_{n}^{R}(X)=\mathbb{Z}X^{n}=(\mathbb{Z}X)^{\otimes n}$. We define a boundary homomorphism $\partial_{n}:C_{n}^{R}(X) \rightarrow C_{n-1}^{R}(X)$ by $$\partial_{n} (x_{1}, \ldots , x_{n})=\sum\limits_{i=1}^{n}(-1)^{i}(d_{i}^{(*_{0})}-d_{i}^{(*)})(x_{1}, \ldots , x_{n})$$ where $d_{i}^{(*_{0})}(x_{1}, \ldots , x_{n})=(x_{1},\ldots,x_{i-1},x_{i+1},\ldots,x_{n})$ and\\ \hspace*{1.5cm} $d_{i}^{(*)}(x_{1}, \ldots , x_{n})=(x_{1}*x_{i},\ldots,x_{i-1}*x_{i},x_{i+1},\ldots,x_{n}).$\\ The face maps $d_{i}^{(*)}$ and $d_{i}^{(*_{0})}$ are illustrated in Figure $1.2.$\\ Then $(C_{n}^{R}(X),\partial_{n})$ is called the \emph{rack chain complex} of $X$.} \end{definition} \centerline{{\psfig{figure=facemaps.eps,height=3.9cm}}}\ \\ \centerline{Figure 1.2; Graphical descriptions of face maps $d_{i}^{(*)}$ and $d_{i}^{(*_{0})}$}\ \\ \begin{definition} \emph{For a quandle $X$, we consider the subgroup $C_{n}^{D}(X)$ of $C_{n}^{R}(X)$ generated by $n$-tuples $(x_{1}, \ldots ,x_{n})$ of elements of $X$ with $x_{i}=x_{i+1}$ for some $i=1,\ldots,n-1$. Then $(C_{n}^{D}(X),\partial_{n})$ is a subchain complex of the rack chain complex $(C_{n}^{R}(X),\partial_{n})$, and it is called the \emph{degenerate chain complex of $X$}. Then we take the quotient chain complex $(C_{n}^{Q}(X),\partial_{n})=(C_{n}^{R}(X)/C_{n}^{D}(X),\partial_{n})$ and call it the \emph{quandle chain complex}.} \end{definition} \begin{definition} \emph{For an abelian group $G$, we define the chain complex $C_{*}^{W}(X;G)=C_{*}^{W}(X) \otimes G$ with $\partial = \partial \otimes \text{Id}$ for $W=R$, $D$, and $Q$.
Then the \emph{$n$th rack, degenerate, and quandle homology groups of a quandle $X$ with coefficient group $G$} are respectively defined as $$H_{n}^{W}(X;G)=H_{n}(C_{*}^{W}(X;G)) \hbox{~for W=R, D, and Q}.$$} \end{definition} For any finite rack or quandle, the free parts of the rack, degenerate, and quandle homology groups have been completely determined in \cite{CJKS,E-G,L-N}. \begin{theorem} \cite{CJKS,E-G,L-N} Let $\mathcal{O}$ be the set of orbits of a rack $X$ with respect to the action of $X$ on itself by right multiplication. Then \begin{enumerate}\label{Theorem 1.7} \item \text{\emph{rank}}$H_{n}^{R}(X)=|\mathcal{O}|^{n}$ for a finite rack $X$, \item \text{\emph{rank}}$H_{n}^{Q}(X)=|\mathcal{O}|(|\mathcal{O}|-1)^{n-1}$ for a finite quandle $X$, \item \text{\emph{rank}}$H_{n}^{D}(X)=|\mathcal{O}|^{n}-|\mathcal{O}|(|\mathcal{O}|-1)^{n-1}$ for a finite quandle $X$. \end{enumerate} \end{theorem} For every finite connected quandle $X,$ by Theorem \ref{Theorem 1.7} we have \textrm{rank}$H_{n}^{R}(X)=1$ for all $n,$ \textrm{rank}$H_{1}^{Q}(X)=1,$ and \textrm{rank}$H_{n}^{Q}(X)=0$ if $n \geq 2.$ Annihilation of torsion subgroups of rack and quandle homology of quasigroup quandles was addressed in \cite{P-Y}\footnote{Annihilation of torsion of homology of $R_{3}$ by $3$ was proven in \cite{N-P-1}. See \cite{P-Y} for the history of the annihilation problem. }: \begin{theorem} \cite{P-Y} \label{Theorem 1.8} Let $Q$ be a finite quasigroup quandle. Then the torsion subgroup of $H_{n}^{R}(Q)$ is annihilated by $|Q|$. \end{theorem} \begin{corollary} \cite{P-Y} \label{Corollary 1.9} The reduced quandle homology of a finite quasigroup quandle is annihilated by its order, i.e. $|Q|\widetilde{H}_{n}^{Q}(Q)=0$. \end{corollary} We can generalize Theorem \ref{Theorem 1.8} to finite quasigroup racks. \begin{corollary} Let $X$ be a finite quasigroup rack. Then the torsion subgroup of $H_{n}^{R}(X)$ is annihilated by $|X|$.
\end{corollary} \begin{proof} It is well known that in a rack, $b$ and $b*b$ are functionally the same. Namely, for any $a\in X$ we have $$a*b= ((a \overline{*} b)*b)*b= a*(b*b).$$ From the quasigroup property we know that if $a*x=a*y$, then $x=y$; thus in a quasigroup rack $b=b*b$, and the rack is a quandle. \end{proof} The $6$-element quandle $QS(6)$ (or Rig quandle $Q(6,2)$, see Table \ref{Table 1} and Example \ref{Example 1.3} (3)) is a connected quandle, but it is not a quasigroup quandle and $H_{3}^{Q}(QS(6))=\mathbb{Z}_{24}$ (see \cite{CKS-1}). Thus $6$ does not annihilate $\text{tor}H_{3}(QS(6))$; this shows that Theorem \ref{Theorem 1.8} and Corollary \ref{Corollary 1.9} do not hold for connected quandles in general. We show in Theorem \ref{Theorem 3.2} that the torsion subgroup of the rack and quandle homology of $QS(6)$ is annihilated by the order of its quandle inner automorphism group ($24$ in this case). \begin{table}[h] \centering \caption{A quandle $QS(6)$}\label{Table 1} \begin{tabular}{c|cccccc} $\ast$ & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline 1 & 1 & 1 & 6 & 5 & 3 & 4 \\ 2 & 2 & 2 & 5 & 6 & 4 & 3 \\ 3 & 5 & 6 & 3 & 3 & 2 & 1 \\ 4 & 6 & 5 & 4 & 4 & 1 & 2 \\ 5 & 4 & 3 & 1 & 2 & 5 & 5 \\ 6 & 3 & 4 & 2 & 1 & 6 & 6 \end{tabular} \end{table} \section{$m$-almost quasigroup ($m$-AQ) quandles} An $m$-almost quasigroup quandle ($m$-AQ quandle for short) is a generalization of a quasigroup quandle. A formal definition follows. \begin{definition} \emph{A quandle $X$ is said to be \emph{$m$-almost quasigroup} if it satisfies the following conditions: \begin{enumerate} \item for each $a \in X$, the stabilizer set $S_{a}=\{ x \in X | a*x=a \}$ has order $m$, say $S_{a}=\{a=a^{(1)}, a^{(2)}, \cdots, a^{(m)} \},$ \item for any $b \in X \setminus S_{a}$, the equation $a*x=b$ has a unique solution. \end{enumerate}} \end{definition} If $m=1$ above, then $X$ is a quasigroup quandle.
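Both defining conditions are finite checks, so they can be verified mechanically from an operation table. A short Python sketch confirming from Table \ref{Table 1} that $QS(6)$ is a $2$-almost quasigroup quandle, and (since some equations $a*x=b$ have no solution) not a quasigroup quandle:

```python
# Verify from Table 1 that QS(6) is a 2-almost quasigroup quandle:
# every stabilizer S_a = {x : a*x = a} has two elements, and a*x = b is
# uniquely solvable whenever b lies outside S_a.

QS6 = [  # QS6[a-1][b-1] = a * b, copied from Table 1
    [1, 1, 6, 5, 3, 4],
    [2, 2, 5, 6, 4, 3],
    [5, 6, 3, 3, 2, 1],
    [6, 5, 4, 4, 1, 2],
    [4, 3, 1, 2, 5, 5],
    [3, 4, 2, 1, 6, 6],
]
X = range(1, 7)

def op(a, b):
    return QS6[a - 1][b - 1]

for a in X:
    S_a = {x for x in X if op(a, x) == a}
    assert len(S_a) == 2                       # condition (1) with m = 2
    for b in X:
        if b not in S_a:
            # condition (2): a * x = b has exactly one solution
            assert len([x for x in X if op(a, x) == b]) == 1
        elif b != a:
            # a * x = b then has no solution, so QS(6) is not Latin
            assert not [x for x in X if op(a, x) == b]
```

The same check applies verbatim to any finite quandle given by its table, e.g. $R_{3}\times_{\alpha}T_{2}$ from Table \ref{Table 2}.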
Note that if $X$ is finite and $m \geq 2$, then the equation $a*x = a^{(k)}$ has no solution if $2 \leq k \leq m.$ The following are some examples of $m$-almost quasigroup quandles: \begin{example} \begin{enumerate} \item \emph{Every finite trivial quandle $X$ is an $|X|$-almost quasigroup quandle.} \item \emph{The quandle QS(6) (or Rig quandle $Q(6,2)$) is a $2$-almost quasigroup quandle.} \item \emph{A quandle composed of the $2$-cycles in the symmetric group $S_{n}$ with the conjugation operation is an $(\binom{n-2}{2}+1)$-almost quasigroup quandle if $n \geq 2$ (for $2$-cycles in $S_{n},$ $(i,j)*(k,l)=(i,j)$ if and only if $\{i,j\}=\{k,l\}$ or $\{i,j\} \cap \{k,l\}=\emptyset$). In particular, the Rig quandles $Q(6,1),$ $Q(10,1),$ $Q(15,7),$ $Q(21,9),$ and $Q(28,13)$ are the cases of $n=4,$ $n=5,$ $n=6,$ $n=7,$ and $n=8,$ respectively.} \item \emph{The Rig quandle $Q(15,2)$, which is composed of the elements of $S_{5}$ with cycle partition $(2,2,1)$, is a $3$-almost quasigroup quandle.} \end{enumerate} \end{example} We check some basic properties of $m$-almost quasigroup quandles: \begin{lemma}\label{Lemma 2.3} Suppose that $X$ is an $m$-almost quasigroup quandle. \begin{enumerate} \item For each $a\in X$, the stabilizer set $S_{a}$ is a subquandle of $X$. \item For any $a,b \in X$, $S_{a}*b=S_{a*b}$ where $S_{a}*b=\{a*b, a^{(2)}*b,\cdots,a^{(m)}*b\}.$ \item If $X$ is finite and $m \leq 3,$ then every stabilizer set is a trivial subquandle of $X$. \item If $X$ is nontrivial and every stabilizer set is a trivial subquandle of $X,$ then $X$ is connected.
\end{enumerate} \end{lemma} \begin{proof} $(1)$ For $a \in X$, we consider the stabilizer set $S_{a}=\{a=a^{(1)}, a^{(2)}, \cdots, a^{(m)} \}.$ Then $a=a*a^{(j)}=(a*a^{(i)})*a^{(j)}=(a*a^{(j)})*(a^{(i)}*a^{(j)})=a*(a^{(i)}*a^{(j)})$ so that $a^{(i)}*a^{(j)} \in S_{a}$ for any $1 \leq i, j \leq m.$ Moreover, $a*a^{(j)}=a$ implies that $a = a\overline{*}a^{(j)}$ so that $a = (a*a^{(i)})\overline{*}a^{(j)}=(a\overline{*}a^{(j)})*(a^{(i)}\overline{*}a^{(j)})=a*(a^{(i)}\overline{*}a^{(j)}).$ Therefore we similarly have $a^{(i)}\overline{*}a^{(j)} \in S_{a}$ for any $1 \leq i, j \leq m.$ That is, $S_{a}$ is closed under both operations $*$ and $\overline{*},$ hence $S_{a}$ is a subquandle of $X.$ $(2)$ For any $i \in \{1, 2, \cdots, m \},$ $(a*b)*(a^{(i)}*b)=(a*a^{(i)})*b=a*b$, and this implies $S_{a}*b \subset S_{a*b}.$ Since $*_{b}$ is bijective, $|S_{a}*b|=m=|S_{a*b}|$, therefore $S_{a}*b=S_{a*b}.$ $(3)$ Let $a \in X$; then the stabilizer set $S_{a}$ is a subquandle of $X$ by Lemma \ref{Lemma 2.3} $(1).$ If $m \leq 2,$ then $S_{a}$ is clearly trivial, so we are done. Assume that $m=3$ and denote $S_{a}=\{a,b,c\}.$ Since $a*c=a$ and $c*c=c,$ by the invertibility condition of a quandle we have $b*c=b,$ i.e. $c\in S_{b}.$ Then $b*a$ should be $b$ or $c$ because $S_{a}$ is closed under $*$ and $a*a=a.$ Since $X$ is finite, the second axiom of the definition of an $m$-almost quasigroup quandle implies that for any $y \in S_{b}\setminus \{b\}$ the equation $b*x=y$ has no solution. Therefore, the equation $b*x=c$ has no solution, so $b*a=b.$ Then $c*a=c$ and $c*b=c$ by the invertibility condition of a quandle, therefore $S_{a}$ is a trivial subquandle of $X.$ $(4)$ Since $X$ is a nontrivial $m$-almost quasigroup quandle, $m < |X|.$ Let $a, b \in X.$ If $b$ is not contained in $S_{a},$ then there exists a unique solution $y \in X$ such that $a*y=b$ by the second axiom of the definition of an $m$-almost quasigroup quandle.
Suppose that $b \in S_{a}.$ Then the triviality of $S_{a}$ implies that $S_{a}=S_{b}.$ Since $m < |X|,$ there is an element $c \in X$ such that $c \in X \setminus S_{a}.$ Then we can find a unique element $y_{1} \in X$ so that $a*y_{1}=c.$ We easily check that $b \in X \setminus S_{c}$ (because if $b \in S_{c},$ then since $S_{c}$ is trivial, we have the equation $b*c=b$, i.e. $c \in S_{b}=S_{a},$ and this contradicts $c \in X \setminus S_{a}$). Then there is a unique element $y_{2} \in X$ so that $c*y_{2}=b,$ and then we have $(a*y_{1})*y_{2}=c*y_{2}=b.$ Therefore the quandle $X$ is connected. \end{proof} Notice that the stabilizer set $S_{a}=\{a=a^{(1)}, a^{(2)}, \cdots, a^{(m)} \}$ is a trivial subquandle of a quandle $X$ if and only if $S_{a}=S_{a^{(k)}}$ for any $k.$ In general, the equality $S_{a}=S_{a^{(k)}}$ does not have to hold; see Table \ref{Table 2}. Carter, Elhamdadi, Nikiforou, and Saito \cite{CENS} introduced an abelian extension theory of quandles, and a generalization to extensions with a dynamical cocycle was defined by Andruskiewitsch and Gra\~na \cite{AG}. We review the definition of quandle extension by a dynamical cocycle following \cite{CHNS}. \begin{definition}\cite{AG, CHNS} \emph{Let $X$ be a quandle and $S$ be a non-empty set.
Let $\alpha: X \times X \rightarrow \text{Fun}(S\times S,S)=S^{S\times S}$ be a function, so that for $a, b \in X$ and $s,t \in S$ we have $\alpha_{a, b}(s,t) \in S.$ Then $S \times X$ is a quandle by the operation $(s, a)\ast (t, b)= (\alpha_{a, b}(s,t),a \ast b),$ where $a \ast b$ denotes the quandle operation in $X$, if and only if $\alpha$ satisfies the following conditions: \begin{enumerate} \item $\alpha_{a, a}(s,s)=s$ for all $a \in X$ and $s \in S,$ \item $\alpha_{a, b}(-,t):S\rightarrow S$ is a bijection for all $a, b \in X$ and for all $t \in S,$ \item $\alpha_{a \ast b, c}(\alpha_{a, b}(s,t),u)=\alpha_{a \ast c, b \ast c}(\alpha_{a, c}(s,u),\alpha_{b, c}(t,u))$ for all $a, b, c \in X$ and $s, t, u \in S.$ \end{enumerate} Such a function $\alpha$ is called a \emph{dynamical quandle cocycle}. The quandle constructed above is denoted by $S\times_{\alpha}X,$ and is called the \emph{extension} of $X$ by a dynamical cocycle $\alpha.$} \end{definition} Unlike quasigroup quandles, $m$-almost quasigroup quandles are not connected in general. We obtain interesting examples of non-connected $m$-almost quasigroup quandles from quandle extensions of quasigroup quandles and trivial quandles. \begin{example} \emph{Let $(X;*)$ be a finite quasigroup quandle and $T_{n}$ the trivial quandle of order $n$. 
Then the dynamical quandle cocycle extension $X\times_{\alpha}T_{n}$ is an $((n-1)|X|+1)$-almost quasigroup quandle (but not a connected quandle if $n \geq 2$) where the dynamical quandle cocycle $\alpha: T_{n} \times T_{n} \rightarrow X^{X\times X}$ is defined by $\alpha_{a,b}(s,t)= s*t$ if $a=b$ and $\alpha_{a,b}(s,t)= s$ if $a\neq b.$} \end{example} \begin{table}[h] \centering \caption{A $4$-almost quasigroup quandle which is not connected; $R_{3}\times_{\alpha} T_{2}$}\label{Table 2} \begin{tabular}{c|cccccc} $\ast$ & 1 & 2 & 3 & 4 & 5 & 6 \\ \hline 1 & 1 & 3 & 2 & 1 & 1 & 1 \\ 2 & 3 & 2 & 1 & 2 & 2 & 2 \\ 3 & 2 & 1 & 3 & 3 & 3 & 3 \\ 4 & 4 & 4 & 4 & 4 & 6 & 5 \\ 5 & 5 & 5 & 5 & 6 & 5 & 4 \\ 6 & 6 & 6 & 6 & 5 & 4 & 6 \end{tabular} \end{table} We computed that $H_{2}^{R}(R_{3}\times_{\alpha} T_{2})=\mathbb{Z}^{4},$ $H_{3}^{R}(R_{3}\times_{\alpha} T_{2})=\mathbb{Z}^{8}\oplus\mathbb{Z}_{3}^{2},$ $H_{4}^{R}(R_{3}\times_{\alpha} T_{2})=\mathbb{Z}^{16}\oplus\mathbb{Z}_{3}^{8},$ $H_{2}^{Q}(R_{3}\times_{\alpha} T_{2})=\mathbb{Z}^{2},$ $H_{3}^{Q}(R_{3}\times_{\alpha} T_{2})=\mathbb{Z}^{2}\oplus\mathbb{Z}_{3}^{2},$ and $H_{4}^{Q}(R_{3}\times_{\alpha} T_{2})=\mathbb{Z}^{2}\oplus\mathbb{Z}_{3}^{6}.$ \section{Annihilation of rack and quandle homology groups of $m$-almost quasigroup quandles} The main result of this paper is Theorem \ref{Theorem 3.2}. We start with the important preparatory Lemma \ref{Lemma 3.1}. Let $X$ be an $m$-almost quasigroup quandle and ${\bf x}=(x_{1}, \cdots, x_{n}) \in X^{n}$.
We consider two chain maps $g_{1}^{j},g_{2}^{j}:C_{n}^{R}(X) \rightarrow C_{n}^{R}(X)$ given by $$g_{1}^{j}({\bf x})=\sum\limits_{k=1}^{m}( x_{j}^{(k)}, \ldots, x_{j}^{(k)}, x_{j}, x_{j+1},\ldots, x_{n} ),$$ $$g_{2}^{j}({\bf x})=\sum\limits_{k=1}^{m}( x_{j}^{(k)}, \ldots, x_{j}^{(k)}, x_{j}^{(k)}, x_{j+1},\ldots, x_{n} )$$ for $1 \leq j \leq n.$ Note that the chain map $g_{1}^{1}$ is equal to $m\text{Id}.$ \begin{lemma}\label{Lemma 3.1} Let $X$ be a finite $m$-almost quasigroup quandle. Suppose that for each $a \in X$ the stabilizer set $S_{a}$ is a trivial subquandle of $X$. Then the chain maps $(|X|-m)g_{1}^{j}$ and $(|X|-m)g_{2}^{j}$ are chain homotopic for each $1 \leq j \leq n.$ \end{lemma} \begin{proof} We define a chain map $g_{0}^{j}:C_{n}^{R}(X) \rightarrow C_{n}^{R}(X)$ by $$g_{0}^{j}({\bf x})=\sum\limits_{k=1}^{m}( x_{j}, \ldots, x_{j}, x_{j}^{(k)}, x_{j+1},\ldots, x_{n} )$$ for $1 \leq j \leq n.$ Notice that $g_{0}^{1}=g_{2}^{1}.$\\ We consider a chain homotopy $G_{n}^{j}:C_{n}^{R}(X) \rightarrow C_{n+1}^{R}(X)$ given, for $1 \leq j \leq n,$ by $$G_{n}^{j}({\bf x})=\sum\limits_{k=1}^{m}\sum\limits_{y \in X}\{( x_{j}^{(k)}, \ldots, x_{j}^{(k)}, x_{j}, y, x_{j+1},\ldots, x_{n} ) -( x_{j}, \ldots, x_{j}, x_{j}^{(k)},y, x_{j+1},\ldots, x_{n} )\}.$$ If $i \leq j-1,$ then by the idempotence condition of a quandle we have $$d_{i}^{(\ast)}G_{n}^{j}({\bf x})=\sum\limits_{k=1}^{m}\sum\limits_{y \in X}\{( x_{j}^{(k)}, \ldots, x_{j}^{(k)}, x_{j}, y, x_{j+1},\ldots, x_{n} ) -( x_{j}, \ldots, x_{j}, x_{j}^{(k)},y, x_{j+1},\ldots, x_{n} )\}$$ so that the formula above does not depend on the binary operation $*$; therefore $(d_{i}^{(\ast_{0})}-d_{i}^{(\ast)})G_{n}^{j}=0.$\\ If $i = j,$ then the formula again does not depend on the binary operation $*$ because the stabilizer set $S_{x_{j}}$ is trivial: $$d_{i}^{(\ast)}G_{n}^{j}({\bf x})=\sum\limits_{k=1}^{m}\sum\limits_{y \in X}\{( x_{j}^{(k)}, \ldots, x_{j}^{(k)}, y, x_{j+1},\ldots, x_{n} ) -( x_{j}, \ldots, x_{j}, y,
x_{j+1},\ldots, x_{n} )\}$$ thus $(d_{i}^{(\ast_{0})}-d_{i}^{(\ast)})G_{n}^{j}=0.$\\ If $i = j+1$, then since the stabilizer set $S_{x_{j}}$ is trivial and $X$ satisfies the $m$-almost quasigroup property, we have $\sum\limits_{y \in X}(x_{j}^{(k)}*y)=m(x_{j}^{(k)})+\sum\limits_{y \in X \setminus S_{x_{j}}}(x_{j}^{(k)}*y)=m(x_{j}^{(k)})+\sum\limits_{y \in X \setminus S_{x_{j}}}(y)$ and therefore $$d_{i}^{(\ast)}G_{n}^{j}({\bf x})=\sum\limits_{k=1}^{m}\{m( x_{j}^{(k)}, \ldots, x_{j}^{(k)}, x_{j}, x_{j+1},\ldots, x_{n} )+ \sum\limits_{y \in X \setminus S_{x_{j}}}( x_{j}^{(k)}*y, \ldots, x_{j}^{(k)}*y, x_{j}*y, x_{j+1},\ldots, x_{n} ) $$ $$ -m( x_{j}, \ldots, x_{j}, x_{j}^{(k)}, x_{j+1},\ldots, x_{n} ) -\sum\limits_{y \in X \setminus S_{x_{j}}}( x_{j}*y, \ldots, x_{j}*y, x_{j}^{(k)}*y, x_{j+1},\ldots, x_{n} )\} $$ $$=m\sum\limits_{k=1}^{m}\{( x_{j}^{(k)}, \ldots, x_{j}^{(k)}, x_{j}, x_{j+1},\ldots, x_{n} ) - ( x_{j}, \ldots, x_{j}, x_{j}^{(k)}, x_{j+1},\ldots, x_{n} )\}.$$ The part corresponding to the sum over $y \in X \setminus S_{x_{j}}$ cancels out because for any $x_{j},$ $x_{j}^{(k)} \in S_{x_{j}},$ and $y \in X \setminus S_{x_{j}},$ there are unique $l \in \{1,\ldots, m \}$ and $z \in X \setminus S_{x_{j}}$ so that $(x_{j}^{(k)}*y,x_{j}*y)=(x_{j}*z,x_{j}^{(l)}*z).$ Then we have $$(d_{i}^{(\ast_{0})}-d_{i}^{(\ast)})G_{n}^{j}({\bf x})=(|X|-m)\sum\limits_{k=1}^{m}\{( x_{j}^{(k)}, \ldots, x_{j}^{(k)}, x_{j}, x_{j+1},\ldots, x_{n} ) -( x_{j}, \ldots, x_{j}, x_{j}^{(k)}, x_{j+1},\ldots, x_{n} )\}.$$ Lastly, if $j+2 \leq i \leq n+1$, then by the invertibility condition of a quandle we have $\sum\limits_{y \in X}(y*x_{i-1})=\sum\limits_{y \in X}(y)$ and therefore $$d_{i}^{(\ast)}G_{n}^{j}({\bf x})=\sum\limits_{k=1}^{m}\sum\limits_{y \in X}\{( x_{j}^{(k)}*x_{i-1}, \ldots, x_{j}^{(k)}*x_{i-1}, x_{j}*x_{i-1}, y, x_{j+1}*x_{i-1},\ldots,x_{i-2}*x_{i-1},x_{i},\cdots, x_{n} )$$ $$-( x_{j}*x_{i-1}, \ldots, x_{j}*x_{i-1}, x_{j}^{(k)}*x_{i-1}, y, 
x_{j+1}*x_{i-1},\ldots,x_{i-2}*x_{i-1},x_{i},\cdots, x_{n} )\}.$$ On the other hand, if $i \leq j$, then $$G_{n-1}^{j}d_{i}^{(\ast)}({\bf x})=\sum\limits_{k=1}^{m}\sum\limits_{y \in X}\{( x_{j+1}^{(k)}, \ldots, x_{j+1}^{(k)}, x_{j+1}, y, x_{j+2}, \ldots, x_{n} )-( x_{j+1}, \ldots, x_{j+1}, x_{j+1}^{(k)}, y, x_{j+2},\ldots, x_{n} )\},$$ so that the formula above does not depend on the binary operation $*$, and $G_{n-1}^{j}(d_{i}^{(\ast_{0})}-d_{i}^{(\ast)})=0.$\\ If $j+1 \leq i$, then $$G_{n-1}^{j}d_{i}^{(\ast)}({\bf x})=\sum\limits_{k=1}^{m}\sum\limits_{y \in X}\{((x_{j} \ast x_{i})^{(k)},\ldots,(x_{j} \ast x_{i})^{(k)},x_{j} \ast x_{i}, y, x_{j+1} \ast x_{i},\ldots, x_{i-1} \ast x_{i},x_{i+1},\ldots,x_{n}) $$ $$ -(x_{j} \ast x_{i},\ldots,x_{j} \ast x_{i},(x_{j} \ast x_{i})^{(k)}, y, x_{j+1} \ast x_{i},\ldots, x_{i-1} \ast x_{i},x_{i+1},\ldots,x_{n})\}.$$ Notice that $d_{i+1}^{(\ast_{0})}G_{n}^{j}=G_{n-1}^{j}d_{i}^{(\ast_{0})}$ and by Lemma \ref{Lemma 2.3} $(2)$ $d_{i+1}^{(\ast)}G_{n}^{j}=G_{n-1}^{j}d_{i}^{(\ast)}$ if $j+1 \leq i \leq n.$\\ Hence we have the following equality: $$\partial_{n+1}G_{n}^{j}({\bf x})+G_{n-1}^{j}\partial_{n}({\bf x})=(-1)^{j+1}(|X|-m)(g_{1}^{j}({\bf x})-g_{0}^{j}({\bf x})),$$ that means $(|X|-m)g_{1}^{j}$ and $(|X|-m)g_{0}^{j}$ are chain homotopic for each $1 \leq j \leq n.$\\ We next consider a chain homotopy $F_{n}^{j}:C_{n}^{R}(X) \rightarrow C_{n+1}^{R}(X)$ defined by $$F_{n}^{j}({\bf x})=\sum\limits_{k=1}^{m}\sum\limits_{y \in X}\{( x_{j}, \ldots, x_{j}, y, x_{j}^{(k)}, x_{j+1},\ldots, x_{n} ) -( x_{j}^{(k)}, \ldots, x_{j}^{(k)},y, x_{j}^{(k)}, x_{j+1},\ldots, x_{n} )\}$$ for $2 \leq j \leq n.$ Then by a similar calculation as above, we have the following equality: $$\partial_{n+1}F_{n}^{j}({\bf x})+F_{n-1}^{j}\partial_{n}({\bf x})=(-1)^{j}(|X|-m)(g_{0}^{j}({\bf x})-g_{2}^{j}({\bf x})),$$ hence $(|X|-m)g_{0}^{j}$ is chain homotopic to $(|X|-m)g_{2}^{j}$ for each $2 \leq j \leq n.$\\ Therefore, we obtain the following sequence of 
chain homotopic chain maps $$(|X|-m)g_{1}^{j} \simeq (|X|-m)g_{0}^{j} \simeq (|X|-m)g_{2}^{j}$$ for each $1 \leq j \leq n.$ \end{proof} We now study annihilation of rack and quandle homology groups of $m$-almost quasigroup quandles under the same assumption as in Lemma \ref{Lemma 3.1} that every stabilizer set $S_{a}$ is a trivial subquandle of $X$. \begin{theorem} \label{Theorem 3.2} Let $X$ be a finite $m$-almost quasigroup quandle. Suppose that for every $a \in X$ the stabilizer set $S_{a}$ is a trivial subquandle of $X$. Then the torsion subgroup of $H_{n}^{R}(X)$ is annihilated by $m(\emph{lcm}(|X|,|X|-m)).$ \end{theorem} \begin{proof} If $X$ is a trivial quandle, i.e. $m=|X|$, then we are done because the rack homology group of a trivial quandle does not have any torsion elements. Assume now that $X$ is a nontrivial quandle. Then by Lemma \ref{Lemma 2.3} $(4),$ the quandle $X$ is connected. For each $j \in \{1,2,\cdots,n\},$ we define the symmetrizer chain map $g_{s}^{j}:C_{n}^{R}(X) \rightarrow C_{n}^{R}(X)$ by $g_{s}^{j}({\bf x})=\sum\limits_{y \in X}(y,\cdots,y,x_{j+1},\cdots,x_{n}).$ We first prove that $mg_{s}^{j}-|X|g_{2}^{j}$ for $j \in \{1,\cdots,n\}$ and $mg_{s}^{j-1}-|X|g_{1}^{j}$ for $j \in \{2,\cdots,n\}$ are null-homotopic by using chain homotopies $D_{n}^{j},E_{n}^{j}:C_{n}^{R}(X) \rightarrow C_{n+1}^{R}(X)$, respectively, given by $$D_{n}^{j}({\bf x})=\sum\limits_{k=1}^{m}\sum\limits_{y \in X}(x_{j}^{(k)},\cdots,x_{j}^{(k)},x_{j}^{(k)},y,x_{j+1},\cdots,x_{n}) \text{~for~} 1 \leq j \leq n ,$$ $$E_{n}^{j}({\bf x})=\sum\limits_{k=1}^{m}\sum\limits_{y \in X}(x_{j}^{(k)},\cdots,x_{j}^{(k)},y,x_{j},x_{j+1},\cdots,x_{n}) \text{~for~} 2 \leq j \leq n .$$ When $i \leq j,$ by the idempotence condition of a quandle we obtain the following equation: $$d_{i}^{(\ast)}D_{n}^{j}({\bf x})=\sum\limits_{k=1}^{m}\sum\limits_{y \in X}( x_{j}^{(k)}, \ldots, x_{j}^{(k)},y, x_{j+1},\ldots, x_{n} ).$$ Since the above formula does not depend on the binary operation 
$\ast,$ $(d_{i}^{(\ast_{0})}-d_{i}^{(\ast)})D_{n}^{j}=0.$\\ We check that $\sum\limits_{y \in X}(x_{j}^{(k)}*y)=m(x_{j}^{(k)})+\sum\limits_{y \in X \setminus S_{x_{j}}}(y)$ because the stabilizer set $S_{x_{j}}$ is trivial and $X$ satisfies the $m$-almost quasigroup property so that $\sum\limits_{k=1}^{m}\sum\limits_{y \in X}(x_{j}^{(k)}*y)=m\sum\limits_{y \in X}(y).$ Thus, if $i = j+1,$ then $$d_{i}^{(\ast)}D_{n}^{j}({\bf x})=m\sum\limits_{y \in X}(y,\cdots,y,x_{j+1},\cdots, x_{n} )$$ therefore, $(d_{i}^{(\ast_{0})}-d_{i}^{(\ast)})D_{n}^{j}=|X|g_{2}^{j}-mg_{s}^{j}.$ Finally, if $j+2 \leq i \leq n+1$, then we obtain $\sum\limits_{y \in X}(y*x_{i-1})=\sum\limits_{y \in X}(y)$ from the invertibility condition of a quandle so that $$d_{i}^{(\ast)}D_{n}^{j}({\bf x})=\sum\limits_{k=1}^{m}\sum\limits_{y \in X}( x_{j}^{(k)}*x_{i-1}, \ldots, x_{j}^{(k)}*x_{i-1},y, x_{j+1}*x_{i-1},\ldots,x_{i-2}*x_{i-1},x_{i},\cdots, x_{n} ).$$ On the other hand, when $i \leq j$ we have the following equation: $$D_{n-1}^{j}d_{i}^{(\ast)}({\bf x})=\sum\limits_{k=1}^{m}\sum\limits_{y \in X}(x_{j+1}^{(k)}, \ldots, x_{j+1}^{(k)},y,x_{j+2},\ldots, x_{n}),$$ hence $D_{n-1}^{j}(d_{i}^{(\ast_{0})}-d_{i}^{(\ast)})=0$ because the formula above does not depend on the operation $*.$\\ If $j+1 \leq i$, then we have $$D_{n-1}^{j}d_{i}^{(\ast)}({\bf x})=\sum\limits_{k=1}^{m}\sum\limits_{y \in X}((x_{j} \ast x_{i})^{(k)},\ldots,(x_{j} \ast x_{i})^{(k)},y, x_{j+1} \ast x_{i},\ldots, x_{i-1} \ast x_{i},x_{i+1},\ldots,x_{n}).$$ Note that if $j+1 \leq i \leq n,$ then $d_{i+1}^{(\ast_{0})}D_{n}^{j}=D_{n-1}^{j}d_{i}^{(\ast_{0})}$ and by Lemma \ref{Lemma 2.3} $(2)$ $d_{i+1}^{(\ast)}D_{n}^{j}=D_{n-1}^{j}d_{i}^{(\ast)}.$ \\ Finally, we obtain the following equality: $$\partial_{n+1}D_{n}^{j}({\bf x})+D_{n-1}^{j}\partial_{n}({\bf x})=(-1)^{j+1}(|X|g_{2}^{j}({\bf x})-mg_{s}^{j}({\bf x})),$$ which means the chain maps $|X|g_{2}^{j}$ and $mg_{s}^{j}$ are chain homotopic for each $1 \leq j \leq n.$ By using the chain 
homotopy $E_{n}^{j}:C_{n}^{R}(X) \rightarrow C_{n+1}^{R}(X)$ and a similar calculation as above, we also have the following equality: $$\partial_{n+1}E_{n}^{j}({\bf x})+E_{n-1}^{j}\partial_{n}({\bf x})=(-1)^{j}(|X|g_{1}^{j}({\bf x})-mg_{s}^{j-1}({\bf x})),$$ so that the chain maps $|X|g_{1}^{j}$ and $mg_{s}^{j-1}$ are chain homotopic for each $2 \leq j \leq n.$\\ Then from the above calculations and by Lemma \ref{Lemma 3.1}, we have a sequence of chain homotopic chain maps $$N\text{Id} \simeq \frac{N}{m}g_{2}^{1} \simeq \frac{N}{|X|}g_{s}^{1} \simeq \frac{N}{m}g_{1}^{2} \simeq \frac{N}{m}g_{2}^{2} \simeq \frac{N}{|X|}g_{s}^{2} \simeq \cdots \simeq \frac{N}{m}g_{1}^{n} \simeq \frac{N}{m}g_{2}^{n} \simeq \frac{N}{|X|}g_{s}^{n}$$ where $N=m(\text{lcm}(|X|,|X|-m)).$ We therefore obtain the same induced homomorphisms\\ $N\text{Id}=(\frac{N}{|X|}g_{s}^{n})_{*}:H_{n}^{R}(X) \rightarrow H_{n}^{R}(X).$ Recall that since $X$ is connected, the free part of the $n$th rack homology group of $X$ is $\mathbb{Z}$ generated by $(y,\ldots,y)$ for $y \in X.$ Hence $N\textrm{tor}(H_{n}^{R}(X))=0$ for any dimension $n.$ \end{proof} \begin{corollary}\label{Corollary 3.3} Let $X$ be as in Theorem \ref{Theorem 3.2}. If $X$ is nontrivial, then the reduced quandle homology of $X$ is annihilated by $N=m(\emph{lcm}(|X|,|X|-m))$, i.e. $N\widetilde{H}_{n}^{Q}(X)=0$. \end{corollary} \begin{proof} The rack homology of a quandle splits into degenerate homology and quandle homology, i.e. $H_{n}^{R}(X)=H_{n}^{D}(X) \oplus H_{n}^{Q}(X) $ (see \cite{L-N}), hence the torsion of $H_{n}^{Q}(X)$ is annihilated by $N$ by Theorem \ref{Theorem 3.2}. Furthermore, since $X$ is finite and nontrivial, $X$ is a finite connected quandle by Lemma \ref{Lemma 2.3} $(4).$ Then $\textrm{rank}(H_{n}^{Q}(X))=0$ for $n \geq 2$ and $\textrm{rank}(H_{1}^{Q}(X))=1.$ Therefore, the reduced quandle homology of $X$ is a torsion group annihilated by $N$. 
\end{proof} Corollary \ref{Corollary 3.4} is immediate from Lemma \ref{Lemma 2.3} $(3)$ and Theorem \ref{Theorem 3.2}. \begin{corollary} \label{Corollary 3.4} Let $X$ be a finite $m$-almost quasigroup quandle where $m \leq 3$. Then the torsion subgroup of $H_{n}^{R}(X)$ is annihilated by $N=m(\emph{lcm}(|X|,|X|-m)).$ \end{corollary} The connected quandle $QS(6)$ (which is not a quasigroup quandle) shows that, for connected quandles in general, the torsion subgroup of the rack and quandle homology cannot be annihilated by the order of the quandle. However, for the quandle $QS(6)$ we obtain the best possible bound from Theorem \ref{Theorem 3.2} and Corollary \ref{Corollary 3.3}. \begin{example} \emph{The least upper bound for orders of the torsion elements in $H_{n}^{R}(QS(6))$ is $|\textrm{Inn}(QS(6))|.$ In particular, for every dimension $n$, $|\textrm{Inn}(QS(6))|\widetilde{H}_{n}^{Q}(QS(6))=0.$\\ Since $QS(6)$ is a $2$-almost quasigroup quandle of order $6,$ $24$ is an upper bound for orders of the torsion elements in $H_{n}^{R}(QS(6))$ by Theorem \ref{Theorem 3.2}. Then $\textrm{Inn}(QS(6)) \cong S_{4}$ and $H_{3}^{Q}(QS(6))=\mathbb{Z}_{24}$ (see \cite{CKS-1}) imply that $|\textrm{Inn}(QS(6))|=|S_{4}|=24$ is the smallest number which annihilates $\textrm{tor}H_{n}^{R}(QS(6))$.} \end{example} \begin{remark} The order of a quandle inner automorphism group is not always the least upper bound for orders of the torsion elements in the rack or quandle homology groups of the quandle. For example, the $3$-almost quasigroup quandle of order $12$ (Rig quandle $Q(12,10)$) given by Table \ref{Table 3} (see \cite{Cla}) has $\emph{Inn}(Q(12,10))$ of order $216$, while Theorem \ref{Theorem 3.2} shows that $108$ annihilates the torsion of its homology. \end{remark} \begin{example} \emph{According to \cite{Ven} there are seven connected quandles with $15$ elements.
Five of them are quasigroup quandles while the Rig quandle $Q(15,2)$ is a $3$-almost quasigroup quandle and the Rig quandle $Q(15,7)$ is a $7$-almost quasigroup quandle. Furthermore, by Corollary \ref{Corollary 3.4} $\textrm{tor}H_{n}(Q(15,2))$ is annihilated by $3(\textrm{lcm}(15,15-3))=180.$ We have\footnote{We would like to thank L.~Vendramin for computing the homology $H_{3}^{R}(Q(15,2))=\mathbb{Z}\oplus\mathbb{Z}_{2}^{3}\oplus\mathbb{Z}_{30}.$} $H_{2}^{Q}(Q(15,2))=\mathbb{Z}_{2}\oplus\mathbb{Z}_{2}$ and $H_{3}^{Q}(Q(15,2))=\mathbb{Z}_{2}\oplus\mathbb{Z}_{30},$ therefore it is not annihilated by any power of $|X|=15.$ This is possible because the Rig quandle $Q(15,2)$ is not a homogeneous quandle, so the result in \cite{L-N} does not apply. The group $\textrm{Inn}(Q(15,2))$ has $60$ elements, so it annihilates $H_{2}^{Q}(Q(15,2))$ and $H_{3}^{Q}(Q(15,2)).$} \end{example} \begin{table}[h] \centering \caption{A $3$-almost quasigroup quandle of order $12$; $Q(12,10)$ }\label{Table 3} \begin{tabular}{c|cccccccccccc} $\ast$ & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 \\ \hline 1 & 1 & 1 & 1 & 12 & 11 & 10 & 5 & 4 & 6 & 9 & 7 & 8 \\ 2 & 2 & 2 & 2 & 11 & 10 & 12 & 6 & 5 & 4 & 8 & 9 & 7 \\ 3 & 3 & 3 & 3 & 10 & 12 & 11 & 4 & 6 & 5 & 7 & 8 & 9 \\ 4 & 8 & 9 & 7 & 4 & 4 & 4 & 10 & 12 & 11 & 3 & 2 & 1 \\ 5 & 7 & 8 & 9 & 5 & 5 & 5 & 11 & 10 & 12 & 2 & 1 & 3 \\ 6 & 9 & 7 & 8 & 6 & 6 & 6 & 12 & 11 & 10 & 1 & 3 & 2 \\ 7 & 11 & 12 & 10 & 3 & 1 & 2 & 7 & 7 & 7 & 4 & 5 & 6 \\ 8 & 12 & 10 & 11 & 1 & 2 & 3 & 8 & 8 & 8 & 5 & 6 & 4 \\ 9 & 10 & 11 & 12 & 2 & 3 & 1 & 9 & 9 & 9 & 6 & 4 & 5 \\ 10 & 6 & 5 & 4 & 7 & 8 & 9 & 3 & 2 & 1 & 10 & 10 & 10 \\ 11 & 5 & 4 & 6 & 9 & 7 & 8 & 1 & 3 & 2 & 11 & 11 & 11 \\ 12 & 4 & 6 & 5 & 8 & 9 & 7 & 2 & 1 & 3 & 12 & 12 & 12 \end{tabular} \end{table} \section{Future research} We plan to work on the problem of when the order of the inner automorphism group of a quandle annihilates the torsion of its homology, i.e. 
$|\textrm{Inn}(X)|\textrm{tor}H_{n}(X)=0.$ We have several examples for which this is the case: \begin{example} \begin{enumerate} \item \emph{Finite quasigroup quandles. \\ For a finite quasigroup quandle $X,$ $|X|$ annihilates $\textrm{tor}H_{n}(X)$ by Theorem \ref{Theorem 1.8} and Corollary \ref{Corollary 1.9}. Furthermore, since $X$ embeds in $\textrm{Inn}(X),$ we know that $|X|$ divides $|\textrm{Inn}(X)|$ by the orbit-stabilizer theorem. Therefore $|\textrm{Inn}(X)|\textrm{tor}H_{n}(X)=0.$ In particular, for odd $k,$ $\textrm{tor}H_{n}(R_{k})$ is annihilated by $k$ because the dihedral quandle $R_{k}$ is a quasigroup quandle.} \item \emph{Rig quandles $Q(6,1),$ $Q(6,2),$ $Q(12,8),$ and $Q(12,9).$\\ Since $Q(6,1)$ and $Q(6,2)$ are $2$-almost quasigroup quandles and $Q(12,8)$ and $Q(12,9)$ are $4$-almost quasigroup quandles, we see that $\textrm{tor}H_{n}(Q(6,1))$ and $\textrm{tor}H_{n}(Q(6,2))$ are annihilated by $24$ and that $\textrm{tor}H_{n}(Q(12,8))$ and $\textrm{tor}H_{n}(Q(12,9))$ are annihilated by $96$ by Theorem \ref{Theorem 3.2} and Corollary \ref{Corollary 3.3}. Moreover, $\textrm{Inn}(Q(6,1))$ and $\textrm{Inn}(Q(6,2))$ are isomorphic to the symmetric group $S_{4}$ of order $24$, and $|\textrm{Inn}(Q(12,8))|=96=|\textrm{Inn}(Q(12,9))|$ by \cite{Cla}. Therefore, in all cases described above the order of the inner automorphism group of each quandle annihilates the torsion of its homology. } \end{enumerate} \end{example} It is still an open problem whether the order of the quandle inner automorphism group annihilates the torsion subgroup of the rack and quandle homology for every finite connected quandle. We can state this problem more generally, i.e. we ask whether for any finite quandle $X$, $\textrm{tor}H_{n}(X)$ is annihilated by $|\textrm{Inn}(X)|.$ So far, we can show that if $k$ is odd, then $k$ annihilates the torsion of the rack and quandle homology of $R_{2k}$ (compare Conjecture $14$ in \cite{N-P-2}).
Notice that $R_{2k}$ is a non-connected quandle and that $\textrm{Inn}(R_{2k})$ is isomorphic to the dihedral group $D_{k}$ of order $2k.$ The smallest connected quandle for which our method does not work is the Rig quandle $Q(8,1)$, which is the abelian extension of the Alexander quandle $Q(4,1)=\mathbb{Z}_{2}[t]/(t^2+t+1).$ The order of the quandle inner automorphism group of $Q(8,1)$ is $24$; $H_{2}^{Q}(Q(8,1))=0$ and $H_{3}^{Q}(Q(8,1))=\mathbb{Z}_{8}.$ \section{Acknowledgements} J\'ozef~H.~Przytycki was partially supported by Simons Collaboration Grant-316446.\\ Seung Yeop Yang was supported by the George Washington University fellowship.
2011.11918
\section{Introduction} \noindent The Quantum Approximate Optimization Algorithm (abbreviated as QAOA) is a class of gate-model algorithms that can be implemented on near-term quantum computers (\cite{farhi2014quantum}). Initially, QAOA was designed to be applied to unconstrained optimization problems (\cite{farhi2014quantum},\cite{farhi2014lin}), but an instance in which QAOA performs better than its classical counterparts is yet to be seen. However, Farhi and Harrow showed that efficiently simulating QAOA for even the lowest-depth circuits would collapse the Polynomial Hierarchy (\cite{farhi2016quantum}). This placed QAOA as a strong contender at the forefront of the quantum supremacy debate (\cite{Harrow_2017}), which sparked a renewed interest in the field.\par \noindent A major modification to the QAOA framework was given in \cite{Hadfield_2019}, where the framework was adapted to constrained optimization problems by producing only feasible states (with respect to the constraints of the problem) on measurement in the computational basis. The authors termed this new framework the Quantum Alternating Operator Ansatz (which we abbreviate as \textsc{QAOA}\textsuperscript{+}{}). \par \noindent We turned our attention to applying the \textsc{QAOA}\textsuperscript{+}{} setup to the matching problem. A matching is a set of pairwise vertex-disjoint edges. Finding a maximum matching in a graph is known to be classically solvable in polynomial time \cite{edmonds1965paths}. There also exist quantum analogues of the classical algorithms that employ Grover amplification \cite{ambainis2005quantum}. However, counting problems with respect to matchings are \textsc{\#P}-hard \cite{valiant1979complexity}.
Hence, there do not exist efficient deterministic classical algorithms that create, in polynomial time, a superposition over all distinct matchings with non-zero amplitudes, or over all maximal matchings with non-zero amplitudes.\par \noindent In this work we design and apply a \textsc{QAOA}\textsuperscript{+}{} style algorithm to two different input states: a quantum state corresponding to the empty matching, and a quantum state corresponding to a superposition over all matchings of size $1$ with non-zero amplitudes. We obtain the following results: \begin{itemize} \item Even for $p=1$ and starting from the empty matching, our \textsc{QAOA}\textsuperscript{+}{} algorithm creates a superposition over all distinct matchings with non-zero amplitudes. \item Using our \textsc{QAOA}\textsuperscript{+}{} setup and the $\ket{W_1}$ state as the initial state, we converge to a superposition over maximal matchings in a number of iterations that is, on expectation, at most twice the input size. \item For $2$-regular{} graphs, we show that the output state of our \textsc{QAOA}\textsuperscript{+}{} setup gives a better expected matching size than the expected matching size under a uniform distribution over all matchings. \item For $2$-regular{} graphs, we compare the two initial states and show that starting from a superposition over all distinct matchings of size $1$, we obtain a better expected matching size than starting from the empty matching. \end{itemize} \section{Background: Quantum Alternating Operator Ansatz} \noindent\textsc{QAOA}\textsuperscript{+}{} style algorithms are applied to combinatorial optimization problems. A combinatorial optimization problem may be formulated in terms of $m$ clauses over an $n$-bit string $z$ (which represents $n$ variables): \begin{equation} C(z) = \sum_{i=1}^{m} C_{i}(z) \end{equation} where $C_{i}(z)=1$ if $z$ satisfies clause $C_{i}$, and $0$ otherwise. Each input string $z$ corresponds to a computational basis state $\ket{z}$.
Optimizing (here, maximizing) $C(z)$ means finding the $z$ for which the maximum number of clauses is satisfied. More generally, the objective function is some $f:D\rightarrow \mathbb{R}$ which we have to optimize. We consider a Hilbert space $\mathcal{D}$ of dimension $|D|$, with standard basis $\{\ket{z}:z \in D\}$. The domain $D$ is usually a feasible subset (determined by a specific set of constraints) of a larger configuration space. We now define parameterized families of operators that act on $\mathcal{D}$.\par \noindent We should be able to create a feasible initial state $\ket{s},s\in D$, efficiently from the $\ket{0}^{\otimes n}$ state. First, we apply to $\ket{s}$ the family of phase-separation operators $U_{P}(\gamma)$, which depend on the objective function $f$. Normally we define $U_{P}(\gamma)=e^{-i\gamma H_{f}}$, where $H_{f}$ is the Hamiltonian corresponding to the objective function $f$ (we follow the techniques outlined in \cite{hadfield2018representation}); however, we may alter this definition to suit our needs. Next, we have the family of mixing operators $U_{M}(\beta)=e^{-i\beta H_{m}}$, which depend on $D$ and its structure. $U_{M}(\beta)$ must preserve the feasible subspace, and provide transitions between all pairs of feasible states. A \textsc{QAOA}\textsuperscript{+}{} circuit consists of $p$ alternating layers of $U_P(\gamma)$ and $U_M(\beta)$ applied to a suitable initial state $\ket{s}$: \begin{equation} \ket{\gamma, \beta} = e^{-i\beta_{p}H_{m}}e^{-i\gamma_{p}H_{f}}\ldots e^{-i\beta_{1}H_{m}}e^{-i\gamma_{1}H_{f}}\ket{s}\label{gamma_beta_State} \end{equation} A computational basis measurement of the state $\ket{\gamma,\beta}$ returns a candidate solution state $\ket{z}$, having objective function value $f(z)$, with probability $\left|\bra{z}\ket{\gamma,\beta}\right|^{2}$.
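As a numerical illustration of the state preparation in \eqref{gamma_beta_State}, the following minimal numpy sketch alternates a diagonal phase-separation step with a mixing step. The three-element feasible space, the objective values, and the fully connected mixer Hamiltonian are all hypothetical illustrative choices, not part of our matching setup:

```python
import numpy as np

def qaoa_plus_state(f_vals, H_m, gammas, betas, s):
    """Prepare |gamma,beta> by alternating e^{-i gamma H_f} and e^{-i beta H_m}.

    f_vals : objective values f(z) on the feasible basis (H_f is diagonal)
    H_m    : Hermitian mixer acting on the feasible subspace
    s      : index of the feasible initial basis state |s>
    """
    f_vals = np.asarray(f_vals, dtype=float)
    state = np.zeros(len(f_vals), dtype=complex)
    state[s] = 1.0
    # Diagonalize the mixer once: e^{-i beta H_m} = V e^{-i beta L} V^dagger
    evals, V = np.linalg.eigh(H_m)
    for gamma, beta in zip(gammas, betas):
        state = np.exp(-1j * gamma * f_vals) * state  # phase separation
        state = V @ (np.exp(-1j * beta * evals) * (V.conj().T @ state))  # mixing
    return state

# Toy example: three feasible states with objective values 0, 1, 2,
# mixed by a fully connected Hamiltonian (hypothetical choices).
psi = qaoa_plus_state([0.0, 1.0, 2.0],
                      np.array([[0., 1., 1.], [1., 0., 1.], [1., 1., 0.]]),
                      gammas=[0.4], betas=[0.7], s=0)
probs = np.abs(psi) ** 2  # measurement probabilities |<z|gamma,beta>|^2
```

Because both steps are unitary, the probabilities always sum to one; sampling from `probs` corresponds to the computational basis measurement described above.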
The goal of QAOA and \textsc{QAOA}\textsuperscript{+}{} is to prepare a state $\ket{\gamma, \beta}$ from which we can sample a solution $z$ with a high value of $f(z)$. \section{Designing the Quantum Alternating Operator Ansatz circuit for the Matching problem} \noindent A matching $M$ in a graph $G(V,E)$ is a set of independent edges. Let us assume $G$ is undirected. We define a variable $x_e$ for every edge $e\in E$ and a constraint for every vertex $v\in V$. Consider the following integer linear program: \begin{equation} \begin{split} &\text{maximize } \sum_{e\in{E}}x_{e}\\ &\text{subject to}\\ &\sum_{e \sim v}x_{e}\leq 1,\;\;\forall v\in V\\ & x_{e} \in \{0,1\},\;\;\forall e\in E\\ \end{split}\label{Max_Matching_ILP} \end{equation} Here $e\sim v$ means that edge $e$ is incident on vertex $v$. The solution to the ILP in \eqref{Max_Matching_ILP} gives us a maximum matching of $G$.\par \noindent Each individual qubit in a basis state corresponds to an edge of $G$. For example, in the cycle graph $C_4$ (a cycle with 4 edges), the state $\ket{1010}$ denotes the matching containing the first and third edges only, and $\ket{0000}$ denotes the empty matching.\par \noindent We consider two choices for our initial state $\ket{s}$. The first choice is the empty matching $\ket{0}^{\otimes |E|}$. The second choice is the $\ket{W_1}$ state, a member of the family of generalized $W$ states defined in \cite{PhysRevA.62.062314}: $\ket{W_1}$ is the uniform superposition over all states of Hamming weight $1$. We note that both choices are feasible solutions to \eqref{Max_Matching_ILP}, and hence form valid matchings.\par \noindent Our objective function $g(x) = \sum_{e\in{E}}\;x_{e}$ counts the number of edges in the matching. We map it to the Hamiltonian $H_g$ (using the Hamiltonian composition rules from \cite{hadfield2018quantum}) described below.
\begin{equation} H_g = \sum_{e\in E}\frac{1}{2}(I-Z_{e})=\frac{|E|}{2}I-\frac{1}{2}\sum_{e\in E}Z_{e}\label{phase_hamil} \end{equation} Here $X$ and $Z$ denote the Pauli-X and Pauli-Z operators, respectively, and $Z_e$ signifies that the Pauli-Z operator is applied to the $e$\textsuperscript{th} qubit. As discussed before, the family of phase-separation operators is diagonal in the computational basis. Our phase-separation unitary is $U_{P}(\gamma) = e^{i\gamma H_g}$. We drop the constant term (since it only affects the algorithm by a global phase) to get: \begin{equation} U_{P}(\gamma) = e^{i\frac{\gamma}{2}\sum_{e\in E}Z_{e}}=\prod_{e\in E} e^{i\frac{\gamma}{2}Z_{e}} = \prod_{e\in E}R_{Z_{e}}(-\gamma)\label{phase-separation-unitary} \end{equation} \begin{definition}[Control Clause] The constraints are programmed into the control clause \begin{equation} f(e)=\prod_{\Tilde{e} \in \mathrm{nbhd}(e)} \overline{x_{\Tilde{e}}} \end{equation} where $\mathrm{nbhd}(e)$ refers to the edges that are adjacent to $e$.\label{def:control_clauses} \end{definition} \noindent The mixing unitary is responsible for evolving our system from one feasible state to another. We achieve this by encoding the constraints from \eqref{Max_Matching_ILP} into the mixing Hamiltonian $M_{e} = f(e)X_{e}$, using control clauses. The corresponding unitary operator is: \begin{equation} U_{M,e}(\beta) = e^{-i\beta M_{e}} = \bigwedge_{f(e)}\left(e^{-i\beta X_{e}}\right) = \bigwedge_{f(e)}R_{X_{e}}(\beta)\label{mixing_unitary} \end{equation} $R_{X_e}$ is the $X$-rotation gate applied to qubit $e$. In \eqref{mixing_unitary}, $\bigwedge_{f(e)}R_{X_{e}}(\beta)$ signifies a multi-qubit controlled $X$-rotation gate, where the control on the $R_{X_e}$ unitary is the control clause $f(e)$ corresponding to qubit $e$. Equation \eqref{mixing_unitary} can be efficiently implemented using multi-qubit controlled rotation gates.
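To make the controlled rotation in \eqref{mixing_unitary} concrete, the sketch below builds the matrix of $U_{M,e}(\beta)$ directly from its action on computational basis states. The 3-edge path graph, the little-endian edge indexing, and the value of $\beta$ are illustrative assumptions, not choices made in the paper:

```python
import numpy as np

def control_clause(x, e, nbhd):
    # f(e) = 1 iff no edge adjacent to e is already in the matching x
    return all(x[t] == 0 for t in nbhd[e])

def mixer_matrix(e, nbhd, n_edges, beta):
    """Matrix of U_{M,e}(beta): an X-rotation on qubit e, controlled by f(e)."""
    dim = 2 ** n_edges
    U = np.eye(dim, dtype=complex)
    for z in range(dim):
        x = [(z >> i) & 1 for i in range(n_edges)]  # bit i = variable x_i
        if control_clause(x, e, nbhd):
            zf = z ^ (1 << e)  # basis state with qubit e flipped
            U[z, z] = np.cos(beta / 2)
            U[zf, z] = -1j * np.sin(beta / 2)
    return U

# Path graph with edges e0-e1-e2: edge 1 is adjacent to both edge 0 and edge 2
nbhd = {0: [1], 1: [0, 2], 2: [1]}
U = mixer_matrix(1, nbhd, n_edges=3, beta=0.9)
```

Each clause-satisfying pair of basis states gets the $2\times 2$ block of $R_X(\beta)$, and every other basis state is left untouched, so the matrix is unitary by construction.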
\noindent In one round of the \textsc{QAOA}\textsuperscript{+}{} algorithm, we apply the individual mixing unitaries to every qubit. The (consolidated) mixing unitary is mathematically represented as: \begin{equation} U_{M}(\beta)= \prod_{e\in E}U_{M,e}(\beta)= \prod_{e\in E}e^{-i\beta M_{e}}=\prod_{e\in E}\bigwedge_{f(e)}R_{X_{e}}(\beta)\label{consolidated-mixing-unitary} \end{equation} Since the mixing unitaries as defined in \eqref{mixing_unitary} are not necessarily diagonal in the computational basis, the ordering of the unitaries in \eqref{consolidated-mixing-unitary} matters. We formally define the concept of fixed orderings and arbitrary orderings in Definition \ref{def:ordering}, when we prove results for $2$-regular graphs. At this point we also note that for $p=1$, the depth of our circuit is polynomial with respect to the input size (the number of edges $|E|$), as the phase-separation unitary can be implemented in depth $1$, while the mixing unitary can be implemented in depth $c\cdot|E|$, $c<5$.\par \noindent We now prove that the mixing unitary preserves the feasibility of the input state. \begin{figure*}[tbhp] \centering \begin{subfigure}{0.35\linewidth} \includegraphics[width=\linewidth]{images/cases.png} \caption{Construction tree branches} \label{fig:cases} \end{subfigure} \begin{subfigure}{0.6\linewidth} \includegraphics[width=\linewidth]{images/Construction_Tree.png} \caption{Construction tree for the $C_3$ graph (Triangle Graph)} \label{fig:construction_tree_c3} \end{subfigure} \hfill \caption{Construction Tree} \label{fig:construction_tree} \end{figure*} \begin{lemma}\label{lemma:feasibility_mixing} The mixing operator $U_M(\beta)$ preserves the feasibility of the initial state $\ket{s}$. If $\ket{s}$ is feasible, then $U_M(\beta)\ket{s}$ is also feasible. \end{lemma} \begin{proof} At any intermediate stage, let the quantum state be represented as $\ket{x}=\ket{x_1 x_2\ldots x_n}$.
After applying an individual mixing unitary to its corresponding qubit $x_e$, the resulting state $U_{M,e}(\beta)\ket{x}$ can be expanded and written as \begin{equation} \begin{split} U_{M,e}(\beta)\ket{x}&=\bigwedge_{f(e)}R_{X_{e}}\ket{x}\\&=f(e)R_{X_{e}}\ket{x}+\overline{f(e)}I\ket{x}\\ &=\left(f(e)\cos(\beta/2)+\overline{f(e)}\right)\ket{x}\\ &-if(e)\sin(\beta/2)\ket{x_{1}\ldots \overline{x}_{e}\ldots x_{n}} \end{split} \label{mixing_feasibility} \end{equation} From \eqref{mixing_feasibility}, we see that there are two cases: \begin{enumerate} \item The control clause evaluates to 0, $f(e)=0$. The output state is $\ket{x}$ itself. \item The control clause evaluates to 1, $f(e)=1$. This means that none of the edges adjacent to the current edge $e$ is already selected as part of the matching. The resultant state, expanded with respect to $x_e$, is a superposition of a matching including the current edge $e$ and a matching excluding it. The resulting state is consistent with the constraints of \eqref{Max_Matching_ILP}. \end{enumerate} Hence we see that if $\ket{x}$ is a feasible state, then $U_{M,e}(\beta)\ket{x}$ is also feasible. It follows by induction that the output state $U_M(\beta)\ket{s}$ is always a superposition of feasible states whenever the initial state $\ket{s}$ is feasible. \end{proof} \noindent We note that in \eqref{mixing_feasibility}, each $U_{M,e}(\beta)$ unitary is composed of three separate unitaries. We rename these unitaries, as it makes most of the subsequent analysis easier. \begin{definition}[Renaming the unitaries]\label{def:unitaries} We rename the component unitaries of \eqref{mixing_feasibility} as follows: \begin{enumerate} \item When $f(e)=0$, $U_{M,e}(\beta)\ket{x}=I\ket{x}$. We denote this as the $I_1$ unitary. \item When $f(e)=1$, $U_{M,e}(\beta)\ket{x}=\cos{(\beta/2)}\ket{x}-i\sin{(\beta/2)}X_e\ket{x}$.
\begin{itemize} \item The $(\cos{\frac{\beta}{2}})I$ operator is denoted as the $I_2$ unitary, and \item the $(\sin{\frac{\beta}{2}})X_e$ operator is denoted as the $X_2$ unitary. \end{itemize} \end{enumerate} \end{definition} \section{\textsc{QAOA}\textsuperscript{+}{} with empty matching as the initial state} \noindent The empty matching can be represented by the state $\ket{0}^{\otimes |E|}$. Using this as our initial state, we derive a few results for our \textsc{QAOA}\textsuperscript{+}{} setup. First, we define the concept of a construction tree. \begin{definition}[Construction Tree] In the \textsc{QAOA}\textsuperscript{+}{} setup for $p=1$, applying the $U_{M,e}(\beta)$ unitary on every qubit gives rise to at most two different branches of computation at every step. Thus the actions of the mixing unitary $U_{M}(\beta)$ can be represented as a binary tree having exactly $|E|+1$ layers. Each layer $i$, $0\leq i<|E|$, corresponds to the possible actions we can take for the current edge $e_i$, conditioned on the actions we have taken on edges $e_0$ to $e_{i-1}$. We refer to the binary tree corresponding to a given \textsc{QAOA}\textsuperscript{+}{} setup (for $p=1$) as its construction tree. \end{definition} We can see the concept of branches of computation in Figure \ref{fig:cases} and an example construction tree for the $C_3$ graph in Figure \ref{fig:construction_tree_c3}. \begin{theorem}\label{thm:superposition_all_matchings} Applying the $|E|$ unitaries of type $U_{M,e}(\beta)$ in the $\textsc{QAOA}\textsuperscript{+}{}_{p=1}$ setup with $\ket{0}^{\otimes |E|}$ as the initial state yields a superposition over all possible distinct matchings with non-zero amplitudes, in a graph with $|E|$ edges, where $\beta\in(0,\pi)$. \end{theorem} If the controls permit, the output of applying $U_{M,e}(\beta)$ on the current state $\ket{x}$ is a superposition of a matching including the edge $e$ and a matching excluding the edge $e$.
There is always a branch of the construction that allows us to pick the current edge, and the rest of the subtree is conditioned on this choice. Hence, if we start with the empty matching, then for any matching at least one branch of our construction tree produces that particular matching. This shows that if there exists a feasible matching, then there exists a branch of the construction which reaches that matching in $|E|$ steps. Here, $|E|$ steps are necessary since the edges might be supplied to us in an arbitrary order. This is seen in Figure \ref{fig:construction_tree_c3}. We now prove this formally. \begin{proof} First, we show that all possible distinct matchings are present in the output state of our \textsc{QAOA}\textsuperscript{+}{} setup for $p=1$. The proof is by induction on the construction tree. We hypothesize that at the start of layer $i$, $i>0$, we have a superposition over all possible distinct matchings of size at most $i$ with non-zero amplitude, using edges $e_0$ to $e_{i-1}$. We can easily verify this for the base case $i=1$, where we have a superposition over two matchings with non-zero amplitude: a matching including edge $e_0$ and a matching excluding edge $e_0$. For the inductive step, let us assume that at the start of layer $k$ we have a superposition over all possible distinct matchings with non-zero amplitude of size at most $k$, using edges $e_0$ to $e_{k-1}$. Now we apply $U_{M,e_k}(\beta)$ to every node in this layer. \begin{itemize} \item When $f(e)=0$, only the current matchings are carried forward to the next layer. \item When $f(e)=1$, we carry forward both the current matchings and new matchings formed by the union of the current matchings and edge $e_k$.
\end{itemize} This exhaustively creates all distinct matchings of size at most $k+1$ with non-zero amplitude, using edges $e_0$ to $e_{k}$, since we have assumed that the induction hypothesis holds and all distinct matchings of size at most $k$ with non-zero amplitude were present at the beginning of the current layer. By the principle of mathematical induction, at the end of $|E|$ layers, we have a superposition over all possible distinct matchings of size at most $|E|$ with non-zero amplitude, using all edges. \end{proof} \noindent Now, we define a counting function and state an important observation following Theorem \ref{thm:superposition_all_matchings}. \begin{definition} The number of distinct $k$-matchings in a graph $G$ is given by the function $\Phi_k(G)$. We also define $\Phi(G)$ as \begin{equation}\label{eq:number_of_matchings} \Phi(G) = \sum_{k=0}^{\nu(G)} \Phi_{k}(G) \end{equation} where $\nu(G)\leq\lfloor |E|/2\rfloor$ is the matching number of graph $G$.\label{def:phik_G} \end{definition} We can make the following observation from the construction tree. \begin{observation}\label{thm:number_of_matchings} In the construction tree for Theorem \ref{thm:superposition_all_matchings} for $\textsc{\textsc{QAOA}\textsuperscript{+}}_{p=1}$, we have exactly $\Phi(G)$ leaves, where each leaf corresponds to a distinct matching. We also have exactly $\Phi(G)$ branches, each ending in a distinct leaf. \end{observation} \begin{proof} For ${p=1}$, every layer of the construction tree corresponds to all possible choices we can make regarding one particular edge. Once an edge has been seen, we never go back to it again in the same iteration. Each branch of the computation represents a unique sequence of operators applied to the edges, since every subtree is conditioned on the choices taken on the earlier edges. Hence, no two branches of the construction tree produce the same output state.
By Theorem \ref{thm:superposition_all_matchings}, we see that the output state is a superposition over all possible distinct matchings. Combining the two arguments gives us exactly $\Phi(G)$ branches, each ending in a distinct leaf. \end{proof} \section{\textsc{QAOA}\textsuperscript{+}{} with $W_1$ state as the initial state} \noindent $\ket{W_1}$ represents a uniform superposition over all matchings of size $1$ in the context of our \textsc{QAOA}\textsuperscript{+}{} setup. \begin{equation} \ket{W_1}=\frac{1}{\sqrt{|E|}}\left(\ket{10\ldots 0}+\ket{01\ldots 0}+\ldots+\ket{0\ldots01}\right) \end{equation} With the help of the $\ket{W_1}$ state, we are able to eliminate the empty matching from the superposition of states in the output state. In this section we explore the behaviour of the output state, and show that we converge to a superposition over maximal matchings with non-zero amplitudes in an expected number of iterations of at most $2|E|$.\par \noindent At this point we must note that the definition of control clauses as given in Definition \ref{def:control_clauses} is not sufficient to demonstrate theoretically the superiority of using the $\ket{W_1}$ state over $\ket{0}^{\otimes |E|}$. Hence we put forward the following modification. \begin{definition}\label{def:modified_control} We update the definition of $f(e)$ as given in Definition \ref{def:control_clauses} to include the current qubit in its own control set. \begin{equation} f(e)=\overline{x_e}\cdot\prod_{\Tilde{e} \in \mathrm{nbhd}(e)} \overline{x_{\Tilde{e}}} \end{equation} Thus the control clause is set when neither the current edge nor its adjacent edges are already part of the matching. \end{definition} \noindent The advantage offered by this small modification is significant (as seen in Theorem \ref{thm:w1_state}).
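The effect of the modification can be checked by brute-force bookkeeping of the construction-tree branches. The sketch below applies one mixing sweep to the triangle graph $C_3$ starting from $\ket{W_1}$, once with the original control clause of Definition \ref{def:control_clauses} and once with the modified clause; it tracks branch amplitudes state-vector style rather than implementing a circuit, and the graph and value of $\beta$ are illustrative choices:

```python
import numpy as np

def sweep(state, nbhd, beta, include_self):
    """One mixing sweep: apply the branch rule of U_{M,e}(beta) for each edge e."""
    n = len(nbhd)
    for e in range(n):
        new = np.zeros_like(state)
        for z, amp in enumerate(state):
            if abs(amp) < 1e-12:
                continue
            x = [(z >> i) & 1 for i in range(n)]
            ctrl = all(x[t] == 0 for t in nbhd[e])
            if include_self:  # modified clause: current qubit joins its control set
                ctrl = ctrl and x[e] == 0
            if ctrl:
                new[z] += np.cos(beta / 2) * amp               # I_2 branch
                new[z ^ (1 << e)] += -1j * np.sin(beta / 2) * amp  # X_2 branch
            else:
                new[z] += amp                                  # I_1 branch
        state = new
    return state

# Triangle C_3: every pair of edges is adjacent
nbhd = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
# |W_1>: uniform superposition over the three size-1 matchings
w1 = np.zeros(8, dtype=complex)
for e in range(3):
    w1[1 << e] = 1 / np.sqrt(3)

out_orig = sweep(w1, nbhd, beta=np.pi / 2, include_self=False)
out_mod = sweep(w1, nbhd, beta=np.pi / 2, include_self=True)
```

With the original clause, an already-selected edge can be rotated back out, so the empty matching $\ket{000}$ acquires non-zero amplitude; with the modified clause it never does.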
We see that if $f(e)$ only evaluates the neighbourhood of $e$, then in the case where $\ket{x}$ represents a matching that already includes the current edge (as happens when the initial matching contains the current edge, e.g.\ in the $W_1$ state), applying $U_{M,e}$ either retains the current matching or reduces its size. If we use the updated control clause, then in both cases ($x$ contains or does not contain $e$), applying $U_{M,e}$ either retains the current matching or increases its size. \begin{theorem}\label{thm:w1_state} Applying the $|E|$ unitaries of type $U_{M,e}(\beta)$ in the $\textsc{\textsc{QAOA}\textsuperscript{+}}_{p=1}$ setup, with the $W_1$ state as the initial state and the modified control set as given in Definition \ref{def:modified_control}, yields a superposition over all possible non-empty distinct matchings with non-zero amplitudes, in a graph with $|E|$ edges, where $\beta\in(0,\pi)$. \end{theorem} \begin{proof} If we use the control sets from Definition \ref{def:control_clauses}, then the current edge might be flipped to $0$, giving rise to the possibility of the empty matching appearing in the output. With the control clauses of Definition \ref{def:modified_control}, we simply apply $I_1$ to the current edge if it was already in the initial matching. This means that we never get the empty matching in the output state. The rest of the proof follows the proof of Theorem \ref{thm:superposition_all_matchings}. \end{proof} \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{images/Construction_Tree_W1.png} \caption{Construction Tree for $C_3$ using $\ket{W_1}$ as the initial state} \label{fig:c3_w1_construction} \end{figure} \begin{definition} Let $\Phi^{+}(G)$ be defined as \begin{equation}\label{eq:number_of_matchings+} \Phi^{+}(G) = \sum_{k=1}^{\nu(G)} k\cdot\Phi_{k}(G) \end{equation} where $\nu(G)\leq\lfloor |E|/2\rfloor$ is the matching number of graph $G$.
\end{definition} \begin{observation}\label{thm:number_of_matchings_w1} In the construction tree for Theorem \ref{thm:w1_state} for $\textsc{\textsc{QAOA}\textsuperscript{+}}_{p=1}$, we have exactly $\Phi^{+}(G)$ leaves, where each leaf corresponds to a distinct matching. We also have exactly $\Phi^{+}(G)$ branches, each ending in a distinct leaf. \end{observation} \begin{proof} The construction tree for Theorem \ref{thm:w1_state} can be decomposed into $|E|$ different construction trees, corresponding to the $|E|$ different states in the initial $\ket{W_1}$ state. If we consider the construction tree corresponding to the $i$\textsuperscript{th} edge, then the output states of the $i$\textsuperscript{th} tree consist exactly of all matchings that include the $i$\textsuperscript{th} edge. Hence every distinct matching of size $k$ is present exactly $k$ times in the output state when we consider all $|E|$ construction trees together. Now, following the arguments of Observation \ref{thm:number_of_matchings}, we get that there are exactly $\Phi^+(G)$ states in the output of $\textsc{QAOA}\textsuperscript{+}{}_{p=1}$ when we have $\ket{W_1}$ as the initial state. \end{proof} \begin{theorem}\label{thm:max_no_iterations} In a graph with $|E|$ edges, with the modified control set from Definition \ref{def:modified_control}, $\beta\in[\pi/2,\pi)$, and $\ket{W_1}$ as the initial state, we can obtain an output state which is a superposition over all maximal matchings with non-zero amplitudes in $p\leq 2|E|$ rounds on expectation. \end{theorem} \begin{proof} If $f(e)=1$, then the $X_2$ unitary is applied to qubit $e$ with probability ${\sin^{2}(\beta/2)}$. This means that, on expectation, it will take $1/{\sin^{2}(\beta/2)}$ iterations for us to apply the $X_2$ unitary to qubit $e$.
Since Definition \ref{def:modified_control} ensures that the expected matching size of iteration $p$ is greater than or equal to the expected matching size of iteration $p-1$, we see that we converge to a superposition over all maximal matchings in $p\leq\frac{1}{\sin^{2}(\beta/2)}|E|$ iterations on expectation. This concludes our proof, since for $\beta\in[\pi/2,\pi)$ we have $\sin^{2}(\beta/2)\geq 1/2$, and hence $p\leq 2|E|$. \end{proof} \noindent The output state from Theorem \ref{thm:max_no_iterations} is a superposition over maximal matchings with non-zero amplitudes. Any further iterations of \textsc{QAOA}\textsuperscript{+}{} depend on the phase separation operator only, and the mixing operator acts trivially. From here, Grover diffusion techniques may be used to increase the amplitude of the states with greater Hamming weight, and using this in conjunction with good sampling techniques yields the maximum matching with high probability. \section{\textsc{QAOA}\textsuperscript{+}{} applied to $2$-regular graphs} \noindent Until now we have been proving results for general graphs. However, in order to prove stronger bounds, we limit our focus to cycle graphs ($\textsc{C}_n$) and $2$-regular{} graphs. In this section we show that the expected matching size of the output state when using $\ket{W_1}$ as the initial state is greater than the expected matching size of the output state when using $\ket{0}^{\otimes{|E|}}$ as the initial state. We also show that the expected matching size of the output state when using $\ket{0}^{\otimes{|E|}}$ as the initial state is greater than the expected matching size obtained from a uniform superposition over all matchings.\par \noindent First, we formally define the concepts of fixed and arbitrary orderings. \begin{definition}[Fixed Ordering and Arbitrary Ordering]\label{def:ordering} We define a cyclical ordering of a cycle graph $C_n$ as ordering the edges from $e_0$ to $e_{n-1}$ in a clockwise manner.
A fixed ordering is an ordering of edges in which the mixing unitary $U_M(\beta)$ acts on the edges in a cyclical order. If the edges are not supplied to the \textsc{QAOA}\textsuperscript{+}{} algorithm in a cyclical order, we refer to this ordering of edges as an arbitrary ordering. \end{definition} \noindent We now state a few lemmas, which can be shown via counting arguments: \begin{lemma}\label{lemma:cycle_graph_k_mat} The number of distinct $k$-matchings in a cycle graph $\textsc{C}_n${} is \begin{equation} \Phi_k (\textsc{C}_n) = \frac{n}{n-k}\binom{n-k}{k} \end{equation} \end{lemma} \begin{lemma}\label{lemma:path_graph_k_mat} The number of distinct $k$-matchings in a path graph $\textsc{P}_n$ is \begin{equation} \Phi_k(\textsc{P}_n) = \binom{n-k}{k} \end{equation} \end{lemma} \begin{lemma}\label{lemma:component_graph_k_mat} The number of distinct $k$-matchings in a graph $G$ with $r$ components $(G_1,G_2,\ldots,G_r)$ is \begin{equation} \Phi_k(G) = \sum_{k_1+k_2+\cdots+k_r=k}\;\prod_{i=1}^{r}\Phi_{k_i}(G_i) \end{equation} \end{lemma} \begin{theorem}\label{thm:exp_mat_w1_empty} In a $2$-regular graph with $|E|>16$ edges, with fixed ordering, and $\beta\in\left({\pi/2},{\pi}\right]$, the expected matching size of the output of $\textsc{\textsc{QAOA}\textsuperscript{+}}$ for ${p=1}$ with $\ket{W_1}$ as the initial state is greater than the expected matching size of the output of $\textsc{\textsc{QAOA}\textsuperscript{+}}$ for ${p=1}$ with the $\ket{0}^{\otimes |E|}$ state as the initial state. \end{theorem} \begin{proof} The construction tree of $\textsc{\textsc{QAOA}\textsuperscript{+}}$ for ${p=1}$ with $\ket{W_1}$ as the initial state can be represented as $|E|$ parallel construction trees of depth $|E|+1$, each having a distinct matching of size $1$ in the first layer. Let us consider the tree $T_i$, where the initial matching is $\{e_i\}$.
Using the arguments of Theorems \ref{thm:superposition_all_matchings} and \ref{thm:w1_state}, we see that $T_i$ produces exactly all matchings which contain the edge $e_i$. Similarly, from Observation \ref{thm:number_of_matchings_w1}, we can argue that every matching of size $k$ is produced in exactly $k$ construction trees.\par \noindent Let the amplitude of an arbitrary matching $M$ of size $k$ in the output of $\textsc{\textsc{QAOA}\textsuperscript{+}}$ for ${p=1}$ with the $\ket{0}^{\otimes |E|}$ state as the initial state be denoted as $\alpha_{M,0}$. Let the amplitude of $M$ for the $\ket{W_1}$ case be denoted as $\alpha_{M,W}$.\par \noindent In the $\ket{W_1}$ case, there are $k$ trees which give rise to the matching $M$. In one of these trees $T_i$, the edge $e_i$ is already included in the initial matching. Let the amplitude of $M$ for this tree be denoted as $\alpha_{M,W,d}$. Since we use the modified control clauses, we do not have to apply \begin{itemize} \item the $X_2$ unitary on edge $e_i$ to include it in the matching $M$; \item the $I_2$ unitary on the edges in $\mathrm{nbhd}(e_i)$ that appear before $e_i$ in the fixed ordering, to exclude them from the matching. \end{itemize} In both of these cases we simply use $I_1$ unitaries, which do not change the amplitude of the state. Let $d$ be the number of edges in $\mathrm{nbhd}(e_i)$ that appear before $e_i$ in the fixed ordering. We know that $d\in[0,k-1]$, as a particular matching of size $k$ is generated in $k$ out of the $|E|$ construction trees. Hence $\alpha_{M,W,d}$ can be expressed in terms of $\alpha_{M,0}$ as \begin{equation} \alpha_{M,W,d}=\frac{\alpha_{M,0}}{\sqrt{|E|}\sin{(\beta/2)}\cos{}^{d}(\beta/2)} \end{equation} The $1/\sqrt{|E|}$ factor comes from the nature of the $\ket{W_1}$ state.
The probability of obtaining $M$ in the $\ket{W_1}$ case is given as \begin{equation} \alpha_{M,W}^{2}=\sum_{d=0}^{k-1}\alpha_{M,W,d}^{2} \end{equation} We can express this in terms of the probability of obtaining $M$ in the $\ket{0}^{\otimes |E|}$ case as \begin{equation} \label{prob_comp_sum_w_0} \alpha_{M,W}^{2}=\sum_{d=0}^{k-1}\frac{\alpha_{M,0}^{2}}{|E|\sin^{2}{(\beta/2)}\cos^{2d}(\beta/2)} \end{equation} For the range $\beta\in\left({\pi/2},\pi\right]$ we have \begin{equation} \label{prob_comp_w_0} \alpha_{M,W}^{2}\geq\frac{2^{k+1}-2}{|E|}\cdot{\alpha_{M,0}^{2}} \end{equation} In \eqref{prob_comp_w_0}, $2^{k+1}-2>|E|$ holds when $k\geq\log_{2}\left({|E|}+2\right)$. Therefore, in the interval $\left[\log_{2}\left({|E|}+2\right),\nu(G)\right]$, we have $\alpha_{M,W}^{2}>\alpha_{M,0}^{2}$.\par \noindent Let us define a random variable $X$, which denotes the size of the matching in a cycle graph. We calculate the expected matching size as \begin{equation} \begin{split} \ex{X}=&\underset {\text{Case A}} { \sum_{k_A=0}^{\log_{2}\left({|E|}+2\right)-1}k_A\mathbb{P}[X=k_A] }\\&+\\&\underset{\text{Case B}} { \sum_{k_B=\log_{2}\left({|E|}+2\right)}^{\nu(G)}k_B\mathbb{P}[X=k_B] } \end{split} \end{equation} Now we compare the expected matching size in the two cases.
For Case A we upper bound $\ex{X}_{\ket{0}^{\otimes |E|}}-\ex{X}_{\ket{W_1}}$ as: \begin{equation}\label{uppercaseA} \begin{split} &{\sum_{k_A=0}^{\log_{2}\left({|E|}+2\right)-1}k_A\cdot\left(\mathbb{P}[X=k_A]_{\ket{0}^{\otimes |E|}}-\mathbb{P}[X=k_A]_{\ket{W_1}}\right)}\\ &={\sum_{k_A=0}^{\log_{2}\left({|E|}+2\right)-1}k_A\cdot\sum_{M,|M|=k_A}\left(\alpha_{M,0}^{2}-\alpha_{M,W}^{2}\right)}\\ &\text{To upper bound }\left(\alpha_{M,0}^{2}-\alpha_{M,W}^{2}\right)\text{, put }k_A=0\text{ in }\alpha_{M,W}^{2} \\ &\leq \sum_{k_A=0}^{\log_{2}\left({|E|}+2\right)-1}k_A\cdot\sum_{M,|M|=k_A}\alpha_{M,0}^{2} \end{split} \end{equation} For Case B we lower bound $\ex{X}_{\ket{W_1}}-\ex{X}_{\ket{0}^{\otimes |E|}}$ as: \begin{equation}\label{lowercaseB} \begin{split} &{\sum_{k_B=\log_{2}\left({|E|}+2\right)}^{\nu(G)}k_B\cdot\left(\mathbb{P}[X=k_B]_{\ket{W_1}}-\mathbb{P}[X=k_B]_{\ket{0}^{\otimes |E|}}\right)}\\ &={\sum_{k_B=\log_{2}\left({|E|}+2\right)}^{\nu(G)}k_B\cdot\sum_{M,|M|=k_B}\left(\alpha_{M,W}^{2}-\alpha_{M,0}^{2}\right)}\\ &\text{To lower bound }\left(\alpha_{M,W}^{2}-\alpha_{M,0}^{2}\right)\text{, put }\\ &k_B=\log_{2}\left({|E|}+2\right)\text{ in }\alpha_{M,W}^{2} \\ &\geq \sum_{k_B=\log_{2}\left({|E|}+2\right)}^{\nu(G)}k_B\cdot\sum_{M,|M|=k_B}\left(\frac{4|E|+6}{|E|}\cdot\alpha_{M,0}^{2}-\alpha_{M,0}^{2}\right)\\ &> \sum_{k_B=\log_{2}\left({|E|}+2\right)}^{\nu(G)}k_B\cdot\sum_{M,|M|=k_B}\left(4\cdot\alpha_{M,0}^{2}-\alpha_{M,0}^{2}\right)\\ &= \sum_{k_B=\log_{2}\left({|E|}+2\right)}^{\nu(G)}k_B\cdot3\cdot\sum_{M,|M|=k_B}\alpha_{M,0}^{2} \end{split} \end{equation} Since the sum of $\alpha_{M,0}^{2}$ over all possible matchings is $1$, we know that \begin{equation}\label{point1} \sum_{M,|M|=k}\alpha_{M,0}^{2}\leq 1 \end{equation} We set this sum to $1$.
We always have \begin{equation}\label{point2} k_B>k_A \end{equation} When $|E|>16$, we have \begin{equation}\label{point3} \underset{\text{Number of terms in Case B}}{\nu(G)-\log_{2}(|E|+2)} > \underset{\text{Number of terms in Case A}}{\log_{2}(|E|+2)} \end{equation} From \eqref{point1}, \eqref{point2}, and \eqref{point3} we see that the lower bound obtained in \eqref{lowercaseB} is greater than the upper bound obtained in \eqref{uppercaseA}. Hence, we have \begin{equation} \ex{X}_{\ket{W_1}}>\ex{X}_{\ket{0}^{\otimes |E|}},\;\;\; |E|>16 \end{equation} \noindent We know that $2$-regular graphs are composed of disjoint components, each of which is a cycle graph. Let us define a random variable $Y$, which denotes the size of the matching in a $2$-regular graph $G$. We also define random variables $Y_i$ for all the $r$ disconnected components of $G$, denoting the size of the matching in $G_i$. Using linearity of expectation over $\ex{Y_i}$, we have \begin{equation} \begin{split} \ex{Y_i}_{\ket{W_1}}&>\ex{Y_i}_{\ket{0}^{\otimes |E|}}\;\;\forall i\in\{1,2,\ldots,r\}\\ \implies \ex{Y}_{\ket{W_1}}&>\ex{Y}_{\ket{0}^{\otimes |E|}},\;\;\;|E|>16 \end{split} \end{equation} This concludes our proof. \end{proof} \begin{figure*}[bpht] \centering \includegraphics[width=\linewidth]{images/Theoretical_bounds.png} \caption{This figure corresponds to Theorem \ref{thm:qaoa_greater_uniform}. The upper and lower bounds for $\ex{X}_{\textsc{QAOA}\textsuperscript{+}{}}$ vs $\ex{X}_{\texttt{uniform}}$ for $\beta=\pi/2$ on Cycle Graphs having up to 59 edges.} \label{fig:theorem_bounds} \end{figure*} \begin{theorem}\label{thm:qaoa_greater_uniform} Let us consider the \textsc{QAOA}\textsuperscript{+}{} setup for a cycle graph $C_n$, $n>6$, with $p=1$, fixed ordering, empty initial matching, and $\beta\in\left[\frac{\pi}{2},\pi\right)$.
The expected matching size of the \textsc{QAOA}\textsuperscript{+}{} output state is greater than the expected matching size obtained from a uniform distribution over all matchings. \end{theorem} \begin{figure}[htbp] \centering \includegraphics[width=\linewidth]{images/Experimental_Bounds.png} \caption{This figure corresponds to Theorem \ref{thm:qaoa_greater_uniform}. Expected matching size of \textsc{QAOA}\textsuperscript{+}{} vs the uniform distribution for $\beta=\pi/2$ on Cycle Graphs up to 12 vertices, using \texttt{qasm\_simulator}. Figure \ref{fig:experiment_bounds} is a zoomed-in version of Figure \ref{fig:theorem_bounds}, along with the experimental values obtained on \texttt{qasm\_simulator}.} \label{fig:experiment_bounds} \end{figure} \begin{figure*}[htbp] \centering \includegraphics[width=\textwidth]{images/QAOA_2_regular_comparisons_new.png} \caption{Experimentally demonstrating Theorem \ref{thm:exp_mat_w1_empty}, Theorem \ref{thm:qaoa_greater_uniform}, Corollary \ref{corr:empty_state_two_reg}, and Corollary \ref{corr:w1_state_two_reg} for $2$-regular graphs up to $10$ vertices, on the \texttt{qasm\_simulator} backend of QISKIT.} \label{fig:w1_empty_two_reg} \end{figure*} \noindent\textbf{Note:} We were unable to find a completely theoretical proof of Theorem \ref{thm:qaoa_greater_uniform}: the strict lower bound for $\ex{X}_{\textsc{QAOA}\textsuperscript{+}{}}$ and the value of $\ex{X}_{\texttt{uniform}}$ have asymptotically identical curves. We instead obtained the theorem by plotting the values given by equations \eqref{prob_quantum} and \eqref{expect_uniform_classical}. The results are shown in Figure \ref{fig:theorem_bounds}, for Cycle Graphs up to 50 vertices. \begin{proof} We want to calculate the expected size of the matching in the output of \textsc{QAOA}\textsuperscript{+}{}($p=1$), with $\ket{0}^{\otimes{|E|}}$ as the initial state. Let us define a random variable $X$, which denotes the size of the matching.
We know that $\ex{X}=\sum_{k=0}^{\nu(G)}k\;\mathbb{P}[X=k]$. The value of $\mathbb{P}[X=k]$ can be obtained by squaring the amplitudes of the leaves in the construction tree of the \textsc{QAOA}\textsuperscript{+}{} circuit as given in Theorem \ref{thm:number_of_matchings}, and adding together the probabilities of all $\Phi_k(G)$ leaves. Let $M_k$ denote a matching of size $k$, and $e_{\ell}$ denote the last edge. In order to calculate the probabilities of $k$-matchings in a cycle graph under the proposed \textsc{QAOA}\textsuperscript{+}{} setup, we have to consider two cases: \begin{enumerate} \item Case \textbf{a}: $\mathbf{e_{\ell}\notin M_k}$. This transforms our underlying graph into a path graph with $n$ vertices and $n-1$ edges. If the first edge is included in the matching, then for the last edge we have $f(e)=0$, leading to an $I_1$ unitary. If the first edge is not included, the underlying graph is transformed into a path graph on $n-1$ vertices. The rest of the $n-2k$ unitaries will be $I_2$ unitaries, which contribute a total probability of $\cos^{2n-4k}{\beta/2}$. If the first edge is included, there are at most $n-2k$ $I_2$ unitaries. This analysis is recursive, so in order to make the calculation easier we lower bound the probability. Let $s=\sin^2{\beta/2}$ and $c=\cos^2{\beta/2}$. Using Lemma \ref{lemma:path_graph_k_mat}, we have \begin{equation} \label{last_edge_not_in_matching} \begin{split} a &> s^{k}c^{n-2k}\left(\binom{n-k-1}{k}+\binom{n-k-1}{k-1}\right)\\ \implies a&>\binom{n-k}{k}s^{k}c^{n-2k} \end{split} \end{equation} \item Case \textbf{b}: $\mathbf{e_{\ell}\in M_k}$. Then we have to count the number of distinct $(k-1)$-matchings of a path graph on $n-2$ vertices, which is $\phi_{k-1}(\textsc{P}_{n-2}) = \binom{n-k-1}{k-1}$. There are $k-1$ pairs of $X_2$ and $I_1$ unitaries, and one extra $X_2$ unitary for the last edge. The total contribution to the probability is $\sin^{2k}{\beta/2}$.
The rest of the $n-(2k-1)$ edges are hit with the $I_2$ unitary, which contributes a total probability of $\cos^{2n-4k+2}{\beta/2}$. We again set $s=\sin^2{\beta/2}$ and $c=\cos^2{\beta/2}$. Using Lemma \ref{lemma:path_graph_k_mat}, we have \begin{equation}\label{last_edge_in_matching} b = \binom{n-k-1}{k-1}s^{k}c^{n-2k+1} <\frac{k}{n-k}\binom{n-k}{k}s^{k}c^{n-2k} \end{equation} \end{enumerate} Combining \eqref{last_edge_not_in_matching} and \eqref{last_edge_in_matching}, for $n>5$: \begin{equation} s^{k}\cdot c^{n-2k}\cdot \Phi_k(G) \left(1-\frac{s\cdot k}{n}\right)<\mathbb{P}[X=k]<s^{k}\cdot c^{n-2k}\cdot\Phi_k(G)\label{bound_prob} \end{equation} which gives us the lower bound for the expected matching size as \begin{equation}\label{bound_expectation} \ex{X}_{\textsc{QAOA}\textsuperscript{+}{}}>\sum_{k=0}^{\nu(G)}s^{k}\cdot c^{n-2k}\cdot k\cdot\Phi_k(G)\left(1-\frac{s\cdot k}{n}\right) \end{equation} When $\beta=\pi/2$, the lower bound \eqref{bound_expectation} becomes \begin{equation} \ex{X}_{\textsc{QAOA}\textsuperscript{+}{}}>\sum_{k=0}^{\nu(G)}k\cdot\Phi_k(G)\cdot\left(\frac{1}{2}\right)^{n-k}\cdot\left(1-\frac{k}{2n}\right)\label{prob_quantum} \end{equation} Let $M(G)$ be the set of matchings of graph $G$, and consider the uniform distribution over $M(G)$. When $G$ is $C_n$, we can calculate the probability of obtaining a matching of size $k$ as \begin{equation} \mathbb{P}[X=k]_{\texttt{uniform}}=\frac{\Phi_k(G)}{\Phi(G)} \label{prob_uniform_classical} \end{equation} and the expected matching size as \begin{equation} \ex{X}_{\texttt{uniform}}=\sum_{k=0}^{\nu(G)}k\cdot \mathbb{P}[X=k] =\sum_{k=0}^{\nu(G)}\frac{k\cdot\Phi_k(G)}{\Phi(G)}\label{expect_uniform_classical} \end{equation} As noted earlier, we obtain the theorem by plotting and comparing the values of \eqref{prob_quantum} and \eqref{expect_uniform_classical}.
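As a sanity check on the combinatorics used above, the matching counts can be verified by brute force. The sketch below (ours, not part of the paper; plain Python, small $n$ only) confirms that the $k$-matching counts of path and cycle graphs agree with the binomial expressions used in Cases \textbf{a} and \textbf{b}, and evaluates \eqref{expect_uniform_classical} for small cycles.

```python
from itertools import combinations
from math import comb

def C(a, b):
    # binomial coefficient, extended to return 0 for a negative lower index
    return comb(a, b) if b >= 0 else 0

def count_matchings(edges, k):
    """Brute-force count of k-subsets of edges sharing no vertex (k-matchings)."""
    count = 0
    for sub in combinations(edges, k):
        verts = [v for e in sub for v in e]
        if len(verts) == len(set(verts)):
            count += 1
    return count

def path_edges(n):   # path P_n: n vertices, n - 1 edges
    return [(i, i + 1) for i in range(n - 1)]

def cycle_edges(n):  # cycle C_n: n vertices, n edges
    return path_edges(n) + [(n - 1, 0)]

for n in range(3, 9):
    for k in range(n // 2 + 1):
        # path identity: Phi_k(P_n) = C(n-k, k)
        assert count_matchings(path_edges(n), k) == C(n - k, k)
        # cycle counts split by the last edge, as in Cases a and b
        assert count_matchings(cycle_edges(n), k) == C(n - k, k) + C(n - k - 1, k - 1)

def expected_uniform(n):
    """Expected matching size under the uniform distribution over matchings of C_n."""
    counts = [count_matchings(cycle_edges(n), k) for k in range(n // 2 + 1)]
    return sum(k * c for k, c in enumerate(counts)) / sum(counts)
```

For instance, $C_4$ has $1+4+2=7$ matchings in total, giving an expected uniform matching size of $8/7$.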
\end{proof} \noindent It can be noted that we may improve the bounds in \eqref{bound_expectation}, at the expense of making the analysis more complicated, by continuing to analyze the recursive structure of the construction trees. Also, we can further increase the values obtained in \eqref{prob_quantum} by using a value of $\beta$ with $\frac{\pi}{2}<\beta<\pi$. \begin{corollary}[Theorem \ref{thm:qaoa_greater_uniform}]\label{corr:empty_state_two_reg} Let us consider a \textsc{QAOA}\textsuperscript{+}{} for a $2$-regular graph, $p=1$, fixed ordering, empty initial matching, and $\beta\in\left[\frac{\pi}{2},\pi\right)$. The expected matching size of the output state is greater than the expected matching size obtained from a uniform distribution over all matchings. \end{corollary} \begin{proof} The proof follows directly from applying linearity of expectation to Theorem \ref{thm:qaoa_greater_uniform}. In a $2$-regular graph, every component is a cycle graph, and the expected matching size of the entire graph is the sum of the expected matching sizes of the individual components. The corollary follows. \end{proof} \begin{corollary}[Theorem \ref{thm:qaoa_greater_uniform}]\label{corr:w1_state_two_reg} Let us consider a \textsc{QAOA}\textsuperscript{+}{} for a $2$-regular graph, $p=1$, fixed ordering, the $W_1$ state as the initial matching, and $\beta\in\left[\frac{\pi}{2},\pi\right)$. The expected matching size of the output state is greater than the expected matching size obtained from a uniform distribution over all matchings. \end{corollary} \begin{proof} The proof follows directly from Theorem \ref{thm:qaoa_greater_uniform} and Theorem \ref{thm:exp_mat_w1_empty}. \end{proof} \section{Conclusion and Future Work} In this work we have seen how the Quantum Alternating Operator Ansatz (and by extension the Quantum Approximate Optimization Algorithm) framework can be applied to the graph matching problem.
We have argued the merits of using $W_1$ states over the trivial empty state as the initial state, and shown that the choice of initial state matters for the output. We have also provided preliminary experimental validation for our various theoretical claims.\par \noindent A possible future direction of this work would be investigating sampling techniques which produce a maximum matching from a superposition over all maximal matchings with non-zero amplitudes. Designing efficient samplers for the output state itself might have both algorithmic and complexity-theoretic importance, since counting problems with respect to matchings are \textsc{\#P}-hard, as proved in \cite{valiant1979complexity}. Other future directions include theoretically improving the lower bound in \eqref{prob_quantum}, and proving similarly strong results for more general classes of graphs. \input{main.bbl} \end{document}
\section{Introduction} Over the last decade, research attention has increasingly concentrated on multi-agent systems (MASs). A fundamental problem in MASs is designing a network protocol for consensus, i.e., driving the agents to converge to a common point or state value. Consensus has been extensively investigated in the literature \cite{shang2020resilient,huang2020distributed}. Reviewing the existing studies in this area, one may note that the effect of network-induced communication constraints on consensus control performance has attracted extensive attention, covering problems such as communication delays \cite{wang2020distributed}, switching topology \cite{Xie2015Event}, and discrete interagent information exchange \cite{zhang2020semi}. A main source of communication constraints \cite{yan2020performance} is scarce network bandwidth. Generally speaking, the communication channels of MASs are multipurpose, and various kinds of interagent information share the common channels. To achieve the desired timeliness under limited bandwidth, the communication burden should be reduced. It is well known that continuous signals among agents usually require more communication bandwidth than discrete signals. In light of this, sample-based communication mechanisms have been proposed to sample the information exchanged among agents. Under a well-designed sample-based communication mechanism, consensus of MASs can be guaranteed with less channel occupancy. Sample-based mechanisms come in two kinds: the event-triggered mechanism (ETM) and the time-triggered mechanism (TTM). The TTM takes samples according to time and is widely used in existing sample-based network protocols \cite{Xian2015Event,Zhang2016Survey}.
One potential drawback, however, is that time-driven sampling, being independent of system states, feedback signals, communication resources, etc., may result in unnecessary and redundant sampled data \cite{8959085, Hetel2017Recent}. To optimize the sampling strategy, the ETM was proposed, in which sampling is triggered by predefined conditions. These conditions depend on the system state, the feedback signals, or other designed quantities. The ETM dates back to the late 1950s \cite{Ellis1959Extension} and has since been extensively developed in the control community. One pioneering work on MAS consensus with ETM is \cite{Tabuada2007Event}, where the control actuation is triggered whenever the error becomes large enough w.r.t. the norm of the state. The proposed ETM guarantees the consensus performance and relaxes the requirement of periodic execution. Following \cite{Tabuada2007Event}, other publications show that ETMs are suitable for a class of first-order MASs \cite{Dimarogonas2012Distributed}, for double-integrator MASs \cite{Seyboth2013Event}, and for a class of linear time-invariant MASs \cite{Guinaldo2012Distributed}. Using ETMs, the above literature focuses on reducing controller updates in the agents but still requires continuous interagent exchange. Aiming at alleviating the burden on network channels, \cite{Fan2013Technical} presents an event-triggered algorithm that dispenses with continuous measurements of neighbors' states. Additionally, \cite{Zhu2014Event} proposes two event-triggered condition functions: one for reducing control updates, the other for avoiding continuous communication among agents.
Considering second-order leader-following MASs with nonlinear dynamics, \cite{Li2015Event,Zhao2017Event} propose distributed event-triggered sampling control approaches, where the agents broadcast only their discrete state values and the local controllers update their outputs only when the triggering conditions are satisfied. \cite{Guo2014A} uses an event-triggered framework that is free of continuous monitoring of the triggering condition and gives sufficient conditions for the consensus of MASs with linear agents. There is a common flaw in the aforementioned work: centralized information depending on the spectrum of the Laplacian matrix is required a priori when designing the ETM and the control protocol. See Remark 2 in \cite{Zhu2014Event}, Eq. (11) in \cite{Zhao2017Event}, Theorem 2 in \cite{Guo2014A}, etc., just to name a few. This means the ETMs and control protocols are based on the assumption that each agent knows the overall communication topology. Relaxing such a centralized assumption is therefore worth investigating, in order to take full advantage of the power of distributed protocols. Fortunately, much progress has been made in the study of fully distributed consensus protocols for MASs \cite{Yu2011Distributed,Zhongkui2013Distributed,Zhongkui2015Designing, yan2019event,Yuezu2017}. Relying only on the local information of each agent, distributed adaptive consensus protocols have been presented and applied to undirected communication graphs \cite{Yu2011Distributed,Zhongkui2013Distributed, goebel2020unifying} and directed communication graphs \cite{Zhongkui2015Designing, Yuezu2017, fu2019robust}. However, these protocols require continuous local and interacting states, which does not suit the case where communication and computation resources are limited. In particular, combining them with ETMs so as to reduce the frequency of interagent exchanges with less conservatism is a promising topic.
To the best of the authors’ knowledge, however, this issue has not yet been appropriately addressed in the literature. Designing a novel distributed ETM consensus protocol thus naturally becomes the motivation of the present paper. In this paper, the consensus protocol is expected to be free of centralized information depending on the spectra of the Laplacian matrix. Moreover, the agents in the considered MAS have nonlinear dynamics under a connected undirected graph. The primary contributions are summarized as follows. a) To overcome the challenging problem that information depending on the spectra of Laplacian matrices is required a priori to select the parameters of event-triggered functions, a novel event-triggered sampled-data mechanism with an adaptive threshold is first proposed. b) A fully distributed consensus protocol for second-order MASs with nonlinear dynamics is designed, which is based on event-triggered sampled-data information exchanged among agents. c) Only relative discrete position information is employed in both the event-triggered rule and the consensus protocol, so that undesired velocity measurements are avoided. Throughout this paper, $\mathbb{R}^n$ and $\mathbb{R}^{n\times n}$ denote the $n$-dimensional Euclidean space and the set of all $n\times n$ real matrices, respectively; $\|\cdot\|$ stands for either the Euclidean vector norm or the spectral norm of a matrix; $\otimes$ denotes the Kronecker product; $I_n$ represents an $n \times n$ identity matrix; $\lambda_{min}(\cdot)$ and $\lambda_{max}(\cdot)$ denote the minimum and maximum eigenvalues of a matrix; $diag\{d_1,\ldots,d_n\}$ denotes the diagonal matrix with the elements $d_1,\ldots,d_n$ on the diagonal. \section{Preliminaries}\label{2secPre} The following lemmas are necessary for the analysis of this paper. \subsection{Some supporting lemmas} \begin{lem}\label{lem3} Let $\omega: \mathbb{R} \rightarrow \mathbb{R}$ be a uniformly continuous function on $[0,\infty)$.
Suppose that $\lim_{t \rightarrow \infty} \int_{0}^{t} \omega (\tau)\, d \tau$ exists and is finite. Then, $$ \omega(t) \rightarrow 0 \text{ as } t \rightarrow \infty. $$ \end{lem} \begin{lem}\label{lem-SCHUR} The symmetric linear matrix inequality (LMI) \begin{equation*} S=S^T= \begin{pmatrix} A&B\\ *&C \end{pmatrix}<0 \end{equation*} is equivalent to either of the following conditions: \begin{enumerate} \item $A<0,~C-B^TA^{-1}B<0$; \item $C<0,~A-BC^{-1}B^T<0$. \end{enumerate} \end{lem} \begin{lem}\label{lem-BABA} \cite{Slotine2004Applied} For a function $V(x,t)$, $\dot V(x,t) \rightarrow 0$ as $t\rightarrow \infty$ holds when the following conditions are met: \begin{enumerate} \item $V(x,t)$ has a lower bound; \item $\dot V(x,t)$ is negative semi-definite; \item $\dot V(x,t)$ is uniformly continuous w.r.t. time, in other words, $\ddot V(x,t)$ is bounded. \end{enumerate} \end{lem} \begin{lem}\label{LEM - L} The Laplacian matrix $L$ of an undirected graph $\mathcal{G}$ is positive semi-definite; it has a simple zero eigenvalue, with all other eigenvalues positive, if and only if the undirected graph $\mathcal{G}$ is connected. \end{lem} \begin{lem}\label{lem - H}\cite{Chen2007Pinning} If $L$ is irreducible, $L_{ij} = L_{ji} \leq 0$ for $i \ne j $, and $\sum_{j=1}^{N}L_{ij} = 0$ for $i=1,2,\ldots,N$, then, for any constant $\varpi >0$, all eigenvalues of the matrix $H= L+B$ are positive, i.e., $\lambda(H)>0$, where $B = diag\{\varpi,0,\ldots,0\}$. \end{lem} \subsection{Graph theory}\label{GT} The communication-graph notation used in this paper is standard in the literature.
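The equivalence in Lemma \ref{lem-SCHUR} can be illustrated numerically. The sketch below (ours, not part of the paper) checks the scalar-block case, where negative definiteness of $S$ reduces to a sign test on $A$ together with a positive determinant:

```python
import random

def neg_def_2x2(a, b, c):
    """S = [[a, b], [b, c]] is negative definite iff a < 0 and det(S) > 0."""
    return a < 0 and a * c - b * b > 0

random.seed(0)
for _ in range(1000):
    a = random.uniform(-3, 3)
    b = random.uniform(-3, 3)
    c = random.uniform(-3, 3)
    lhs = neg_def_2x2(a, b, c)
    # Schur-complement conditions of the lemma, specialized to scalar blocks;
    # short-circuiting guards the division by a (resp. c)
    cond_a = a < 0 and c - b * b / a < 0
    cond_c = c < 0 and a - b * b / c < 0
    assert lhs == cond_a == cond_c
```

Multiplying $c - b^2/a < 0$ by the negative quantity $a$ recovers $ac - b^2 > 0$, which is exactly the determinant test, so the three predicates agree on every sample.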
The networking topology among the $N$ followers is modeled by a positively weighted undirected graph $\mathcal{G}=(\mathcal{V},\mathcal{E},\mathcal{W})$, where $\mathcal{V}$ denotes a nonempty vertex set $\left\lbrace v_1,v_2,\ldots,v_N \right\rbrace $ describing the agents, $\mathcal{E} \subset \mathcal{V} \times \mathcal{V}$ denotes the set of undirected edges $e_{ij}$ describing the information exchange, and $\mathcal{W}= ({w}_{ij})_{N\times N}$ denotes the underlying weighted adjacency matrix with nonnegative elements. An undirected edge $e_{ij}$ in graph $\mathcal{G}$ means that nodes $v_i$ and $v_j$ can exchange information with each other. If $e_{ij}$ exists between two nodes, $w_{ij} = w_{ji}>0$; otherwise, $w_{ij} = w_{ji}=0$. A vertex $i \in \mathcal{V}$ is globally reachable if every vertex other than $i$ has at least one path starting at that vertex and ending at vertex $i$; a graph is connected if every vertex in $\mathcal{V}$ is globally reachable. Furthermore, we assume that $i \not\in \mathcal{N}_i$ (no self-loops), and hence $w_{ii}=0$ for all $i \in \mathcal{V}$. The Laplacian matrix $\textbf{L}=[l_{ij}]$ is defined by \begin{equation*} l_{ij} = \left\{\begin{aligned} & \sum_{j\in \mathcal{N}_i}w_{ij}~~i=j,\\ & -w_{ij}~~~i\neq j.\\ \end{aligned}\right. \end{equation*} For a networking topology with a leader, the total communication topology between the leader and its followers can be formulated by a graph $\bar{\mathcal{G}}$ with $\mathcal{G} \subset \bar{\mathcal{G}}$. In $\bar{\mathcal{G}}$, the leader can only send information to its out-neighboring followers and cannot receive information from them. Let $K=[k_1,\ldots,k_N]^T$ denote the vector of weights from the leader to its followers.
Accordingly, the Laplacian matrix of $\bar{\mathcal{G}}$ is defined by \begin{equation*} \bar L =\begin{pmatrix} 0&\textbf{0}\\ -K& H \end{pmatrix}, \end{equation*} where $H=\left\lbrace h_{ij}\right\rbrace_{N\times N} = \textbf{L}+D$ and $D=diag\{k_1,\ldots,k_N\}$. \subsection{Problem formulation} The second-order MAS considered in this paper consists of one leader and $N$ followers and can be formulated as \begin{equation}\label{sys1} \begin{split} \dot x_i (t) =& v_i (t) ,\\ \dot v_i (t) =& f\left(t,x_i(t),v_i(t) \right) + u_i (t),\\ \end{split} \end{equation} where $x_i(t)$, $v_i(t)$, $u_i(t) \in \mathbb{R}^{n}$ denote the position, velocity, and control input of agent $i$, respectively, and $f\left(\cdot \right) $ is a continuously differentiable vector-valued nonlinear function describing the intrinsic dynamics of the agents. The dynamics of the leader is governed by \begin{equation}\label{sys-leader} \begin{split} \dot x_0 (t) =& v_0 (t) ,\\ \dot v_0 (t) =& f\left(t,x_0(t),v_0(t) \right),\\ \end{split} \end{equation} where $x_0$, $v_0$ are the position and velocity of the leader. Throughout this paper, the following assumption is made. \begin{assumption}\label{ASS1} For the nonlinear function $f(t,x_i(t),v_i(t))$, the velocity state ${v}_i(t)$ enters linearly, which means \begin{equation*} f(t,x_i(t),v_i(t))\leq \varsigma v_i(t)+ f(t, x_i(t)) ~~~\forall x_i, v_i \in \mathbb{R}^n , \end{equation*} where $\varsigma$ is a scalar or a matrix with proper dimensions. Additionally, for any $ x_i, x_j \in \mathbb{R}^n $, there exists a nonnegative constant $\rho$ such that \begin{equation*} \|f(t,x_i) - f(t,x_{j})\| \leq \rho \|x_i - x_j \|.
\end{equation*} \end{assumption} In the existing literature, the event-triggered controller for agent $i$ is usually designed as (taking \cite{Zhao2017Event} as an example) \begin{equation}\label{existing} \begin{split} u_i(t) =& - \tilde\alpha \sum_{j=1}^{N}w_{ij} \left[ {x}_j (t_k^j)-{x}_i (t_k^i)+ {v}_j (t_k^j)-{v}_i (t_k^i) \right]\\ & -\tilde\alpha k_i \left[ {x}_i (t) - {x}_0 (t) + {v}_i (t) - {v}_0 (t)\right], t\in [t_k^i,t_{k+1}^i ), \end{split} \end{equation} where $\tilde{\alpha}>0$ is the coupling strength and $t_k^j \triangleq \text{arg~min}_p \{t-t_p^j|t\geq t_p^j,p\in \mathbb{N}\}$, i.e., $t_k^j$ is the latest triggering time of agent $j$ before time $t$. The control protocol is distributed since each agent only uses local information of neighboring agents, as can be clearly seen in \eqref{existing}. Similar distributed protocols can be found in \cite{Xie2015Event,Zhu2014Event,Li2015Event,Guo2014A}. In these works, the feasibility of the consensus criteria requires the coupling gains and the eigenvalues of a special matrix associated with the Laplacian matrix to satisfy additional conditions. For example, in \cite{Zhao2017Event}, $\lambda_{min} (L+D+(L+D)^T)>\frac{2\rho}{\tilde{\alpha}}$, where $L$ denotes the Laplacian matrix and $D$ denotes the leader adjacency matrix. To satisfy this condition, the Laplacian matrix and the leader adjacency matrix have to be known a priori for coupling-gain design. One may ask why not simply make $\frac{2\rho}{\tilde\alpha}$ sufficiently small, thereby avoiding the global spectral information. Note, however, that a sufficiently small $\frac{2\rho}{\tilde\alpha}$ means a large value of $\tilde\alpha$, which directly increases the energy cost of the control. Hence it is energy-efficient and of great significance to design a fully distributed approach that uses neither the Laplacian matrix nor the leader adjacency matrix.
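The spectral facts underlying such conditions are Lemma \ref{LEM - L} and Lemma \ref{lem - H}. The small illustration below (ours, not part of the paper; pure Python, Sylvester's criterion, a hypothetical $4$-agent cycle with unit weights) shows that the Laplacian alone is only positive semi-definite, while pinning a single leader weight renders $H = L + D$ positive definite:

```python
def det(M):
    """Determinant by cofactor expansion (fine for the small matrices here)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def is_pos_def(M):
    """Sylvester's criterion: all leading principal minors are positive."""
    return all(det([row[:k] for row in M[:k]]) > 0 for k in range(1, len(M) + 1))

# adjacency of a connected undirected 4-agent cycle (unit weights, assumed)
W = [[0, 1, 0, 1],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [1, 0, 1, 0]]
N = 4
L = [[(sum(W[i]) if i == j else 0) - W[i][j] for j in range(N)] for i in range(N)]

# L is only positive semi-definite (simple zero eigenvalue), so the strict
# test fails for L ...
assert not is_pos_def(L)

# ... but pinning a single leader weight k_1 = 1 makes H = L + D positive definite
D = [[1 if i == j == 0 else 0 for j in range(N)] for i in range(N)]
H = [[L[i][j] + D[i][j] for j in range(N)] for i in range(N)]
assert is_pos_def(H)
```

This is exactly why knowing only that one agent is pinned, rather than the full spectrum of $L+D$, already guarantees a well-posed error dynamics.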
In this paper, we will design an event-triggered communication mechanism to achieve leader-following consensus for second-order MASs, together with a consensus control protocol with adaptively updated coupling gains. \begin{defi} Consensus of a leader-following second-order MAS is said to be asymptotically achieved if both $\lim_{t\rightarrow \infty}\|x_i(t) - {x}_0(t)\| = 0$ and $\lim_{t\rightarrow \infty}\|v_i(t) - {v}_0(t)\| = 0$, $i = 1,\ldots,N$, are satisfied for any initial values. \end{defi} \section{Main results}\label{mainresults} In this section, the main results of this paper are presented. Generally speaking, the event-triggered transmission strategy consists of two modules \cite{Zhao2017Event}: (a) the consensus control protocol and (b) the event-triggered rule. For a better understanding, the overall framework of the proposed event-triggered transmission strategy is illustrated in Fig. \ref{fig1} and will be specifically explained in the following subsections. \begin{figure} \centering \includegraphics[% width=1\linewidth]{1.pdf} \caption{A fully distributed event-triggered transmission strategy for agent-$i$}\label{fig1} \end{figure} \subsection{The event-triggered module} The sampling process of an event-triggered mechanism relies on the event-triggered condition rather than the elapse of a fixed time. Thus the $k$-th sampled data are the data sampled at the $k$-th triggered event. Denote by $t_k^i$ the $k$-th event-triggered instant of agent $i$. The measurement errors between the event-triggered sampled states $x_i(t_k^i),v_i(t_k^i)$ and the current states $x_i(t),v_i(t)$ are defined by $e_{xi}(t) = x_i(t_k^i) -x_i(t)$ and $e_{vi}(t)= v_i(t_k^i) -v_i(t),~~i = 1,2,\ldots,N,$ where $t\in [t_k^i,t_{k+1}^i)$.
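The role of the measurement error $e_{xi}(t)$ can be seen in a minimal single-agent sketch (ours, not part of the paper; scalar state, constant velocity, and a static threshold $d$ in place of the adaptive threshold designed below): the broadcast value is held until the squared error reaches the threshold, at which point the agent resamples.

```python
# Minimal event-triggered sampling illustration: scalar agent x' = v with a
# static threshold d (the paper's rule uses an adaptive threshold d_i(t)).
dt, T, d = 1e-3, 2.0, 0.01
x, v = 0.0, 1.0
x_sample = x                 # last broadcast value x(t_k^i)
trigger_times = []
for step in range(1, int(T / dt) + 1):
    x += v * dt              # integrate the agent forward one step
    e = x_sample - x         # measurement error e_x(t) = x(t_k^i) - x(t)
    if e * e >= d:           # event condition: squared error hits the threshold
        x_sample = x         # broadcast and resample; the error resets to zero
        trigger_times.append(step * dt)
    # between events the error stays strictly below the threshold
    assert (x_sample - x) ** 2 < d

# on this run, roughly 20 broadcasts replace 2000 periodic samples
```

The point of the mechanism is visible in the final count: communication happens only when the held value has drifted too far, not at every sampling instant.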
The next broadcasting instant of agent $i$ is determined by \begin{equation}\label{nextins} t_{k+1}^i = \inf\left\lbrace t>t_{k}^i: E_i(t)\geq 0 \right\rbrace, \end{equation} where \begin{equation}\label{evet-rule} \begin{split} &E_i(t)= \|e_{xi}(t)\|^2 - \underbrace{ d_i(t)\,\mathrm{sign}(d_i)\left\| \sum_{j=1}^{N}L_{ij} x_j(t_k^j) + k_i \left( x_i(t_k^i) - x_0(t) \right) \right\|^2 }_{\Upsilon(t)} , \end{split} \end{equation} $d_i(t)$ is an adaptive threshold to be designed, and $t_k^j \triangleq \text{arg~min}_p \{t-t_p^j|t\geq t_p^j,p\in \mathbb{N}\}$, i.e., $t_k^j$ is the latest triggering time of agent $j$ before time $t$. From \eqref{evet-rule}, it can be seen that only relative position information is employed. The workflow of the event-triggered module can be described as follows. \begin{enumerate} \item The storer $i$ receives the latest state values from the neighboring agents and from the leader (if agent $i$ is the leader's neighbor). Based on the information received, storer $i$ generates the continuous output signals. \item The adaptive law $\dot d_i(t)$ updates the threshold $d_i(t)$ according to the information from the local storer. \item The sampling rule formulated by \eqref{nextins} processes the sampled data from the storer with respect to the event-triggered condition. \item The event trigger obtains a triggering signal from the sampling rule and then performs sampling. \end{enumerate} \begin{rmk} In the existing literature, there are two forms of control input in event-triggered control protocols for MASs. One can be formulated by $u_i = \beta\sum_{j\in \mathcal{N}_i}a_{ij}\left[ x_j(t_k^i) - x_i(t_k^i)\right] $; another is $u_i = \beta\sum_{j\in \mathcal{N}_i}a_{ij}\left[ x_j(t_k^j) - x_i(t_k^i)\right] $; the main difference is the event-triggered sampling time of the neighbors' states.
In the former scheme, the control input updates the state signals (from the local agent and the neighboring agents) only at the local sampling instant $t_k^i$; in the latter scheme, these state values are updated whenever the local agent samples its state value or receives a new measurement state value from the neighboring agents. The two schemes have their own advantages: the latter scheme is superior in reducing the burden of network transmission, and the former serves the purpose of fewer controller updates. Hence, the latter scheme is adopted in this paper from the perspective of alleviating the communication burden. \end{rmk} \begin{rmk} In the case that agent $i$ is not the leader's neighbor, the storer $i$ also accounts for zero-order holding of the latest discrete state values received from the neighbors as well as storing them. In the case that it is the leader's neighbor, the storer $i$ adds the continuous state values from the leader and the latest discrete state values together and outputs the sum. This explains why the storer $i$ generates continuous signals. \end{rmk} \subsection{The consensus control module} We are now in a position to present the fully distributed consensus protocol of this paper as follows, \begin{equation}\label{protocol} \left\{\begin{aligned} \dot x_i(t) =& v_i(t),\\ \dot v_i(t) =& f\left(t,x_i(t),v_i(t) \right) - \alpha c_i(t)\sum_{j=1}^{N}L_{ij} {x}_j (t_k^j) -\alpha c_i(t)k_i \left[ {x}_i (t_k^i) - {x}_0 (t) \right] - \alpha w_i(t),\\ \dot{w}_i(t)=& -\gamma w_ i -\beta c_i(t)\sum_{j=1}^{N}L_{ij} {x}_j (t_k^j) -\beta c_i(t)k_i \left[ {x}_i (t_k^i) - {x}_0 (t) \right], \end{aligned}\right. \end{equation} where $w_i(t)$ is the estimate of the network-coupled velocity term; $\alpha>0$, $\beta>0$, $\gamma>0$ are positive coupling gains; and $c_i(t)$ is a time-varying parameter to be designed.
With Fig. \ref{fig1}, the protocol \eqref{protocol} can be specifically explained by the following workflow. \begin{enumerate} \item The adaptive law updates the time-varying gain $c_i(t)$ based on information from the interaction and the local estimator; \item The estimator calculates the estimate $w_i(t)$ of the network-coupled velocity term; \item The controller generates the control input and transmits it to the actuator $i$. \end{enumerate} Let $\tilde{x}_i(t_k^i,t) ={x}_i (t_k^i) - {x}_0 (t)$ and $f\left(t,\tilde{x}_i(t),\tilde{v}_i(t) \right) = f\left(t,x_i(t),v_i(t) \right) - f\left(t,x_0(t),v_0(t) \right)$. The error dynamical equations can be written as \begin{equation}\label{protocol-a } \left\{\begin{aligned} &\dot {\tilde{x}}_i(t) = \tilde{v}_i(t),\\ &\dot {\tilde{v}}_i(t) = f\left(t,\tilde{x}_i(t),\tilde{v}_i(t) \right) - \alpha c_i(t)\sum_{j=1}^{N}h_{ij} \tilde{x}_j(t_k^j,t) - \alpha w_i(t),\\ &\dot{w}_i(t)= -\gamma w_ i -\beta c_i(t)\sum_{j=1}^{N}h_{ij} \tilde{x}_j(t_k^j,t), \end{aligned}\right. \end{equation} where $h_{ij}$ denotes the element of matrix $H$. From Lemma \ref{lem - H}, $H$ is positive definite if there is at least one informed agent. Throughout this paper, we assume that at least one agent is connected to the leader; otherwise, the agents in the graph cannot be expected to follow the leader.
\begin{rmk} Since $\sum_{j=1}^{N}L_{ij} = 0$, one can easily derive \begin{equation} \begin{aligned} \sum_{j=1}^{N}h_{ij} \tilde{x}_j(t_k^j,t) = & \sum_{j=1}^{N}L_{ij}\left[ x_j(t_k^j) - x_0(t)\right]+k_i\left[ x_i(t_k^i) - x_0(t) \right] \\ =& -\sum_{j=1}^{N} w_{ij} \left[ x_j(t_k^j) - x_i(t_k^i)\right]-k_i \left[ x_0(t) - x_i(t_k^i) \right]. \end{aligned} \end{equation} \end{rmk} To facilitate analysis, define a new error state vector $z(t)=[\tilde{x}(t),\tilde{v}(t),{w}(t)]^T \in \mathbb{R}^{3nN}$, where $\tilde{x}(t) = [\tilde{x}_1,\ldots,\tilde{x }_N] \in \mathbb{R}^{nN}$, $\tilde{v}(t) = [\tilde{v}_1,\ldots,\tilde{v}_N]\in \mathbb{R}^{nN}$, ${w}(t) = [{w}_1,\ldots,{w}_N]\in \mathbb{R}^{nN}$. Then the protocol in \eqref{protocol-a } can be recast in the compact form \begin{equation}\label{compact-close} \dot { z}(t) =\widetilde{F}(t,\tilde{x}(t),\tilde{v}(t))+ \widetilde H z(t) + \widetilde{G} \varepsilon(t), \end{equation} where $\varepsilon(t) = [e_{x1} - e_{x0}, \ldots, e_{xN} - e_{x0}]^T\in \mathbb{R}^{nN}$, $$ \widetilde{H} = \begin{pmatrix} \textbf{0}& I_N& \textbf{0}\\ -\alpha C H &\textbf{0}& -\alpha I_N\\ - \beta CH & \textbf{0} &-\gamma I_N \end{pmatrix}\in \mathbb{R}^{3nN\times 3nN}, $$ $\widetilde{G} = \begin{pmatrix}\textbf{0}&-\alpha CH&- \beta C H\end{pmatrix}^T \in \mathbb{R}^{3nN\times nN}$, $\widetilde{F}(t,\tilde{x}(t),\tilde{v}(t))=\begin{pmatrix}\textbf{0}&f\left(t,\tilde{x} (t),\tilde{v}(t) \right)&\textbf{0}\end{pmatrix}^T\in \mathbb{R}^{3nN}$, $f\left(t,\tilde{x}(t),\tilde{v}(t) \right)= \left[f\left(t,\tilde{x}_1(t),\tilde{v}_1(t)\right),\ldots,f\left(t,\tilde{x}_N(t),\tilde{v}_N(t) \right) \right]^T \in \mathbb{R}^{nN} $, and $C$ is the diagonal matrix $C=diag\left\lbrace c_1 ,\ldots,c_N \right\rbrace \in \mathbb{R}^{N\times N}$.
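The block structure of $\widetilde{H}$ can be cross-checked against the componentwise error dynamics \eqref{protocol-a }. The sketch below (ours, not part of the paper; illustrative values with $n=1$, $N=3$, and a sample $H=L+D$) assembles $\widetilde{H}$ and verifies that $\widetilde{H}z$ reproduces the linear part of \eqref{protocol-a }:

```python
# Consistency check between the compact form and the componentwise error
# dynamics, with n = 1 and N = 3 followers (all numeric values illustrative).
N = 3
alpha, beta, gamma = 1.0, 2.0, 0.5
c = [0.8, 1.1, 0.9]                    # sample time-varying gains c_i(t)
H = [[2.0, -1.0, 0.0],                 # H = L + D for a path graph with the
     [-1.0, 2.0, -1.0],                # leader pinned to agent 1 (k_1 = 1)
     [0.0, -1.0, 1.0]]

def matvec(M, y):
    return [sum(M[i][j] * y[j] for j in range(len(y))) for i in range(len(M))]

def scale(M, s):
    return [[s * entry for entry in row] for row in M]

Z = [[0.0] * N for _ in range(N)]
I = [[1.0 if i == j else 0.0 for j in range(N)] for i in range(N)]
CH = [[c[i] * H[i][j] for j in range(N)] for i in range(N)]   # C H, C = diag(c)

def hstack(A, B, Cb):
    return [A[i] + B[i] + Cb[i] for i in range(N)]

# block rows [0 I 0; -aCH 0 -aI; -bCH 0 -gI], matching the displayed matrix
H_tilde = (hstack(Z, I, Z)
           + hstack(scale(CH, -alpha), Z, scale(I, -alpha))
           + hstack(scale(CH, -beta), Z, scale(I, -gamma)))

x, v, w = [0.3, -0.2, 0.5], [0.1, 0.0, -0.4], [0.2, -0.1, 0.3]
z_dot = matvec(H_tilde, x + v + w)
CHx = matvec(CH, x)
for i in range(N):
    assert abs(z_dot[i] - v[i]) < 1e-12
    assert abs(z_dot[N + i] - (-alpha * CHx[i] - alpha * w[i])) < 1e-12
    assert abs(z_dot[2 * N + i] - (-beta * CHx[i] - gamma * w[i])) < 1e-12
```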
\subsection{Consensus analysis}\label{consensusanalysis} Based on the event-triggered rule \eqref{evet-rule} and the protocol \eqref{protocol}, the following theorem gives the adaptive laws $\dot c_i(t)$ and $\dot d_i(t)$ that guarantee the consensus of the MAS considered in this paper. \begin{thm}\label{thm1} Consider the second-order leader-following multiagent system \eqref{sys1} and \eqref{sys-leader} with the distributed sampling control protocol \eqref{protocol} and the event-triggered sampling rule \eqref{evet-rule}. Suppose that the graph $\mathcal{G}$ is connected and Assumption \ref{ASS1} holds. Then second-order consensus can be reached under the following distributed adaptive laws: \begin{eqnarray}\label{dotc} \dot c_i(t) = \tilde{x}_i^T (\beta - \alpha)\sum_{j=1}^{N}h_{ij} \tilde{x}_j(t_k^j,t) + w_i^T(t)\delta \frac{\beta^2 - \alpha^2}{\beta} \sum_{j=1}^{N}h_{ij} \tilde{x}_j(t_k^j,t) , \end{eqnarray} \begin{eqnarray}\label{dotd} \dot d_i(t) = - \xi_i\sum_{j=1}^{N}h_{ij} \tilde{x}_j^T(t^j_k,t)\sum_{j=1}^{N}h_{ij} \tilde{x}_j (t^j_k,t), \end{eqnarray} where $\delta >0$ and $\xi_i>0$ are constants. \end{thm} \begin{proof} Consider the following Lyapunov function candidate \begin{equation}\label{LF} V = \frac{1}{2}\tilde z^T(t) \Omega\otimes I_n \tilde{z}(t) + \sum_{i=1}^{N} \frac{\varpi }{2\zeta_i}(c_i (t) - \hat c_i) ^2 + \sum_{i=1}^{N} \frac{\omega}{2\xi_i}(d_i (t) + \hat{d}_i)^2, \end{equation} where $\Omega = \begin{pmatrix} \mu& -\varpi &\varpi\\ * & \eta& -\frac{\alpha}{\beta}\eta \\ *&*&\eta \end{pmatrix}\otimes I_N$ and $\varpi$, $\omega$, $\hat c_i$, and $\hat d_i $ are positive constants to be determined. By letting the parameters in matrix $\Omega$ satisfy $\mu \gg \varpi >0 $ and $\eta >0$, it can be guaranteed that $\Omega>0$.
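The claim $\Omega>0$ can be checked numerically for sample parameter values. The sketch below (ours, not part of the paper; sample values with $\mu \gg \varpi$ and $\alpha<\beta$) applies Sylvester's criterion to the $3\times 3$ core of $\Omega$; since the Kronecker product with $I_N$ preserves the eigenvalues, checking the core suffices:

```python
# Numerical check that the 3x3 core of Omega is positive definite for sample
# values with mu >> varpi > 0 and eta > 0 (values illustrative, not from the paper).
mu, varpi, eta = 10.0, 0.1, 1.0
alpha, beta = 1.0, 2.0
r = alpha / beta
Omega = [[mu,    -varpi,    varpi],
         [-varpi, eta,     -r * eta],
         [varpi, -r * eta,  eta]]

def det(M):
    """Determinant by cofactor expansion (fine for 3x3)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

# Sylvester's criterion: all leading principal minors must be positive
minors = [det([row[:k] for row in Omega[:k]]) for k in range(1, 4)]
assert all(m > 0 for m in minors)
```

For these values the minors come out strictly positive, consistent with the diagonal-dominance intuition behind choosing $\mu \gg \varpi$.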
The positive definiteness of $V$ in \eqref{LF} can also be easily ensured, i.e., $V(\tilde z(t),\varepsilon, t) \geq 0 $, with $V(\tilde z(t),\varepsilon, t)= 0$ if and only if $\tilde{z}(t) = 0$, $c_i(t) = \hat c_i$ and $d_i(t) = -\hat d_i$ for all $i$. For simplicity, we assume $n=1$ in the proof, so that $I_n$ reduces to $1$ and is omitted hereafter. Differentiating \eqref{LF} along the trajectories of \eqref{compact-close} yields \begin{equation}\label{dotV - 1} \begin{split} \dot V(\tilde z(t) ,\varepsilon, t) = & \tilde{z}^T(t) \Omega\dot {\tilde z}(t) + \sum_{i=1}^{N} \frac{\varpi}{\zeta_i}(c_i (t) - \hat c_i) \dot c_i (t)+\sum_{i=1}^{N} \frac{\omega}{\xi_i}(d_i (t) + \hat{d}_i) \dot d_i (t)\\ =&\tilde{z}^T(t) \Omega {F} + \tilde{z}^T(t) \frac{1}{2}\left(\Omega \widetilde{H}+ \widetilde{H}^T\Omega \right) \tilde{z}(t)+ \tilde{z}^T(t) \Omega \widetilde{G} \varepsilon(t)\\ &+ \sum_{i=1}^{N} \frac{\varpi }{\zeta_i}(c_i (t) - \hat c_i) \dot c_i (t)+ \sum_{i=1}^{N} \frac{\omega}{\xi_i}(d_i (t) + \hat{d}_i) \dot d_i (t),
\end{split} \end{equation} where \begin{equation*} \frac{1}{2}\left(\Omega \widetilde{H}+ \widetilde{H}^T\Omega \right) = \begin{pmatrix} \varpi( \alpha- \beta ) C H & \frac{1}{2}\mu & \frac{\varpi }{2}(\alpha-\gamma)+\frac{\alpha^2-\beta^2}{2\beta}\eta C H \\ * & -\varpi&-\alpha \eta+\frac{\varpi}{2}+ \frac{\alpha\gamma}{2\beta}\eta\\ *& *& \frac{\alpha^2 }{\beta}\eta - \gamma\eta \end{pmatrix}, \end{equation*} \begin{equation*} \Omega\widetilde{G}= \left[ \varpi(\alpha - \beta) C H ,0,\frac{\alpha^2-\beta^2}{\beta}\eta C H \right] ^T, \end{equation*} and $\Omega F =\left[ -\varpi f\left(* \right), \eta f\left(* \right), -\frac{\alpha}{\beta} \eta f\left(* \right) \right] ^T.$ From \eqref{dotc}, one obtains \begin{equation}\label{compact-dotc} \sum_{i=1}^{N} \frac{\varpi}{\zeta_i}(c_i (t) - \hat c_i) \dot c_i (t) = \tilde{z}^T\varpi \begin{pmatrix} \beta-\alpha \\ 0\\ \delta\frac{\beta^2- \alpha^2 }{\beta} \end{pmatrix}\otimes (C- \hat C) H (\tilde{x} + \varepsilon), \end{equation} where $\hat C = diag\{\hat c_1,\ldots,\hat c_N \}$. Let $\frac{\eta}{\varpi} = \delta $.
By substituting \eqref{compact-dotc} into \eqref{dotV - 1} and performing some simple calculations, one has \begin{equation}\label{dotV - 2} \begin{split} \dot V(\tilde z(t) ,\varepsilon, t) = &\tilde{z}^T(t) \Omega F +\tilde{z}^T(t) \frac{1}{2}\left(\Omega \bar{H}+ \bar{H}^T\Omega \right) \tilde{z}(t)+ \tilde{z}^T(t) \Omega \bar{G} \varepsilon(t)\\ & + \sum_{i=1}^{N} \frac{\omega}{\xi_i}(d_i (t) + \hat d_i) \dot d_i (t), \end{split} \end{equation} where \begin{equation*} \frac{1}{2}\left(\Omega \bar{H}+ \bar{H}^T\Omega \right) = \begin{pmatrix} \varpi( \alpha- \beta ) \hat C H & \frac{1}{2}\mu & \frac{\varpi }{2}(\alpha-\gamma)+\frac{\alpha^2-\beta^2}{2\beta}\eta \hat C H \\ * & -\varpi&-\alpha \eta+\frac{\varpi}{2}+ \frac{\alpha\gamma}{2\beta}\eta\\ *& *& \frac{\alpha^2 }{\beta}\eta - \gamma\eta \end{pmatrix}, \end{equation*} and $\Omega\bar{G}= \left[ \varpi(\alpha - \beta) \hat C H ,0,\frac{\alpha^2-\beta^2}{\beta}\eta \hat C H \right] ^T$. Following Assumption \ref{ASS1}, one gets \begin{equation}\label{OF} \begin{split} \tilde{z}^T(t) \Omega F = &\sum_{i=1}^{N}\left( -\varpi \tilde{x}_i^T + \eta \tilde{v}_i^T - \frac{\alpha}{\beta} w_i^T \right) \left[ f\left(t,x_i(t),v_i(t) \right) - f\left(t,x_0(t),v_0(t) \right)\right] \\ \leq & \sum_{i=1}^{N}\left[\varsigma\left( -\varpi \tilde{x}_i^T + \eta \tilde{v}_i^T -\frac{\alpha}{\beta}w_i^T\right)\tilde{v}_i +\rho \left(\varpi \|\tilde{x}_i\|^2+ \eta\|\tilde{v}_i\tilde{x}_i\|+\frac{\alpha}{\beta}\|w_i\tilde{x}_i\|\right)\right] \\ \leq &\sum_{i=1}^{N}\varsigma\left( -\varpi \tilde{x}_i^T + \eta \tilde{v}_i^T -\frac{\alpha}{\beta}w_i^T\right)\tilde{v}_i +\kappa\|z\|^2, \end{split} \end{equation} where $\kappa = \max\{\rho (\varpi+\frac{\eta}{2}+\frac{\alpha}{2\beta}), \frac{\eta}{2}, \frac{\alpha}{2\beta}\}$. Recasting the event-triggered condition \eqref{evet-rule} in the compact form, one obtains \begin{equation} \label{dotV - a1} -\omega\varepsilon^T(t)\varepsilon(t)+\omega\bar D\tilde{x}^T(t_k,t)H^2 \tilde{x}(t_k,t)\geq 0,
\end{equation} where $\bar {D} = diag\{d_1sgn(d_1),\ldots,d_N sgn(d_N)\}$. Besides, substituting the adaptive law \eqref{dotd} into $\sum_{i=1}^{N} \frac{\omega}{\xi_i}(d_i (t) + \hat{d}_i) \dot d_i (t)$, one gets \begin{equation}\label{dotV - d} \begin{split} \sum_{i=1}^{N} \frac{\omega}{\xi_i}(d_i (t) + \hat{d}_i) \dot d_i (t) =& - \omega\left[\tilde{x} (t) + \varepsilon(t) \right]^T(D+\hat D) H^2 \left[\tilde{x} (t) + \varepsilon(t) \right], \end{split} \end{equation} where $D=diag\{d_1,\ldots,d_N\}$ and $\hat{D}=diag\{\hat d_1,\ldots,\hat d_N\}$. Combining \eqref{dotV - a1} and \eqref{dotV - d} then yields \begin{equation} \label{dotV - a2} \begin{split} \sum_{i=1}^{N} \frac{\omega}{\xi_i}(d_i (t) + \hat{d}_i) \dot d_i (t) \leq& - \omega \varepsilon^T(t) \varepsilon(t) -\omega\left[\tilde{x} (t) + \varepsilon(t) \right]^T\underbrace{(D-\bar{D}+\hat D)}_{\Delta_D} H^2 \left[\tilde{x} (t) + \varepsilon(t) \right]. \end{split} \end{equation} From the definitions of $D$ and $\bar D$, we have $\Delta_D=\hat D$ if $ d_i(t)\geq 0$ and $\Delta_D = \hat D-2\bar D$ otherwise. Besides, recalling the definition \eqref{dotd}, one can observe that $\dot{d}_i(t)\leq 0$, which means the value of $d_i(t)$ never increases. Then $d_i(t)\leq d_i(t_0)$, where $d_i(t_0)$ denotes the value of $d_i(t)$ at the initial time instant $t_0$. Here, by choosing the constants $\hat{d}_i\geq 2 d_i(t_0)sgn(d_i(t_0))\geq 2d_isgn(d_i)$, it is not hard to derive $\Delta_D\geq 0, \forall d_i(t)\in (-\infty,+\infty)$.
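The two-case expression for $\Delta_D$ can be checked numerically. The sketch below (illustrative only; the sample values of $d_i$ are arbitrary) verifies that $D-\bar D+\hat D$ reduces to $\hat D$ for $d_i\geq 0$ and to $\hat D-2\bar D$ otherwise, and that choosing $\hat d_i \geq 2 d_i\,sgn(d_i)$ makes all diagonal entries nonnegative:

```python
import numpy as np

# Illustrative check (not part of the proof) of Delta_D = D - Dbar + Dhat,
# where Dbar = diag(d_i * sgn(d_i)) = diag(|d_i|).
d = np.array([0.7, -0.3, 0.0, -1.2])      # sample gains d_i(t), mixed signs
d_hat = 2 * np.abs(d) + 0.1               # satisfies d_hat_i >= 2 d_i sgn(d_i)

D, Dbar, Dhat = np.diag(d), np.diag(np.abs(d)), np.diag(d_hat)
Delta = D - Dbar + Dhat

# case split from the text: Delta_D = Dhat if d_i >= 0, else Dhat - 2*Dbar
expected = np.where(d >= 0, d_hat, d_hat - 2 * np.abs(d))
assert np.allclose(np.diag(Delta), expected)
assert (np.diag(Delta) >= 0).all()        # Delta_D >= 0 under the chosen d_hat
```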
Substituting \eqref{dotV - a2} into \eqref{dotV - 2}, one obtains \begin{equation}\label{5-a} \begin{split} \dot V(\tilde z(t) ,\varepsilon, t) =& \tilde{z}^T(t) \Omega F + \tilde{z}^T(t) \frac{1}{2}\left(\Omega \bar{H}+ \bar{H}^T\Omega \right) \tilde{z}(t)+\tilde{z}^T(t) \Omega \bar{G} \varepsilon(t)- \omega \varepsilon^T(t) \varepsilon(t) \\ &-\omega\left[\tilde{x} (t) + \varepsilon(t) \right]^T\Delta_D H^2 \left[\tilde{x} (t) + \varepsilon(t) \right]\\ \leq &\begin{pmatrix} \tilde{z}^T(t)&\varepsilon^T(t) \end{pmatrix}\Pi\begin{pmatrix} \tilde{z}(t)\\\varepsilon(t) \end{pmatrix} , \end{split} \end{equation} where $$ \Pi = \begin{pmatrix} \Pi_{11}&\Pi_{12}\\ *& \Pi_{22} \end{pmatrix}, $$ \begin{equation*} \Pi_{11}= \begin{pmatrix} \varpi(\alpha- \beta ) \hat{C} H+\kappa I_N -\omega\Delta_D H^2 &(\frac{1}{2}\mu -\frac{\varpi \varsigma}{2})I_N & \frac{\varpi }{2}(\alpha-\gamma)+\frac{\alpha^2-\beta^2}{2\beta}\eta \hat C H \\ * & \left( -\varpi + \varsigma \eta + \kappa \right) I_N &-\alpha \eta+\frac{\varpi}{2}+ \frac{\alpha\gamma}{2\beta}\eta - \frac{\varsigma\alpha}{2\beta}\\ *& *& \left( \frac{\alpha^2 }{\beta}\eta - \gamma\eta+\kappa\right) I_N \end{pmatrix}, \end{equation*} \begin{equation*} \Pi_{12} = \begin{pmatrix} \varpi(\alpha - \beta) \hat{C} H -\omega\Delta_D H^2 \\ 0\\ \frac{\alpha^2-\beta^2}{\beta}\eta \hat{C} H \end{pmatrix},~~~\Pi_{22}= - \omega(I_N+ \Delta_D H^2). \end{equation*} By properly selecting the parameters $\mu$, $\varpi$, $\eta$, $\omega$, $\hat{c}_i$, $\hat{d}_i$ in the Lyapunov function candidate \eqref{LF}, it is not hard to derive that $\Pi<0$ holds with the help of Lemma \ref{LEM - L} and Lemma \ref{lem - H}. It is observed that $V(\tilde z(t) ,\varepsilon, t) $ and $\dot V(\tilde z(t) ,\varepsilon, t) $ satisfy conditions (1) and (2) of Lemma \ref{lem-BABA}, respectively. To verify condition (3) in Lemma \ref{lem-BABA}, the following analysis is needed. From \eqref{5-a} and $\Pi<0$, one may easily derive that $\tilde{z}(t)$, $c(t)$ and $d(t)$ are all bounded.
Also, from \eqref{dotV - a2}, the boundedness of $\tilde{z}(t)$ can be used to derive that $\varepsilon(t)$ is bounded. Then, from \eqref{compact-close}, \eqref{dotc} and \eqref{dotd}, one can derive the boundedness of $\dot {\tilde{z}} (t)$, $\dot c(t)$ and $\dot d(t)$. Finally, by invoking \eqref{dotV - 1}, it is obtained that $\ddot V(\tilde z(t) ,\varepsilon, t)$ is bounded, i.e., condition (3) in Lemma \ref{lem-BABA} is satisfied. Hence the proof is completed. \begin{rmk} One may argue that, since the matrix $H$ involves the Laplacian matrix $L$ as well as the matrix $D$, global graph and topology information would need to be known by each agent when solving the LMIs to guarantee $\Pi <0$, so that the method could not be considered fully distributed. It should be pointed out that the parameters obtained by solving $\Pi<0$ rely only on the fact that $H>0$. Namely, as long as the matrix $H$ is positive definite, the proposed method guarantees the consensus of the network. It is well known that $H$ is positive definite if there is at least one informed agent, which is assumed throughout the paper. Therefore, the method is fully distributed. \end{rmk} \begin{rmk} From Theorem \ref{thm1}, it can be seen that event-triggered second-order consensus in the considered leader-following MAS can be reached under the distributed adaptive laws \eqref{dotc} and \eqref{dotd} without requiring any centralized conditions, unlike some existing literature \cite{Zhao2017Event}, \cite{Li2015Event}, \cite{Guo2014A} and \cite{Zhu2014Event}. In the whole networked control design, including the event-triggered rule and the consensus protocol, only local information of neighboring agents is used. \end{rmk} \begin{rmk} One may notice that the dimension of $\Pi$ is $4N$, which may make the selection of the parameters in the Lyapunov function candidate difficult.
As a matter of fact, the selection of the parameters can be transformed into the problem of finding feasible solutions to several linear matrix inequalities. By solving these LMIs, one can easily obtain proper feasible solutions. Also, we provide an example of feasible values for these parameters in the numerical result section. \end{rmk} \end{proof} The following theorem shows the existence of a lower bound on the inter-event times, which means that Zeno behavior is excluded in Theorem \ref{thm1}. \begin{thm}\label{thm2} With the event-triggered consensus protocol and the conditions given in Theorem \ref{thm1}, no agent in MAS \eqref{sys1} exhibits Zeno behavior during the consensus process. That is, for each agent $i\in \mathcal{V}$, the inter-event time satisfies $\Delta_{k}^i = t_{k+1}^i - t_{k}^i = \tau>0 . $ \end{thm} \begin{proof} Suppose the velocities of all agents in the considered network are bounded by $M_v>0$. At the triggering time instants $\{t^i_k\}_{k=0}^{\infty}$, $e_{xi}(t^i_k) = 0, i=1,2,\ldots,N$, by the definition of $e_{xi}(t)$. In each interval $t\in \left[t_k^i,t_{k+1}^i \right)$, one gets \begin{equation}\label{4-2-1} \begin{split} \|e_{xi}(t)\| =& \left\|\int_{t_{k}^i}^{t}\dot e_{xi}(s)ds \right\|\leq \int_{t_{k}^i}^{t}\|\dot e_{xi}(s)\|ds \leq \int_{t_{k}^i}^{t}\|\dot x_i(s)\|ds\\ =& \int_{t_{k}^i}^{t}\|v_i(s)\|ds \leq M_v(t-t_{k}^i), \quad \forall t\in \left[t_k^i,t_{k+1}^i \right). \end{split} \end{equation} According to the event-triggered rule \eqref{nextins}, the next event will not be triggered until the trigger function satisfies $E_i(t)= 0$, which means that, for agent $i$, the next sampling time instant $t= t_{k+1}^i$ is the moment when $\|e_{xi}(t_{k+1}^i,t)\|^2 = {\varUpsilon_i(t)}$ holds, where $\varUpsilon_i(t)$ is defined in \eqref{evet-rule}.
Assume that, before consensus is reached, there exists a positive constant $\underline{\varUpsilon}_i$ such that $\varUpsilon_i (t)\geq \underline{\varUpsilon}_i>0 $ for all $t\in \{t_l^i \}_{l=0}^{\infty}$; otherwise, $\varUpsilon_i (t_{k'}^i) =0$ for some $t_{k'}^i\in \{t_{k'}^i \}_{k'=0}^{\infty}$, in which case consensus has already been achieved at the event time $t_{k'}^i$ and there is no need to trigger further events. That is to say, before consensus is achieved, from \eqref{4-2-1}, one gets \begin{equation}\label{4-2-2} \begin{split} \left[M_v(t_{k+1}^i - t_k^i) \right]^2& \geq \|e_{xi}(t_{k+1}^i)\|^2 = {\varUpsilon_i(t)}\geq{\underline{\varUpsilon}_i}>0, \end{split} \end{equation} which directly gives the positive lower bound $\Delta_k^i = t_{k+1}^i - t_k^i \geq \sqrt{\underline{\varUpsilon}_i}/M_v >0$. Consequently, if $\Delta_{k}^i \rightarrow 0$, then \eqref{4-2-2} would force $ \underline{\varUpsilon}_i \leq0$, a contradiction; hence $\lim_{k\rightarrow \infty}t_k^i = \infty$ and $\Delta_{k}^i = t_{k+1}^i - t_{k}^i = \tau>0$ holds. This completes the proof of Theorem \ref{thm2}. \end{proof} \section{Numerical results}\label{simu} In this section, a numerical example is presented to illustrate the feasibility and effectiveness of the proposed mechanism. We consider a multi-agent system with 1 leader and 6 followers.
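In a discrete-time simulation, one forward-Euler step of the adaptive laws \eqref{dotc} and \eqref{dotd} can be sketched as follows (illustrative only, with $n=1$; the error samples, $H$ and the initial gains are random stand-ins, while $\alpha$, $\beta$, $\delta$ and $\xi_i$ take the simulation values used in this section):

```python
import numpy as np

# Illustrative Euler step of the adaptive laws (n = 1, N = 6 followers).
rng = np.random.default_rng(1)
N = 6
alpha, beta, delta = 1.0, 30.0, 13.67
xi = np.array([0.5, 0.2, 0.4, 0.3, 0.5, 0.6])

x_s = rng.standard_normal(N)          # sampled position errors x~_j(t_k^j, t)
w = rng.standard_normal(N)            # filter states w_i
Hrow = rng.random((N, N))
Hrow = Hrow + Hrow.T                  # stand-in for H
s = Hrow @ x_s                        # s_i = sum_j h_ij x~_j(t_k^j, t)

dot_c = x_s * (beta - alpha) * s + w * delta * (beta**2 - alpha**2) / beta * s
dot_d = -xi * s * s                   # eq. (dotd): always <= 0

dt, c, d = 1e-3, np.ones(N), np.ones(N)
c, d = c + dt * dot_c, d + dt * dot_d
assert (dot_d <= 0).all()             # thresholds d_i(t) never increase
assert c.shape == d.shape == (N,)
```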
To verify that the method is fully distributed and does not require the spectra of the Laplacian matrices, we use the following two graphs, whose Laplacian matrices $\mathcal{G}_1 $ and $\mathcal{G}_2 $ are given by $$\mathcal{G}_1 = \begin{pmatrix}6&-1&-2&-1&-2&0\\-1&8&-3&0&0&-4\\-2&-3&5&0&0&0\\-1&0&0&4&-3&0\\-2&0&0&-3&6&-1\\0&-4&0&0&-1&5 \end{pmatrix}, \mathcal{G}_2 = \begin{pmatrix}2&-1&-1&0 &0&0\\-1&5&-3&-1&0&0\\-1&-3&6&-1&-1&0\\0&-1&-1&3&0&-1\\0&0&-1&0&5&-3\\0&0&0&-1&-3&4\end{pmatrix}.$$ Accordingly, the leader weight matrices are set to $diag\{2,0,0,0,0\}$ and $diag\{0,0,5$ ,$0,0\}$. The nonlinear dynamics of the agents is given by the pendulum model $f(t,x_i,v_i) = -\frac{g}{l}\sin(x_i) - \frac{k}{m} v_i $, where $g,k,l,m$ are the gravitational acceleration, the damping coefficient, the length, and the mass of the rod, respectively. It is easy to verify that this nonlinear model satisfies Assumption \ref{ASS1}. Here, we take $g=9.8, k=0.1, l= 4, m=1$. To find a group of feasible parameters satisfying $\Pi<0$ in Theorem \ref{thm1}, one can use the LMI toolbox in MATLAB. Here, we present a group of parameters for the Lyapunov function candidate \eqref{LF}: $\mu = 2, \varpi = 1.5, \eta = 20.5, \omega = 40, \Delta_D = 0.5I_6,\varsigma = -0.5, \rho = -2$ and $\hat C = \frac{1}{20} H^{-1}$. For the parameters in the consensus control protocol \eqref{protocol} and the adaptive law of the thresholds \eqref{dotd}, the values in the simulation are taken as $\alpha = 1, \beta = 30, \gamma = 35, \delta = 13.67, \xi = \left[\xi_1,\ldots,\xi_6 \right] = \left[ 0.5,0.2,0.4,0.3,0.5,0.6\right] $. Besides, the initial positions and velocities of the leader and followers are randomly generated in $\left[ -1,1 \right]$. From Fig.\ref{Positions}, it can be observed that all the follower agents can track the position of the leader under both graph $\mathcal{G}_1$ and graph $\mathcal{G}_2$ without retuning the parameters. Also, Fig.
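The claim that the method only needs $H>0$, i.e., a connected graph with at least one informed agent, can be verified directly on $\mathcal{G}_1$. In the sketch below, the leader weight matrix is assumed to be $diag\{2,0,0,0,0,0\}$ (the paper lists five diagonal entries; the trailing zero is an assumption of this illustration):

```python
import numpy as np

# Check that G1 is a valid Laplacian of a connected undirected graph and
# that adding the leader weights makes H = L + K positive definite.
L1 = np.array([[ 6,-1,-2,-1,-2, 0],
               [-1, 8,-3, 0, 0,-4],
               [-2,-3, 5, 0, 0, 0],
               [-1, 0, 0, 4,-3, 0],
               [-2, 0, 0,-3, 6,-1],
               [ 0,-4, 0, 0,-1, 5]], dtype=float)
assert np.allclose(L1, L1.T)                  # undirected graph
assert np.allclose(L1.sum(axis=1), 0)         # Laplacian row sums are zero
eig = np.linalg.eigvalsh(L1)                  # ascending eigenvalues
assert abs(eig[0]) < 1e-9 and eig[1] > 0      # connected: Fiedler value > 0

# hypothetical leader weight matrix: agent 1 informed with weight 2,
# remaining entries zero (the sixth entry is an assumption, see lead-in)
K = np.diag([2., 0, 0, 0, 0, 0])
assert np.linalg.eigvalsh(L1 + K).min() > 0   # H = L + K is positive definite
```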
\ref{Velocities} shows that the tracking performance of the velocities is also guaranteed. In Fig.\ref{3D}, the tracking errors of the positions and velocities of the 6 agents with graph $\mathcal{G}_1$ are presented, which demonstrates the second-order consensus performance of the proposed method. Under $\mathcal{G}_1$, the trajectories of the adaptive protocol coupling gains are presented in Fig.\ref{C-Gains}, where the distributed control gains $c_i$ adaptively converge to proper values. Fig.\ref{Positions} and Fig.\ref{Velocities} demonstrate that second-order leader-following consensus can be achieved with the proposed network protocol. \begin{figure*}[htbp] \centering \subfloat[Positions of the leader and followers under $\mathcal{G}_1$]{\includegraphics[width = 0.5\linewidth]{Positions-G1.pdf}} \centering \subfloat[Positions of the leader and followers under $\mathcal{G}_2$]{\includegraphics[width = 0.5\linewidth]{Positions-G2.pdf}} \caption{Consensus of positions under different topologies}\label{Positions} \end{figure*} \begin{figure*}[htbp] \centering \subfloat[Velocities of the leader and followers under $\mathcal{G}_1$]{\includegraphics[width = 0.5\linewidth]{Velocities-G1.pdf}} \centering \subfloat[Velocities of the leader and followers under $\mathcal{G}_2$]{\includegraphics[width = 0.5\linewidth]{Velocities-G2.pdf}} \caption{Consensus of velocities under different topologies}\label{Velocities} \end{figure*} \begin{figure}[!ht] \hfil \subfloat{\includegraphics[width=3in]{3D.pdf}}% \caption{The second-order consensus under the proposed control protocol} \label{3D} \end{figure} \begin{figure}[!ht] \hfil \subfloat{\includegraphics[width=4in]{C-Gains.pdf}}% \caption{Consensus protocol coupling gains $c_i(t)$ under $\mathcal{G}_1$ } \label{C-Gains} \end{figure} \begin{figure}[!ht] \hfil \subfloat{\includegraphics[width=4in]{Comm.pdf}}% \caption{The state of triggered events of broadcasting signals under $\mathcal{G}_1$} \label{Comm} \end{figure} To show the
effectiveness of the ETM in reducing the frequency of inter-agent exchanges, Fig.\ref{Comm} presents the events at which each agent broadcasts its state to the others under topology graph $\mathcal{G}_1$, where the blue areas represent that the predefined events are triggered. For comparison, we also conduct the simulation for the ETM with the constant event-triggered thresholds in \cite{Zhao2017Event} (see Eq.(7)). In this work, the number of broadcast interaction signals of each follower is negatively related to the event-triggered threshold parameters. Taking agent-$1$ as an example, the relationship between $\varrho$ and the number of triggered events $R$ is presented in Fig.\ref{VAR-R} under $\mathcal{G}_1$ and $\mathcal{G}_2$, respectively. Note that, to facilitate the analysis, we use a newly defined parameter $\varrho\in \left[ 0,1\right] $ to replace $\varrho_1,\varrho_2$, where $ \varrho_1 = 0.12 \varrho, \varrho_2=0.18 \varrho$ in this simulation. Also, the adaptive thresholds $d_1(t)$ of agent-$1$ under $\mathcal{G}_1$ and $\mathcal{G}_2$ are accordingly given in Fig.\ref{thre-gains-in2}. The comparison between the ETM proposed in this paper and its counterpart in \cite{Zhao2017Event} demonstrates that the adaptive triggering thresholds are free of using the spectra of Laplacian matrices, which verifies the effectiveness of the proposed control protocol.
\begin{figure}[!ht] \hfil \subfloat{\includegraphics[width=3in]{VAR-R.pdf}}% \caption{Number of broadcast signals versus the event-triggered parameter $\varrho$ of the ETM in \cite{Zhao2017Event}} \label{VAR-R} \end{figure} \begin{figure}[!ht] \hfil \subfloat{\includegraphics[width=3in]{thre-gains-in2.pdf}}% \caption{The time-varying threshold $d_1(t)$ of agent-1 under $\mathcal{G}_1$ and $\mathcal{G}_2$} \label{thre-gains-in2} \end{figure} \section{Conclusion}\label{con} In this paper, we have proposed a novel event-triggered control protocol for leader-following consensus in second-order MASs under undirected topologies. To overcome the drawbacks of using continuous communication signals among the follower agents, we have presented the consensus protocol together with an event-triggered mechanism. To get rid of centralized information depending on the spectrum of the Laplacian matrix, we have proposed adaptive laws to update the coupling gains and event-triggered thresholds, so that these parameters are free of centralized information. Moreover, considering that the velocity measurement process may be noise prone, we only use relative positions among agents in the protocol design. Compared with some existing results, the protocol in this paper is less conservative and excludes Zeno behavior. It has been found that consensus can be achieved under the distributed coupling gains and the distributed thresholds as long as the undirected network is connected, which is a mild and natural condition. \section*{Acknowledgment} This work was funded by the National Science Foundation of China (No.61973040), the China Postdoctoral Science Foundation (No.2020M680445), and the Postdoctoral Science Foundation of Beijing Academy of Agriculture and Forestry Sciences of China (No.2020-ZZ-001).
\section{Introduction} In ophthalmological imaging, different imaging systems, such as color fundus (CF), infrared (IR), fluorescein angiography (FA), or the more recent optical coherence tomography (OCT) and OCT angiography (OCTA), are used. For the diagnosis, often multiple images that might come from different systems or capturing times are used, particularly for long-term monitoring of the progression of retinal diseases, such as diabetic retinopathy, glaucoma, or age-related macular degeneration. Multi-modal registration techniques that accurately align the vessel structures in the different images can support ophthalmologists by allowing a direct pixel-based comparison of the images. Multi-modal retinal registration methods estimate an affine transform, a homography, or a non-rigid displacement field. Here, we focus on feature-based methods for homography estimation. These methods generally consist of keypoint detection, descriptor learning, and descriptor matching. Conventional methods address the multi-modal keypoint detection and description, for instance by introducing a partial intensity invariant feature descriptor (PIIFD)~\cite{ChenJ2010} combined with the Harris corner detector. Deep learning methods replace all or some steps by neural networks. In DeepSPA~\cite{LeeJ2019}, a convolutional neural network (CNN) is used to classify patches extracted at vascular junctions based on a step pattern representation. GLAMPoints~\cite{TruongP2019} uses a U-Net as keypoint detector with root SIFT descriptors for retinal images. It is trained to maximize the keypoint matching in a self-supervised manner by homographic warping. Wang \etal~\cite{WangY2021} proposed a weakly supervised learning-based pipeline composed of a multi-modal retinal vessel segmentation network, SuperPoint~\cite{DeToneD2018}, and an outlier network based on homography estimation. 
In this paper, we propose a deep learning method for multi-modal retinal image registration based on jointly learning a keypoint detection and description network that extracts features of the vessel structure. We base our approach on CraquelureNet~\cite{SindelA2021} and transfer the task of learning a cross-modal keypoint detector and descriptor based on crack structures in paintings to the medical domain. For this task, we created a multi-modal dataset with manual keypoint pair and class annotations. \begin{figure}[t] \setlength{\figbreite}{0.49\textwidth} \centering \caption{Our multi-modal retinal image registration method using a CNN to extract deep features of the vessel structures.} \subfigure[Training]{\includegraphics[width=\figbreite]{graphics/RetinaCraquelureNet_flowchart_training.pdf}} \hfill \subfigure[Inference]{\includegraphics[width=\figbreite]{graphics/RetinaCraquelureNet_flowchart_inference.pdf}} \label{fig-01} \end{figure} \section{Materials and methods} \subsection{Network for multi-modal keypoint detection and description} We adopt the CraquelureNet~\cite{SindelA2021} as network architecture for our task of multi-modal retinal image registration, named RetinaCraquelureNet, as shown in Figure~\ref{fig-01}. The fully-convolutional CraquelureNet consists of a ResNet~\cite{HeK2016} backbone and two heads. \vspace{0.5em} \noindent\textbf{Detector and descriptor loss functions:} The keypoint detection and description heads are jointly trained with small image patches of size $32 \times 32 \times 3$ using the multi-task loss~\cite{SindelA2021}: $\mathcal{L}_{\text{Total}} = \lambda_{\text{Det}} \mathcal{L}_{\text{BCE}} + \lambda_{\text{Desc}} \mathcal{L}_{\text{QuadB}}$, where $\lambda_{\text{Det}},\lambda_{\text{Desc}}$ are the weights for the detector and descriptor loss. 
The keypoint detection head is trained using the binary cross-entropy loss $\mathcal{L}_{\text{BCE}}$ with the two classes \elqq vessel\erqq~and \elqq background\erqq, where \elqq vessel\erqq~refers to patches centered at a striking position in the vessel structure, such as a bifurcation, intersection, or a sharp bend. Analogously to~\cite{SindelA2021}, we randomly sample the same number of patches from each class and each modality per batch. The description head is trained using the bidirectional quadruplet loss~\cite{SindelA2021}, which applies an online in-batch hard negative mining strategy to randomly sample positive pairs (anchor $a$ and positive counterpart $p$) for one batch and to select the closest non-matching descriptors in both directions within this batch~\cite{SindelA2021}: \begin{align} \begin{split} \mathcal{L}_{\text{QuadB}}(a,p,n_a,n_p) = \max [0, m + d(a,p) - d(a,n_a)] \\+ \max [0, m + d(p,a) - d(p,n_p)], \end{split} \end{align} where $m$ is the margin, $d(x,y)$ the Euclidean distance, $n_a$ the closest negative to $a$, and $n_p$ is the closest negative to $p$. \vspace{0.5em} \noindent\textbf{Keypoint detection, descriptor matching, and homography estimation:} For inference, we feed the complete image to the network at once. The detector output is bicubically upscaled by a factor of 4 and a dense keypoint confidence heatmap is computed by the difference of the two class predictions of the upscaled output. Then, non-maximum suppression with a threshold of $4$ pixel is applied and the $N_\text{max}$ keypoints with the highest confidence values are extracted and the corresponding descriptors are linearly interpolated~\cite{SindelA2021}. Distinctive point correspondences are determined in both images by brute force mutual nearest neighbor descriptor matching and random sample consensus (RANSAC)~\cite{FischlerMA1981} (reprojection error of $5$ pixel) is applied for homography estimation. 
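A minimal numpy sketch of the bidirectional quadruplet loss with in-batch hard negative mining is given below. This is an illustration of the equation above, not the actual implementation: negatives are mined only among the batch's anchors and positives, which is an assumption of this sketch.

```python
import numpy as np

def quadruplet_loss(A, P, m=1.0):
    """Bidirectional quadruplet loss sketch: row i of A (anchors) matches
    row i of P (positives); hardest in-batch negatives in both directions."""
    B = A.shape[0]
    D = np.linalg.norm(A[:, None, :] - P[None, :, :], axis=-1)  # (B, B) distances
    d_pos = np.diag(D)                    # d(a_i, p_i)
    off = D + 1e9 * np.eye(B)             # mask the matching pairs
    d_na = off.min(axis=1)                # closest negative to each anchor
    d_np = off.min(axis=0)                # closest negative to each positive
    return np.mean(np.maximum(0.0, m + d_pos - d_na)
                 + np.maximum(0.0, m + d_pos - d_np))

# well-separated descriptors: both hinge terms vanish
A = np.eye(4) * 10.0
assert quadruplet_loss(A, A.copy()) == 0.0
# collapsed descriptors: loss saturates at 2*m
assert quadruplet_loss(np.ones((4, 8)), np.ones((4, 8))) == 2.0
```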
\begin{figure}[t] \setlength{\figbreite}{0.9\textwidth} \centering \caption{Keypoint confidence heatmaps (red to blue) and extracted keypoints of our multi-modal registration method RetinaCraquelureNet for both datasets.} \includegraphics[width=\figbreite]{graphics/RetinaCraquelureNet_featuremaps.pdf} \label{fig-02} \end{figure} \subsection{Multi-modal retinal datasets} Our IR-OCT-OCTA dataset, provided by the Department of Ophthalmology, FAU Erlangen-N\"urnberg, consists of multi-modal images of the macula of $46$ controls measured by Spectralis OCT II, Heidelberg Engineering. For each control, the multi-modal images (IR images, OCT volumes, and OCTA volumes) of the same eye were scanned up to three times a day. For this work, we use the IR image ($768 \times 768$) and the en-face OCT/OCTA projections of the SVP layer (Par off) of the macula (both $512 \times 512$). The image splits for each modality are: train: $89$, val: $15$, and test: $30$. Eyes of the same control are kept within one split. Further, we use a public dataset~\cite{ShirinH2021} of color fundus (CF, $576 \times 720\times 3$) and fluorescein angiography (FA, $576 \times 720$) images, which are composed of $29$ image pairs of controls and $30$ pairs of patients with diabetic retinopathy. The image pair splits are: train: $35$, val: $10$, and test: $14$, where healthy and non-healthy eyes are equally distributed. We manually annotated $N_\text{kp}$ matching keypoints at striking vessel positions in each image for each test person, where $N_\text{kp}\geq 21$ for IR-OCT-OCTA and $N_\text{kp}=40$ for CF-FA. For the keypoint detection task, we use all keypoints for the vessel class (train: 6273|2800, val: 945|800 for IR-OCT-OCTA|CF-FA) and the same number of points for the background class.
For the description task, we build the positive pairs from all multi-modal image pairs (train: 6273|1400, val: 945|400 for IR-OCT-OCTA|CF-FA) and, additionally for the IR-OCT-OCTA dataset, from uni-modal image pairs of scans from different recording times (train: 6021, val: 945). For the test set, we annotated $6$ control points per image and computed ground truth homographies. \begin{figure}[t] \setlength{\figbreite}{0.7\textwidth} \centering \caption{Qualitative results for one IR-OCTA example.} \includegraphics[width=\figbreite]{graphics/IR_OCTA_results.pdf} \label{fig-03} \end{figure} \subsection{Experimental details} We train our RetinaCraquelureNet (using pretrained weights by~\cite{SindelA2021}) on the combination of our IR-OCT-OCTA dataset and the public CF-FA dataset, where we oversample the smaller dataset. We use the Adam solver with a learning rate of $\eta=1\cdot10^{-4}$ for $20$ epochs with early stopping, a batch size of $576$ for the detector and $288$ for the descriptor, $m=1$, $\lambda_\text{Det}=1$, $\lambda_\text{Desc}=1$ (analogously to~\cite{SindelA2021}), and online data augmentation (color jittering, horizontal/vertical flipping, and, for the keypoint pairs, additional joint rotation, scaling, and cropping). We compare our method with SuperPoint~\cite{DeToneD2018} and GLAMPoints~\cite{TruongP2019}. We fine-tuned both methods with our combined dataset by extending the training code of~\cite{JauYY2020} for SuperPoint and~\cite{TruongP2019} for GLAMPoints to additionally incorporate multi-modal pairs for the homographic warping, and picked the best models based on the validation split. For the experiments, we invert the images of OCT, OCTA, and FA to depict all vessels in dark. For RetinaCraquelureNet, we feed the images in RGB, for GLAMPoints as green channel, and for SuperPoint in grayscale. For all methods, we use the same test settings (descriptor matching, RANSAC, $N_\text{max}=4000$).
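The brute-force mutual nearest-neighbour matching step used in these test settings can be sketched as follows (illustrative numpy version with toy descriptors; the subsequent RANSAC homography fit, e.g. via OpenCV's `findHomography`, is omitted):

```python
import numpy as np

def mutual_nn_matches(desc1, desc2):
    """Keep only descriptor pairs that are each other's nearest neighbour."""
    D = np.linalg.norm(desc1[:, None, :] - desc2[None, :, :], axis=-1)
    nn12 = D.argmin(axis=1)   # best match in image 2 for each keypoint in image 1
    nn21 = D.argmin(axis=0)   # best match in image 1 for each keypoint in image 2
    return [(i, j) for i, j in enumerate(nn12) if nn21[j] == i]

# toy example: descriptors of image 2 are a permutation of image 1's
d1 = np.array([[0., 0.], [1., 0.], [0., 1.]])
d2 = d1[[2, 0, 1]]                      # permuted copies
assert mutual_nn_matches(d1, d2) == [(0, 1), (1, 2), (2, 0)]
```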
As metrics we use the success rate of the registration based on the mean Euclidean error $\text{SR}_\text{ME} = \frac{1}{N} \sum_{i=1}^{N} (( \frac{1}{N_j} \sum_{j=1}^{N_j} \mathrm{D}_{ij}) \leq \epsilon )$ and the maximum Euclidean error $\text{SR}_\text{MAE} = \frac{1}{N} \sum_{i=1}^{N} ((\max_{j \in N_j} \mathrm{D}_{ij}) \leq \epsilon )$ of the $N_j=6$ control points and the pixel error threshold $\epsilon$. Here, we compute the Euclidean error $\mathrm{D}_{ij} = \| T(p_{ij},H_{\text{pred}_i}) - q_{ij} \|_2$ with $p_{ij}$ being the $j$-th source point and $q_{ij}$ the $j$-th target point, $H_{\text{pred}_i}$ the predicted homography, and $T(p_{ij},H_{\text{pred}_i})$ the projected $j$-th source point of image $i$. Further, we compute the detector repeatability (Rep) as defined in~\cite{DeToneD2018} and the matching inlier ratio (MIR) as the fraction of the number of RANSAC inliers and the number of detected matches per image. \begin{table}[t] \caption{Quantitative evaluation for the public CF-FA dataset.} \label{tab-01} \begin{scriptsize} \begin{tabular*}{\textwidth}{l@{\extracolsep\fill}llll} \hline Metrics [\%] & $\text{SR}_\text{ME}$ ($\epsilon=3$) & $\text{SR}_\text{MAE}$ ($\epsilon=5$) & Rep ($\epsilon=5$) & MIR ($\epsilon=5$)\\ \hline SuperPoint (fine-tuned) & \textbf{100.0} & 92.9 & 53.7 & 56.5 \\ GLAMPoints (fine-tuned) & 78.6 & 78.6 & 35.3 & 27.6 \\ RetinaCraquelureNet & \textbf{100.0} & \textbf{100.0} & \textbf{78.4} & \textbf{69.2} \\ \hline \end{tabular*} \end{scriptsize} \end{table} \begin{figure}[b] \setlength{\figbreite}{0.27\textwidth} \centering \includegraphics[width=0.9\textwidth]{graphics/IR-OCT-OCTA-results-legend} \subfigure[IR-OCT]{ \includegraphics[width=\figbreite]{graphics/IR-OCT-OCTA-results-a} } \subfigure[IR-OCTA]{ \includegraphics[width=\figbreite]{graphics/IR-OCT-OCTA-results-b} } \subfigure[OCT-OCTA]{ \includegraphics[width=\figbreite]{graphics/IR-OCT-OCTA-results-c} } \subfigure[IR-IR]{
\includegraphics[width=\figbreite]{graphics/IR-OCT-OCTA-results-d} } \subfigure[OCT-OCT]{ \includegraphics[width=\figbreite]{graphics/IR-OCT-OCTA-results-e} } \subfigure[OCTA-OCTA]{ \includegraphics[width=\figbreite]{graphics/IR-OCT-OCTA-results-f} } \caption{Quantitative evaluation for our IR-OCT-OCTA dataset using $\text{SR}_\text{ME}$ ($\epsilon=5$), $\text{SR}_\text{MAE}$ ($\epsilon=10$), Rep ($\epsilon=5$), and MIR ($\epsilon=5$). In (c-f) we use follow-up image pairs.} \label{fig-04} \end{figure} \section{Results} Some qualitative results of our RetinaCraquelureNet are shown in Figure~\ref{fig-02}, which depicts for one example of each modality the keypoint confidence heatmaps and the extracted keypoints that are concentrated on interesting points in the vessel structures. In Figure~\ref{fig-03} the registration performance of RetinaCraquelureNet, GLAMPoints, and SuperPoint is visually compared for one IR-OCTA example. RetinaCraquelureNet detects the highest number of correct matches, which are densely spread over the images, resulting in the most accurate image overlay. GLAMPoints detects densely distributed keypoints which are also located in the background but with fewer correct matches. SuperPoint finds the lowest number of keypoints and correct matches. The image overlays of the competing methods show some misalignments at the left borders. The quantitative evaluation for the IR-OCT-OCTA dataset is summarized in Figure~\ref{fig-04}. RetinaCraquelureNet clearly outperforms GLAMPoints and SuperPoint for the multi-modal image pairs in all metrics. Regarding the registration of the follow-up images, all methods reach a success rate of about $100 \, \%$ with \mbox{$\text{ME}\leq5$} and $\text{MAE}\leq10$; however, our method obtains considerably higher scores in Rep and MIR.
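The success-rate metrics $\text{SR}_\text{ME}$ and $\text{SR}_\text{MAE}$ defined in the experimental section can be sketched as follows (illustrative; the error values here are synthetic, not measured data):

```python
import numpy as np

def success_rates(errors, eps_me, eps_mae):
    """errors: (N_images, N_points) Euclidean control-point errors D_ij."""
    me = errors.mean(axis=1)          # mean Euclidean error per image
    mae = errors.max(axis=1)          # maximum Euclidean error per image
    return (me <= eps_me).mean(), (mae <= eps_mae).mean()

# synthetic example: 3 images x 6 control points
err = np.array([[1., 1, 2, 2, 1, 1],      # ME 1.33, MAE 2 -> both succeed
                [2., 2, 2, 2, 2, 9],      # ME 3.17, MAE 9 -> both fail
                [1., 1, 1, 1, 1, 4]])     # ME 1.5,  MAE 4 -> both succeed
sr_me, sr_mae = success_rates(err, eps_me=3.0, eps_mae=5.0)
assert abs(sr_me - 2/3) < 1e-12 and abs(sr_mae - 2/3) < 1e-12
```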
Results for the CF-FA dataset in Table~\ref{tab-01} also show the advantage of our method, which successfully registers all images with \mbox{$\text{ME}\le 3$} and $\text{MAE}\le 5$ and achieves the best scores in Rep and MIR. \section{Discussion} We trained a CNN that extracts features of the vessel structure to jointly learn a cross-modal keypoint detector and descriptor for multi-modal retinal registration. In the experiments, we showed that our method achieves the best registration results for both multi-modal datasets. For the more challenging IR-OCT and IR-OCTA registrations, which have smaller overlap regions, we still achieve good success rates, while the competing methods show a strong decline. By training jointly on two datasets, our network learns to detect distinctive features in five different modalities. Further, we demonstrated that the same trained model can be used to register multi-modal images and follow-up scans. Thus, our method can be very beneficial for supporting the long-term analysis of retinal disease progression. As future work, we will investigate deep learning methods inspired by known operators for our registration pipeline. \bibliographystyle{bvm}
\section{Introduction}\label{intro} Let $E$ denote an elliptic curve defined over a number field~$K$. The canonical height is a quadratic form $\hat{h} \colon E(K)\otimes \R \to \R$, first constructed by N\'eron~\cite{Neron} and Tate (unpublished). For several applications, such as computing generators for~$E(K)$ and computing the regulator appearing in the conjecture of Birch and Swinnerton-Dyer, one needs to compute~$\hat{h}(P)$ for points $P \in E(K)$. To this end, one typically chooses a Weierstrass equation~$W$ for~$E$ over~$K$ with $\O_K$-integral coefficients and decomposes $\hat{h}(P)$ (or $\hat{h}(P) - h(P)$, where $h$ is the naive height on~$E$ with respect to~$W$) into a sum of local terms, one for each place of~$K$. For simplicity, let us assume $K=\Q$. There are several efficient algorithms for the computation of the contribution at infinity, see Section~\ref{arch}. A very simple and efficient algorithm of Silverman~\cite{SilvermanHeights} can be used to compute the non-archimedean contributions separately. However, in order to determine the non-archimedean places which contribute to $\hat{h}(P)$ (or $\hat{h}(P) - h(P)$), the algorithm of~\cite{SilvermanHeights} assumes that the prime factorization of the discriminant~$\Delta(W)$ is known, which renders this approach inefficient when the coefficients of~$W$ are large. This observation motivated Silverman's article~\cite{SilvermanLittleFact}, where it is shown how to compute~$\hat{h}(P)$ without the need to factor~$\Delta(W)$. Nevertheless, the algorithm of~\cite{SilvermanLittleFact} requires the prime factorization of $\gcd(c_4(W), c_6(W))$ in order to find a globally minimal Weierstrass equation for~$E$. In this note, we introduce an algorithm for the computation of~$\hat{h}(P)$ that does not require any factorization into primes at all and runs in time quasi-linear in the size of the input data and the desired precision of the result. 
More precisely, let $\|W\|$ denote the largest absolute value of the coefficients of~$W$, and let $d$ denote the number of desired bits of precision after the binary point. We denote the time needed to multiply two $d$-bit integers by~$\operatorname{\sf M}(d)$. The following result is our main theorem. Recall the `soft-O' notation: $f(n) \in \tilde{O}(g(n))$ means that there are constants $c, m > 0$ such that for $n$ sufficiently large, $|f(n)| \le c g(n) (\log g(n))^m$. Using fast multiplication algorithms, we have $\operatorname{\sf M}(d) \in \tilde{O}(d)$. We also use the notation $f(n) \ll g(n)$ to express the fact that there is a constant $c > 0$ such that $|f(n)| \le c g(n)$ for $n$ sufficiently large. \begin{thm}\label{T:main} Let $E$ be given by a Weierstrass equation~$W$ with coefficients in~$\Z$ and let $P \in E(\Q)$. Then we can compute~$\hat{h}(P)$ to $d$~bits of (absolute) precision in time \begin{align*} &\ll \log(d + h(P)) \operatorname{\sf M}(d + h(P)) \\ & \qquad{} + (\log\log \|W\|) \operatorname{\sf M}\bigl((\log\log \|W\|) (\log \|W\|)\bigr) \\ & \qquad{} + \log(d + \log\|W\|)^2 \operatorname{\sf M}(d + \log\|W\|) \\ &{} \in \tilde{O}(d + h(P) + \log\|W\|) \, . \end{align*} \end{thm} Since the size of the input is measured by $h(P) + \log\|W\|$ --- the first term gives the size of~$P$, the second term gives the size of~$W$ --- and the size of the output is measured by $\log h(P) + d$, this means that we can compute $\hat{h}(P)$ in quasi-linear time. The strategy of the proof is to first find an algorithm for the computation of the local non-archimedean contributions that does not assume minimality; see Proposition~\ref{P:fast algo}. Building on this, the non-archimedean contribution to $\hat{h}(P) - h(P)$ can be computed upon observing that it is a sum of rational multiples of logarithms of prime numbers, which can be determined by working globally modulo a suitable power of~$\Delta(W)$. 
Combining this with a complexity analysis of the fastest known algorithm for the computation of the local height at infinity due to Bost-Mestre~\cite{BMisog}, Theorem~\ref{T:main} follows. We note that Marco Caselli, working on his PhD under the supervision of John Cremona, is currently extending the Bost-Mestre algorithm to also deal with complex places. The paper is organized as follows. In Section~\ref{formulas} we set up some notation and introduce the notion of Kummer coordinates of points on elliptic curves. Heights and their local decompositions are recalled in Section~\ref{heights}. In Section~\ref{nonarch} we discuss an algorithm that allows us to compute a non-archimedean local summand of $\hat{h}(P)-h(P)$ efficiently without assuming minimality, and we estimate its running time. Section~\ref{arch} contains a discussion of the algorithm of Bost-Mestre for the computation of the local height at infinity and of its running time. We then combine the non-archimedean and the archimedean results into an efficient algorithm for the computation of $\hat{h}(P)$ in Section~\ref{algo}, leading to a proof of Theorem~\ref{T:main}. In the final Section~\ref{examples} we discuss the practicality of our algorithm. \subsection*{Acknowledgments} We would like to thank Jean-Fran\c{c}ois Mestre for providing us with a copy of the unpublished manuscript~\cite{BMisog}, Mark Watkins for answering our questions about the computation of canonical heights in {\sf Magma} and Elliot Wells for pointing out an inaccuracy in Algorithm~\ref{algo2}. \section{Kummer coordinates}\label{formulas} Let $K$ be a field and consider an elliptic curve~$E/K$, given by a Weierstrass equation \begin{equation}\label{W_eqn} W \colon y^2 + a_1xy + a_3y = x^3 + a_2x^2 + a_4x + a_6\,, \end{equation} where $a_1, a_2, a_3, a_4, a_6 \in K$. 
As usual, let \begin{align*} b_2 & = a_1^2+4a_2\,,\\ b_4 & = 2a_4+a_1a_3\,,\\ b_6 &= a_3^2+4a_6\,,\\ b_8 &= a_1^2a_6+4a_2a_6-a_1a_3a_4+a_2a_3^2-a_4^2\,, \end{align*} and let \[ \Delta(W) = -b_2^2 b_8 - 8b_4^3 - 27b_6^2 + 9b_2 b_4 b_6 \] denote the discriminant of the equation $W$. Consider the functions $f$ and~$g$, defined for $P \in E(K)\setminus\{O\}$ by \begin{align*} f(P)&= 4x(P)^3 + b_2x(P)^2 + 2b_4x(P) + b_6\,,\\ g(P)&= x(P)^4 - b_4x(P)^2 - 2b_6x(P) - b_8\,. \end{align*} Then for $P \in E(K)\setminus E[2]$, we have $x(2P)=g(P)/f(P)$. We now extend this to all $P \in E(K)$. Note that ${\mathbb P}^1$ is the Kummer variety $E/\{\pm 1\}$ of $E$. An explicit covering map $E \to {\mathbb P}^1$ is given by \[\begin{array}{crcl} \kappa \colon &E& \longrightarrow& {\mathbb P}^1\\ & (x:y:1)&\longmapsto& (x:1)\\ & O&\longmapsto& (1:0)\,.\\ \end{array}\] We call $(x_1,x_2) \in \A^2(K)\setminus\{(0,0)\}$ a pair of {\em Kummer coordinates} for $P \in E(K)$ if $\kappa(P) = (x_1:x_2)$. The degree~4 homogenizations of $g$ and~$f$ are \begin{align*} \delta_1(x_1,x_2) &= x_1^4 - b_4x_1^2x_2^2 - 2b_6x_1x_2^3 - b_8x_2^4\, ,\\ \delta_2(x_1,x_2) &= 4x_1^3x_2 + b_2x_1^2x_2^2 + 2b_4x_1x_2^3 + b_6x_2^4\, , \end{align*} respectively. For $(x_1,x_2) \in \A^2_K$, we set \[ \delta(x_1,x_2) = (\delta_1(x_1,x_2), \delta_2(x_1,x_2))\,. \] It follows that if $(x_1,x_2)$ is a pair of Kummer coordinates for $P\in E(K)$, then $\delta(x_1,x_2)$ is a pair of Kummer coordinates for $2P$. \section{Heights}\label{heights} Let $K$ be a number field and let $E/K$ be an elliptic curve, given by a Weierstrass equation~$W$ as in~\eqref{W_eqn}. We denote by~$M_K$ the set of places of~$K$. For a place $v \in M_K$, we normalize the associated absolute value $|{\cdot}|_v$ so that it restricts to the usual absolute value on~$\Q$ when $v$ is an infinite place and so that $|p|_v = p^{-1}$ when $v$ is a finite place above~$p$.
We write $n_v = [K_v : \Q_w]$ for the local degree, where $w$ is the place of~$\Q$ below~$v$. Then we have the product formula $\prod_{v \in M_K} |x|_v^{n_v} = 1$ for all $x \in K^\times$. The {\em naive height} of $P \in E(K)\setminus\{O\}$ with respect to~$W$ is given by \[ h(P) = \frac{1}{[K:\Q]}\sum_{v \in M_K}n_v \log\max\{|x_1|_v,|x_2|_v\}\,, \] where $(x_1, x_2)$ is a pair of Kummer coordinates for~$P$. Note that $h(P)$ does not depend on the choice of $(x_1, x_2)$, by the product formula. The limit \[ \hat{h}(P) = \lim_{n \to \infty} \frac{h(nP)}{n^2} \] exists and is called the {\em canonical height} (or {\em N\'eron-Tate height}) of~$P$. For the computation of~$\hat{h}(P)$, the limit construction is not suitable due to slow convergence and exponential growth of the size of the coordinates. Instead, one decomposes~$\hat{h}(P)$ into local terms. We now recall how this can be achieved, following~\cite{CPS}. For $v \in M_K$ and $Q \in E(K_v)$, we set \[ \Phi_v(Q) = \frac{\max\{|\delta_1(x_1,x_2)|_v, |\delta_2(x_1,x_2)|_v\}}{\max\{|x_1|_v,|x_2|_v\}^4} \] where $(x_1, x_2) \in \A^2(K_v) \setminus \{(0,0)\}$ is a pair of Kummer coordinates for~$Q$. Since $\delta_1$ and~$\delta_2$ are homogeneous of degree~$4$, $\Phi_v(Q)$ does not depend on the choice of~$(x_1, x_2)$. The function $\Phi_v$ is continuous and bounded on~$E(K_v)$, so it makes sense to define \[ \Psi_v(Q) = -\sum^{\infty}_{n=0} 4^{-n-1} \log\Phi_v(2^nQ)\, , \] which is likewise continuous and bounded. Note that for $P \in E(K)$, we have \[ h(2P) - 4h(P) = \frac{1}{[K:\Q]}\sum_{v \in M_K}n_v\log\Phi_v(P)\,, \] and Tate's telescoping trick yields the formula \begin{equation}\label{can_height_formula} \hat{h}(P) = h(P) - \frac{1}{[K:\Q]}\sum_{v \in M_K}n_v\Psi_v(P)\,, \end{equation} which we will use to compute the canonical height. It is also possible to decompose the canonical height into a sum of local height functions.
For $v \in M_K$ and $Q \in E(K_v)\setminus\{O\}$, we define the {\em local height} of~$Q$ as \[ \hat{\lambda}_v(Q) = \log \max\{1,|x(Q)|_v\} - \Psi_v(Q)\,. \] Then~\eqref{can_height_formula} immediately implies \begin{equation}\label{loc_height_decomp} \hat{h}(P) = \frac{1}{[K:\Q]}\sum_{v \in M_K}n_v\hat{\lambda}_v(P) \end{equation} for $P \in E(K)\setminus\{O\}$. \begin{rk} Several normalizations for the local height on elliptic curves can be found in the literature, see the discussion in~\cite{CPS}. Our normalization corresponds to that used in~\cite{CPS}, so in particular, our canonical height is twice the canonical height in Silverman's paper~\cite{SilvermanHeights} and in his books on elliptic curves. More precisely, we have \[ \hat{\lambda}_v(Q)=2\hat{\lambda}^{\text{SilB}}_v(Q)+\frac{1}{6}\log|\Delta(W)|_v\,, \] where $\hat{\lambda}^{\text{SilB}}_v$ is the normalization used in Silverman's book~\cite{ATAEC}*{Chapter VI}. The advantages of our normalizations are discussed in~\cite{CPS}; the crucial advantage of $\hat{\lambda}^{\text{SilB}}_v$ is its independence of the chosen Weierstrass equation. \end{rk} In Section~\ref{arch}, we need to know how local heights change under isogenies. \begin{prop}\label{P:bernardi}(Bernardi~\cite{Bernardi}) Let $E$ and~$E'$ be elliptic curves defined over~$K_v$, given by respective Weierstrass equations $W$ and~$W'$. Let $\varphi \colon E\longrightarrow E'$ be an isogeny of degree~$n$. 
If $Q\in E(K_v)$ satisfies $\varphi(Q)\ne O$, then we have \[ \hat{\lambda}_v(\varphi(Q))=n\hat{\lambda}_v(Q)-\log |F_\varphi(Q)|_v-\frac{1}{6}\log |m(\varphi)|_v, \] where \[F_\varphi(Q)=\prod_{R\in \ker(\varphi)\setminus \{O\}}(x(Q)-x(R))\] and \[m(\varphi)=\lim_{Q\to O}\left(\frac{x(Q)}{x(\varphi(Q))}\right)^6\frac{\Delta(W')}{\Delta(W)}.\] \end{prop} \section{Non-archimedean local error functions}\label{nonarch} In this section, we let $K$ be a non-archimedean local field with normalized additive valuation $v \colon K\twoheadrightarrow \Z\cup\{\infty\}$. Let $\O$ denote the valuation ring of~$K$, let $k$ denote the residue class field of~$\O$ and let $\pi$ be a uniformizing element of~$\O$. Consider an elliptic curve~$E/K$, given by a Weierstrass equation~$W$ as in~\eqref{W_eqn}, with coefficients in~$\O$. Given $P \in E(K)$, we choose a pair of Kummer coordinates $(x_1,x_2)$ for $P$ and define \[ \varepsilon(x_1,x_2)=\min\{v(\delta_1(x_1,x_2)),v(\delta_2(x_1,x_2))\}-4\min\{v(x_1),v(x_2)\} \in \Z\,. \] Note that $\varepsilon$ does not depend on the choice of Kummer coordinates, so we can define $\varepsilon(P) = \varepsilon(x_1,x_2)$ for any such choice. The function~$\varepsilon$ is nonnegative, bounded and continuous in the $v$-adic topology. Hence we can define \begin{equation}\label{mu_def} \mu(P) = \sum^{\infty}_{n=0} \frac{1}{4^{n+1}} \varepsilon(2^nP) \in \R\,. \end{equation} It follows that $\mu$ is nonnegative, bounded and continuous as well. One can show that in fact $\mu(P) \in \Q$, compare Table~\ref{T:mu_values}. \begin{rk}\label{R:completion} If $K$ is the completion of a number field at a non-archimedean place~$v$, then we have $n_v \log\Phi_v(P) = -\varepsilon(P)(\log \#k)$ and $n_v \Psi_v(P) = \mu(P) (\log \#k)$ for $P \in E(K)$, where $\Phi_v$ and~$\Psi_v$ are as defined in Section~\ref{heights}. \end{rk} If we have bounds for~$\varepsilon(P)$ and for the denominator of~$\mu(P)$, then we can use~\eqref{mu_def} to compute~$\mu(P)$.
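As an illustration, the following Python sketch (our own, with hypothetical names) carries this out over $\Q_p$ with exact integer arithmetic for the toy curve $y^2 = x^3 + 1$ and the point $P = (2,3)$, which reduces to the singular point both modulo~$2$ and modulo~$3$. The truncation depth $m$ and the recovery of the exact rational value follow Algorithm~\ref{algo1} and Lemma~\ref{L:fast algo 2} below, with $B = v_p(\Delta(W))$ serving as the bound for both $\varepsilon$ and the denominator of~$\mu$:

```python
import math
from fractions import Fraction

# Toy curve y^2 = x^3 + 1, so b2 = b4 = b8 = 0 and b6 = 4;
# Delta(W) = -432 = -2^4 * 27, i.e. bad reduction at 2 and 3.
b2, b4, b6, b8 = 0, 0, 4, 0

def delta(x1, x2):
    """Duplication on Kummer coordinates: (delta_1, delta_2)."""
    return (x1**4 - b4*x1**2*x2**2 - 2*b6*x1*x2**3 - b8*x2**4,
            4*x1**3*x2 + b2*x1**2*x2**2 + 2*b4*x1*x2**3 + b6*x2**4)

def val(n, p):
    """p-adic valuation of an integer, +infinity for 0."""
    if n == 0:
        return float('inf')
    n, v = abs(n), 0
    while n % p == 0:
        n, v = n // p, v + 1
    return v

def mu(x1, x2, p, B):
    """mu_p(P) from the truncated series (mu_def), for a primitive pair of
    integer Kummer coordinates (x1, x2); B bounds epsilon and the denominator."""
    if B <= 1:
        return Fraction(0)
    m = math.floor(math.log(B**3 / 3) / math.log(4))
    mu0 = Fraction(0)
    for n in range(m + 1):
        y1, y2 = delta(x1, x2)
        eps = min(val(y1, p), val(y2, p))        # = epsilon(2^n P)
        mu0 += Fraction(eps, 4**(n + 1))
        x1, x2 = y1 // p**eps, y2 // p**eps      # keep the pair p-primitive
    # the unique fraction with denominator <= B within 1/B^2 of mu0:
    return (mu0 + Fraction(1, 2 * B**2)).limit_denominator(B)
```

For $P = (2,3)$ one finds $\delta(2,1) = (0,36)$, so $x(2P) = 0$, and the sketch returns $\mu_2(P) = 2/3$ and $\mu_3(P) = 1/2$, consistent with the bounds of Lemma~\ref{L:mu_eps_bounds} since $v_2(\Delta(W)) = 4$ and $v_3(\Delta(W)) = 3$.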
\begin{lemma} \label{L:fast algo 2} Assume that $M \ge 2$ and~$B$ are nonnegative integers such that \begin{enumerate}[\upshape(1)]\addtolength{\itemsep}{1mm} \item $M'\mu(P) \in \Z$ for some $0<M'\le M$, and \item $\max \{\varepsilon(P) : P \in E(K)\} \le B$. \end{enumerate} Set $\displaystyle m = \left\lfloor\frac{\log(BM^2/3)}{\log 4}\right\rfloor$. Then $\mu(P)$ is the unique fraction with denominator~$\le M$ in the interval $[\mu_0, \mu_0 + 1/M^2]$, where \[ \mu_0 = \sum_{n=0}^{m} 4^{-n-1} \varepsilon(2^n P) . \] \end{lemma} \begin{proof} We know that $\mu(P)$ is a fraction with denominator bounded by~$M$. Two distinct such fractions have distance greater than~$1/M^2$ (here we use $M \ge 2$), so there is at most one such fraction in the given interval. On the other hand, we know that \[ \mu_0 \le \mu(P) \le \mu_0 + \sum_{n>m} 4^{-n-1} B = \mu_0 + \frac{B}{3\cdot 4^{m+1}} \le \mu_0 + \frac{1}{M^2}\,. \qedhere \] \end{proof} We now discuss how to bound~$\varepsilon(P)$ and the denominator of~$\mu(P)$. \begin{lemma}\label{L:mu_eps_bounds} For $P \in E(K)$, we have \begin{enumerate}[\upshape (i)]\addtolength{\itemsep}{1mm} \item $ 0 \le \mu(P) \le \frac{1}{4} v(\Delta(W))\,;$ \item $ 0 \le \varepsilon(P) \le v(\Delta(W))\,. $ \item The denominator of $\mu(P)$ is bounded from above by $v(\Delta(W))$. \end{enumerate} \end{lemma} \begin{proof} If the Weierstrass equation $W$ is minimal, then $\varepsilon(P)$ (or, equivalently, $\mu(P)$) vanishes if and only if $P$ has nonsingular reduction, and $\varepsilon(P)$ (or, equivalently, $\mu(P)$) depends only on the component of the special fiber of the N\'eron model of~$E$ that $P$ reduces to, see~\cite{SilvermanHeights}. For minimal~$W$, the nonzero values that $\mu$ can take and an upper bound~$\alpha$ are given in Table~\ref{T:mu_values}, taken from~\cite{CPS} and~\cite{ATAEC}. 
\begin{table} \begin{tabular}{|c||c|c|c|} \hline type & $v(\Delta)$ & $\mu$ & $\alpha$ {\Large\strut}\\\hline\hline I$_m$ & $m \ge 2$ & $i(m-i)/m,\; i=1,\ldots,m-1$ & $m/4$ {\Large\strut}\\\hline III & $\ge 3$ & $1/2$ & $1/2$ {\Large\strut}\\\hline IV & $\ge 4$ & $2/3$ & $2/3$ {\Large\strut}\\\hline I$^*_m$ & $\ge 6+m$ & 1, $(m+4)/4$ & $(m+4)/4$ {\Large\strut}\\\hline IV$^*$ & $\ge 8$ & $4/3$ & $4/3$ {\Large\strut}\\\hline III$^*$ & $\ge 9$ & $3/2$ & $3/2$ {\Large\strut}\\\hline \end{tabular} \medskip \caption{Nonzero values of and upper bounds~$\alpha$ for~$\mu$ for minimal Weierstrass equations} \label{T:mu_values} \end{table} Let $v_{\textrm{min}}$ denote the valuation of the minimal discriminant of~$E$ over~$\O$. In general, we have \[ 0 \le \mu(P) \le \alpha + \frac{1}{6}\left(v(\Delta(W)) - v_{\textrm{min}}\right) \] by~\cite{CPS}*{Proposition~8}, and (i)~follows from a straightforward computation. This also proves~(ii), because \[ \varepsilon(P) = 4\mu(P) - \mu(2P)\,. \] By the proof of~\cite{CPS}*{Proposition~8}, a transformation from one integral Weierstrass equation to another does not change $\mu(P)\bmod \Z$, so (iii)~follows from Table~\ref{T:mu_values}. \end{proof} Lemmas~\ref{L:fast algo 2} and~\ref{L:mu_eps_bounds} lead to an algorithm for the computation of $\mu(P)$. A pair $(x_1,x_2)$ of Kummer coordinates for $P$ is said to be {\em primitive}, if $\min\{v(x_1),v(x_2)\} = 0$. Recall that $\pi$ denotes a uniformizer of~$K$. \begin{algo}\label{algo1} \strut \begin{enumerate}[1.]\addtolength{\itemsep}{2mm} \item Set $B \colonequals v(\Delta)$. \item If $B \le 1$, then return~$0$. Otherwise set $m \colonequals \lfloor \log(B^3/3)/\log 4 \rfloor$. \item Set $\mu_0 \colonequals 0$. Let $(x_1,x_2)$ be primitive Kummer coordinates for~$P$ with $(m+1)B+1$ $v$-adic digits of precision. \item For $n \colonequals 0$ to~$m$ do: \begin{enumerate}[a.]\addtolength{\itemsep}{1mm} \item Compute $(x'_1,x'_2) \colonequals \delta(x_1,x_2)$ (to $(m+1)B+1$ $v$-adic digits of precision).
\item Set $\ell \colonequals \min\{v(x'_1),v(x'_2)\}$. \item If $\ell = 0$, then return $\mu_0$. \item Set $\mu_0 \colonequals \mu_0 + 4^{-n-1} \ell$. \item Set $(x_1,x_2) \colonequals \pi^{-\ell} (x'_1,x'_2)$ \end{enumerate} \item Return the unique fraction with denominator at most~$B$ in the interval $[\mu_0,\,\mu_0 + 1/B^2]$. \end{enumerate} \end{algo} We now show that the algorithm is correct and estimate its running time. \begin{prop} \label{P:fast algo} Algorithm~\ref{algo1} computes~$\mu(P)$. Its running time is \[ \ll (\log v(\Delta)) \operatorname{\sf M}\bigl((\log v(\Delta)) v(\Delta) (\log \#k)\bigr) \] as $v(\Delta) \to \infty$, with an absolute implied constant. \end{prop} \begin{proof} If $B = v(\Delta) \le 1$, then $\mu = \varepsilon = 0$ by Table~\ref{T:mu_values}. Otherwise the loop in step~4 computes the sum in Lemma~\ref{L:fast algo 2} (where now $M = B \ge 2$). When $\ell = 0$ in step~4c, then $\varepsilon(2^{n'}P) = 0$ for all $n' \ge n$, hence the infinite sum defining~$\mu$ is actually a finite sum and equals~$\mu_0$. (This step could be left out without affecting the correctness or the worst-case complexity of the algorithm.) Lemma~\ref{L:mu_eps_bounds} shows that $B$ is an upper bound for~$\varepsilon$ and that $M=B$ is an upper bound for the denominator of~$\mu$. So the algorithm computes~$\mu(P)$, provided the precision of $(m+1)B + 1$ $v$-adic digits is sufficient. For this, note that the precision loss at each duplication step is given by~$\varepsilon(2^n P)$ and is therefore bounded by~$B$. So after at most $m+1$~steps in the loop, the resulting~$(x_1,x_2)$ still has at least one digit of precision. It remains to estimate the running time. We assume that elements of~$\O$ are represented as truncated power series in~$\pi$, whose coefficients are taken from a complete set of representatives for the residue classes. Operations on these coefficients can be performed in time $\ll \operatorname{\sf M}(\log \#k)$. 
Then steps b through~e in the loop take negligible time compared to step~a, which involves a fixed number of additions and multiplications of elements given to a precision of $(m+1)B + 1$ digits, leading to a complexity of \[ \ll \operatorname{\sf M}\bigl(((m+1)B + 1) (\log \#k)\bigr) \] operations for each pass through the loop. The total running time is therefore \[ \ll (m+1) \operatorname{\sf M}\bigl(((m+1)B + 1) (\log \#k)\bigr) \ll (\log v(\Delta)) \operatorname{\sf M}\bigl((\log v(\Delta)) v(\Delta) (\log \#k)\bigr) \] as $v(\Delta) \to \infty$. \end{proof} \begin{rk}\label{R:silverman_algo} We stress that our algorithm does not require $W$ to be minimal. If we know that $W$ is minimal, then Silverman's algorithm~\cite{SilvermanHeights}*{\S5}, which only involves the computation of the valuations of a bounded number of polynomials in the coefficients of~$W$ and the coordinates of~$P$, can be used to compute~$\mu(P)$. \end{rk} \section{Archimedean local heights}\label{arch} Let $K$ be an archimedean local field with valuation~$v$. The following methods have been proposed for the computation of the local height $\hat{\lambda} = \hat{\lambda}_v$ on an elliptic curve~$E/K$, given by a Weierstrass equation~\eqref{W_eqn}: \begin{enumerate}[$\bullet$] \item An elegant series approach due to Tate and modified by Silverman~\cite{SilvermanHeights}. \item A more complicated series approach based on theta functions, see~\cite{Cohen}*{Algorithm~7.5.7}; \item An algorithm based on the Arithmetic Geometric Mean (AGM) and~2-isogenies introduced by Bost and Mestre in an unpublished manuscript~\cite{BMisog}, which currently requires $v$ to be real; see also Bradshaw's PhD thesis~\cite{BradshawThesis}. \end{enumerate} Tate's series converges linearly. The theta series has a better rate of convergence and is also faster in practice if the elliptic integrals arising in the algorithm are computed using the AGM. 
If $v$ is real and one is interested in high precision, then the method of Bost and Mestre is preferable, as it converges quadratically. We now describe this algorithm and provide a complexity analysis. Let $v$ be real and let $|{\cdot}|$ denote the usual absolute value on $K= \R$. We want to compute $\hat{\lambda}(P)$ for a point $P \in E(\R)$; for simplicity, we only consider the case $2P \ne O$. Note that the function~$\mu$ considered in~\cite{BMisog} satisfies $\mu = \frac{1}{2}\hat{\lambda}$. Applying a transformation, we may assume that $E$ is given by a Weierstrass equation \[ W \colon y^2 = x(x^2+ux+v)\,, \] where $u,v \in \R$. If all points of order~$2$ on~$E$ are real, then we set $E_0 = E$. Otherwise, consider the isogeny $E \to E_0$ defined by \begin{equation}\label{isog0} (x,y) \mapsto\left(\frac{x^2+ux+v}{x},\,y\frac{x^2-v}{x^2}\right)\,, \end{equation} where now $E_0$ has full~2-torsion over $\R$ and is given by the Weierstrass equation \[ y^2 = x(x^2-2ux+u^2-4v)\,. \] By Proposition~\ref{P:bernardi}, it suffices to compute the local height of the image of~$P$ on~$E_0$ to obtain~$\hat{\lambda}(P)$. For the algorithm, we need a Weierstrass equation \[ W_0 \colon y^2=x(x+a_0^2)(x+b_0^2) \] for~$E_0$, where $0 < b_0<a_0\in\R$. We may assume that $P$ lies on the connected component $E^0_0(\R)$ of the identity; if not, we can apply the algorithm to $2P \in E^0_0(\R)$ and compute $\hat{\lambda}(P)$ using \begin{equation}\label{conn-cpt} \hat{\lambda}(2P) = 4\hat{\lambda}(P) - \log|2y(P)|\,. \end{equation} We define the AGM sequences $(a_n)$ and $(b_n)$ by \[ a_n=\frac{a_{n-1}+b_{n-1}}{2}\,, \qquad b_n=\sqrt{a_{n-1}b_{n-1}}\, , \] and we let $M(a_0,b_0)$ denote their common limit. 
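The quadratic convergence of the AGM sequences just defined is easy to observe numerically. The following minimal Python sketch (ours, purely for illustration) computes $M(a_0,b_0)$ in double precision; the number of correct digits roughly doubles with each iteration, so a handful of steps suffice:

```python
import math

def agm(a, b, tol=1e-15):
    """Common limit M(a, b) of the AGM sequences a_n, b_n for a, b > 0.
    Convergence is quadratic: the error is roughly squared at every step."""
    while abs(a - b) > tol * a:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return (a + b) / 2
```

For instance, $M(1,\sqrt{2}) = 1.19814023\ldots$ (the classical lemniscatic value $\pi/\varpi$) is reached to full double precision after only four or five iterations.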
For $n\ge 1$ we recursively define an elliptic curve~$E_n$ over the reals by the Weierstrass equation \[ W_n \colon y^2=x(x+a_n^2)(x+b_n^2)\, , \] and we define a 2-isogeny $\varphi_{n-1} \colon E_n\to E_{n-1}$ by \[ (x,y)\longmapsto\left(\frac{x(x+b_n^2)}{x+a_n^2},\; y\frac{(x+a_{n-1}a_n)(x+b_{n-1}a_n)}{(x+a_n^2)^2}\right)\, . \] Then the sequence of curves $(E_n)_n$ converges to a singular cubic curve~$E_\infty$, with equation \[ W_\infty \colon y^2=x\left(x+M(a_0,b_0)^2\right)^2\,. \] Moreover, the sequence of isogenies $(\varphi_n)_n$ converges to the identity map on~$E_\infty(\R)$. Now let $\hat{\lambda}_n$ denote the local height on~$E_n(\R)$. Then Proposition~\ref{P:bernardi} asserts that \begin{equation}\label{isogrel} \hat{\lambda}_{n-1}(\varphi_{n-1}(P_n))=2\hat{\lambda}_n(P_n)-\log(x(P_n)+a_n^2)\, , \end{equation} whenever we have $x(\varphi_{n-1}(P_n))\ne0$. Bost and Mestre use~\eqref{isogrel} to give a formula for~$\hat{\lambda}(P)$. Note that $\varphi_{n-1}$ maps $E_{n}(\R)$ onto the connected component~$E^0_{n-1}(\R)$ and that points on~$E^0_{n-1}(\R)$ always have a unique preimage on~$E^0_n(\R)$ under~$\varphi_{n-1}$. Setting $P_0=P$, we therefore get a well-defined sequence of preimages $P_n=(x_n,y_n)\in E^0_n(\R)$, which converges to a point $P_\infty=(x_\infty,y_\infty)\in E_\infty(\R)$. Here $x_n$ can be calculated using \[ x_n=\frac{1}{2}\left(x_{n-1}-a_{n-1}b_{n-1}+\sqrt{(x_{n-1}+a_{n-1}^2)(x_{n-1}+b_{n-1}^2)}\right). \] From~\eqref{isogrel} we deduce \[ \hat{\lambda}(P) = \hat{\lambda}_0(P) = \log\lim_{n\to\infty}\frac{(x_n+a_n^2)^{2^{n-1}}}{\prod^{n-1}_{m=1}(x_m+a_m^2)^{2^{m-1}}}\,, \] or equivalently, \begin{equation}\label{bm_sum} \hat{\lambda}(P) = \log(x_1+a_1^2) + \sum^\infty_{n=1}2^{n}\log\frac{x_{n+1} + a_{n+1}^2}{x_{n} + a_{n}^2}\, . \end{equation} Because of the quadratic convergence of the~AGM, these formulas can be used to compute $\hat{\lambda}(P)$ to an accuracy of~$2^{-d}$ in $\ll \log (d + \log \|W\|)$ steps. 
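The iteration can be sketched in floating-point arithmetic as follows. This is an illustration in double precision, not the certified-precision version analyzed in Proposition~\ref{P:arch}, and it assumes the curve is already given as $y^2 = x(x+a_0^2)(x+b_0^2)$ with $P$ on the connected component of the identity. As a consistency check we use $a_0 = 2$, $b_0 = 1$ (the curve $y^2 = x(x+4)(x+1)$) and $P = (2,6)$: one checks that $2P = (0,0)$, so $P$ has order~$4$ and $\hat{h}(P) = 0$, while the series~\eqref{mu_def} gives $\mu_2(P) = 5/4$ and $\mu_3(P) = 1/2$; hence $\hat{\lambda}(P)$ at the real place must equal $\tfrac{5}{4}\log 2 + \tfrac{1}{2}\log 3$.

```python
import math

def lambda_bost_mestre(a0, b0, x0, steps=8):
    """Archimedean local height via the Bost-Mestre AGM iteration, eq. (bm_sum):
    curve y^2 = x(x + a0^2)(x + b0^2) with 0 < b0 < a0, and x0 = x(P) > 0."""
    a, b, x = a0, b0, x0
    # first AGM / preimage step and leading term log(x_1 + a_1^2)
    a1 = (a + b) / 2
    b1 = math.sqrt(a * b)
    x1 = (x - a * b + math.sqrt((x + a * a) * (x + b * b))) / 2
    lam = math.log(x1 + a1 * a1)
    a, b, x = a1, b1, x1
    for n in range(1, steps + 1):
        # at this point (a, b, x) = (a_n, b_n, x_n); advance one step
        an = (a + b) / 2
        bn = math.sqrt(a * b)
        xn = (x - a * b + math.sqrt((x + a * a) * (x + b * b))) / 2
        lam += 2 ** n * math.log((xn + an * an) / (x + a * a))
        a, b, x = an, bn, xn
    return lam
```

In double precision the partial sums stabilize after about five steps, in line with the quadratic convergence of the AGM.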
This was already shown by Bradshaw, see~\cite{BradshawThesis}*{\S6.1}. We give a slightly different error estimate. Note first that we have \[ a_n-b_n \le 2^{1-2^n}(a_0-b_0)\, . \] Because $x_n>0$ and $0<b_0<b_n<a_n$, this implies \begin{equation}\label{aux_bd} \frac{a_n^2-b_n^2}{x_n+a_n^2} \le 2^{1-2^n}(a_0-b_0)\frac{a_n+b_n}{x_n+a_n^2} \le 2^{2-2^n}\left(\frac{a_0}{b_0}-1\right)\, . \end{equation} Now set \[ s_n := 1 - \frac{x_{n+1}+a_{n+1}^2}{x_n+a_n^2} \quad\textrm{and}\quad\vartheta := \frac{a_0}{b_0}-1 + \sqrt{\frac{a_0}{b_0}-1}\, . \] Then we have $0<s_n<1$ and $\vartheta \ll \|W\|$. The sequence $s_n$ converges rapidly to~0 for large $n$, since~\eqref{aux_bd} implies \begin{align} s_n & \, = \left|\frac{1}{2}\left(\sqrt{\frac{x_n+b_n^2}{x_n+a_n^2}} + \frac{x_n+\left(\frac{a_n^2+b_n^2}{2}\right)}{x_n+a_n^2}\right)-1\right|\nonumber\\ & \,\le \frac{1}{2}\left|\sqrt{\frac{x_n+b_n^2}{x_n+a_n^2}} -1\right| + \frac{1}{2}\left|\frac{x_n+\left(\frac{a_n^2+b_n^2}{2}\right)}{x_n+a_n^2}-1\right|\nonumber\\ & \,\le \frac{1}{2}\sqrt{\frac{a_n^2-b_n^2}{x_n+a_n^2}} + \frac{1}{4}\left(\frac{a_n^2-b_n^2}{x_n+a_n^2}\right)\nonumber\\ & \,\le 2^{-2^{n-1}}\sqrt{\frac{a_0}{b_0}-1}+ 2^{-2^n}\left(\frac{a_0}{b_0}-1\right) \nonumber\\ & \,\le 2^{-2^{n-1}}\vartheta\label{s_n-bound}\, . \end{align} In particular, we have $s_n \le \frac{1}{2}$ for $n \ge \log_2(\log_2\vartheta + 1) + 1$, so that $|\log(1-s_n)| \le 2s_n$ for such $n$. We can use this to bound the tail of the series in~\eqref{bm_sum}. Namely, we have \begin{align*} \left|\sum^\infty_{n=N+1}2^{n}\log\frac{x_{n+1} + a_{n+1}^2}{x_{n} + a_{n}^2} \right| & \,\le \sum^\infty_{n=N+1}2^{n}\left|\log(1-s_n) \right|\\ & \,\le \vartheta\sum^\infty_{n=N+1}2^{1+n-2^{n-1}}\, , \end{align*} if $N \ge \log_2(\log_2\vartheta + 1)$. 
For $n \ge 4$, we have $n-2^{n-1} \le -2^{n-2}$, so \begin{equation}\label{est_n4} \left|\sum^\infty_{n=N+1}2^{n}\log\frac{x_{n+1} + a_{n+1}^2}{x_{n} + a_{n}^2} \right| \le \vartheta\sum^\infty_{n=N+1}2^{1-2^{n-2}} \le 2^{2-2^{N-1}}\vartheta \end{equation} follows, provided $N \ge \max\{3, \log_2(\log_2\vartheta + 1) \}$. Having computed~$\hat{\lambda}(P)$ for $P \in E(\R)$, we get~$\Psi_\infty(P)$ from \begin{equation}\label{psi-lambda} \Psi_\infty(P) = \log \max\{1,|x(P)|\}-\hat{\lambda}(P)\, . \end{equation} \begin{prop}\label{P:arch} The algorithm above computes $\Psi_\infty(P)$ to $d$~bits of precision in time \[ \ll \log(d + \log\|W\|)^2 \operatorname{\sf M}(d + \log\|W\|)\, . \] \end{prop} \begin{proof} Suppose first that we have already computed $a_0, b_0$ and $x_0$ and that $P$ lies on the connected component $E_0^0(\R)$. By~\eqref{bm_sum} and~\eqref{est_n4}, we have \[ \left|\hat{\lambda}(P) - \log(x_1+a_1^2) - \sum^N_{n=1}2^{n}\log\frac{x_{n+1} + a_{n+1}^2}{x_{n} + a_{n}^2}\right| \le 2^{-d}\,, \] for \[ N = \max\left\{3,\, \log_2\left(d+2+\log_2 \vartheta\right)+1\right\} \ll \log (d + \log \|W\|)\, . \] For every $n \le N$, we have to apply a fixed number of additions, multiplications and square roots to compute $a_{n+1}, b_{n+1}$ and $x_{n+1}$ --- which can be done to $d'$~bits of precision in time $\ll \operatorname{\sf M}(d')$ --- and we have to compute $\log(1-s_n)$. Because of precision loss due to the multiplication by $2^n$, we need to compute $\log(1-s_n)$ to an additional $n$~bits, so we need an initial precision of \[ d + N \ll d + \log (d + \log \|W\|) \] bits. A logarithm can be computed to $d'$~bits of precision in time $\ll (\log d')\operatorname{\sf M}(d')$ using one of several quadratically converging algorithms based on the AGM, see~\cite{Borwein}*{Chapter~7}.
Therefore, and by~\eqref{s_n-bound}, we can compute $\log(1-s_n)$ to $d+n$~bits of precision in time \[ \ll \log(d + \log(d + \log \|W\|))\operatorname{\sf M}(d + \log (d + \log \|W\|))\, . \] The computation of $\log(x_1+a_1^2)$ to $d$~bits of precision takes time \[ \ll \log(d + \log\|W\|)\operatorname{\sf M}(d + \log\|W\|) \, . \] Hence, given $a_0$, $b_0$ and $x_0$ to $d + N$~bits of precision, we can compute $\hat{\lambda}(P)$ to $d$~bits of precision in time \begin{align*} &\ll \log(d + \log\|W\|)\, \times\\ &\qquad{} \left(\operatorname{\sf M}(d + \log\|W\|) + \log\bigl(d + \log(d + \log\|W\|)\bigr)\operatorname{\sf M}\bigl(d + \log(d + \log\|W\|)\bigr)\right)\, . \end{align*} We can then find $\Psi_\infty(P)$ using~\eqref{psi-lambda} in time $\ll \log(d) \operatorname{\sf M}(d)$, which is negligible. To compute $a_0$, $b_0$ and $x_0$ from a given Weierstrass equation, we need to find the roots of at most two polynomials of degree $\le 3$ with real coefficients, transform the corresponding Weierstrass equation and find the image of our point $P$ under these transformations. The roots of a polynomial of fixed degree to $d'$~bits of precision can be found in time $\ll \operatorname{\sf M}(d')$, see~\cite{Borwein}*{Theorem~6.4}; the same holds for the evaluation of a polynomial of fixed degree. To counter loss of precision, we start with an initial precision of $\ll d + \log \|W\| + \log(d + \log\|W\|)$ bits, so we can compute $a_0$, $b_0$ and $x_0$ to $d + N$~bits of precision in time \[ \ll \operatorname{\sf M}(d + \log \|W\| + \log(d + \log\|W\|))\, , \] which is dominated by the complexity of the remaining parts of the algorithm. The logarithmic correction terms coming from~\eqref{conn-cpt} and from Proposition~\ref{P:bernardi} applied to the isogeny~\eqref{isog0} and to the change of model needed to find $W_0$ can be computed to sufficiently many bits of precision in time $\ll \log(d + \log \|W\|)\operatorname{\sf M}(d + \log \|W\|)$. 
Hence the result follows. \end{proof} \begin{rk}\label{R:series} For large $n$, computing $\log(1-s_n)$ using an AGM-based algorithm might be less efficient than using a power series such as \[ \log x = 2\sum^{\infty}_{k=0}\frac{1}{2k+1}\left(\frac{x-1}{x+1}\right)^{2k+1}\, . \] The reason is that by~\eqref{s_n-bound}, the numbers $1-s_n$ are very close to~1, so only a few terms of the power series have to be computed. \end{rk} \section{Computing the canonical height of rational points}\label{algo} We combine the results of Sections~\ref{nonarch} and~\ref{arch} into an efficient algorithm for computing the canonical height of a point $P$ on an elliptic curve $E$ over a number field, proving Theorem~\ref{T:main}. For simplicity, we take this number field to be~$\Q$ in the following. We assume that our curve is given by a Weierstrass equation~\eqref{W_eqn} $W$ with coefficients in~$\Z$, but we make no minimality assumption. One usually computes~$\hat{h}(P)$ using the decomposition~\eqref{loc_height_decomp} into local heights~$\hat{\lambda}_v(P)$. The local height~$\hat{\lambda}_\infty(P)$ can be computed using the algorithm of Bost-Mestre discussed in Section~\ref{arch} or one of the other approaches mentioned there. If the factorization of~$\Delta(W)$ is known, we can use~\cite{SilvermanHeights}*{\S5} to compute the local heights~$\hat{\lambda}_p(P)$ efficiently. Alternative, but less efficient, algorithms can be found in~\cite{TschoepeZimmer} and~\cite{Zimmer}. If we know that $W$ is minimal (for which some factorization is required, see the introduction), then we can use~\cite{SilvermanLittleFact} to compute $\sum_p \hat{\lambda}_p(P)$ without factoring~$\Delta(W)$. Another approach to computing~$\hat{h}(P)$ without factorization is discussed in~\cite{EverestWard}, but their method does not yield a polynomial-time algorithm.
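The key point, namely that the total contribution of the finite places needs only gcd computations and never a factorization of~$\Delta(W)$, can be previewed with a small Python sketch (ours, with hypothetical names). Over~$\Q$, the gcd of the integer pair $\delta(x_1,x_2)$ collects $\varepsilon_p(2^nP)$ for all primes $p$ simultaneously, so summing $4^{-n-1}\log$ of these gcds accumulates $\sum_p \mu_p(P)\log p$ by~\eqref{mu_def} and Remark~\ref{R:completion}. For the curve $y^2 = x(x+4)(x+1)$ and the $4$-torsion point $P = (2,6)$, the result must be $\tfrac{5}{4}\log 2 + \tfrac{1}{2}\log 3$, which one can verify by factoring $\Delta(W) = 2304 = 2^8 \cdot 3^2$ in this toy case:

```python
import math

# Toy curve y^2 = x(x + 4)(x + 1), i.e. a2 = 5, a4 = 4, so
# b2 = 20, b4 = 8, b6 = 0, b8 = -16 and Delta(W) = 2304 = 2^8 * 3^2.
b2, b4, b6, b8 = 20, 8, 0, -16

def delta(x1, x2):
    """Duplication on Kummer coordinates: (delta_1, delta_2)."""
    return (x1**4 - b4*x1**2*x2**2 - 2*b6*x1*x2**3 - b8*x2**4,
            4*x1**3*x2 + b2*x1**2*x2**2 + 2*b4*x1*x2**3 + b6*x2**4)

def psi_finite(x1, x2, terms=8):
    """Truncated sum_n 4^{-n-1} log g_n, where g_n is the gcd of delta(...)
    applied to primitive integer Kummer coordinates of 2^n P.  Only gcds
    are used; Delta(W) is never factored."""
    total = 0.0
    for n in range(terms):
        y1, y2 = delta(x1, x2)
        g = math.gcd(abs(y1), abs(y2))
        total += math.log(g) / 4 ** (n + 1)
        x1, x2 = y1 // g, y2 // g      # reduce to a primitive pair again
    return total
```

Here $g_0 = 144$, $g_1 = 16$ and $g_n = 1$ for $n \ge 2$ (the point is torsion), so the series is in fact finite; in general the truncation depth and the exact recovery of the rational coefficients are the subject of the algorithm below.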
Our goal is to devise an algorithm for the computation of~$\hat{h}(P)$ that runs in time quasi-linear in $\log \|W\|$, $h(P)$ and the required precision~$d$, measured in bits after the binary point. We note that $h(P)$ is the logarithm of a rational number, so it can be computed in time $\ll \log(h(P) + d) \operatorname{\sf M}(h(P) + d)$. In the previous section, we showed that there is a quasi-linear algorithm for the computation of $\Psi_\infty(P)$, see Proposition~\ref{P:arch}. It remains to see how the total contribution \[{\Psi}^{\text{f}}(P) \colonequals \sum_p \Psi_p(P) = \sum_p \mu_p(P) \log p\] coming from the local error functions at finite places can be computed efficiently; here we write~$\mu_p$ for the local height correction function over~$\Q_p$ as in Definition~\ref{mu_def}. Fix $P \in E(\Q)$. We assume that $(x_1,x_2)\in \Z^2$ is a primitive (i.e., $\gcd(x_1,x_2) = 1$) pair of Kummer coordinates for~$P$. We set $g_n = \gcd(\delta(x^{(n)}_1,x^{(n)}_2))$ where $(x^{(n)}_1,x^{(n)}_2) \in \Z^2$ is a primitive pair of Kummer coordinates for~$2^n P$. Then the definition of $\mu_p$ implies that \[ {\Psi}^{\text{f}}(P) = \sum_{n=0}^\infty 4^{-n-1} \log g_n \,. \] See~\cite{FlynnSmart} for a related approach in genus~2. By Lemma~\ref{L:mu_eps_bounds} we know that each~$g_n$ divides~$\Delta(W)$. The key observation is that ${\Psi}^{\text{f}}(P)$ is a rational linear combination of logarithms of positive integers, which can be computed exactly as follows. \begin{algo}\label{algo2}\strut \begin{enumerate}[1.]\addtolength{\itemsep}{2mm} \item Set $(x_1',x_2') \colonequals \delta(x_1,x_2)$, $g_0 \colonequals \gcd(x'_1,x'_2)$ and $(x_1,x_2) \colonequals (x'_1/g_0,x'_2/g_0)$. \item Set $D \colonequals \gcd(\Delta(W), g_0^\infty)$ and $B \colonequals \lfloor \log D/\log 2 \rfloor$. \item If $B \le 1$, then return $0$. Otherwise set $m \colonequals \lfloor \log(B^5/3)/\log 4 \rfloor$. 
\item For $n \colonequals 1$ to $m$ do: \begin{enumerate}[a.]\addtolength{\itemsep}{1mm} \item Compute $(x'_1,x'_2) \colonequals \delta(x_1,x_2) \bmod D^{m+1} g_0$. \item Set $g_n \colonequals \gcd(D,\gcd(x'_1,x'_2))$ and $(x_1,x_2) \colonequals (x'_1/g_n,x'_2/g_n)$. \end{enumerate} \item Use Bernstein's algorithm from~\cite{dcba2} to compute a sequence $(q_1, \ldots, q_r)$ of pairwise coprime positive integers such that each~$g_n$ (for $n = 0, \ldots, m$) is a product of powers of the~$q_i$: $g_n = \prod_{i=1}^r q_i^{e_{i,n}}$. \item For $i \colonequals 1$ to $r$ do: \begin{enumerate}[a.]\addtolength{\itemsep}{1mm} \item Compute $a \colonequals \sum_{n=0}^{m} 4^{-n-1} e_{i,n}$. \item Let $\mu_i$ be the simplest fraction between $a$ and $a+1/B^4$. \end{enumerate} \item Return $\sum_{i=1}^r \mu_i \log q_i$, a formal linear combination of logarithms. \end{enumerate} \end{algo} \begin{prop} \label{P:nofact} The preceding algorithm computes ${\Psi}^{\text{\rm f}}(P)$ in time \[ \ll (\log\log D) \operatorname{\sf M}((\log\log D) (\log D)) + \operatorname{\sf M}(h(P)) \,. \] \end{prop} \begin{proof} We note that if $B \le 1$ in step~3, then either $g_0 = 1$ and $\Psi^{\text{\rm f}}(P) = 0$, or else $D \in \{2,3\}$. In the latter case, $g_0$ is a power of $p = 2$ or~$3$ and $v_p(\Delta(W)) = 1$, which would imply that $\varepsilon_p(P) = 0$, so $g_0 = 1$, and we get a contradiction. Let $p$ be a prime. If $p \nmid g_0$, then $\varepsilon_p(P) = 0$ and therefore $\mu_p(P) = 0$. So we now assume that $p$ divides~$g_0$. We have $v_p(\Delta(W)) = v_p(D) \le B$. We see that $p^{(m+1)v_p(D) + 1}$ divides $D^{m+1} g_0$, so computing modulo~$D^{m+1} g_0$ will provide sufficient $p$-adic accuracy so that $v_p(g_n) = \varepsilon_p(2^n P)$ for all $n \le m$, compare the proof of Proposition~\ref{P:fast algo} above. (One could replace $D^{m+1} g_0$ by~$D^{m+1-n} g_0$ in step 4a.) 
Since all the~$g_n$ are power products of the~$q_i$, there will be exactly one~$i = i(p)$ such that $p \mid q_{i(p)}$; let $b_p = v_p(q_{i(p)})$. Then \[ \sum_{n=0}^m 4^{-n-1} \varepsilon_p(2^n P) = \sum_{n=0}^m 4^{-n-1} v_p(g_n) = b_p \sum_{n=0}^m 4^{-n-1} e_{i(p),n} = b_p a \,, \] so \[ \mu_p(P) = \sum_{n=0}^\infty 4^{-n-1} \varepsilon_p(2^n P) = b_p a + \sum_{n=m+1}^\infty 4^{-n-1} \varepsilon_p(2^n P) \,, \] where the last sum is in $[0, 1/B^4]$ (this follows from $0 \le \varepsilon_p \le B$, see Lemma~\ref{L:mu_eps_bounds}, and the definition of~$m$, compare the proof of Lemma~\ref{L:fast algo 2}). We know that the denominator of~$\mu_p(P)$ is at most~$B$ (see Lemma~\ref{L:mu_eps_bounds}), so the denominator of~$\mu_p(P)/b_p$ is at most~$B^2$, since $b_p\le v_p(D) \le B$. On the other hand, $a \le \mu_p(P)/b_p \le a + 1/(b_p B^4) \le a + 1/B^4$, which implies that $\mu_p(P)/b_p$ is the unique fraction in $[a, a + 1/B^4]$ with denominator at most~$B^2$, so $\mu_p(P)/b_p = \mu_{i(p)}$ by Step~6b. Now \[ \sum_p \mu_p(P) \log p = \sum_p \mu_{i(p)} b_p \log p = \sum_{i=1}^r \mu_i \sum_{p \mid q_i} b_p \log p = \sum_{i=1}^r \mu_i \log q_i \,. \] It remains to estimate the running time. The computation of~$\delta(x_1,x_2)$ can be done in time $\ll \operatorname{\sf M}(h(P))$; the same is true for the gcd computation and the division in step~1. The computations in steps 2 and~3 take negligible time compared to step~4. Each pass through the loop in step~4 takes time $\ll \operatorname{\sf M}\bigl((m+2) \log D\bigr)$, so the total time for the loop is $\ll m \operatorname{\sf M}(m (\log D)) \ll (\log\log D) \operatorname{\sf M}((\log\log D) (\log D))$. The algorithm in~\cite{dcba2} computes suitable~$q_i$ for a pair $a, b$ of positive integers in time $\ll (\log ab)(\log\log ab)^{2}$. We iterate this algorithm, applying it first to $g_0$ and~$g_1$, then to each of the resulting $q_i$ and~$g_2$, and so on. Note that $g_n\le D$ for all $n$. 
Because there are always $\ll \log D$ terms in the sequence of~$q_i$'s, this leads to a contribution of $\ll \log D (\log\log D)^3$ for step~5. This is dominated by the time for the loop. The remaining steps take negligible time. \end{proof} In practice, the efficiency of this approach can be improved as follows: \begin{enumerate}[$\bullet$] \item We trial factor $\Delta(W)$ up to some bound~$T$ to split off the contributions of all sufficiently small primes~$p$. We can then compute the corresponding~$\mu_p$ using the algorithm of Proposition~\ref{P:fast algo} or the algorithm of~\cite{SilvermanHeights}, see Remark~\ref{R:silverman_algo}. In step~3, we can then set $B \colonequals \lfloor \log D'/\log T \rfloor$, where $D'$ is the unfactored part of~$D$. Note that in practice the trial division can take quite a bit more time than it saves, in particular when the equation has large coefficients, so this modification should be used with care. \item We update our list of `building blocks'~$q_i$ after each pass through the loop in step~4 using the new~$g_n$; we do the computation modulo suitable powers of the~$q_i$ instead of modulo~$D^{m+1} g_0$. We can also use separate values of $B$ and~$m$ for each~$q_i$, which will usually be smaller than those given above. \item In this way, we can integrate steps 4, 5 and~6 into one loop. \item We can replace $B^5$ in the definition of~$m$ by~$2B^4$. Then $\mu_p(P) \le b_p a + 1/(2B^3)$ and $a \le \mu_p(P)/b_p \le a + 1/(2b_pB^3)$. If $\mu_p(P)/b_p = r/s$ with $s \le B b_p$, then we have $a \le r/s \le a + 1/(2sB^2) \le a + 1/(2s^2)$. There can be at most one fraction~$r/s$ with $s \le B^2$ satisfying this: if $r'/s'$ is another such fraction, then \[ \frac{1}{ss'} \le \Bigl|\frac{r}{s} - \frac{r'}{s'}\Bigr| \le \frac{1}{2\min\{s,s'\}B^2} \,, \] which leads to the contradiction $\max\{s,s'\} \ge 2B^2$. 
We can then find $\mu_i = \mu_p(P)/b_p$ as the first convergent~$r/s$ of the continued fraction expansion of~$a$ that is $\ge a$ and satisfies $r/s \le a + 1/(2sB^2)$. \end{enumerate} Combining Proposition~\ref{P:arch} and Proposition~\ref{P:nofact}, we finally obtain an efficient algorithm for computing the canonical height~$\hat{h}(P)$ of a point $P \in E(\Q)$. \begin{proof}[Proof of Theorem~\ref{T:main}] The first term is the time needed to compute~$h(P)$. The second term comes from the complexity bound for the computation of $\Psi^f(P)$ (using $\log D \ll \log \|W\|$) from Proposition~\ref{P:nofact}. The third term is the bound for the computation of $\Psi^\infty(P)$ given in Proposition~\ref{P:arch}. \end{proof} \section{Implementation and Examples} \label{examples} We have implemented our algorithm using the computer algebra system {\sf Magma}~\cite{Magma}. In the current implementation, the factorization into coprimes in the algorithm preceding Proposition~\ref{P:nofact} uses a relatively simple algorithm due to Buchmann and Lenstra~\cite{BuchmannLenstra}*{Proposition~6.5} instead of the faster algorithm of~\cite{dcba2} (or of~\cite{dcba}). In practice, the running time of this part of the algorithm appears to be negligible. Let us compare our implementation to {\sf Magma's} built-in command {\tt CanonicalHeight} (version~2.21-2). The latter uses the method of Bost-Mestre for the computation of the archimedean local height. For the finite part of the height, a globally minimal Weierstrass model is computed. The non-archimedean contributions are then computed separately using the algorithm from~\cite{SilvermanHeights}; the relevant primes are found by factoring $\gcd(\delta_1(x_1,x_2),\delta_2(x_1,x_2))$, where $(x_1,x_2)$ is a primitive pair of Kummer coordinates for a point $P$. The same strategy is currently used in {\tt Pari/GP}. 
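To make the structure of the algorithm behind Proposition~\ref{P:nofact} concrete, the following Python sketch models its main ingredients. It assumes a curve in short Weierstrass form $y^2 = x^3 + ax + b$ (so the duplication polynomials below are specific to this sketch), approximates ${\Psi}^{\text{f}}(P)$ by the truncated sum $\sum_{n \le m} 4^{-n-1}\log g_n$ in floating-point arithmetic, and recovers a fraction of smallest denominator in an interval as in step~6b; the exact algorithm instead works modulo $D^{m+1}g_0$ and factors the~$g_n$ into coprimes.

```python
import math
from fractions import Fraction
from math import gcd

def delta_map(x, z, a, b):
    # Duplication polynomials on Kummer coordinates (x : z)
    # for y^2 = x^3 + a*x + b (short Weierstrass form assumed).
    d1 = x**4 - 2*a*x**2*z**2 - 8*b*x*z**3 + a**2*z**4
    d2 = 4*z*(x**3 + a*x*z**2 + b*z**3)
    return d1, d2

def psi_finite_approx(x, z, a, b, m=10):
    # sum_{n<=m} 4^(-n-1) * log g_n, with g_n the gcd of delta applied
    # to primitive Kummer coordinates of 2^n P (assumes 2^n P != O).
    total = 0.0
    for n in range(m + 1):
        d1, d2 = delta_map(x, z, a, b)
        g = gcd(d1, d2)
        total += 4.0 ** (-n - 1) * math.log(g)
        x, z = d1 // g, d2 // g   # primitive coordinates of 2^(n+1) P
    return total

def simplest_between(lo, hi):
    # Fraction with smallest denominator in [lo, hi] (Stern-Brocot),
    # as needed in step 6b for the interval [a, a + 1/B^4].
    n = math.floor(lo)
    lo, hi = lo - n, hi - n
    if lo == 0:
        return Fraction(n)
    if hi >= 1:
        return Fraction(n + 1)
    return n + 1 / simplest_between(1 / hi, 1 / lo)
```

For example, for $P = (3,5)$ on $y^2 = x^3 - 2$ one finds $g_0 = \gcd(129, 100) = 1$, so the finite part vanishes.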
The computer algebra system {\tt Sage} contains an implementation of, essentially, Silverman's original algorithm for the computation of canonical heights from~\cite{SilvermanHeights}; in particular, it factors the discriminant. \begin{ex}\label{E:ex1} Consider the family $E_a$ of curves given by the Weierstrass equation \[ W_a \colon y^2 = x^3 - ax + a \,, \] where $a$ is an integer, and the non-torsion point $P = (1, 1)$ on $E_a$. To compute $\hat{h}(P)$, {\sf Magma} needs to find a globally minimal model for $E_a$, which boils down to deciding whether a sixth power of a prime divides $a$. Hence, for random integers $a$ of large absolute value, the {\sf Magma} implementation becomes slow. For instance, taking $a$ to be {\tiny 5340200419833800017985460942490398389444339691251749039558531515293241873258929634112121245344691478}, which has~100 digits and is of the form $a = 2\cdot 37\cdot a'$ with $a'$ composite, {\sf Magma}'s built-in {\tt CanonicalHeight} takes about an hour, but our implementation needs only 0.001~seconds to compute $\hat{h}(P)$ to~30 decimal digits of precision. For these computations, and the ones below, we used a Dell Latitude~E7440 Laptop with 8~GB of memory and an i5-4300U~CPU with two cores having 1.9~GHz each. For $a$ equal to {\tiny 11564989338479595339888318793988161304389769478402845252925842502529380219520469639630008648580579144420 \\644034811856542472168315806833370153467480796669618513200953623811052728493745808300717019759850}, which has~200 digits and factors as $a = 2\cdot3^2\cdot5^2 \cdot a'$ with $a'$ composite, the computation of $\hat{h}(P)$ using our implementation takes 0.003~seconds, whereas {\sf Magma} needs about 5~hours and 30~minutes. 
Finally, we look at the~500-digit number $a =\, ${\tiny 28276805523181086329328141188416415606304708589734\\77817578971661824087775869298113031993537983620824509955240160299513508612337439203295411762778576874861\\6863628083464269023575658346783517541505391502873826466 503688549496039448522504993529003411479688448361\\01223685296862173154902553901481398879346590153031505842226530360178416613777225501497807415587146715112\\586124106534351729435112961600134931787708117028525772977 3270941059335530220433045635898507554473398924\\420918799034729911478550230429211184}, which factors as $a = 2^4\cdot 23\cdot71 \cdot a'$ with $a'$ composite. Our implementation needs 0.009~seconds to compute $\hat{h}(P)$; {\sf Magma}'s {\tt CanonicalHeight} did not terminate in~6 weeks. For this $a$, the computation of the canonical height of $50P$, which has naive height $h(50P) \approx 1437536.77$, took 0.215~seconds, whereas it took {\sf Magma} 2.83~seconds to even compute $50P$! For random $a$ having~5000 digits, the computation of $\hat{h}(P)$ to the standard precision of 30~decimal digits usually takes about 0.7~seconds. Our implementation is usually faster than {\tt CanonicalHeight} if $a$ has at least~18 decimal digits. Note that in contrast to our implementation, the {\sf Magma} implementation of the algorithm of Bost-Mestre for $\hat{\lambda}_\infty$ is written in {\sf C}. \end{ex} \begin{bibdiv} \begin{biblist} \bib{Bernardi}{article}{ author = {Dominique Bernardi}, title = {Hauteur p-adique sur les courbes elliptiques}, book = { title = {Seminar on number theory Paris 1979--1980}, series={Progr. Math.}, volume={12}, publisher={Birkh\"auser Boston, Boston, MA}, }, date = {1981}, pages = {1--14},} \bib{dcba}{article}{ year={2005}, author={Daniel J. Bernstein}, title={Factoring into coprimes in essentially linear time}, journal={Journal of Algorithms}, volume={54}, pages={1--30}, } \bib{dcba2}{misc}{ author={Daniel J. 
Bernstein}, year={2004}, title={Research announcement: Faster factorization into coprimes}, note={Preprint} } \bib{Borwein}{book}{ author={Borwein, Jonathan M.}, author={Borwein, Peter B.}, title={Pi and the AGM}, series={Canadian Mathematical Society Series of Monographs and Advanced Texts, 4}, publisher={John Wiley \& Sons, Inc., New York}, date={1998}, } \bib{Magma}{article}{ author={Bosma, Wieb}, author={Cannon, John}, author={Playoust, Catherine}, title={The Magma algebra system. I. The user language}, journal={J. Symbolic Comput.}, volume={24}, date={1997}, number={3-4}, pages={235--265}, url={See also the Magma home page at http://magma.maths.usyd.edu.au/magma/}, } \bib{BMisog}{misc}{ author={Bost, Jean-Beno{\^{\i}}t}, author={Mestre, Jean-Fran{\c{c}}ois}, year={1993}, title={Calcul de la hauteur archim\'edienne des points d'une courbe elliptique par un algorithme quadratiquement convergent et application au calcul de la capacit\'e de l'union de deux intervalles}, note={Unpublished Manuscript} } \bib{BradshawThesis}{thesis}{ author={Robert W. Bradshaw}, title={Provable Computation of Motivic $L$-functions}, date={2010}, organization={University of Washington}, type={PhD thesis}, } \bib{BuchmannLenstra}{article}{ author={Buchmann, J. A.}, author={Lenstra, H. W., Jr.}, title={Approximating rings of integers in number fields}, journal={J. Th\'eor. Nombres Bordeaux}, volume={6}, date={1994}, number={2}, pages={221--260}, issn={1246-7405}, } \bib{Cohen}{book}{ author={Henri Cohen}, title={A course in computational algebraic number theory}, publisher={Springer-Verlag}, date={1993}, } \bib{CPS}{article}{ author={Cremona, John}, author={Prickett, Martin}, author={Siksek, Samir}, title={Height difference bounds for elliptic curves over number fields}, journal={J. 
Number Theory}, volume={116}, date={2006}, number={1}, pages={42--68}, } \bib{EverestWard}{article}{ author={Everest, Graham}, author={Ward, Thomas}, title={The canonical height of an algebraic point on an elliptic curve}, journal={New York J. Math.}, volume={6}, date={2000}, pages={331--342}, } \bib{FlynnSmart}{article}{ author={Flynn, E. Victor}, author={Smart, Nigel P.}, title={Canonical heights on the Jacobians of curves of genus $2$ and the infinite descent}, journal={Acta Arith.}, volume={79}, date={1997}, number={4}, pages={333--352}, } \bib{Neron}{article}{ author={N{\'e}ron, A.}, title={Quasi-fonctions et hauteurs sur les vari\'et\'es ab\'eliennes}, journal={Ann. of Math. (2)}, volume={82}, date={1965}, pages={249--331}, issn={0003-486X}, } \bib{SilvermanHeights}{article}{ author={Silverman, Joseph H.}, title={Computing heights on elliptic curves}, journal={Math. Comp.}, volume={51}, date={1988}, number={183}, pages={339--358}, } \bib{ATAEC}{book}{ author={Silverman, Joseph H.}, title={Advanced topics in the arithmetic of elliptic curves}, series={Graduate Texts in Mathematics}, volume={151}, publisher={Springer-Verlag, New York}, date={1994}, } \bib{SilvermanLittleFact}{article}{ author={Silverman, Joseph H.}, title={Computing canonical heights with little (or no) factorization}, journal={Math. Comp.}, volume={66}, date={1997}, number={218}, pages={787--805}, } \bib{TschoepeZimmer}{article}{ author={Tsch{\"o}pe, Heinz M.}, author={Zimmer, Horst G.}, title={Computation of the N\'eron-Tate height on elliptic curves}, journal={Math. Comp.}, volume={48}, date={1987}, number={177}, pages={351--370}, } \bib{Zimmer}{article}{ author={Zimmer, Horst G.}, title={A limit formula for the canonical height of an elliptic curve and its application to height computations}, conference={ title={Number theory}, address={Banff, AB}, date={1988}, }, book={ publisher={de Gruyter, Berlin}, }, date={1990}, pages={641--659}, } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} Partons produced at large transverse momenta (\pt) through hard-scattering processes in heavy-ion collisions are expected to lose energy as they travel through the quark-gluon plasma (QGP) created in these interactions~\cite{Bjorken:1982tu}. Experiments at RHIC and the LHC have observed a suppression in the yield of high-$\pt$ particles relative to suitably scaled pp collision data, and a significant reduction in back-to-back high-\pt hadron correlations \cite{Adcox:2001jp,Adcox:2004mh,Arsene:2004fa,Back:2004je,Adams:2005dq,Aamodt:2010jd,CMS:2012aa,Aad:2015wga,Adler:2002tq} that have been interpreted as evidence for strong partonic interactions within the dense medium that causes the quenching of jets. A direct observation of this effect using jets was provided by ATLAS \cite{Aad:2010bu} and CMS \cite{CMS_dijet2010, Chatrchyan:2012nia} through a comparison of the \pt balance of dijets in PbPb and pp collisions. In head-on PbPb collisions, a large increase in asymmetric dijet events was observed relative to the pp reference. This reflects the difference in energy lost by the two scattered partons in the medium, an effect that becomes more pronounced as the path lengths travelled by the partons and the energy density of the medium increase. In pPb collisions, no excess in unbalanced dijets was observed~\cite{Chatrchyan:2014hqa}, leading to the conclusion that the dijet imbalance does not originate from initial-state effects. A wide range of models was proposed to accommodate the dependence of dijet data on the jet \pt and the centrality of the collision, \ie on the degree of overlap of the two colliding nuclei~\cite{He:2011pd,Young:2011qx,Qin:2010mn,Casalderrey-Solana:2014bpa,Mehtar-Tani:2013pia,CasalderreySolana:2010eh}. 
Further evidence for parton energy loss was found in studies of correlations between isolated photons and jets in PbPb events~\cite{Chatrchyan:2012gt}, where the unmodified isolated photon provides a measure of the initial parton momentum~\cite{Chatrchyan:2012vq}. As energy is conserved in all interactions in the medium, parton energy loss does not imply the disappearance of energy, but its redistribution in phase space such that it is not recovered with standard jet clustering methods. The observed jet quenching naturally leads to questions of how the angular and \pt distributions of charged particles are modified by the energy loss of partons as they traverse the medium. A measurement of these spectra can provide information about the physical processes underlying parton energy loss, which can yield insights into the properties of the strongly interacting medium \cite{Kurkela:2014tla}. Particle distributions inside the jet cone (within $\Delta = \sqrt{ \smash[b]{ (\eta_\text{trk}-\eta_\text{jet})^2 + (\phi_\text{trk}-\phi_\text{jet})^2 } } = 0.2$--0.4, where $\phi$ is the azimuthal angle in radians and $\eta$ is the pseudorapidity) were studied in terms of jet fragmentation functions and jet shapes~\cite{Chatrchyan:2012gw,Chatrchyan:2013kwa,Chatrchyan:2014ava,Aad:2014wha}. These distributions show a moderate softening and broadening of the in-cone fragmentation products in PbPb collisions compared to pp data. However, the observed changes account for only a small fraction of the dijet momentum imbalance, indicating that a large amount of energy is transported outside of the jet cone through interactions in the medium.
Identifying the distribution of particle \pt surrounding the jets (\ie the pattern of \pt ``flow'' relative to the dijet system) is challenging, as the ``lost'' \pt is only of the order of 10\GeV, while the total \pt from soft processes forming the underlying event (UE) in a head-on (central) PbPb collision is about three orders of magnitude larger~\cite{dNdeta,Chatrchyan:2012mb}. The angular distribution of the radiated energy is a priori unknown. To overcome these difficulties, CMS previously used the ``missing \pt'' method that exploits momentum conservation and azimuthal symmetry in dijet events. This method makes it possible to distinguish the correlated particles carrying the energy lost by jets from the uncorrelated particles, the directions of which are not related to the axes of the jets~\cite{CMS_dijet2010}. The momenta of all charged-particle tracks were therefore projected onto the jet direction, leading to a balancing of the uncorrelated particles, and thereby revealing the \pt flow relative to the dijet system. In pp events, imbalance in the \pt of leading and subleading jets is accommodated through three-jet and multijet final states. In PbPb collisions, quenching effects modify the spectrum and angular distribution of particles that recover the \pt balance within the dijet system. These studies showed that the overall energy balance is restored only when low-momentum particles ($\pt \approx 0.5$--2\GeV) at large angles to the jet axis ($\Delta > 0.8$) are considered. The original CMS analysis used a PbPb data sample corresponding to an integrated luminosity of 10\mubinv~\cite{CMS_dijet2010}, which was insufficient for a detailed study of the angular pattern. In addition, no pp data at the same collision energy were available at the time.
In this paper, PbPb data corresponding to an integrated luminosity of 166\mubinv from a heavy-ion run at a nucleon-nucleon center-of-mass energy of 2.76\TeV, and pp data corresponding to an integrated luminosity of 5.3\pbinv taken at the same center-of-mass energy are used in a more comprehensive study. The new data provide an opportunity for detailed characterization of the multiplicity, momentum, and angular distribution of particles associated with the flow of \pt in dijet events in PbPb and pp collisions, as a function of collision centrality and dijet \pt asymmetry. Collision centrality refers to configurations with different impact parameters of the lead nuclei. By changing the centrality, jet quenching can be studied as a function of the size and density of the medium. To study the \pt flow relative to the dijet system, two complementary approaches are pursued, both relying on the cancellation of contributions from the uncorrelated UE. First, the \pt values of individual tracks are projected onto the dijet axis, defined as the bisector of the leading (highest \pt) jet axis and the subleading (next highest \pt) jet axis, with the latter flipped by $\pi$ in $\phi$. These projections are then summed to investigate the overall $\pt$ flow in dijet events. This ``missing \pt'' analysis is used to study how the lost momentum is distributed as a function of the separation of the track from the jet axis, $\Delta$. The second approach involves the study of the difference in the total number of particles emitted in the leading and subleading jet hemispheres. The measurements are carried out as a function of the collision centrality in PbPb collisions, and as a function of the dijet \pt imbalance in pp and PbPb collisions. To investigate how differences in jet fragmentation affect energy loss mechanisms, jets are clustered using several anti-$\kt$ $R$ parameters (0.2, 0.3, 0.4 and 0.5) \cite{FastJet,bib_antikt}.
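Schematically, the first approach amounts to the following projection (a Python sketch; the tuple representation of tracks and the sign convention are choices made here, not taken from the analysis code):

```python
import math

def missing_pt_parallel(tracks, phi_dijet):
    # Project each track pT onto the dijet axis and sum; contributions
    # from uncorrelated UE particles cancel on average in this sum.
    # tracks: iterable of (pt, phi) pairs; phi_dijet: dijet-axis azimuth.
    return sum(-pt * math.cos(phi - phi_dijet) for pt, phi in tracks)
```

With this (assumed) sign convention, a perfectly balanced set of back-to-back tracks sums to zero, while an excess of \pt on the axis side gives a negative value.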
\section{CMS detector} The central feature of the CMS apparatus is a superconducting solenoid with a 6\unit{m} internal diameter. Within the superconducting solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections. Forward calorimeters extend the pseudorapidity~\cite{Chatrchyan:2008zzk} coverage provided by the barrel and endcap detectors. Muons are measured in gas-ionization detectors embedded in the steel flux-return yoke outside the solenoid. The silicon tracker measures charged particles within the pseudorapidity range $\abs{\eta}< 2.5$. It consists of 1440 silicon pixel and 15\,148 silicon strip detector modules and is located in the 3.8\unit{T} field of the superconducting solenoid. For nonisolated particles of $1 < \pt < 10\GeV$ and $\abs{\eta} < 1.4$, the track resolutions are typically 1.5\% in \pt and 25--90 (45--150)\mum in the transverse (longitudinal) impact parameter \cite{TRK-11-001}. The ECAL has coverage up to $\abs{\eta}=1.48$, and the HCAL up to $\abs{\eta}=3$. Steel and quartz fibre hadron forward (HF) calorimeters extend the acceptance to \mbox{$\abs{\eta}=5$}. For central $\eta$, the calorimeter cells are grouped in projective towers of granularity $\Delta\eta\times\Delta\phi = 0.087 \times 0.087$. The ECAL was initially calibrated using test beam electrons, and then with photons from $\pi^0$ and $\eta$ meson decays and electrons from \PZ boson decays~\cite{CMS-DP-2011-008,Khachatryan:2015iwa,Khachatryan:2015hwa}. The energy scale in data agrees with that in the simulation to better than 1\,(3)\% in the barrel (endcap) region, $\abs{\eta}<1.5$ ($1.3<\abs{\eta}<3.0$) ~\cite{CMS-PAS-EGM-10-003}. Hadron calorimeter cells in the $\abs{\eta}<3$ region are calibrated primarily with test beam data and radioactive sources~\cite{hcal_jinst,Abdullin:2009zz}. 
A more detailed description of the CMS detector, together with a definition of the coordinate system and kinematic variables, can be found in Ref.~\cite{Chatrchyan:2008zzk}. \section {Monte Carlo simulation} \label{MCdefinition} To study the performance of jet reconstruction in PbPb and pp collisions, dijet events in nucleon-nucleon collisions are simulated with the \PYTHIA Monte Carlo (MC) event generator~\cite{Pythia} (version 6.423, tune Z2~\cite{Z2,Field}). To account for isospin effects present in PbPb collisions, the underlying pp, pn, and nn subcollisions are weighted by cross sections using the model from Ref.~\cite{Lokhtin:2005px}. For the simulation of dijet signals, a minimum hard-interaction scale of 30\GeV is used to increase the number of dijet events. To model the PbPb UE, minimum bias PbPb events are simulated with the \textsc{hydjet} event generator, version 1.8~\cite{Lokhtin:2005px}. The parameters of this version are tuned to reproduce total particle multiplicities, improve agreement with the observed charged-hadron spectra, and to approximate the fluctuations in UE seen in data. Proton-proton collisions are generated using leading-order (LO) {\PYTHIA} (without \textsc{hydjet} simulation). Full detector simulation using the \GEANTfour{} package~\cite{bib_geant} and the standard CMS analysis chain are used to process both {\PYTHIA} dijet events and \PYTHIA dijet events embedded into \textsc{hydjet} events (denoted \textsc{pythia+hydjet} in this paper). Jet reconstruction is studied using the jet information in the \PYTHIA generator in comparison to the same fully reconstructed jet in \textsc{pythia+hydjet}, after matching the generator-level and reconstructed jets in angular regions of $\Delta^{\rm reco,gen} = \sqrt{ \smash[b]{ (\eta^{\rm gen}_\text{jet}-\eta^{\rm reco}_\text{jet})^{2}+(\phi^{\rm gen}_\text{jet}-\phi^{\rm reco}_\text{jet})^{2} } } < R$. 
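The matching criterion $\Delta^{\rm reco,gen} < R$ can be sketched as follows (Python; the wrapping of $\phi$ differences into $(-\pi,\pi]$ and the closest-match choice are implementation details assumed here):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    # Angular distance with the phi difference wrapped into (-pi, pi].
    dphi = (phi1 - phi2 + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def match_jets(gen_jets, reco_jets, R):
    # Match each generator-level jet (eta, phi) to the closest
    # reconstructed jet within Delta^{reco,gen} < R.
    pairs = []
    for ig, (geta, gphi) in enumerate(gen_jets):
        best, best_dr = None, R
        for ir, (reta, rphi) in enumerate(reco_jets):
            dr = delta_r(geta, gphi, reta, rphi)
            if dr < best_dr:
                best, best_dr = ir, dr
        if best is not None:
            pairs.append((ig, best))
    return pairs
```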
\section{Jet reconstruction} \label{sec:Jetreco} Jet reconstruction in heavy-ion collisions at CMS is performed using the anti-$\kt$ algorithm and distance parameters $R = 0.2$ through $0.5$, as implemented in the FastJet framework~\cite{FastJet}. Jets are reconstructed based on energies deposited in the CMS calorimeters. The probability of having a pileup collision is 23\%, and the average transverse energy ($E_{\rm T}$) associated with the UE is less than 1\GeV. For pp collisions, no subtraction is employed for the underlying event or for pileup from overlapping pp interactions. In contrast, for PbPb collisions, a new ``HF/Voronoi'' algorithm is used to subtract the heavy-ion background~\cite{CMS-DP-2013-018}. The transverse energy is defined by $E_{\rm T} = \sum E_{i} \sin{(\theta_{i})}$, where $E_{i}$ is the energy of the ${i}^{\rm th}$ particle in the calorimeter, $\theta_{i}$ is the polar angle of particle $i$ measured from the beam axis, and the sum is over all particles emitted into a fixed $\Delta$ in an event. The HF/Voronoi algorithm removes the UE contribution by estimating the $E_{\rm T}$ contribution from the UE at central $\eta$, and its azimuthal dependence, from deposition in the HF detector. The estimation is performed using a polynomial model that is trained using a singular-value decomposition method~\cite{GolubReinschSVD}, separately on minimum bias data and MC simulation. After an average $E_{\rm T}$ is subtracted from each calorimeter tower, based on its location in $\eta$ and $\phi$, the calorimeter towers containing non-physical negative $E_{\rm T}$ are eliminated by redistributing the energy among neighboring positive $E_{\rm T}$ towers in a circular region of radius $R + 0.1$. The redistribution is implemented by minimizing a metric that describes the total energy difference before and after the process, given that after the redistribution all towers have positive energy.
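As a caricature of this two-step idea, subtracting an estimated UE $E_{\rm T}$ per tower and then removing the resulting negative towers while conserving the total, consider the following sketch; unlike the actual HF/Voronoi algorithm, it rescales all positive towers globally instead of redistributing locally within a circular region:

```python
def subtract_and_equalize(tower_et, ue_et):
    # Step 1: subtract the estimated UE E_T from each tower.
    residual = [et - ue for et, ue in zip(tower_et, ue_et)]
    # Step 2: zero the negative towers and rescale the positive ones so
    # that the total E_T is conserved (a global simplification of the
    # local, metric-minimizing redistribution described above).
    deficit = -sum(r for r in residual if r < 0)
    positive = sum(r for r in residual if r > 0)
    scale = (positive - deficit) / positive if positive > 0 else 0.0
    return [max(r, 0.0) * scale for r in residual]
```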
The initial calorimetric $E_{\rm T}$ values are corrected as a function of \pt and $\eta$ to match the jets clustered using all particles, except neutrinos, at the generator level of \PYTHIA. The consistency of the corrected jet energy scale (JES), defined as $\langle \pt^{\rm reco}/\pt^{\rm gen} \rangle$, is checked as a function of $\pt$ and $\eta$ using \textsc{pythia+hydjet} events in bins of event centrality. The deviations are within 2\% for all centrality, \pt, and $\eta$ bins, and less than 1\% for jet \pt greater than 60\GeV. Because the calorimeter response is a nonlinear function of particle energy, jets that fragment into many particles with smaller energies have a smaller response than jets of the same energy with fewer fragments. To account for the dependence of JES on the fragmentation of jets, an additional correction is applied as a function of reconstructed jet \pt, and as a function of the number of charged particles with $\pt > 2\GeV$ in a cone of $R$ around the jet axis. The number of charged particles in \textsc{pythia+hydjet} is calculated using the \pt values obtained after the ``HF/Voronoi'' subtraction. For \PYTHIA, \pt values without any UE subtraction are used to calculate the number of charged particles. The fragmentation-dependent correction applied in PbPb collisions is calculated using \textsc{pythia+hydjet} events with matching UE activity. This correction reduces the separation in JES between quark and gluon jets, and also lessens the impact of jet reconstruction on fragmentation of the leading and subleading jets. The residual JES that accounts for the difference in calorimeter response in data and MC events is calculated using dijet balance in pp and peripheral (50--100\% centrality) PbPb collisions~\cite{Chatrchyan:2011ds}, based on data. This difference is found to be less than 2\% for $\abs{\eta} < 2$.
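The closure quantity and the fragmentation-sensitive input used above can be sketched as follows (hypothetical helper functions; $\phi$ wrapping is omitted for brevity):

```python
import math

def jet_energy_scale(matched):
    # JES = < pt_reco / pt_gen > over matched reco-gen jet pairs.
    ratios = [reco / gen for reco, gen in matched]
    return sum(ratios) / len(ratios)

def n_charged_in_cone(tracks, jet_eta, jet_phi, R, pt_min=2.0):
    # Number of charged particles with pt > pt_min within Delta < R of
    # the jet axis, the input to the fragmentation-dependent correction.
    return sum(1 for pt, eta, phi in tracks
               if pt > pt_min
               and math.hypot(eta - jet_eta, phi - jet_phi) < R)
```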
\section{Track reconstruction} \label{sec:Trackreco} For studies of pp data and \PYTHIA MC events, charged particles are reconstructed using the same iterative method~\cite{TRK-11-001} as in previous CMS analyses of pp collisions. However, for PbPb data and \textsc{pythia+hydjet} events, a different iterative reconstruction~\cite{CMS:2012aa,Chatrchyan:2013kwa} is employed after extending the global tracking information down to $\pt = 0.4\GeV$. To minimize the impact of inefficiencies in track reconstruction caused by the \pt resolution in track seeds near the 0.4\GeV threshold, only tracks with $\pt > 0.5\GeV$ are used in this analysis. Reconstructed tracks in \PYTHIA and \textsc{pythia+hydjet} simulations are matched to primary particles using the associated hits, \ie, charged particles that are produced in the interaction or are remnants of particles with a mean proper lifetime of less than $5 \times 10^{13}\GeV^{-1}$. The misidentification rate is defined as the fraction of reconstructed tracks that do not match any charged particle (primary or otherwise). The multiple reconstruction rate is given by the fraction of primary particles that are matched to more than one reconstructed track. Tight track quality criteria are applied to reduce the rate of misidentified or secondary particles~\cite{TRK-11-001}. Requirements are less restrictive for pp than for PbPb collisions. Heavy-ion tracking requires a larger number of hits in the tracker and a smaller normalized $\chi^{2}$ value for fits to reconstructed tracks. For both systems, tracks are required to be compatible with the vertex with the largest sum of track \pt. In pp collisions, the track reconstruction efficiency is ${\approx} 90\%$ at $\pt = 10\GeV$ and 80\% at 0.5\GeV. The misidentification rate for tracks is ${<}2\%$ for $\pt > 1\GeV$ and slightly higher below this value. The contribution from secondary particles is subtracted, as the secondary-particle rate is as high as 2\%.
The multiple reconstruction rate is smaller than 1\%. The efficiency and misidentification corrections are calculated as a function of $\eta$, $\phi$, $\pt$, and the distance to the nearest jet axis, while simpler secondary-particle and multiple reconstruction corrections are applied that depend only on the $\eta$ and $\pt$ values of charged particles. Because the track reconstruction efficiency is larger in pp than in PbPb collisions, the momentum flow can be measured with higher precision in pp collisions; in the high-multiplicity environment of heavy-ion collisions, track reconstruction remains a challenge. In PbPb collisions, the reconstruction efficiency for primary charged particles, after implementing the above track quality criteria, is approximately 70\% at $\pt \approx 10\GeV$. The efficiency starts to drop for $\pt$ below 5\GeV, reaching 30\% at 0.5\GeV. The misidentification rate for tracks with $\pt = 0.5\GeV$ is ${\approx} 35\%$, but decreases to values smaller than 2\% for $\pt > 1\GeV$. The secondary-particle rate and multiple-reconstruction rate are, respectively, less than 0.5\% and 0.3\% over the whole \pt range in the analysis. No corrections are applied for these in PbPb collisions. Using \textsc{pythia+hydjet} simulations, track reconstruction efficiency and misidentification rates are evaluated as a function of the $\eta$, $\phi$, and \pt of the track, as well as the centrality of the collision, and the smallest distance in $\Delta$ between the track and a jet with $\pt > 50\GeV$. Tracks used in the analysis are weighted with a factor to correct for the effects described above. The value of this correction is \begin{equation} {c}^\text{trk} = \frac{(1-\text{misreconstruction}) \, (1-\text{secondary-particle})}{(\text{efficiency})\,(1+\text{multiple-reconstruction})}, \label{Eqn:ctrk} \end{equation} where secondary-particle and multiple-reconstruction rates are set to zero for PbPb collisions. 
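As a concrete illustration, the per-track weight of Eq.~\ref{Eqn:ctrk} can be sketched as follows (a minimal Python sketch; the function name, argument names, and numerical rates are illustrative, not taken from the CMS analysis software):

```python
def track_weight(efficiency, misreco, secondary=0.0, multiple=0.0):
    """Per-track correction weight c_trk, following Eq. (ctrk).

    For PbPb collisions the secondary-particle and
    multiple-reconstruction rates are set to zero, so only the
    efficiency and misreconstruction terms contribute.
    """
    return (1.0 - misreco) * (1.0 - secondary) / (efficiency * (1.0 + multiple))

# PbPb-like example: 70% efficiency, 2% misreconstruction rate
w_pbpb = track_weight(efficiency=0.70, misreco=0.02)

# pp-like example: all four rates enter
w_pp = track_weight(efficiency=0.90, misreco=0.02, secondary=0.02, multiple=0.01)
```

Each reconstructed track then enters every sum in the analysis with this weight, so that efficiency losses inflate its contribution while misreconstruction and secondary-particle contamination deflate it.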
\section{Analysis} \label{sec:analysis} Events are selected using an inclusive single-jet trigger with jet \mbox{$\pt > 80\GeV$}. To suppress electronic noise, cosmic rays, and beam backgrounds, events are required to satisfy selection criteria documented in Refs.~\cite{CMS_dijet2010,Chatrchyan:2012gt}. Events passing selections are subject to offline jet reconstruction. To select samples containing high-\pt dijets, events are required to have a leading (subleading) jet in the range of $\abs{\eta}<2$ with a corrected jet $\pt> 120\, (50)\GeV$. The single-jet trigger is fully efficient for events with the requirement on the leading jet \pt for all the $R$ parameters in the analysis. To select a dijet topology, the azimuthal separation between the leading and subleading jets is required to be $\Delta\phi_{1,2} = \abs{\phi_{1} - \phi_{2}} > 5\pi/6$. Once leading and subleading jets are identified within the initial range of $\abs{\eta}<2$, both jets are then restricted to a tighter $\abs{\eta}$ range. For measurements that offer comparison to a previous analysis~\cite{CMS_dijet2010}, we use the previous selection of $\abs{\eta}<1.6$. For those that extend up to large angular distances $\Delta$, a tighter requirement of $\abs{\eta}<0.6$ is applied, such that leading and subleading jets are far from the edge of the tracker and all ranges in $\Delta$ fall within the acceptance. This analysis aims to provide information that would aid the characterization of the energy loss mechanisms responsible for the increase in the fraction of unbalanced dijet pairs in central PbPb relative to pp collisions. As hard-scattered partons travel and shower in the QGP, they can both trigger a coherent medium response and undergo interactions in the medium that modify the showers of both partons. However, the enhancement in unbalanced dijet pairs suggests that, on average, the subleading jet loses more energy than the leading jet. 
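The dijet selection described above can be sketched as follows (an illustrative Python sketch; the jet representation as dictionaries and the helper name are hypothetical, while the numerical thresholds are those quoted in the text):

```python
import math

def select_dijet(jets, pt_lead=120.0, pt_sub=50.0, eta_max=2.0,
                 dphi_min=5 * math.pi / 6):
    """Return (leading, subleading) jets passing the dijet selection,
    or None. Each jet is a dict with keys 'pt', 'eta', 'phi'."""
    # Keep only jets inside the initial acceptance |eta| < 2
    cand = sorted((j for j in jets if abs(j['eta']) < eta_max),
                  key=lambda j: j['pt'], reverse=True)
    if len(cand) < 2:
        return None
    lead, sub = cand[0], cand[1]
    # Leading (subleading) jet pt thresholds of 120 (50) GeV
    if lead['pt'] <= pt_lead or sub['pt'] <= pt_sub:
        return None
    # Back-to-back requirement: |phi1 - phi2| > 5*pi/6, wrapped to [0, pi]
    dphi = abs(lead['phi'] - sub['phi'])
    dphi = min(dphi, 2 * math.pi - dphi)
    if dphi <= dphi_min:
        return None
    return lead, sub
```

The tighter $\abs{\eta}<1.6$ or $\abs{\eta}<0.6$ requirement would then be applied to the returned pair, as described in the text.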
The modification in jet balance must be compensated by the remaining, unclustered constituents of the event, as each interaction conserves overall momentum. The particles that provide the \pt balance are correlated with the jet axes, but the particles that are not affected by the interaction of the partons with the medium are evenly distributed in azimuth relative to the individual directions of the leading and subleading jets. The total \pt of these particles is uncorrelated with the dijet pair. To differentiate the uncorrelated and correlated particles, we compare differences in multiplicity in leading and subleading jet hemispheres. In addition, we measure modifications in the \pt spectrum of charged particles that contribute to the overall \pt balance in the event, as well as their angular distribution with respect to the dijet system. Using the azimuthal symmetry of the jet axes relative to the UE makes it possible to perform precise measurements for particles down to $\pt = 0.5\GeV$, and angles as large as $\Delta = 1.8$. This provides constraints on energy loss mechanisms despite the small signal-to-background ratio. The cancellation of the uncorrelated UE depends on azimuthal symmetry of the areas selected around the leading and subleading jets relative to the axis of projection. As mentioned above, to ensure this requirement, the dijet azimuthal angle ($\phi_\text{dijet}$) is defined as the average $\phi$ of the leading and subleading jets after the subleading jet is reflected around the origin. In contrast with previous publications~\cite{CMS_dijet2010}, $\phi_\text{dijet}$ is preferred over $\phi_{1}$ (the $\phi$ of the leading jet) for the projection axis, because the latter choice breaks azimuthal symmetry, by generating particles near the leading jet that have larger projections at small angles relative to particles produced at the same distance to the subleading jet. 
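The construction of the projection axis $\phi_\text{dijet}$, the average $\phi$ of the leading jet and the reflected subleading jet, can be sketched as follows (an illustrative Python sketch; the angle-wrapping helper is an assumption about how the average is made insensitive to the branch cut at $\pm\pi$, not CMS code):

```python
import math

def wrap(phi):
    """Wrap an angle into (-pi, pi]."""
    return math.atan2(math.sin(phi), math.cos(phi))

def phi_dijet(phi_lead, phi_sub):
    """Average phi of the leading jet and the subleading jet after the
    latter is reflected through the origin (phi -> phi + pi). Averaging
    the wrapped difference avoids artifacts when the two angles
    straddle +-pi."""
    reflected = wrap(phi_sub + math.pi)
    return wrap(phi_lead + 0.5 * wrap(reflected - phi_lead))
```

For an exactly back-to-back pair the reflected subleading jet coincides with the leading jet, so $\phi_\text{dijet}$ reduces to $\phi_1$; any acoplanarity is shared symmetrically between the two hemispheres.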
The perfect cancellation of contributions from particles to \pt flow, and to differences in hemisphere multiplicities from UE, takes place only when there is no interaction between the UE and the jets. This is the case in \textsc{pythia+hydjet} simulations. In data, because of the variations in path length in medium traversed by jets, there are complicated correlations between particles from different interactions and the jet directions. These correlations comprise a part of the signal probed in this analysis. The observables used in this analysis are measured in bins of centrality and dijet imbalance. The dependence on centrality in PbPb collisions is investigated in terms of the emergence and enhancement of jet quenching effects as the size of the medium and energy density increase, while selections on dijet imbalance enrich the sample in events with subleading jets that lose more energy than the leading jet. To define centrality classes, collisions with inelastic hadronic interactions are divided into percentages according to the $E_{\rm T}$ of calorimeter towers summed in the HF, and events are assigned into classes of centrality based on these total sums in the HF. The distribution in this $E_{\rm T}$ is used to divide the event sample into bins, each representing 0.5\% of the total nucleus-nucleus interaction cross section. Following Refs.~\cite{CMS_dijet2010,Chatrchyan:2012nia}, we quantify \pt imbalance through the asymmetry ratio $ \AJ = (p_{{\rm T},1}-p_{{\rm T},2})/(p_{{\rm T},1}+p_{{\rm T},2})$, where $p_{\rm{T},1}$ and $p_{\rm{T},2}$ are the \pt of the leading and subleading jets within $\abs{\eta} < 2.0$, respectively. The $\AJ$ boundaries used in the analysis are 0.11, 0.22, 0.33 and 0.44, which correspond to $p_{\rm T,2}/p_{\rm T,1}$ values of 0.8, 0.64, 0.50 and 0.39, respectively. \subsection{Difference in multiplicities} The events are bisected with a plane perpendicular to $\phi_\text{dijet}$ into two hemispheres associated with the leading and subleading jets. 
The multiplicity difference is defined as the difference between the corrected number of tracks with $\pt > 0.5\GeV$ ($N_\text{trk}^\text{Corrected} = \sum {c}^\text{trk}$) in these two hemispheres: \begin{equation} \Delta_\text{mult} = N_\text{trk}^\text{Corrected}|_{\abs{\phi_\text{trk} - \phi_\text{dijet}}>\pi/2} - N_\text{trk}^\text{Corrected}|_{\abs{\phi_\text{trk} - \phi_\text{dijet}}<\pi/2}. \end{equation} Positive $\Delta_\text{mult}$ means that an excess of particles is found in the hemisphere of the subleading jet, relative to the number of particles in the leading jet hemisphere. This quantity is measured event-by-event and then averaged in bins of the observables of interest. It is sensitive to the number of jets in a given hemisphere and their fragmentation, as well as to the additional particles produced in jet quenching or through some specific response of the QGP medium in one of the two hemispheres. To select events that show consequences of jet quenching, the measurement is carried out as a function of $\AJ$ and collision centrality. The $\AJ$-dependent measurement is performed for jets with a distance parameter of $R = 0.3$. To see modifications in the \pt spectrum associated with the difference in multiplicities in two hemispheres, $\Delta_\text{mult}$ is measured for track \pt ranges of 0.5--1, 1--2, 2--4, 4--8, and 8--300\GeV, and divided by the bin width. The measurement is repeated for different $R$ parameters. To be consistent with the measurement of \pt balance, the leading and subleading jets used in the $\AJ$-dependent $\Delta_\text{mult}$ measurement are required to fall in the pseudorapidity region of $\abs{\eta}<1.6$. The leading and subleading jets used in the $R$-dependent measurement are required to be within $\abs{\eta}<0.6$. 
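The per-event multiplicity difference defined above can be sketched as follows (an illustrative Python sketch; the track layout, with each track carrying its correction weight ${c}^\text{trk}$ under the key \texttt{'c'}, is an assumption for illustration):

```python
import math

def delta_mult(tracks, phi_dijet, pt_min=0.5):
    """Corrected multiplicity difference between the subleading-jet and
    leading-jet hemispheres. Each track is a dict with 'pt', 'phi' and
    its correction weight 'c' (Eq. ctrk)."""
    n_sub = n_lead = 0.0
    for t in tracks:
        if t['pt'] <= pt_min:
            continue  # only tracks with pt > 0.5 GeV are used
        # Wrapped azimuthal distance to the dijet axis, in [0, pi]
        dphi = abs(math.atan2(math.sin(t['phi'] - phi_dijet),
                              math.cos(t['phi'] - phi_dijet)))
        if dphi > math.pi / 2:
            n_sub += t['c']    # subleading-jet hemisphere
        else:
            n_lead += t['c']   # leading-jet hemisphere
    return n_sub - n_lead
```

A positive return value corresponds to an excess of (corrected) particles in the subleading-jet hemisphere, as in the equation above.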
Although in both cases jets with $\abs{\eta} > 2$ are excluded, it is important to note that starting jet reconstruction with a cutoff $\abs{\eta}<1.6$ (or $<0.6$) is different from using the $\abs{\eta}<2$ selection for determining the highest-\pt jets and then applying a tighter requirement, since events in which the leading or subleading jets are found in the range between $\abs{\eta} = 1.6$ (or $0.6$) and $\abs{\eta} = 2.0$ are also excluded. \subsection{Transverse momentum balance} \label{sec:momentumBalance} Detailed information about the \pt flow relative to the dijet system can be obtained by studying the contribution of tracks to the overall \pt balance in the event, as characterized by individual track \pt and angle relative to the jets. To calculate the \pt balance, the \pt of tracks are projected onto the dijet axis. For each track, this projection is defined as \begin{equation} \pt^{\parallel} = -{c}^\text{trk} \, \pt^\text{trk} \, \cos{(\phi_\text{trk} - \phi_\text{dijet})}, \end{equation} where, as mentioned in Section~\ref{sec:Trackreco}, the correction for reconstruction effects accounts for the misreconstruction rate and reconstruction efficiency for PbPb collisions, with values specified by Eq.~\ref{Eqn:ctrk}. In addition, secondary particle and multiple reconstruction rates are corrected in pp collisions. Particles that make a positive contribution to $\Delta_\text{mult}$ also have positive $\pt^{\parallel}$, as the cosine function changes sign at $\pi/2$. These two observables therefore map onto each other with a weight in track $\pt$ and $\cos{(\phi_\text{trk} - \phi_\text{dijet})}$. To study the angular recovery rate (the rate at which the imbalance is restored, as momentum contributions are included further from the jet cone) and the associated spectra of \pt balance, tracks that fall in annular regions around the jet axes are grouped together according to their \pt. 
In each event, the $\pt^{\parallel}$ values of these groups of tracks are summed to obtain $\PTslash^{\parallel}$. For each region, $\PTslash^{\parallel}$ is calculated in track \pt ranges of 0.5--1, 1--2, 2--4, 4--8, and 8--300\GeV. Annular regions are defined in $\Delta = \sqrt{ \smash[b]{ (\phi_\text{trk} - \phi_\text{jet})^2 + (\eta_\text{trk} - \eta_\text{jet})^2} } $ and binned between $\Delta=0.0$--1.8 in steps of 0.2. In addition, the contributions from charged particles that fall outside of this range are all collected in an extra overflow bin. These particles lie in the range of $1.8< \Delta < 3.6$, depending on the $\eta$ of the dijet pair. No anti-$\kt$ clustering is employed in the calculation of $\Delta$, and tracks are defined to lie within circular regions in pseudorapidity and azimuth. The axes used to define the annuli differ from the projection axis, $\phi_\text{dijet}$. For large $\Delta$, the annuli around the leading and subleading jets can overlap, in which case the track in the overlap region used when calculating $\PTslash^{\parallel}$ is assigned to the annulus at smaller radius. The overlaps do not occur for $\Delta < 5\pi/12$. The $\PTslash^{\parallel}$ is averaged over events with a specific $\AJ$ value separately for pp and PbPb collisions, and for PbPb collisions they are divided into classes of collision centrality. This average is denoted as $\langle \PTslash^{\parallel}\rangle_{\pt^\text{trk},\Delta}$, to indicate that within each event the balance is calculated using a subset of tracks with specific $\Delta$ and \pt. The fine binning in track $\pt$ and $\Delta$ limits the number of selections in collision centrality and $\AJ$ that can be used, because of the statistical imprecision of the data. 
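The projection and the assignment of tracks to $\Delta$ annuli can be sketched together as follows (an illustrative Python sketch; the dictionary layout of jets and tracks is hypothetical, and assigning each track to the annulus around the nearer jet axis implements the smaller-radius rule for overlapping annuli):

```python
import math

def wrap(a):
    """Wrap an angle into (-pi, pi]."""
    return math.atan2(math.sin(a), math.cos(a))

def pt_parallel(track, phi_dijet):
    """Signed projection of a corrected track pt onto the dijet axis."""
    return -track['c'] * track['pt'] * math.cos(wrap(track['phi'] - phi_dijet))

def annulus_index(track, lead, sub, width=0.2, n_bins=9):
    """Assign a track to a Delta annulus around the nearer of the two
    jet axes; using the minimum distance automatically resolves
    overlaps in favour of the smaller radius. Index n_bins is the
    overflow bin (Delta > 1.8)."""
    def delta(jet):
        return math.hypot(wrap(track['phi'] - jet['phi']),
                          track['eta'] - jet['eta'])
    d = min(delta(lead), delta(sub))
    return min(int(d / width), n_bins)

def balance_in_annuli(tracks, lead, sub, phi_dijet):
    """Per-event slashed-pT summed per Delta annulus: nine bins of
    width 0.2 plus one overflow bin."""
    sums = [0.0] * 10
    for t in tracks:
        sums[annulus_index(t, lead, sub)] += pt_parallel(t, phi_dijet)
    return sums
```

Averaging these per-event sums over events in a given $\AJ$ and centrality class, separately per track \pt range, yields $\langle \PTslash^{\parallel}\rangle_{\pt^\text{trk},\Delta}$.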
For a more detailed analysis of the dependence on track $\pt$ and other event properties, the $\Delta$ binning can be removed by summing the \MPTpTdR values over the $\Delta$ bins, which is identical to not imposing annular requirements in the first place, to obtain \begin{equation} \langle \PTslash^{\parallel} \rangle_{\pt^{\text{trk}}} = \sum\limits_{\Delta} \langle \PTslash^{\parallel} \rangle_{\pt^{\text{trk}},\Delta}. \label{Eqn:four} \end{equation} The \pt balance of Eq.~\ref{Eqn:four}, calculated for tracks in a given \pt range, usually takes nonzero values, because the \pt spectra of particles in the subleading jet hemisphere differ from those in the leading jet hemisphere. Summing the signed \MPTpT values over the track \pt bins provides the overall \pt balance in the event for tracks with $0.5 < \pt < 300\GeV$, which takes values close to zero because of momentum conservation. A deviation from zero can still arise from particles with $\pt < 0.5\GeV$, as well as from particles outside the tracker coverage in pseudorapidity, which are not included in the measurement. This sum corresponds to \begin{equation} \langle \PTslash^{\parallel} \rangle_{\Sigma} = \sum\limits_{p_{\rm{T}}^{\text{trk}}} \langle \PTslash^{\parallel} \rangle_{\pt^{\text{trk}}}. \end{equation} The angular distribution of the \pt balance is studied differentially in bins of track \pt through $\langle \PTslash^{\parallel}\rangle_{\pt^\text{trk},\Delta}$, as described above, and adding up the contributions from different track \pt bins gives \begin{equation} \langle \PTslash^{\parallel} \rangle_{\Delta} = \sum\limits_{p_{\rm{T}}^{\text{trk}}} \langle \PTslash^{\parallel} \rangle_{\pt^{\text{trk}},\Delta}, \end{equation} which defines the contribution of all tracks with $0.5 < \pt < 300\GeV$ in a given annulus to the total \pt balance. This \MPTdR, summed over all $\Delta$ intervals, yields $\langle \PTslash^{\parallel} \rangle_{\Sigma}$. 
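The different sums over the two-dimensional (track \pt, $\Delta$) balance map, including the cumulative in-cone balance introduced next, can be sketched as follows (an illustrative Python sketch; the balance map is represented as a list of rows, one per track \pt range, with one column per $\Delta$ annulus):

```python
def marginal_over_delta(balance):
    """Sum the (pt-bin x Delta-bin) balance map over Delta,
    one value per track-pt range (Eq. four)."""
    return [sum(row) for row in balance]

def marginal_over_pt(balance):
    """Sum the map over the track-pt bins, one value per
    Delta annulus."""
    return [sum(col) for col in zip(*balance)]

def cumulative_in_delta(balance):
    """Running sum of the Delta marginal from Delta = 0 outwards:
    the cumulative balance inside a cone of growing radius."""
    out, total = [], 0.0
    for v in marginal_over_pt(balance):
        total += v
        out.append(total)
    return out
```

Summing either marginal completely gives the same overall balance $\langle \PTslash^{\parallel} \rangle_{\Sigma}$, which momentum conservation drives close to zero; the cumulative sum shows at which radius that near-zero balance is reached.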
Instead of summing all $\Delta$ bins, to calculate the recovery of balance as the radius gets larger, the annuli can be summed from $\Delta = 0$ up to the angle of interest, and a cumulative balance inside a cone calculated, as \begin{equation} \langle \PTslash^{\parallel} \rangle_{[0,\Delta]} = \sum\limits_{\Delta^\prime = 0}^{\Delta^\prime = \Delta} \langle \PTslash^{\parallel} \rangle_{\Delta^\prime}. \end{equation} As mentioned previously, for consistency with the analysis in Ref.~\cite{CMS_dijet2010}, in calculations that integrate over $\Delta$, \eg, for \MPTpT and \MPTsum, only events in which both leading and subleading jets fall within $\abs{\eta}<1.6$ are included in the measurement of \pt balance. For measurements where contributions of different annuli are studied, to ensure full tracker coverage around jets over $\Delta < 1.8$ for \MPTpTdR, \MPTdR, and \MPTcum, tighter restrictions are required on the pseudorapidity of leading and subleading jets ($\abs{\eta}<0.6$) after they are found within $\abs{\eta}<2$. \section{Systematic uncertainties} \label{sec:systematics} \begin{table*}[!ht] \centering \topcaption{Systematic uncertainties in \MPTdR for jets clustered with distance parameter of 0.3 in pp, and in central and peripheral PbPb collisions, for different $\AJ$ selections. 
Uncertainties are shown as shifts in the values in units of\GeV (rather than as fractions) for two $\Delta$ selections.} \label{table:SysAdepMpt} \cmsTableResize{ \begin{tabular}{l|xc|xc|xc} \multicolumn{1}{c}{} & \multicolumn{6}{c}{Values integrated over $\AJ$} \\ \cline{2-7} \multicolumn{1}{c}{} & \multicolumn{2}{c}{pp} & \multicolumn{2}{|c|}{PbPb, 30--100\%} & \multicolumn{2}{c}{PbPb, 0--30\%} \\ \hline $\Delta$ & {<}0.2 & 0.2--2.0 & {<}0.2 & 0.2--2.0 & {<}0.2 & 0.2--2.0 \\ \hline Jet reconstruction & {<}1 & 0.0--0.2 & 1 & 0.1--0.2 & 1 & 0.1--0.4 \\ Data/MC differences for JES & 1 & 0.1--0.2 & 2 & 0.1--0.3 & 2 & 0.1--0.3 \\ Fragmentation dependent JES & {<}1 & 0.1--0.2 & 2 & 0.1--0.2 & 1 & 0.1--0.4 \\ Track corrections & {<}1 & {<}0.1 & 1 & 0.0--0.2 & 2 & 0.2--0.9 \\ Data/MC differences for tracking & 1 & 0.0--0.1 & 1 & 0.1--0.2 & 1 & 0.1--0.2 \\ \hline Total & 1 & 0.1--0.3 & 2 & 0.2--0.3 & 3 & 0.2--1.0 \\ \hline \multicolumn{7}{c}{} \\ \multicolumn{1}{c}{} & \multicolumn{6}{c}{$\AJ < 0.22$} \\ \cline{2-7} \multicolumn{1}{c}{} & \multicolumn{2}{c}{pp} & \multicolumn{2}{|c|}{PbPb, 30--100\%} & \multicolumn{2}{c}{PbPb, 0--30\%} \\ \hline $\Delta$ & {<}0.2 & 0.2--2.0 & {<}0.2 & 0.2--2.0 & {<}0.2 & 0.2--2.0 \\ \hline Jet reconstruction & {<}1 & 0.1--0.2 & 1 & 0.1--0.2 & 1 & 0.1--0.4 \\ Data/MC differences for JES & 1 & 0.1--0.2 & 2 & 0.1--0.4 & 2 & 0.2--0.4 \\ Fragmentation dependent JES & {<}1 & 0.1 & 2 & 0.1--0.4 & 1 & 0.1--0.5 \\ Track corrections & {<}1 & {<}0.1 & 1 & 0.1 & 2 & 0.1--0.6 \\ Data/MC differences for tracking & {<}1 & 0.0--0.1 & 1 & 0.1 & 1 & 0.1 \\ \hline Total & 1 & 0.1--0.3 & 2 & 0.2--0.4 & 3 & 0.2--0.6 \\ \hline \multicolumn{7}{c}{} \\ \multicolumn{1}{c}{} & \multicolumn{6}{c}{$\AJ > 0.22$} \\ \cline{2-7} \multicolumn{1}{c}{} & \multicolumn{2}{c}{pp} & \multicolumn{2}{|c|}{PbPb, 30--100\%} & \multicolumn{2}{c}{PbPb, 0--30\%} \\ \hline $\Delta$ & {<}0.2 & 0.2--2.0 & {<}0.2 & 0.2--2.0 & {<}0.2 & 0.2--2.0 \\ \hline Jet reconstruction & 2 & 0.1--0.5 & 1 & 0.1--0.6 
& 2 & 0.2--0.6 \\ Data/MC differences for JES & 2 & 0.1--0.3 & 3 & 0.2--0.5 & 3 & 0.3--0.6 \\ Fragmentation dependent JES & 1 & 0.1--0.5 & 1 & 0.1--0.7 & 1 & 0.2--0.6 \\ Track corrections & {<}1 & 0.1 & 1 & 0.1--0.3 & 3 & 0.2--1.1 \\ Data/MC differences for tracking & 2 & 0.1--0.2 & 2 & 0.1--0.2 & 2 & 0.1--0.3 \\ \hline Total & 3 & 0.3--0.8 & 3 & 0.3--0.9 & 4 & 0.4--1.4 \\ \hline \end{tabular} } \end{table*} The sources of major systematic uncertainty can be categorized into two groups: biases related to jet reconstruction and biases related to track reconstruction. Effects associated with event selection and beam background rejection are found to be negligible. The biases related to jet reconstruction are caused by smearing of jet \pt due to energy resolution and uncertainties in the JES. These factors can change the \pt-ordering of jets in the event, resulting in the interchanging of leading and subleading jets, or causing a third jet to replace the subleading jet. The uncertainties are estimated as a function of centrality and $\AJ$ in each charged-particle \pt range, using \PYTHIA and \textsc{pythia+hydjet} simulations to compare observables calculated with reconstructed jets to generator-level jets. A bin-by-bin correction is applied to data to account for the observed jet reconstruction bias. This uncertainty includes the effect of the jet angular resolution. However, the size of the bins in the $\Delta$-dependent measurement is significantly larger than a typical angular resolution, which therefore has a negligible effect on the observables. Going from $R=0.2$ to $0.5$, the angular resolution, defined by the standard deviation of the $\Delta^\text{reco,gen}$ distribution, increases from 0.020 to 0.025 for leading jets, and from 0.025 to 0.035 for subleading jets in pp collisions. The same holds in 30--100\% centrality PbPb collisions. In the most central 0--30\% of events, the corresponding ranges are 0.020--0.035 and 0.025--0.045, respectively. 
After implementing the fragmentation-dependent jet energy corrections, there is up to a 5\% difference between the JES for quark and gluon jets at $\pt < 50\GeV$; the difference disappears for high-\pt jets. Additional checks are therefore pursued to account for possible discrepancies in the performance of jet energy corrections in data and in MC simulations. A modification in the flavor content of jets due to quenching can lead to an under- or over-correction of the jet energy in data. Also, the uncertainty in the JES from differences in simulation and detector conditions is calculated to be 2\% using a data-based ``tag-and-probe'' technique that depends on dijet balance in a control sample of peripheral PbPb events~\cite{Chatrchyan:2011ds}. The jet \pt is changed up and down for leading and subleading jets in an asymmetric manner (leading JES is increased while subleading JES is decreased) as a function of jet \pt, to account for the differences in JES between quark and gluon jets and the data-based JES uncertainty. Because the number of charged particles is a parameter in these corrections, and can make the fragmentation-dependent jet energy corrections sensitive to quenching effects, the difference in the observables before and after corrections in MC events is compared to the corresponding change in data, and the discrepancy between data and simulation is quoted as an additional source of uncertainty. Uncertainties related to track reconstruction are calculated in \PYTHIA and \textsc{pythia+hydjet} by comparing the results with generator-level charged particles to those with reconstructed tracks, after applying the track corrections discussed in Section~\ref{sec:Trackreco}. The small uninstrumented regions in the detector, and the correlation between track reconstruction efficiency and JES are the main causes of discrepancies observed between results with generator-level particles and reconstructed tracks. 
The track corrections account for the inefficiencies due to uninstrumented regions. However, the bins used in $\eta$ and $\phi$ to calculate the reconstruction efficiency are larger than the size of the uninstrumented regions, and as a result cannot completely correct for these regions. An additional uncertainty is therefore added to account for the effect of differences in detector conditions and simulation of track reconstruction. This is achieved using the ratio of corrected to initial track \pt and $\eta$ spectra, which are compared in data and simulation as the track-quality selections are changed. The difference is found to be less than 5\%, which is included in the systematic uncertainty. To calculate the total uncertainty, the uncertainties from the sources mentioned above are summed in quadrature. The contribution of each item is summarized in Tables~\ref{table:SysAdepMpt}--\ref{table:SysRdepMpt} for the \MPTdR measurement. The systematic sources are given in terms of shifts in the value of each observable in a given bin in units of\GeV rather than as percentage changes, since \MPTdR can vanish or take values arbitrarily close to zero. Typically, \MPTdR is in the range 15--40\GeV near the jet axes ($\Delta < 0.2$), and less than 10\GeV at larger angles. The dependence of the uncertainties on dijet asymmetry and centrality is summarized in Table~\ref{table:SysAdepMpt} for jets with a distance parameter $R=0.3$. The jet energy resolution can cause events to move across the $\AJ$ boundaries. Moreover, it is more likely for the leading jet in a highly imbalanced dijet event to be located in a region of an upward UE fluctuation in PbPb collisions. For these reasons, uncertainties related to jet reconstruction are larger in imbalanced dijet events. For well-balanced events, the uncertainty is comparable to that in the inclusive $\AJ$ selection, because the increase in effects from jet energy resolution balances the reduction of effects related to UE fluctuations. 
Uncertainties in track reconstruction are larger in imbalanced than in balanced events, because of the correlation of track reconstruction efficiency and reconstructed jet energy. When a high-\pt track that carries a significant fraction of jet \pt is not reconstructed, the jet energy is under-corrected, and vice versa, the energy is over-corrected in events where the high-\pt track is found, because jet energy corrections are obtained for the average case where the high-\pt track might not be reconstructed. Events with highly imbalanced dijets can result from miscalculated jet energies caused by inefficiencies in track reconstruction. Centrality of PbPb collisions does not affect the uncertainties within the jet cone as much as at larger angles, where the signal-to-background ratio gets smaller. Track and jet reconstruction uncertainties, caused by over-correction of the leading jet \pt because of upward UE fluctuations, in particular, tend to increase in central collisions. Uncertainties are smaller in pp than in PbPb collisions because of the absence of a heavy-ion UE, and differences in jet and track reconstruction that provide better measurement of jet \pt, larger track reconstruction efficiency, and lower track misidentification rates. Uncertainties for small $\Delta$ are dominated by charged particles with $\pt > 8\GeV$, while at larger $\Delta$, low-\pt particles make up a larger fraction of the total uncertainty in events when there is no selection made on charged-particle \pt. The contribution from each range of track \pt to the uncertainty in \MPTdR, in other words the uncertainty in \MPTpTdR, is shown in Table~\ref{table:SysPtDep} for $R = 0.3$, in events with 0--30\% central PbPb collisions. Finally, as shown in Table~\ref{table:SysRdepMpt}, uncertainties in jet reconstruction and track reconstruction in MC events increase together with increasing $R$, as the UE inside the jet cone gets larger. 
However, the JES difference between quark and gluon jets is smaller for large $R$ parameters, and the uncertainties that account for JES differences between data and MC events therefore decrease. \begin{table*}[!ht] \centering \topcaption{Systematic uncertainties in \MPTpTdR in 0--30\% PbPb collisions, for jets clustered with a distance parameter of 0.3, as a function of charged-particle \pt. Uncertainties are shown as shifts in the values in units of\GeV (rather than as fractions) for two $\Delta$ selections.} \label{table:SysPtDep} \cmsTableResize{ \begin{tabular}{l|xc|xc|xc} \multicolumn{1}{c}{} & \multicolumn{2}{c|}{$0.5 < \pt < 2\GeV$ } & \multicolumn{2}{c|}{$2 < \pt < 8\GeV$ } & \multicolumn{2}{c}{$ \pt > 8\GeV$ } \\ \hline $\Delta$ & {<}0.2 & 0.2--2.0 & {<}0.2 & 0.2--2.0 & {<}0.2 & 0.2--2.0 \\ \hline Jet reconstruction & 0.04 & 0.06--0.25 & 0.13 & 0.04--0.14 & 0.85 & 0.01--0.07 \\ Data/MC differences for JES & 0.14 & 0.07--0.24 & 0.42 & 0.03--0.11 & 0.97 & 0.01--0.12 \\ Fragmentation dependent JES & 0.03 & 0.10--0.14 & 1.1 & 0.05--0.23 & 0.19 & 0.02--0.06 \\ Track corrections & 0.09 & 0.08--0.64 & 0.27 & 0.06--0.13 & 1.78 & 0.01--0.07 \\ Data/MC differences for tracking & 0.04 & 0.03--0.08 & 1.2 & 0.01--0.05 & 1.16 & 0.00--0.02 \\ \hline Total & 0.17 & 0.20--0.69 & 1.1 & 0.11--0.29 & 2.3 & 0.04--0.10 \\ \hline \end{tabular} } \end{table*} \begin{table*}[!ht] \centering \topcaption{Systematic uncertainties in \MPTpTdR in 0--30\% PbPb collisions are shown for jets clustered with distance parameters of 0.2, 0.4 and 0.5. 
Uncertainties are shown as shifts in the values in units of\GeV (rather than as fractions) for two $\Delta$ selections.} \label{table:SysRdepMpt} \cmsTableResize{ \begin{tabular}{l|xc|xc|xc} \multicolumn{1}{c}{} & \multicolumn{2}{c|}{$R = 0.2$} & \multicolumn{2}{c|}{$R = 0.4$} & \multicolumn{2}{c}{$R = 0.5$} \\ \hline $\Delta$ & {<}0.2 & 0.2--2.0 & {<}0.2 & 0.2--2.0 & {<}0.2 & 0.2--2.0 \\ \hline Jet reconstruction & 1 & 0.1--0.4 & 1 & 0.1--0.5 & 1 & 0.1--0.7 \\ Data/MC differences for JES & 2 & 0.1--0.5 & 2 & 0.1--0.4 & 2 & 0.1--0.3 \\ Fragmentation dependent JES & 1 & 0.1--0.4 & 1 & 0.1--0.3 & 1 & 0.1--0.3 \\ Track corrections & 2 & 0.2--0.7 & 2 & 0.1--1.1 & 2 & 0.1--1.1 \\ Data/MC differences for tracking & 1 & 0.1--0.2 & 1 & 0.1 & 1 & 0.1 \\ \hline Total & 3 & 0.2--0.9 & 3 & 0.3--1.1 & 3 & 0.2--1.1 \\ \hline \end{tabular} } \end{table*} Although uncertainties in the differences in multiplicities are calculated separately, their values are not listed in a table, because they can be approximated from the uncertainties in $\langle\PTslash^{\parallel}\rangle$ divided by the average charged particle \pt in that range. In 0--10\% central events, for $R = 0.3$, the dominant source is jet reconstruction, with an uncertainty caused by an upward fluctuation in the background under the leading jet, followed by the uncertainty in track reconstruction and by residual differences in track reconstruction between data and MC events, which change the result by 0.5--1.5 particles, depending on $\AJ$. The uncertainties increase with $R$ and with centrality from peripheral to central collisions. \section{Results} \label{sec:results} \subsection{Dependence of the \texorpdfstring{\pt}{pT} balance in pp and PbPb on opening angles around jets} \label{sec:results_AngDep} The angular distribution of \pt relative to the axis defined by the parton direction is key to studying the QCD processes responsible for parton energy loss. 
In models, large-angle modifications in the event due to jet quenching have been accommodated qualitatively through a response triggered in the hydrodynamic medium by the deposited energy~\cite{Tachibana:2014lja} and through the cascade of gluons created in medium-induced radiation processes~\cite{Blaizot:2014rla,Iancu:2015uja,Fister:2014zxa,Blaizot:2014ula}. Moreover, some MC implementations of jet quenching that modify partonic showers in \PYTHIA, such as \textsc{Q-pythia}, can generate soft particles at angles $\Delta > 0.8$, but this treatment modifies the fragmentation functions more severely than found in data~\cite{Apolinario:2012si,Armesto:2009fj}. Angular scales for different jet quenching mechanisms in perturbative QCD are related to momentum scales through time evolution of partonic interactions~\cite{Kurkela:2014tla}. Especially for QCD cascades in a sufficiently large medium, angular broadening is independent of the path length, and this mechanism might therefore produce a cumulative effect even after taking averages over different events where jets travel different path lengths in the QGP. The medium response may not have the same correlation between angular and momentum scales. The relative importance of each mechanism is unknown. Measuring the \pt spectra of $\langle \PTslash^{\parallel} \rangle$ as a function of $\Delta$ from the jet axis, denoted as \MPTpTdR, as discussed in Section~\ref{sec:analysis}, can provide information on the momentum scales at which certain quenching mechanisms become dominant. The analysis is performed for pp collisions, and two PbPb centrality selections of 30--100\% and 0--30\%. The resulting differential distributions in \MPTpTdR are shown for different regions of track \pt (in terms of the colored boxes) as a function of $\Delta$ in the upper row of Fig.~\ref{fig:Mpt_integrated_Dr}. 
The sum of \MPTpTdR for different $\pt^\text{trk}$ ranges as a function of $\Delta$, \MPTdR, are given by the open markers, and follow the leading jet at small $\Delta$ and the subleading jet at large $\Delta$. The cumulative values, \MPTcum (\ie from summing the \MPTdR over bins in $\Delta$, starting at $\Delta = 0$ and ending at the point of interest) are shown as dashed lines for pp and solid lines for PbPb. These lines demonstrate the evolution of the overall \pt balance from small to large distances relative to the jet axis, reaching an overall balance close to zero only at large radii. The cumulative curve in PbPb collisions for 0--30\% centrality is slightly narrower than for pp collisions. \begin{figure}[h!t] \centering \includegraphics[width=0.95\textwidth]{figures/20150831/HIN14010_Fig1} \caption{(Color online) Upper row: \MPTpTdR distributions for pp, and for 30--100\% and 0--30\% PbPb data for five track-$\pt$ ranges (colored boxes), for momentum ranges from $0.5 < \pt < 1\GeV$ (light blue) to $8 < \pt < 300\GeV$ (red), as a function of $\Delta$. Also shown is \MPTdR as a function of $\Delta$ for pp (open squares) and PbPb data (open plus symbols). Dashed lines (pp) and solid lines (PbPb) show \MPTcum (\ie integrating the \MPTdR over $\Delta$ from $\Delta =0$ up to the point of interest). Lower row: Difference between the PbPb and pp \MPTpTdR distributions according to the range in $\pt$, as a function of $\Delta$ (colored boxes), and difference of \MPTdR as a function of $\Delta$ (open circles); error bars and brackets represent statistical and systematic uncertainties, respectively.} \label{fig:Mpt_integrated_Dr} \end{figure} The distributions in pp collisions have characteristic features, and understanding these is important for interpreting the PbPb results. 
The magnitude of the \MPTdR in the first bin, with $\Delta < 0.2$, is related to the average dijet imbalance, and takes a negative value, indicating that the momentum projection points along the direction of the leading jet. In the remaining $\Delta$ bins, \MPTdR takes a positive value, and the \MPTpTdR for lower track \pt make up larger fractions of \MPTdR. We refer to the \MPTpTdR and \MPTdR for bins with $\Delta > 0.2$ as the ``balancing distribution'' of the corresponding quantity, because they reduce the large \pt imbalance observed in the first bin in $\Delta$. The balancing distribution has a peak in the range $0.4 < \Delta < 0.6$, which is the most likely $\Delta$ position for a third jet relative to the subleading jet. In PbPb collisions, the peak of the balancing $\MPTdR$ distribution shifts towards smaller angles ($0.2 < \Delta < 0.4$). This can be due to the modification of the fragmentation of the leading and subleading jets after quenching, as such a modification occurs at angles close to their axes, where the low-\pt particles make the largest contributions. It is therefore not possible to claim a direct relation between the peak position of the balancing $\MPTdR$ distribution and the location of other jets in the event, unless only the highest-\pt particles, \ie those not likely to be related to the leading and subleading jets at large $\Delta$ values, are considered. The peak position of the balancing \MPTpTdR distribution of the highest-\pt particles is located at the same place as in pp collisions ($0.4 < \Delta < 0.6$), but with a smaller magnitude. This suggests that the position of a third jet relative to the subleading jet is not modified significantly.
However, the magnitude of \MPTpTdR for tracks with $8 < \pt < 300\GeV$ associated with the third jet can be reduced for several reasons, such as quenching of the third jet, which makes its fragmentation softer, or a change in the ordering of the jets relative to the original partonic conditions, \ie the leading parton losing more energy than the subleading parton, which causes the third jet to be found in the leading jet hemisphere instead of the subleading jet hemisphere. A comparison of pp and PbPb collisions is provided in the lower row of Fig.~\ref{fig:Mpt_integrated_Dr}, showing the difference between PbPb and pp for \MPTpTdR, and \MPTdR as a function of $\Delta$. For central events, in the first bin with $\Delta < 0.2$, the \MPTpTdR for high-\pt tracks and the \MPTdR point in the leading jet direction, although the excess is not significant. In the second bin, with $0.2 < \Delta < 0.4$, there is a significant positive excess in \MPTdR. The excess towards the subleading jet in this bin may be because either the leading jet is narrower, or the subleading jet is wider, in PbPb collisions compared to pp collisions. The excess in $\MPTdR$ along the subleading jet direction extends up to larger angles ($\Delta \approx 1$--1.2), with decreasing significance. In this angular range, there is an excess in \MPTpTdR for tracks with \pt that fall in the ranges of 0--0.5, 0.5--1, and 1--2\GeV, and a depletion for particles with $\pt > 4\GeV$. This is consistent with results shown in the previous section and earlier CMS studies that demonstrate that the small-angle imbalance towards the leading jet is compensated by particles of small \pt emitted at large angles to the jet axes~\cite{CMS_dijet2010}. \begin{figure}[h!t] \centering \includegraphics[width=0.95\textwidth]{figures/20150831/HIN14010_Fig2} \caption{(Color online) Same as Fig.~\ref{fig:Mpt_integrated_Dr}, but with a balanced dijet selection ($\AJ < 0.22$).
Upper row: \MPTpTdR distributions for pp, and for 30--100\% and 0--30\% PbPb data for five track \pt ranges (colored boxes), as a function of $\Delta$. Also shown is \MPTdR as a function of $\Delta$ for pp (open squares) and for PbPb data (open plus symbols). Dashed lines (pp) and solid lines (PbPb) show \MPTcum (\ie integrating the \MPTdR over $\Delta$ from $\Delta = 0$ up to the point of interest). Lower row: Difference in the \MPTpTdR distributions for the PbPb and pp according to the range in $\pt$, as a function of $\Delta$ (colored boxes), and difference of \MPTdR as a function of $\Delta$ (open circles). Error bars and brackets represent statistical and systematic uncertainties, respectively. The $y$-axis range on the top panels is smaller than in Fig.~\ref{fig:Mpt_integrated_Dr}. } \label{fig:Mpt_integrated_Dr_aj0} \end{figure} \subsection{Study of the \texorpdfstring{\pt}{pT} balance in pp and PbPb collisions, as a function of opening angles around jets in bins of \texorpdfstring{$\AJ$}{AJ}} \label{sec:results_AngDep_AJ} \begin{figure}[h!t] \centering \includegraphics[width=0.95\textwidth]{figures/20150831/HIN14010_Fig3} \caption{(Color online) Same as Fig.~\ref{fig:Mpt_integrated_Dr}, but with an unbalanced dijet selection ($\AJ > 0.22$). Upper row: \MPTpTdR distributions for pp, and for 30--100\% and 0--30\% PbPb data for five track $\pt$ ranges, as a function of $\Delta$. Also shown is \MPTdR as a function of $\Delta$ for pp and for PbPb data. Dashed lines (pp) and solid lines (PbPb) show \MPTcum (\ie integrating the \MPTdR over $\Delta$ from $\Delta = 0$ up to the point of interest). Lower row: Difference in the \MPTpTdR distributions for the PbPb and pp. Error bars and brackets represent statistical and systematic uncertainties, respectively. The $y$-axis range on the top panels is larger than in Fig.~\ref{fig:Mpt_integrated_Dr}.
} \label{fig:Mpt_integrated_Dr_aj22} \end{figure} More information can be obtained by repeating the previous study as a function of the dijet asymmetry $\AJ$. The results for a sample containing more balanced dijets ($\AJ < 0.22$) are shown in Fig.~\ref{fig:Mpt_integrated_Dr_aj0}, again comparing pp data with two PbPb centrality bins. As expected, \MPTdR and \MPTpTdR for all track \pt take smaller values compared to the inclusive $\AJ$ selection, meaning that events with a more balanced dijet selection show an overall better \pt balance both at small $\Delta < 0.2$ and at larger $\Delta$. This is also seen in the difference in \MPTdR for PbPb and pp collisions, although, as before, a preference of \MPTpTdR for low-$\pt$ tracks to point along the subleading side can be seen for central PbPb events. Complementary to the selection of more balanced dijets, Fig.~\ref{fig:Mpt_integrated_Dr_aj22} shows a selection for unbalanced dijets with $\AJ > 0.22$. The $\AJ$ selection is reflected in the overall larger contributions in the small- and large-angle regions relative to the jet axes. This large $\AJ$ selection, which enhances the fraction of jets having undergone significant energy loss in PbPb collisions, also enhances the differences between PbPb and pp, as shown in the lower row of Fig.~\ref{fig:Mpt_integrated_Dr_aj22}. It is important to note that in pp collisions, only 30\% of selected dijet events have $\AJ > 0.22$, but this number increases to 42\% for central PbPb selections. This again suggests the presence of an additional mechanism creating asymmetric dijets in PbPb, \ie parton energy loss in the medium. Consistent with this picture, the $\AJ$ dependence of the \MPTpTdR distributions in PbPb and pp collisions and their difference suggests that asymmetric dijet systems in pp and PbPb collisions are created through different mechanisms, with semi-hard radiation (\eg, three-jet events) dominating in pp collisions.
In contrast, a large fraction of asymmetric dijet events in PbPb collisions is created through a differential energy loss mechanism as the partons traverse the medium, which leads to the observed excess in \MPTpTdR for the low-\pt bins. The depletion of high-\pt particle contributions at large angles in PbPb collisions is more dominant with $\AJ > 0.22$ relative to an inclusive $\AJ$ selection, because of the difference in relative fractions of three-jet events among all selected events. \subsection{Dependence of dijet asymmetry on \texorpdfstring{\pt}{pT} balance and multiplicity difference in jet hemispheres} To study the \pt flow relative to the dijet system as a function of event properties, such as centrality and $\AJ$, in more detail, the \MPTpTdR is summed over all annuli to obtain \MPTpT, \ie the average \pt balance in the event calculated for a given range of track $\pt$. In Fig.~\ref{fig:lPtPlot}, we display \MPTpT for different ranges of track \pt (displayed in terms of the colored boxes) as a function of $\AJ$, ranging from almost balanced to very unbalanced dijets, in pp collisions and in four selections of PbPb centrality from most peripheral to most central. The balance in the event for all tracks with $\pt > 0.5\GeV$, denoted as \MPTsum, which is obtained by adding up the \MPTpT for different $\pt$ ranges, is also included, and shown as open markers, with associated systematic uncertainties as brackets around the points. In PbPb events, the overall \pt is balanced to better than $10\GeV$, \ie $|\MPTsum|<10\GeV$ for all $\AJ$ selections. The small negative trend in \MPTsum as a function of $\AJ$ is also observed in pp events, and in generator-level \PYTHIA events, once the \pt threshold set on charged particles and the acceptance of the tracker are imposed.
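The event-wide quantities \MPTpT and \MPTsum amount to splitting the same momentum projection by track \pt rather than by $\Delta$. A minimal sketch follows; the track format $(\pt, \phi)$ and the function name are assumptions for illustration only, not the analysis code.

```python
import math

def mpt_by_track_pt(tracks, phi_dijet, pt_edges):
    """Event-wide projected missing pT split by track-pT range; summing the
    ranges gives the overall balance (the open markers in the text).
    Tracks are (pt, phi) pairs; the format and names are assumptions."""
    contrib = [0.0] * (len(pt_edges) - 1)
    for pt, phi in tracks:
        for i in range(len(contrib)):
            if pt_edges[i] <= pt < pt_edges[i + 1]:
                contrib[i] += -pt * math.cos(phi - phi_dijet)
                break
    return contrib, sum(contrib)
```

In this convention, a negative contribution points along the leading jet and a positive one along the subleading jet, so an overall sum near zero corresponds to a balanced event even when individual \pt ranges are far from zero.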
When selecting events containing dijets with $\AJ > 0.11$, an expected excess of high-$\pt$ particles in the direction of the leading jet (indicated by the red areas in Fig.~\ref{fig:lPtPlot}) is seen for all selections in pp and PbPb collisions. For pp and peripheral PbPb collisions, this excess is mostly balanced by particles with intermediate \pt of 2--8\GeV. Going to more central collisions, \MPTpT on the subleading jet side is modified from the intermediate $\pt$ range towards low $\pt$ (0.5--2\GeV). This effect is most pronounced for events with large $\AJ$ in central PbPb collisions. The lower row of Fig.~\ref{fig:lPtPlot} shows the difference between \MPTpT in PbPb and pp collisions, after requiring the specific PbPb collision centralities and dijet imbalance. While the contributions from different $\pt$ ranges are similar for pp and peripheral PbPb collisions, a difference can be seen for central collisions, where a significant excess of low-$\pt$ charged particles is observed for asymmetric jets in PbPb collisions. Systematic uncertainties are shown only for \MPTsum, and not for \MPTpT. Uncertainties in \MPTsum provide an upper bound on the systematic uncertainties for individual \pt ranges, as uncertainties in low-\pt particles are, in fact, significantly smaller. The excess observed in low-\pt particles in the range of 0.5--2\GeV therefore has a significance of 3--4 standard deviations for $\AJ > 0.11$ for the most central events. The difference in $\langle \PTslash^{\parallel} \rangle$ between PbPb and pp collisions for all tracks with $\pt > 0.5\GeV$ is consistent with zero across all centrality and $\AJ$ selections. \begin{figure}[h!t] \centering \includegraphics[width=1.0\textwidth]{figures/20150924/HIN14010_Fig4} \caption{(Color online) Upper row shows \MPTpT and \MPTsum in pp collisions (leftmost) and in four selections of PbPb for collision centralities from 50--100\% to 0--10\%.
The open markers show \MPTsum, the \pt balance for tracks with $0.5 < \pt < 300\GeV$, while the colored boxes show the \MPTpT contributions for different track \pt ranges. For each panel, \MPTpT and \MPTsum values are shown as a function of dijet asymmetry. The lower row shows the difference between \MPTpT and \MPTsum for PbPb and pp data. Error bars and brackets represent statistical and systematic uncertainties, respectively.} \label{fig:lPtPlot} \end{figure} The overall \pt balance observed through \MPTsum in PbPb events agrees with that in pp events, within systematic and statistical uncertainties, over all ranges of $\AJ$ and centrality, while the \MPTpT distributions show an excess of low-\pt particles. This implies that there are more particles in the subleading jet hemispheres compared to the leading jet hemispheres, because more particles are required to obtain the same \pt sum. Figure~\ref{fig:Result_MultiplicityDifference_AJ} shows the mean difference in multiplicities between leading and subleading jet hemispheres, denoted as $\langle \Delta_\text{mult} \rangle$, as a function of $\AJ$ and collision centrality. The $\langle \Delta_\text{mult} \rangle$ is presented for both PbPb and pp collisions. Measurements in pp collisions are in good agreement with \PYTHIA and \textsc{pythia+hydjet} simulations. In general, the $\langle\Delta_\text{mult}\rangle$ increases as a function of $\AJ$ in pp, PbPb, \PYTHIA, and \textsc{pythia+hydjet} events. The events in pp collisions with large $\AJ$ contain a larger fraction of three-jet or multijet events, where more particles are produced in the direction of the subleading jet. The observed increase in $\langle\Delta_\text{mult}\rangle$ for pp collisions with increasing $\AJ$ is therefore expected. Going from peripheral (50--100\%) to central (0--10\%) PbPb events, for a given $\AJ$ selection an excess in $\langle\Delta_\text{mult}\rangle$ is visible compared to pp collisions.
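Per event, the hemisphere multiplicity difference reduces to a simple count. The sketch below assigns tracks to hemispheres by their azimuthal angle relative to the leading jet; this splitting rule, the \pt threshold, and the input format are assumptions made for illustration and need not match the analysis definitions.

```python
import math

def delta_mult(tracks, phi_leading, min_pt=0.5):
    """Charged-particle multiplicity in the subleading (away-side) hemisphere
    minus that in the leading-jet hemisphere, for tracks above a pT threshold.
    Hemisphere splitting by azimuth relative to the leading jet is an assumption."""
    leading = away = 0
    for pt, eta, phi in tracks:
        if pt < min_pt:
            continue
        if abs(math.remainder(phi - phi_leading, 2.0 * math.pi)) < math.pi / 2:
            leading += 1
        else:
            away += 1
    return away - leading
```

Averaging this per-event count over events in a given $\AJ$ and centrality selection yields the $\langle \Delta_\text{mult} \rangle$ plotted in Fig.~\ref{fig:Result_MultiplicityDifference_AJ}.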
The difference in $\langle \Delta_\text{mult} \rangle$ between pp and PbPb collisions increases monotonically as a function of $\AJ$ at all collision centralities, with the biggest effect seen for the most central PbPb collisions. This is consistent with the expected dependence of medium-induced energy loss on collision centrality, where systems of the largest size (\ie smallest centrality) should show the largest medium-related effects. The multiplicity difference is up to ${\approx}15$ particles in the most central 0--10\% collisions. \begin{figure}[h!t] \centering \includegraphics[width=\textwidth]{figures/20150831/HIN14010_Fig5} \caption{(Color online) Upper panels show the comparison of the mean difference in multiplicity $\langle\Delta_\text{mult}\rangle$ between the subleading jet hemisphere and leading jet hemisphere, as a function of dijet asymmetry $\AJ$ for pp (blue squares), PbPb (red filled circles), \PYTHIA (dashed histogram), and \textsc{pythia+hydjet} events (black histogram). The centralities of PbPb collisions are 50--100\%, 30--50\%, 10--30\%, and 0--10\%, respectively, from leftmost to rightmost panel. Lower panels provide the difference in $\langle\Delta_\text{mult}\rangle$ between PbPb and pp collisions. Statistical and systematic uncertainties are shown as error bars and brackets, respectively.} \label{fig:Result_MultiplicityDifference_AJ} \end{figure} \subsection{Dependence of transverse momentum balance on jet distance parameter \texorpdfstring{$R$}{R}} \label{sec:results_RDep} In pp collisions, jets clustered with small $R$ are narrower and fragment into components with higher \pt than jets clustered with large $R$. In addition, using small $R$ tends to bias the clustered jets to contain a larger fraction of quark jets~\cite{Dasgupta:2014yra,Cacciari:2008gd}. Changing the $R$ parameter can provide a handle on the size and shower profiles of individual jets.
In heavy ion collisions, studying the $R$ dependence of momentum flow in dijet events makes it possible to investigate whether jet quenching mechanisms act differently on jets with different fragmentation patterns on a jet-by-jet basis. It is important to note that there is an overlap in the final set of dijet events obtained for different $R$ parameters, and therefore it is not possible to interpret the dependence of the \pt-balance distributions on $R$ as simply a dependence on jet size. A change in $R$ can induce a modification in $\PTslash^{\parallel}$ in two ways: events that satisfy the dijet requirements for one $R$ can fail for another $R$ value, or events that satisfy the dijet requirements for both $R$ parameters, but for which the ordering of the jets changes, can impact $\phi_{\text{dijet}}$, as well as the value of parameters used in the binning of the measurements, such as $\AJ$ and $\Delta$. The requirements on the \pt of the leading and subleading jets are the main sources of variations in the final set of dijet events for different $R$ parameters. For each $R$, a jet \pt selection translates into a different requirement on the initial parton \pt. A smaller fraction of the initial energy of the parton is recovered using jets of smaller size. Although fewer events pass the dijet requirement for $R = 0.2$ jets, strictly speaking, such events do not form a subset of the dijet events with larger $R$ parameters. A small fraction of $R = 0.2$ dijet events (4--7\% in PbPb collisions and 2--4\% in pp collisions) does not satisfy the dijet requirements for other $R$ values, mainly because jets fall outside the $\eta$ range or the $\Delta\phi$ requirement for the dijet pair. This can happen because of the merging of the subleading and third jets, and because of the resolution in the jet angular direction. Such events make up a statistically negligible contribution to the results and are therefore not the focus of the discussion.
\begin{table}[h!t] \centering \topcaption{ Overlap in event selections for 0--100\% PbPb and pp collisions. The second column gives the percentage of events that pass the dijet selections and a tight pseudorapidity requirement ($\abs{\eta}<0.6$) for $R = 0.5$, and an additional dijet selection also required for a smaller $R$ value. In columns 3--6 the leading and subleading jets with $R = 0.5$ are matched to the leading and subleading jets with smaller $R$ values, requiring only the $R = 0.5$ selection on jets. The third column shows the percentage of these events where both leading and subleading jets point in the same direction ($\Delta_{i} = \sqrt{ \smash[b]{ (\eta_{i}^{R}-\eta_{i}^{R = 0.5})^{2} + (\phi_{i}^{R}-\phi_{i}^{R = 0.5})^{2}}} < 0.5$ for $i=1$ and $2$). The average values of the ratios of the \pt of the leading and subleading jets for a given $R$ to their \pt for $R = 0.5$ are shown in the fourth and fifth columns, respectively. The sixth column shows the percentage of events in which subleading jets with the given $R$ parameter match the $R = 0.5$ leading jet, and the leading jet matches the $R = 0.5$ subleading jet.
} \label{table_compare_R5} \cmsTableResize{ \begin{tabular}{cy{5}y{5}y{5}y{5}y{5}} \hline & \multicolumn{1}{c}{Additional} & \multicolumn{1}{c}{Matched} & & & \multicolumn{1}{c}{Swapped} \\ $R$ & \multicolumn{1}{c}{dijet selection [\%] } & \multicolumn{1}{c}{ jet directions [\%] } & \multicolumn{1}{c}{ $\langle p_{\rm T,1}^{R}/p_{\rm T,1}^{R=0.5} \rangle$ } & \multicolumn{1}{c}{$\langle p_{\rm T,2}^{R}/p_{\rm T,2}^{R=0.5}\rangle$ } & \multicolumn{1}{c}{ jet directions [\%] } \\ \hline \multicolumn{6}{c}{PbPb}\\ \hline 0.2 & 48 , 2 & 83 , 5 & 0.89 , 0.001 & 0.79 , 0.002 & 10 , 3\\ 0.3 & 62 , 2 & 90 , 4 & 0.93 , 0.002 & 0.88 , 0.004 & 7 , 3\\ 0.4 & 77 , 1 & 94 , 3 & 0.96 , 0.002 & 0.94 , 0.005 & 3 , 2\\ \hline \multicolumn{6}{c}{pp}\\ \hline 0.2 & 58 , 2 & 83 , 5 & 0.91 , 0.001 & 0.83 , 0.002 & 14 , 3 \\ 0.3 & 73 , 2 & 90 , 4 & 0.95 , 0.001 & 0.90 , 0.001 & 8 , 3 \\ 0.4 & 86 , 1 & 95 , 3 & 0.98 , 0.001 & 0.96 , 0.001 & 4 , 2 \\ \hline \end{tabular}} \end{table} The fraction of events that pass the dijet selection both for the largest $R = 0.5$ and for other values is shown in the second column of Table~\ref{table_compare_R5}, without matching the directions of the jets. Compared to pp collisions, the fraction of events that pass both selections on jets decreases more rapidly in PbPb collisions as $R$ decreases. This observation is qualitatively consistent with the measurement showing that inclusive jet suppression is smaller in PbPb collisions for large $R$ values~\cite{Aad:2012vca}, which can be interpreted as due to the recovery of part of the energy lost in the initial hard scatter of partons. Additional information can therefore be extracted by requiring the leading and subleading jets with a given $R$ to be in the same direction as the corresponding jets found using $R = 0.5$. As shown in the third column of Table~\ref{table_compare_R5}, the fraction of such events is similar for pp and PbPb collisions.
These events produce almost no change in $\phi_{\rm{dijet}}$ and the jet axes, which change only slightly due to the jet angular resolution, and therefore yield approximately the same $\PTslash^{\parallel}$. However, these events can accommodate the change in the \pt of jets that originate from the same initial hard-scattered parton for different $R$ parameters. For jets matched to each other spatially, the ratio of the \pt of the leading or subleading jet at some given $R$ to that of the respective jets with $R = 0.5$, $\langle p_{\rm T,1(2)}^{R}/p_{\rm T,1(2)}^{R = 0.5} \rangle$, is calculated and the values are shown in columns 4 and 5 of Table~\ref{table_compare_R5}. As expected, in both PbPb and pp collisions, $\langle p_{\rm T,1}^{R}/p_{\rm T,1}^{R = 0.5} \rangle$ and $\langle p_{\rm T,2}^{R}/p_{\rm T,2}^{R = 0.5} \rangle$ are reduced as $R$ gets smaller. In PbPb collisions, a smaller fraction of the jet \pt is recovered at small $R$ for both the leading and subleading jets, which may be due to the broadening of quenched jets. This effect is larger for the subleading than for the leading jet. As $R$ becomes smaller, the leading and subleading jets can fall below the \pt requirements. Most of the time, it is the leading jet that satisfies the \pt selection for $R = 0.5$ but falls below the threshold for smaller $R$, because the subleading jet \pt is already biased towards values above the 50\GeV threshold by the leading jet with $\pt > 120\GeV$ in the event. However, as shown in Figs.~\ref{fig:Mpt_integrated_Dr_aj0} and~\ref{fig:Mpt_integrated_Dr_aj22}, for $R = 0.3$ jets the \MPTpTdR signal is dominated by dijet events with large imbalance, which is true for all other $R$ parameters as well. For events with $\AJ > 0.22$, $\langle p_{\rm T,2} \rangle \approx$ 70--80\GeV is sufficiently close to the 50\GeV threshold for subleading jets falling below the threshold to create sizable effects on the results.
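The geometric matching and swapped-ordering check underlying Table~\ref{table_compare_R5} reduce to a distance cut in the $(\eta, \phi)$ plane. The following sketch assumes jets are given as $(\eta, \phi)$ pairs; the function names and the classification labels are illustrative, not the analysis code.

```python
import math

def jets_match(jet_a, jet_b, max_delta=0.5):
    """Matching criterion: Delta_i = sqrt(dEta^2 + dPhi^2) < 0.5, with the
    azimuthal difference wrapped into [-pi, pi]."""
    (eta_a, phi_a), (eta_b, phi_b) = jet_a, jet_b
    dphi = math.remainder(phi_a - phi_b, 2.0 * math.pi)
    return math.hypot(eta_a - eta_b, dphi) < max_delta

def classify_dijet(pair_r, pair_r5):
    """Compare a dijet found with some R to the R = 0.5 reference pair:
    'matched' if leading matches leading and subleading matches subleading,
    'swapped' if the ordering is interchanged, 'unmatched' otherwise."""
    (lead, sub), (lead5, sub5) = pair_r, pair_r5
    if jets_match(lead, lead5) and jets_match(sub, sub5):
        return "matched"
    if jets_match(lead, sub5) and jets_match(sub, lead5):
        return "swapped"
    return "unmatched"
```

The azimuthal wrapping matters here: two jets at $\phi = 3.1$ and $\phi = -3.1$ are nearly collinear, not separated by $6.2$ radians.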
The last column of Table~\ref{table_compare_R5} gives the fraction of events with swapped leading and subleading jets compared to those with $R = 0.5$. For these events, the $\PTslash^{\parallel}$ has an opposite sign relative to the value for $R = 0.5$, as $\phi_{\rm{dijet}}$ points into the opposite hemisphere. Especially in pp collisions, swapping of the leading and subleading jets is the main source of events in which the jet directions are not matched. In PbPb collisions, swapping is slightly less frequent than in pp collisions, suggesting that the third jet may be replacing the subleading jet. For events that satisfy the dijet requirements for different $R$ parameters, the $\PTslash^{\parallel}$ in each event can still change as a function of $R$ because of the swapping of jets in the dijet pairs, and the replacement of the subleading jet by the third jet. \begin{figure}[h!t] \centering \includegraphics[width=0.95\textwidth]{figures/20150831/HIN14010_Fig6} \caption{ (Color online) Upper row shows \MPTpTdR in pp collisions as a function of $\Delta$, for a distance parameter $R = 0.2$, $0.3$, $0.4$, and $0.5$, from left to right for different ranges of track $\pt$, and \MPTdR (\ie \MPTpTdR summed over all $\pt$ for a given $\Delta$ bin). Dashed lines indicate cumulative results for \MPTcum in pp, for each distance parameter (\ie integrating \MPTdR over the $\Delta$ range from $\Delta = 0$ to the point of interest). Middle row provides \MPTpTdR and \MPTdR in PbPb collisions of centrality range 0--30\% as a function of $\Delta$, for distance parameters $R = 0.2$, $0.3$, $0.4$, and $0.5$ from left to right. Solid lines indicate \MPTcum in PbPb for each distance parameter. Lower row has the difference between PbPb and pp. Error bars and brackets represent statistical and systematic uncertainties, respectively.
The results are inclusive in the dijet asymmetry parameter $\AJ$.} \label{fig:Mpt_delR_incAj_RDep030} \end{figure} The dependence of \MPTpTdR on $\Delta$ and $R$ is shown in Fig.~\ref{fig:Mpt_delR_incAj_RDep030}, without any $\AJ$ requirement, for pp and for PbPb events with 0--30\% centralities. The $R$-dependent evolution in pp collisions, which is attributed to the softening and broadening of jets, can be seen as a shift in the position of the sign change of \MPTpTdR and as a decrease in the total imbalance within the jet cones \mbox{$\Delta \lesssim 0.2$--0.4}. Moreover, the peak of the balancing distribution shifts towards larger $\Delta$ as the jet distance parameter $R$ increases (from $\Delta = 0.2$--0.4 for $R = 0.2$ jets, to $\Delta = 0.6$--1.0 for $R = 0.5$ jets). As stated for $R = 0.3$ jets in Section~\ref{sec:results_AngDep}, the peak position is correlated with the most likely position of the third jet relative to the subleading jet, which also moves to larger angles with increasing $R$. In the PbPb system, the peak also shifts towards greater $\Delta$, but less than in pp collisions, due to the additional soft particles at small angles associated with the quenching of the dijet pair and the reduction in the number of high-\pt particles associated with the third jet. In the PbPb$-$pp bottom panels, this manifests itself in the depletion in the higher \pt ranges, 4--8 and 8--300\GeV, which shifts to greater angular distances with increasing $R$. There is a modest increase observed in the excess in the \pt ranges of 0.5--1 and 1--2\GeV with increasing $R$. The overall distribution of the low-\pt excess in PbPb relative to pp does not change significantly with the distance parameter, especially not at larger angular distances $\Delta$.
There is a hint that the \MPTcum distribution in central PbPb collisions, shown by the black curves in Fig.~\ref{fig:Mpt_delR_incAj_RDep030}, is narrower than in pp collisions, shown by the dashed black curves, meaning that the slope is larger in PbPb relative to pp collisions. This becomes slightly more significant at $R = 0.5$, where the selection bias against jets with a large angular width becomes smaller. This is also reflected in the increase in the magnitude of \MPTdR in the leading jet direction in the first bin, and in the subleading jet direction in the second bin. This modification is dominated by particles with $\pt > 2\GeV$, and may arise from quenching effects, causing leading jets to narrow or subleading jets to widen in central PbPb relative to pp collisions. \begin{figure}[h!t] \centering \includegraphics[width=0.95\textwidth]{figures/20150924/HIN14010_Fig7} \caption{(Color online) Upper row shows \MPTpT (the individual track \pt) and \MPTsum (sum over all ranges of track \pt) as a function of $\AJ$ in pp collisions for distance parameters $R = 0.2$, $0.3$, $0.4$, and $0.5$, from left to right. The dijet asymmetry ranges from almost balanced ($\AJ < 0.11$) to unbalanced ($\AJ > 0.33$) dijets. Middle row provides \MPTpT and \MPTsum as a function of $\AJ$ in PbPb collisions of centrality range 0--10\%, for distance parameters $R = 0.2$, $0.3$, $0.4$, and $0.5$, from left to right. Lower row shows the PbPb $-$ pp difference of the \MPTpT and \MPTsum shown in the upper panels.
Error bars and brackets represent statistical and systematic uncertainties, respectively.} \label{fig:Mpt_aj_RDep010} \end{figure} To summarize the dependence of the differences in \pt balance among different $R$ bins on $\AJ$, and to investigate the observed changes in the associated track \pt spectrum in more central events, our measurement of the dependence of the \pt balance on $R$ and $\AJ$ is shown in Fig.~\ref{fig:Mpt_aj_RDep010} for pp and 0--10\% central PbPb events, respectively, in the top and middle rows. The leftmost panels correspond to a selection of $R = 0.2$ jets, while the rightmost panels correspond to $R = 0.5$. For pp collisions, there is a slight decrease in the magnitude of the signal in each \pt range as $R$ increases. This behavior is consistent with the observed reduction in the in-cone \MPTpTdR for high-\pt tracks with $\Delta < 0.2$ shown in the top panels of Fig.~\ref{fig:Mpt_delR_incAj_RDep030} as a function of $R$, which was discussed above, and is also observed in generator-level \PYTHIA. This kind of behavior is not observed in central PbPb events. The bottom row of Fig.~\ref{fig:Mpt_aj_RDep010} displays the difference between PbPb and pp results. The $R$ parameter is correlated with a small change in the magnitude of the \MPTpT excess of low-\pt particles, as jets of larger $R$ give a greater excess. When the \pt ranges 0.5--2.0\GeV are combined, the increase in the low-\pt excess becomes more significant. The systematic uncertainties shown in the plot are dominated by the \pt range 8.0--300.0\GeV, and as such cannot be used to characterize the significance of \MPTpT in the low track-\pt ranges, nor the slight dependence of the low-\pt excess on the distance parameter. The sum of track \pt ranges, \MPTsum, is insensitive to the distance parameter, and the difference between PbPb and pp collisions is consistent with zero for all $R$ values.
\begin{figure}[h!t] \centering \includegraphics[width=\textwidth]{figures/20150831/HIN14010_Fig8} \caption{(Color online) Difference in differential multiplicity $\langle {\rd\Delta_\text{mult}}/{\rd \pt^\text{trk}}\rangle$ between the away-side and leading-jet hemispheres as a function of track $\pt$, using an inclusive dijet asymmetry selection. Left panel has measurements in pp for jet radii $R = 0.2$, $0.3$, $0.4$, and $0.5$, and the middle panel displays similar measurements in PbPb. Right panel provides the difference in $\langle{\rd \Delta_\text{mult}}/{\rd \pt^\text{trk}}\rangle$ between PbPb and pp collisions for each momentum range. Systematic uncertainties are shown as boxes. Error bars represent statistical uncertainties.} \label{fig:Result_dNdPT_AJInc} \end{figure} Finally, the charged-particle \pt spectrum of the multiplicity difference $\langle\Delta_\text{mult}\rangle$, associated with the excess of low-\pt particles shown in Figs.~\ref{fig:Mpt_delR_incAj_RDep030} and~\ref{fig:Mpt_aj_RDep010}, is given in Fig.~\ref{fig:Result_dNdPT_AJInc} for events with 0--30\% centrality, without any $\AJ$ requirement, for several distance parameters in pp and PbPb collisions, and for their difference. In pp collisions, the fragmentation of leading jets with high \pt provides more high-\pt and fewer low-\pt particles in the hemisphere of the leading jet relative to the subleading-jet hemisphere. As a result, $\langle \rd\Delta_\text{mult} / \rd\pt \rangle$ has a positive value for charged particles with $\pt < 8\GeV$ and a negative value for charged particles with $\pt > 8\GeV$. Similarly, in PbPb collisions, $\langle \rd\Delta_\text{mult} / \rd\pt \rangle$ is positive for particles with $\pt < 8\GeV$ and becomes negative in the last bin, although the spectrum is much steeper and has a large excess of soft particles.
By taking the difference in $\langle \rd\Delta_\text{mult} / \rd\pt \rangle$ between PbPb and pp collisions, a significant excess (${>}5$ standard deviations) is observed at $\pt < 2\GeV$, and a depletion at $\pt > 4\GeV$, while there is only a slight excess in the range $2 < \pt < 4\GeV$. Changing $R$ does not have an effect on the results in pp collisions, while in PbPb collisions there is a small enhancement in the excess for low-\pt charged particles as $R$ is increased from 0.2 to 0.5. \section{Summary and conclusions} The transverse momentum flow relative to the dijet axis in PbPb and pp collisions containing jets with large $\pt$ has been studied using data corresponding to integrated luminosities of 166\mubinv and 5.3\pbinv, respectively, collected at a nucleon-nucleon center-of-mass energy of 2.76\TeV. Dijet events were selected containing a leading jet with transverse momentum $p_{\rm T,1} > 120\GeV$ and a subleading jet with $p_{\rm T,2} > 50\GeV$, reconstructed using the anti-$\kt$ algorithm, with distance parameters of $R = 0.2$, $0.3$, $0.4$, and $0.5$. For PbPb collisions, the dijet events show a larger asymmetry in \pt between the leading and subleading jets than in pp collisions. The multiplicity, angular, and \pt spectra of the radiation balancing this asymmetry are characterized using several techniques as a function of PbPb collision centrality and \pt asymmetry. For a given dijet asymmetry, the imbalance in \pt in PbPb collisions is found to be compensated by particles at $\pt =$ 0.5--2\GeV, whereas in pp collisions most of the momentum balance is found in the $\pt$ range of 2--8\GeV, reflecting a softening of the radiation responsible for the imbalance in \pt of the asymmetric dijet system in PbPb interactions. Correspondingly, a larger multiplicity of associated particles is seen in PbPb than in pp collisions. Both measurements show larger differences between PbPb and pp for more central PbPb collisions.
The current data provide the first detailed study of the angular dependence of charged particle contributions to the asymmetry up to large angles from the jet axis ($\Delta = 1.8$). Despite the large shift in the \pt spectrum of particles, the angular pattern of energy flow in PbPb events as a function of $\Delta$ matches that seen in pp collisions, especially for small $R$ parameters. The results suggest that either the leading jet is getting narrower, or the subleading jet is getting broader after quenching. In pp collisions, the balancing distribution shifts to larger $\Delta$ with increasing distance parameter $R$, likely because of the presence of a third jet further away from the dijet axis. The shift is more pronounced than in PbPb collisions, where there is an excess of low \pt particles close to the jet axes. These results constrain the redistribution of transverse momentum in the modelling of QCD energy loss processes of partons traversing the hot and dense medium created in heavy-ion collisions. \section*{Acknowledgments} We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centres and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses.
Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: BMWFW and FWF (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES and CSF (Croatia); RPF (Cyprus); MoER, ERC IUT and ERDF (Estonia); Academy of Finland, MEC, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRT (Greece); OTKA and NIH (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); MSIP and NRF (Republic of Korea); LAS (Lithuania); MOE and UM (Malaysia); CINVESTAV, CONACYT, SEP, and UASLP-FAI (Mexico); MBIE (New Zealand); PAEC (Pakistan); MSHE and NSC (Poland); FCT (Portugal); JINR (Dubna); MON, RosAtom, RAS and RFBR (Russia); MESTD (Serbia); SEIDI and CPAN (Spain); Swiss Funding Agencies (Switzerland); MST (Taipei); ThEPCenter, IPST, STAR and NSTDA (Thailand); TUBITAK and TAEK (Turkey); NASU and SFFR (Ukraine); STFC (United Kingdom); DOE and NSF (USA).
Individuals have received support from the Marie-Curie programme and the European Research Council and EPLANET (European Union); the Leventis Foundation; the A. P.
Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Council of Science and Industrial Research, India; the HOMING PLUS programme of the Foundation for Polish Science, cofinanced from European Union, Regional Development Fund; the OPUS programme of the National Science Center (Poland); the Compagnia di San Paolo (Torino); the Consorzio per la Fisica (Trieste); MIUR project 20108T4XTM (Italy); the Thalis and Aristeia programmes cofinanced by EU-ESF and the Greek NSRF; the National Priorities Research Program by Qatar National Research Fund; the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University (Thailand); and the Welch Foundation, contract C-1845.
\section{Introduction} In \cite{Rau-TropicalPoincareHopf}, the author proves a tropical version of the Poincaré-Hopf theorem and conjectures a tropical version of the Lefschetz-Hopf trace formula. The main result of this paper is the proof of the tropical trace formula in the case of \emph{matroidal} automorphisms. In this case, the formula can be refined by giving a third description of the value in terms of the (generalized) beta invariant of the lattice of fixed flats. \begin{theorem} \label{thm:traceformulamatroidauts} Let $M$ be a loopless matroid and let $\Psi \colon \Sigma_M \to \Sigma_M$ be a matroidal automorphism. Then \begin{equation} \label{eq:traceformula} \deg (\Gamma_{\Psi} \cdot \Delta) = (-1)^n \beta(\Fix(L(M))) = \sum_{p=0}^n (-1)^{p} \Tr(\Psi_*, {\mathbf F}_p(\Sigma_M)). \end{equation} \end{theorem} Let us quickly review the ingredients of the theorem (precise definitions follow in the later sections). In the following, $M$ is a loopless matroid of rank $n+1$ on the set $E = \{0, \dots, N\}$. We denote by $\Sigma_M \subset {\mathbf R}^N$ the (projective) matroid fan of $M$. A \emph{matroidal automorphism} $\Psi \colon \Sigma_M \to \Sigma_M$ is a tropical automorphism of $\Sigma_M$ which is induced by a matroid automorphism $\psi \colon M \to M$. More precisely, the relationship is given by \[ \Psi(v_S) = v_{\psi(S)} \] for any $S \subset E$ and associated indicator vector $v_S$. It is easy to see that in general $\Sigma_M$ can have automorphisms which are \emph{not} matroidal. Still, we hope that this special case is of interest given the additional combinatorial structures and, in particular, the (generalized) beta invariant description that is present in this case. Given $\Psi$, we denote by $\Gamma_\Psi, \Delta \subset \Sigma_M \times \Sigma_M$ the graph of $\Psi$ and the diagonal of $\Sigma_M$, respectively. They carry the structure of tropical cycles.
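Before proceeding, let us illustrate \autoref{thm:traceformulamatroidauts} in the smallest non-trivial case; the following example is only meant as a sanity check and is worked out by hand. \begin{remark} Let $M = U_{2,3}$ be the uniform matroid of rank $2$ on $E = \{0,1,2\}$, so $n = 1$ and $\Sigma_M$ is the tropical line in ${\mathbf R}^2$ with rays generated by $v_{\{1\}} = e_1$, $v_{\{2\}} = e_2$ and $v_{\{0\}} = -e_1 - e_2$, and let $\psi$ be the transposition exchanging $1$ and $2$. The fixed flats form the chain $\emptyset \subsetneq \{0\} \subsetneq E$, with Möbius values $\mu^\psi(\emptyset, \emptyset) = 1$, $\mu^\psi(\emptyset, \{0\}) = -1$ and $\mu^\psi(\emptyset, E) = 0$, hence \[ \beta(\Fix(L(M))) = (-1)^{n+1} \bigl( 1 \cdot 0 + (-1) \cdot 1 + 0 \cdot 2 \bigr) = -1. \] On the trace side, $\Psi_*$ acts trivially on ${\mathbf F}_0(\Sigma_M) = {\mathbf R}$ and swaps $e_1$ and $e_2$ on ${\mathbf F}_1(\Sigma_M) = {\mathbf R}^2$, so that \[ \sum_{p=0}^1 (-1)^p \Tr(\Psi_*, {\mathbf F}_p(\Sigma_M)) = 1 - 0 = 1 = (-1)^n \beta(\Fix(L(M))), \] in agreement with \autoref{eq:traceformula}. \end{remark}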
Using the tropical intersection theory for matroid fans constructed in \cite{Sha-TropicalIntersectionProduct, FR-DiagonalTropicalMatroid}, we can define their intersection product $\Gamma_\Psi \cdot \Delta$. The degree of this product is the intersection-theoretic side of the tropical trace formula. For $p = 0, \dots, n$, the framing group ${\mathbf F}_p(\Sigma_M) \subset \bigwedge^p {\mathbf R}^N$ is the vector space generated by wedges of $p$ vectors contained in the same cone of $\Sigma_M$ \cite{MZ-TropicalEigenwaveIntermediate, IKMZ-TropicalHomology}. Since $\Psi$ is the restriction of a linear map on ${\mathbf R}^N$, $\Psi$ induces a map $\Psi_* \colon \bigwedge^p {\mathbf R}^N \to \bigwedge^p {\mathbf R}^N$ which can be restricted to $\Psi_* \colon {\mathbf F}_p(\Sigma_M) \to {\mathbf F}_p(\Sigma_M)$. We denote the trace of this automorphism by $\Tr(\Psi_*, {\mathbf F}_p(\Sigma_M))$. The alternating sum of traces is the trace side of the tropical trace formula. To connect the two sides, we use an intermediate, combinatorially defined invariant which is the (generalized) \emph{beta invariant} of a lattice with rank function. In our case, the lattice in question is the lattice of flats $F \in L(M)$ which are fixed by $\psi$, \[ \Fix(L(M)) = \{F \in L(M) : F = \psi(F)\} \] The beta invariant is defined using the Möbius function $\mu^\psi$ of $\Fix(L(M))$ and the rank function $\rk$ of $M$ (or $L(M)$) by \[ \beta(\Fix(L(M))) = (-1)^{n+1} \sum_{\substack{F \in L(M) \\ F = \psi(F)}} \mu^\psi(\emptyset, F) \rk(F). \] To prove the trace formula, we will show that both sides agree with the beta invariant (up to sign). Note that this is a generalization of the situation in \cite{Rau-TropicalPoincareHopf}, where the two sides of the Poincaré-Hopf theorem are shown to be equal to the (ordinary) beta invariant of $M$. 
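The beta invariant of the fixed lattice is straightforward to compute mechanically. The following sketch (illustrative only; all helper names are ours and not taken from the text) implements the Möbius recursion and the definition above for uniform matroids; for $U_{2,3}$ with the transposition exchanging $1$ and $2$ it returns $-1$, and with the identity it recovers the ordinary beta invariant $\beta(U_{2,3}) = 1$.

```python
from itertools import combinations

def flats_U(r, ground):
    """Flats of the uniform matroid U_{r,|ground|}: all sets of size < r, plus E."""
    fl = [frozenset(c) for k in range(r) for c in combinations(ground, k)]
    fl.append(frozenset(ground))
    return fl

def mobius(poset, x, y, memo=None):
    """Moebius function of a finite poset of frozensets ordered by inclusion."""
    if memo is None:
        memo = {}
    if x == y:
        return 1
    if (x, y) not in memo:
        # mu(x, y) = - sum of mu(x, z) over all x <= z < y
        memo[(x, y)] = -sum(mobius(poset, x, z, memo)
                            for z in poset if x <= z and z < y)
    return memo[(x, y)]

def beta_fixed(flats, rank, psi):
    """Generalized beta invariant of Fix(L(M)), using the rank function of M."""
    fixed = [F for F in flats if frozenset(psi[e] for e in F) == F]
    n = rank(max(fixed, key=len)) - 1
    return (-1) ** (n + 1) * sum(mobius(fixed, frozenset(), F) * rank(F)
                                 for F in fixed)

ground = (0, 1, 2)
rank = lambda S: min(len(S), 2)                                # rank of U_{2,3}
print(beta_fixed(flats_U(2, ground), rank, {0: 0, 1: 2, 2: 1}))   # psi = (1 2)
print(beta_fixed(flats_U(2, ground), rank, {e: e for e in ground}))  # psi = id
```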
Even though we only use elementary properties of the generalized beta invariant and the fixed lattice $\Fix(L(M))$, we hope that this result encourages the further study of their combinatorial properties. \paragraph*{Acknowledgements} I would like to thank Kristin Shaw, Karim Adiprasito and Omid Amini for useful discussions and feedback on this project. \section{Preliminaries} \subsection{Notational summary} In order to compute the intersection-theoretic side of the trace formula, we will use the same approach as in \cite{Rau-TropicalPoincareHopf}. That is to say, we will rely on the expression of the diagonal in terms of generic chains of matroids from \cite{FR-DiagonalTropicalMatroid}. In order to keep the overlap to a minimum, we will just give a quick summary of the relevant notation and statements here and refer the reader to the aforementioned sources for more details. \bigskip \begin{tabular}{lp{0.8\linewidth}} $E$ & base set $\{0, \dots, N\}$ \\ $M$ & loopless matroid of rank $n+1$ on $E$ \\ $\rk$ & rank function of $M$ \\ $L(M)$ & lattice of flats of $M$ \\ $\mathbf{1}$ & all-one vector $(1,\dots,1) \in {\mathbf R}^E \cong{\mathbf R}^{N+1}$ \\ $v_S$ & indicator vector of a subset $S \subset E$ (in ${\mathbf R}^E$ and by abuse of notation in ${\mathbf R}^E/{\mathbf R}\mathbf{1}$) \\ ${\mathcal F}$ & chain of flats $E \supsetneq F_1 \supsetneq \dots \supsetneq F_l \supsetneq \emptyset$ (note that we count in decreasing order) \\ $l({\mathcal F})$ & length of a chain ($l$ in the above notation) \\ $\gap({\mathcal F})$ & gap sequence $(r_0, r_1, \dots, r_l)$ of ${\mathcal F}$, where $r_i := \rk F_i - \rk F_{i+1} - 1$ (setting $F_0 := E$ and $F_{l+1} := \emptyset$) \\ $\sigma_{\mathcal F}$ & cone in ${\mathbf R}^E$ (and ${\mathbf R}^E/{\mathbf R}\mathbf{1}$) generated by the indicator vectors $v_{F_i}$ (and ${\mathbf R} v_E = {\mathbf R}\mathbf{1}$) \\ $\Sigma'_M$ & \emph{affine} matroid fan sitting in ${\mathbf R}^E$ (the union of cones $\sigma_{\mathcal F}$ for all chains of flats ${\mathcal F}$) \\ $\Sigma_M$
& \emph{projective} matroid fan sitting in ${\mathbf R}^E/{\mathbf R}\mathbf{1}$ \\ ${\mathbf F}_p(\Sigma_M)$ & framing groups of $\Sigma_M$ (generated by wedges of $p$ vectors contained in the same cone of $\Sigma_M$) \end{tabular} \bigskip Running through all chains ${\mathcal C}$ of (arbitrary) subsets of $E$, the collection of cones $\sigma_{\mathcal C}$ forms a unimodular subdivision of ${\mathbf R}^E$ (and ${\mathbf R}^E/{\mathbf R}\mathbf{1}$) which we call the \emph{permutahedral fan} (since it is the normal fan of the permutahedron). Throughout the following, matroid fans (as well as all other fans to come) will always be represented as subfans of the permutahedral fan. This representation is called the \emph{fine subdivision} of $\Sigma_M$. Since the permutahedral fan is unimodular, in order to describe a piecewise ${\mathbf Z}$-linear function $f$ on it, it suffices to prescribe its values on the indicator vectors $v_S$. We will often use the shorthand $f(S)$ instead of $f(v_S)$. Given two matroids $M, N$ on the ground set $E$, it is obvious that $\Sigma_N \subset \Sigma_M$ (both as sets and fans) if and only if $L(N) \subset L(M)$ (or, in matroid terminology, $N$ is a \emph{quotient} of $M$). In such a case, there exists a canonical sequence of matroids $N = M_0, M_1, \dots, M_s = M$ (called the \emph{generic chain}) such that $\rk(M_i) = \rk(N) + i$ and with rank functions \begin{equation} \label{eq:intermediatematroids} \rk_{M_i}(S) = \min\{\rk_N(S) + i, \rk_M(S)\}. \end{equation} Moreover, there is a sequence of piecewise ${\mathbf Z}$-linear functions (on the permutahedral fan) $g'_1, \dots, g'_s : {\mathbf R}^{N+1} \to {\mathbf R}$ such that \begin{equation} \label{eq:intersectfunctiongeneral} \Sigma'_{M_{s-i}} = g'_i \cdot g'_{i-1} \cdots g'_1 \cdot \Sigma'_M.
\end{equation} If $N$ and $M$ correspond to the hyperplane arrangements associated to the projective subspaces $K \subset L \subset \mathbf{CP}^n$, then the $M_i$ correspond to a generic flag of subspaces $K \subset S_1 \subset \dots \subset S_s = L$ (see also \autoref{lem:hyperplanesection} for a tropical version). The functions are given by \begin{equation} \label{eq:functioningeneral} g'_i(S) = \begin{cases} -1 & \rk_M(S) \geq \rk_N(S) + s + 1 - i, \\ 0 & \text{otherwise}. \end{cases} \end{equation} We denote by $M\oplus_0 M$ the parallel connection of $M$ with itself along the element $0$. Its base set $E \sqcup_0 E$ is the disjoint union of $E$ with itself, with the two copies of $0$ identified. By convention, we write a subset of $E \sqcup_0 E$ as a pair $(F,G), F,G \subset E$ such that either $0 \in F \cap G$ or $0 \notin F \cup G$. We will always identify ${\mathbf R}^E/{\mathbf R}\mathbf{1} = {\mathbf R}^N$ by setting the coordinate corresponding to $0 \in E$ to zero --- $x_0 = 0$. This induces a natural identification of \[ {\mathbf R}^{E\sqcup_0 E}/{\mathbf R}\mathbf{1} = {\mathbf R}^{2N} = {\mathbf R}^E/{\mathbf R}\mathbf{1} \times {\mathbf R}^E/{\mathbf R}\mathbf{1}. \] Under this identification, we have $\Sigma_{M\oplus_0 M} = \Sigma_M \times \Sigma_M$. Using this setup, the diagonal $\Delta \subset \Sigma_M \times \Sigma_M$ can also be represented as a matroid fan. The rank function of the associated matroid is given by \[ \rk_\Delta(F,G) = \rk(F \cup G).
\] Applying the construction from \autoref{eq:intersectfunctiongeneral} to $\Delta \subset \Sigma_M \times \Sigma_M$, we obtain (dehomogenised) functions $g_1, \dots, g_n \colon {\mathbf R}^{2N} \to {\mathbf R}$ given by \begin{equation} \label{eq:functiongidehom} g_i(F,G) = \begin{cases} -1 & 0 \notin F, \rk(F) + \rk(G) \geq \rk(F \cup G) + n + 1 - i, \\ +1 & 0 \in F, \rk(F) + \rk(G) \leq \rk(F \cup G) + n + 1 - i, \\ 0 & \text{otherwise}, \end{cases} \end{equation} and such that \begin{equation} \label{completeintersection} \Delta = g_n \cdots g_1 \cdot (\Sigma_M \times \Sigma_M) \end{equation} (see \cite[Equation (12) and Proposition 2.6]{Rau-TropicalPoincareHopf}). Our computation of $\deg(\Gamma_\Psi \cdot \Delta)$ is based on this description of $\Delta$. \subsection{Hyperplane sections} A special case of \autoref{eq:intersectfunctiongeneral} that we will use briefly is when $N = U_{1,N+1}$, the unique loopless matroid of rank $1$. In this case, $\Sigma_N = \{\mathbf{0}\}$. Given any loopless matroid $M$, let $M^{\leq i}$ denote the matroid whose flats are the flats of $M$ of rank at most $i$ as well as $E$ (called the $i$-th \emph{truncation} of $M$). Indeed, this defines a matroid by an obvious check of axioms or by the following observation. \begin{lemma} \label{lem:genericmatroids} Let $M$ be a loopless matroid and set $N = U_{1,N+1}$. Then the intermediate matroids $M_i$ from \autoref{eq:intersectfunctiongeneral} are equal to $M^{\leq i}$ for all $i = 0, \dots, s$. \end{lemma} \begin{proof} This is straightforward using \autoref{eq:intermediatematroids} and $\rk_N(S) = 1$ for all $S \neq \emptyset$. \end{proof} As mentioned before, the intermediate matroids $M_i$ should be thought of as generic hyperplane sections. This can be made precise tropically by the following statement. \begin{lemma} \label{lem:hyperplanesection} Let $H$ denote the standard hyperplane in ${\mathbf R}^N$. 
For $i = 0, \dots, n$, we have \[ H^{n-i} \cdot \Sigma_M = \Sigma_{M^{\leq i}}. \] \end{lemma} \begin{proof} By induction, it is sufficient to prove $H \cdot \Sigma_M = \Sigma_{M^{\leq n-1}}$. Setting \[ h = \max\{x_0, \dots, x_N\} = \tr{x_0 + \dots + x_N}, \] we have $(H \cdot \Sigma_M)' = h \cdot \Sigma'_M$. By \autoref{lem:genericmatroids} and \autoref{eq:intersectfunctiongeneral}, we can write $\Sigma'_{M^{\leq n-1}}$ as $g'_1 \cdot \Sigma'_M$. By \autoref{eq:functioningeneral}, $g'_1(S) = -1$ if $\rk(S) = n+1$ and $g'_1(S) = 0$ otherwise. For flats $S=F$ of $M$, $\rk(F) = n+1$ is equivalent to $F = E$. On the other hand, we have $h(S) = -1$ if and only if $S = E$. It follows that the functions $h$ and $g'_1$ agree on flats of $M$ and hence on $\Sigma'_M$, which proves the claim. \end{proof} \subsection{Cutting out matroid fans} As a side remark, let us have a quick look at the opposite situation to the previous subsection, the inclusion $\Sigma_M \subset {\mathbf R}^N = \Sigma_{U_{N+1,N+1}}$. In this case, \autoref{eq:intersectfunctiongeneral} allows us to write $\Sigma_M$ as a complete intersection \[ \Sigma_M = g_{N-n} \cdots g_1 \cdot {\mathbf R}^N. \] Of course, this is a complete intersection in a rather weak sense. For example, the functions need not be tropically linear nor convex (tropical polynomials) in general. Nevertheless, this description of arbitrary matroid fans might be useful in some contexts. For example, the CSM classes of matroid fans studied in \cite{dMRS-ChernSchwartzMacpherson} can be written in terms of these functions as \[ \text{CSM}_*(\Sigma_M) = \prod_{i=1}^{N-n} \frac{g_i}{1+g_i} \cdot {\mathbf R}^N = \prod_{i=1}^{N-n} \frac{1}{1+g_i} \cdot \Sigma_M. \] The $n_*$-classes appearing in \cite[Section 9.3]{dMRS-ChernSchwartzMacpherson} in connection with Speyer's $g$-polynomial \cite{Spe-MatroidInvariantVia} can be written as \[ n_*(\Sigma_M) = \frac{\prod_{i=1}^{N-n} (1+g_i)}{1+h} \cdot \Sigma_M.
\] Here, again, $h = \max\{x_0, \dots, x_N\} = \tr{x_0 + \dots + x_N}$. Finally, note that $g_1 = h$ if and only if $M$ has no coloops. Hence, in this case $n_*$ can be further simplified by clearing the denominator. In view of the $g$-polynomial conjecture, it is interesting to study the positivity properties of the functions $g_i$ and the expressions above. \section{Matroidal automorphisms and their beta invariant} \subsection{Generalized beta invariants} \label{betainvariant} Beta invariants are usually defined in the context of geometric lattices. We want to extend the definition to the case of arbitrary lattices $L$ equipped with a rank function $\rk$. The application we have in mind is the sublattice $L = \Fix(L(M))$ of fixed flats of a matroidal automorphism (with the restricted rank function), see \autoref{matautom}. We define the generalized beta invariant as (up to signs) the convolution of $\rk$ with the Möbius function $\mu$ of $L$. The main property that we will need later (\autoref{lem:RecursiveBeta}) is based purely on this definition, without further requirements on $(L, \rk)$. Let $L$ be a finite lattice with minimal element $\emptyset$ and maximal element $E$. Let $\rk \colon L \to {\mathbf Z}$ be an arbitrary function on $L$, called the rank function. We set $n := \rk(E) - \rk(\emptyset) -1$. \begin{definition} The \emph{beta invariant} of $L$ (or rather, $(L,\rk)$) is \[ \beta(L) := (-1)^{n+1} \sum_{F \in L} \mu(\emptyset, F) \rk(F). \] \end{definition} For any $F \in L$, we equip the interval $[F,E]$ with the restricted rank function $\rk|_{[F,E]}$ and set \begin{equation} \label{eq:betaforF} \beta(F) := \beta([F,E]) = (-1)^{\rk(E) - \rk(F)} \sum_{F \subset G \in L} \mu(F, G) \rk(G). \end{equation} \begin{lemma} \label{lem:InverseBeta} For any $F \in L$, we have \[ \rk(F) = \sum_{F \subset G \in L} (-1)^{\rk(E)-\rk(G)} \beta(G). \] \end{lemma} \begin{proof} This is just Möbius inversion for \autoref{eq:betaforF}.
\end{proof} For our purposes, it will be useful to rewrite this formula in a particular, asymmetric way. \begin{lemma} \label{lem:RecursiveBeta} Fix an element $G \in L$. Then \[ (-1)^{n} \beta(L) = (\rk(G) - \rk(\emptyset)) \; - \; \sum_{\emptyset \neq F \notin [G,E]} (-1)^{\rk(E) -\rk(F)-1} \beta(F). \] \end{lemma} \begin{proof} Since $\beta(\emptyset) = \beta(L)$, the formula is a rearrangement (including some sign yoga) of \[ \rk(\emptyset) - \rk(G) = \sum_{\emptyset \neq F \notin [G,E]} (-1)^{\rk(E) -\rk(F)} \beta(F). \] This follows from using \autoref{lem:InverseBeta} twice. \end{proof} \begin{remark} Let $K \subset L$ be a sublattice with $\emptyset, E \in K$. We equip $K$ with the restricted rank function $\rk|_K$. For any $F \in L$, we set \[ \cl(F) = \bigwedge_{\substack{G \in K \\ F \leq G}} G. \] By general properties of Möbius functions, the Möbius function $\mu^K$ of $K$ can be computed in terms of the Möbius function $\mu^L$ of $L$ by \[ \mu^K(\emptyset, G) = \sum_{\substack{F \in L \\ \cl(F) = G}} \mu^L(\emptyset, F). \] It follows that the beta invariant of $K$ can be expressed as \begin{equation} \label{eq:reformulationmob} \beta(K) = (-1)^{n+1} \sum_{G \in K} \mu^K(\emptyset, G) \rk(G) = (-1)^{n+1} \sum_{F \in L} \mu^L(\emptyset, F) \rk(\cl(F)). \end{equation} \end{remark} \subsection{Matroidal automorphisms} \label{matautom} A matroid automorphism $\psi \colon M \to M$ is a bijection $\psi \colon E \to E$ such that $F$ is a flat if and only if $\psi(F)$ is a flat. This induces an automorphism of geometric lattices $\psi \colon L(M) \to L(M)$ which we denote by the same letter. Vice versa, a lattice automorphism $\psi \colon L(M) \to L(M)$ defines a matroid automorphism up to the ambiguity of permuting parallel elements. For what follows, it is actually enough to fix the lattice automorphism $\psi \colon L(M) \to L(M)$ (since the linear span of $\Sigma_M$ is generated by the indicator vectors of flats).
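The reformulation \autoref{eq:reformulationmob} can also be tested numerically in the case $K = \Fix(L(M))$. The sketch below (again illustrative, with hypothetical helper names) compares, for $M = U_{3,4}$ and $\psi$ the transposition exchanging $2$ and $3$, the beta invariant of the fixed lattice computed directly from $\mu^K$ with its computation via $\mu^L$ and the closure operator $\cl$; both evaluations agree.

```python
from itertools import combinations

def flats_U(r, ground):
    """Flats of the uniform matroid U_{r,|ground|}: all sets of size < r, plus E."""
    fl = [frozenset(c) for k in range(r) for c in combinations(ground, k)]
    fl.append(frozenset(ground))
    return fl

def mobius(poset, x, y, memo=None):
    """Moebius function of a finite poset of frozensets ordered by inclusion."""
    if memo is None:
        memo = {}
    if x == y:
        return 1
    if (x, y) not in memo:
        memo[(x, y)] = -sum(mobius(poset, x, z, memo)
                            for z in poset if x <= z and z < y)
    return memo[(x, y)]

ground = (0, 1, 2, 3)
rank = lambda S: min(len(S), 3)        # rank function of U_{3,4}
psi = {0: 0, 1: 1, 2: 3, 3: 2}         # transposition exchanging 2 and 3

L = flats_U(3, ground)
K = [F for F in L if frozenset(psi[e] for e in F) == F]  # fixed flats Fix(L(M))
bottom, top = frozenset(), frozenset(ground)
n = rank(top) - 1

def cl(F):
    """Smallest psi-fixed flat containing F (Fix is closed under intersection)."""
    G = top
    for H in K:
        if F <= H:
            G = G & H
    return G

beta_direct = (-1) ** (n + 1) * sum(mobius(K, bottom, G) * rank(G) for G in K)
beta_pushed = (-1) ** (n + 1) * sum(mobius(L, bottom, F) * rank(cl(F)) for F in L)
print(beta_direct, beta_pushed)
```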
\begin{definition} Let $M$ be a loopless matroid. Given a matroid automorphism $\psi \colon M \to M$, the associated \emph{matroidal automorphism} $\Psi \colon \Sigma_M \to \Sigma_M$ is the restriction of the linear map on ${\mathbf R}^N$ given by \[ \Psi \colon v_S \mapsto v_{\psi(S)} \] for all $S \subset E$. \end{definition} It follows from the definition that $\Psi$ is a tropical automorphism of $\Sigma_M$ which respects its fine subdivision. In the context of the trace formula, we are interested in the subset \[ \Fix(L(M)) = \{F \in L(M) : F = \psi(F)\}. \] \begin{lemma} The set $\Fix(L(M))$ is a sublattice of $L(M)$. \end{lemma} \begin{proof} We need to show that $\Fix(L(M))$ is closed under $\wedge$ (intersection) and $\vee$ (union and closure). Since $\psi$ is a bijection, taking images commutes with intersection/unions, and since it takes flats to flats, it also commutes with taking closures. The statement follows. \end{proof} \begin{remark} Note that even though $L(M)$ is a geometric lattice, the sublattice $\Fix(L(M))$ is in general neither graded, atomic, coatomic nor complemented. \end{remark} In the context of \autoref{betainvariant}, we will always equip $\Fix(L(M))$ with the rank function $\rk$ of $L(M)$ restricted to $\Fix(L(M))$. In other words, we set \[ \beta(\Fix(L(M))) = (-1)^{n+1} \sum_{\substack{F \in L(M) \\ F = \psi(F)}} \mu^\psi(\emptyset, F) \rk(F) \] and for any $F \in \Fix(L(M))$ \begin{equation} \label{eqbetainterval} \beta(\Fix([F,E])) = (-1)^{\rk(E) - \rk(F)} \sum_{\substack{F \subset G \in L(M) \\ G = \psi(G)}} \mu^\psi(F, G) \rk(G). \end{equation} Here, $\mu^\psi$ denotes the Möbius function on $\Fix(L(M))$ (in contrast to the Möbius function of $L(M)$). So, to emphasize, the definition mixes the Möbius function of $\Fix(L(M))$ with the rank function of $L(M)$. Note that given $F \in \Fix(L(M))$, we could alternatively consider the contracted matroid $M/F$ with rank function $\rk' = \rk - \rk(F)$ and induced automorphism $\psi'$.
The beta invariant \begin{equation} \beta(\Fix(M/F)) = (-1)^{\rk(E) - \rk(F)} \sum_{\substack{F \subset G \in L(M) \\ G = \psi(G)}} \mu^\psi(F, G) (\rk(G) - \rk(F)) \end{equation} is equal to $\beta(\Fix([F,E]))$ from \autoref{eqbetainterval}, except for $F=E$ (since summing the Möbius function over a non-trivial interval gives zero). In the (irrelevant) case $F=E$, we have $\beta(\{E\}) = \rk(E) = n+1$. Finally, note that \autoref{eq:reformulationmob} applied to $K = \Fix(L(M)) \subset L(M) = L$ allows us to rewrite the beta invariant as \[ \beta(\Fix(L(M))) = (-1)^{n+1} \sum_{F \in L(M)} \mu(\emptyset, F) \rk(F^\psi) \] where $F^\psi$ denotes the smallest flat containing $F$ and fixed under $\psi$, \[ F^\psi := \bigcap_{\substack{G \supset F \\ G = \psi(G)}} G. \] We recall from the introduction that our main theorem consists of the following equalities. \[ \deg (\Gamma_{\Psi} \cdot \Delta) = (-1)^n \beta(\Fix(L(M))) = \sum_{p} (-1)^{p} \Tr(\Psi_*, {\mathbf F}_p(\Sigma_M)) \] The next two sections are devoted to the proofs of the first and second equality, respectively. \begin{remark} Note that a matroid fan $\Sigma_M$ can in general have many non-matroidal automorphisms. The trivial example is ${\mathbf R}^N = \Sigma_{U_{N+1,N+1}}$ with $\Aut({\mathbf R}^N) = \GL(N,{\mathbf Z})$ but $\Aut(U_{N+1,N+1}) = S_{N+1}$. In this case, however, it follows from \cite{Rau-TropicalPoincareHopf} that the trace formula holds in general. Other examples of non-matroidal automorphisms can be easily constructed, for example the Cremona type automorphisms discussed in \cite{Sha-TropicalSurfaces} or for parallel connections. \end{remark} \section{The intersection-theoretic side} In this section, our goal is to prove \begin{equation} \label{eq:rightside} \deg(\Gamma_\Psi \cdot \Delta) = (-1)^n \beta(\Fix(L(M))). \end{equation} \subsection{General approach} We denote by $\Gamma \colon \Sigma_M \to \Sigma_M \times \Sigma_M$, $x \mapsto (x,\Psi(x))$ the graph map.
We denote by $f_i := \Gamma^*(g_i)$ the pullbacks of the functions $g_i$ constructed in \autoref{eq:functiongidehom}. \begin{lemma} \label{lem:reformIntersection} With the notations from above, we have \[ \deg (\Gamma_\Psi \cdot \Delta) = \deg(f_n \cdots f_1 \cdot \Sigma_M). \] \end{lemma} \begin{proof} This follows from \autoref{completeintersection}, \cite[Theorem 4.5 (6)]{FR-DiagonalTropicalMatroid} and the projection formula \cite[Proposition 7.7]{AR-FirstStepsTropical}. \end{proof} We set $X_k := f_k \cdots f_1 \cdot \Sigma_M$ (so that $X_0 = \Sigma_M$). Our goal is to compute information about these intermediate intersection products inductively. The first new technical difficulty that we encounter in comparison to \cite{Rau-TropicalPoincareHopf} is the fact that $\Gamma \colon \Sigma_M \to \Sigma_{M\oplus_0 M}$ is in general \emph{not} a map of fans, i.e.\ the cones of $\Sigma_M$ are not necessarily mapped to cones of $\Sigma_{M\oplus_0 M}$. Consequently, the functions $f_k$ are in general not linear when restricted to cones $\sigma_{\mathcal F} \subset \Sigma_M$. In principle, this could lead to fan structures which cannot be described as subfans of the permutahedral fan. However, somewhat mysteriously, it turns out that no such refinements are necessary. In fact, we will show that $f_k$ is linear when restricted to a (non-zero) cone of $X_{k-1}$ (but not of $\Sigma_M$). By induction, this is enough to ensure that $X_k$ can (still) be written as a subfan of the permutahedral fan. The following lemma contains the technical key piece of this argument. \begin{lemma} \label{lem:fklinearity} Pick $k \in \{1, \dots, n\}$ and let ${\mathcal F} = (F_i)$ be a chain of flats such that one of the following conditions holds: \begin{enumerate} \item For any $i$, either $0 \in F_i \cap \psi(F_i)$ or $0 \notin F_i \cup \psi(F_i)$. \item The gap sequence of ${\mathcal F}$ is $\gap({\mathcal F}) = (k-1, 0, \dots, 0)$. \end{enumerate} Then the function $f_k$ is linear on $\sigma_{\mathcal F}$.
\end{lemma} \begin{proof} If property (a) holds, then $\Gamma(\sigma_{\mathcal F})$ is a cone in the fine subdivision of $\Sigma_{M\oplus_0 M}$, given by the chain of flats $((F_i,\psi(F_i)))$. Since the $g_k$ are linear on such a cone by definition, the $f_k$ are linear on $\sigma_{\mathcal F}$ as well. Let us now assume property (b) holds. We set $G_i = \psi(F_i)$ and choose $p$ and $q$ maximal with the property that $0 \in F_p$ and $0 \in G_q$, respectively. If $p=q$, we are back in the case of property (a). By symmetry of the function $g_k$ in the two factors, we can assume $p > q$ without loss of generality. Given a point $x \in \sigma_{\mathcal F}$, we write it as a positive linear combination of the $v_{F_i}$ with coefficients $a_i$. We claim that $f_k|_{\sigma_{\mathcal F}}(x)$ is equal to the linear function $A(x) := \sum_{i=1}^p a_i$. Set $y := \Gamma(x) = (x, \psi(x))$. Using indicator vectors, we can write $y$ as \begin{equation} \begin{split} \label{eq:vectorsum} y &= a_1 v_{(F_1, G_1)} + \dots + a_q v_{(F_q, G_q)} \\ &+ a_{q+1} (v_{(F_{q+1}, E)} + v_{(\emptyset, G_{q+1})}) + \dots + a_p (v_{(F_{p}, E)} + v_{(\emptyset, G_{p})}) \\ &+ a_{p+1} v_{(F_{p+1}, G_{p+1})} + \dots + a_l v_{(F_l, G_l)}. \end{split} \end{equation} Here, we use the formula $(v_F, v_G) = v_{(F, E)} + v_{(\emptyset, G)}$ if $0 \in F\setminus G$, which follows from our conventions. Note that when considered as an equation in ${\mathbf R}^{E \sqcup_0 E} = {\mathbf R}^{2N+1}$, $y$ is normalised in the sense that its minimal coordinate is $0$. Moreover, $y_0 = A(x)$. It is straightforward to check that the cone $\sigma \subset \Sigma_{M \oplus_0 M}$ that contains $y$ in its relative interior corresponds to a chain formed from flats of the form $(F_i, G_j)$. Here, as usual, we have either $0 \in F_i \cap G_j$ or $0 \notin F_i \cup G_j$, which can also be rewritten as either $i \leq p, j \leq q$ or $i > p, j > q$. Note that $p \geq 1$, so in the latter case $i \geq 2$.
By our assumption, we have $\rk(F_i)=\rk(G_i) \leq n+1-k$ for $i \geq 1$ and $\rk(F_i)=\rk(G_i) < n-k+1$ for $i \geq 2$. It follows that $\rk(F_i) + \rk(G_j) \leq \rk(F_i \cup G_j) + n+1-k$ if $(i,j) \neq (0,0)$ and the inequality is strict if $i > p, j > q$. Using the description of $g_k$ in \autoref{eq:functiongidehom}, we find \[ g_k(F_i, G_j) = \begin{cases} 1 & i \leq p, j \leq q, (i,j) \neq (0,0), \\ 0 & i > p, j > q \text{ or } (i,j) = (0,0). \end{cases} \] We now write $y$ as a positive sum of the cone generators of $\sigma$. Setting the coefficient of $v_{(E,E)}$ to zero, this sum is normalised as above (minimal coordinate is $0$) and hence equal to \autoref{eq:vectorsum} in ${\mathbf R}^{2N+1}$. Moreover, by the previous computation, $g_k(y)$ (and hence $f_k(x)$) is equal to the sum of coefficients of the vectors $v_{(F_i, G_j)}$ with $0 \in F_i \cap G_j$, which clearly is equal to $y_0$. Since $y_0 = A(x)$, this proves the claim. \end{proof} \subsection{Combinatorial properties of $X_k$} We will now proceed by formulating certain properties of the intermediate intersection products $X_k$. In contrast to \cite{Rau-TropicalPoincareHopf}, we do not describe the $X_k$ completely. However, the information considered is just sufficient to make the induction run, on the one hand, and to prove the main statement for $k=n$, on the other hand. We split the properties into two statements: here we consider combinatorial properties; in the next subsection we discuss weights. \begin{lemma} \label{Xkcombinatorics} For all $k \in \{0, \dots, n\}$, the following statements hold. \begin{enumerate} \item \label{lab1} The tropical cycle $X_k$ can be represented as a weighted subfan of the permutahedral fan. (By a \emph{facet} of $X_k$, we mean a cone of non-zero weight in $X_k$.) \item \label{lab2} For a facet $\sigma_{\mathcal F}$ of $X_k$, the gap sequence of ${\mathcal F}$ has one of the following two forms.
\begin{align} \gap({\mathcal F}) &= (r,s,0, \dots, 0) =: (A), & r+s=k, \\ \gap({\mathcal F}) &= (r,s,0, \dots, 0,1,0, \dots, 0) =: (B), & r+s = k-1. \end{align} \item \label{lab3} If $\gap({\mathcal F}) = (A)$, $0 \notin F_1 \cup \psi(F_1)$ and $s \geq 1$, then $F_1 = \psi(F_1)$. \item \label{lab4} If $\gap({\mathcal F}) = (A)$, $0 \in F_1 \cup \psi(F_1)$ and $s \geq 1$, then $0$ and $\psi^{-1}(0)$ are linearly independent in $F_1/F_2$. If moreover $s \geq 2$, then $F_1 = \psi(F_1)$. \item \label{lab5} If $\gap({\mathcal F}) = (B)$ and $G \supsetneq H$ denotes the part of ${\mathcal F}$ corresponding to the final $1$ in $\gap({\mathcal F})$, then $0$ and $\psi^{-1}(0)$ are linearly independent in $G/H$ (and hence $G = \cl(H \cup \{0, \psi^{-1}(0)\})$). If moreover $s \geq 1$, then $F_1 = \psi(F_1)$. \item \label{lab6} For $k < n$, $f_{k+1}$ is linear when restricted to a facet of $X_k$. \end{enumerate} \end{lemma} The proof of this lemma mostly consists of simple but tedious calculations of weights in the intersection product $f_{k+1} \cdot X_k$. To make it as transparent as possible, we will first collect a couple of recurrent arguments and case distinctions that occur in this calculation. \begin{remark} \label{rem:weightcomp} Let $\tau$ be a codimension one cone in $X_k$ and let ${\mathcal F}$ be the corresponding chain of flats. Our general goal is to compute the weight $\omega(\tau)$ of $\tau$ in $f_{k+1} \cdot X_k$. \begin{enumerate} \item The facets in $X_k$ containing $\tau$ correspond to filling a (non-trivial) gap $G \supsetneq H$ of ${\mathcal F}$ with an additional flat $G \supsetneq F \supsetneq H$. The balancing condition around $\tau$ naturally splits into separate equations, one for each gap of ${\mathcal F}$ (and the facets corresponding to it). In particular, the calculation of $\omega(\tau)$ can be split into a calculation for each gap.
\item Let $F^1, \dots, F^m$ denote the flats corresponding to the facets of $X_k$ for a given gap $G \supsetneq H$ of ${\mathcal F}$. The typical situation will be that the sets $F^i \setminus H$ form a partition of $G \setminus H$. In this situation, the involved indicator vectors satisfy a unique linear relation (up to multiples), namely \[ \sum_{i=1}^m v_{F^i} = v_G + (m-1) v_H. \] By uniqueness, it follows that the weights of the corresponding facets in $X_k$ are all equal, say, to $\omega \in {\mathbf Z}\setminus\{0\}$. \item By induction, we may assume that $f_{k+1}$ is linear on the facets of $X_k$ (which are cones of the permutahedral fan). It follows that in order to compute $\omega(\tau)$, it is sufficient to know the values $f_{k+1}(F)$. Using \autoref{eq:functiongidehom}, these values are given by \begin{equation} f_{k+1}(F) = \begin{cases} -1 & 0 \notin F \cup \psi(F) \text{ and } 2 \rk(F) \geq \rk(F \cup \psi(F)) + n - k, \\ +1 & 0 \in F \cap \psi(F) \text{ and } 2 \rk(F) \leq \rk(F \cup \psi(F)) + n - k, \\ +1 & 0 \in (F \cup \psi(F)) \setminus (F \cap \psi(F)) \text{ and } \rk(F) \leq n - k, \\ 0 & \text{otherwise}. \end{cases} \end{equation} In the third case, we use the fact that $\Gamma(v_F) = v_{(F,E)} + v_{(\emptyset, \psi(F))}$ or $\Gamma(v_F) = v_{(F,\emptyset)} + v_{(E, \psi(F))}$, depending on whether $0 \in F$ or $0 \in \psi(F)$. It is convenient to list a few particular cases, focusing on the critical rank $n-k$. \begin{equation} \label{eq:functionsfi} f_{k+1}(F) = \begin{cases} -1 & 0 \notin F = \psi(F) \text{ and } \rk(F) \geq n - k, \\ 0 & 0 \notin F \cup \psi(F), F \neq \psi(F) \text{ and } \rk(F) = n - k, \\ 0 & 0 \notin F \cup \psi(F) \text{ and } \rk(F) < n - k, \\ 0 & 0 \in F = \psi(F) \text{ and } \rk(F) \geq n - k + 1, \\ +1 & 0 \in F \cap \psi(F), F \neq \psi(F) \text{ and } \rk(F) = n - k + 1, \\ +1 & 0 \in F \cap \psi(F) \text{ and } \rk(F) \leq n - k.
\\ \end{cases} \end{equation} \item Going back to the partition case of item (b), let $q$ be the number of flats $F^i$ such that $0 \in F^i \cup \psi(F^i)$. The possible values are $q = 0, 1, 2, m$. The first and last cases correspond to $0 \notin G \cup \psi(G)$ and $0 \in H \cup \psi(H)$, respectively. The case $q=2$ occurs if $0$ and $\psi^{-1}(0)$ are linearly independent in $G/H$. The remaining cases correspond to $q=1$. Based on all the previous comments, we list in \autoref{weightcomputation} the computation of $\omega(\tau)$ (or rather, the contribution of a fixed gap $G \supsetneq H$ to it) for the various values of $q$ and with various extra conditions. \end{enumerate} \end{remark} \begin{table}[tb] \centering \begin{tabular}{l|l|l|l|l} \hline & $q=0$ & $q=1$ & $q=2$ & $q=m$ \\ \hline condition & $\rk G < n-k$ & $\rk G \leq n-k$ & $\rk G \leq n-k$ & $\rk G \leq n-k$ \\ $f_{k+1}(G)$ & $0$ & $1$ & $1$ & $1$ \\ $f_{k+1}(F^i)$ & $0, \dots, 0$ & $1, 0, \dots, 0$ & $1,1,0, \dots, 0$ & $1, \dots, 1$ \\ $f_{k+1}(H)$ & $0$ & $0$ & $0$ & $1$ \\ $\omega(\tau)$ & $0$ & $0$ & $\omega$ & $0$ \\ \hline condition & $\rk G = n-k$ & \begin{tabular}[t]{@{}l@{}} $\rk G = n-k+1$, \\ $[0 \notin F^i \cup \psi(F^i)$ \\ $\Rightarrow \rk F^i < n-k]$ \end{tabular} & & $\rk G = n-k+1$ \\ subcase & $G =$/$\neq \psi(G)$ & $G =$/$\neq \psi(G)$ & & $G =$/$\neq \psi(G)$ \\ $f_{k+1}(G)$ & $-1$/$0$ & $0$/$1$ & & $0$/$1$ \\ $f_{k+1}(F^i)$ & $0, \dots, 0$ & $1, 0, \dots, 0$ & & $1, \dots, 1$ \\ $f_{k+1}(H)$ & $0$ & $0$ & & $1$ \\ $\omega(\tau)$ & $\omega$/$0$ & $\omega$/$0$ & & $\omega$/$0$ \\ \hline condition & $G = \psi(G)$, & & & $G = E$, \\ & $\rk F^i = n-k-1$ & & & $\rk F^i = n-k$ \\ $f_{k+1}(G)$ & $-1$ & & & $0$ \\ $f_{k+1}(F^i)$ & $0, \dots, 0$ & & & $1,\dots,1$ \\ $f_{k+1}(H)$ & $0$ & & & $1$ \\ $\omega(\tau)$ & $\omega$ & & & $\omega$ \\ \hline \end{tabular} \caption{The computation of the weight $\omega(\tau)$ for various types of facets $\tau$ in $f_{k+1} \cdot X_k$.
The notation is borrowed from \autoref{rem:weightcomp}.} \label{weightcomputation} \end{table} \begin{proof}[\autoref{Xkcombinatorics}] The initial case $k = 0$ is trivial (except for \autoref{lab6}, which follows by the same argument as below). Let us consider the induction step $k \to k+1$. For \autoref{lab1}, note that by induction assumption $X_{k}$ can be represented on the permutahedral fan and $f_{k+1}$ is linear on the facets of this representation. Hence $X_{k+1} = f_{k+1} \cdot X_{k}$ can also be represented on the permutahedral fan. For \autoref{lab2}, let $\tau$ be a cone of dimension $n-k-1$ whose weight $\omega(\tau)$ in $X_{k+1}$ is non-zero. Let ${\mathcal F}$ be the corresponding chain of flats with gap sequence $\gap({\mathcal F}) = (r_0, \dots, r_l)$. We need to show that $S := \sum_{i=2}^l r_i \leq 1$. If $S > 2$, then by induction assumption $\tau$ is not contained in any facet of $X_k$, a contradiction. So let us assume $S=2$. Hence $r_1 + r_2 = k-1$ and therefore $\rk F_2 = n-k$. If $(r_2, \dots, r_l) = (\dots, 1, \dots, 1, \dots)$ (the dots represent zeros), one of the two gaps must be as in \autoref{lab5} and the facets of $X_k$ correspond to filling the other gap. Depending on the ordering of the gaps, $\omega(\tau)$ is computed according to \autoref{weightcomputation}, row 1, $q=0$ or $q=m$. In both cases $\omega(\tau) = 0$. If $(r_2, \dots, r_l) = (\dots, 2, \dots)$, with gap $G \supsetneq H$, then $0$ and $\psi^{-1}(0)$ must be linearly independent in $G/H$ and the possible fillings are given by $F = \cl(H \cup \{0, \psi^{-1}(0)\})$ and $F \not\subset \cl(H \cup \{0, \psi^{-1}(0)\})$, $\rk F = \rk H +1$. Note that this still induces a partition of $G \setminus H$ and $q=1$, so by \autoref{weightcomputation}, row 1, $q=1$ we get $\omega(\tau) = 0$ again. This finishes \autoref{lab2}. We proceed with \autoref{lab3}, so $\gap({\mathcal F}) = (r,s, \dots)$ with $r+s = k+1$, $s \geq 1$ and $0 \notin F_1 \cup \psi(F_1)$.
We want to show $F_1 = \psi(F_1)$. Note that the only possible (non-zero) fillings must have gap sequence $(r,s-1, \dots)$ (since type $(B)$ is excluded by $0 \notin F_1 \cup \psi(F_1)$ and \autoref{lab5}). For $s > 1$, the statement follows by induction assumption. For $s=1$, note that $r = k$ and hence $\rk F_1 = n - k$. So the statement follows from \autoref{weightcomputation}, row 2, $q=0$. For \autoref{lab4}, we assume $\gap({\mathcal F}) = (r,s, \dots)$ with $r+s = k+1$, $s \geq 1$ and $0 \in F_1 \cup \psi(F_1)$. Assume that $0$ and $\psi^{-1}(0)$ are not linearly independent in $F_1/F_2$. Then we must have $s=1$ and the only possible gap sequence for fillings is $(r,0, \dots)$ (all other possibilities have no adjacent facets). In this case $\rk F_1 = n-k$ and \autoref{weightcomputation}, row 1, $q=1$ or $q=m$ gives $\omega(\tau) = 0$, a contradiction. Now assume $s \geq 2$. We need to show $F_1 = \psi(F_1)$. The only possible gap sequences of fillings are $(r,s-1,0,\dots)$ and $(r,s-2,1, \dots)$. For $s > 2$, the statement follows by the induction assumption. For $s=2$, we have $\rk(F_1) = n-k+1$ and the possible fillings agree with the ones described in the last case of \autoref{lab2}. This is covered by \autoref{weightcomputation}, row 2, $q=1$. Let us now consider \autoref{lab5}, so $\gap({\mathcal F}) = (r,s, \dots, 1, \dots)$. Assume first that $0$ and $\psi^{-1}(0)$ are not linearly independent in $G/H$, where $G \supsetneq H$ denotes the final gap in ${\mathcal F}$. Then the only possible fillings are fillings of $G \supsetneq H$. But the corresponding weight can be computed according to \autoref{weightcomputation}, row 1, and we get a non-zero weight only if $q=2$, a contradiction. Now let $s \geq 1$ and assume that $F_1 \neq \psi(F_1)$. Since by the previous argument $0 \in F_2 \cap \psi(F_2)$, the only possible fillings have gap sequence $(r,s-1, \dots, 1, \dots)$. Again, if $s > 1$, the statement follows from the induction assumption.
If $s=1$, we have $\rk F_1 = n-k+1$ and hence the claim follows from \autoref{weightcomputation}, row 2, $q=m$. This finishes the proof of \autoref{lab5}. Finally, let us prove \autoref{lab6}. Note that up to now, we have established that $X_{k}$ can be represented as a weighted subfan of the permutahedral fan and that its facets satisfy the properties of \autoref{lab3} -- \autoref{lab5}. Then the linearity of $f_{k+1}$ follows from \autoref{lem:fklinearity}. Indeed, note that all facets of $X_k$ satisfy condition (a) of that lemma, except for the facets from \autoref{lab3} or \autoref{lab4} with $s=0$. Those satisfy condition (b) instead. \end{proof} \subsection{Weights on $X_k$} Based on our understanding of the combinatorics of $X_k$, we can now describe the weights of some of its facets. \begin{lemma} \label{lem:weightsXk} Fix $k \in \{0, \dots, n\}$ and let $\sigma = \sigma_{\mathcal F}$ be a facet of $X_k$. Then the following statements hold. \begin{enumerate} \item \label{labb1} If $\gap({\mathcal F}) = (k,0, \dots, 0)$ and $0 \in F_1 \cup \psi(F_1)$, then $\omega(\sigma) = 1$. \item \label{labb2} If $\gap({\mathcal F}) = (k-1,0, \dots, 0,1,0, \dots, 0)$ and $F_1 \neq \psi(F_1)$, then $\omega(\sigma) = \rk F_1^\psi - \rk F_1$. \item \label{labb3} If $0 \notin F_1$ and $F_1 = \psi(F_1)$, then $\omega(\sigma) = (-1)^{n-\rk F_1} \beta(\Fix(M/F_1))$. \end{enumerate} \end{lemma} Before proving the lemma, let us check that it implies \autoref{eq:rightside} as promised. \begin{proof}[\autoref{eq:rightside}] In the case $k=n$, the only chain of correct dimension is the trivial flag ${\mathcal F} = (E \supset \emptyset)$. We have $\gap({\mathcal F}) = (n)$ and $0 \notin F_1 \cup \psi(F_1) = \emptyset$. Hence, by item (c) of \autoref{lem:weightsXk} and \autoref{lem:reformIntersection}, we conclude \[ \deg(\Gamma_\psi \cdot \Delta) = \deg(X_n) = \omega(\sigma_{(E \supset \emptyset)}) = (-1)^n \beta(\Fix^\psi(M)).
\] \end{proof} We now want to prove \autoref{lem:weightsXk}. \begin{proof}[\autoref{lem:weightsXk}] We (again) proceed by induction on $k$. For $k=0$, the statements are trivial (only \autoref{labb1} and \autoref{labb3} occur, and both give weight $1$ in this case). We now prove the induction step $k \to k+1$. Let $\sigma = \sigma_{\mathcal F}$ be a facet of $X_{k+1}$. We start with \autoref{labb1}. In this case, the facets of $X_k$ containing $\sigma$ correspond to fillings $E \supsetneq F \supsetneq F_1$ with gap sequences of the form $(r,s, \dots)$, $r+s = k$. Note that since $0 \in F_1 \cup \psi(F_1)$ (now the second step of the chain), by \autoref{Xkcombinatorics} only the value $s=0$ is possible. Therefore, by induction assumption, all facets containing $\sigma$ have weight $1$, and the computation for $\omega(\sigma) = 1$ is given in \autoref{weightcomputation}, row 3, $q=m$. We proceed with \autoref{labb2}. In this case, we have two gaps that can potentially be filled, namely $E \supsetneq F_1$ and $G \supsetneq H$, the gap corresponding to the final $1$. By \autoref{Xkcombinatorics}, $0$ and $\psi^{-1}(0)$ are linearly independent in $G/H$. Hence the contribution $\omega_1$ of the fillings of $G \supsetneq H$ to $\omega(\sigma)$ can be computed according to \autoref{weightcomputation}, row 1, $q=2$. Note that these fillings correspond to facets of $X_k$ of the type discussed in \autoref{labb1}, hence by induction assumption they all have weight $1$. It follows that $\omega_1 = 1$. Let us now consider the gap $E \supsetneq F_1$. This is the first time that we encounter a facet structure that is not just given by a partition of $E \setminus F_1$. In fact, by induction hypothesis, the possible fillings are given by flats $F \supsetneq F_1$ which satisfy at least one of the following conditions: either $\rk F = \rk F_1 + 1 = n-k+1$, or $F = \psi(F)$.
Denoting the weights of the corresponding facets in $X_k$ by $\omega(F)$, the balancing condition for $X_k$ states that there are coefficients $\omega(E), \omega(F_1) \in {\mathbf Z}$ such that \begin{equation} \label{eq:balancingSpecial} \sum_F \omega(F) v_F = \omega(E) v_E + \omega(F_1) v_{F_1}. \end{equation} Pick an element $i \in E \setminus F_1$. Then we can express the coefficients $\omega(E), \omega(F_1)$ as \begin{equation} \begin{split} \omega(E) &= \sum_{F \ni i} \omega(F), \\ \omega(F_1) &= \sum_F \omega(F) \; - \omega(E) = \sum_{F \not\ni i} \omega(F). \end{split} \end{equation} By assumption, $F_1 \neq F_1^\psi$ and we can choose $i \in F_1^\psi \setminus F_1$. This implies $i \in F$ for all $F = \psi(F)$, so for this choice of $i$ the sum for $\omega(F_1)$ can be restricted to $F$ with $F \neq \psi(F)$ (and hence $\rk F = \rk F_1 + 1$). Finally, by comparing with \autoref{eq:functionsfi}, we find that the values of the indicator vectors in \autoref{eq:balancingSpecial} under $f_{k+1}$ are zero except for the cases $F_1$ and $F \neq \psi(F)$, in which case the value is $1$. We conclude that the contribution $\omega_2$ of $E \supsetneq F_1$ is \[ \omega_2 = \sum_{\substack{F \\ F \neq \psi(F)}} \omega(F) \; - \omega(F_1) = \sum_{\substack{F \\ F \neq \psi(F)}} \omega(F) - \sum_{\substack{F \not\ni i \\ F \neq \psi(F)}} \omega(F) = \omega(\cl(F_1 \cup \{i\})). \] Note that $(\cl(F_1 \cup \{i\}))^\psi = F_1^\psi$, so by induction hypothesis, we have \[ \omega(\cl(F_1 \cup \{i\})) = \rk F_1^\psi - \rk \cl(F_1 \cup \{i\}) = \rk F_1^\psi - \rk F_1 - 1. \] So finally we get $\omega(\sigma) = \omega_1 + \omega_2 = \rk F_1^\psi - \rk F_1$, which proves \autoref{labb2}. It remains to prove \autoref{labb3}. In this case, $0 \notin F_1 \cup \psi(F_1)$ and hence $\gap({\mathcal F}) = (r,s, \dots)$, $r+s = k+1$. Assume first that $s \geq 1$. Then by \autoref{Xkcombinatorics} the only possible fillings have gap sequence $(r, s-1, \dots)$.
Note that $\rk F_2 = n-k-2$ while the ranks of the fillings are $\rk F = \rk F_2 + 1 = n-k-1$. We can thus use \autoref{weightcomputation}, row 3, $q=0$, which proves the claim in this case. Assume now $s=0$, so $\gap({\mathcal F}) = (k+1, \dots)$. This is another case where the balancing condition cannot be expressed in terms of a partition of $E \setminus F_1$. Note that $\rk F_1 = n-k-1$ and hence $f_{k+1}(F_1) = f_{k+1}(E) = 0$. So, in order to compute $\omega(\sigma)$, we are only interested in fillings $E \supsetneq F \supsetneq F_1$ which correspond to facets of $X_k$, on the one hand, and for which $f_{k+1}(F) \neq 0$, on the other hand. By comparing \autoref{eq:functionsfi} and \autoref{Xkcombinatorics}, we see that such flats $F$ belong to one of the following three subcases: \begin{enumerate} \item[(i)] $0 \notin F$, $F = \psi(F)$ \item[(ii)] $0 \in F \cup \psi(F)$, $\rk F = n-k$ \item[(iii)] $0 \in F \cup \psi(F)$, $\rk F = n-k+1$, $F = \cl(F_1 \cup \{0, \psi^{-1}(0)\})$, $F \neq \psi(F)$ \end{enumerate} We set $G = \cl(F_1 \cup \{0\})$. Note that $f_{k+1}(F) = -1$ for item (i) and $f_{k+1}(F) = 1$ for items (ii) and (iii). By the induction assumption, the flats of type (i) account for the second summand in \autoref{lem:RecursiveBeta} applied to $L= \Fix^\psi(M/F_1)$ and $G$. Hence it remains to show that the contribution $\omega'$ of items (ii) and (iii) to $\omega(\sigma)$ is $\rk(G^\psi)-\rk(F_1)$ (the first summand in \autoref{lem:RecursiveBeta}). This can easily be checked by going through the following case distinction. \begin{itemize} \item $\rk(G^\psi) = \rk(F_1) + 1$ --- In this case, $F = G^\psi = G$ is the only flat of type (ii) and type (iii) does not occur, so $\omega' = 1$. \item $\rk(G^\psi) = \rk(F_1) + 2$ --- In this case, $G$ and $\cl(F_1 \cup \{\psi^{-1}(0)\})$ are distinct and contribute to (ii), but $\cl(F_1 \cup \{0, \psi^{-1}(0)\}) \in \Fix^\psi(L(M))$, so again (iii) does not occur and $\omega' = 2$.
\item $\rk(G^\psi) > \rk(F_1) + 2$ --- Again, $G$ and $\cl(F_1 \cup \{\psi^{-1}(0)\})$ are distinct and contribute to (ii). Moreover, $F = \cl(F_1 \cup \{0, \psi^{-1}(0)\}) \notin \Fix^\psi(L(M))$, so it contributes to (iii). By induction assumption, the weight of the corresponding facet is \[ \rk F^\psi - \rk F = \rk G^\psi - (\rk F_1 + 2) = \rk(G^\psi) - \rk(F_1) - 2. \] Hence $\omega' = 2 + (\rk(G^\psi) - \rk(F_1) - 2) = \rk(G^\psi) - \rk(F_1)$. \end{itemize} This finishes the proof of the lemma. \end{proof} \section{The trace side} In this section we prove the second half of the main theorem by showing \begin{equation} \label{eq:secondhalf} (-1)^n \beta(\Fix(L(M))) = \sum_{p} (-1)^{p} \Tr(\Psi_*, {\mathbf F}_p(\Sigma_M)). \end{equation} \subsection{A resolution of the framing groups} We start by constructing an explicit resolution for the framing groups ${\mathbf F}_p(\Sigma_M)$ of a matroid fan using chains of flats of low rank. It appears that this resolution is known to experts --- a similar resolution is for example considered in \cite{AP-HodgeTheoryTropical}. We give a brief independent treatment here. Let ${\mathcal F} = E \supsetneq F_1 \supsetneq \dots \supsetneq F_l \supsetneq \emptyset$ be a chain of flats of $M$. Recall that $l({\mathcal F}) := l$ denotes the \emph{length} of ${\mathcal F}$. Moreover, we define the \emph{rank} of ${\mathcal F}$ by $\rk({\mathcal F}) := \rk(F_1)$. Hence, except for $E$, ${\mathcal F}$ only involves flats of rank at most $\rk({\mathcal F})$. Let ${\mathcal C}^{\leq p}_l$ be the set of chains ${\mathcal F}$ of length $l$ and rank at most $p$ and let ${\mathbf R} {\mathcal C}^{\leq p}_l$ be the real vector space with a basis labelled by ${\mathcal C}^{\leq p}_l$. We consider the differential complex formed by the simplicial coboundary maps $\partial \colon {\mathbf R} {\mathcal C}^{\leq p}_l \to {\mathbf R} {\mathcal C}^{\leq p}_{l+1}$.
Explicitly, $\partial$ maps a generator $e_{\mathcal F}, {\mathcal F} \in {\mathcal C}^{\leq p}_l$ to a vector whose non-zero entries correspond to the chains ${\mathcal G} \in {\mathcal C}^{\leq p}_{l+1}$ with ${\mathcal F} \leq {\mathcal G}$. Moreover, such an entry is equal to $(-1)^k$ where $k$ denotes the index such that $G_k \notin {\mathcal F}$. Given a chain ${\mathcal F}$ of length $p$, we set \[ V_{\mathcal F} := v_{F_1} \wedge \dots \wedge v_{F_p} \hspace{3ex} \in {\mathbf F}_p(\Sigma_M), \] or, in other words, $V_{\mathcal F}$ is the canonical volume element for the cone $\sigma_{\mathcal F} \subset \Sigma_M$. This defines, by construction of ${\mathbf F}_p(\Sigma_M)$, a surjective map ${\mathbf R} {\mathcal C}_p \to {\mathbf F}_p(\Sigma_M)$. We are interested in its restriction ${\mathbf R} {\mathcal C}^{\leq p}_p \to {\mathbf F}_p(\Sigma_M)$ to chains only using flats of rank at most $p$. Our goal is to prove the following statement. \begin{theorem} \label{thm:res} Given a matroid fan $\Sigma_M$ with framing groups ${\mathbf F}_p(\Sigma_M)$, the sequence \[ 0 \to {\mathbf R} {\mathcal C}^{\leq p}_0 \to \dots \to {\mathbf R} {\mathcal C}^{\leq p}_p \to {\mathbf F}_p(\Sigma_M) \to 0 \] is exact for all $p$. \end{theorem} We start by proving the theorem for $p = n = \dim(\Sigma_M)$. In a second step, we show how to reduce to this case by using a tropical analogue of the Lefschetz hyperplane section theorem. \begin{lemma} \label{lem:resmaxdim} Given a matroid fan $\Sigma_M$ of dimension $n$, the sequence \begin{equation} \label{resmaxdim} 0 \to {\mathbf R} {\mathcal C}_0 \to \dots \to {\mathbf R} {\mathcal C}_n \to {\mathbf F}_n(\Sigma_M) \to 0 \end{equation} is exact. \end{lemma} \begin{proof} Recall that by Poincaré duality \cite{JSS-SuperformsTropicalCohomology} we have \[ {\mathbf F}_n(\Sigma_M) = H_0(\Sigma_M, {\mathbf F}_n) \cong (H_n^{\BM}(X, {\mathbf Z}))^*.
\] Moreover, note that $H_k^{\BM}(X, {\mathbf Z}) = 0$ for all $k \neq n$ (again, by Poincaré duality, or using the well-known statement about homology groups of geometric lattices \cite{Fol-HomologyGroupsLattice}). Hence the complex of simplicial chains, completed by $H_n^{\BM}(X, {\mathbf Z})$, \begin{equation} 0 \to H_n^{\BM}(X, {\mathbf Z}) \to {\mathbf R} {\mathcal C}_n \to \dots \to {\mathbf R} {\mathcal C}_0 \to 0 \end{equation} is exact and is dual to \autoref{resmaxdim} under this identification. \end{proof} \begin{remark} Since Poincaré duality holds over ${\mathbf Z}$ by \cite{JRS-Lefschetz11Theorem}, the resolution also works over ${\mathbf Z}$ (i.e.\ the sequence $0 \to {\mathbf Z} {\mathcal C}_\bullet \to {\mathbf F}_n^{\mathbf Z}(\Sigma_M) \to 0$ is exact). \end{remark} The second step in our argument is to prove the (local version of the) tropical Lefschetz hyperplane section theorem for stable intersections. Even though tropical section theorems are treated in various sources (e.g.\ \cite{AB-FilteredGeometricLattices, ARS-LefschetzSectionTheorems}), it seems that this particular statement has not been covered. It is, however, implicit in \cite{Zha-OrlikSolomonAlgebra} (as explained below). \begin{lemma} \label{lem:Lefschetzhyperplane} Let $H$ denote the standard hyperplane in ${\mathbf R}^N$. Then, for all $p < n$, we have \[ {\mathbf F}_p(H \cdot \Sigma_M) = {\mathbf F}_p(\Sigma_M). \] \end{lemma} \begin{proof} It suffices to prove ${\mathbf F}_p(H^{n-p} \cdot \Sigma_M) = {\mathbf F}_p(\Sigma_M)$. By \autoref{lem:hyperplanesection}, ${\mathbf F}_p(H^{n-p} \cdot \Sigma_M) = {\mathbf F}_p(\Sigma_{M^{\leq p}})$. By definition, we have ${\mathbf F}_p(\Sigma_{M^{\leq p}}) \subset {\mathbf F}_p(\Sigma_M)$, and it remains to show that ${\mathbf R} {\mathcal C}^{\leq p}_p \to {\mathbf F}_p(\Sigma_M)$ is surjective (as opposed to ${\mathbf R} {\mathcal C}_p \to {\mathbf F}_p(\Sigma_M)$ which is surjective by definition).
An indirect but short proof works by comparing dimensions. The dimension of ${\mathbf F}_p(\Sigma_{M^{\leq p}})$ can be computed using \autoref{lem:resmaxdim}. On the other hand, the dimension of ${\mathbf F}_p(\Sigma_M)$ can be computed using the fact that ${\mathbf F}_\bullet(\Sigma_M)$ is isomorphic to the Orlik-Solomon algebra $\text{OS}_\bullet(M)$ by \cite{Zha-OrlikSolomonAlgebra}. Expressing both quantities in terms of the Möbius function, we find that they agree. It is instructive to give a simple direct proof. Let ${\mathcal F} \in {\mathcal C}_p$ be a chain of length $p$. Let $\gap({\mathcal F})$ be its gap sequence and let $G \supsetneq F$ denote the piece of ${\mathcal F}$ corresponding to the last non-zero entry of $\gap({\mathcal F})$ (in particular, $\rk(G) \geq \rk(F) + 2$). We denote by $F_1, \dots, F_k$ the flats of rank $\rk(F) + 1$ such that $G \supsetneq F_i \supsetneq F$. Finally, we denote by ${\mathcal F}_i \in {\mathcal C}_p$ the chains obtained from ${\mathcal F}$ by removing $G$ and inserting $F_i$. The equation for indicator vectors $v_G + (k-1) v_F = \sum v_{F_i}$ implies the equation for volume elements \[ V_{\mathcal F} = \sum_{i=1}^k V_{{\mathcal F}_i}. \] But note that the last non-zero entry of the gap sequences $\gap({\mathcal F}_i)$ has moved by one position to the front (compared to $\gap({\mathcal F})$). By recursion we can express $V_{\mathcal F}$ in terms of elements $V_{{\mathcal F}'}$ with gap sequences $\gap({\mathcal F}') = (n-p, 0, \dots, 0)$, or equivalently, ${\mathcal F}' \in {\mathcal C}^{\leq p}_p$. This proves the claim. \end{proof} \begin{proof}[\autoref{thm:res}] By \autoref{lem:Lefschetzhyperplane} and \autoref{lem:hyperplanesection}, the statement can be reduced to the case $p = n$ after intersecting $n-p$ times with the standard hyperplane. This case was done in \autoref{lem:resmaxdim}.
\end{proof} \subsection{Computing the trace side} We continue by computing $\Tr(\Psi_*, {\mathbf F}_p(\Sigma_M))$ using the resolution from \autoref{thm:res}. \begin{lemma} \label{lemTraceChains} Let $M$ be a loopless matroid of rank $n+1$ and let $\Psi \colon \Sigma_M \to \Sigma_M$ be a matroidal automorphism. Then for any $p = 0, \dots, n$, we have \[ (-1)^p \Tr(\Psi_*, {\mathbf F}_p(\Sigma_M)) = \sum_{\substack{F \in \Fix(L(M)) \\ \rk F \leq p}} \mu^\psi(\emptyset, F). \] \end{lemma} \begin{proof} Given a chain ${\mathcal F} = (F_i)$, we define $\psi({\mathcal F}) = (\psi(F_i))$. This induces linear maps on ${\mathbf R} {\mathcal C}^{\leq p}_i$ for $i=0, \dots,p$ by permuting the generators. By abuse of notation, we denote these maps by $\Psi_*$. It is obvious that these maps form a morphism of the sequence from \autoref{thm:res}. Hence, using the Hopf trace lemma \cite[§9, Theorem 2.1]{GD-FixedPointTheory}, we get \[ (-1)^p \Tr(\Psi_*, {\mathbf F}_p(\Sigma_M)) = \sum_{i=0}^p (-1)^i \Tr(\Psi_*, {\mathbf R} {\mathcal C}^{\leq p}_i). \] Since $\Psi_* \colon {\mathbf R} {\mathcal C}^{\leq p}_i \to {\mathbf R} {\mathcal C}^{\leq p}_i$ is given by a permutation of the generators, its trace is equal to the number of fixed generators, i.e.\ the chains ${\mathcal F}$ with $\psi({\mathcal F}) = {\mathcal F}$. We denote the set of such chains by $\Fix({\mathcal C}^{\leq p}_i)$. Obviously, it corresponds to the set of chains (with given length/rank) of the lattice $\Fix(L(M))$. Note that for any lattice $[\hat{0}, \hat{1}]$, the Möbius function can be expressed in terms of chains by \[ \mu(\hat{0}, \hat{1}) = \sum_{\mathcal F} (-1)^{l({\mathcal F})+1} \] (e.g.\ \cite[Proposition 2.37]{OT-ArrangementsHyperplanes}). Here, the sum runs through all chains ${\mathcal F}$ of $[\hat{0}, \hat{1}]$ and the length is measured as usual ($\hat{0}, \hat{1}$ are not counted). 
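To illustrate the sign conventions in this chain formula, consider the Boolean lattice of rank $2$ (a toy example of ours, not part of the original argument):
```latex
% Toy example (ours): in the Boolean lattice B_2 of subsets of a
% two-element set, the proper part consists of the two singletons, which
% are incomparable. The chains are the empty chain (l = 0) and the two
% one-element chains (l = 1), so the formula gives
\[
  \mu(\hat{0}, \hat{1})
  \;=\; (-1)^{0+1} + 2 \cdot (-1)^{1+1}
  \;=\; -1 + 2
  \;=\; 1,
\]
% in agreement with the standard value \mu(B_2) = (-1)^2 = 1.
```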
Applied to our situation, we obtain \begin{equation} \begin{split} (-1)^p \Tr(\Psi_*, {\mathbf F}_p(\Sigma_M)) &= \sum_{i=0}^p (-1)^i |\Fix({\mathcal C}^{\leq p}_i)| \\ &= \sum_{\substack{F \in \Fix(L(M)) \\ \rk F \leq p}} \sum_{\substack{{\mathcal F} \in \Fix({\mathcal C}) \\ F_1 = F}} (-1)^{l({\mathcal F})} \\ &= \sum_{\substack{F \in \Fix(L(M)) \\ \rk F \leq p}} \mu^\psi(\emptyset, F). \end{split} \end{equation} Note the change of exponent from $l({\mathcal F})+1$ to $l({\mathcal F})$ due to the fact that we consider ${\mathcal F}$ as a chain in $\Fix({\mathcal C})$, so that $F_1 = F$ is counted. \end{proof} It is now easy to finish the proof. \begin{proof}[\autoref{eq:secondhalf}] By \autoref{lemTraceChains}, we have \begin{equation} \begin{split} \sum_{p=0}^n (-1)^{p} \Tr(\Psi_*, {\mathbf F}_p(\Sigma_M)) &= \sum_{p=0}^n \sum_{\substack{F \in \Fix(L(M)) \\ \rk F \leq p}} \mu^\psi(\emptyset, F) \\ &= \sum_{F \in \Fix(L(M))} \mu^\psi(\emptyset, F) (n+1 -\rk(F)) \\ &= - \sum_{F \in \Fix(L(M))} \mu^\psi(\emptyset, F) \rk(F) \\ &= (-1)^n \beta(\Fix(L(M))). \end{split} \end{equation} In the second-to-last step, we use (again) that summing the Möbius function over a non-trivial interval gives zero. \end{proof} \printbibliography \end{document}
\section{Introduction} The bright X-ray transient source \object{V~0332$+$53}\xspace was discovered by the Vela~5B satellite during an outburst in 1973 \citep{Terrel84}. Observations with EXOSAT ten years later allowed the position of the source to be determined and X-ray pulsations with a period of $\sim4.4$\,s \citep{Stella85}, modulated by motion in an eccentric orbit ($e\sim0.3$) with a period of $\sim34.3$\,d, to be detected. The optical counterpart of the X-ray pulsar has been identified as a Be star \citep{Honeycutt85} at a distance of $\sim7$\,kpc \citep{Negu99}. As is typical for Be systems, \object{V~0332$+$53}\xspace exhibits two types of X-ray outbursts, with peak luminosities of $\sim10^{38}\,{\rm erg\,s}^{-1}$ and $\sim10^{37}\,{\rm erg\,s}^{-1}$, associated with enhanced accretion onto the neutron star from the Be circumstellar disk close to periastron. The so-called ``normal'' or Type~I outbursts occur regularly as the neutron star passes through the Be disk and have comparatively low luminosity. Longer and more luminous Type~II or ``giant'' outbursts are rarer and are thought to be related to the formation of a stable accretion disk around the neutron star during a periastron passage \citep{Okazaki02,Martin14}. Four Type~II outbursts and a number of Type~I outbursts have been detected from the source so far \citep{Terrel84,Tsunemi89,Swank04,Camero15,Myatel,Nakajima15}. In this note we report on the analysis of the timing properties of \object{V~0332$+$53}\xspace during the most recent Type~II outburst, which took place in June--October~2015, together with older observations, and provide an updated orbital solution for the system. \section{Data analysis and results} Orbital parameters of \object{V~0332$+$53}\xspace were first determined by \cite{Stella85} using the EXOSAT data.
Later, \cite{Zhang05} and \cite{Raichur10} used Rossi X-ray Timing Explorer (RXTE) and International Gamma-Ray Astrophysics Laboratory (INTEGRAL) observations performed during the 2004--2005 giant outburst to refine the orbital solution. These authors found a significantly larger projected semimajor axis $a\sin{i}$ than reported by \cite{Stella85}, which was attributed to apsidal motion in the system. The most recent outburst in 2015 has been monitored with several instruments. However, the best timing information is provided by the Gamma-ray Burst Monitor (GBM) on board \emph{Fermi} \citep{Meegan}, which regularly measured the spin frequency of the source every 1--3 days throughout the outburst. The GBM consists of 12 \emph{NaI} and two \emph{BGO} detectors providing full-sky coverage in the 8\,keV$-$40\,MeV energy range and is designed to detect and localize gamma-ray bursts. However, it has also proved to be very useful for the detection and monitoring of pulsed signals from X-ray pulsars. In particular, the \emph{Fermi}~GBM team regularly publishes\footnote{http://gammaray.msfc.nasa.gov/gbm/science/pulsars.html} spin histories of selected pulsars, including \object{V~0332$+$53}\xspace, based on the analysis of \emph{CTIME} data from the \emph{NaI} detectors in the 8--50\,keV energy range. Pulsations from \object{V~0332$+$53}\xspace were first detected by GBM on MJD~57194, and the data used in this publication cover the interval until MJD~57290, i.e. almost three full orbital cycles. The orbital modulation of the spin frequency superimposed on a spin-up trend is clearly visible in the raw data shown in Fig.~\ref{fig:gbm}. We use these measurements to constrain the orbital parameters and intrinsic spin evolution of the pulsar using the same approach as \cite{Zhang05}. In addition, we repeat the analysis of RXTE data carried out by \cite{Raichur10}.
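The structure of such a timing model -- an intrinsic spin frequency Doppler-shifted by motion in an eccentric orbit -- can be illustrated with a minimal numerical sketch. This is not the actual fitting code: the orbital elements below are the best-fit values of Table~\ref{tab:orpar}, while the intrinsic frequency and spin-up rate are placeholder values chosen only for illustration.

```python
import numpy as np

C_LT_PER_DAY = 86400.0  # light travels 86400 light-seconds per day

def solve_kepler(M, e, n_iter=50):
    # Newton iteration for Kepler's equation M = E - e sin E
    E = np.array(M, dtype=float)
    for _ in range(n_iter):
        E = E - (E - e*np.sin(E) - M) / (1.0 - e*np.cos(E))
    return E

def observed_frequency(t, nu0, nu_dot, P_orb, asini, e, omega_deg, T_pa):
    """Intrinsic spin frequency nu0 + nu_dot*(t - T_pa), Doppler-shifted
    by orbital motion.  t, P_orb, T_pa in days; asini in light-seconds."""
    M = 2.0*np.pi*(t - T_pa)/P_orb                     # mean anomaly
    E = solve_kepler(M, e)
    # true anomaly from the eccentric anomaly
    theta = 2.0*np.arctan2(np.sqrt(1.0 + e)*np.sin(E/2.0),
                           np.sqrt(1.0 - e)*np.cos(E/2.0))
    w = np.radians(omega_deg)
    # radial-velocity semi-amplitude in units of c
    K = 2.0*np.pi*asini/(P_orb*C_LT_PER_DAY*np.sqrt(1.0 - e**2))
    v_r = K*(np.cos(theta + w) + e*np.cos(w))          # v_r / c
    return (nu0 + nu_dot*(t - T_pa))*(1.0 - v_r)

# Orbital elements as in Table 1; nu0, nu_dot are illustrative placeholders
t = np.linspace(57157.0, 57157.0 + 3*33.85, 1000)
nu = observed_frequency(t, 0.2275, 2.6e-7, 33.85, 77.81, 0.3713,
                        277.43, 57157.88)
```

For the Table~\ref{tab:orpar} elements, the semi-amplitude evaluates to $Kc\simeq54$\,km\,s$^{-1}$, matching the $K_X$ entry of the table.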
In particular, we reconstruct the spin history of the pulsar during the giant outburst in 2005 using an epoch-folding period search and RXTE PCA lightcurves in the 3--21\,keV energy range between MJD~53332 and 53432. Using the obtained spin frequency measurements, we followed the procedures described in \cite{Zhang05} and \cite{Raichur10} to constrain the orbital parameters of the system. We modeled simultaneously the data from the 2005 and 2015 outbursts together with the historical pulse frequency measurements reported by \cite{Stella85} and \cite{Makishima} for the outburst in 1983--1984. \begin{figure*}[!ht] \centering \includegraphics[width=0.33\textwidth]{gbm.pdf} \includegraphics[width=0.33\textwidth]{rxte.pdf} \includegraphics[width=0.33\textwidth]{tenma.pdf} \caption{Observed pulse frequency modulated by orbital motion as measured with (from left to right) \emph{Fermi}~GBM, RXTE, and EXOSAT and Tenma during the three major outbursts from the source. The reconstructed intrinsic pulsar frequency (black dashed line) modulated by motion along the orbit with the best-fit parameters (red line), together with the fit residuals, is also shown.} \label{fig:gbm} \end{figure*} We emphasize that both the orbital modulation and the intrinsic variation of the pulsar's spin frequency must be modeled in order to constrain the orbital parameters of the system. The intrinsic spin evolution of \object{V~0332$+$53}\xspace during an outburst is rather complicated, as found by \cite{Zhang05,Raichur10}, who had to include the second pulse period derivative to describe it adequately. This is to be expected, as the accretion torque spinning up the neutron star depends on the accretion rate, which changes dramatically during an outburst; the same holds for the latest outburst. In fact, we were unable to describe the spin history measured with \emph{Fermi}~GBM even with the inclusion of the third spin frequency derivative as a parameter. Therefore, we opted to model the intrinsic spin evolution of the pulsar as a smooth interpolating function (i.e.
using the piecewise cubic Hermite interpolating polynomial) defined by frequency values at the fixed times $T_i = {\rm MJD}\,57194, 57208, 57220, 57230, 57250, 57270, 57290$. Following \cite{Zhang05}, we included a systematic uncertainty of $7\times10^{-8}$\,Hz and $7\times10^{-6}$\,Hz for the \emph{Fermi}~GBM and RXTE data, respectively. Joint fitting of all three datasets results in $\chi^2_{\rm red}=1.02$ for 115 degrees of freedom, and the best-fit results are presented in Fig.~\ref{fig:gbm} and Table~\ref{tab:orpar}. We wish to emphasize that, contrary to the findings of \cite{Zhang05}, it was possible to describe the spin evolution of the source during all three outbursts without assuming any change in the orbital parameters. The obtained results are generally consistent with the values reported by \cite{Zhang05} and \cite{Raichur10}, with the exception of the orbital period. We note that in our case it is much better defined, as the periastron passage time is very well constrained in both the RXTE and \emph{Fermi} datasets and the number of orbital cycles between the two outbursts is known. Other discrepancies likely arise from the different assumptions on the intrinsic spin frequency of the pulsar coupled with the insufficient orbital phase coverage and limited accuracy of individual spin frequency measurements in previous studies. \paragraph{Implications for the binary orbit.} We have shown that it is possible to describe the data from the outbursts in 1983, 2005, and 2015 assuming no change in the orbital parameters, thus resolving the discrepancy between the $a\sin{i}$ values reported by \cite{Stella85} and \cite{Zhang05}. The discussion by \cite{Zhang05} of a possible apsidal motion in the system is thus no longer required, and we conclude that there is no evidence for apsidal motion in the system. On the other hand, the low $a\sin{i}$ value reported by \cite{Stella85} led \cite{Negu99} to conclude that the Be companion in \object{V~0332$+$53}\xspace must be strongly undermassive.
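The dependence of the inferred companion mass on $a\sin i$ can be sketched through the mass function, $f(M) = 4\pi^2 (a\sin i)^3 / (G P_{\rm orb}^2) = (M_*\sin i)^3/(M_*+M_{\rm NS})^2$. A minimal sketch (CGS constants; the neutron-star mass $M_{\rm NS}=1.4\,M_\odot$ and the inclination are assumptions made here for illustration):

```python
import numpy as np
from scipy.optimize import brentq

G_CGS = 6.674e-8          # cm^3 g^-1 s^-2
C_CGS = 2.99792458e10     # cm/s
M_SUN = 1.989e33          # g

def mass_function(asini_lts, P_orb_days):
    """f(M) = 4 pi^2 (a sin i)^3 / (G P^2), returned in solar masses."""
    a = asini_lts*C_CGS
    P = P_orb_days*86400.0
    return 4.0*np.pi**2*a**3/(G_CGS*P**2)/M_SUN

def companion_mass(fM, incl_deg, M_ns=1.4):
    """Solve (M_c sin i)^3 / (M_c + M_ns)^2 = f(M) for M_c (solar masses)."""
    s = np.sin(np.radians(incl_deg))
    g = lambda Mc: (Mc*s)**3/(Mc + M_ns)**2 - fM
    return brentq(g, 1e-3, 1e3)
```

With $a\sin i = 77.81$\,lt-s and $P_{\rm orb}=33.85$\,d this reproduces the $f(M)\simeq0.44\,M_\odot$ of Table~\ref{tab:orpar}; the smaller $a\sin i\sim50$\,lt-s reduces $f(M)$ by a factor $\sim(50/78)^3\approx0.27$ and hence implies correspondingly lighter companions at a fixed inclination.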
Indeed, assuming $a\sin{i}\sim50$\,lt\,s, the projected rotational velocity of the Be star measured by \cite{Negu99}, $\upsilon_*\sin{i_*}\sim100-200$\,km\,s$^{-1}$, implies $i_*\ge12^\circ-24^\circ$ if the intrinsic rotational velocity of the star is below the break-up value, $\upsilon_*\le0.8\upsilon_{\rm break-up}\sim480$\,km\,s$^{-1}$. For the orbital parameters reported by \cite{Stella85}, this indeed implies a strongly undermassive optical companion with $M_*\le5-7\,M_\odot$, unless its equatorial plane is misaligned with the orbital plane of the system \citep{Negu99}. For the larger $a\sin{i}\sim78$\,lt\,s found in this work, the same considerations imply $M_*\le8-50\,M_\odot$, i.e. compatible with the expected mass of a Be star. We conclude, therefore, that there is also no evidence for a tilt between the Be star's equatorial plane and the orbital plane of the system. \paragraph{Intrinsic spin evolution.} Once the orbital parameters of the system are determined, it is possible to assess the intrinsic spin evolution of the pulsar and thus probe the accretion torques acting on the neutron star. To estimate the spin frequency derivative, we correct the observed frequency values for the motion in the binary system using the ephemeris obtained above and calculate the frequency derivative by comparing the measurements in adjacent time intervals (propagating the uncertainties). The average spin-up rate of $\dot{P}_{\rm spin}\sim5\times10^{-6}$\,s\,d$^{-1}$ is comparable with the $\dot{P}_{\rm spin}\sim8\times10^{-6}$\,s\,d$^{-1}$ reported by \cite{Zhang05} and, as discussed by these authors, is in rough agreement with accretion torque theory. The spin-up rate of the neutron star is expected to be correlated with the accretion rate. Therefore, we also estimate the accretion rate for each time interval where the spin-up rate was determined.
In particular, we multiply the average \emph{Swift}~BAT count rate in the 15--50\,keV range during the respective time interval by a factor of $1.6\times10^{-7}\,{\rm erg\,s}^{-1}{\rm cm^{-2}}$ to calculate the flux. This factor was calculated by comparing the \emph{Swift}~BAT count rates with the flux in the 3--100\,keV range measured during the pointed \emph{NuSTAR} observations on MJD~57223, 57275, and 57281 (a detailed analysis of the pointed observations will be presented elsewhere). The luminosity and accretion rate can then be estimated assuming a distance to the source of 7\,kpc and standard neutron star parameters. We find that the spin-up rate is indeed correlated with the observed X-ray flux, as illustrated in Fig.~\ref{fig:spinup}. Several accretion torque models describing the spin evolution of a neutron star accreting from a disk have been proposed. The model by \cite{Ghosh79} invokes angular momentum transport from the neutron star to an accretion disk threaded by the magnetic field lines. As can be seen in Fig.~\ref{fig:spinup}, this model predicts a somewhat higher spin-up rate than observed (assuming a magnetic field of the neutron star of $B=3.5\times10^{12}$\,G, corresponding to the highest cyclotron line energies reported by \citealt{Tsygankov06}). The discrepancy is not large, however, and could be reconciled if we assume that the accretion rate is overestimated by a factor of two. In a more recent study, \cite{Parfrey15} argue that angular momentum is removed from the neutron star by the electromagnetic outflow along the field lines opened by the differential rotation of the magnetic field and the accretion disk. The total torque in this model depends strongly on the magnetospheric radius, which is assumed to constitute a fraction of the Alfv\'enic radius $r_a=(\mu^4/2GM\dot{M}^2)^{1/7}$ for spherical accretion, i.e. $r_m=\xi r_a$. A value of $\xi\sim0.5$ is expected from MHD simulations \citep{Long05,Bessolaz08,Zanni13}.
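The order of magnitude involved in such comparisons can be reproduced with a simple material-torque estimate, $N = \dot{M}\sqrt{GMr_m}$ with $r_m = \xi r_a$ -- a rough stand-in for the full \cite{Ghosh79} and \cite{Parfrey15} prescriptions, not a reproduction of either. The neutron-star mass, radius, moment of inertia, and the magnetic-moment convention $\mu = BR^3/2$ are assumed standard values:

```python
import numpy as np

G = 6.674e-8            # cm^3 g^-1 s^-2
M_NS = 1.4*1.989e33     # g, assumed neutron-star mass
R_NS = 1.0e6            # cm, assumed radius
I_NS = 1.0e45           # g cm^2, assumed moment of inertia
B = 3.5e12              # G, from the cyclotron line energy
MU = B*R_NS**3/2.0      # magnetic moment (equatorial-field convention)

def mdot_from_lum(L):
    """Accretion rate from L = G M Mdot / R."""
    return L*R_NS/(G*M_NS)

def alfven_radius(mdot):
    """r_a = (mu^4 / (2 G M Mdot^2))^(1/7) for spherical accretion."""
    return (MU**4/(2.0*G*M_NS*mdot**2))**(1.0/7.0)

def spin_up_rate(L, xi=0.5):
    """nu_dot = Mdot sqrt(G M r_m) / (2 pi I), with r_m = xi * r_a."""
    mdot = mdot_from_lum(L)
    r_m = xi*alfven_radius(mdot)
    return mdot*np.sqrt(G*M_NS*r_m)/(2.0*np.pi*I_NS)
```

For $L\sim10^{38}\,{\rm erg\,s^{-1}}$ this gives $r_a$ of a few $\times10^{8}$\,cm and $\dot\nu\sim10^{-11}$\,Hz\,s$^{-1}$, i.e. a few $\times10^{-5}$\,s\,d$^{-1}$ in period. Note that this simplified material torque scales only as $\sqrt{\xi}$; the much stronger sensitivity to the magnetospheric radius discussed in the text is specific to the \cite{Parfrey15} torque.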
However, in this case the model predicts a significantly higher spin-up rate for \object{V~0332$+$53}\xspace than observed. To match the model prediction to the observations, one can assume either that the accretion rate is overestimated by a factor of ten (which is unlikely) or that $\xi$ must be reduced by a factor of two (i.e. $\xi=0.25$, as presented in Fig.~\ref{fig:spinup}), which could signify that the disk pushes further into the magnetosphere than expected. \begin{table} \begin{center} \begin{tabular}{llll} Parameter & This work & RP10 & Z05\\ \hline $P_{orb}$, d & 33.850(1) &36.5(3) & 34.7(4) \\ $a\sin(i)$, lt\,s &77.81(7) & 82.5(9) & 86(10) \\ $e$ & 0.3713(8) & 0.417(7) &0.37(12)\\ $\omega$, deg & 277.43(5) & 283.5(9) & 283(14)\\ $T_{PA}$, MJD & 57157.88(3) & 53330.58(6) & 53367(1)\\ $K_X$, km\,s$^{-1}$ & 53.97(5) & -- & 59(7) \\ $f(M)$, $M_\odot$ & 0.441(1) & -- & 0.58(23) \\ \hline \end{tabular} \end{center} \caption{The best-fit orbital parameters of \object{V~0332$+$53}\xspace including the \emph{Fermi}~GBM data from the 2015 outburst (left column). Values obtained by \cite{Raichur10} (RP10) and \cite{Zhang05} (Z05) are also shown for reference. All uncertainties are at the $1\sigma$ confidence level.} \label{tab:orpar} \end{table} \begin{figure}[t!] \centering \includegraphics[width=0.5\textwidth]{spinup.pdf} \caption{Observed spin-up rate as a function of the source X-ray luminosity for the rising (circles) and declining (diamonds) parts of the 2015 outburst. Model predictions after \cite{Ghosh79} and \cite{Parfrey15} are also shown for reference.} \label{fig:spinup} \end{figure} \section{Conclusions} In this note we analyzed the \emph{Fermi}~GBM spin history of \object{V~0332$+$53}\xspace during the giant outburst in 2015 together with historical data from the previous outbursts in 1983--1984 and 2004--2005.
Our results are generally consistent with the earlier estimates by \cite{Zhang05,Raichur10} and similar to the preliminary findings by the \emph{Fermi}~GBM team (based only on the data from the latest outburst). For the first time, we succeeded in describing the spin evolution of the source during all three outbursts with no change in the orbital parameters between them, thus resolving a long-standing discrepancy between the orbital solutions reported by \cite{Stella85}, \cite{Zhang05}, and \cite{Raichur10}. Our finding makes the discussion by \cite{Zhang05} regarding possible apsidal motion in the system unnecessary. We note also that, in light of the updated ephemeris, the conclusion by \cite{Negu99} regarding the misalignment of the orbital plane of the system and the Be star's equatorial plane no longer holds. We were also able to significantly improve the accuracy of the orbital solution. We find the intrinsic spin evolution of the pulsar to be complicated, with the spin-up rate correlated with the accretion rate. The observed spin-up is qualitatively consistent with existing torque models assuming the neutron star has a magnetic field of $\sim3\times10^{12}$\,G, as determined from the cyclotron line energy, although model uncertainties and the uncertainty in the accretion rate of the source prevent us from drawing any conclusions regarding the preferred torque model. \begin{acknowledgements} This work is based on spin histories provided by the \emph{Fermi}~GBM pulsar project. VD and AS thank the Deutsches Zentrum f\"ur Luft- und Raumfahrt (DLR) and Deutsche Forschungsgemeinschaft (DFG) for financial support (grant DLR~50~OR~0702). ST acknowledges support by the Russian Science Foundation grant 14-12-01287. \end{acknowledgements}
\section{Introduction} In an environment such as a supernova, where a large number of neutrinos are present, neutrino oscillations exhibit nonlinear behaviors due to the self-interaction of neutrinos~\cite{Sawyer2005,Duan2006,Hannestad2006,Raffelt2007,Raffelt2007a,Dasgupta2008,Dasgupta2009,Duan2010,Patwardhan2019,Rrapaj2019,Rrapaj2020}. It is quite difficult to solve the kinetic equations that describe this phenomenon, called collective neutrino oscillations, because of the enormous computational cost; the spatial and temporal scales of the oscillations are usually much smaller than those of a supernova, and very fine grids are needed in momentum space to obtain even qualitatively correct behaviors~\cite{Sarikas2012}. However, collective neutrino oscillations do not always occur, since they must work against the matter suppression~\cite{Wolfenstein1978,Wolfenstein1979} of flavor conversions when dense matter exists. The conditions crucial for the occurrence of collective neutrino oscillations have been investigated by linear stability analysis~\cite{Sawyer2009,Banerjee2011,Mirizzi2012,Mirizzi2012a,Mirizzi2013,Chakraborty2014,Chakraborty2014a,Abbar2015,Dasgupta2015,Sawyer2016,Chakraborty2016,Dasgupta2017,Izaguirre2017,Airen2018,Chakraborty2020}. In particular, the fast flavor instability~\cite{Sawyer2009,Chakraborty2016,Dasgupta2017,Izaguirre2017,Abbar2018,Airen2018,Dasgupta2018,Yi2019}, a class of unstable modes whose spatial and temporal scales are proportional to the inverse of the neutrino density, has attracted much attention~\cite{Capozzi2017,Dasgupta2018b,Abbar2018b,Martin2019,DelfanAzari2019,Abbar2020a,Johns2020,Abbar2020c,Bhattacharyya2020,Shalgar,Bhattacharyya2021,Martin2021}. Indeed, some studies have discussed the possibility of fast flavor conversion in various regions, such as the regions inside~\cite{DelfanAzari2020,Glas2020} and just above~\cite{Capozzi2019a} a protoneutron star and the preshock region~\cite{Morinaga2020a} in a supernova.
Additionally, asymmetric neutrino emission~\cite{Abbar2018a,Nagakura2019,Abbar2020a,Abbar2020c} and the breaking of the degeneracy of heavy leptonic neutrinos~\cite{Chakraborty2020,Capozzi2020,Capozzi2020b} can affect the possible regions for fast flavor conversion. Importantly, all of these studies focus on crossings of the neutrino flavor lepton number (NFLN) angular distributions. Many studies have suggested that a fast instability appears when the difference between the NFLN angular distributions of 2 flavors crosses zero. However, whether an NFLN crossing is necessary and/or sufficient is not known. In particular, its sufficiency is sometimes dubious and even controversial. For example, Ref.~\cite{Johns2020a} concluded that the presence of an NFLN crossing is not sufficient for fast instability under the assumption of axisymmetry and spatial homogeneity. According to Ref.~\cite{Capozzi2020}, a ``shallow crossing'' in the electron lepton number distribution does not generate instability. However, these results arise because artificially imposed symmetries hinder the development of unstable modes. In this paper, we show that the existence of fast instability is equivalent to that of NFLN crossings. A mathematical proof of this proposition has been a missing link in the study of fast flavor conversion. In addition, we find that spurious instabilities~\cite{Sarikas2012} caused by the discretization of spectra do not appear in time evolutions, unlike in stationary solutions. If an NFLN crossing exists, at least the modes with wave vector $\vec{k}$ around the ``crossing direction'', at which the NFLN angular distributions of 2 flavors cross each other, exhibit instability. This study clarifies the condition for fast neutrino flavor instability and plays a crucial role in the elucidation of collective neutrino oscillations.
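Concretely, the quantity at stake is the sign structure of the NFLN angular-distribution difference $G(\vec v)$: a crossing simply means that $G$ takes both signs on the sphere of flight directions. A trivial numerical sketch (the sampled distributions are arbitrary axisymmetric examples, not taken from a simulation):

```python
import numpy as np

def has_nfln_crossing(G_samples):
    """True if the sampled NFLN angular-distribution difference G(v)
    takes both positive and negative values, i.e. crosses zero."""
    return bool(np.max(G_samples) > 0.0 and np.min(G_samples) < 0.0)

# Axisymmetric examples: G depends only on v^z = cos(theta)
vz = np.linspace(-1.0, 1.0, 2001)
G_crossed = 3.0*vz**2 - 0.25   # changes sign at v^z = +/- 1/(2 sqrt(3))
G_single = 3.0*vz**2 + 0.25    # positive everywhere: no crossing
```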
\section{Fast neutrino flavor instability} \subsection{Kinetic equation} We consider the density matrices of $N_\mathrm{f}$-flavor neutrinos (antineutrinos) $\flavor{f}$ ($\bar{\flavor{f}}$), which are $N_\mathrm{f} \times N_\mathrm{f}$ matrices depending on the spacetime position $x$, the energy $E$, and the flight direction $\vec{v}$. Through the introduction of the density matrix with negative energy $-E < 0$ as $\flavor{f}(-E) \equiv -\bar{\flavor{f}}(E)$, their evolutions are described collectively by the kinetic equation~\cite{Sigl1993,Yamada2000,Cardall2008,Vlasenko,Kartavtsev2020} \begin{align} v\cdot\partial\flavor{f}(x,\Gamma) = -i[\flavor{H}(x,\Gamma),\flavor{f}(x,\Gamma)], \label{eq:Kinetic} \end{align} where $\Gamma \equiv (E, \vec{v})$ and $(v^\mu) \equiv (1, \vec{v})$, and the Hamiltonian $\flavor{H}$ is expressed as \begin{align} \flavor{H}(x,\Gamma) \equiv \dfrac{\flavor{M}^{2}}{2E} + v\cdot\flavor{J}(x). \end{align} The first term of $\flavor{H}$ is the vacuum mixing term, which mixes the neutrino flavors through the off-diagonal components of the mass-squared matrix $\flavor{M}^2$. The second term reflects the forward scattering of neutrinos off charged leptons and other neutrinos, with \begin{align} \flavor{J}^\mu(x)\equiv \sqrt{2}G_F\left[\operatorname{diag}\left(\{j^\mu_\alpha(x)\}\right) + \int d\Gamma \flavor{f}(x,\Gamma)v^\mu\right], \end{align} where $j_\alpha$ is the lepton number current of the charged leptons $\alpha$ and $\int d\Gamma \equiv (2\pi)^{-3}\int_{-\infty}^\infty dE E^2 \int d\vec{v}$. \subsection{Dispersion relation of the fast mode} Fast neutrino flavor instability is the instability of the flavor eigenstates $\flavor{f} = \operatorname{diag}(\{f_{\nu_\alpha}\})$ that arises when the vacuum mixing term is neglected, as it is minor compared to the self-interactions of neutrinos~\cite{Izaguirre2017,Airen2018}.
To consider this class of instability, we omit $\flavor{M}^2$ and linearize Eq.~(\ref{eq:Kinetic}) as \begin{align} &v\cdot \left\{i\partial - J_{0\alpha}(x) + J_{0\beta}(x)\right\}S_{\alpha\beta}(x,\Gamma)\nonumber\\ &+ \left(f_{\nu_\alpha}(x, \Gamma) - f_{\nu_\beta}(x, \Gamma)\right)\sqrt{2}G_F\int d\Gamma' v\cdot v'S_{\alpha\beta}(x,\Gamma') = 0 \label{eq:KineticLinearized} \end{align} for the Hermitian matrix $\flavor{S}$ given as $\flavor{f} = \operatorname{diag}(\{f_{\nu_\alpha}\}) + \flavor{S}$, where $J_{0\alpha}^\mu(x) \equiv \sqrt{2}G_F\left[j^\mu_\alpha(x) + \int d\Gamma f_{\nu_\alpha}(x,\Gamma)v^\mu\right]$. We neglect the spatial and temporal variations in $\{j_\alpha\}$ and $\{f_{\nu_\alpha}\}$ and substitute the plane wave ansatz $\flavor{S}(x, \Gamma) = \tilde{\flavor{S}}(k, \Gamma)e^{-ik\cdot x}$ into Eq.~(\ref{eq:KineticLinearized}) to derive the dispersion relation (DR), where $(k^\mu) \equiv (\omega, \vec{k})$ denotes the angular frequency $\omega$ and the wave vector $\vec{k}$. The diagonal components of Eq.~(\ref{eq:KineticLinearized}) yield the DR $v\cdot k = 0$, which does not generate instabilities, while the off-diagonal components, which can, yield \begin{align} \Delta_{\alpha\beta}(k) \equiv \det\mat{\Pi}_{\alpha\beta}(k) = 0, \label{eq:DR} \end{align} where \begin{align} \Pi^{\mu\nu}_{\alpha\beta}(k) \equiv \eta^{\mu\nu} + \int \frac{d\vec{v}}{4\pi} G_{\alpha\beta}(\vec{v})\dfrac{v^\mu v^\nu}{v\cdot (k - J_{0\alpha} + J_{0\beta})}. \label{eq:PolarizationTensor} \end{align} $G_{\alpha\beta}(\vec{v}) \equiv \sqrt{2}G_F\int_{-\infty}^\infty \frac{dE E^2}{2\pi^2}\left(f_{\nu_\alpha}(\Gamma) - f_{\nu_\beta}(\Gamma)\right)$ is the difference between the NFLN angular distributions for $\nu_\alpha$ and $\nu_\beta$. We note that $\Delta_{\alpha\beta}(k) = \Delta_{\beta\alpha}(-k)$ is satisfied and that all $N_\mathrm{f}(N_\mathrm{f} - 1)/2$ independent equations of Eq.~(\ref{eq:DR}) are candidates for instabilities.
In the following discussions, we consider one of them and omit the indices denoting flavor. Additionally, we set $J_0 = 0$, because $J_0$ shifts only the real part of the wave four-vector $k$ and does not affect the instability. In addition, we assume that $G$ is continuous, which is a natural assumption for treating realistic systems. Because $\Delta(\omega, \vec{k}) = \overline{\Delta(\overline{\omega}, \vec{k})}$ holds for $\vec{k} \in \mathbb{R}^3$, the complex conjugates $(\omega, \vec{k})$ and $(\overline\omega, \vec{k})$ are both solutions of Eq.~(\ref{eq:DR}). Therefore, if there exist nonreal $\omega$ values for $\vec{k} \in \mathbb{R}^3$, $\flavor{S}$ grows exponentially. Note that a nonreal $\vec{k}$, which is sometimes called a ``spatial instability'', does not directly play a role in spatiotemporal evolutions and does not even guarantee the spatial growth of perturbations imposed ceaselessly at some spatial point~\cite{Sturrock1958,Briggs1964,Landau1997,Capozzi2017,Yi2019,Morinaga2020,Morinaga2021}. \section{Equivalency between fast instability and NFLN crossings} In this section, we show that the necessary and sufficient condition for the existence of fast instability is the existence of NFLN crossings. \subsection{Necessary condition} First, we focus on the necessary condition: if there exist $\omega \notin \mathbb{R}$ and $\vec{k} \in \mathbb{R}^3$ such that $\Delta(k) = 0$, then $G(\vec{v})$ takes both positive and negative values. We define \begin{align} \sigma \equiv& \operatorname{Im} \omega\\ (\kappa^\mu) \equiv& (\operatorname{Re}\omega, \vec{k}) \end{align} to separate the real and imaginary parts of $\omega$.
Then, $\mat{\Pi}$ can be decomposed as \begin{align} \Pi^{\mu\nu}(k) = R^{\mu\nu}(k) - i I^{\mu\nu}(k), \end{align} where we define \begin{align} R^{\mu\nu}(k) \equiv& \eta^{\mu\nu} + \int \frac{d\vec{v}}{4\pi} G(\vec{v})\dfrac{v^\mu v^\nu v\cdot \kappa}{(v\cdot \kappa)^2 + \sigma^2}\\ I^{\mu\nu}(k) \equiv& \sigma\int \frac{d\vec{v}}{4\pi} G(\vec{v})\dfrac{v^\mu v^\nu}{(v\cdot \kappa)^2 + \sigma^2}. \end{align} The symmetric tensor $\mat{I}$ can be diagonalized by an orthogonal matrix $\mat{V} \in O(4, \mathbb{R})$ as \begin{align} V^\mu{}_\sigma V^\nu{}_\rho I^{\sigma\rho} = D^{\mu\nu}, \end{align} where $\mat{D}$ is a real diagonal matrix whose $(\mu, \mu)$-component is given as \begin{align} D^{\mu\mu}(k) = \sigma \int \frac{d\vec{v}}{4\pi} G(\vec{v})\dfrac{\left(V^\mu{}_\nu(k) v^\nu\right)^2}{(v\cdot \kappa)^2 + \sigma^2}. \label{eq:Diagonal} \end{align} Then, $\mat{\Pi}$ can be expressed as \begin{align} \Pi^{\mu\nu} = \left(V^{-1}\right)^\mu{}_\sigma \left(V^{-1}\right)^\nu{}_\rho \left(\tilde{R}^{\sigma\rho} - iD^{\sigma\rho}\right) \end{align} with $\tilde{R}^{\mu\nu} \equiv V^\mu{}_\sigma V^\nu{}_\rho R^{\sigma\rho}$, and Eq.~(\ref{eq:DR}) is equivalent to $\det\left(\mat{\tilde{R}}(k) - i \mat{D}(k)\right) = 0$, which means that there exists a nontrivial 4-vector $a$ such that $\tilde{R}^{\mu\nu} a_\nu = iD^{\mu\nu} a_\nu$. From this equation, we can obtain $\overline{a}_\mu \tilde{R}^{\mu\nu} a_\nu = i\overline{a}_\mu D^{\mu\nu} a_\nu$ and $\overline{a}_\mu \tilde{R}^{\mu\nu} a_\nu = -i\overline{a}_\mu D^{\mu\nu} a_\nu$, whose difference yields \begin{align} \sum_\mu D^{\mu\mu}|a_\mu|^2 = 0. \label{eq:ConditionForDR} \end{align} If $G(\vec{v})$ does not change its sign for all $\vec{v}$ and $\sigma \neq 0$, all the diagonal components of $\mat{D}$ have the same sign as $\sigma G$ from Eq.~(\ref{eq:Diagonal}) and cannot satisfy Eq.~(\ref{eq:ConditionForDR}). Therefore, $G$ must take both positive and negative values for $\sigma$ to be nonzero. 
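The pivotal step above -- that a single-signed $G$ forces all the $D^{\mu\mu}$ to share the sign of $\sigma$, so that Eq.~(\ref{eq:ConditionForDR}) cannot hold -- can be checked numerically: for $G > 0$ the real symmetric tensor $\mat{I}$ is positive (negative) definite for $\sigma > 0$ ($\sigma < 0$), so all its eigenvalues share the sign of $\sigma$. A quadrature sketch, in which the chosen $G$, $\kappa$, and $\sigma$ are arbitrary test values:

```python
import numpy as np

def I_tensor(G, kappa, sigma, n_mu=64, n_phi=64):
    """I^{mu nu} = sigma * int dv/4pi G(v) v^mu v^nu / ((v.kappa)^2 + sigma^2)
    by Gauss-Legendre (cos theta) x uniform-phi quadrature over the sphere."""
    mu, w = np.polynomial.legendre.leggauss(n_mu)    # mu = cos(theta)
    phi = (np.arange(n_phi) + 0.5)*2.0*np.pi/n_phi
    MU, PHI = np.meshgrid(mu, phi, indexing="ij")
    W = w[:, None]*(2.0*np.pi/n_phi)/(4.0*np.pi)     # dv/4pi weights
    s = np.sqrt(1.0 - MU**2)
    v = np.stack([np.ones_like(MU), s*np.cos(PHI), s*np.sin(PHI), MU])
    vk = kappa[0] - kappa[1]*v[1] - kappa[2]*v[2] - kappa[3]*v[3]  # v . kappa
    wf = (sigma*W*G(v[1], v[2], v[3])/(vk**2 + sigma**2)).reshape(-1)
    vf = v.reshape(4, -1)
    return (vf*wf) @ vf.T

# A strictly positive G: all eigenvalues of the symmetric I share sigma's sign
kappa = np.array([0.3, 0.0, 0.0, 0.2])
G_pos = lambda vx, vy, vz: 1.5 + vz
eigs = np.linalg.eigvalsh(I_tensor(G_pos, kappa, sigma=0.5))
```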
We notice that the above discussion is valid even if $G$ is discretized as $G(\vec{v}) = \sum_i G_i \delta(\vec{v} - \vec{v}_i)$. If we consider stationary solutions, the discretization of spectra sometimes suffers from spurious instabilities~\cite{Sarikas2012}. On the other hand, when we solve time evolutions, spurious instabilities do not appear as a result of discretization. \subsection{Sufficient condition} The remaining task is to show the sufficient condition: if $G(\vec{v})$ takes both positive and negative values, there exist $\omega \notin \mathbb{R}$ and $\vec{k} \in \mathbb{R}^3$ such that $\Delta(k) = 0$. By introducing \begin{align} n^\mu \equiv \frac{k^\mu}{\omega}, \end{align} $\mat{\Pi}$ can be expressed as \begin{align} \Pi^{\mu\nu}(k) = \eta^{\mu\nu} + \frac{1}{\omega}T^{\mu\nu}(\vec{n}), \end{align} where \begin{align} T^{\mu\nu}(\vec{n}) \equiv \int \frac{d\vec{v}}{4\pi} G(\vec{v})\dfrac{v^\mu v^\nu}{v\cdot n}, \end{align} whose integral converges for $\vec{n}$ in the open unit ball $B \equiv \{\vec{n}\in\mathbb{R}^3 : |\vec{n}| < 1\}$. Here, $\operatorname{tr} \mat{T} = 0$ because $v^\mu v_\mu = 0$, where the trace of the natural powers of a tensor $\mat{A}$ is defined as $\operatorname{tr} \mat{A}^m \equiv A^{\mu_1}{}_{\mu_2} A^{\mu_2}{}_{\mu_3}\cdots A^{\mu_m}{}_{\mu_1}$. Then, the DR is given by the zeros of the quartic function of $\omega$ (see Appendix~A): \begin{align} \tilde\Delta(\omega, \vec{n}) \equiv& -\omega^4\Delta(k) = \det\left(\omega\delta^\mu_\nu + T^\mu{}_\nu(\vec{n})\right)\nonumber\\ =& \omega^4 - \frac{1}{2}\operatorname{tr} \mat{T}^2(\vec{n})\omega^2 + \frac{1}{3}\operatorname{tr} \mat{T}^3(\vec{n})\omega\nonumber\\ &+ \frac{1}{8}\left(\operatorname{tr} \mat{T}^2(\vec{n})\right)^2 - \frac{1}{4} \operatorname{tr} \mat{T}^4(\vec{n}). \label{eq:Quartic} \end{align} Henceforth, we express a solution for $\omega$ of $\Delta(k) = 0$ as $\omega(\vec{k})$ and that of $\tilde\Delta(\omega, \vec{n}) = 0$ as $\omega(\vec{n})$.
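In the discretized case, the necessary condition can be made fully explicit in a two-beam toy model with flight directions $u = \pm1$ along an axis. Writing $\Pi^{\mu\nu} = \eta^{\mu\nu} + \sum_i g_i v_i^\mu v_i^\nu/(v_i\cdot k)$ with signed weights $g_i$ (absorbing $\sqrt2 G_F$ and the $4\pi$ normalization), the $4\times4$ determinant reduces to $4g_1g_2/(\omega^2-k^2) = 1$, i.e. $\omega^2 = k^2 + 4g_1g_2$, so nonreal $\omega$ requires $g_1g_2 < 0$: the two beams must carry NFLN of opposite sign. A sketch in dimensionless toy units (not taken from the paper) that also verifies the reduction against the full determinant:

```python
import numpy as np

def Pi_matrix(omega, kz, beams):
    """Pi^{mu nu} = eta^{mu nu} + sum_i g_i v_i^mu v_i^nu / (omega - u_i kz)
    for beams along the z axis, (v_i^mu) = (1, 0, 0, u_i)."""
    Pi = np.diag([1.0, -1.0, -1.0, -1.0]).astype(complex)
    for g, u in beams:
        v = np.array([1.0, 0.0, 0.0, u])
        Pi += g*np.outer(v, v)/(omega - u*kz)
    return Pi

def two_beam_omega(g1, g2, kz):
    """Closed-form DR roots for u = +1, -1: omega^2 = kz^2 + 4 g1 g2."""
    root = np.sqrt(complex(kz**2 + 4.0*g1*g2))
    return root, -root

# Opposite-sign beams: instability for kz^2 < 4 |g1 g2|
beams = [(0.5, 1.0), (-0.5, -1.0)]
w_plus, w_minus = two_beam_omega(0.5, -0.5, 0.0)   # conjugate imaginary pair
```

Flipping the sign of $g_2$ gives $\omega^2 = k^2 + 1 > 0$, i.e. purely real solutions for all real $k$, consistent with the necessary condition above.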
It should be noted that $\omega(\vec{n})$ is four-valued, while $\omega(\vec{k})$ is multivalued but not always four-valued. In the following discussions, we assume that $G(\vec{v})$ takes both positive and negative values and prove that $\omega(\vec{k})$ can be nonreal for some $\vec{k}\in\mathbb{R}^3$. This proposition can be shown by proving the following 3 lemmas instead: \begin{lem} If some of the 4 branches of $\omega(\vec{n} = \vec{0})$ are nonreal, there is a nonreal $\omega(\vec{k})$ for some $\vec{k} \in \mathbb{R}^3$. \end{lem} \begin{lem} $\omega(\vec{n})$ is nonreal for some $\vec{n}\in B$. \end{lem} \begin{lem} $\omega(\vec{n})$ does not diverge to infinity for all $\vec{n}\in B$. \end{lem} From lemma~1, we have only to consider the case in which all 4 branches of $\omega(\vec{n} = \vec{0})$ are real; otherwise, the proposition is already proven. Then, the DR with real $\omega$ can be categorized into the 3 cases shown in Fig.~\ref{fig:DRPattern} by paying attention to $\vec{n} = \vec{k} / \omega$. If all the branches of $\omega(\vec{n})$ are real for all $\vec{n} \in B$ [case~(a)] or some branches of $\omega(\vec{n})$ diverge to infinity for some $\vec{n}\in B$ [case~(b)], $\omega(\vec{k})$ is not necessarily nonreal; otherwise, some branches of $\omega(\vec{n})$ must merge for some $\vec{n}\in B$ [case~(c)], and a branch point, at which the gradient $\nabla \omega(\vec{k})$ diverges and nonreal $\omega(\vec{k})$ begins, appears at some $\vec{k}$. Since lemma~2 excludes case~(a) and lemma~3 excludes case~(b), the 3 lemmas lead to the existence of a nonreal $\omega(\vec{k})$ for some $\vec{k}\in\mathbb{R}^3$, which is the proposition to prove. \begin{figure}[htb] \includegraphics[width=\linewidth]{DRPattern2.pdf} \caption{ Schematic pictures of the DRs for the cases in which (a) $\omega$ is real for all $\vec{n}\in B$, (b) $\omega$ diverges at $\vec{n} \in B$ and (c) $\omega$ has a branch point (red dot).
The black solid lines show $\omega(\vec{k} = k\vec{e}) \in \mathbb{R}$, where we choose some direction $\vec{e}$, and the gray regions are the zones of avoidance. } \label{fig:DRPattern} \end{figure} Lemma~1 can be easily proven; since $\vec{k} = \omega\vec{n}$ yields $\omega(\vec{k} = \vec{0}) = \omega(\vec{n} = \vec{0})$, the existence of nonreal $\omega(\vec{n} = \vec{0})$ immediately means that of nonreal $\omega(\vec{k} = \vec{0})$. Lemma~3 is also confirmed from Eq.~(\ref{eq:Quartic}) because $\operatorname{tr}\mat T^m(\vec{n})$ is finite for $\vec{n} \in B$. In the following, we prove lemma~2 by showing that there exists $\vec{n}\in B$ such that the coefficient of $\omega^2$ of $\tilde\Delta(\omega, \vec{n})$ is positive; for such $\vec{n}$, $\tilde\Delta(\omega, \vec{n})$ has only 1 local minimum for $\omega$, meaning that the number of real solutions of the quartic equation $\tilde\Delta(\omega, \vec{n}) = 0$ is at most 2 and that the remaining solutions are nonreal. We define $\vec{e}_\xi$ as one of the unit vectors satisfying $G(\vec{e}_\xi) = 0$; we refer to these directions as crossing directions. $\vec{e}_\eta$ is also defined as a unit vector parallel to $\vec{\nabla}G(\vec{e}_\xi)$, and $\vec{e}_\zeta \equiv \vec{e}_\xi \times \vec{e}_\eta$ (see Fig.~\ref{fig:Distribution}). Hereinafter, the indices $t$, $\xi$, $\eta$ and $\zeta$ of vectors and tensors are used to denote their temporal, $\vec{e}_\xi$, $\vec{e}_\eta$ and $\vec{e}_\zeta$ components, respectively. 
\begin{figure*}[htb] \subfigure[]{ \includegraphics[width=0.31\linewidth]{Distribution.pdf} \label{fig:Distribution} } \subfigure[]{ \includegraphics[width=0.31\linewidth]{DR.pdf} \label{fig:DR} } \subfigure[]{ \includegraphics[width=0.31\linewidth]{Asymptotics.pdf} \label{fig:Asymptotics} } \caption{ \subref{fig:Distribution}: The difference between the NFLN angular distributions of 2 flavors $G(\vec{v}) = 3(v^z)^2-\frac{1}{4}$ and $\{\vec{e}_\xi, \vec{e}_\eta, \vec{e}_\zeta\}$ and $\vec{e}_\pm$. The black solid lines on the sphere show the crossing directions. The scales of $\vec{e}_{\xi/\pm}$ are adjusted for visibility. \subref{fig:DR}: The DR for $G(\vec{v}) = 3(v^z)^2-\frac{1}{4}$. The black, cyan and red lines are $\omega(k\vec{e}_\xi)$, $\omega(k\vec{e}_+)$ and $\omega(k\vec{e}_-)$, respectively. The complex $\omega(k\vec{e}_-)$ values for real $k$ are indicated by the red areas, whose centerlines are $\operatorname{Re}\omega$, and the difference between the centerlines and the boundaries of the areas is $10\operatorname{Im}\omega$. The gray regions are the zone of avoidance. \subref{fig:Asymptotics}: The relation between $\omega$ and $\vec{n}$ for $G(\vec{v}) = 3(v^z)^2-\frac{1}{4}$. The directions of $\vec{n}$ are $\vec{e}_\xi$ (black), $\vec{e}_+$ (cyan) and $\vec{e}_-$ (red). 
} \label{fig:ToyModel} \end{figure*} We focus on the behaviors of $\omega(\vec{n})$ around the crossing direction by considering \begin{align} T^{\mu\nu}(n \mat{R}_\zeta(\theta)\vec{e}_\xi) =& \int \frac{d\vec{v}}{4\pi} G(\vec{v})\dfrac{v^\mu v^\nu}{1 - n\vec{v}\cdot\left\{\mat{R}_\zeta(\theta)\vec{e}_\xi\right\}}\nonumber\\ =& R_\zeta{}^\mu{}_\sigma(\theta) R_\zeta{}^\nu{}_\rho(\theta)\tilde{T}_\theta^{\sigma\rho}(n), \end{align} where $\mat{R}_\zeta(\theta)$ is the rotation operator around $\vec{e}_\zeta$ with the angle $\theta$ and \begin{align} \tilde{T}_\theta^{\mu\nu}(n) \equiv& \int \frac{d\vec{v}}{4\pi} G\left(\mat{R}_\zeta(\theta)\vec{v}\right)\dfrac{v^\mu v^\nu}{1 - n v^\xi}. \label{eq:TTilde} \end{align} Now, $\operatorname{tr}\mat{T}^m (n \mat{R}_\zeta(\theta)\vec{e}_\xi) = \operatorname{tr}\mat{\tilde{T}}_\theta^m(n)$ is satisfied because $\mat{R}_\zeta$ is a Lorentz transformation. $n > 1$ corresponds to the ``zone of avoidance''~\cite{Izaguirre2017}, in which $\mat{\tilde{T}}$ diverges to infinity and there is no solution satisfying Eq.~(\ref{eq:Quartic}). In the limit $n \uparrow 1$, $\mat{\tilde{T}}_\theta(n)$ seems to diverge as well. All the components of $\mat{\tilde{T}}_\theta(n)$ whose indices include $\eta$ or $\zeta$, however, converge to a finite value because $v^\eta$ and $v^\zeta$ are proportional to $\sqrt{1 - \left(v^\xi\right)^2}$. On the other hand, the other components diverge and asymptotically behave as \begin{align} \tilde{T}_\theta^{tt}(n) \sim \tilde{T}_\theta^{t\xi}(n) \sim \tilde{T}_\theta^{\xi\xi}(n) \sim \frac{G\left(\mat{R}_\zeta(\theta)\vec{e}_\xi\right)}{2}\log\frac{1}{1 - n} \label{eq:AsymptoticBehavior} \end{align} as $n \uparrow 1$ for $\theta = \pm \epsilon$ with small $\epsilon > 0$. At $\theta = 0$, $G$ vanishes at $v^\xi = 1$ (the crossing direction), and all the components of $\mat{\tilde{T}}_\theta(n)$ converge to finite values as $n \uparrow 1$.
The components that converge at $\theta = \pm\epsilon$ as $n \uparrow 1$ are continuous for $\theta$ at $\theta = 0$ and $n = 1$. From Eq.~(\ref{eq:AsymptoticBehavior}), the other components for $\theta = \epsilon$ and $\theta = - \epsilon$ diverge to infinity with different signs from each other as $n \uparrow 1$. Although these components diverge, the differences \begin{align} c_\theta(n) \equiv& \tilde{T}_\theta^{t\xi}(n) - \tilde{T}_\theta^{tt}(n) = - \int \frac{d\vec{v}}{4\pi} G\left(\mat{R}_\zeta(\theta)\vec{v}\right)\dfrac{1 - v^\xi}{1 - n v^\xi}\\ d_\theta(n) \equiv& \tilde{T}_\theta^{\xi\xi}(n) - \tilde{T}_\theta^{tt}(n) = - \int \frac{d\vec{v}}{4\pi} G\left(\mat{R}_\zeta(\theta)\vec{v}\right)\dfrac{1 - (v^\xi)^2}{1 - n v^\xi} \end{align} converge to a finite value as $n \uparrow 1$, and hence, their limits as $\theta \uparrow 0$ and $\theta \downarrow 0$ coincide with each other at $n = 1$. Then, straightforward computation yields the asymptotic behavior \begin{align} -\frac{1}{2}\operatorname{tr}\mat{\tilde{T}}_\theta^2(n) \sim \left[2c_\theta(1) - d_\theta(1)\right]\tilde{T}_\theta^{tt}(n) \end{align} as $n \uparrow 1$. This is the coefficient of $\omega^2$ in Eq.~(\ref{eq:Quartic}) and takes positive values for either $\theta = \epsilon$ or $\theta = -\epsilon$ for sufficiently large $n$~\footnote{ If $d_0(1) - 2c_0(1)$ vanishes for all crossing directions, we have to consider subleading terms and/or other coefficients in Eq.~(\ref{eq:Quartic}). We do not go into further detail because such a case is quite special, with measure zero. }. Therefore, for at least one of $\theta = \epsilon$ or $\theta = -\epsilon$, there exist nonreal $\omega$ values for sufficiently large $n$, and lemma 2 has been proven. We note that the proof of the sufficient condition here is not valid for discrete spectra, unlike the case of the necessary condition. Whether the sufficient condition holds also for the discrete case is left for future research.
We focus on the distribution $G(\vec{v}) = 3(v^z)^2 - \frac{1}{4}$ to exemplify the above discussion (see Fig.~\ref{fig:Distribution}). In this case, all the points satisfying $v^z = \pm\frac{1}{2\sqrt{3}}$ are the crossing directions. Here, we choose $\{\vec{e}_\xi, \vec{e}_\eta, \vec{e}_\zeta\}$ as \begin{align} \begin{pmatrix} \vec{e}_\xi\\ \vec{e}_\eta\\ \vec{e}_\zeta \end{pmatrix} = \begin{pmatrix} 0 & \frac{\sqrt{11}}{2\sqrt{3}} & \frac{1}{2\sqrt{3}}\\ 0 & -\frac{1}{2\sqrt{3}} & \frac{\sqrt{11}}{2\sqrt{3}}\\ 1 & 0 & 0 \end{pmatrix} \begin{pmatrix} \vec{e}_x\\ \vec{e}_y\\ \vec{e}_z \end{pmatrix} \end{align} and define $\vec{e}_{\pm} \equiv \mat{R}_\zeta(\pm \pi/8)\vec{e}_\xi$. The DRs for $\vec{k}$ parallel to $\vec{e}_{\xi/\pm}$ are shown in Fig.~\ref{fig:DR}. We can confirm that nonreal $\omega$ values appear only for $\vec{k} = k\vec{e}_-$ and begin at the points at which $d\omega / dk$ diverges to infinity. We note that the solution $\omega(\vec{k})$ can cease to exist at large $|\vec{k}|$. If $\Delta(k)$ were holomorphic on $\mathbb{C}^4$, $\omega(\vec{k}) \in \mathbb{C}$ would exist for all $\vec{k}\in\mathbb{R}^3$. In reality, however, $\Delta(k)$ has a branch cut on $\omega \in (-|\vec{k}|, |\vec{k}|)$, which corresponds to the zone of avoidance, and the zeros of $\Delta(k)$ can terminate on the branch cut. Figure~\ref{fig:Asymptotics} shows the asymptotic behaviors of $\omega$ as $n \uparrow 1$ for $\vec{n}$ parallel to $\vec{e}_{\xi/\pm}$. For $\vec{n} = n\vec{e}_\xi$, all the branches of $\omega(\vec{n})$ converge to finite values as $n \uparrow 1$ because all the components of $\mat{T}$ converge. On the other hand, for $\vec{n} = n\vec{e}_+$, only 2 of them converge, and the remaining 2 logarithmically diverge to infinity because some components of $\mat{T}$ diverge, as shown in Eq.~(\ref{eq:AsymptoticBehavior}).
For $\vec{n} = n\vec{e}_-$, while 2 branches converge, the remaining 2 merge at $n \approx 1 - e^{-7.3}$, and $\omega$ becomes nonreal for $n$ larger than this branch point. For sufficiently large $n$, the number of real branches is therefore less than 4, the number of real branches of the $\vec{k} = \vec{0}$ mode, meaning that there is some branch point of $\omega(k\vec{e}_-)$ for $k \in \mathbb{R}$. \section{Conclusion} We showed that fast flavor instability is present if and only if the NFLN angular distributions of 2 flavors cross each other. To find fast instability, we have only to seek NFLN crossings. Conversely, once an NFLN crossing appears, the flavor coherence grows in the linear regime, and nonlinear oscillations are expected to begin after several times the linear growth timescale. We also find that unstable modes appear at least for $\vec{k}$ around the crossing directions. We must keep in mind that fast instabilities may not be captured if some symmetries are imposed a priori. Determining which modes are actually unstable is important for obtaining reasonable results when we conduct nonlinear calculations. To crystallize the effect of collective neutrino oscillations on astrophysical systems, nonlinear behaviors should also be elucidated. The resultant distributions after a sufficiently long time in the regions where instabilities have propagated might simply be flavor-decohered distributions. Whatever the results of the nonlinear evolutions are, it is important to accurately understand the behaviors in the linear regime, including how instabilities propagate in spacetime~\cite{Sturrock1958,Briggs1964,Landau1997,Capozzi2017,Yi2019,Morinaga2020,Morinaga2021}, and this study has achieved one of the major goals toward this understanding. \begin{acknowledgments} I am grateful to Shoichi Yamada for his valuable comments. I would also like to thank Georg Raffelt, Basudeb Dasgupta, Sajad Abbar, and Manu George for their useful discussions and comments.
I am supported by a JSPS Grant-in-Aid for JSPS Fellows (No. 19J21244) from the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan. \end{acknowledgments}
\section{Introduction} Let~$A\in\R^{n\times n}$. Fix~$k\in\{1,\dots,n\}$. The $k$-multiplicative and $k$-additive compounds of~$A$ play an important role in several fields of applied mathematics. Recently, there has been growing interest in the applications of these compounds, and their generalizations, in systems and control theory (see, e.g.~\cite{cheng_diag_stab,kordercont,rami_osci,rola_spect,CTPDS,margaliot2019revisiting, pines2021,Eyal_k_posi,DT_K_POSI,9107214, gruss1,gruss2,gruss3,gruss4,gruss5}). This tutorial paper reviews the~$k$-compounds, focusing on their geometric interpretations, and surveys some of their recent applications in systems and control theory, including~$k$-positive systems, $k$-cooperative systems, and~$k$-contracting systems. The results described here are known, albeit never collected in a single paper. The only exception is the new generalization principle described in Section~\ref{sec:k_generalizations}. We use standard notation. For a set~$S$, $\operatorname{int}(S)$ is the interior of~$S$. For scalars~$\lambda_i$, $i\in\{1,\dots, n\}$, $\operatorname{diag}(\lambda_1,\dots,\lambda_n)$ is the $n\times n$ diagonal matrix with diagonal entries~$\lambda_i$. Column vectors [matrices] are denoted by small [capital] letters. For a matrix~$A$, $A^T$ is the transpose of~$A$. For a square matrix~$B$, $\operatorname{tr}(B)$ [$\det(B)$] is the trace [determinant] of~$B$. $B$ is called Metzler if all its off-diagonal entries are non-negative. The positive orthant in~$\R^n$ is~$\R^n_+ = \{x\in\R^n: x_i\geq 0,\; i=1,\dots,n\}$. \section{Geometric motivation} $k$-compound matrices provide information on the evolution of $k$-dimensional polygons subject to linear time-varying dynamics. To explain this in a simple setting, consider the LTI system: \begin{equation}\label{eq:ltia} \dot x =\operatorname{diag}(\lambda_1,\lambda_2,\lambda_3)x , \end{equation} with~$\lambda_i\in\R$ and~$x:\R_+ \to\R^3$.
Let~$e^i$, $i=1,2,3$, denote the~$i$th canonical vector in~$\R^3$. For~$x(0)=e^i$ we have~$x(t)=\exp(\lambda_i t)x(0)$. Thus, $\exp(\lambda_i t)$ describes the rate of evolution of the line~$e^i$ subject to~\eqref{eq:ltia}. What about 2D areas? Let~$S\subset\R^3 $ denote the square generated by~$e^i$ and~$e^j$, with~$i\not =j$. Then~$S(t):= x(t,S )$ is the rectangle generated by~$\exp(\lambda_i t) e^i$ and~$\exp(\lambda_j t) e^j$, so the area of~$S(t)$ is~$\exp( (\lambda_i + \lambda_j) t)$. Similarly, if~$B \subset\R^3 $ is the 3D box generated by~$e^1,e^2$, and~$e^3$ then the volume of~$B(t):=x(t,B)$ is~$\exp( (\lambda_1+\lambda_2+\lambda_3)t)$ (see Fig.~\ref{fig:GeometricalEvolution}). Since~$\exp(At)=\operatorname{diag} ( \exp(\lambda_1 t), \exp(\lambda_2 t), \exp(\lambda_3 t) )$, this discussion suggests that it may be useful to have a~$3\times 3$ matrix whose eigenvalues are the products of any two eigenvalues of~$\exp(At)$, and a~$1\times 1$ matrix whose eigenvalue is the product of the three eigenvalues of~$\exp(At)$. With this geometric motivation in mind, we turn to recall the notions of the multiplicative and additive compounds of a matrix. For more details and proofs, see e.g.~\cite[Ch.~6]{fiedler_book}\cite{schwarz1970}. \begin{figure} \begin{center} \includegraphics[scale=0.33]{GeometricalEvolution.eps} \caption{The evolution of lines, areas, and volumes under the LTI~\eqref{eq:ltia}.} \label{fig:GeometricalEvolution} \end{center} \end{figure} \section{Compound Matrices} Let~$A\in\C^{n\times m }$. Fix~$k\in\{1,\dots,\min\{n,m\} \}$. Let~$Q(k,n)$ denote the set of increasing sequences of~$k$ integers in~$\{1,\dots,n\}$, ordered lexicographically. For example, \[ Q(2,3)=\{ \{1,2\} ,\{1,3\},\{2,3\} \}. \] For~$\alpha \in Q(k,n),\beta \in Q(k,m)$, let~$A[\alpha|\beta]$ denote the $ k\times k$ submatrix obtained by taking the entries of~$A$ in the rows indexed by~$\alpha$ and the columns indexed by~$\beta$.
For example, \[ A[ \{1,2\}|\{2,3\}]=\begin{bmatrix} a_{12} & a_{13}\\ a_{22} &a_{23} \end{bmatrix} . \] The minor of~$A $ corresponding to~$\alpha , \beta$ is~$A(\alpha|\beta):=\det (A[\alpha|\beta]) $. For example,~$Q(n,n)$ includes the single set~$\alpha=\{1,\dots,n\}$ and~$A(\alpha|\alpha)=\det(A)$. \begin{Definition}\label{def:multi} Let $A\in\C^{n\times m}$ and fix $k \in \{ 1,\dots,\min \{ n,m \} \}$. The \emph{$k$-multiplicative compound} of~$A$, denoted~$A^{(k)}$, is the~$\binom{n}{k}\times \binom{m}{k}$ matrix that contains all the~$k$-order minors of~$A$ ordered lexicographically. \end{Definition} For example, if~$n=m=3$ and~$k=2$ then \[ A^{(2)}= \begin{bmatrix} A(\{12\}|\{12\}) & A(\{12\}|\{13\}) & A(\{12\}|\{23\}) \\ A(\{13\}|\{12\}) & A(\{13\}|\{13\}) & A(\{13\}|\{23\}) \\ A(\{23\}|\{12\}) & A(\{23\}|\{13\}) & A(\{23\}|\{23\}) \end{bmatrix}. \] In particular, Definition~\ref{def:multi} implies that~$A^{(1)}=A$, and if~$n=m$ then~$A^{(n)}=\det(A)$. Let~$A\in\mathbb{C}^{n\times m},\; B\in\mathbb{C}^{m\times p}$. The Cauchy-Binet Formula (see e.g.~\cite{notes_comb_algebra}) asserts that for any $k\in \{1,\dots,\min \{n,m,p\} \}$, \begin{align}\label{eq:AB_MultComp} (AB)^{(k)} = A^{(k)} B^{(k)}. \end{align} Hence the term multiplicative compound. Note that for~$n=m=p$, Eq.~\eqref{eq:AB_MultComp} with~$k=n$ reduces to the familiar formula~$\det(AB)= \det (A) \det (B)$. Let~$I_s$ denote the~$s\times s$ identity matrix. Definition~\ref{def:multi} implies that~$I_n^{(k)} = I_r$, where~$r:=\binom{n}{k}$. Hence, if~$A\in\R^{n\times n}$ is non-singular then~$(AA^{-1})^{(k)}=I_r$ and combining this with~\eqref{eq:AB_MultComp} yields~$(A^{-1})^{(k)} = (A^{(k)})^{-1}$. In particular, if~$A$ is non-singular then so is~$A^{(k)}$. The~$k$-multiplicative compound has an important spectral property. For~$A\in\C^{n\times n}$, let~$\lambda_i$,~$i=1,\dots,n$, denote the eigenvalues of~$A$.
Then the eigenvalues of~$A^{(k)}$ are all the products \begin{equation}\label{eq:prodeig} \prod_{\ell=1}^k \lambda_{i_\ell}, \text{ with } 1\leq i_1< i_2<\dots< i_k \leq n. \end{equation} For example, suppose that~$n=3$ and $ A=\begin{bmatrix} a_{11} & a_{12} & a_{13}\\ 0 & a_{22} & a_{23}\\ 0 & 0 & a_{ 33} \end{bmatrix} . $ Then a calculation gives \[ A^{(2)}= \begin{bmatrix} a_{11} a_{22} & a_{11} a_{23}& a_{12} a_{23}-a_{13} a_{22} \\ 0& a_{11} a_{33}& a_{12} a_{33}\\ 0& 0& a_{22} a_{33} \end{bmatrix} , \] so, clearly, the eigenvalues of~$A^{(2)}$ are of the form~\eqref{eq:prodeig}. \begin{Definition} Let~$A\in\mathbb{C}^{n\times n}$. The \emph{$k$-additive compound} matrix of~$A$ is defined by: \begin{align*} A^{[k]} := \frac{d}{d\epsilon} (I+\epsilon A)^{(k)} |_{\epsilon=0} . \end{align*} \end{Definition} Note that this implies that~$ A^{[k]} = \frac{d}{d\epsilon} (\exp(A\epsilon ))^{(k)} |_{\epsilon=0}$, and also that \begin{align}\label{eq:(I+epsilonA)^k} (I+\epsilon A)^{(k)} = I + \epsilon A^{[k]} + o(\epsilon). \end{align} In other words,~$A^{[k]}$ is the first-order term in the Taylor expansion of~$(I+\epsilon A)^{(k)}$. Let~$\lambda_i$,~$i=1,\dots,n$, denote the eigenvalues of~$A$. Then the eigenvalues of~$(I+\epsilon A)^{(k)}$ are the products~$\prod_{\ell=1}^k (1+\epsilon \lambda_{i_\ell}) $, and~\eqref{eq:(I+epsilonA)^k} implies that the eigenvalues of~$A^{[k]}$ are all the sums \[ \sum_{\ell=1}^k \lambda_{i_\ell}, \text{ with } 1\leq i_1< i_2<\dots< i_k \leq n. \] Another important implication of the definitions above is that for any~$A,B\in\C^{n\times n}$ we have \[ (A+B)^{[k]}=A^{[k]}+B^{[k]}. \] This justifies the term additive compound. Moreover, the mapping~$A\to A^{[k]}$ is linear. The following result gives a useful explicit formula for~$A^{[k]}$ in terms of the entries~$a_{ij}$ of~$A$. Recall that any entry of~$A^{(k)}$ is a minor~$A(\alpha|\beta)$.
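These properties are easy to check numerically. The sketch below (the function name is ours, not from the literature) builds $A^{(k)}$ directly from its defining minors using NumPy, and verifies the Cauchy-Binet formula and the spectral property on a random example:

```python
import numpy as np
from itertools import combinations

def mult_compound(A, k):
    """k-multiplicative compound: all k-order minors of A, ordered lexicographically."""
    n, m = A.shape
    rows = list(combinations(range(n), k))  # Q(k, n), lexicographic order
    cols = list(combinations(range(m), k))  # Q(k, m)
    return np.array([[np.linalg.det(A[np.ix_(a, b)]) for b in cols] for a in rows])

rng = np.random.default_rng(0)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))

# Cauchy-Binet: (AB)^(k) = A^(k) B^(k)
assert np.allclose(mult_compound(A @ B, 2), mult_compound(A, 2) @ mult_compound(B, 2))

# The eigenvalues of A^(2) are the products of pairs of distinct eigenvalues of A.
eigA = np.linalg.eigvals(A)
eigA2 = np.linalg.eigvals(mult_compound(A, 2))
for i, j in combinations(range(4), 2):
    assert min(abs(eigA[i] * eigA[j] - eigA2)) < 1e-8
```

For $n=m$ the case $k=n$ collapses to a $1\times 1$ matrix containing $\det(A)$, so the Cauchy-Binet check above recovers $\det(AB)=\det(A)\det(B)$.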
Thus, it is natural to index the entries of~$A^{(k)}$ and~$A^{[k]}$ using~$\alpha,\beta \in Q(k,n)$. \begin{Proposition}\label{prop:Explicit_A_k} Fix~$\alpha,\beta \in Q(k,n)$ and let~$\alpha=\{i_1,\dots,i_k\}$ and~$\beta=\{j_1,\dots,j_k\}$. Then the entry of~$A^{[k]}$ corresponding to~$(\alpha,\beta)$ is equal to: \begin{enumerate} \item $\sum_{\ell=1}^{k} a_{ i_{\ell} i_{\ell} }$, if $i_{\ell} = j_{\ell}$ for all $\ell \in \{ 1,\hdots,k \}$; % \item $(-1)^{\ell +m} a_{i_{\ell} j_{m}} $, if all the indices in $ \alpha $ and $ \beta$ agree, except for a single index $ i_{\ell} \ne j_m$; and % \item $0$, otherwise. \end{enumerate} \end{Proposition} \begin{Example}\label{ex:A^[3]} For~$A\in\R^{4\times 4}$ and~$k=3$, Prop.~\ref{prop:Explicit_A_k} yields \[ A^{[3]} = \left[ \scalemath{0.67}{ \begin{array}{ccccccc} a_{11}+a_{22}+a_{33} & a_{34} & -a_{24} & a_{14} \\ a_{43} & a_{11}+a_{22}+a_{44} & a_{23} & -a_{13} \\ -a_{42} & a_{32} & a_{11}+a_{33}+a_{44} & a_{12} \\ a_{41} & -a_{31} & a_{21} & a_{22}+a_{33}+a_{44} \end{array}} \right]. \] The entry in the second row and fourth column of $A^{[3]}$ corresponds to $(\alpha | \beta) = ( \{ 1,2,4 \} | \{ 2,3,4 \} )$. As $\alpha$ and $\beta$ agree in all indices except for the index $i_{ 1} =1$ and $j_{ 2} =3$, this entry is equal to~$ (-1)^{1+2} a_{13} = -a_{13}$. \end{Example} Note that Prop.~\ref{prop:Explicit_A_k} implies in particular that~$A^{[n]}=\operatorname{tr}(A)$. The next section describes applications of compound matrices for dynamical systems described by ODEs. For more details and proofs, see~\cite{schwarz1970,muldowney1990}. \section{Compound Matrices and ODEs} Fix an interval~$[a,b] \subset \R$. Let~$A:[a,b] \to \R^{n\times n} $ be a continuous matrix function, and consider the LTV system: \begin{equation}\label{Eq:ltv} \dot x(t)=A(t)x(t),\quad x(a)=x_0.
\end{equation} The solution is given by~$x(t)=\Phi(t ,a)x(a)$, where~$\Phi(t,a )$ is the solution at time~$t$ of the matrix differential equation \begin{equation}\label{eq:phidot} \dot \Phi(s)=A(s) \Phi(s), \quad \Phi(a)=I_n. \end{equation} Fix~$k\in\{1,\dots,n\}$ and let~$r:=\binom{n}{k}$. A natural question is: how do the~$k$-order minors of~$\Phi(t)$ evolve in time? The next result provides a beautiful formula for the evolution of~$\Phi^{(k)}(t):= (\Phi(t))^{(k)}$. \begin{Proposition}\label{prop:odeforphik} If~$\Phi$ satisfies~\eqref{eq:phidot} then \begin{equation}\label{eq:expat} \frac{d}{dt} \Phi^{(k)}(t)=A^{[k]}(t) \Phi^{(k)}(t),\quad \Phi^{(k)}(a)=I_r, \end{equation} where~$A^{[k]}(t):= (A(t)) ^ {[k]} $. \end{Proposition} Thus, the~$k\times k$ minors of~$\Phi$ also satisfy an~LTV. In particular, if~$A(t)\equiv A$ and~$a=0$ then~$\Phi(t)=\exp( At) $ so~$\Phi^{(k)}(t)=(\exp(At))^{(k)}$ and~\eqref{eq:expat} gives \[ (\exp(At))^{(k)} = \exp(A ^{[k]} t). \] Note also that for~$k=n$, Prop.~\ref{prop:odeforphik} yields \[ \frac{d}{dt} \det(\Phi(t)) = \operatorname{tr}(A(t)) \det(\Phi(t)), \] which is the Abel-Jacobi-Liouville identity. Roughly speaking, Prop.~\ref{prop:odeforphik} implies that under the LTV dynamics~\eqref{Eq:ltv}, $k$-dimensional polygons evolve according to the dynamics~\eqref{eq:expat}. We now turn to consider the nonlinear system: \begin{equation}\label{eq:nonlin} \dot x (t) = f(t,x). \end{equation} For the sake of simplicity, we assume that the initial time is zero, and that the system admits a convex and compact state-space~$\Omega$. We also assume that~$f\in C^1$. The Jacobian of the vector field~$f$ is~$J(t,x):=\frac{\partial}{\partial x}f(t,x)$. Compound matrices can be used to analyze~\eqref{eq:nonlin} by using an LTV called the variational equation associated with~\eqref{eq:nonlin}. To define it, fix~$a,b \in \Omega$. Let~$z(t):=x(t,a)-x(t,b)$, and for~$s\in[0,1]$, let~$\gamma(s):=s x(t,a)+(1-s)x(t,b)$, i.e. 
the line connecting~$x(t,a)$ and~$x(t,b)$. Then \begin{align*} \dot z(t)&=f(t,x(t,a))-f(t,x(t,b)) \\ &=\int_0^1 \frac{\partial }{\partial s} f(t,\gamma(s)) \dif s , \end{align*} and this gives the variational equation: \begin{equation}\label{eq:var_eqn} \dot z(t)=A^{ab}(t)z(t), \end{equation} where \begin{equation}\label{eq:at_int} A^{ab}(t):=\int_0^1 J(t,\gamma(s)) \dif s. \end{equation} We can use the results above to describe a powerful approach for deriving useful ``$k$-generalizations'' of important classes of dynamical systems including cooperative systems~\cite{hlsmith}, contracting systems~\cite{sontag_cotraction_tutorial,LOHMILLER1998683}, and diagonally stable systems~\cite{diag_Stab_book}. \section{$k$-generalizations of dynamical systems}\label{sec:k_generalizations} Consider the LTV~\eqref{Eq:ltv}. Suppose that~$A(t)$ satisfies a specific \emph{property}, e.g. \emph{property} may be that~$A(t)$ is Metzler for all~$t$ (so the LTV is positive) or that~$\mu(A(t))\leq-\eta<0$ for all~$t$, where~$\mu:\R^{n\times n}\to\R$ is a matrix measure (so the~LTV is contracting). Fix~$k\in\{1,\dots,n\}$. We say that the LTV satisfies~\emph{$k$-property} if~$A^{[k]}$ (rather than~$A$) satisfies \emph{property}. For example, the LTV is~\emph{$k$-positive} if $A^{[k]}(t)$ is Metzler for all~$t$; the LTV is~\emph{$k$-contracting} if~$\mu(A^{[k]}(t))\leq-\eta<0$ for all~$t$, and so on.
This generalization approach makes sense for two reasons. First, when~$k=1$,~$A^{[k]}$ reduces to~$A^{[1]}=A$, so \emph{$k$-property} is clearly a generalization of \emph{property}. Second, we know that~$A^{[k]}$ has a clear geometric meaning, as it describes the evolution of~$k$-dimensional polygons along the dynamics. The same idea can be applied to the nonlinear system~\eqref{eq:nonlin} using the variational equation~\eqref{eq:var_eqn}. For example, if~$\mu(J(t,x))\leq -\eta<0$ for all~$t\geq 0$ and~$x\in\Omega$ then~\eqref{eq:nonlin} is contracting: the distance between any two solutions (in the norm that induces~$\mu$) decays at an exponential rate. If we replace this by the condition~$\mu(J^{[k]}(t,x))\leq -\eta<0$ for all~$t\geq 0$ and~$x\in\Omega$ then~\eqref{eq:nonlin} is called~$k$-contracting. Roughly speaking, this means that the volume of $k$-dimensional polygons decays to zero exponentially along the flow of the nonlinear system. We now turn to describe several such~$k$-generalizations. \section{$k$-contracting systems} $k$-contracting systems were introduced in~\cite{kordercont} (see also the unpublished preprint~\cite{weak_manchester} for some preliminary ideas). For~$k=1$ these reduce to contracting systems. This generalization was motivated in part by the seminal work of Muldowney~\cite{muldowney1990}, who considered nonlinear systems that, in the new terminology, are~$2$-contracting. He derived several interesting results for time-invariant $2$-contracting systems. For example, every bounded trajectory of a time-invariant, nonlinear, $2$-contracting system converges to an equilibrium (but, unlike in the case of contracting systems, the equilibrium is not necessarily unique). For the sake of simplicity, we introduce the ideas in the context of an LTV system. The analysis of nonlinear systems is based on assuming that their variational equation is a~$k$-contracting LTV.
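To experiment with such $k$-generalizations numerically, one needs~$A^{[k]}$ in explicit form. A direct transcription of Prop.~\ref{prop:Explicit_A_k} (a sketch assuming NumPy; the function name is ours):

```python
import numpy as np
from itertools import combinations

def add_compound(A, k):
    """k-additive compound of A, entry by entry, indexed by Q(k, n)."""
    n = A.shape[0]
    idx = list(combinations(range(n), k))  # Q(k, n), lexicographic order
    M = np.zeros((len(idx), len(idx)))
    for p, alpha in enumerate(idx):
        for q, beta in enumerate(idx):
            if alpha == beta:
                M[p, q] = sum(A[i, i] for i in alpha)
                continue
            extra_a = [i for i in alpha if i not in beta]
            extra_b = [j for j in beta if j not in alpha]
            if len(extra_a) == 1:  # alpha and beta agree except in a single index
                l, m = alpha.index(extra_a[0]), beta.index(extra_b[0])
                M[p, q] = (-1) ** (l + m) * A[extra_a[0], extra_b[0]]
            # otherwise the entry is 0
    return M

A = np.diag([1.0, 2.0, 3.0, 4.0])
# eigenvalues of A^[2] are all sums of two distinct eigenvalues of A
assert sorted(np.linalg.eigvals(add_compound(A, 2)).real) == [3, 4, 5, 5, 6, 7]
assert add_compound(A, 4).item() == np.trace(A)  # A^[n] = tr(A)
```

With $A^{[k]}$ in hand, checking a \emph{$k$-property} along a trajectory reduces to inspecting the computed matrix at each time, e.g. testing whether it is Metzler or evaluating a matrix measure of it.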
Recall that a vector norm $|\cdot |:\mathbb{R}^{n}\to\mathbb{R}_{+}$ induces a matrix norm $||\cdot ||:\mathbb{R}^{n\times n}\to\mathbb{R}_{+}$ by: \begin{align*} ||A||:=\max_{|x|=1} |Ax|, \end{align*} and a matrix measure $\mu (\cdot ):\mathbb{R}^{n\times n}\to\mathbb{R}$ by: \begin{align*} \mu (A) :=\lim_{\epsilon \downarrow 0} \frac{||I+\epsilon A||-1}{\epsilon}. \end{align*} For~$p\in\{1,2,\infty\}$, let~$|\cdot|_p:\R^n\to\R_+$ denote the~$L_p$ vector norm, that is, $|x|_1:=\sum_{i=1}^n|x_i|$, $|x|_2:=\sqrt{x^T x}$, and~$|x|_\infty:=\max_{ i\in\{1,\dots,n\} } |x_i|$. The induced matrix measures are~\cite{vid}: \begin{equation}\label{eq:matirx_measures_12infty} \begin{aligned} \mu_1(A) &= \max_{j\in\{1,\dots,n\}} (a_{jj}+ \sum_{\substack{i=1 \\ i \neq j}}^n |a_{ij}| ) , \\ \mu_2(A) &= \lambda_1 ( {A + A^T} )/2 , \\ \mu_{\infty}(A) &= \max_{i\in\{1,\dots,n\}} (a_{ii}+ \sum_{\substack{j=1 \\ j \neq i}}^n |a_{ij}| ) , \end{aligned} \end{equation} where~$\lambda_i (S)$ denotes the $i$-th largest eigenvalue of the symmetric matrix~$S$, that is, \[ \lambda_1(S) \geq \lambda_2(S) \geq \cdots \geq \lambda_n(S). \] \begin{Definition} The LTV~\eqref{Eq:ltv} is called~\emph{$k$-contracting} with respect to (w.r.t.) the norm~$|\cdot| $ if \begin{equation}\label{eq:kconinfi} \mu(A^{[k]}(t))\leq-\eta<0 \text{ for all } t \geq 0, \end{equation} where~$\mu$ is the matrix measure induced by~$|\cdot|$. \end{Definition} Note that for~$k=1$ condition~\eqref{eq:kconinfi} reduces to the standard infinitesimal condition for contraction~\cite{sontag_cotraction_tutorial}. For the~$L_p$ norms, with~$p\in\{1,2,\infty\}$, this condition is easy to check using the explicit expressions for~$\mu_1$, $\mu_2$, and~$\mu_\infty$. 
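These expressions translate directly into code. A small sketch of the three measures in~\eqref{eq:matirx_measures_12infty} (assuming NumPy; helper names are ours):

```python
import numpy as np

def mu1(A):
    # max over columns: diagonal entry plus absolute off-diagonal entries of that column
    n = A.shape[0]
    return max(A[j, j] + sum(abs(A[i, j]) for i in range(n) if i != j) for j in range(n))

def mu2(A):
    # largest eigenvalue of the symmetric part of A
    return float(np.linalg.eigvalsh((A + A.T) / 2)[-1])

def muinf(A):
    # max over rows: diagonal entry plus absolute off-diagonal entries of that row
    n = A.shape[0]
    return max(A[i, i] + sum(abs(A[i, j]) for j in range(n) if j != i) for i in range(n))

A = np.array([[-3.0, 1.0], [2.0, -4.0]])
assert mu1(A) == -1.0    # columns: -3 + 2 and -4 + 1
assert muinf(A) == -2.0  # rows: -3 + 1 and -4 + 2
assert np.isclose(mu2(A), -3.5 + np.sqrt(2.5))
```

Since all three measures of this $A$ are negative, the corresponding LTI system is contracting w.r.t. each of the three norms.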
This carries over to~$k$-contraction, as combining Prop.~\ref{prop:Explicit_A_k} with~\eqref{eq:matirx_measures_12infty} gives~\cite{muldowney1990}: \begin{align*} \mu_1(A^{[k]}) &= \max_{\alpha \in Q(k,n) } ( \sum_{p=1}^k a_{\alpha_p,\alpha_p} + \sum_{\substack{j \notin \alpha }}(|a_{j,\alpha_1}| + \cdots + |a_{j,\alpha_k}|) ) ,\nonumber \\ \mu_2(A^{[k]}) &= \sum_{i=1}^k \lambda_i ( {A + A^T} )/2 , \\ \mu_{\infty}(A^{[k]}) &= \max_{\alpha \in Q(k,n) } ( \sum_{p=1}^k a_{\alpha_p,\alpha_p} + \sum_{\substack{j \notin\alpha }}(|a_{\alpha_1,j}| + \cdots + |a_{\alpha_k,j}|) ) . \end{align*} For~$k=n$, $A^{[n]}$ is the scalar~$\operatorname{tr}(A)$, so condition~\eqref{eq:kconinfi} becomes~$\operatorname{tr}(A(t))\leq -\eta<0 $ for all $t \geq 0$. Combining Coppel's inequality~\cite{coppel1965stability} with~\eqref{eq:expat} yields the following result. \begin{Proposition} If the LTV~\eqref{Eq:ltv} is~$k$-contracting then \[ ||\Phi^{(k)}(t) || \leq \exp(-\eta t) || \Phi^{(k)}(0)|| = \exp(-\eta t) \] for all~$t\geq 0$. \end{Proposition} Geometrically, this means that under the LTV dynamics the volume of $k$-dimensional polygons converges to zero exponentially. The next example illustrates this. \begin{Example}\label{exa:squares} Consider the LTV~\eqref{Eq:ltv} with~$n=2$ and $ A(t) = \begin{bmatrix} -1 & 0 \\ -2\cos( t) &0 \end{bmatrix}. $ The corresponding transition matrix is: $ \Phi(t)= \begin{bmatrix} \exp(-t)&0 \\ -1+\exp(-t)(\cos(t)-\sin(t)) &1\end{bmatrix}. $ This implies that the LTV is uniformly stable, and that for any~$x(0)\in\R^2$ we have \[ \lim _{t\to \infty} x(t,x(0))=\begin{bmatrix} 0 \\ x_2(0)-x_1(0) \end{bmatrix} . \] The LTV is not contracting, w.r.t. any norm, as there is more than a single equilibrium. However,~$ A^{[2]}(t) =\operatorname{tr}(A(t)) \equiv -1$, so the system is~$2$-contracting. Let~$S\subset\R^2$ denote the unit square, and let~$S(t):=x(t,S )$, that is, the evolution at time~$t$ of the unit square under the dynamics. 
Fig.~\ref{fig:squares} depicts~$S(t) +2t$ for several values of~$t$, where the shift by~$2t$ is for the sake of clarity. It may be seen that the area of~$S(t)$ decays with~$t$, and that~$S(t)$ converges to a line. \end{Example} \begin{figure} \begin{center} \includegraphics[scale=0.5]{squares.eps} \caption{Evolution of the unit square in Example~\ref{exa:squares}.} \label{fig:squares} \end{center} \end{figure} As noted above, time-invariant $2$-contracting systems have a ``well-ordered'' asymptotic behaviour~\cite{muldowney1990,li1995}, and this has been used to derive a global analysis of important models from epidemiology (see, e.g.~\cite{SEIR_LI_MULD1995}). A recent paper~\cite{searial12contracting} extended some of these results to systems that are not necessarily~$2$-contracting, but can be represented as the serial interconnections of~$k$-contracting systems, with~$k\in\{1,2\}$. \section{$\alpha$-compounds and $\alpha$-contracting systems} A recent paper~\cite{pines2021} defined generalizations called the~$\alpha$-multiplicative compound and~$\alpha$-additive compound of a matrix, where~$\alpha$ is a \emph{real} number. Let~$A\in\C^{n\times n} $ be non-singular. If~$\alpha=k+s$, where~$k\in\{1,\dots,n-1\}$ and~$s\in(0,1)$ then the~$\alpha$-multiplicative compound of~$A$ is defined by: \[ A^{(\alpha)} : = (A^{(k)})^{1-s} \otimes (A^{(k+1)})^{s}, \] where~$\otimes$ denotes the Kronecker product. This is a kind of ``multiplicative interpolation'' between~$ A^{(k)} $ and~$A^{(k+1)} $. For example,~$A^{(2.1)} = (A^{(2)})^{0.9} \otimes (A^{(3)})^{0.1}$. The~$\alpha$-additive compound is defined just like the~$k$-additive compound, that is, \begin{align*} A^{[\alpha]} := \frac{d}{d\epsilon} (I+\epsilon A)^{(\alpha)} |_{\epsilon=0} , \end{align*} and it was shown in~\cite{pines2021} that this yields \[ A^{[\alpha]} = ((1-s)A^{[k]}) \oplus (sA^{[k+1]}), \] where~$\oplus $ denotes the Kronecker sum. The system~\eqref{eq:nonlin} is called \emph{$\alpha$-contracting} w.r.t.
the norm~$|\cdot| $ if \begin{equation} \label{eq:alphacon} \mu(J^{[\alpha]}(t,x))\leq -\eta<0 , \end{equation} for all~$t\geq 0$ and~all~$x$ in the state space~\cite{pines2021}. Using this notion, it is possible to restate the seminal results of Douady and Oesterl\'{e}~\cite{Douady1980} as follows. \begin{Theorem}\cite{pines2021} Suppose that~\eqref{eq:nonlin} is~$\alpha$-contracting for some~$\alpha\in [1,n]$. Then any compact and strongly invariant set of the dynamics has a Hausdorff dimension smaller than~$\alpha$. \end{Theorem} Roughly speaking, the dynamics contracts sets with a larger Hausdorff dimension. The next example, adapted from~\cite{pines2021}, shows how these notions can be used to ``de-chaotify'' a nonlinear dynamical system by feedback. \begin{Example} Thomas' cyclically symmetric attractor~\cite{thomas99,chaos_survey} is a popular example of a chaotic system. It is described by: \begin{align} \label{eq:thom} \dot x_1 =& \sin(x_2)-bx_1 , \nonumber \\ \dot x_2 =& \sin(x_3) - bx_2, \\ \dot x_3 =& \sin(x_1) - bx_3, \nonumber \end{align} where $b>0$ is the dissipation constant. The convex and compact set~$D: = \{x\in\R^3: b |x|_\infty \leq 1 \}$ is an invariant set of the dynamics. Fig.~\ref{fig:chaos} depicts the solution of the system emanating from~$\begin{bmatrix} 1& -2&1\end{bmatrix}^T$ for $ b=0.1$. Note the symmetric strange attractor. The Jacobian $J_f$ of the vector field in~\eqref{eq:thom} is \begin{align*} J_f (x)=\begin{bmatrix}-b&\cos(x_2)&0 \\ 0&-b&\cos(x_3) \\ \cos(x_1)&0&-b\end{bmatrix}, \end{align*} and thus \begin{align*} J_f ^{[2]}(x)=\begin{bmatrix}-2b&\cos(x_3)&0 \\ 0&-2b&\cos(x_2) \\ -\cos(x_1)&0&-2b\end{bmatrix}, \end{align*} and~$ J_f ^{[3]}(x)=\operatorname{tr}(J_f (x))=-3b$. Since~$b>0$, this implies that the system is $3$-contracting w.r.t. any norm. Let~$\alpha = 2+s$, with~$s\in(0,1)$.
Then \begin{align*} &J_f ^{[\alpha]}(x) =(1-s)J_f^{[2]}(x)\oplus sJ_f^{[3]}(x)\\ &= \begin{bmatrix} -(2+s)b & (1-s)\cos(x_3) & 0 \\ 0 & -(2+s)b & (1-s)\cos(x_2) \\ -(1-s)\cos(x_1) & 0 & -(2+s)b \\ \end{bmatrix}. \end{align*} This implies that \begin{equation}\nonumber \mu_{1 }(J_f ^{[\alpha]}(x)) \leq 1-2b-s(b+1), \text{ for all } x\in D. \end{equation} We conclude that for any~$b\in(0,1/2) $ the system is~$(2+s)$-contracting for any $ s>\frac{1-2b}{1+b} $. We now show how $\alpha$-contraction can be used to design a partial-state controller for the system guaranteeing that the closed-loop system has a ``well-ordered'' behaviour. Suppose that the closed-loop system is: \[ \dot x = f(x) + g(x), \] where~$g $ is the controller. Let~$\alpha = 2 + s$, with~$s \in(0,1)$. The Jacobian of the closed-loop system is~$J_{cl}:=J_f+J_g$, so \begin{align*} \mu_1(J_{cl}^{[\alpha]})& = \mu_1(J_f^{[\alpha]}+J_g^{[\alpha]}) \\& \leq \mu_1(J_f^{[\alpha]})+\mu_1(J_g^{[\alpha]}) \\ &\leq 1-2b-s(b+1)+\mu_1(J_g^{[\alpha]}). \end{align*} This implies that the closed-loop system is~$\alpha$-contracting if \begin{align}\label{eq:cond_cont} \mu_1(J^{[\alpha]}_g (x) ) < s(b+1)+2b-1 \text{ for all } x\in D . \end{align} Consider, for example, the controller $ g(x)=c \operatorname{diag}(1,1,0) x $, with gain~$c<0$. Then $ J_g^{[\alpha]} = c\operatorname{diag}( 2 ,1+ s,1+ s ) $ and for any~$c<0$ condition~\eqref{eq:cond_cont} becomes \begin{align}\label{eq:cond_cont1} (1+ s)c < s(b+1)+2b-1. \end{align} This provides a simple recipe for determining the gain~$c$ so that the closed-loop system is~$(2+s)$-contracting. For example, when~$s \to 0$, Eq.~\eqref{eq:cond_cont1} yields $ c<2b-1 $, and this guarantees that the closed-loop system is~$2$-contracting. Recall that in a~$2$-contracting system every nonempty omega limit set is a single equilibrium, thus ruling out chaotic attractors and even non-trivial limit cycles~\cite{li1995}.
Fig.~\ref{fig:chaos_closed} depicts the behaviour of the closed-loop system with~$b=0.1$ and~$c=2b-1.1$. The closed-loop system is thus $2$-contracting, and as expected every solution converges to an equilibrium. \end{Example} \begin{figure}[t] \begin{center} \includegraphics[width=8cm,height=6cm]{THOMAS.eps} \caption{A trajectory emanating from~$x(0)=\begin{bmatrix} 1& -2&1 \end{bmatrix}^T$. }\label{fig:chaos} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=8cm,height=6cm]{CLOSED_LOOP_THOMAS.eps} \caption{Several trajectories of the closed-loop system. The circles denote the initial conditions of the trajectories. }\label{fig:chaos_closed} \end{center} \end{figure} \section{$k$-positive systems} Ref.~\cite{Eyal_k_posi} introduced the notions of~$k$-positive and~$k$-cooperative systems. The LTV~\eqref{Eq:ltv} is called~$k$-positive if~$A^{[k]}(t)$ is Metzler for all~$t$. For~$k=1$ this reduces to requiring that~$A(t)$ is Metzler for all~$t$. In this case the system is positive i.e. the flow maps~$\R^n_+$ to~$\R^n_+$ (and also~$\R^n_-:=-\R^n_+$ to~$\R^n_-$)~\cite{farina2000}. In other words, the flow maps the set of vectors with zero sign variations to itself. $k$-positive systems map the set of vectors with up to~$k-1$ sign variations to itself. To explain this, we recall some definitions and results from the theory of totally positive~(TP) matrices, that is, matrices whose minors are all positive~\cite{total_book,pinkus}. For a vector~$x\in\R^n\setminus\{0\}$, let~$s^-(x) $ denote the number of sign variations in~$x$ after deleting all its zero entries. For example,~$s^-(\begin{bmatrix}-1&0&0&2&-3\end{bmatrix}^T)=2$. We define~$s^-(0):=0$. For a vector~$x\in\R^n $, let~$s^+(x)$ denote the maximal possible number of sign variations in~$x$ after setting every zero entry in~$x$ to either~$-1$ or~$+1$. For example,~$s^+(\begin{bmatrix}-1&0&0&2&-3\end{bmatrix}^T)=4$. 
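The two sign-variation counts can be computed by brute force (a sketch; the enumeration over zero entries is exponential, so it is only meant for short vectors):

```python
from itertools import product

def s_minus(x):
    """Number of sign variations after deleting zero entries; s^-(0) := 0."""
    nz = [v for v in x if v != 0]
    return sum(1 for a, b in zip(nz, nz[1:]) if a * b < 0)

def s_plus(x):
    """Maximal sign variations over all ways of setting zero entries to +1 or -1."""
    zeros = [i for i, v in enumerate(x) if v == 0]
    best = 0
    for signs in product((-1, 1), repeat=len(zeros)):
        y = list(x)
        for i, s in zip(zeros, signs):
            y[i] = s
        best = max(best, s_minus(y))
    return best

x = [-1, 0, 0, 2, -3]
assert s_minus(x) == 2 and s_plus(x) == 4  # the two examples above
assert 0 <= s_minus(x) <= s_plus(x) <= len(x) - 1
```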
These definitions imply that $ 0\leq s^-(x)\leq s^+(x)\leq n-1$, for all $ x\in\R^n. $ For any $k\in \{1,\dots,n\}$, define the sets \begin{align}\label{eq:Pksets} P_{-}^{k} &:= \{ z\in\mathbb{R}^{n}: \; s^{-} (z) \le k-1 \},\nonumber \\ P_{+}^{k} &:= \{ z\in \mathbb{R}^{n}: \; s^{+} (z) \le k-1 \}. \end{align} In other words, these are the sets of all vectors with up to~$k-1$ sign variations. Then~$P_{-}^{k}$ is closed, and it can be shown that~$P_{+}^{k}=\operatorname{int}(P_{-}^{k})$. For example, \begin{align*} P_{-}^{1} = \mathbb{R}_{+}^{n} \cup \mathbb{R}_{-}^{n}, \;\;\; P_{+}^{1} = \operatorname{int}(\mathbb{R}_{+}^{n}) \cup \operatorname{int}( \mathbb{R}_{-}^{n}). \end{align*} \begin{Definition} The LTV~\eqref{Eq:ltv} is called \emph{$k$-positive} on an interval~$[a,b]$ if for any~$a<t_0 < b$, \[ x(t_0)\in P^k_- \implies x(t,x(t_0)) \in P^k_- \text{ for all } t_0 \leq t <b , \] and is called \emph{strongly $k$-positive} if \[ x(t_0)\in P^k_- \implies x(t,x(t_0)) \in P^k_+ \text{ for all } t_0 < t < b. \] \end{Definition} In other words, the sets of vectors with up to~$k-1$ sign variations are invariant sets of the dynamics. An important property of TP matrices is their sign variation diminishing property: if~$A\in\R^{n\times n}$ is~TP and~$x\in\R^n\setminus\{0\}$ then~$s^+(Ax)\leq s^-(x)$. In other words, multiplying a vector by a TP matrix cannot increase the number of sign variations. For our purposes, we need a more specialized result. Recall that~$A\in\R^{n\times n}$ is called \emph{sign-regular of order~$k$} if its minors of order~$k$ are all non-positive or all non-negative, and \emph{strictly sign-regular of order~$k$} if its minors of order~$k$ are all positive or all negative. \begin{Proposition}\label{thm:BenAvraham_SSRk} \cite{CTPDS} Let $A\in\mathbb{R}^{n\times n}$ be a non-singular matrix. Pick $k\in \{1,\dots,n\}$.
Then the following two conditions are equivalent: \begin{enumerate} \item For any $x\in\mathbb{R}^{n} $ with $s^{-}(x) \le k-1$, we have~$s^{-} (Ax) \le k-1$. \item $A$ is sign-regular of order~$k$. \end{enumerate} Also, the following two conditions are equivalent: \begin{enumerate}[I.] \item For any $x\in\mathbb{R}^{n} \setminus \{ 0 \}$ with $s^{-}(x) \le k-1$, we have~$s^{+} (Ax) \le k-1$. \item $A$ is strictly sign-regular of order~$k$. \end{enumerate} \end{Proposition} These tools allow us to characterize the behaviour of~$k$-positive LTVs. \begin{Theorem}\label{thm:k_pois_ltv} The LTV~\eqref{Eq:ltv} is~$k$-positive on~$[a,b]$ iff~$A^{[k]}(t)$ is Metzler for all~$t\in (a,b)$. It is strongly $k$-positive on~$[a,b]$ iff~$A^{[k]}(t)$ is Metzler for all~$t\in(a,b)$, and~$A^{[k]}(t)$ is irreducible for all~$t \in(a,b)$ except, perhaps, at isolated time points. \end{Theorem} The proof is simple. Consider, for example, the second assertion in the theorem. The Metzler and irreducibility assumptions imply that the matrix differential system~\eqref{eq:expat} is a positive linear system, and furthermore, that all the entries of~$\Phi^{(k)}(t,t_0)$ are positive for all~$t>t_0$. Thus,~$\Phi(t,t_0)$ is strictly sign-regular of order~$k$ for all~$t>t_0$. Since~$x(t,x(t_0))=\Phi(t,t_0)x(t_0)$, applying Prop.~\ref{thm:BenAvraham_SSRk} completes the proof. This line of reasoning demonstrates a general and useful principle, namely: given conditions on~$A^{[k]}$, we can apply standard tools from dynamical systems theory to the ``$k$-compound dynamics''~\eqref{eq:expat}, and deduce results on the behaviour of the solution~$x(t)$ of~\eqref{Eq:ltv}. A natural question is: when is~$A^{[k]}$ a Metzler matrix? This can be answered using Prop.~\ref{prop:Explicit_A_k} in terms of sign pattern conditions on the entries~$a_{ij}$ of~$A$.
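For small examples, $A^{[k]}$ can also be computed numerically from the multiplicative compound via the standard relation $(I+hA)^{(k)}=I+hA^{[k]}+O(h^2)$. The following pure-Python sketch (illustrative; hard-wired to $n=3$, $k=2$, with a finite-difference step $h$) does this and checks the Metzler property for a sample matrix:

```python
from itertools import combinations

def det2(A, rows, cols):
    # 2x2 minor of A with the given row/column index pairs
    (i1, i2), (j1, j2) = rows, cols
    return A[i1][j1]*A[i2][j2] - A[i1][j2]*A[i2][j1]

def mult_compound2(A):
    # 2nd multiplicative compound: all 2x2 minors, lexicographic index order
    idx = list(combinations(range(len(A)), 2))
    return [[det2(A, r, c) for c in idx] for r in idx]

def add_compound2(A, h=1e-7):
    # A^[2] = d/dh (I + h A)^(2) at h = 0, approximated by finite differences
    n = len(A)
    IhA = [[(i == j) + h*A[i][j] for j in range(n)] for i in range(n)]
    C = mult_compound2(IhA)
    m = len(C)
    return [[(C[i][j] - (i == j))/h for j in range(m)] for i in range(m)]

def is_metzler(M, tol=1e-4):
    return all(M[i][j] >= -tol for i in range(len(M))
               for j in range(len(M)) if i != j)

A = [[-1, 1, -2], [0, 1, 0.1], [-3, 0, 1]]   # not Metzler (negative a_13, a_31)
A2 = add_compound2(A)                        # approx. [[0, 0.1, 2], [0, 0, 1], [3, 0, 2]]
```

The finite difference incurs an $O(h)$ error, so an exact formula for $A^{[k]}$ should be preferred when available; the sketch only illustrates the defining relation between the two compounds.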
This is useful because in fields like chemistry and systems biology, exact values of various parameters are typically unknown, but their signs may be inferred from structural properties of the system~\cite{sontag_near_2007}. \begin{Proposition}\label{prop:sign_pattern_for_k_posi} Let~$A \in\R^{n\times n}$ with~$n\geq 3$. Then \begin{enumerate} \item \label{case:nminus1} $A^{[ n-1]}$ is Metzler iff $a_{ij}\geq 0$ for all~$i,j$ with~$i-j$ odd, and~$a_{ij}\leq 0$ for all~$i,j$ with~$i\not =j$ and~$i-j$ even; \item \label{case:kodd} for any odd~$k $ in the range~$1<k<n-1$, $A^{[k]}$ is Metzler iff $a_{1n},a_{n1}\geq 0$, $a_{ij}\geq 0$ for all~$|i-j|=1$, and~$a_{ij}=0$ for all~$1<|i-j|<n-1$; \item \label{case:keven} for any even~$k $ in the range~$1<k<n-1$, $A^{[k]}$ is Metzler iff $a_{1n},a_{n1}\leq 0$, $a_{ij}\geq 0$ for all~$|i-j|=1$, and~$a_{ij}=0$ for all~$1<|i-j|<n-1$. \end{enumerate} \end{Proposition} In Case~\ref{case:nminus1}) there exists a non-singular matrix~$T$ such that~$-TAT^{-1}$ is Metzler. In other words, there exists a coordinate transformation such that in the new coordinates the dynamics is competitive. Thus, $k$-positive systems, with~$k\in\{1,\dots,n-1\}$, may be viewed as a kind of interpolation from cooperative to competitive systems. In Case~\ref{case:kodd}),~$A$ is in particular Metzler. Case~\ref{case:keven}) is illustrated in the next example. \begin{Example} Consider the case~$n=3$ and~$A=\begin{bmatrix} -1& 1& -2\\ 0& 1& 0.1 \\ -3& 0 &1 \end{bmatrix} $. Note that~$A$ is not Metzler, yet $ A^{[2]}=\begin{bmatrix} 0& 0.1& 2\\ 0& 0& 1 \\ 3& 0 &2 \end{bmatrix} $ is Metzler (and irreducible). Thm.~\ref{thm:k_pois_ltv} guarantees that for any~$x(0)$ with~$s^-(x(0))\leq 1$, we have \begin{equation}\label{eq:boundsmin} s^-(x(t,x(0)))\leq 1\text{ for all }t\geq 0. \end{equation} Fig.~\ref{fig:signs} depicts~$s^-(x(t,x(0)))=s^-(\exp(At)x(0))$ for~$x(0)=\begin{bmatrix} 4&-21&-1 \end{bmatrix}^T$. Note that~$s^-(x(0))=1$.
It may be seen that~$s^-(x(t,x(0)))$ decreases and then increases, but always satisfies the bound~\eqref{eq:boundsmin}. \end{Example} \begin{figure} \begin{center} \includegraphics[scale=0.5]{sign_changes.eps} \caption{ $s^-(x(t,x(0)))$ as a function of~$t$.} \label{fig:signs} \end{center} \end{figure} \subsection{Totally positive differential systems} A matrix~$A\in\R^{n\times n} $ is called a \emph{Jacobi matrix} if~$A$ is tri-diagonal with positive entries on the super- and sub-diagonals. An immediate implication of Prop.~\ref{prop:sign_pattern_for_k_posi} is that~$A^{[k]}$ is Metzler and irreducible for all~$k\in\{1,\dots,n-1\}$ iff~$A$ is Jacobi. It then follows that for any~$t>0$ the matrices~$(\exp (At) )^{(k)} $, $k=1,\dots,n$, are positive, that is,~$\exp(At)$ is TP for all~$t>0$. Combining this with Thm.~\ref{thm:k_pois_ltv} yields the following. \begin{Proposition}\cite{schwarz1970}\label{thm:TP} The following two conditions are equivalent. \begin{enumerate} \item $A$ is Jacobi. \item for any~$x_0\in\R^n \setminus\{0\}$ the solution of the LTI~$\dot x(t)=Ax(t)$, $x(0)=x_0$, satisfies \[ s^+(x(t, x_0) )\leq s^-(x_0) \text{ for all } t>0. \] \end{enumerate} \end{Proposition} In other words,~$s^-(x(t,x_0))$ and also~$s^+(x(t,x_0))$ are non-increasing functions of~$t$, and may thus be considered as piecewise-constant Lyapunov functions for the dynamics. Prop.~\ref{thm:TP} was proved by Schwarz~\cite{schwarz1970}, though he considered only linear systems. It was recently shown~\cite{margaliot2019revisiting} that important results on the asymptotic behaviour of time-invariant and periodic time-varying nonlinear systems with a Jacobian that is a Jacobi matrix for all~$t,x$~\cite{smillie,periodic_tridi_smith} follow from the fact that the associated variational equation is a totally positive~LTV.
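The sign-variation-diminishing behaviour described in Prop.~\ref{thm:TP} can be observed numerically. The Python sketch below is illustrative only: the Jacobi matrix, the initial condition, and the plain Taylor-series matrix exponential are all choices made for this example.

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k]*B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_vec(A, x):
    return [sum(A[i][j]*x[j] for j in range(len(x))) for i in range(len(x))]

def expm(A, terms=60):
    # Taylor-series matrix exponential (adequate for this small example)
    n = len(A)
    E = [[float(i == j) for j in range(n)] for i in range(n)]
    P = [row[:] for row in E]
    for k in range(1, terms):
        P = [[v/k for v in row] for row in mat_mul(P, A)]   # A^k / k!
        E = [[E[i][j] + P[i][j] for j in range(n)] for i in range(n)]
    return E

def s_minus(x):
    signs = [1 if v > 0 else -1 for v in x if v != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# a Jacobi matrix: tridiagonal, with positive super- and sub-diagonal entries
A = [[-2.0, 1.0, 0.0], [1.0, -2.0, 1.0], [0.0, 1.0, -2.0]]
x0 = [1.0, -1.0, 1.0]                      # s_minus(x0) == 2
counts = [s_minus(mat_vec(expm([[a*t for a in row] for row in A]), x0))
          for t in [0.1*k for k in range(1, 31)]]
# counts starts at 2 and drops to 0, never increasing along the way
```

Here $s^-(x(t,x_0))$ drops from $2$ to $0$ once the slowest-decaying mode, whose eigenvector has no sign variations, dominates, and it never increases along the trajectory.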
\section{$k$-cooperative systems} We now review the applications of~$k$-positivity to the time-invariant nonlinear system: \begin{equation}\label{eq:time_invariant_non_linear} \dot x=f(x), \end{equation} with~$f\in C^1$. Let~$J(x):=\frac{\partial }{\partial x}f(x)$. We assume that the trajectories of~\eqref{eq:time_invariant_non_linear} evolve on a convex and compact state-space~$\Omega\subseteq\R^n$. Recall that~\eqref{eq:time_invariant_non_linear} is called \emph{cooperative} if~$J(x)$ is Metzler for all~$x\in \Omega$. In other words, the variational equation associated with~\eqref{eq:time_invariant_non_linear} is positive. The slightly stronger condition of strong cooperativity has far-reaching implications. By Hirsch's quasi-convergence theorem~\cite{hlsmith}, almost every bounded trajectory converges to the set of equilibria. It is natural to define~$k$-cooperativity by requiring that the variational equation associated with~\eqref{eq:time_invariant_non_linear} is~$k$-positive. \begin{Definition}\label{def:k-coop}\cite{Eyal_k_posi} The nonlinear system~\eqref{eq:time_invariant_non_linear} is called \emph{[strongly] $k$-cooperative} if the associated LTV~\eqref{eq:var_eqn} is [strongly]~$k$-positive for any~$a,b\in\Omega$. \end{Definition} Note that for~$k=1$ this reduces to the definition of a cooperative [strongly cooperative] dynamical system. One immediate implication of Definition~\ref{def:k-coop} is the existence of certain invariant sets of the dynamics. \begin{Proposition} \label{prop:inhgt} Suppose that~\eqref{eq:time_invariant_non_linear} is~$k$-cooperative. Pick~$a,b\in\Omega$. Then \[ a-b\in P^k_- \implies x(t,a)-x(t,b) \in P^k_- \text{ for all } t \geq 0. \] If, furthermore,~$0 \in \Omega $ and~$0$ is an equilibrium point of~\eqref{eq:time_invariant_non_linear}, i.e.,~$f(0)=0$, then \[ a \in P^k_- \implies x(t,a) \in P^k_- \text{ for all } t \geq 0.
\] \end{Proposition} The sign pattern conditions in Prop.~\ref{prop:sign_pattern_for_k_posi} can be used to provide simple-to-verify sufficient conditions for [strong] $k$-cooperativity of~\eqref{eq:time_invariant_non_linear}. Indeed, if~$J(x)$ satisfies a sign pattern condition for all~$x\in \Omega$ then the integral of~$J$ in the variational equation~\eqref{eq:var_eqn} satisfies the same sign pattern, and thus so does~$A^{ab}$. The next example, adapted from~\cite{Eyal_k_posi}, illustrates this. \begin{Example} Ref.~\cite{Elkhader1992} studied the nonlinear system \begin{align}\label{eq:alexsys} \dot x_1&=f_1(x_1,x_n),\nonumber\\ \dot x_i &= f_i(x_{i-1},x_i,x_{i+1}), \quad i=2,\dots,n-1,\nonumber\\ \dot x_n&=f_n(x_{n-1},x_n), \end{align} with the following assumptions: the state-space~$\Omega\subseteq\R^n$ is convex, $f_i\in C^{n-1}$, $i=1,\dots,n$, and there exist~$\delta_i\in\{-1,1\}$, $i=1,\dots,n$, such that \begin{align*} \delta_1\frac{\partial }{\partial x_n}f_1(x) &>0,\\ \delta_2\frac{\partial }{\partial x_1}f_2(x) , \delta_3\frac{\partial }{\partial x_3}f_2(x)&>0,\\ &\vdots\\ \delta_{n-1} \frac{\partial }{\partial x_{n-2}}f_{n-1}(x), \delta_n \frac{\partial }{\partial x_{n}}f_{n-1}(x) &>0,\\ \delta_n \frac{\partial }{\partial x_{n-1}}f_n(x)&>0, \end{align*} for all~$x\in\Omega$. This is a generalization of the monotone cyclic feedback system analyzed in~\cite{poin_cyclic}. As noted in~\cite{Elkhader1992}, we may assume without loss of generality that~$\delta_2=\delta_3=\dots=\delta_n=1$ and~$\delta_1 \in \{-1,1\}$. Then the Jacobian of~\eqref{eq:alexsys} has the form \[ J(x)=\begin{bmatrix} *& 0 &0 &0 &\dots & 0 & 0 & \operatorname{sgn}(\delta_1) \\ >0 & * &>0 &0 &\dots & 0& 0 & 0 \\ 0& >0 & * &>0 &\dots &0& 0 & 0 \\ &&&\vdots\\ 0& 0 & 0 & 0 &\dots & 0 &>0& * \\ \end{bmatrix} , \] for all~$x\in \Omega$. Here~$*$ denotes ``don't care''. Note that~$J(x)$ is irreducible for all~$x\in \Omega$.
If~$\delta_1=1$ then~$J(x)$ is Metzler, so the system is strongly~$1$-cooperative. If~$\delta_1=-1$ then~$J(x)$ satisfies the sign pattern in Case~\ref{case:keven} in Prop.~\ref{prop:sign_pattern_for_k_posi}, so the system is strongly $2$-cooperative. (If~$n$ is odd then~$J(x)$ also satisfies the sign pattern in Case~\ref{case:nminus1}, so there is a coordinate transformation for which the system is also strongly competitive.) \end{Example} The main result in~\cite{Eyal_k_posi} is that strongly~$2$-cooperative systems satisfy a strong Poincar\'{e}-Bendixson property. \begin{Theorem} \label{thm:2dim} Suppose that~\eqref{eq:time_invariant_non_linear} is strongly~$2$-cooperative. Pick~$a\in \Omega$. If the omega limit set~$\omega(a)$ does not include an equilibrium then it is a closed orbit. \end{Theorem} The proof of this result is based on the seminal results of Sanchez~\cite{sanchez2009cones}. Yet it is considerably stronger than the main result in~\cite{sanchez2009cones}, as it applies to \emph{any} trajectory emanating from~$\Omega$ and not only to so-called \emph{pseudo-ordered} trajectories (see the definition in~\cite{sanchez2009cones}). The Poincar\'{e}-Bendixson property is useful because it can often be combined with a local analysis near the equilibrium points to provide a global picture of the dynamics. For a recent application of Thm.~\ref{thm:2dim} to a model from systems biology, see~\cite{Margaliot868000}. \section{Conclusion} $k$-compound matrices describe the evolution of~$k$-dimensional polygons along an LTV dynamics. This geometric property has important consequences in systems and control theory. This holds both for LTVs and for time-varying nonlinear systems, as their variational equation is an LTV. Due to space limitations, we considered here only a partial list of applications.
Another application, for example, is based on generalizing diagonal stability for the LTI~$\dot x= Ax$ to~\emph{$k$-diagonal stability} by requiring that there exists a diagonal and positive-definite matrix~$D$ such that $ D A^{[k]}+(A^{[k]})^T D $ is negative-definite~\cite{cheng_diag_stab}. A further interesting line of research is based on analyzing systems with inputs and outputs. A SISO system is called \emph{externally~$k$-positive} if any input with up to~$k$ sign variations induces an output with up to~$k$ sign variations~\cite{gruss1,gruss2,gruss3,gruss4,gruss5}. For LTIs with a zero initial condition the input-output mapping is described by a convolution with the impulse response, and external~$k$-positivity is then related to interesting results in statistics~\cite{ibragimov1956} and the theory of infinite-dimensional linear operators~\cite{karlin_tp}.
\section{Introduction} \label{sec:introduction} From their first applications in photon detection\cite{Day:2003fk}, Kinetic Inductance Detectors (KIDs) rapidly became the subject of several R\&D activities in different physics sectors, such as astrophysics\cite{Monfardini2011,Mazin:2013wvi} and the search for dark matter interactions\cite{Golwala2008}, proving to be very versatile devices\cite{zmu_annrev2012}. In superconductors biased with an AC current, Cooper pairs oscillate and acquire kinetic inductance. Interactions with energy larger than the binding energy of Cooper pairs (2$\Delta_0$) can break them into quasiparticles, producing a variation in the kinetic inductance. The superconductor is inserted in a high-merit-factor LC circuit, and the energy of the interactions can be reconstructed by monitoring the variations in the resonance parameters. The key features of KIDs are their good intrinsic energy resolution and the possibility of easily tuning each detector to a different resonant frequency. This natural aptitude for frequency-multiplexed read-out allows the operation of a few hundred KIDs with two cables and one cryogenic amplifier. In addition, since the electronics is located at room temperature, the installation of KID arrays would require minor modifications to the pre-existing cryogenic facilities. Another important advantage of these devices is that they are operated well below the superconductor critical temperature, where the quasiparticle lifetime and the internal quality factor saturate, resulting in a negligible dependence on the temperature. Light detectors (LDs) based on KIDs could enable particle identification, and therefore background suppression\cite{CUPIDRD,CUPIDsci}, in bolometric experiments searching for neutrino-less double beta decay (0$\nu\beta\beta$) or dark matter interactions, such as CUORE\cite{Artusa:2014lgv} and LUCIFER\cite{Beeman:2013sba}.
The present limitation of KIDs in this kind of application is their small active surface (a few mm$^2$). Macro-bolometers used by CUORE and LUCIFER feature a surface of several cm$^2$; an effective read-out of the light emitted by these crystals therefore demands LDs with a large active area. Obtaining a similar surface with KIDs would require hundreds of pixels for each LD, which is not a realistic option for experiments with $\sim$1000 detectors. This problem can be overcome by following the phonon-mediated approach developed for Neutron Transmutation Doped Ge thermistors\cite{Fiorini1984} and for Transition Edge Sensors\cite{Proebst1995}, and recently proposed also for KIDs\cite{swenson,moore2}. The CALDER\ project\cite{CalderWhitePaper} aims to demonstrate that such an approach allows the development of large-area LDs with a noise energy resolution $<$20\,eV RMS, multiplexed read-out, high reproducibility and scalability. In this paper we present the results of a CALDER\ development prototype, implementing four aluminum resonators. We perform an absolute calibration of the energy absorbed by the KIDs and derive the efficiency of the detector. We characterize the response with photons at 400\,nm and X-rays from a $^{57}$Co\ source. Finally, we analyze and evaluate the energy resolution of the detector. \section{Detector Fabrication} \label{sec:Detector} The detectors are fabricated at CNR IFN on high-quality, 300\,$\mu$m\ thick, high-resistivity ($>$10\,k$\Omega\times$cm) Si(100) substrates. The four lumped-element resonators (in the following KID-1, KID-2, KID-3\ and KID-4, ordered according to the position on the chip) are patterned by electron beam lithography on a single 40\,nm thick Al film deposited using an electron-gun evaporator. The active area of each pixel consists of an inductive meander made of 14 connected strips of 80\,$\mu$m$\times$2\,mm.
The meander is closed with a capacitor made of 5 interdigitated fingers of 1.2\,mm$\times$50\,$\mu$m, to ensure the uniformity of the current across the inductor. The resonant frequency of each resonator is varied by cutting the last finger of the capacitor. As shown in Figure~\ref{fig:photo}, the chip is assembled in a copper structure using PTFE supports with a total contact area of about 3\,mm$^2$. The other side of the holder (not shown) is covered with a copper collimator hosting a $^{57}$Co\ calibration source (peaks at 6.4 and 14.4\,keV) and an optical fiber coupled to a room-temperature LED that produces pulses at 400\,nm. The source and the fiber are placed on the back of the substrate to avoid direct illumination of the resonators. The optical fiber points to the center of the substrate, while the $^{57}$Co\ source is located near KID-4. \begin{figure}[htbp] \includegraphics[width=.45\textwidth, natwidth=842, natheight=595]{kid_photo.pdf} \caption{\label{fig:photo}The four Al KIDs deposited on the 2$\times$2\,cm$^2$ Si substrate. The chip is assembled in a copper structure using PTFE supports and illuminated from the back with a collimated $^{57}$Co\ source and an optical fiber.} \end{figure} The copper holder is thermally anchored to the coldest point of a $^3$He/$^4$He dilution refrigerator with a base temperature of 10\,mK. The output signal is fed into a CITLF4 SiGe cryogenic low noise amplifier\cite{amply} operated at 4\,K. A detailed description of the chip design, the cryogenic setup of our laboratory at Sapienza University in Rome, the room-temperature electronics and the acquisition software can be found in references\cite{CalderWhitePaper,Bourrion:2011gi,Bourrion:2013ifa}. 
\section{Data Analysis} \label{sec:analysis} A typical data collection consists in acquiring the complex transmission (S21) for a frequency sweep around the resonances (see Figure~\ref{fig:resonances}), and fitting the resonance circles in order to extract the quality factor Q, the coupling quality factor Q$_c$ and the internal quality factor Q$_i$ using the method described in references~\cite{khalil2012,swenson2013}. \begin{figure}[htbp] \includegraphics[width=.45\textwidth, natwidth=500, natheight=345]{resonances.pdf} \caption{\label{fig:resonances}Amplitude of the line transmission (S21) for a VNA frequency sweep around the resonances. Inset: fit of the resonance circle of KID-3\ in the working point; the green marker indicates the resonant frequency.} \end{figure} Resonators are designed to be over-coupled, thus the total quality factors Q (reported in Table~\ref{tab:QualityFactors}) are entirely dominated by the coupling quality factors Q$_c$. Q$_c$-values differ from the design value of 8000, likely due to the presence of parasitic slotline modes or standing waves between the detector and the cryogenic amplifier, or inter-resonator coupling resulting in coupled oscillation modes\cite{Noroozian2012}. Q$_i$ is above 150$\times$10$^3$, but the values of Q$_c$ limit the accuracy of the estimation. \begin{table}[thb] \caption{\label{tab:QualityFactors}Resonant frequency $f_0$, quality factor Q, optimal (off-resonance) microwave power at the amplifier input P$_{in}$ and quasiparticles\ recombination time $\tau_{qp}$. 
Errors on f$_0$ and P$_{in}$ are negligible, while errors on Q and $\tau_{qp}$\ are dominated by fit systematics and are lower than 10$\%$.} \begin{ruledtabular} \begin{tabular}{lcccc} &$f_0$ &Q &P$_{in}$ &$\tau_{qp}$ \\ &[GHz] &[$\times$10$^3$] &[dBm] &[$\mu$s] \\ KID-1 &2.675 &6 &-63 &218 \\ KID-2 &2.689 &18 &-64 &228 \\ KID-3 &2.731 &8 &-66 &243 \\ KID-4 &2.746 &35 &-72 &233 \\ \end{tabular} \end{ruledtabular} \end{table} When the trigger of any of the resonators fires, we acquire a 2\,ms long time window for the real and imaginary parts of S21 for each resonator (I and Q) with a sampling frequency of 500\,kHz. I and Q variations are then converted into changes in phase ($\delta\phi$) and amplitude relative to the center of the resonance loop (blue marker in the circle reported in Figure~\ref{fig:resonances}). In the following analysis we use only the $\delta\phi$\ signal, as for this detector it is from 6 to 10 times larger than the amplitude one, depending on the KID. To determine the optimal microwave power, we evaluate the signal-to-noise ratio while scanning from -80\,dBm to -50\,dBm. Increasing the input power reduces the noise contribution from the amplifier but, on the other hand, decreases the quasiparticles\ recombination time $\tau_{qp}$\ and, as a consequence, the signal integration length. The microwave power that optimizes the signal-to-noise ratio for each resonator is reported in Table~\ref{tab:QualityFactors}. The average noise power spectrum is reported in Figure~\ref{fig:noise} for phase (continuous line) and amplitude (dotted line) read-out of each resonator. The flat noise observed in the amplitude read-out and in the high frequency region of the phase read-out is consistent with the noise temperature of the amplifier ($T_N\sim7\,K$). The low-frequency region of the phase spectra is dominated by another noise source, whose origin is not yet clear. 
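The decomposition of the total quality factor used above, $1/Q = 1/Q_c + 1/Q_i$, explains why the accuracy on Q$_i$ is limited for over-coupled resonators. A minimal numerical sketch follows; the values are purely illustrative, not a fit to the measured resonances.

```python
# Resonator quality factors obey 1/Q = 1/Q_c + 1/Q_i.
# When Q_c << Q_i (over-coupled design), Q is dominated by Q_c,
# so any Q_i extracted this way has a large relative uncertainty.

def internal_q(q_total: float, q_coupling: float) -> float:
    """Extract the internal quality factor from the total and coupling Q."""
    return 1.0 / (1.0 / q_total - 1.0 / q_coupling)

# Illustrative numbers: with Q = 7.7e3 and Q_c = 8.1e3 the internal
# quality factor comes out near 156e3, consistent with the statement
# that Q_i is above 150e3 while being poorly constrained.
q_i = internal_q(7.7e3, 8.1e3)
print(round(q_i))
```

A small error on Q or Q$_c$ propagates into a large error on Q$_i$ in this regime, which is why only a lower limit is quoted in the text.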
It is not ascribable to two-level system noise, as it does not depend on temperature or microwave power. Furthermore, the presence of a mu-metal shield around the cryostat should guarantee an efficient suppression of noise due to static or low-frequency magnetic fields. Since a fraction of this noise is found to be correlated, it could be caused by frequency jitters in the read-out. \begin{figure}[htbp] \includegraphics[width=.48\textwidth, natwidth=567, natheight=271]{nps_ds7032.pdf} \caption{\label{fig:noise}Average noise power spectrum for phase (continuous line) and amplitude (dotted line) read-out. On top of the white noise from the amplifier, the phase noise exhibits an extra contribution at low frequency.} \end{figure} The high frequency noise in the acquired waveforms is rejected off-line using a software low-pass filter. In order to avoid distortions in the rise-time of the pulses, the cut-off frequency is set at 100\,kHz ($\tau_{cut-off}\sim1.6$\,$\mu s$). Finally, the waveforms are processed with the optimum filter\cite{Gatti:1986cw,Radeka:1966}, which includes a resonator-specific rise time (see below). The results are not highly sensitive to the choice of rise time. \begin{figure}[htbp] \includegraphics[width=.48\textwidth, natwidth=567, natheight=354]{fiberpulses_ds7037.pdf} \caption{\label{fig:14keVpulse}Response of the four resonators to 15\,keV pulses produced by the optical fiber, placed in the proximity of KID-2\ and KID-3. The responses are obtained averaging many pulses to reduce the random noise.} \end{figure} In Figure~\ref{fig:14keVpulse} we show the typical response of the four resonators to the interaction of 15\,keV optical pulses, obtained by averaging several pulses to suppress the random noise contributions. 
We fit the pulses with a model that includes the time constant of the low-pass filter, the ring-time of the resonator ($\tau_r = Q/(\pi f_0)$) and two free parameters: a rise-time, which is related to the spread in the arrival time of phonons, and a decay time. As expected, the rise-time depends on the distance between the resonator and the optical fiber, and ranges from 2\,$\mu$s (KID-2\ and KID-3) to 10\,$\mu$s (KID-1) and 17\,$\mu$s (KID-4). The decay time becomes faster with increasing microwave power or temperature, and for this reason it is identified as $\tau_{qp}$. This time constant does not depend on the energy of the optical pulses in the scan range (0.7-25\,keV), and its value is reported in Table~\ref{tab:QualityFactors} for each resonator. \section{Energy calibration and efficiency} \label{sec:results} The energy $E$ absorbed in a resonator creates a number of quasiparticles\ $\delta N_{qp} = \eta E/\Delta_0 $, where $\eta$ is the detection efficiency. The variation $\delta N_{qp}$ produces a linear shift of the resonant frequency $f$ from the equilibrium one $f_0$\cite{Day:2003fk}: \begin{equation} \label{eq:E} E = \frac{\Delta_0}{\eta} \delta N_{qp} = \frac{ \Delta_0}{\eta} \left( \frac{1}{p_0} \frac{f-f_0}{f_0}\right) \end{equation} where $p_0 = \alpha S_2(f,T)/4N_0 V\Delta_0$. The parameter $\alpha$ is the fraction of the total inductance due to kinetic inductance, $N_0$ is the single spin density of states (1.72$\times$10$^{10}$\,eV$^{-1}\mu m^{-3}$) and $S_2(f,T)$ is a slow function of the temperature and of the resonant frequency that relates the phase variation of the complex conductivity to Cooper pair breaking. In our working conditions, $S_2(f,T)$ is measured to be 2.3--2.6, depending on the resonator. The active volume of the resonator $V$ is calculated by correcting the volume of the inductor (96500\,$\mu$m$^3$) for the average variation of the current density (90\%) evaluated from SONNET simulations\cite{sonnet}. 
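As a numerical cross-check of the responsivity $p_0 = \alpha S_2(f,T)/4N_0 V\Delta_0$, the quoted parameter values (with $\alpha$ and $\Delta_0$ taken from the Mattis-Bardeen fit described next) can be plugged in directly; this is an illustrative back-of-the-envelope sketch, not the analysis code.

```python
# Sketch of the responsivity p0 = alpha * S2 / (4 * N0 * V * Delta0),
# using the values quoted in the text (energies in eV, volumes in um^3).
alpha = 0.058            # kinetic inductance fraction (5.8%)
S2 = 2.45                # mid-range of the measured 2.3-2.6
N0 = 1.72e10             # single-spin density of states [eV^-1 um^-3]
V = 96500 * 0.90         # inductor volume corrected for current density [um^3]
Delta0 = 201e-6          # gap parameter from the Mattis-Bardeen fit [eV]

p0 = alpha * S2 / (4 * N0 * V * Delta0)
print(f"{p0:.2e}")       # ~1.2e-13, matching the quoted (1.2 +/- 0.1)e-13
```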
We calculate $\Delta_0$ and $\alpha$ by measuring the variations of the resonant frequency $(f-f_0)/f_0$ as a function of the temperature, and fitting these data to the Mattis-Bardeen theory according to the procedure described by P\"{o}pel\cite{popel} and adopted by Gao \emph{et al.}\cite{GAOalpha}. The relationship between $(f-f_0)/f_0$ and $T$, indeed, depends only on $\Delta_0$ and $\alpha$, that are found to be the same for all the resonators. The average values are $\Delta_0=(201\pm6)\,\mu$eV and $\alpha=(5.8\pm0.5)\%$. We make a second, independent measurement of $\Delta_0$ by illuminating the chip with a Millimeter-Wave source. We monitor phase, amplitude and resonant frequency of the resonators while increasing the frequency of the source from 75 to 110\,GHz. The minimum source frequency that produces significant variations of the resonators features is $\nu_m = 95.5$\,GHz. This value corresponds to $\Delta_0 = h\nu_m/2 =(197\pm5)\,\mu$eV, in full agreement with the previous value. With these parameters, p$_0$ turns out to be $(1.2\pm0.1)\times10^{-13}$ for all the resonators. Finally, equation~\ref{eq:E} can be modified in order to convert the frequency response $(f-f_0)/f_0$ into the measured phase variation $\delta\phi$ recalling that, for $\delta\phi$\ measured in the center of the resonance loop, $(f-f_0)/f_0 = \delta\phi/4Q$: \begin{equation} \label{eq:AbsoluteCalib} E = \frac{\Delta_0}{4Q\eta p_0}\delta\phi. \end{equation} To estimate $\delta\phi$ we could use directly the amplitude of the phase signal. However, to account for the observed small differences in the pulse shapes induced by the arrival time of phonons, we evaluate $\delta\phi$ by integrating the time development of the pulse and dividing the result by $\tau_{qp}$. The energy-calibration function in Eq. 
\ref{eq:AbsoluteCalib} is applied to the single resonators to obtain the energy spectra of the $^{57}$Co\ source (reported in Figure~\ref{fig:TotalEnergySpectrumAbsoluteCalib} (left)) and of the optical pulses. \begin{figure}[htbp] \includegraphics[width=.5\textwidth, natwidth=567, natheight=488]{TotalEnergySpectrumAbsoluteCalib.pdf} \caption{\label{fig:TotalEnergySpectrumAbsoluteCalib} Left: low energy spectrum of the energy absorbed in KID-1\ (red, continuous line), KID-2\ (gray, filled), KID-3\ (blue, dotted line) and KID-4\ (green, dashed line) calibrated with the function in Eq.~\ref{eq:AbsoluteCalib}; each resonator absorbs a different energy fraction of the X-rays from the $^{57}$Co\ source depending on its position. Right: event-by-event sum of the calibrated energies reported in the left plot; the fit of the 14.4\,keV peak produced by the $^{57}$Co\ source is shown. The energy spectra are not corrected for the detection efficiency.} \end{figure} To infer the detection efficiency, we fit the position of the high energy peak produced by the $^{57}$Co\ source, and we divide the result by the nominal energy of the line (see Table~\ref{tab:EnergyReso}). Applying this procedure to the lower energy peak (6.4\,keV) gives consistent results. The estimation of the efficiency on X-rays allows us to neglect uncertainties in the energy deposited, which could be significant when dealing with optical pulses. To evaluate the total efficiency of the detector, we sum event-by-event the calibrated energies of each KID (see Figure~\ref{fig:TotalEnergySpectrumAbsoluteCalib} (right)) and fit the position of the 14.4\,keV peak, obtaining $\eta_{SUM}=2.6$\,keV/14.4\,keV = \efficiency. The error on the efficiency is dominated by the error on $\tau_{qp}$\ introduced by the absolute calibration (Eq.~\ref{eq:AbsoluteCalib}). We apply to optical pulses the analysis technique validated on X-rays. 
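For a sense of scale, the relation $\delta N_{qp} = \eta E/\Delta_0$ introduced above can be evaluated for the 14.4\,keV line with the measured total efficiency; the numbers below are taken from the text and the result is only an order-of-magnitude sketch.

```python
# Number of quasiparticles created (summed over the four KIDs) by the
# 14.4 keV line: delta_N_qp = eta * E / Delta0, with eta_SUM = 0.18
# and Delta0 = 201 ueV as quoted in the text.
eta_sum = 0.18
E = 14.4e3               # absorbed X-ray energy [eV]
Delta0 = 201e-6          # Cooper-pair gap parameter [eV]

n_qp = eta_sum * E / Delta0
print(f"{n_qp:.1e}")     # ~1.3e7 quasiparticles shared among the resonators
```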
The energy of the optical pulses was previously calibrated with a PMT at room temperature, and corrected for the PMT quantum efficiency, for the reflectivity of silicon\cite{SiIndex}, and for the geometrical efficiency, evaluated through a Monte Carlo simulation based on the Litrani software\cite{Latrina}. We observe that the detector response is linear in the scan range. The energies of the optical pulses, evaluated using the absolute calibration (Eq.~\ref{eq:AbsoluteCalib}) and corrected for $\eta_{SUM}$, differ by less than 5$\%$ from the values computed with the simulation. In Table~\ref{tab:EnergyReso} we report the noise resolution $\sigma_{E}$, evaluated at the detector baseline, and the intrinsic noise of each resonator, calculated by scaling the resolution $\sigma_{E}$ by the single-KID efficiency $\eta$. Summing the response of the four KIDs on an event-by-event basis, we obtain a global energy resolution $\sigma_{E,SUM}$ of 154$\pm7$\,eV, which corresponds to an intrinsic resolution of $\sigma_{E,SUM}\times\eta = 154\,eV\times 0.18 = 27.7$\,eV. \begin{table} \caption{\label{tab:EnergyReso}Measured noise resolution ($\sigma_{E}$), expected noise resolution in the limit in which the amplifier noise dominates ($\sigma_{E}^{amp}$) and integral of the optimally-filtered noise power spectral density ($\sigma_E^{PSD}$); pixel efficiency $\eta$ and intrinsic resolution of the resonator ($\sigma_E\times\eta$). 
In the last line we report the global performance of the detector.} \begin{ruledtabular} \begin{tabular}{lcccccc} &$\sigma_E$ &$\sigma_{E}^{amp}$ &$\sigma_E^{PSD}$ &$\eta$ &$\sigma_E\times\eta$ \\ &[eV] &[eV] &[eV] &[$\%$] &[eV] \\ KID-1 &400 &147 &484 &3.1$\pm$0.4 &12.4 \\ KID-2 &262 &60 &253 &3.4$\pm$0.4 &8.9 \\ KID-3 &205 &75 &233 &6.1$\pm$0.7 &12.5 \\ KID-4 &204 &50 &184 &5.5$\pm$0.6 &11.2 \\ SUM &154 &42 & &18$\pm$2 &27.7 \\ \end{tabular} \end{ruledtabular} \end{table} This resolution is worse than the one expected if we were dominated by the amplifier noise, which, for each resonator, can be computed as\cite{CalderWhitePaper}: \begin{equation} \sigma_{E}^{amp} = \frac{\Delta_0^2N_0V}{\eta\alpha Q S_2(f,T)} \sqrt{\frac{4Q_c^2}{Q^2} \frac{k_BT_N}{P_{in}\tau_{qp} } }\,. \end{equation} The discrepancy between the measured values $\sigma_E$ and $\sigma_{E}^{amp}$ can be interpreted as due to the excess low-frequency noise in the phase read-out. This is supported by the fact that $\sigma_E$ matches closely $\sigma_E^{PSD}$, which is the integral of the optimally-filtered noise power spectral density. Without this noise, we would have expected a global energy resolution of $\sigma_{E,SUM}^{amp} = \sqrt{\sum_i(\sigma_{E_i}^{amp}\times\eta_i)^2}/\eta_{SUM} = 42\,\rm{eV}$. \section{Conclusions} In this paper we presented the results obtained with a 4-pixel Al array on a Si substrate. The comparison of the absolute calibration of the detector with the energy produced by a calibrated LED+optical system proved the reliability in reconstructing the energy of optical pulses from 0 to 25\,keV. We derived an efficiency of \efficiency, which can be improved by increasing the active surface of the resonators. The overall baseline resolution of 154$\pm7$\,eV\ is already competitive with some of the commonly used cryogenic light detectors, and can be further improved by suppressing the noise sources and, eventually, by using more sensitive superconductors. 
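The efficiency-weighted quadrature sum used for $\sigma_{E,SUM}^{amp}$ can be checked against the per-KID values in Table~\ref{tab:EnergyReso}; with the rounded inputs from the table the sketch below reproduces the quoted 42\,eV to within rounding.

```python
import math

# Amplifier-limited resolutions and efficiencies, rounded table values.
sigma_amp = [147, 60, 75, 50]          # per-KID sigma_E^amp [eV]
eta = [0.031, 0.034, 0.061, 0.055]     # per-KID efficiency
eta_sum = 0.18

# sigma_{E,SUM}^{amp} = sqrt( sum_i (sigma_i * eta_i)^2 ) / eta_SUM
sigma_sum = math.sqrt(sum((s * e) ** 2
                          for s, e in zip(sigma_amp, eta))) / eta_sum
print(round(sigma_sum))   # ~41 eV, consistent with the quoted 42 eV
                          # given the rounding of the inputs
```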
\section*{Acknowledgements} This work was supported by the European Research Council (FP7/2007-2013) under contract CALDER no. 335359 and by the Italian Ministry of Research under the FIRB contract no. RBFR1269SL. We want to thank Dr. Silvio Morganti for his help in calibrating the optical system.
\section{Introduction} Depth estimation is one of the most fundamental tasks in computer vision. Many other computer vision tasks, such as object detection, semantic segmentation and scene understanding, can benefit considerably from accurate estimation of depth information. Most existing methods \cite{Eigen15,LiB15,Wang_2015_CVPR,LiuSLR15} formulate depth estimation as a structured regression task because depth values are continuous. These regression models for depth estimation are trained by iteratively minimizing the $L2$ norm between the predicted depths and the ground-truth depths, and aim to output depths as close to the actual depths as possible during evaluation. However, it is difficult to regress the depth of a point to exactly its ground-truth value. Human beings may find it difficult to tell the exact distance of a specific point in a natural scene, but can easily give a rough distance range of that point. Motivated by this, we formulate depth estimation as a pixel-wise classification task by discretizing the continuous depth values into several discrete bins. Instead of training a model to predict the depth value of a point, we train a model to predict the depth range. We show that this simple re-formulation scheme performs surprisingly well. Another important reason for us to choose classification over regression for depth estimation is that it naturally predicts a confidence in the form of a probability distribution over the output space. Different points have different distributions of possible depth values: depth estimation is easy for some points but hard for others. Typical regression models only output the mean of the possible depth values without the variance (i.e., the confidence of a prediction is missing). Some efforts have been made to obtain this confidence, such as the constrained structured regression \cite{deepak16} or the Monte-Carlo dropout \cite{bayesiansegnet,Gal2016Bayesian}. 
Compared to these methods, which either require specific constraints or multiple forward passes during evaluation, our proposed approach is simple to implement. The obtained probability distribution can be an important cue during both training and post-processing. Although we formulate depth estimation as a classification task by discretization, the depth labels are different from the labels of typical classification tasks such as semantic segmentation. During training, predicted depth labels that are close to the ground truth and have high confidence can also be used to update model parameters. This is achieved by an information gain loss. As for the post-processing, we apply the fully-connected conditional random fields (CRF) \cite{phil2011} which have been frequently applied in semantic segmentation \cite{LinSRH15,ChenPKMY14}. With the fully connected CRFs, pixel depth estimates with low confidence can be improved by other pixels connected to them. Traditional depth estimation methods enforce geometric assumptions and rely on hand-crafted features such as SIFT, PHOG, GIST, texton, etc. Recently, computer vision has witnessed a series of breakthrough results introduced by deep convolutional neural networks (CNN) \cite{NIPS2012_4824, Simonyan14c}. The success of deep networks can be partially attributed to the rich features captured by the stacked layers. Recent evidence has shown that depth estimation benefits largely from an increased number of layers \cite{EigenPF14, Eigen15, LiuSLR15}. However, stacking more layers does not necessarily improve performance as training can become very difficult due to the problem of vanishing gradients. In this work, we apply the deep residual learning framework proposed by He et al. \cite{kmhe15}. It manages to learn the residual mapping of a few stacked layers to avoid the vanishing gradients problem. 
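The re-formulation described above, replacing continuous depth regression with classification over depth bins, can be sketched in a few lines; the bin count and depth range below are illustrative choices, not the settings used in the experiments.

```python
import numpy as np

# Discretize continuous depths into B bins, uniform in log space.
# B, d_min and d_max are illustrative, not the paper's settings.
def depth_to_label(depth, d_min=0.7, d_max=10.0, B=50):
    edges = np.logspace(np.log10(d_min), np.log10(d_max), B + 1)
    return np.clip(np.digitize(depth, edges) - 1, 0, B - 1)

def label_to_depth(label, d_min=0.7, d_max=10.0, B=50):
    edges = np.logspace(np.log10(d_min), np.log10(d_max), B + 1)
    # predicted depth = center of the bin (geometric mean of its edges)
    return np.sqrt(edges[label] * edges[label + 1])

d = np.array([0.8, 2.5, 9.0])       # depths in meters
labels = depth_to_label(d)          # class labels fed to the loss
recovered = label_to_depth(labels)  # close to d, up to the bin width
```

The quantization error is bounded by half a bin width in log space, so finer binning trades classification difficulty against depth precision.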
\begin{figure*} \begin{center} \includegraphics[scale=.75]{./figures/figure_1.pdf} \end{center} \caption{An overview of our depth estimation model. It takes as input an image and outputs dense score maps. Fully-connected CRFs are then applied to obtain the final depth estimation.} \label{fig:overview} \end{figure*} An overview of our proposed depth estimation model is illustrated in Fig.~\ref{fig:overview}. It takes as input an arbitrarily sized image and outputs a dense score map. Fully connected CRFs are then applied to obtain the final depth estimation. The remaining content of the paper is organized as follows. Section \ref{sec:related} reviews some relevant work. Then we present the proposed method in Section \ref{sec:method}. Experiment results are presented in Section \ref{sec:exp}. Finally, Section \ref{sec:con} concludes the paper. \section{Related Work} \label{sec:related} Previous depth estimation methods are mainly based on geometric models. For example, the works of \cite{Hedau,NIPS2010_4120,SchwingECCV2012} rely on box-shaped models and try to fit the box edges to those observed in the image. These methods are limited to modeling particular scene structures and are therefore not applicable to general-scene depth estimation. More recently, non-parametric methods \cite{KKarsch} have been explored. These methods consist of candidate image retrieval, scene alignment and then depth inference using optimizations with smoothness constraints. They are based on the assumption that scenes with semantically similar appearances should have similar depth distributions when densely aligned. Other methods attempt to exploit additional information. To name a few, the authors of \cite{RussellT09} estimated depths through user annotations. The work of \cite{LiuCVPR2010} performed semantic label prediction before depth estimation. 
The works of \cite{Ladicky_2014_CVPR,Wang_2015_CVPR} have shown that jointly performing depth estimation and semantic labelling can benefit both tasks. Since such extra sources of information are not always available, most recent works formulate depth estimation as a Markov Random Field (MRF) \cite{Saxena2005,Saxena2009,Saxena073-ddepth} or Conditional Random Field (CRF) \cite{Liu_2014_CVPR} learning problem. These methods learn the parameters of the MRF/CRF in a supervised fashion from a training set of monocular images and their corresponding ground-truth depth images. The depth estimation problem is then formulated as a maximum a posteriori (MAP) inference problem on the CRF model. With the popularity of deep convolutional neural networks (CNN) since the work of \cite{NIPS2012_4824}, some works attempted to solve the depth estimation problem using deep convolutional networks and achieved outstanding performance. Eigen et al. \cite{Eigen15} proposed a multi-scale architecture for predicting depths, surface normals and semantic labels. The multi-scale architecture is able to capture many image details without any superpixels or low-level segmentation. Liu et al. \cite{LiuSLR15} presented a deep convolutional neural field model for depth estimation. It learned the unary and pairwise potentials of a continuous CRF in a unified deep network. The model is based on fully convolutional networks (FCN) with a novel superpixel pooling method. Similarly, Li et al. \cite{LiB15} and Wang et al. \cite{Wang_2015_CVPR} also combined CNNs with CRFs; they formulated depth estimation in a two-layer hierarchical CRF to enforce synergy between global and local predictions. Roy et al. \cite{RoyT16} proposed a neural regression forest (NRF) architecture which combines convolutional neural networks with random forests for predicting depths in the continuous domain via regression. 
The NRF processes a data sample with an ensemble of binary regression trees, and the final depth estimation is made by fusing the individual regression results. It allows for parallelizable training of all shallow CNNs and efficient enforcement of smoothness in the depth estimation results. Laina et al. \cite{laina2016deeper} applied deep residual networks to depth estimation. In order to improve the output resolution, they presented a novel way to efficiently learn feature map up-sampling within the network. For network optimization, they also presented a reverse Huber loss driven by the value distributions commonly present in depth maps. Experiment results in the aforementioned works reveal that depth estimation benefits from: (a) an increased number of layers in deep networks; (b) obtaining fine-level details. In this work, we take advantage of the successful deep residual networks \cite{kmhe15} and formulate depth estimation as a dense prediction task. We also apply fully connected CRFs \cite{phil2011} as post-processing. Although Laina et al. \cite{laina2016deeper} also applied the deep residual network for depth estimation, our method differs from \cite{laina2016deeper} in three ways: Firstly, we formulate depth estimation as a classification task, while \cite{laina2016deeper} formulated it as a regression task. Secondly, we can obtain the confidence of depth predictions, which can be used during training and post-processing. Lastly, in order to obtain high resolution predictions, \cite{laina2016deeper} applied an up-sampling scheme while we simply use bilinear interpolation. The aforementioned CNN-based methods formulate depth estimation as a structured regression task due to the continuous nature of depth values. However, for different pixels in a single monocular image, the possible depth values have different distributions: depth values of some pixels are easy to predict while others are not. 
The output of continuous regression lacks this confidence. In \cite{deepak16}, Pathak et al. presented a novel structured regression framework for image decomposition. It applied special constraints on the output space to capture the confidence of predictions. In \cite{bayesiansegnet}, Kendall et al. proposed a Bayesian neural network for semantic segmentation. It applied Monte-Carlo dropout during training and obtained the confidence of predictions by multiple forward passes during evaluation. In this work, we obtain the confidence simply by formulating depth estimation as a classification task. \section{Proposed Method}\label{sec:method} In this section, we describe our depth estimation method in detail. We first introduce the network architecture, followed by our loss function. Finally, we introduce the fully connected conditional random field (CRF) which is applied as post-processing. \subsection{Network architecture} We formulate depth estimation as a spatially dense prediction task. When applying CNNs to this type of task, the input image is inevitably down-sampled due to the repeated combination of max-pooling and striding. In order to handle this, we follow the fully convolutional network (FCN) approach, which has proven successful in dense pixel labeling. It replaces the fully connected layers in conventional CNN architectures with convolutional layers. This makes fully convolutional networks capable of taking arbitrarily sized images as input and outputting a down-sampled prediction map. After a simple upsampling step, such as bilinear interpolation, the prediction map has the same size as the input image. The depth of CNN architectures is of great importance. Much recent work reveals that the VGG \cite{Simonyan14c} network outperforms the shallower AlexNet \cite{NIPS2012_4824}. 
However, simply stacking more layers onto existing CNN architectures does not necessarily improve performance due to the notorious problem of vanishing gradients, which hampers convergence from the beginning of training. The recent ResNet model solves this problem by adding skip connections. We follow the recent success of the deep residual network with up to 152 layers \cite{kmhe15}, which is about 8$\times$ deeper than the VGG network but still has fewer parameters to optimize. \begin{figure} \begin{center} \includegraphics[scale=.41]{./figures/figure_2.pdf} \end{center} \caption{Two types of building blocks that can be used in our depth estimation model. (a) building block with identity mapping. (b) building block with linear projection.} \label{fig:buildingblocks} \end{figure} Instead of directly learning the underlying mapping of a few stacked layers, the deep residual network learns the residual mapping. The original mapping can then be realized by feedforward neural networks with ``shortcut connections''. Shortcut connections are those skipping one or more layers. In our model, we consider two shortcut connections and the building blocks are shown in Fig. \ref{fig:buildingblocks}. The building block illustrated in Fig. \ref{fig:buildingblocks}(a) is defined as: \begin{equation} \label{block1} \mathbf{y} = \mathnormal{F}(\mathbf{x},\{W_i\})+\mathbf{x}, \end{equation} where $\mathbf{x}$ and $\mathbf{y}$ are the input and output matrices of the stacked layers respectively. The function $\mathnormal{F}(\mathbf{x},\{W_i\})$ is the residual mapping that needs to be learned. Since the shortcut connection is an element-wise addition, the dimensions of $\mathbf{x}$ and $\mathnormal{F}$ need to be the same. The building block illustrated in Fig. \ref{fig:buildingblocks}(b) is defined as: \begin{equation} \mathbf{y} = \mathnormal{F}(\mathbf{x},\{W_i\})+W_s\mathbf{x}. 
\end{equation} Compared to the shortcut connection in Eq.~(\ref{block1}), a linear projection $\mathnormal{W_s}$ is applied to match the dimensions of $\mathbf{x}$ and $\mathnormal{F}$. The overall network architecture of our depth estimation model is illustrated in Fig. \ref{fig:network}. The input image is fed into a convolutional layer, a max pooling layer followed by 4 convolution blocks. Each convolution block starts with a building block with linear projection followed by different numbers of building blocks with identity mapping. In this article, we consider two deep residual network architectures with 101 and 152 layers respectively. For the network architecture with 101 layers, the numbers of building blocks with identity mapping in the four convolution blocks (i.e., $n_1,n_2,n_3,n_4$ in Fig. \ref{fig:network}) are 2, 3, 22 and 2 respectively. As for the network architecture with 152 layers, the numbers are 2, 7, 35 and 2. The last four layers are three convolutional layers with 1024, 512 and $N$ channels, and a softmax layer, where $N$ is the number of ground-truth labels. Batch normalization and ReLU layers are performed between these convolutional layers. Downsampling is performed by pooling or convolutional layers that have a stride of 2. These include the first $7\times7$ convolutional layer, the first $3\times3$ max pooling layer, and the first building block of convolution block 2 in Fig.~\ref{fig:network}. As a result, the output prediction map is downsampled by a factor of 8. During prediction, we perform a bilinear interpolation on this map to make it the same size as the input image. \begin{figure*} \begin{center} \includegraphics[scale=.685]{./figures/figure_3.pdf} \end{center} \caption{Network architecture of our depth estimation model. The input image is fed into a convolutional layer, a max pooling layer and 4 convolution blocks. We consider network architectures with 101 and 152 layers. 
The value of $[n_1,n_2,n_3,n_4]$ is $[2,3,22,2]$ for the 101-layer network architecture and $[2,7,35,2]$ for the 152-layer network architecture. The last 4 layers are 3 convolutional layers and a softmax layer. The output map is downsampled by a factor of 8 and we perform bilinear interpolation during prediction.} \label{fig:network} \end{figure*} \subsection{Loss function} In this work, we use the pixel-wise multinomial logistic loss function as we formulate depth estimation as a classification task. We uniformly discretize the continuous depth values into multiple bins in the log space. Each bin covers a range of depth values and we label the bins according to the range (i.e., the label index of a pixel indicates its distance). The depth labels however are different from the labels of typical classification tasks. For typical classification tasks such as semantic segmentation and object detection, predictions that differ from the ground-truth labels are considered wrong and contribute nothing to updating the network parameters. As for depth estimation, predictions that are close to the ground-truth depth labels can also help in updating the network parameters. This is achieved by an ``information gain'' matrix in our loss function. Specifically, our loss function is defined as: \begin{equation}\label{lossfunc} \mathnormal{L} = -\frac{1}{N} \sum_{i=1}^{N}\sum_{D=1}^{B} H(D_{i}^{*},D)\log(\mathnormal{P(D|z_i)}), \end{equation} where $\mathnormal{D_{i}^{*}}\in[1,\dots,B]$ is the ground-truth depth label of pixel $\mathnormal{i}$ and $B$ is the total number of discretization bins. $\mathnormal{P(D|z_i)} = {e^{z_{i,D}}}/{\sum_{d=1}^{B}{e^{z_{i,d}}}}$ is the probability of pixel $\mathnormal{i}$ being labelled with $\mathnormal{D}$. $\mathnormal{z_{i,d}}$ is the output of the last convolutional layer in the network. The ``information gain'' matrix $H$ is a $B \times B$ symmetric matrix with elements $H(p,q) = \exp[-\alpha(p-q)^2]$, where $\alpha$ is a constant. 
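The information-gain weighted loss above can be sketched with NumPy; the value of $\alpha$ and the toy logits below are illustrative, not the settings used in the experiments.

```python
import numpy as np

# Information-gain weighted multinomial logistic loss:
# L = -1/N * sum_i sum_D H(D_i*, D) * log P(D | z_i),
# with H(p, q) = exp(-alpha * (p - q)^2). Shapes: z is (N, B).
def info_gain_loss(z, labels, alpha=0.2):
    N, B = z.shape
    # numerically stable softmax over the B depth bins
    p = np.exp(z - z.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    bins = np.arange(B)
    H = np.exp(-alpha * (labels[:, None] - bins[None, :]) ** 2)
    return -(H * np.log(p)).sum() / N

# A confident prediction one bin away from the true label (bin 3)
# incurs a smaller loss than an equally confident prediction far away.
z_near = np.zeros((1, 10)); z_near[0, 4] = 5.0   # peaked one bin off
z_far = np.zeros((1, 10)); z_far[0, 9] = 5.0     # peaked far away
assert info_gain_loss(z_near, np.array([3])) < info_gain_loss(z_far, np.array([3]))
```

With $H$ equal to the identity the loss reduces to the ordinary cross-entropy; the Gaussian off-diagonal terms are what allow near-miss bins to contribute gradient signal.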
It encourages predicted depth labels that are closer to the ground truth to make higher contributions to updating the network parameters. During prediction, we set the depth value of each pixel to the center of its corresponding bin. By formulating depth estimation as classification, we obtain the confidence of each prediction in the form of a probability distribution. This confidence can also be exploited during post-processing via fully connected CRFs. \subsection{Fully connected conditional random fields} A deep convolutional network typically does not explicitly take the dependency among local variables into consideration. It does so only implicitly through the field of view, which is why the size of the field of view is important for the performance of a CNN. In order to refine the network output, we apply the fully connected CRF proposed in \cite{phil2011} as post-processing. It connects all pairs of individual pixels in the image. Specifically, the energy function of a fully connected CRF is the sum of a unary potential $\mathnormal{U}$ and a pairwise potential $\mathnormal{V}$: \begin{equation} \mathnormal{E(\mathbf{D})} = \sum_{i}\mathnormal{U}(\mathnormal{D_i}) + \sum_{i,j}\mathnormal{V}(\mathnormal{D_i,D_j}), \end{equation} where $\mathbf{D}$ is the predicted depth labelling of the pixels and $i$, $j$ are pixel indices. We use the logistic loss of a pixel defined in Eq.~(\ref{lossfunc}) as the unary potential, which is \[ \mathnormal{U}(\mathnormal{D_i}) = \mathnormal{L(D_i)} = -\log(\mathnormal{P(D_{i}|z_i)}). \] The pairwise potential is defined as \[ \sum_{i,j}\mathnormal{V}(\mathnormal{D_i,D_j}) = \Delta(D_i,D_j)\sum_{s=1}^{M}w_s \cdot k^{s}(\mathbf{f}_i,\mathbf{f}_j), \] where $\Delta(D_i,D_j)$ is a penalty term on the labelling. Since the labels here indicate depth, we enforce a relatively larger penalty on pairs of labels that are far apart from each other.
For simplicity, we use the absolute difference between the two label values as the penalty: $\Delta(D_i,D_j) = |D_i-D_j|$. There is one pairwise term for each pair of pixels in the image no matter how far they are from each other (i.e., the model's factor graph is fully connected). Each $k^s$ is a Gaussian kernel that depends on features (denoted as $\mathbf{f}$) extracted for pixels $i$ and $j$, and is weighted by the parameter $w_s$. Following \cite{phil2011}, we adopt bilateral position and color terms; specifically, the kernels are: \begin{equation} \mathnormal{w_1}\exp(-\frac{\|p_i-p_j\|^2}{2\sigma_{\alpha}^{2}}-\frac{\|I_i-I_j\|^2}{2\sigma_{\beta}^{2}}) + \mathnormal{w_2}\exp(-\frac{\|p_i-p_j\|^2}{2\sigma_{\gamma}^{2}}). \end{equation} The first kernel is an appearance kernel, which depends on both pixel positions (denoted as $p$) and pixel color intensities (denoted as $I$). It is inspired by the observation that nearby pixels with similar color are likely to be in the same depth range. The degrees of nearness and similarity are controlled by the hyperparameters $\sigma_{\alpha}$ and $\sigma_{\beta}$. The second kernel is a smoothness kernel which removes small isolated regions; the scale of smoothness is controlled by $\sigma_{\gamma}$. \section{Experiments} \label{sec:exp} We evaluate our proposed depth estimation approach on 2 benchmark RGB-D datasets: the indoor NYUD2 \cite{Silberman:ECCV12} dataset and the outdoor KITTI \cite{Geiger2013IJRR} dataset. We organize our experiments into the following three parts: (1) We show the effectiveness of our depth discretization scheme and compare our discrete depth label classification with continuous depth value regression. (2) We evaluate the contribution of different components in our proposed approach. (3) We compare our proposed approach with state-of-the-art methods to show that our approach performs better in both indoor and outdoor scenes.
Several measures commonly used in prior works are applied for quantitative evaluation: \begin{itemize} \item root mean squared error (rms): $\sqrt{\frac{1}{T}\sum_{p}(d_{gt}-d_p)^2}$ \item average relative error (rel): $\frac{1}{T}\sum_{p}\frac{|d_{gt}-d_{p}|}{d_{gt}}$ \item average $\log_{10}$ error (log10): $\frac{1}{T}\sum_{p}|\log_{10}d_{gt} - \log_{10}d_p|$ \item root mean squared log error (rmslog): $\sqrt{\frac{1}{T}\sum_{p}(\log d_{gt} - \log d_p)^2}$ \item accuracy with threshold $thr$: percentage ($\%$) of $d_p$ s.t. $\max(\frac{d_{gt}}{d_p},\frac{d_p}{d_{gt}}) = \delta < thr$ \end{itemize} where $d_{gt}$ and $d_p$ are the ground-truth and predicted depths of the pixels respectively, and $T$ is the total number of pixels in all the evaluated images. \subsection{Depth label classification vs. depth value regression} Discretizing continuous data inevitably discards some information. In this part, we first show that the discretization of continuous depth values degrades the depth estimation model by a negligible amount. Specifically, we equally discretize the ground-truth depth values of test images in the NYUD2 dataset into different numbers of bins in the linear and log space respectively and calculate the three errors mentioned above. The results are illustrated in Fig.~\ref{fig:gt_validate}. \begin{figure} \begin{center} \includegraphics[scale=.62]{./figures/figure_4.pdf} \end{center} \caption{Quantitative evaluations of discretized ground-truth depth values of the NYUD2 dataset. (a): errors of ground-truth depth values discretized in the linear space. (b): errors of ground-truth depth values discretized in the log space.} \label{fig:gt_validate} \end{figure} We can see from Fig.~\ref{fig:gt_validate} that as the number of discretization bins increases, the errors of the discretized ground-truth depths decrease to a negligible amount, and that discretization in the log space leads to lower errors than discretization in the linear space.
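To make the discretization scheme concrete, the following NumPy sketch quantizes depths to bin centers that are uniformly spaced in log space and measures the resulting rel error of the quantized ground truth, mirroring the validation experiment above (the depth range and sample count here are illustrative, not the dataset's):

```python
import numpy as np

def discretize_log(depths, num_bins, d_min, d_max):
    """Quantize continuous depths to the centers of bins uniformly
    spaced in log space; returns bin labels and quantized depths."""
    log_d = np.log(np.clip(depths, d_min, d_max))
    edges = np.linspace(np.log(d_min), np.log(d_max), num_bins + 1)
    labels = np.clip(np.digitize(log_d, edges) - 1, 0, num_bins - 1)
    centers = (edges[:-1] + edges[1:]) / 2.0
    return labels, np.exp(centers[labels])

# Error of the discretized ground truth, as in the validation experiment.
gt = np.random.default_rng(0).uniform(0.7, 10.0, size=10000)
rel = {}
for num_bins in (10, 50, 100):
    _, quantized = discretize_log(gt, num_bins, 0.7, 10.0)
    rel[num_bins] = np.mean(np.abs(gt - quantized) / gt)   # rel error
```

As in the figure, the rel error of the quantized ground truth shrinks quickly as the number of bins grows, since each value is at most half a log-space bin away from its bin center.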
As for the accuracies, all the discretized ground-truth depths can reach 100\% except for the accuracy with threshold 1.25 when linearly discretizing the ground-truth depths into 10 bins. From this experiment we can see that converting the ground-truth depths from continuous values to discrete labels has negligible effect on the performance. We can reformulate depth estimation from a conventional regression task to a classification task. \begin{table} \caption{Depth estimation results by continuous depth value regression and discrete depth label classification for the NYUD2 and KITTI datasets. The first row is the result by regression. The following rows are results of depth label classification with different number of discretization bins.} \centering \label{table:regress_n_classify} \begin{tabular}{@{\hskip 0.05cm}c@{\hskip 0.07cm}c@{\hskip 0.07cm}c@{\hskip 0.07cm}c @{\hskip 0.39cm}c@{\hskip 0.15cm}c@{\hskip 0.15cm}c} \noalign{\smallskip} \hline \noalign{\smallskip} & \multicolumn{3}{c}{\small Accuracy} & \multicolumn{3}{c}{\small Error} \\ & \small $\delta<1.25$ &\small $\delta<1.25^2$ &\small $\delta<1.25^3$ &\small rel &\small log10 &\small rms \\ \noalign{\smallskip} \hline \noalign{\smallskip} & \multicolumn{5}{c}{\small NYUD2} \\ \hline \noalign{\smallskip} \small Regression &\small 65.3\% & \small 91.5\% & \small 97.4\% & \small 0.231 &\small 0.095 &\small 0.778 \\ \noalign{\smallskip} \small 10 bins &\small 69.4\% & \small \bf 92.4\% & \small 97.5\% & \small 0.213 &\small 0.091 &\small 0.754 \\ \noalign{\smallskip} \small 30 bins &\small 70.5\% &\small 92.1\% &\small \bf 97.8\% &\small 0.210 &\small \bf 0.090 &\small 0.751 \\ \noalign{\smallskip} \small 50 bins &\small 68.9\% &\small 91.9\% &\small 97.0\% &\small 0.209 &\small 0.092 &\small 0.750 \\ \noalign{\smallskip} \small 80 bins &\small \bf 70.6\% &\small 92.0\% &\small 97.6\% &\small 0.211 &\small 0.091 &\small \bf 0.747 \\ \noalign{\smallskip} \small 100 bins &\small 70.1\% &\small 92.1\% &\small 
97.3\% &\small \bf 0.209 &\small 0.091 &\small 0.749 \\ \hline \noalign{\smallskip} & \multicolumn{5}{c}{\small KITTI} \\ \hline \noalign{\smallskip} \small Regression &\small 67.5\% &\small 88.6\% &\small 90.4\% &\small 0.279 &\small 0.104 &\small 7.916 \\ \noalign{\smallskip} \small 50 bins &\small 76.3\% &\small \bf 92.1\% &\small 96.3\% &\small 0.183 &\small 0.077 &\small \bf 6.209 \\ \noalign{\smallskip} \small 80 bins &\small \bf 77.1\% &\small 91.7\% &\small 96.6\% &\small \bf 0.180 &\small \bf 0.072 &\small 6.311 \\ \noalign{\smallskip} \small 120 bins &\small 76.8\% &\small 91.9\% &\small \bf 96.7\% &\small 0.187 &\small 0.076 &\small 6.263 \\ \hline \noalign{\smallskip} \end{tabular} \end{table} We next compare our proposed depth estimation by classification with conventional depth regression and show the results in Table \ref{table:regress_n_classify}. In this experiment, we apply the deep residual network with 101 layers and initialize the parameters with the ResNet101 model in \cite{kmhe15}, which is trained on the ImageNet classification dataset. We train our models on the standard NYUD2 training set with 795 images and the standard KITTI training set with 700 images \cite{EigenPF14} for fast comparison. As for the test sets, we select 650 and 700 images from the raw NYUD2 and KITTI test sets respectively as validation sets. For depth regression, the loss function is the standard $L_2$ loss, which minimizes the squared Euclidean distance between the predicted and ground-truth depths. The output depth map is upsampled to the same size as the input image through bilinear interpolation. As for our depth estimation by classification, we discretize the continuous depth values into different numbers of bins in the log space. We do not apply CRF post-processing for either regression or classification.
As we can see from Table~\ref{table:regress_n_classify}, depth estimation by classification outperforms conventional depth regression, and the performance of depth classification is not very sensitive to the number of discretization bins. One important reason why depth estimation by classification outperforms depth regression is that regression tends to converge to the mean depth values. This may cause larger errors in areas that are either too far from or too close to the camera. The classification with the information gain may alleviate this problem. To verify this, we break down the NYUD2 ground-truth depths into 3 ranges and report the results in Table \ref{table:regionGT}. The general setting is the same as in the aforementioned experiment. The ground-truth depths are discretized into 100 bins in the log space and the $\alpha$ defined in Eq.~(\ref{lossfunc}) is set to 0.2. \begin{table} \caption{Test results on the NYUD2 dataset with different ground-truth ranges.
We break down the ground-truth depths into 0m-3m, 3m-7m and 7m-10m.} \centering \label{table:regionGT} \begin{tabular}{@{\hskip 0.05cm}c@{\hskip 0.07cm}c@{\hskip 0.07cm}c@{\hskip 0.07cm}c @{\hskip 0.39cm}c@{\hskip 0.15cm}c@{\hskip 0.15cm}c} \noalign{\smallskip} \hline \noalign{\smallskip} & \multicolumn{3}{c}{\small Accuracy} & \multicolumn{3}{c}{\small Error} \\ & \small $\delta<1.25$ &\small $\delta<1.25^2$ &\small $\delta<1.25^3$ &\small rel &\small log10 &\small rms \\ \noalign{\smallskip} \hline \noalign{\smallskip} & \multicolumn{5}{c}{\small Regression} \\ \hline \noalign{\smallskip} \small 0m-3m & \small 65.7\% & \small 90.9\% & \small 97.4\% & \small 0.233 &\small 0.087 &\small 0.561 \\ \noalign{\smallskip} \small 3m-7m & \small 70.3\% & \small 95.5\% & \small 99.5\% & \small 0.175 &\small 0.075 &\small 0.936 \\ \noalign{\smallskip} \small 7m-10m & \small 45.0\% & \small 75.4\% & \small 93.5\% & \small 0.242 &\small 0.129 &\small 2.346 \\ \hline \noalign{\smallskip} & \multicolumn{5}{c}{\small Classification} \\ \hline \noalign{\smallskip} \small 0m-3m &\small 69.6\% &\small 91.2\% &\small 97.2\% &\small 0.216 &\small 0.083 &\small 0.561 \\ \noalign{\smallskip} \small 3m-7m &\small 76.0\% &\small 94.9\% &\small 98.6\% &\small 0.151 &\small 0.070 &\small 0.857 \\ \noalign{\smallskip} \small 7m-10m &\small 49.7\% &\small 74.9\% &\small 93.1\% &\small 0.238 &\small 0.126 &\small 2.199 \\ \hline \noalign{\smallskip} \end{tabular} \end{table} \subsection{Component evaluation} In this section, we analyze the contribution of key components including the information gain matrix, fully connected CRFs and network architectures in our proposed approach. We evaluate depth estimation on both the NYUD2 and KITTI datasets. We use the standard training set containing 795 images of the NYUD2 dataset and evaluate on the standard 654 test images. The continuous depth values are discretized into 100 bins in the log space. 
As for the KITTI dataset, we apply the same split as in \cite{EigenPF14}, which contains 700 training images and 697 test images. We only use the left images and discretize the continuous depth values into 50 bins in the log space. We cap the maximum depth at 80 meters. During training, we ignore the missing values in the ground-truth depths and only evaluate on valid points. \subsubsection{Benefit of information gain matrix} In this part, we evaluate the contribution of the information gain matrix in our loss function. We train the ResNet101 model on both the NYUD2 and KITTI datasets with and without information gain matrices. The $\alpha$ defined in Eq.~(\ref{lossfunc}) is set to 0.2 and 0.5 for NYUD2 and KITTI respectively. In our experiments, we find that the performance is not sensitive to $\alpha$. The results are illustrated in Table \ref{table:info_gain}. As we can see from this table, the information gain matrix improves the performance of both indoor and outdoor depth estimation. \begin{table} \renewcommand\arraystretch{0.65} \caption{Test results on the NYUD2 and KITTI datasets with and without information gain matrices.
For each dataset, the first row is the result without information gain matrix, the second row is the result with information gain matrix.} \centering \label{table:info_gain} \begin{tabular}{@{\hskip 0.05cm}c@{\hskip 0.07cm}c@{\hskip 0.07cm}c@{\hskip 0.07cm}c @{\hskip 0.39cm}c@{\hskip 0.15cm}c@{\hskip 0.15cm}c} \noalign{\smallskip} \hline \noalign{\smallskip} & \multicolumn{3}{c}{\small Accuracy} & \multicolumn{3}{c}{\small Error} \\ & \small $\delta<1.25$ &\small $\delta<1.25^2$ &\small $\delta<1.25^3$ &\small rel &\small log10 &\small rms \\ \noalign{\smallskip} \hline \noalign{\smallskip} & \multicolumn{5}{c}{\small NYUD2} \\ \hline \noalign{\smallskip} \small Plain &\small 70.9\% &\small 92.1\% &\small 98.0\% &\small 0.193 &\small 0.079 &\small 0.716 \\ \noalign{\smallskip} \small Infogain &\small \bf 72.2\% &\small \bf 92.6\% &\small \bf 98.0\% &\small \bf 0.192 &\small \bf 0.077 &\small \bf 0.688 \\ \hline \noalign{\smallskip} & \multicolumn{5}{c}{\small KITTI} \\ \hline \noalign{\smallskip} \small Plain &\small 79.9\% &\small 93.7\% &\small 97.6\% &\small 0.166 &\small 0.067 &\small 5.443 \\ \noalign{\smallskip} \small Infogain &\small \bf 81.4\% &\small \bf 93.9\% &\small \bf 97.6\% &\small \bf 0.153 &\small \bf 0.062 &\small \bf 5.290 \\ \hline \noalign{\smallskip} \end{tabular} \end{table} \subsubsection{Benefit of fully connected CRFs} In order to evaluate the effect of the fully connected CRFs, we first train the ResNet101 model on both the NYUD2 and KITTI datasets, and then apply the fully connected CRFs as post-processing. We illustrate the results in Table \ref{table:fc_CRF}. As we can see from the table, the fully-connected CRF can improve the depth estimation of both indoor and outdoor scenes. \begin{table} \renewcommand\arraystretch{0.65} \caption{Test results on the NYUD2 and KITTI datasets with and without the fully connected CRFs as post-processing. 
For each dataset, the first row is the result without CRFs, the following row is the result with CRFs.} \centering \label{table:fc_CRF} \begin{tabular}{@{\hskip 0.05cm}c@{\hskip 0.07cm}c@{\hskip 0.07cm}c@{\hskip 0.07cm}c @{\hskip 0.39cm}c@{\hskip 0.15cm}c@{\hskip 0.15cm}c} \noalign{\smallskip} \hline \noalign{\smallskip} & \multicolumn{3}{c}{\small Accuracy} & \multicolumn{3}{c}{\small Error} \\ & \small $\delta<1.25$ &\small $\delta<1.25^2$ &\small $\delta<1.25^3$ &\small rel &\small log10 &\small rms \\ \noalign{\smallskip} \hline \noalign{\smallskip} & \multicolumn{5}{c}{\small NYUD2} \\ \hline \noalign{\smallskip} \small Plain &\small 70.9\% &\small \bf 92.1\% &\small 98.0\% &\small 0.193 &\small 0.079 &\small 0.716 \\ \noalign{\smallskip} \small CRF &\small \bf 71.3\% &\small 92.0\% &\small \bf 98.0\% &\small \bf 0.190 &\small \bf 0.079 &\small \bf 0.696 \\ \noalign{\smallskip} \hline \noalign{\smallskip} & \multicolumn{5}{c}{\small KITTI} \\ \hline \noalign{\smallskip} \small Plain &\small 79.9\% &\small 93.7\% &\small 97.6\% &\small \bf 0.166 &\small 0.067 &\small 5.443 \\ \noalign{\smallskip} \small CRF &\small \bf 81.0\% &\small \bf 94.1\% &\small \bf 97.9\% &\small 0.167 &\small \bf 0.066 &\small \bf 5.349 \\ \hline \noalign{\smallskip} \end{tabular} \end{table} \begin{table} \renewcommand\arraystretch{0.65} \caption{Test results on the NYUD2 dataset with different network structures. The first row is the result of the VGG16 net, the following two rows are the results of deep residual networks. 
We also show the total number of parameters of the three networks in the last row.} \centering \label{table:networks} \begin{tabular}{@{\hskip 0.05cm}c@{\hskip 0.07cm}c@{\hskip 0.07cm}c@{\hskip 0.07cm}c @{\hskip 0.39cm}c@{\hskip 0.15cm}c@{\hskip 0.15cm}c} \noalign{\smallskip} \hline \noalign{\smallskip} & \multicolumn{3}{c}{\small Accuracy} & \multicolumn{3}{c}{\small Error} \\ & \small $\delta<1.25$ &\small $\delta<1.25^2$ &\small $\delta<1.25^3$ &\small rel &\small log10 &\small rms \\ \noalign{\smallskip} \hline \noalign{\smallskip} \small VGG16 &\small 62.1\% &\small 87.2\% &\small 96.0\% &\small 0.236 &\small 0.097 &\small 0.857 \\ \noalign{\smallskip} \small ResNet101 &\small 70.9\% &\small 92.1\% &\small 98.0\% &\small 0.193 &\small 0.079 &\small 0.716 \\ \noalign{\smallskip} \small ResNet152 &\small \bf 71.2\% &\small \bf 92.3\% &\small \bf 98.0\% &\small \bf 0.187 &\small \bf 0.071 &\small \bf 0.681 \\ \noalign{\smallskip} \hline \noalign{\smallskip} & \multicolumn{2}{c}{\small VGG16} & \multicolumn{2}{c}{\small ResNet101} & \multicolumn{2}{c}{\small ResNet152}\\ \noalign{\smallskip} \small Parameters & \multicolumn{2}{c}{\small $13.9\times10^{7}$} & \multicolumn{2}{c}{\small $6.7\times10^{7}$} & \multicolumn{2}{c}{\small $8.2\times10^{7}$}\\ \hline \end{tabular} \end{table} \begin{table*} \renewcommand\arraystretch{0.65} \caption{Comparison with state-of-the-art on the NYUD2 dataset. The first 5 rows are results by recent depth estimation models.
The last row is the result of our approach.} \centering \label{table:state-of-art} \begin{tabular}{@{\hskip 0.1cm}c@{\hskip 0.2cm}c@{\hskip 0.2cm}c@{\hskip 0.2cm}c @{\hskip 0.6cm}c@{\hskip 0.25cm}c@{\hskip 0.25cm}c} \noalign{\smallskip} \hline \noalign{\smallskip} & \multicolumn{3}{c}{\small Accuracy} & \multicolumn{3}{c}{\small Error} \\ &\small $\delta<1.25$ &\small $\delta<1.25^2$ &\small $\delta<1.25^3$ &\small rel &\small log10 &\small rms \\ \noalign{\smallskip} \hline \noalign{\smallskip} \small Wang et al. \cite{Wang_2015_CVPR} & \small 60.5\% &\small 89.0\% &\small 97.0\% &\small 0.210 &\small 0.094 &\small 0.745 \\ \noalign{\smallskip} \small Liu et al. \cite{LiuSLR15} &\small 65.0\% &\small 90.6\% &\small 97.6\% &\small 0.213 &\small 0.087 &\small 0.759 \\ \noalign{\smallskip} \small Anirban et al. \cite{RoyT16} & - & - & - &\small 0.187 &\small 0.078 &\small 0.744 \\ \noalign{\smallskip} \small Eigen et al. \cite{Eigen15} &\small 76.9\% &\small 95.0\% &\small 98.8\% &\small 0.158 & - &\small 0.641 \\ \noalign{\smallskip} \small Laina et al. \cite{laina2016deeper} &\small 81.1\% &\small 95.3\% &\small 98.8\% &\small \bf 0.127 &\small \bf 0.055 &\small 0.573 \\ \noalign{\smallskip} \small Ours &\small \bf 81.9\% &\small \bf 96.5\% &\small \bf 99.2\% &\small 0.141 &\small 0.060 &\small \bf 0.540 \\ \hline \end{tabular} \end{table*} \begin{table*} \renewcommand\arraystretch{0.65} \caption{Comparison with state-of-the-art results on the KITTI dataset. We cap the maximum depth to 50 and 80 meters to compare with recent works. For the work in~\cite{godard2016unsupervised}, we also report their results with additional training images in the CityScapes dataset~\cite{Cordts2016Cityscapes} and denote as Godard et al. 
CS.} \centering \label{table:stat-of-art_kitti} \begin{tabular}{@{\hskip 0.32cm}c@{\hskip 0.25cm}c@{\hskip 0.25cm}c@{\hskip 0.25cm}c @{\hskip 0.6cm}c@{\hskip 0.30cm}c@{\hskip 0.30cm}c} \noalign{\smallskip} \hline \noalign{\smallskip} & \multicolumn{3}{c}{\small Accuracy} & \multicolumn{3}{c}{\small Error} \\ &\small $\delta<1.25$ &\small $\delta<1.25^2$ &\small $\delta<1.25^3$ &\small rel &\small rmslog &\small rms \\ \noalign{\smallskip} \hline \noalign{\smallskip} & \multicolumn{5}{c}{\small Cap 80 meters} \\ \hline \noalign{\smallskip} \small Liu et al.~\cite{LiuSLR15} &\small 65.6\% &\small 88.1\% &\small 95.8\% &\small 0.217 &\small - &\small 7.046 \\ \noalign{\smallskip} \small Eigen et al.~\cite{EigenPF14} &\small 69.2\% &\small 89.9\% &\small 96.7\% &\small 0.190 &\small 0.270 &\small 7.156 \\ \noalign{\smallskip} \small Godard et al.~\cite{godard2016unsupervised} &\small 81.8\% &\small 92.9\% &\small 96.6\% &\small 0.141 &\small 0.242 &\small 5.849 \\ \noalign{\smallskip} \small Godard et al. CS~\cite{godard2016unsupervised} &\small 83.6\% &\small 93.5\% &\small 96.8\% &\small 0.136 &\small 0.236 &\small 5.763 \\ \noalign{\smallskip} \small Ours &\small \bf 88.7\% &\small \bf 96.3\% &\small \bf 98.2\% &\small \bf 0.115 &\small \bf 0.198 &\small \bf 4.712 \\ \hline \noalign{\smallskip} & \multicolumn{5}{c}{\small Cap 50 meters} \\ \hline \noalign{\smallskip} \small Garg et al.~\cite{garg2016unsupervised} &\small 74.0\% &\small 90.4\% &\small 96.2\% &\small 0.169 &\small 0.273 &\small 5.104 \\ \noalign{\smallskip} \small Godard et al.~\cite{godard2016unsupervised} &\small 84.3\% &\small 94.2\% &\small 97.2\% &\small 0.123 &\small 0.221 &\small 5.061 \\ \noalign{\smallskip} \small Godard et al. 
CS~\cite{godard2016unsupervised} &\small 85.8\% &\small 94.7\% &\small 97.4\% &\small 0.118 &\small 0.215 &\small 4.941 \\ \noalign{\smallskip} \small Ours &\small \bf 89.8\% &\small \bf 96.6\% &\small \bf 98.4\% &\small \bf 0.107 &\small \bf 0.187 &\small \bf 3.605 \\ \hline \end{tabular} \end{table*} \subsubsection{Network Comparisons} In this part, we compare the performance of deep residual networks with the baseline VGG16 net \cite{Simonyan14c} on the NYUD2 dataset. Since we formulate depth estimation as a classification task, we can apply network structures that perform well on the semantic segmentation task. Specifically, for the VGG16 net, we apply the structure in \cite{LinSRH15}. We keep the layers up to ``fc6" in the VGG16 net and add 2 convolutional layers with 512 channels, and 2 fully-connected layers with 512 and 100 channels respectively. The results are illustrated in Table~\ref{table:networks}. The residual networks unsurprisingly outperform the VGG16 net, reinforcing the importance of network depth. Note that ResNet152 improves only marginally over ResNet101; this is likely caused by overfitting, as the training set contains only 795 images. We also compare the numbers of parameters in Table~\ref{table:networks}. \subsection{State-of-the-art comparisons} In this section, we evaluate our approach on the NYUD2 and KITTI datasets and compare with recent depth estimation methods. We apply the deep residual network with 152 layers and initialize the parameters with the ResNet152 model in \cite{kmhe15}. \subsubsection{NYUD2} We train our model using the entire raw training data specified in the official train/test distribution and test on the standard 654 test images. We discretize the depth values into 100 bins in the log space. We set the parameter $\alpha$ of the information gain matrix to 0.2. The fully connected CRFs are applied as post-processing. The results are reported in Table \ref{table:state-of-art}.
The first row is the result in \cite{Wang_2015_CVPR}, which jointly performs depth estimation and semantic segmentation. The second row is the result of deep convolutional neural fields (DCNF) with a fully convolutional network and super-pixel pooling in \cite{LiuSLR15}. The third row is the result of the neural regression forest (NRF) in \cite{RoyT16}. The fourth row is the result in \cite{Eigen15}, which performs depth estimation in a multi-scale network architecture. The fifth row is the result in \cite{laina2016deeper}, which applies an upsampling scheme. The last row is the depth estimation result of our model. As we can see from the table, our deep fully convolutional residual network with depth label classification achieves state-of-the-art performance on 4 of the evaluation metrics. We also show some qualitative results in Fig. \ref{fig:visual}, from which we can see that our method yields better visualizations in general. \begin{table} \renewcommand\arraystretch{0.65} \caption{Test results on the SUN RGB-D dataset for cross-dataset evaluation. The first 2 rows are results by recent depth estimation models.
The last row is the result of our approach.} \centering \label{table:cross_dataset} \begin{tabular}{@{\hskip 0.05cm}c@{\hskip 0.07cm}c@{\hskip 0.07cm}c@{\hskip 0.07cm}c @{\hskip 0.26cm}c@{\hskip 0.15cm}c@{\hskip 0.15cm}c} \noalign{\smallskip} \hline \noalign{\smallskip} & \multicolumn{3}{c}{\small Accuracy} & \multicolumn{3}{c}{\small Error} \\ & \small $\delta<1.25$ &\small $\delta<1.25^2$ &\small $\delta<1.25^3$ &\small rel &\small log10 &\small rms \\ \noalign{\smallskip} \hline \noalign{\smallskip} \small Liu \cite{LiuSLR15} &\small 35.6\% &\small 57.6\% &\small 83.1\% &\small 0.316 &\small 0.161 &\small 0.931 \\ \noalign{\smallskip} \small Laina \cite{laina2016deeper} &\small 53.9\% &\small 70.3\% &\small \bf 89.0\% &\small 0.279 &\small 0.138 &\small 0.851 \\ \noalign{\smallskip} \small Ours &\small \bf 56.3\% &\small \bf 72.7\% &\small 88.2\% &\small \bf 0.256 &\small \bf 0.127 &\small \bf 0.839 \\ \noalign{\smallskip} \hline \end{tabular} \end{table} \subsubsection{KITTI} We train our model on the same training set as in \cite{godard2016unsupervised}, which contains 33131 images, and test on the same 697 images as in \cite{EigenPF14}. Unlike the depth estimation method proposed in \cite{godard2016unsupervised}, which applies both the left and right images of stereo pairs, we only use the left images. The missing values in the ground-truth depth maps are ignored during both training and evaluation. The depth values are discretized into 50 bins in the log space. We set the parameter $\alpha$ of the information gain matrix to 0.5 and apply fully connected CRFs as post-processing. In order to compare with the recent state-of-the-art results, we cap the maximum depth at both 80 meters and 50 meters and present the results in Table~\ref{table:stat-of-art_kitti}. We can see from Table~\ref{table:stat-of-art_kitti} that our method outperforms the other methods significantly. Some qualitative results are illustrated in Fig. \ref{fig:kitti_visual}.
Our approach yields visually better results. \subsubsection{Cross-dataset evaluation} In order to show the generalization ability of our proposed method, we train our model on the raw NYUD2 dataset and test on the SUN RGB-D dataset \cite{Song_2015_CVPR}. SUN RGB-D is an indoor dataset containing 10335 RGB-D images captured by four different sensors. We randomly select 500 images from the test set for cross-dataset evaluation. The SUN RGB-D dataset contains 1449 images from the NYUD2 dataset; our selected test set excludes all images from NYUD2. We compare our method with Liu et al. \cite{LiuSLR15} and Laina et al. \cite{laina2016deeper}. We use the trained models and evaluation codes released by these authors. The results are illustrated in Table~\ref{table:cross_dataset}. We can see that our method achieves satisfactory results on a different dataset and outperforms the other methods. \section{Conclusion}\label{sec:con} We have presented a deep fully convolutional residual network architecture for depth estimation from single monocular images. We have made use of recent deep residual networks, discretized continuous depth values into bins and formulated depth estimation as a discrete classification problem. With this formulation we can easily obtain the confidence of a prediction, which can be exploited during training via information gain matrices as well as in post-processing via fully-connected CRFs. We have shown that our discretization approach performs surprisingly well. Note that the proposed network can be further improved by applying techniques that have been previously explored. For example, it is expected that \begin{itemize} \item Multi-scale inputs as in \cite{Eigen15} would improve our result. \item Concatenating the mid-layers' outputs may better exploit the low- and mid-layer information as in \cite{BharathCVPR2015}. \item Upsampling the prediction maps as in \cite{long_shelhamer_fcn} would be beneficial too.
\end{itemize} We leave these directions for future work. \begin{figure*} \begin{center} \includegraphics[scale=.765]{./figures/figure_5.pdf} \end{center} \caption{Some depth estimation results on the NYUD2 dataset. (a) RGB Input; (b) Ground-truth depth; (c) Results of Liu et al. \cite{LiuSLR15}; (d) Results of Eigen et al. \cite{Eigen15}; (e) Results of our model without fully-connected CRFs; (f) Results of our model with fully-connected CRFs.} \label{fig:visual} \end{figure*} \begin{figure*} \begin{center} \includegraphics[scale=.45]{./figures/figure_6.pdf} \end{center} \caption{Some depth estimation results on the KITTI dataset. The first row shows the ground-truth depths, the second row the results of \cite{garg2016unsupervised}, and the last row the results of our approach.} \label{fig:kitti_visual} \end{figure*} \bibliographystyle{IEEEtran} \newpage
\section{Conclusions} \label{sec:conclusions} In this work we present a thorough analysis of the human motion trajectory prediction problem. We survey the literature across multiple domains and propose a taxonomy of motion prediction techniques. Our taxonomy builds on the two fundamental aspects of the motion prediction problem: the model of motion and the input contextual cues. We review the relevant trajectory prediction tasks in several application areas, such as service robotics, self-driving vehicles and advanced surveillance systems. Finally, we summarize and discuss the state of the art along the lines of three major questions and outline several prospective directions for future research. \emph{``Prediction is very difficult, especially about the future''}. This quote (whose origin has been attributed to multiple people) certainly remains applicable to motion trajectory prediction, despite two decades of research and the 200+ prediction methods listed in this survey. We hope that our survey increases visibility in this rapidly expanding field and will stimulate further research along the directions discussed. \section{Discussion} \label{sec:discussion} There has been great progress in developing advanced prediction techniques over the last years in terms of method diversity, performance and relevance to an increasing number of application scenarios. In this section, we summarize and discuss the state of the art and revisit the three questions initially raised in the introduction: \emph{Are the evaluation techniques to measure prediction performance good enough, and do they follow best practices (Q1)}? This is discussed in Sec.~\ref{sec:discussion:benchmarking} by reviewing the existing benchmarking practices including metrics, experiments and datasets. \emph{Have all prediction methods arrived at the same performance level, such that the choice of the modeling approach no longer matters (Q2)}?
This is discussed in Sec.~\ref{sec:modeling_approaches_discussion}, where we consider the theoretical and demonstrated ability of the different modeling approaches to solve the motion prediction problem by accounting for contextual cues from the environment and the target agent. And: \emph{Is motion prediction solved (Q3)}? This is discussed in Sec.~\ref{sec:application_areas_discussion} by revisiting the requirements of the different application scenarios. Finally, in Sec.~\ref{sec:discussion:open_challenges} we outline open challenges and future research directions. \subsection{Benchmarking} \label{sec:discussion:benchmarking} Evaluating the performance of a motion prediction algorithm requires choosing appropriate testing scenarios and accuracy metrics, as well as studying the method's robustness against various variables, such as the number of interacting agents or the amount of maneuvering in the data. Depending on the application area, the testing scenario may be an intersection, a highway, a pedestrian crossing, a shared urban street with heterogeneous agents, a home environment or a crowded public space. Existing datasets, summarized in Sec.~\ref{sec:evaluation:datasets}, cover a wide range of scenarios, e.g. indoor \citep{zhou2012understanding,brscic2013person,rudenko2019thor} and outdoor environments \citep{pellegrini2009you,lerner2007crowds,oh2011large}, pedestrian areas \citep{majecka2009statistical,benfold2011stable}, urban zones \citep{robicquet2016learning,schneider2013gcpr} and highways \citep{colyar2006us,colyar2007us,Krajewski2018HighDdataset}, and include trajectories of various agents, such as people, cyclists and vehicles. However, these datasets are usually semi-automatically annotated and therefore only provide incomplete and noisy estimates of the ground-truth positions (due to annotation artifacts).
Furthermore, the length of the recorded trajectories is often insufficient for evaluation in application domains where long-term predictions are required. Moreover, the amount of interaction between recorded agents is often limited or imbalanced (very few agents are interacting, so misinterpreting such cases is barely reflected in the benchmark scores). Finally, relevant semantic information about static (i.e. grass, crosswalks, sidewalks, streets) and dynamic (i.e. human attributes such as age, gender or group affiliation) entities is usually not recorded. Accuracy metrics, described in Sec.~\ref{sec:evaluation:metrics}, offer a rich choice for benchmarking, ranging from computing geometric distances between points (ADE, FDE), also accounting for temporal misalignments (DTW, MHD), to probabilistic policy likelihood measures (NLL) and sampling-based distribution evaluation (mADE). For long-term forecasts made in topologically non-trivial scenarios, results are usually multi-modal and associated with uncertainty. Performance evaluation of such methods should make use of metrics that account for this, such as negative log-likelihood or log-loss derived from the KLD. Not all authors are currently using such metrics. Even for short-term prediction horizons, for which a large majority of authors use geometric metrics only (ADE, FDE), probabilistic metrics are preferable as they better reflect the stochastic nature of human motion and the uncertainties stemming from imperfect sensing. Another issue of benchmarking is related to variations in the exact metric formulation and the different names used for the same metric, e.g. for the ADE- and likelihood-based metrics, as indicated in Sec.~\ref{sec:evaluation:metrics}. Additionally, precision is often evaluated on a single arbitrary prediction horizon. These aspects obstruct comparison of the relative precision of various methods. Furthermore, very few authors currently address robustness as a relevant issue.
This is surprising as prediction needs to be robust against a variety of perturbations when deployed in real systems. Examples include sensing and detection errors, tracking deficiencies, self-localization uncertainties or map changes. \subsubsection{On question 1:} We conclude that \emph{Q1} is not confirmed. Despite the numerous metrics, datasets and experiment designs used in individual works, benchmarking prediction algorithms lacks a systematic approach with common evaluation practices. For evaluating prediction quality, researchers should opt for more complex testing scenarios (which include non-convex obstacles, long trajectories, collision avoidance maneuvers and non-trivial interactions) and a complete set of metrics (both geometric and probabilistic). It is good practice to condition the forecast precision on various prediction horizons, observation periods and the complexity of the scene, {e.g.} defined by how many interacting agents are tracked simultaneously. Furthermore, perfect sensing, perception and tracking are not always achieved in real-life operation, and therefore the performance of algorithms should ideally be investigated in realistic conditions and supported by robustness experiments, e.g. see Sec.~\ref{sec:evaluation:metrics:other}. A proper performance analysis would clarify the application potential and effective prediction horizon of many methods. Similar benchmarking practices should be applied to runtime evaluation. Considering efficiency on the embedded CPUs of autonomous systems is important for the algorithm's design and evaluation. To prove applicability in real-life scenarios (e.g. in a pipeline with time-sensitive local and global motion planners), the discussion should include formal complexity and runtime analysis, conditioned on the scene complexity and prediction horizon.
For a fair and objective comparison of prediction algorithms, developing a standard benchmark with testing scenarios and metrics is becoming a task of critical importance, e.g. given the rapid growth in published literature (see Fig.~\ref{fig:paperstatistics}). The first attempt to build such a benchmark, TrajNet, was made by \cite{sadeghiankosaraju2018trajnet}, with the follow-up, TrajNet++, to be released soon. TrajNet is based on selected trajectories from the ETH, UCY and Stanford Drone Dataset and uses the ADE and FDE evaluation metrics. We encourage more researchers to follow this example and contribute to the unification of benchmarking practices. \subsection{Modeling Approaches} \label{sec:modeling_approaches_discussion} With such a wide variety of motion modeling approaches, a natural question arises: which one should be preferred? In this section we discuss the inherent strengths and limitations of the different classes of approaches and the efforts to incorporate various contextual cues. This discussion continues in Sec.~\ref{sec:application_areas_discussion} by highlighting the specifics of several key tasks in the application domains. Physics-based approaches are suitable in those situations where the effect of other agents or the static environment, and the agent's motion dynamics, can be modeled by an explicit transition function. Many of the physics-based approaches naturally handle joint predictions and group coherence. With the choice of an appropriate transition function, physics-based approaches can be readily applied across multiple environments, without the need for training datasets (some data for parameter estimation is useful, though). The downside of using explicitly designed motion models is that they might not capture well the complexity of the real world.
The transition functions tend to lack information regarding the ``greater picture'', both on the spatial and the temporal scale, leading to solutions that represent local minima (``dead ends''). In practice, this limits the usability of physics-based methods to short prediction horizons and relatively obstacle-free environments. All in all, the existence of fast approximate inference, the applicability across multiple domains under mild conditions, and the interpretability make physics-based approaches a popular option for collision avoidance on mobile platforms ({e.g.} self-driving vehicles, service robots) and for people tracking applications. Pattern-based approaches are suitable for environments with complex unknown dynamics (e.g. public areas with rich semantics), and can cope with comparatively large prediction horizons. However, this requires ample data that must be collected for training purposes in a particular type of location or scenario. A further issue is the generalization capability of such a learned model: whether it can be transferred to a different site, especially if the map topology changes (cf. a service robot in an office where the furniture has been moved). Pattern-based approaches tend to be used in non-safety-critical applications, where explainability is less of an issue and where the environment is spatially constrained. Planning-based approaches work well if the goals that agents try to accomplish can be explicitly defined and a map of the environment is available. In these cases, the planning-based approaches tend to generate better long-term predictions than the physics-based techniques and generalize to new environments better than the pattern-based approaches.
In general, the runtime of planning-based approaches built on classical planning algorithms ({i.e.} Dijkstra \citep{schrijver2012history}, the Fast Marching Method \citep{sethian1996fast}, optimal sampling-based motion planners \citep{janson2018deterministic, karaman2011sampling}, value iteration \citep{littman1995complexity}) scales exponentially with the number of agents, the size of the environment and the prediction horizon \citep{russell2016artificial}. \subsubsection{On question 2:} In our view, \emph{Q2} is not confirmed. As we have seen, the different modeling approaches have various strengths and weaknesses. Although in principle it could be possible to incorporate the same contextual cues, there have so far been insufficient studies to compare prediction performance across modeling approaches. Moreover, different modeling approaches exhibit varying degrees of complexity and efficiency in including contextual cues from different categories. Physics-based methods are by their very nature aware of the target agent cues and may be easily extended with other ones ({e.g.} social-force-based \citep{helbing1995social} and circular distribution-based \citep{coscia2018long}). Pattern-based methods can potentially handle all kinds of contextual information which is encoded in the collected datasets. Some of them are intrinsically map-aware \citep{kucner2013conditional,bennewitz2005learning,roth2016iv}. Several others can be extended to include further types of contextual information ({e.g.} \cite{alahi2016social,trautman2010unfreezing,vemula2017socialattention,pfeiffer2018data,bartoli2017context}), but such extensions may lead to involved learning as well as data-efficiency and generalization issues ({e.g.} for the clustering methods \citep{bennewitz2005learning, chen2008pedestrian}). Planning-based approaches are intrinsically map- and obstacle-aware, and natural to extend with semantic cues \citep{kitani2012activity,ziebart2009planning,Rudenko2018iros, Rhinehart_2018_ECCV}.
Usually they encode the contextual complexity into an objective/reward function, which may fail to properly incorporate dynamic cues ({e.g.} changing traffic lights). Therefore, authors have to design specific modifications to include dynamic cues into the prediction algorithm (such as Jump Markov Processes in \citep{karasev2016intent}, local adaptations of the predicted trajectory in \citep{Rudenko2018iros, Rudenko2018icra}, or game-theoretic methods in \citep{ma2016forecasting}). Unlike for the pattern-based approaches, target agent cues are natural to incorporate, e.g. as in \citep{kuderer2012feature, Rudenko2018icra, ma2016forecasting}, as both forward and inverse planning approaches rely on a dynamical model of the agents. The context-dependent parameters of the planning-based methods ({e.g.} reward functions for inverse planning and models for forward planning) are typically easier to learn, but inference is less efficient for high-dimensional (target) agent states compared to the simple physics-based models. \subsection{Application Domains} \label{sec:application_areas_discussion} In Sec.~\ref{sec:modeling_approaches_discussion} we have shown that all modeling approaches can theoretically handle various contextual cues. However, the question of preferring one approach over the others also depends on the task at hand. \subsubsection{Service robots} \label{sec:discussion:robotics} Predictors for mobile robots usually estimate the most likely future trajectory of each person in the vicinity of the robot. The usual setup includes cameras, range and depth sensors mounted on the robot, operating on a limited-performance mobile CPU. Physics-based or pattern-based human interaction models, capable of providing short-term high-confidence predictions (i.e. for 1-2 seconds), are best suited for local motion planning and collision avoidance in the crowd.
Methods used to this end should have fast and efficient inference for predicting the short-term dynamics of several people around the robot. In the simplest case, even linear velocity projection is sufficient for smoothing the robot's local planning \citep{Bai2015,chen2017decentralized}. More advanced methods should handle human-human interaction \citep{pellegrini2009you,ferrer2014behavior,alahi2016social,moussaid2010walking,Gupta2018SocialGAN}, the influence of the robot's presence and actions on human motion \citep{oli2013human,schmerling2017multimodal,eiffert2019predicting,Rhinehart_2019_ICCV} and high-level body cues of human motion for disambiguating the immediate intention \citep{quintero2014pedestrian,unhelkar2015human,kooij2018ijcv,hasan2018mx}. In safety-critical applications, reachability-based methods provide a guarantee on local collision avoidance \citep{bansal2019hamilton}. Furthermore, understanding local motion patterns is useful for compliant and unobstructive navigation \citep{palmieri2017kinodynamic,vintr2019time}. For global path and task planning, on the other hand, long-term multi-hypothesis predictions ({i.e.} for 15-20 seconds ahead) are desired, posing a considerably more challenging task for the prediction system. The reactivity requirement is relaxed; however, understanding dynamic \citep{ma2016forecasting,bera2017aggressive} and static contextual cues \citep{sun20173dof,kitani2012activity,chung2010mobile,coscia2018long}, which influence motion in the long-term perspective, reasoning on the map of the environment \citep{karasev2016intent,Rudenko2018icra} and inferring intentions of observed agents \citep{vasquez2016novel,best2015bayesian,rehder2017pedestrian} become more important. For both local and global path planning, location-independent methods are best suited for predicting motion in a large variety of environments \citep{fernando2019neighbourhood,bansal2019hamilton,shi2019pedestrian}.
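For reference, the linear velocity projection baseline mentioned above can be written in a few lines. This is a minimal sketch of ours, assuming 2-D positions sampled at a fixed interval `dt` (the function name and default interval are our own choices):

```python
import numpy as np

def linear_velocity_projection(track, horizon, dt=0.4):
    """Linear velocity projection baseline: extrapolate the most recent
    observed velocity over `horizon` future time steps of length `dt`."""
    track = np.asarray(track, float)      # observed positions, shape (T, 2)
    v = (track[-1] - track[-2]) / dt      # latest finite-difference velocity
    steps = np.arange(1, horizon + 1)[:, None] * dt
    return track[-1] + steps * v          # predicted positions, shape (horizon, 2)
```

Despite its simplicity, such a baseline is a useful sanity check against which more elaborate interaction-aware predictors can be compared.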
In terms of accuracy of the current state-of-the-art methods, experimental evaluations on simpler datasets, such as the ETH and UCY, show an average displacement error of \SI{0.19}{} -- \SI{0.4}{\metre} for a \SI{4.8}{\second} prediction horizon \citep{yamaguchiCVPR2011,alahi2016social,vemula2017socialattention,radwan2018multimodal}. Linear velocity projection in these scenarios is estimated at \SI{0.53}{\metre} ADE. In the more challenging scenarios of the ATC dataset, with obstacles and longer trajectories, an average error of \SI{1.4}{} -- \SI{2}{\metre} for \SI{9}{\second} prediction {has been reported} \citep{sun20173dof,alahi2016social,Rudenko2018iros}. \subsubsection{Self-driving vehicles} \label{sec:discussion:self-driving} The early recognition of maneuvers of road users in canonical traffic scenarios is the subject of much interest in the self-driving vehicle application area. Several approaches stop short of motion trajectory prediction ({i.e.} regression) and consider the problem as action classification, while operating on short image sequences. Sensors are typically on board the vehicle, although some work involves infrastructure-based sensing ({e.g.} stationary cameras or laser scanners), which can potentially avoid occlusions and provide more precise object localization. Most works consider the scenario of the laterally crossing pedestrian, dealing with the question of what the latter will do at the curbside: start walking, continue walking, or stop walking \citep{schneider2013gcpr, keller2014tits, kooij2014eccv, kooij2018ijcv}. Some works enlarge the pedestrian crossing scenario by allowing some initial pedestrian movement along the boardwalk before crossing (\cite{schneider2013gcpr} perform trajectory prediction, while other approaches are limited to crossing intention recognition, e.g. \citep{schneemann2016context,kohler2015stereo,fang2017board}). This scenario is safety-critical and crucial for autonomous vehicles to solve with high confidence.
Pose and high-level contextual cues of the target agent \citep{kooij2018ijcv}, and scene context modeling (e.g. the location and type of obstacles \citep{muench2019composable,volz2016predicting}, the state of the traffic lights \citep{karasev2016intent}) are helpful to improve crossing trajectory prediction. As to cyclists, \cite{kooij2018ijcv} consider the scenario of a cyclist moving in the same direction as the ego-vehicle, possibly bending left into the path of the approaching vehicle. \cite{pool2017iv} consider the scenario of a cyclist nearing an intersection with up to five different subsequent road directions. Both involve trajectory prediction. For predicting the motion of both cyclists and vehicles it is important to consider the multi-modality and uncertainty of the future motion. Recently, many authors have proposed solutions to this end \citep{chai2019multipath,zhao2019multi,hong2019rules,cui2019multimodal}. Furthermore, it is important to consider the coordination of actions between the vehicles \citep{schmerling2017multimodal,Rhinehart_2019_ICCV}. It is difficult to compare the experimental results, as the datasets vary (different timings of the same scenario, different sensors, different metrics). Several works report improvements over their baselines. For example, Fig. 2 in \citep{kooij2014eccv} shows that during pedestrian stopping, \SI{0.9}{} and \SI{1.1}{\metre} improvements in lateral position prediction can be reached with a context-based SLDS, compared to a simpler context-free SLDS and a basic LDS (Kalman Filter), respectively, for prediction horizons up to \SI{1}{\second}. A live vehicle demo of this system at the ECCV'14 conference in Zurich showed that the superior prediction of the context-based SLDS could lead to evasive vehicle action being triggered up to \SI{1}{\second} earlier than with the basic LDS.
\subsubsection{{Surveillance}} \label{sec:discussion:surveillance} {The classification of goals and behaviors as well as the accurate prediction of human motion is of great importance for surveillance applications such as retail analytics or crowd control. Common setups for these applications use stationary sensors to monitor the environment. While single-frame-based systems make it possible to partially solve some tasks such as perimeter protection, incorporating a sequence of observations and making use of behavior prediction models often improves accuracy in cases of occlusions or low-quality measurements (e.g. noise, bad lighting conditions).} {Traffic monitoring and management applications can benefit from long-term prediction models, as they make it possible to associate new observations with existing tracks ({e.g.} \cite{pellegrini2009you,yamaguchiCVPR2011,Luber2010,pellegrini2010improving}) and to model long-term distributions over possible future positions of each person \citep{yen2008goal,chung2012incremental}. Furthermore, it enables the analysis and control of customer flow in populated areas such as malls and airports, by gathering extensive information on human motion patterns \citep{ellis2009modelling,yoo2016visual,kim2011gaussian,tay2008modelling}, understanding crowd movement in light and dense scenarios, tracking individuals within them, and making future predictions of individuals or crowds (e.g. crowd density prediction). Often these methods benefit from employing sociological methods, such as understanding of social interaction, behavior analysis, group and crowd mobility modeling \citep{antonini2006behavioral,zhou2015learning,bera2016glmp,ma2016forecasting}}.
Furthermore, {identifying deviations from usual patterns often forms the foundation for anomaly detection methods that go beyond perimeter protection, as they analyze trajectories instead of the mere presence of a pedestrian in a specific region.} Also in this application area it is difficult to compare results obtained by different approaches, due to the diversity of the used datasets and the way the evaluation has been performed (e.g. different prediction horizons). In terms of prediction accuracy, we report the most interesting results obtained in densely crowded environments using mainly image data. In these settings, recent state-of-the-art approaches achieve an average displacement error of \SI{0.08}{} -- \SI{1.2}{\metre} on the ETH, UCY, NY Grand Central, Town Center and TrajNet datasets, and a final displacement error of \SI{0.081}{} -- \SI{2.44}{\metre}, with a prediction horizon that generally goes from \SI{0.8}{\second} up to \SI{4.8}{\second} (\cite{xue2018ss, xue2017bi, xue2019location, zhou2015learning, shi2019pedestrian}, the latter using a proprietary dataset and going up to a prediction horizon of \SI{10}{\second}). \subsubsection{On question 3:} As we show in Sec.~\ref{sec:discussion:robotics}--\ref{sec:discussion:surveillance}, the requirements on the motion prediction framework strongly depend on the application domain and the particular use-case scenarios therein (e.g. vehicle merging vs. pedestrian crossing within the Intelligent Vehicles domain). Therefore, it is not possible to conclude that any absolute requirements have been achieved. When considering concrete use-cases, industry-driven domains, such as intelligent vehicles (IV), appear to be the most mature in terms of formulated requirements and proposed solutions.
For instance, the requirements on the prediction horizon and metric accuracy for emergency braking of IV in urban driving scenarios are described in the \cite{ISO15622} standard, which defines norms for comfortable acceleration/deceleration rates for vehicles, conditioned on the maximum speed and traffic rules, as well as the distribution of pedestrian speed and acceleration. Therefore we conclude that, for specific use-cases, in particular for basic emergency braking for IV, solutions have achieved a level of performance that allows for industrialization into consumer products. Those use-cases can be considered solved. For other use-cases we expect more standardization and explicit formulation of requirements to take place in the near future. For instance, the standard for safety requirements for personal care robots \cite{ISO13482} suggests using sensors for detecting a human in the vicinity of the robot to issue a protective stop, and controlling the speed and force when the robot is in close proximity to humans to reduce the risk of collision. This standard, however, does not propose motion anticipation to improve the risk assessment. Furthermore, several aspects of performance, robustness and generalization to new environments, discussed in the following sections, need to be explored before reaching further conclusions on the maturity of the solutions. Finally, in order to reliably assess the quality of existing solutions across all application domains, it is critical to address the issues of benchmarking. \subsection{Future Directions} \label{sec:discussion:open_challenges} Developing more sophisticated methods for motion prediction which go beyond Kalman filtering with simple motion models is a clear trend of the recent years.
Modern techniques make extensive use of machine learning in order to better estimate context-dependent patterns in real data, handle more complex environment models and types of motion, or even propose end-to-end reasoning on future motion from visual input. An increasing number of methods also include reasoning on the global structure of the environment, intentions and actions of the agent. {With these trends in mind, we see several directions of future research:} \subsubsection{Use of {enhanced} contextual cues} To analyze and predict human motion, as well as to plan and navigate alongside humans, intelligent systems should have an in-depth semantic scene understanding. Context understanding with respect to features of the static environment and its semantics for better trajectory prediction is still a relatively unexplored area, see Sec.~\ref{sec:other_classifications:environment_description} for more details. The same argument applies for the contextual cues of the dynamic environment. Socially-aware methods are making an important improvement over socially-unaware ones in spaces where the target agent is not acting in isolation. However, most existing socially-aware methods still assume that all observed people behave similarly and that their motion can be predicted by the same model and with the same features. Capturing and reasoning on high-level social attributes is at an early stage of development, see Sec.~\ref{sec:other_classifications:target_agent} and Sec.~\ref{sec:other_classifications:interaction}; recent methods, however, are taking first steps in this direction. Furthermore, most available approaches assume cooperative behavior, while real humans might rather optimize personal goals instead of joint strategies. In such cases, game-theoretic approaches are possibly better suited for modeling human behavior.
Consequently, adopting classical AI and game-theoretic approaches in multi-agent systems is a promising research direction that is only partly addressed in recent work, see e.g. \citep{ma2016forecasting, bahram2016game}. One task where contextual cues become particularly important is long-term prediction of motion trajectories. While context-agnostic motion and behavioral patterns are helpful for short prediction horizons, long-term predictions should account for intentions, based on the context and the surrounding environment. Many pattern-based methods treat agents as particles placed in a field of learned transitions that dictates the direction of future motion. Extending these models with more goal- or intention-driven predictions, which resemble human goal-directed behavior, would be beneficial for long-term predictions. Consequently, further research on automatic goal inference based on the semantics of the environment is important. Most planning-based methods rely on a given set of goals, which makes them unusable or imprecise in situations where no goals are known beforehand, or where the number of possible goals is too high. Alternatively, one could consider identifying possible goals in the environment on-the-fly and predicting the way the agent may reach those goals. This would allow application of the planning-based methods in unknown environments. Additionally, semantic indicators of possible goals, coming from understanding the person's social role or current activity \citep{bruckschen2019human}, could lead to more robust intention recognition. Apart from the contextual cues discussed in this survey, there are many other factors influencing pedestrian motion, according to recent studies \citep{rasouli2019autonomous}, e.g. weather conditions, time of day, social roles of agents. Future methods could benefit from a closer connection to the studies of human motion and behavior in social spaces \citep{arechavaleta2008optimality,do2016group,gorrini2016age}.
\subsubsection{{Robustness and integration}} Several practical aspects of deploying prediction systems in real environments should be considered in future work. Most of the presented methods are designed for specific tasks, scenarios or types of motion. These methods work well in certain situations, {e.g.} when prominent motion patterns exist in the environment, or when {the spatial structure of the environment and the target agent's} goals are known beforehand. A conceptually interesting approach that uses a combination of multiple prediction algorithms to reason about the best performance in the given situation is presented by \cite{lasota2017multiple}. The multiple-predictor framework opens a possibility for achieving more robust predictions when operating in undefined, changing situations, where a combination of the strengths of different methods is required. We suggest that more emphasis should be put on transfer learning and generalization of approaches to new environments. Learning and reasoning on basic, invariant rules and norms of human motion and collision avoidance is a better approach in this case. When having access to several environments, domain adaptation could potentially be used for learning generalizable models. Integration of prediction in planning and control is another worthwhile {topic for overall system robustness}. Predicting human motion is usually motivated by increased safety of human-robot interaction and efficiency of operation. However, the insights on exploiting predictions in the robot's motion or action planning module are typically left out of scope in many papers. Future work would benefit from outlining possible ways to incorporate predictions in the robot control framework. \section{Motion Prediction Evaluation} \label{sec:evaluation} An important challenge for motion prediction methods is the design of experiments to evaluate their performance with respect to other methods and the requirements from the targeted application.
In this section we review and discuss common metrics and datasets to this end. \subsection{Performance Metrics} \label{sec:evaluation:metrics} Due to the stochastic nature of human decision making and behavior, exact prediction of trajectories is rarely possible, and we require measures to quantify the similarity between predicted and actual motion. Different prediction types -- see Fig.~\ref{introduction:fig:overview} -- require different measures: for single trajectories we need geometric measures of trajectory similarity or final displacement, while for parametric and non-parametric distributions over trajectories we can use geometric measures as well as difference measures for probability distributions. Metrics commonly used in the literature are summarized in Table~\ref{tab:metrics}. \subsubsection{Geometric accuracy metrics} \label{sec:evaluation:metrics:geometric} Geometric measures are the most commonly used across all application domains. Several surveys have considered the topic of trajectory analysis and comparison \citep{zhang2006comparison, morris2008survey, zheng2015trajectory, quehl2017howgood, pan2016mining}, among which only the recent survey by \cite{quehl2017howgood} specifically considers geometric similarity measures for trajectory prediction evaluation. In addition to that, we review the probabilistic metrics and the assessment of distributions with geometric methods in Sec.~\ref{sec:evaluation:metrics:probabilistic}, and the experiments to evaluate robustness in Sec.~\ref{sec:evaluation:metrics:other}. Summarizing \citep{morris2008survey,quehl2017howgood}, we consider eight metrics: {\bf Mean Euclidean Distance} (MED), also called \emph{Average Displacement Error} (ADE), averages Euclidean distances between points of the predicted trajectory and the ground truth that have the same temporal distance from their respective start points.
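As a minimal illustration (our own sketch, assuming 2-D positions and equal-length, time-aligned trajectories), MED/ADE can be computed as:

```python
import numpy as np

def ade(pred, gt):
    """Average Displacement Error (MED/ADE): mean Euclidean distance
    between time-aligned predicted and ground-truth positions."""
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    assert pred.shape == gt.shape, "trajectories must be time-aligned"
    return float(np.linalg.norm(pred - gt, axis=-1).mean())
```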
An alternate form computes MED in a subspace between coefficients of the trajectories' principal components (PCA-Euclid). A third variant (MEDP) is a path measure able to compare paths of different length. For each $(x,y)$-point of the predicted path, the nearest ground truth point is searched. Being a path measure, MEDP is invariant to velocity differences and temporal misalignment but does not account for temporal ordering. A fourth variant (n-ADE) measures MED only on non-linear segments of trajectories. MED measures are widely used by many authors across all domains, see Table \ref{tab:metrics}. Many authors evaluate probabilistic predictions by computing the expected MED under the predictive distribution, referring to it as \emph{mean ADE}, \emph{weighted mean ADE}, or, abusing notation, simply MED or ADE. This type of evaluation, however, does not measure how well the predictive distribution matches the ground truth distribution, falling short of being a true probabilistic measure. For example, it favors point predictions and avoids larger variances, as they often increase the expected ADE. {\bf Dynamic Time Warping} (DTW) \citep{Berndt1994DTW} computes a similarity metric between trajectories of different length as the minimum total cost of warping one trajectory into another under some distance metric for point pairs. As DTW operates on full trajectories, it is susceptible to outliers. {\bf Modified Hausdorff Distance} (MHD) \citep{dubuisson1994modified} is related to the Hausdorff distance, the maximal minimal distance between the points of the predicted and actual trajectory. MHD was designed to be more robust against outliers by allowing slack during matching and to compare trajectories of different length. A further variant is the \emph{trajectory Hausdorff} measure (THAU) \citep{Lee2007TCP}, a path metric that computes a weighted sum over three distance terms, each focusing on differences in perpendicular direction, length, and orientation between the paths.
The weights can be chosen in an application-dependent manner. {\bf Longest Common Subsequence} (LCS) \citep{Buzan2004LCSS} aligns two trajectories of different length so as to maximize the length of the common subsequence, i.e. the number of matching points between both trajectories. A good match is determined by thresholding a pair-wise distance and time difference; not all points need to be matched. LCS is more robust to noise and outliers than DTW, but finding suitable values for the two thresholds is not always easy. {\bf CLEAR multiple object tracking accuracy} (CLEAR-MOTA) was initially introduced as a performance metric for target tracking \citep{Bernardin2008CLEARMOTA}. In the context of prediction evaluation, it is similar to LCS in that it sums up good matches between points on the predicted trajectory and the ground truth. The difference is that the concept of pair-wise matches and mismatches is more complex, including false negatives, false positives and non-unique correspondences. In addition to the metrics considered in \citep{morris2008survey,quehl2017howgood}, relevant metrics used in the reviewed literature include the \emph{Quaternion-based Rotationally Invariant LCS} (QRLCS), which is the rotationally invariant counterpart of LCS \citep{hermesIVS09}, and several measures that quantify different geometric aspects in addition to trajectory or path similarity: {\bf Final Displacement Error} (FDE) measures the distance between the final predicted position and the ground truth position at the corresponding time point. If the prediction is represented by a distribution, many authors compute the expected FDE. FDE, however, is not appropriate when there are multiple possible future positions. {\bf Prediction Accuracy} (PA) uses a binary function to classify a prediction as correct if the predicted position fulfills some criteria, e.g. lies within a threshold distance of the ground truth. The percentage of correctly predicted trajectories is then reported.
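To make the geometric measures above concrete, the following is a minimal illustrative sketch (our own, not code from the surveyed works; function names are hypothetical) of ADE, FDE, MHD, DTW and an LCS count, assuming trajectories are given as arrays of $(x,y)$ positions:

```python
# Illustrative sketch (ours, not code from the surveyed works): geometric
# accuracy measures for trajectories given as NumPy arrays of shape (T, 2),
# one (x, y) position per time step. Function names are hypothetical.
import numpy as np

def ade(pred, gt):
    """Average Displacement Error: mean point-wise Euclidean distance
    between time-aligned trajectories of equal length."""
    return np.linalg.norm(pred - gt, axis=1).mean()

def fde(pred, gt):
    """Final Displacement Error: distance between the final positions."""
    return np.linalg.norm(pred[-1] - gt[-1])

def mhd(pred, gt):
    """Modified Hausdorff Distance: the larger of the two directed mean
    nearest-neighbor distances; handles different trajectory lengths."""
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=2)
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())

def dtw(pred, gt):
    """Dynamic Time Warping: minimum total cost of warping one trajectory
    into the other under the Euclidean point-pair distance."""
    n, m = len(pred), len(gt)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(pred[i - 1] - gt[j - 1])
            acc[i, j] = cost + min(acc[i - 1, j], acc[i, j - 1],
                                   acc[i - 1, j - 1])
    return acc[n, m]

def lcs(pred, gt, eps, delta):
    """Longest Common Subsequence length: two points match if they are
    closer than eps and their time indices differ by at most delta."""
    n, m = len(pred), len(gt)
    L = np.zeros((n + 1, m + 1), dtype=int)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if (np.linalg.norm(pred[i - 1] - gt[j - 1]) < eps
                    and abs(i - j) <= delta):
                L[i, j] = L[i - 1, j - 1] + 1
            else:
                L[i, j] = max(L[i - 1, j], L[i, j - 1])
    return int(L[n, m])
```

Note that ADE and FDE require equal-length, time-aligned trajectories, while MHD, DTW and LCS also apply to trajectories of different length.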
PA allows incorporating suitable invariances into the distance function, such as tolerating certain types of errors. As also pointed out by \cite{quehl2017howgood}, the challenge in choosing a suitable measure is that these measures usually produce quite different results. For the sake of an unbiased and fair evaluation of different prediction algorithms, measures should be chosen not to suit a particular method but based on the requirements of the targeted application. An application that involves a wide range of velocities, for example, should not rely solely on path measures. \begin{table*}[ptbh] \centering \footnotesize \begin{tabular}{p{1.4cm}p{3.3cm}p{11.4cm}} \hline & {\bf Metric} & {\bf Used by} \\ \hline Geometric & Average Displacement Error (ADE) & \cite{pellegrini2009you,yamaguchiCVPR2011,alahi2016social,sun20173dof,bartoli2017context,vemula2017modeling,karasev2016intent,kim2015brvo,vasquez2008intentional,yiTRIP2016,rosmann2017online,yoo2016visual,schulz2015controlled,zernetsch2016trajectory,pool2017iv,minguez2018pedestrian,wuIV18,hermesIVS09,raipuriaIV2017,deo2018multi,kim2017probabilistic,vemula2017socialattention,radwan2018multimodal,pfeiffer2018data,kooij2018ijcv,quintero2014pedestrian,saleh2018intent,saleh2018cyclist,bisagno2018group,xue2019location,zhang2019sr,shi2019pedestrian,zhao2019multi,xue2017bi,hasan2018mx,xue2018ss,su2017forecast,srikanth2019infer,sadeghian2018sophie,park2018sequence,djuric2018motion,xie2018vehicle,Gupta2018SocialGAN,huynh2019trajectory,nikhil2018convolutional,xu2018encoding,fernando2018soft,cui2019multimodal,luo2019gamma,hong2019rules,pei2019human,altche2017lstm,huang2019stgat,chai2019multipath,amirian2019social,blaiotta2019learning,dai2019modeling,kosaraju2019social,ivanovic2019trajectron,eiffert2019predicting,saleh2019contextual,choi2019drogon,Rhinehart_2018_ECCV,fernando2019neighbourhood,li2019coordination,jain2019discrete} \vspace{3pt} \\ & Final Displacement Error (FDE) &
\cite{varshneya2017human,alahi2016social,vemula2017modeling,chung2010mobile,vemula2017socialattention,radwan2018multimodal,bisagno2018group,xue2019location,zhang2019sr,shi2019pedestrian,zhao2019multi,xue2017bi,hasan2018mx,xue2018ss,su2017forecast,sadeghian2018sophie,Gupta2018SocialGAN,huynh2019trajectory,nikhil2018convolutional,xu2018encoding,fernando2018soft,luo2019gamma,pei2019human,huang2019stgat,amirian2019social,blaiotta2019learning,kosaraju2019social,ivanovic2019trajectron,eiffert2019predicting,choi2019drogon} \vspace{3pt} \\ & Modified Hausdorff Distance (MHD) & \cite{vasquez2016novel,kitani2012activity,jacobsRAL2017,Rudenko2017workshop,Rudenko2018icra,Rudenko2018iros,yoo2016visual,coscia2018long,shenTransferable2018,habibi2018context,fernando2019neighbourhood,saleh2019contextual} \vspace{3pt} \\ & Prediction Accuracy (PA) & \cite{ferrer2014behavior,ikeda2013modeling,bera2016glmp,best2015bayesian,ding2019predicting,hong2019rules} \\ \hline Probabilistic & Negative Log Likelihood (NLL) & \cite{coscia2018long, Rudenko2017workshop, suraj2018predicting,jain2019discrete,chai2019multipath,pool2019context,makansi2019overcoming,ivanovic2019trajectron,Rhinehart_2019_ICCV} \vspace{3pt} \\ & Negative Log Loss (NLL) & \cite{ma2016forecasting,previtaliICMLA2016, vasquez2016novel,kitani2012activity,tang2019mfp} \vspace{3pt} \\ & Predicted Probability (PP) & \cite{kooij2014eccv,kooij2018ijcv,rehder2015goal,Rudenko2018icra,Rudenko2018iros} \vspace{3pt} \\ & Min. Avg. 
or Final Displacement Error (mADE, mFDE) & \cite{lee2017desire,park2018sequence,Rhinehart_2018_ECCV,Rhinehart_2019_ICCV,ridel2019scene,ivanovic2019trajectron,amirian2019social,chai2019multipath,vanderHeiden2019safecritic,hong2019rules,li2019coordination,tang2019mfp} \vspace{3pt} \\ & Cumulative Probability (CP) & \cite{suraj2018predicting} \\ \hline \end{tabular} \vspace{6pt} \caption{Metrics to evaluate motion prediction} \label{tab:metrics} \end{table*} \subsubsection{Probabilistic accuracy metrics} \label{sec:evaluation:metrics:probabilistic} One of the drawbacks of geometric metrics is their inability to capture the uncertainty and the multimodal nature of predictions, e.g. when the target agent may take different paths to reach the goal, or when an observed partial trajectory matches several previously learned motion patterns. Moreover, due to the stochasticity of human behavior, motion prediction algorithms need to be evaluated on how accurately they match the underlying probability distribution of human movements. Several probabilistic accuracy metrics can be used for this purpose. Many variational inference and machine learning algorithms \citep{mackay2003information,Bishop2006} use the Kullback-Leibler (KL) divergence \citep{kullback1951information} to measure the dissimilarity of two distributions, e.g. the unknown probability distribution of human behavior $p(\mathbf{s}_{1:T})$ and the predicted probability distribution $q(\mathbf{s}_{1:T}|\theta)$, with $\theta$ being a set of parameters of the chosen prediction model. The KL divergence is computed as $d_\mathit{KL}(p||q) \simeq \sum_{\mathbf{s}_{1:T} \in \mathbb{S}} \{ - p(\mathbf{s}_{1:T})\log{q(\mathbf{s}_{1:T}|\theta)} + p(\mathbf{s}_{1:T})\log{p(\mathbf{s}_{1:T})}\}$ with the space of all trajectories $\mathbb{S}$. Minimizing $d_\mathit{KL}(p||q)$ corresponds to maximizing the log-likelihood function for $\theta$ under the predicted distribution $q(\mathbf{s}_{1:T}|\theta)$.
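As a minimal illustration of the likelihood-based evaluation just described, the sketch below (our own; it assumes, purely for the example, a per-step isotropic Gaussian predictive distribution $q(\mathbf{s}_t|\theta)=\mathcal{N}(\boldsymbol{\mu}_t, \sigma_t^2 I)$ in 2D) computes the average negative log likelihood of a ground truth trajectory:

```python
# Illustrative sketch (ours): average Negative Log Likelihood of a ground
# truth trajectory under a per-step isotropic Gaussian predictive
# distribution q(s_t | theta) = N(mu_t, sigma_t^2 I) in 2D. The Gaussian
# parametrization is an assumption made for this example only.
import numpy as np

def avg_nll_gaussian(gt, mu, sigma):
    """gt, mu: arrays of shape (T, 2); sigma: per-step std, shape (T,).
    Returns the mean negative log density of the ground truth points."""
    sq_err = np.sum((gt - mu) ** 2, axis=1)
    # log of the 2D isotropic Gaussian density at each ground truth point
    log_q = -sq_err / (2.0 * sigma ** 2) - np.log(2.0 * np.pi * sigma ** 2)
    return -log_q.mean()
```

For other model classes (e.g. dynamic Bayesian networks or trajectory-level densities), the log likelihood follows the model's own factorization, as noted in the text.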
Different surveyed papers have adopted variants of the KL divergence as an accuracy metric for their stochastic predictions. For example, the {\bf average Negative Log Likelihood} or {\bf average Negative Log Loss} evaluates the negative log likelihood term \big($\simeq -\sum_{\mathbf{s}_{1:T} \in \mathbb{D}} \log{q(\mathbf{s}_{1:T}|\theta)}$\big) of $d_\mathit{KL}$ from a set of ground truth demonstrations $\mathbb{D} = \left\{\mathbf{s}_{1:T}^i\right\}_{i=1}^N$ with the total number of demonstrations $N$. Furthermore, several approaches use the {\bf Predicted Probability} (PP) metric, \big($\simeq \sum_{t=1}^T q(\mathbf{s}_t|\theta)$\big) or its negative logarithm, to calculate the probability of the ground truth path (\emph{i.e.} $\mathbf{s}_{1:T}$) under the predicted state distribution. For the above metrics, the computation of the log likelihood depends on the chosen model, its induced graph and the corresponding factorization. Finally, the {\bf Cumulative Probability} (CP) metric computes the fraction of the predictive distribution that lies within a radius $r$ from the correct position for various values of $r$. Several recently introduced metrics follow a sampling approach to evaluate a probability distribution. The {\bf Minimum Average Displacement Error} (mADE) metric \citep{walker2016uncertain,Rhinehart_2019_ICCV,scholler2019simpler, thiede2019analyzing,tang2019mfp}, also known as \emph{variety loss}, \emph{oracle}, \emph{Minimum over N}, \emph{Best-of-N}, \emph{top n\%}, or \emph{minimum Mean Squared Distance} (minMSD), computes the Euclidean distance between the ground truth position of the agent $\mathbf{s}_t^*$ at time $t$ and the closest (or the n\% closest) of the $K$ samples from the predicted probability distribution: $\min_k ||\mathbf{s}_t^*-\mathbf{s}_t^{k}||$. Similarly, {\bf minimum Final Displacement Error} (mFDE) evaluates only the distribution at the prediction horizon $T$.
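A minimal sketch of the best-of-$K$ sampling metrics (our own illustration; `samples` is assumed to hold $K$ trajectories drawn from the predictive distribution):

```python
# Illustrative sketch (ours): best-of-K sampling metrics. `samples` holds
# K trajectories drawn from the predictive distribution, shape (K, T, 2);
# `gt` is the ground truth trajectory, shape (T, 2).
import numpy as np

def min_ade(samples, gt):
    """mADE: average displacement of the best (closest on average) sample."""
    errs = np.linalg.norm(samples - gt[None], axis=2)  # (K, T) point errors
    return errs.mean(axis=1).min()

def min_fde(samples, gt):
    """mFDE: final displacement of the sample that ends closest to gt."""
    return np.linalg.norm(samples[:, -1] - gt[-1], axis=1).min()
```

Here mADE takes the minimum over samples of the per-trajectory average displacement, which is the most common convention; mFDE considers only the final time step.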
Such metrics encourage the predicted distribution to cover multiple modes of the ground truth distribution, while placing probability mass according to the mode likelihood. An evaluation of the robustness of top 1 vs. top n\% metrics by \cite{bhattacharyya2019conditional} has shown that the \emph{top n\%} metric produces more stable results. \subsubsection{Other performance metrics} \label{sec:evaluation:metrics:other} Prediction accuracy is by far the primary performance indicator in the reviewed literature across approaches and application domains. In particular for long-term prediction methods, authors evaluate accuracy against the prediction horizon \citep{karasev2016intent, Rudenko2018icra, wuIV18, rehder2015goal, Rudenko2018iros, galceran2015multipolicy, bahram2016game, chung2010mobile, pfeiffer2016predicting, lee2016predicting,thompson2009probabilistic,jacobsRAL2017,ikeda2013modeling,vasishta2018building,keller2014tits,quintero2014pedestrian,GoldhammerICPR2014,pfeiffer2018data,sun20173dof,raipuriaIV2017,deo2018multi,radwan2018multimodal,suraj2018predicting,hermesIVS09,xu2018encoding,blaiotta2019learning,choi2019drogon}. Far fewer authors address other aspects of robustness and investigate the range of conditions under which prediction results remain stable and how they are impacted by different types of perturbations.
Experiments to explore robustness evaluate prediction accuracy as a function of various influences: the length or duration of the observed partial trajectory until prediction (addressing the question of how long the target agent needs to be observed for a good prediction) \citep{lee2017desire, kitani2012activity,radwan2018multimodal}, the size of the training dataset \citep{vasquez2009incremental,vasishta2018building,suraj2018predicting,huynh2019trajectory}, the number of agents in the scene \citep{Rhinehart_2019_ICCV}, the input data sampling frequency and the amount of sensor noise \citep{bera2016glmp}, or the amount of anomalies in the training trajectories \citep{han2019pedestrian}. Several authors report a separate accuracy measurement for the more challenging (e.g. non-linear or anomalous) part of the test set \citep{fernando2018soft,huynh2019trajectory,kooij2018ijcv}, or evaluate the model's performance on different classes of behavior, e.g. walking or stopping \citep{saleh2018intent}. The analysis of generalization, overfitting and input utilization by a neural network, presented by \cite{scholler2019simpler}, makes a good case for robustness evaluation. Furthermore, to quantify the efficiency of a prediction method, some authors relate inference time to the number of agents in the scene \citep{Rudenko2018icra, Rudenko2018iros,thompson2009probabilistic}, and only a few papers provide an analysis of their algorithms' complexity \citep{best2015bayesian, Rudenko2018iros,chen2016augmented,keller2014tits,zhao2019multi}. \subsection{Datasets} \label{sec:evaluation:datasets} In order to evaluate the quality of predictions, predicted states or distributions are usually compared to the ground truth states using standard datasets of recorded motion.
The availability of annotated trajectories, represented as sequences of states or bounding boxes in the top-down view, sets prediction benchmarking datasets apart from other popular computer vision datasets, where the ground truth state of the agent is not available and is difficult to estimate. A common recording setup includes a video-camera with a static top-down view of the scene, or ground-based lasers and/or depth sensors mounted on a static or moving platform. Detected agents in each frame are labeled with unique IDs, and their positions with respect to the global world frame are given as $(x,y)$ coordinates together with the frame time-stamp $t$, i.e. (id, $t, x, y$). Often the coordinate vector is augmented with orientation and velocity information. Furthermore, social grouping information, gaze directions, motion mode or maneuver labels and other contextual cues can be provided. Apart from this specific form of labeling, further requirements for prediction benchmarking datasets include interaction between agents, varying density of agents, presence of non-convex obstacles in the environment, availability of a semantic map, and long continuous observations of the agents. In Table~\ref{tab:datasets} we review the most popular datasets used for evaluation in the surveyed literature. Out of the many datasets used for benchmarking by different authors, we picked those used by at least two independent teams, excluding the creators of the dataset. We believe that this is a good indication of a dataset's relevance, which also supports the primary purpose of benchmarking -- comparing the performance of different methods on the same dataset. Additionally, in Table~\ref{tab:additional-datasets} we include four recent datasets which do not meet the selection criterion but cover valuable aspects missing from the earlier datasets.
This includes the first dataset of cyclist trajectories \citep{pool2017iv}, the first large-scale dataset of vehicle trajectories \citep{Krajewski2018HighDdataset}, the first dedicated benchmark for human trajectory prediction \citep{sadeghiankosaraju2018trajnet} and the first dataset of human motion trajectories with accurate motion capture data \citep{rudenko2019thor}. \begin{table*}[tbh] \centering \footnotesize \begin{tabular}{p{2.2cm}p{1.0cm}p{1.0cm}p{1.1cm}p{4.1cm}p{1.7cm}p{3.2cm}} \hline {\bf Dataset} & {\bf Location} & {\bf Agents} & {\bf Sensors} & {\bf Scene description} & {\bf Duration and tracks} & {\bf Annotations and sampling rate} \\ \hline \textbf{ETH} \citep{pellegrini2009you} & Outdoor & People & Camera & 2 pedestrian scenes, top-down view, moderately crowded & 25 min, 650 tracks & Positions, velocities, groups, maps \linebreak @2.5 Hz \\ \multicolumn{7}{p{16.9cm}}{Used by: \cite{varshneya2017human,bera2016glmp,alahi2016social,vemula2017modeling,trautman2010unfreezing,kim2015brvo,yamaguchiCVPR2011,chung2010mobile,vemula2017socialattention,radwan2018multimodal,pfeiffer2018data,bisagno2018group,zhang2019sr,zhao2019multi,xue2018ss,sadeghian2018sophie,Gupta2018SocialGAN,huynh2019trajectory,nikhil2018convolutional,xu2018encoding, luo2019gamma,pei2019human,huang2019stgat,amirian2019social,blaiotta2019learning,kosaraju2019social,ivanovic2019trajectron}} \\ \hline \textbf{UCY} \citep{lerner2007crowds} & Outdoor & People & Camera & 2 pedestrian scenes (sparsely populated Zara and crowded Students), top-down view & 16.5 min, over 700 tracks & Positions, gaze directions \linebreak -- \\ \multicolumn{7}{p{16.9cm}}{Used by:
\cite{ma2016forecasting,varshneya2017human,alahi2016social,bartoli2017context,best2015bayesian,yamaguchiCVPR2011,pellegrini2010improving,vemula2017socialattention,radwan2018multimodal,bisagno2018group,zhang2019sr,zhao2019multi,hasan2018mx,xue2018ss,sadeghian2018sophie,Gupta2018SocialGAN,huynh2019trajectory,nikhil2018convolutional,xu2018encoding,vanderHeiden2019safecritic,pei2019human,huang2019stgat,luo2019gamma,amirian2019social,blaiotta2019learning,kosaraju2019social,ivanovic2019trajectron}} \\ \hline \textbf{Stanford Drone Dataset} \citep{robicquet2016learning} & Outdoor & People, cyclists, vehicles & Camera & 8 urban scenes, $\sim$\SI{900}{\metre\squared} each, top-down view, moderately crowded & 5 hours, 20k tracks & Bounding boxes \linebreak @30 Hz \\ \multicolumn{7}{p{16.9cm}}{Used by: \cite{varshneya2017human,jacobsRAL2017,coscia2018long,zhao2019multi,sadeghian2018sophie,vanderHeiden2019safecritic,chai2019multipath,fernando2019neighbourhood,makansi2019overcoming,eiffert2019predicting,ridel2019scene,saleh2019contextual}} \\ \hline \textbf{NGSIM} \citep{colyar2006us,colyar2007us} & Outdoor & Vehicles & Camera network & Recording of the US Highway 101 and Interstate 80, road segment length 640 and 500 \SI{}{\metre} & 90 min & Local and global positions, velocities, lanes, vehicle type and parameters, \linebreak @10 Hz \\ \multicolumn{7}{p{16.9cm}}{Used by: \cite{kuefler2017imitating,deo2018multi,zhao2019multi,altche2017lstm,li2019coordination,kalayeh2015understanding,dai2019modeling,ding2019predicting,tang2019mfp}} \\ \hline \textbf{Edinburgh} \citep{majecka2009statistical} & Outdoor & People & Camera & 1 pedestrian scene, top-down view, 12~x~16 \SI{}{\metre\squared}, varying density of people & Several months, 92k tracks & Positions \linebreak @9 Hz \\ \multicolumn{7}{p{16.9cm}}{Used by: \cite{previtaliICMLA2016,elfring2014learning,Rudenko2017workshop,xue2017bi,fernando2018soft,carvalho2019long}} \\ \hline \textbf{Grand Central Station Dataset} 
\citep{zhou2012understanding} & Indoor & People & Camera & Recording in the crowded New York Grand Central train station & 33 minutes & Tracklets \linebreak @25 Hz \\ \multicolumn{7}{p{16.9cm}}{Used by: \cite{su2017forecast,xue2017bi,xue2019location,yiTRIP2016,xu2018encoding,fernando2018soft}} \\ \hline \textbf{VIRAT} \citep{oh2011large} & Outdoor & People, cars, other vehicles & Camera & 16 urban scenes, 20--50$^\circ$ camera view angle towards the ground plane, homographies included & 25 hours & Bounding boxes, events (e.g. entering a vehicle or using a facility) \linebreak @10,~5 and~2~Hz \\ \multicolumn{7}{p{16.9cm}}{Used by: \cite{previtaliICMLA2016,vasquez2016novel,kitani2012activity,walker2014patch,xieICCV2013}} \\ \hline \textbf{KITTI} \citep{Geiger2012CVPR} & Outdoor & People, cyclists, vehicles & Velodyne, 4 cameras & Recorded around the mid-size city of Karlsruhe (Germany), in rural areas and on highways & 21 training sequences and 29 test sequences & 3D Positions \linebreak @10 Hz \\ \multicolumn{7}{p{16.9cm}}{Used by: \cite{karasev2016intent, wuIV18, Rhinehart_2018_ECCV, lee2017desire,srikanth2019infer}} \\ \hline \textbf{Town Center Dataset} \citep{benfold2011stable} & Outdoor & People & Camera & Pedestrians moving along a moderately crowded street & 5 minutes, 230 hand labelled tracks & Bounding boxes \linebreak @15 Hz \\ \multicolumn{7}{p{16.9cm}}{Used by: \cite{ma2016forecasting,xue2018ss,xue2019location,hasan2018mx}} \\ \hline \textbf{ATC} \citep{brscic2013person} & Indoor & People & 3D range sensors & Recording in a shopping center, 900 \SI{}{\metre\squared} coverage, varying density of people & 92 days, long tracks & Positions, orientations, velocities, gaze directions, \linebreak @10-30 Hz \\ \multicolumn{7}{p{16.9cm}}{Used by: \cite{Rudenko2018icra,Rudenko2018iros,molina2018modelling}} \\ \hline \textbf{Daimler Pedestrian Path Prediction Dataset} \citep{schneider2013gcpr} & Outdoor & People & Stereo camera & Recording from a moving or standing 
vehicle, pedestrians are crossing the street, stopping at the curb, starting to move or bending in & 68 tracks of pedestrians, 4 sec each & Positions, bounding boxes, stereo images, calibration data \linebreak @17 Hz \\ \multicolumn{7}{p{16.9cm}}{Used by: \cite{schulz2015controlled,saleh2018intent,saleh2019contextual}} \\ \hline \textbf{L-CAS} \citep{yan2017online} & Indoor & People & Velodyne & Recording in a university building from a moving or stationary robot & 49 minutes & Positions, groups, Velodyne scans \linebreak @10 Hz \\ \multicolumn{7}{p{16.9cm}}{Used by: \cite{sun20173dof,radwan2018multimodal}} \\ \hline \end{tabular} \vspace{6pt} \caption{Overview of the motion trajectory datasets} \label{tab:datasets} \end{table*} \begin{table*}[ptbh] \centering \footnotesize \begin{tabular}{p{2.2cm}p{1.0cm}p{1.0cm}p{1.1cm}p{4.1cm}p{1.7cm}p{3.2cm}} \hline {\bf Dataset} & {\bf Location} & {\bf Agents} & {\bf Sensors} & {\bf Scene description} & {\bf Duration and tracks} & {\bf Annotations and sampling rate} \\ \hline \textbf{Tsinghua-Daimler Cyclist} \citep{pool2017iv} & Outdoor & Cyclists & Stereo camera & Recording from a moving vehicle & 134 tracks & Positions, road topology \linebreak @5 Hz \\ \multicolumn{7}{p{16.9cm}}{Used by: \cite{saleh2018cyclist}} \\ \hline \textbf{TrajNet} \citep{sadeghiankosaraju2018trajnet} & Outdoor & People & Cameras & Superset of datasets, collecting also relevant metrics and visualization tools & Superset of image-plane and world-plane datasets & Bounding boxes and tracklets, datasets recording at different frequencies \\%\linebreak @... Hz \\
\multicolumn{7}{p{16.9cm}}{Used by: \cite{xue2019location}} \\ \hline \textbf{highD Dataset} \citep{Krajewski2018HighDdataset} & Outdoor & Vehicles & Camera & 6 different highway locations near Cologne, top-down view, varying densities with light and heavy traffic & Over 110k vehicles, 447 driven hours & Positions and additional features, e.g.
THW, TTC \linebreak @25 Hz \\ \hline \textbf{TH\"OR} \citep{rudenko2019thor} & Indoor & People & Motion capture & Human-robot navigation study in a university lab & Over 600 person and group trajectories in 60 minutes & Positions, head orientations, gaze directions, groups, map, Velodyne scans \linebreak @100 Hz \\ \hline \end{tabular} \vspace{6pt} \caption{Additional motion trajectory datasets} \label{tab:additional-datasets} \end{table*} \section{Introduction} \label{sec:introduction} Understanding human motion is a key skill for intelligent systems to coexist and interact with humans. It involves aspects of representation, perception and motion analysis. Prediction plays an important part in human motion analysis: foreseeing how a scene involving multiple agents will unfold over time makes it possible to use this knowledge pro-actively, i.e. for enhanced active perception, predictive planning, model predictive control, or human-robot interaction. As such, human motion prediction has received increased attention in recent years across several communities. Many important application domains exist, such as self-driving vehicles, service robots, and advanced surveillance systems, see Fig.~\ref{introduction:fig:application-domains}. The challenge of making accurate predictions of human motion arises from the complexity of human behavior and the variety of its internal and external stimuli. Motion behavior may be driven by the agent's own goal intent, the presence and actions of surrounding agents, social relations between agents, social rules and norms, or the environment with its topology, geometry, affordances and semantics. Most factors are not directly observable and need to be inferred from noisy perceptual cues or modeled from context information. Furthermore, to be effective in practice, motion prediction should be robust and operate in real-time.
\begin{figure}[t] \includegraphics[height=0.334\linewidth,keepaspectratio]{introduction_image_fussgaenger.jpg} \hfill \includegraphics[height=0.334\linewidth,keepaspectratio]{introduction_image_freiburg.jpg} \vspace{2mm} \includegraphics[height=0.299\linewidth,keepaspectratio]{introduction_image_zhou_crowd.png} \hfill \includegraphics[height=0.299\linewidth,keepaspectratio]{introduction_image_SPENCER.jpg} \caption{Application domains of human motion prediction. {\bf Top left:} Will the pedestrian cross? Self-driving vehicles have to quickly reason about intentions and future locations of other traffic participants, such as pedestrians (Illustration from \citep{kooij2018ijcv}). {\bf Top right:} Advanced traffic surveillance systems can provide real-time alerts of pending collisions using communication technology. {\bf Bottom left:} Advanced surveillance systems analyze human motion in public spaces for suspicious activity detection or crowd control (Illustration from \citep{zhou2015learning}). {\bf Bottom right:} Robot navigation in densely populated spaces requires accurate motion prediction of surrounding people to safely and efficiently move through crowds.} \label{introduction:fig:application-domains} \end{figure} Human motion comes in many forms: articulated full body motion, gestures and facial expressions, or movement through space by walking, using a mobility device or driving a vehicle. The scope of this survey is human motion trajectory prediction. Specifically, we focus on ground-level 2D trajectory prediction for pedestrians and also consider the literature on cyclists and vehicles. Prediction of video frames, articulated motion, or human actions or activities is out of scope although many of those tasks rely on the same motion modeling principles and trajectory prediction methods considered here. 
Within this scope, we survey a large selection of works from different communities and propose a novel taxonomy based on the motion modeling approaches and the contextual cues. We categorize the state of the art and discuss typical properties, advantages and drawbacks of the categories, as well as outline open challenges for future research. Finally, we raise three questions: \emph{Q1}: are the evaluation techniques used to measure prediction performance good enough, and do they follow best practices? \emph{Q2}: have all prediction methods arrived at the same performance level, such that the choice of the modeling approach no longer matters? \emph{Q3}: is motion prediction solved? The paper is structured as follows: we present the taxonomy in Sec.~\ref{sec:taxonomy}, review and analyze the literature on human motion prediction first by modeling approach in Sec.~\ref{sec:posterior_distribution:physics_based} -- Sec.~\ref{sec:posterior_distribution:planning_based}, and then by contextual cues in Sec.~\ref{sec:classification_contextualcues}. In Sec.~\ref{sec:evaluation} we review the benchmarking of motion prediction techniques in terms of commonly used performance metrics and datasets. In Sec.~\ref{sec:discussion} we discuss the state of the art with respect to the above three questions and outline open research challenges. Finally, Sec.~\ref{sec:conclusions} concludes the paper. We recommend Sec.~\ref{sec:introduction} and~\ref{sec:taxonomy}, Fig.~\ref{fig:pictograms-physics-based}--\ref{fig:pictograms-planning-based} and Sec.~\ref{sec:discussion} as a coarse overview of the motion prediction methodology for a general reader. A practitioner may find value in the review of the datasets and metrics in Sec.~\ref{sec:evaluation}. Finally, the thorough analysis of the literature in Sec.~\ref{sec:posterior_distribution:physics_based}--\ref{sec:classification_contextualcues} is recommended for expert readers.
\subsection{Overview and Terminology} On the highest level of abstraction, the motion prediction problem contains the following three elements (Fig. \ref{introduction:fig:overview}): \begin{itemize} \item \emph{Stimuli:} Internal and external stimuli that determine motion behavior include the agents' motion intent and other directly or indirectly observable influences. Most prediction methods rely on observed partial trajectories or, more generally, sequences of agent state observations such as positions, velocities, body joint angles or attributes. Often, these are provided by a target tracking system, and it is common to assume correct track identity over the observation period. Other forms of input include contextual cues from the environment, such as scene geometry and semantics, or cues that relate to other moving entities in the surroundings. End-to-end approaches rely on sequences of raw sensor data. \item \emph{Modeling approach:} Approaches to human motion prediction differ in the way they represent, parametrize, learn and solve the task. This paper focuses on finding and analyzing useful categories, hidden similarities, common assumptions and best evaluation practices in the growing body of literature. \item \emph{Prediction:} Different methods produce different parametric, non-parametric or structured forms of predictions, such as Gaussians over agent states, probability distributions over grids, single or multiple trajectory samples, or motion patterns using graphical models.
\end{itemize} \begin{figure}[t] \centering \includegraphics[width=0.99\columnwidth]{introduction_image_overview.png} \caption{Typical elements of a motion prediction system: internal and external stimuli that influence motion behavior, the method itself and the different parametric, non-parametric or structured forms of predictions.} \label{introduction:fig:overview} \end{figure} We use the term {\em agent} to denote dynamic objects of interest such as robots, pedestrians, cyclists, cars or other human-driven vehicles. The {\em target agent} is the dynamic object for which we make the actual motion prediction. We assume the agent behavior to be non-erratic and goal-directed with regard to an optimal or near-optimal expected outcome. This assumption is typical, as the motion prediction problem would otherwise be much harder or even ill-posed. We define a \emph{path} to be a sequence of $(x,y)$-positions and a \emph{trajectory} to be a path combined with a timing law or a velocity profile. We refer to {\em short-term} and {\em long-term} prediction to characterize prediction horizons of 1--2~s and up to 20~s ahead, respectively. Formally, we denote $\mathbf{s}_t$ as the state of an agent at time $t$, $\mathbf{u}_t$ as the action that the agent takes at time $t$, $\mathbf{{o}}_t \in \mathcal{O}$ as the observations of the agent's state at time $t$, and use $\zeta$ to denote trajectories. We refer to a history of several states, actions or observations from time $t$ to time $T$ using subscripts ${t:T}$. \subsection{Application Domains} Motion prediction is a key task for service robots, self-driving vehicles, and advanced surveillance systems (Fig.~\ref{introduction:fig:application-domains}). \subsubsection{Service robots} \label{sec:introduction:mobile-robotics} Mobile service robots increasingly operate in open-ended domestic, industrial and urban environments shared with humans.
Anticipating the motion of surrounding agents is an important prerequisite for safe and efficient motion planning and human-robot interaction. Limited on-board resources for computation and first-person sensing make this a challenging task. \subsubsection{Self-driving vehicles} \label{sec:introduction:self-driving-vehicles} The ability to anticipate the motion of other road users is essential for automated driving. Similar challenges apply as in the service robot domain, although they are more pronounced given the higher masses and velocities of vehicles and the resulting larger harm that can potentially be inflicted, especially on vulnerable road users (i.e. pedestrians and cyclists). Furthermore, vehicles need to operate in rapidly changing, semantically rich outdoor traffic settings and must meet hard real-time operating constraints. Knowledge of the traffic infrastructure (location of lanes, curbside, traffic signs, traffic lights, other road markings such as zebra crossings) and of the traffic rules can help in motion prediction. \subsubsection{Surveillance} \label{sec:discussion:computer-vision} Visual surveillance of vehicular traffic or human crowds relies on the ability to accurately track a large number of targets across distributed networks of stationary cameras. Long-term motion prediction can support a variety of surveillance tasks such as person retrieval, perimeter protection, traffic monitoring, crowd management or retail analytics by further reducing the number of false positive tracks and track identifier switches, particularly in dense crowds or across non-overlapping fields of view.
\begin{figure*}[t] \centering \includegraphics[width=0.99\linewidth]{taxonomy-image-overview.pdf} \caption{Overview of the categories in our taxonomy.} \label{fig:taxonomy-overview} \end{figure*} \begin{figure}[t] \centering \includegraphics[width=0.999\linewidth,keepaspectratio]{paper_statistics.png} \caption{Publication trends in the literature reviewed for this survey, color-coded by modeling approach.} \label{fig:paperstatistics} \end{figure} \subsection{Related Surveys} In this section, we detail related surveys from different scientific communities, i.e. robotics \citep{kruse2013human,chik2016review,lasota2017survey}, intelligent vehicles \citep{lefevre2014survey,brouwer2016comparison,ridel2018literature}, and computer vision \citep{morris2008survey,murino2017crowdBehavior,hirakawaDAPI18}. \cite{kruse2013human} provide a survey of approaches for wheeled mobile robots and categorize human-aware motion based on comfort, naturalness and sociability features. Motion prediction is seen as part of a human-aware navigation framework and categorized into {\em reasoning-based} and {\em learning-based} approaches. In reasoning-based methods, predictions are based on simple geometric reasoning or dynamic models of the target agent. Learning-based approaches make predictions via motion patterns that are learned from observed agent trajectories. A short survey on frameworks for socially-aware robot navigation is provided by \cite{chik2016review}. The authors discuss key components of such frameworks, including several planners and human motion prediction techniques. \cite{lasota2017survey} survey the literature on safe human-robot interaction along the four themes of safety through control, motion planning, prediction and psychological factors. In addition to wheeled robots, they also include related works on manipulator arms, drones and self-driving vehicles.
The literature on human motion prediction is divided into methods based on {\em goal intent} or {\em motion characteristics}. Goal intent techniques infer an agent's goal and predict a trajectory that the agent is likely to take to reach that goal. The latter group of approaches does not rely explicitly on goals and makes use of observations about how humans move and plan natural paths. \cite{lefevre2014survey} survey vehicular motion prediction and risk assessment in an automated driving context. The authors discuss the literature based on the semantics used to define motion and risk and distinguish {\em physics-based}, {\em maneuver-based} and {\em interaction-aware} models for prediction. Physics-based methods predict future trajectories via forward simulation of a vehicle model, typically under kinodynamic constraints and uncertainties in initial states and controls. Maneuver-based methods assume that vehicle motion is a series of typical motion patterns (maneuvers) that have been acquired a priori and can be recognized from observed partial agent trajectories. Interaction-aware methods make joint predictions that account for inter-vehicle interactions, also considering that such interactions are regulated by traffic rules. \cite{brouwer2016comparison} review and compare pedestrian motion models for vehicle safety systems. According to the cues from the environment used as input for motion prediction, the authors distinguish four classes of methods: \emph{dynamics-based models}, which only use the target agent's motion state; methods which use \emph{psychological knowledge of human behavior} in urban environments (e.g. probabilities of acceleration, deceleration or a switch of the dynamical model); methods which use \emph{head orientation}; and methods which use a \emph{semantic map} of the environment. This categorization is extended by \cite{ridel2018literature} to review pedestrian crossing intention inference techniques.
\cite{morris2008survey} survey methods for trajectory learning and analysis for visual surveillance. They discuss similarity metrics, techniques and models for learning prototypical motion patterns (called activity paths) and briefly consider trajectory prediction as a case of online activity analysis. \cite{murino2017crowdBehavior} discuss group and crowd motion analysis as a multidisciplinary problem that combines insights from the social sciences with concepts from computer vision and pattern recognition. The authors review several recent methods for tracking and prediction of human motion in crowds. \cite{hirakawaDAPI18} survey video-based methods for semantic feature extraction and human trajectory prediction. The literature is divided based on the motion modeling approach into \emph{Bayesian models}, \emph{energy minimization methods}, \emph{deep learning methods}, \emph{inverse reinforcement learning methods} and \emph{other} approaches. {Related to our discussion of the benchmarking practices, several works survey the datasets of motion trajectories \citep{poiesi2015predicting,hirakawaDAPI18,ridel2018literature} and metrics for prediction evaluation \citep{quehl2017howgood}. \cite{poiesi2015predicting} and \cite{hirakawaDAPI18} describe several datasets of human trajectories in crowded scenarios, used to study social interactions and evaluate path prediction algorithms. \cite{ridel2018literature} discuss available datasets of pedestrian motion in urban settings. \cite{quehl2017howgood} review several trajectory similarity metrics, applicable in the motion prediction context. } Unlike these surveys, we review and analyze the literature across multiple application domains and agent types. Our taxonomy offers a novel way to structure the growing body of literature, containing the categories proposed by \cite{kruse2013human}, \cite{lasota2017survey} and \cite{lefevre2014survey} and extending them with a systematic categorization of contextual cues. 
In particular, we argue that the modeling approach and the contextual cues are two fundamentally different aspects underlying the motion prediction problem and should be considered separate dimensions for the categorization of methods. This allows, for example, the distinction of physics-based methods that are unaware of any external stimuli from methods in the same category that are highly situation-aware, accounting for road geometry, semantics and the presence of other agents. This contrasts with previous surveys, whose categorizations follow a single dimension that mixes modeling approaches with increasing levels of contextual awareness. We {extend existing reviews of the} benchmarking and evaluation efforts for motion prediction {\citep{poiesi2015predicting,hirakawaDAPI18,ridel2018literature,quehl2017howgood} with additional datasets, probabilistic and robustness metrics, and a principled analysis of existing benchmarking practices. Furthermore, we} give an up-to-date discussion of the current state of the art and conclude with recommendations for promising directions of future research. \section{Pattern-based Approaches} \label{sec:posterior_distribution:motion_patterns} \begin{figure*}[t!] \centering \includegraphics[width=0.99\linewidth,keepaspectratio]{pattern-based-image-pictograms.pdf} \caption{Examples of the pattern-based approaches: {\bf (a)} grid-based local transitions learning method, {\bf (b)} sequential location-independent transition model, which accounts for cues from the dynamic environment, {\bf (c)} higher-order sequential Markov model, {\bf (d)} clustering of full trajectories, {\bf (e)} location-independent method which learns long-term transition sequences, i.e.
maneuvers.} \label{fig:pictograms-pattern-recognition-based} \end{figure*} In contrast to the physics-based approaches, which use explicitly defined, parametrized functions of motion dynamics, pattern-based approaches learn the latter from data, following the \emph{Sense - Learn - Predict} paradigm. These methods learn human motion behaviors by fitting different function approximators (e.g. neural networks, hidden Markov models, Gaussian processes) to data. Many of those methods were introduced by the machine learning and computer vision communities (e.g. for behavior cloning and video surveillance applications), and later applied in robotics and autonomous navigation settings. In our taxonomy we classify pattern-based approaches into two categories, based on the type of function approximator used: \noindent \emph{(1) Sequential methods} typically learn conditional models, where it is assumed that the state (e.g. position, velocity) at one time instance is conditionally dependent on some sufficient statistic of the full history of past states. Many of the proposed methods are Markov models, where an $N$-th order Markov model assumes that a limited state history of $N$ time steps is a sufficient representation of the entire state history. Similarly to many physics-based approaches, sequential methods aim to learn a one-step predictor $\mathbf{s}_{t+1} = f(\mathbf{s}_{t-n:t})$, where the state $\mathbf{s}_{t+1}$ is the one-step prediction and the sequence of states $\mathbf{s}_{t-n:t}$ is the sufficient statistic of the history. In order to predict a sequence of state transitions (i.e. a trajectory), consecutive one-step predictions are composed into a single long-term trajectory. \noindent\emph{(2) Non-sequential methods} directly model the distribution over full trajectories without imposing a factorization of the dynamics (i.e. the Markov assumption) as sequential models do.
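As a concrete illustration of how sequential methods compose one-step predictions $\mathbf{s}_{t+1} = f(\mathbf{s}_{t-n:t})$ into a long-term trajectory, the following minimal sketch rolls out a generic one-step predictor. The constant-velocity predictor is only a stand-in for a learned model $f$; all names and values are illustrative.

```python
import numpy as np

def rollout(one_step_predict, history, horizon):
    """Compose one-step predictions s_{t+1} = f(s_{t-n:t}) into a trajectory.

    history: (n, d) array of past states; the predictor always sees a sliding
    window of n states. Returns a (horizon, d) array of predicted states.
    """
    window = [s for s in history]
    future = []
    for _ in range(horizon):
        s_next = one_step_predict(np.asarray(window))
        future.append(s_next)
        window = window[1:] + [s_next]   # slide the sufficient statistic
    return np.asarray(future)

# Stand-in for a learned model: constant-velocity extrapolation from the
# last two states (a real method would fit f from observed trajectories).
def cv_predictor(window):
    return 2 * window[-1] - window[-2]

hist = np.array([[0.0, 0.0], [1.0, 0.5], [2.0, 1.0]])
pred = rollout(cv_predictor, hist, horizon=3)
# pred → [[3. 1.5], [4. 2.], [5. 2.5]]
```

Note how prediction errors of $f$ compound over the rollout, which is why long-term prediction with one-step models is substantially harder than short-term prediction.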
\subsection{Sequential Models} \label{sec:motion_patterns:sequential} Sequential models are built on the assumption that the motion of intelligent agents can be described with causally conditional models over time. Similarly to the physics-based methods, the transition function of sequential models has the Markov property, i.e. the information about future motion is confined to the current state of the agent. In contrast, the function, often non-parametric (e.g. Gaussian processes, vector fields), is learned from observed data, and its parameters cannot be directly interpreted as in many of the physics-based methods. \subsubsection{Local transition patterns} Learning local motion patterns, such as probabilities of transitions between cells on a grid-map (Fig.~\ref{fig:pictograms-pattern-recognition-based} (a)), is a simple, commonly used technique for making sequential predictions \citep{kruse1998camera,tadokoro1993stochastic,thompson2009probabilistic,kucner2013conditional,wang2015modeling,wang2016building,ballan2016knowledge,molina2018modelling}. Early examples of local motion patterns include the works of \cite{tadokoro1993stochastic} and \cite{kruse1998camera}. \cite{kruse1998camera} build two transition models: a stochastic grid where usual motion patterns of dynamic obstacles are stored, and a stochastic trajectory prediction modeled with Poisson processes. \cite{tadokoro1993stochastic} include empirical biases to account for context features of the cells in regions where the observations are sparse, e.g. increasing the probability to move away from a wall, stop near a bookshelf or decrease walking speed at a crossing. More recently, \cite{thompson2009probabilistic} expand the local motion patterns model by accounting for further transitions several steps into the future. Their method maps the motion state of the person to a series of local patches, describing where the person might be in the future.
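The core of the grid-based local transition models above is estimating $P(\text{next cell} \mid \text{current cell})$ by counting observed cell-to-cell transitions and normalizing. A minimal sketch (the cell size and all names are assumptions for illustration, not any particular cited method):

```python
import numpy as np
from collections import Counter, defaultdict

def learn_transitions(trajectories, cell=1.0):
    """Estimate P(next cell | current cell) from observed tracks by counting
    cell-to-cell transitions on a regular grid and normalizing."""
    counts = defaultdict(Counter)
    for traj in trajectories:
        cells = [tuple(np.floor(np.asarray(p) / cell).astype(int)) for p in traj]
        for a, b in zip(cells, cells[1:]):
            counts[a][b] += 1
    return {c: {n: k / sum(nb.values()) for n, k in nb.items()}
            for c, nb in counts.items()}

# Two example tracks entering cell (1, 0) and leaving it differently
trajs = [[(0.2, 0.5), (1.3, 0.5), (2.4, 0.6)],
         [(0.7, 0.4), (1.6, 0.5), (1.8, 1.5)]]
P = learn_transitions(trajs)
# P[(1, 0)] → {(2, 0): 0.5, (1, 1): 0.5}
```

Predictions are then obtained by repeatedly sampling (or maximizing over) these learned transition distributions, which ties such models to a particular, previously observed environment.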
Besides the current motion state, the learned patterns are also conditioned on the final goal or the topological sub-goal in the environment. \cite{wang2015modeling} model local transition probabilities with an Input-Output HMM. The transition in each cell is conditioned both on the direction of cell entrance and on the global starting point of the person's movement. \cite{jacobsRAL2017} use nonlinear estimation of pedestrian dynamics with learned vector fields to improve the linear velocity projection model. \cite{ballan2016knowledge} propose a Dynamic Bayesian Network method to predict non-interacting human motion based on statistical properties of human behavior. To this end, a transferable navigation grid-map is learned. It encodes {functional properties} of the environment (i.e. direction and speed of the targets, crossing frequency for each patch, identification of routing points). \cite{molina2018modelling} address periodic temporal variations in the learned transition patterns, e.g. based on the time of the day. In contrast to the discrete transition patterns discussed so far, several authors model the transition dynamics as a continuous function of the agent's motion state, using Gaussian Processes and their mixtures \citep{ellis2009modelling,joseph2011bayesian,ferguson2015real,kucner2017enabling}. \cite{ellis2009modelling} model trajectory data in the observed environment by regressing relative motion against current position. Predictions are generated using a sequential Monte-Carlo sampling method. \cite{joseph2011bayesian} model the multi-modal mobility patterns as a mixture of Gaussian processes with a Dirichlet process prior over mixture weights. \cite{ferguson2015real} further extend the work of \cite{joseph2011bayesian} by including a change-point detection and clustering algorithm which enables quick detection of changes in intent and on-line learning of motion patterns not seen in prior training data.
\cite{kucner2017enabling} model multimodal distributions with a Gaussian Mixture Model (GMM) in the joint velocity-orientation space. Apart from the commonly used grid-cells, local transition patterns can be learned using a higher-level abstraction of the workspace, such as a graph of sub-goals or transition points \citep{ikeda2013modeling,han2019pedestrian}, a map of connected position-velocity points \citep{kalayeh2015understanding}, a Voronoi diagram \citep{liao2003voronoi}, an Instantaneous Topological Map (ITM) \citep{vasquez2009incremental} or a semantic-aware ITM \citep{vasishta2018building}. This achieves a more flexible representation of the workspace topology. Combining the merits of local and global motion patterns (i.e. sequential and non-sequential models), \cite{chen2016augmented} model trajectories in the environment with a set of overcomplete basis vectors. The method breaks down trajectories into a small number of representative partial motion patterns, where each partial pattern consists of a series of local transitions. A follow-up work by \cite{habibi2018context} incorporates semantic features from the environment (relative distance to the curbside and traffic light signals) in the learning process, improving prediction accuracy and generalization to similar environments. \cite{han2019pedestrian} propose a method to explicitly learn transition points between the local patterns. \subsubsection{Location-independent behavioral patterns} Unlike the local transition patterns, which are learned and applied for prediction only in a particular environment, \emph{location-independent} patterns are used for predicting transitions of an agent in general free space \citep{aoude2011mobile,tran2014online,foka2002predictive,shalev2016long,quintero2014pedestrian} (see Fig.~\ref{fig:pictograms-pattern-recognition-based} (b)). Several authors, e.g.
\cite{foka2002predictive,shalev2016long}, use location-invariant one-step prediction with neural networks as part of a collision avoidance framework. \cite{aoude2011mobile} extend their physics-based approach \citep{aoude2010threat} by introducing location-independent GP-based motion patterns that guide the RRT-Reach to grow probabilistically weighted feasible paths of the surrounding vehicles. \cite{tran2014online} model location-independent motion patterns of vehicles by applying spatial normalization to the trajectories in the learning set. Cartesian coordinates are transformed into the relative coordinate system of the road intersection, based on the topology of the lanes. \cite{keller2014tits} use optical flow features derived from a detected pedestrian bounding box to predict future motion. \cite{quintero2014pedestrian} instead extract the full-body articulated pose. In both works, body motion dynamics {for walking and stopping} are learned using Gaussian Processes with Dynamic Model (GPDM) in a compact low-dimensional latent space. \cite{minguez2018pedestrian} extend {\citep{quintero2014pedestrian} by considering standing and starting activities as well. A first-order} HMM is used to model the transition between the activities. Several location-independent methods learn socially-aware models of local interactions \citep{antonini2006behavioral,vemula2017modeling}. \cite{antonini2006behavioral} adapt the Discrete Choice Model from econometrics studies to predict local transitions of individuals, given the intended direction, current velocity, locations of obstacles and other people nearby. \cite{vemula2017modeling} reformulate the non-sequential joint human motion prediction approach by \cite{trautman2010unfreezing}, discussed in Sec.~\ref{sec:motion_patterns:nonsequential}, as sequential inference with Gaussian Processes.
They model the local motion of each agent conditioned on the relative positions of other people in the surroundings and the person's goal. \subsubsection{Complex long-term dependencies} Several recent sequential methods use neural networks for time series prediction, i.e. assuming a higher-order Markov property \citep{sumpter2000learning,alahi2016social,bartoli2017context,varshneya2017human,sun20173dof,jain2016structural,vemula2017socialattention,GoldhammerICPR2014,schmerling2017multimodal,zheng2016generating}, see Fig.~\ref{fig:pictograms-pattern-recognition-based} (c). Such time series-based models form a natural bridge between the first-order Markovian methods (e.g. local transition patterns) and non-sequential techniques (e.g. clustering-based). An early method, presented by \cite{sumpter2000learning}, learns long-term spatio-temporal motion patterns from visual input in a known environment. The simple neural network architecture, based on natural language processing networks, quantizes partial trajectories in location/shape-space: the symbol network categorizes the object shape and locations at any time, and the context network categorizes the order in which they appear. \cite{GoldhammerICPR2014} learn usual human motion patterns using an ANN with the multilayer perceptron architecture. This method was adapted to predict the motion of cyclists by \cite{zernetsch2016trajectory}. Recurrent Neural Networks (RNN) for sequence learning, and Long Short-term Memory (LSTM) networks in particular, have recently become a widely popular modeling approach for predicting human \citep{alahi2016social,bartoli2017context,varshneya2017human,sun20173dof,vemula2017socialattention,saleh2018intent,sadeghian2018sophie}, vehicle \citep{kim2017probabilistic,altche2017lstm,park2018sequence,ding2019predicting} and cyclist \citep{pool2019context} motion. \cite{alahi2016social} were the first to propose the Social-LSTM model for predicting joint trajectories in continuous spaces.
Each person is modeled by an individual LSTM. Since humans are influenced by nearby people, the LSTMs are connected through a social pooling layer, which shares information from the hidden states of the LSTMs of neighbouring pedestrians. The work of \cite{bartoli2017context} extends the Social-LSTM, explicitly modeling human-space interactions by defining a ``context-aware'' pooling layer, which considers the static objects in the neighborhood of a person. \cite{varshneya2017human} use a Spatial Matching Network, first introduced by \cite{huang2016deep} (discussed in Sec.~\ref{subsubsection:learningplanning}), that models the spatial context of the surrounding environment, predicting the probability of the subject stepping on a particular patch. \cite{sun20173dof} use an LSTM to learn environment- and time-specific human activity patterns in the target environment from long-term observations, i.e. covering several weeks. The state of the person is extended to include contextual information, i.e. the time of the day when the person is observed. \cite{pfeiffer2018data} couple obstacle-awareness with an efficient representation of the surrounding dynamic agents using a 1D vector in polar angle space. \cite{bisagno2018group} add group coherence information in the social pooling layer. Saleh et al. predict trajectories of pedestrians \citep{saleh2018intent} and cyclists \citep{saleh2018cyclist}, adapting the LSTM architecture to the perspective of a moving vehicle.
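The social pooling idea can be sketched as follows: neighbours' hidden states are accumulated into a spatial grid centred on each agent, and the resulting tensor would be fed to that agent's LSTM. This is a minimal numpy illustration, not the authors' implementation; the grid size, radius and summation (rather than learned embedding) are simplifying assumptions.

```python
import numpy as np

def social_pool(positions, hidden, grid=4, radius=2.0):
    """Sum neighbours' hidden states into a grid centred on each agent.

    positions: (N, 2), hidden: (N, H). Returns an (N, grid, grid, H)
    pooled tensor, the social-pooling-style input for each agent's LSTM.
    """
    N, H = hidden.shape
    pooled = np.zeros((N, grid, grid, H))
    cell = 2 * radius / grid
    for i in range(N):
        for j in range(N):
            if i == j:
                continue
            dx, dy = positions[j] - positions[i]
            gx, gy = int((dx + radius) // cell), int((dy + radius) // cell)
            if 0 <= gx < grid and 0 <= gy < grid:  # neighbour inside window
                pooled[i, gx, gy] += hidden[j]
    return pooled

pos = np.array([[0.0, 0.0], [0.5, 0.5], [10.0, 10.0]])
h = np.eye(3)                     # toy hidden states, one per agent
P = social_pool(pos, h)
# Agent 0 pools agent 1 (nearby) but not agent 2 (outside the window)
```

Distant agents thus contribute nothing, which captures the locality of social influence that motivates the pooling layer.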
Numerous other implementations of LSTM-based predictors offer various improvements, such as increased generalizability to new and crowded environments \citep{xue2019location,shi2019pedestrian}, considering the immediate \citep{zhang2019sr} or long-term \citep{xue2017bi} intention of the agents, augmenting the state of the person with the head pose \citep{hasan2018mx} or adding a better pooling mechanism with relative importance of each person in the vicinity of the target agent \citep{xu2018encoding,fernando2018soft,pei2019human}. \cite{huynh2019trajectory} apply LSTM-based trajectory prediction in combination with local transition patterns, learned on the fly in a particular scene. Non-linear motion, historically observed in a coarse grid cell of the environment, informs the LSTM predictor. Several authors use LSTMs to estimate the kinodynamic motion of vehicles, combining the benefits of the physics-based and the pattern-based methods \citep{raipuriaIV2017,deo2018multi}. \cite{raipuriaIV2017} augment the LSTM model with road infrastructure indicators, expressed in the curvilinear coordinate system, to better predict motion in curved road segments. \cite{deo2018multi} propose an interaction-aware multiple-LSTM model to compute stochastic maneuver-dependent predictions of a vehicle, and augment it with an LSTM-based maneuver classification and mixing mechanism. Other approaches use RNNs as models of spatio-temporal graphs for problems that require both spatial and temporal reasoning \citep{jain2016structural,vemula2017socialattention,huang2019stgat,dai2019modeling,ivanovic2019trajectron,eiffert2019predicting}. \cite{jain2016structural} propose an approach for training sequence prediction models on arbitrary high-level spatio-temporal graphs, whose nodes and edges are represented by RNNs. The resulting graph is a feed-forward, fully differentiable, and jointly trainable RNN mixture.
\cite{vemula2017socialattention} apply this method to jointly predict transitions in human crowds. The time series prediction abilities of RNNs have also been combined with different neural network architectures \citep{schmerling2017multimodal,zheng2016generating,zhan2018generative,li2019coordination,choi2010multiple}. \cite{schmerling2017multimodal} consider a traffic weaving scenario and propose a Conditional Variational Autoencoder (CVAE) with RNN subcomponents to model interactive human driver behaviors. The CVAE characterizes a multi-modal distribution over human actions at each time step conditioned on the interaction history, as well as future robot action choices. \cite{zheng2016generating} describe a hierarchical policy approach that automatically reasons about both long-term and short-term goals. The model uses recurrent convolutional neural networks to make predictions for macro-goals (intermediate goals) and micro-actions (relative motion), which are trained independently by supervised learning, combined by an attention module, and finally jointly fine-tuned. \cite{zhan2018generative} extend this approach using Variational RNNs. \cite{choi2019drogon} use spatial-temporal graphs in combination with a CVAE. The spatial-temporal graphs model the relational influence among predicted agents, while the CVAE is conditioned on estimated intentions. Also \cite{li2019coordination} propose a hierarchical architecture where an upper level (based on a variational RNN) provides predictions of discrete coordination activities between agents and a lower level generates actual geometric predictions (using a Conditional Generative Adversarial Network).
The probabilistic framework called \emph{Multiple Futures Predictor} (MFP) \citep{tang2019mfp} models the joint behavior of an arbitrary number of agents via a dynamic attention-based state encoder for capturing relationships between agents, a set of stochastic, discrete latent variables per agent to allow for multimodal future behavior, as well as interactive and step-wise parallel rollouts with agent-specific RNNs to model future interactions. Furthermore, their model allows hypothetical rollouts under assumed behaviors of a particular agent. Several recent works \citep{xue2018ss,zhao2019multi,srikanth2019infer,radwan2018multimodal,vanderHeiden2019safecritic,jain2019discrete,ridel2019scene,Rhinehart_2019_ICCV} combine the benefits of sequential (e.g. RNN-based) and convolutional approaches for jointly modeling the spatial and temporal relations of the observed agents' motion. \cite{xue2018ss} introduce a hierarchical LSTM model, which combines inputs on three scales: the trajectory of the person, the social neighbourhood and features of the global scene layout, extracted with a CNN. \cite{zhao2019multi} propose the Multi-Agent Tensor Fusion encoding, which fuses a contextual image of the environment with sequential trajectories of agents, thus retaining the spatial relation between features of the environment and capturing interactions between the agents. This method is applied to both pedestrians and vehicles. Also \cite{Rhinehart_2019_ICCV} present a multi-agent prediction scheme that combines CNNs with a generative model based on RNNs. Moreover, the approach conditions the predictions on inferred intentions of the agents. \cite{srikanth2019infer} propose a novel input representation for learning vehicle dynamics, which includes semantic images, depth information and other agents' positions. This input is projected into a top-down view and fed into an autoregressive convolutional LSTM model to learn temporal dynamics.
LSTMs have also been used to predict sequences of future human movements based on a learned reward map \citep{saleh2019contextual}. Recently, many authors have applied the GAN architecture to achieve multi-modality in the prediction output \citep{Gupta2018SocialGAN,amirian2019social,kosaraju2019social}. For instance, \cite{Gupta2018SocialGAN} extend the Social-LSTM by using Generative Adversarial Networks and a novel variety loss which encourages the generative network to produce diverse multi-modal predictions. \cite{kosaraju2019social} use a Graph Attention Network in combination with the GAN architecture to better capture the relative importance of surrounding agents and semantic features of the environment. \subsection{Non-sequential Models} \label{sec:motion_patterns:nonsequential} Learning motion patterns in complex environments requires the model to generalize across non-uniform, context-dependent behaviors. Specifying causal constraints, e.g. through the Markovian assumption for the sequential models and additionally the particular functional form for the physics-based methods, might be too restrictive for these situations. Alternatively, instead of focusing on the local transitions of the system, \emph{non-sequential approaches} aim to directly learn a distribution over the long-term trajectories that the observed agent may follow in the future, i.e. learn a set of full motion patterns from data. The most basic non-sequential approaches are based on clustering the observed trajectories, which creates a set of long-term motion patterns \citep{bennewitz2002using,bennewitz2005learning,chen2008pedestrian,bera2016glmp,bera2017aggressive}. In this way, the global structure of the workspace is imposed on top of a sequential model. Clustering-based approaches are illustrated in Fig.~\ref{fig:pictograms-pattern-recognition-based} (d).
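The trajectory clustering idea can be sketched with a simplified stand-in: resample each observed trajectory to a fixed length and run k-means to obtain prototype motion patterns. EM over HMMs, as used in several of the cited works, is more involved; all names and parameters here are illustrative assumptions.

```python
import numpy as np

def resample(traj, n=8):
    """Linearly interpolate a variable-length path to n points."""
    traj = np.asarray(traj, float)
    t = np.linspace(0, len(traj) - 1, n)
    i = t.astype(int)
    j = np.minimum(i + 1, len(traj) - 1)
    w = (t - i)[:, None]
    return (1 - w) * traj[i] + w * traj[j]

def cluster_trajectories(trajs, k=2, iters=20, seed=0):
    """k-means over fixed-length trajectory vectors, yielding k prototype
    motion patterns (a simplified stand-in for EM-based clustering)."""
    X = np.stack([resample(t).ravel() for t in trajs])
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.stack([X[labels == c].mean(0) if np.any(labels == c)
                            else centers[c] for c in range(k)])
    return labels, centers.reshape(k, -1, 2)

# Two groups of slightly perturbed tracks: walking right vs. walking up
right = [[(x, 0.05 * i) for x in range(5)] for i in range(3)]
up = [[(0.05 * i, x) for x in range(5)] for i in range(3)]
labels, protos = cluster_trajectories(right + up)
# The two motion patterns end up in different clusters
```

At prediction time, an observed partial track is compared against the learned prototypes, and the best-matching patterns (or a probabilistic mixture of them) determine the forecast.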
\cite{bennewitz2002using, bennewitz2005learning} cluster recorded trajectories of humans into global motion patterns using the expectation maximization (EM) algorithm and build an HMM model for each cluster. For prediction, the method compares the observed track with the learned motion patterns and reasons about which patterns best explain it. Uncertainty is handled by probabilistic mixing of the most likely patterns. Similarly, \cite{zhou2015learning} model the global motion patterns in a crowd with Linear Dynamic Systems, using EM for parameter estimation. Several authors \citep{makris2002path,piciarelli2005trajectory} propose graph structures to efficiently capture the branching of trajectory clusters. \cite{chen2008pedestrian} propose a method for dynamic clustering of the observed trajectories, assuming that the set of complete motion patterns may not be available at the time of prediction, e.g. in new environments. \cite{sung2012trajectory} propose to represent the agent's states as short trajectories rather than static positions. This higher level of abstraction provides greater flexibility to represent not only position, but also velocity and intention. \cite{suraj2018predicting} directly use a large-scale database of observed trajectories (up to 10 million) to estimate the future positions of a vehicle given only its position, rotation and velocity. Combining the concepts of local motion patterns and clustering, \cite{carvalho2019long} represent each cluster with a piece-wise linear vector field over an arbitrary state-space mesh. Several approaches use Gaussian processes (GPs) or mixture models as cluster centroid representations \citep{tay2008modelling,kim2011gaussian,yoo2016visual}. \cite{tay2008modelling} introduce an approach to predict the motion of a dynamic object in known scenes based on Gaussian mixture models and Gaussian processes. \cite{kim2011gaussian} model continuous dense flow fields from a sparse set of vector sequences.
\cite{yoo2016visual} propose to learn the most common patterns in the scene and their co-occurrence tendency using topic mixture and Gaussian mixture models. Observed trajectories are clustered into several groups of typical patterns that occur at the same time with high probability. Given a set of observed trajectories, prediction is performed considering the dominant pattern group. \cite{makansi2019overcoming} present a Mixture Density Network architecture which generates multiple hypotheses of the future position at a fixed interval $\Delta t$ and then fits a mixture of Gaussian or Laplace distributions to these hypotheses. The clustering-based methods discussed so far generalize statistical information in a particular environment. In comparison, location-invariant methods, based on matching the observed partial trajectory to a set of prototypical trajectories, can be used in arbitrary free space \citep{hermesIVS09,keller2011dagm,xiao2015unsupervised}, see Fig.~\ref{fig:pictograms-pattern-recognition-based} (e). \cite{hermesIVS09} predict trajectories of vehicles by comparing the observed track to a set of motion patterns, clustered with a rotationally-invariant distance metric. In their Probabilistic Hierarchical Trajectory Matching (PHTM) approach, \cite{keller2011dagm} propose a probabilistic search tree of sample human trajectory snippets to find the corresponding matching sub-sequence. \cite{xiao2015unsupervised} decompose the set of sample trajectories into pre-defined motion classes, such as wandering or stopping, rotating and aligning them to start from the same point and to have the longest span along the same axis. In contrast, skipping the clustering step, \cite{nikhil2018convolutional} propose a simple method to map an input trajectory of fixed length to the full future trajectory using a Convolutional Neural Network.
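A minimal sketch of location-invariant prototype matching, drastically simplified relative to approaches like PHTM (exhaustive search instead of a search tree, translation invariance only, illustrative names throughout): the observed partial track is matched against snippets of prototype trajectories, and the continuation of the best-matching snippet is returned as the prediction.

```python
import numpy as np

def predict_by_matching(observed, prototypes, horizon=3):
    """Match an observed partial track against snippets of prototype
    trajectories and return the best match's continuation.

    Snippets are translated so that they start at the origin, giving a simple
    location-invariant comparison (rotation invariance is omitted here).
    """
    obs = np.asarray(observed, float)
    start = obs[0].copy()
    obs = obs - start
    m = len(obs)
    best, best_cost = None, np.inf
    for proto in prototypes:
        proto = np.asarray(proto, float)
        for s in range(len(proto) - m - horizon + 1):
            cost = np.sum((proto[s:s + m] - proto[s] - obs) ** 2)
            if cost < best_cost:
                best_cost = cost
                best = proto[s + m:s + m + horizon] - proto[s] + start
    return best

protos = [[(i, 0) for i in range(8)],   # prototype: straight walk right
          [(0, i) for i in range(8)]]   # prototype: straight walk up
obs = [(5.0, 5.0), (6.0, 5.0)]          # currently moving right
pred = predict_by_matching(obs, protos, horizon=2)
# pred → [[7. 5.], [8. 5.]]
```

The exhaustive scan above costs time linear in the total length of the prototype database, which is why the cited approaches organize the snippets in hierarchical search structures.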
For interaction-aware non-sequential motion prediction, several authors consider the case of two interacting agents \citep{kafer2010recognition,luberIROS2012}. \cite{kafer2010recognition} propose a method for joint pairwise vehicle trajectory estimation at intersections. By comparing the observed motion pattern to the ones stored in a motion database, several prospective future trajectories are extracted independently for each vehicle. The probability of each pair of possible future trajectories is then estimated. \cite{luberIROS2012} model joint pairwise interactions between two people using social information. The authors learn a set of dynamic motion prototypes from observations of relative motion behavior of humans in public spaces. An unsupervised clustering technique determines the most likely future paths of two humans approaching a point of social interaction. In contrast to multi-agent clustering, \cite{trautman2010unfreezing} use Gaussian Processes for making single-agent trajectory predictions. Then, an interaction potential re-weights the set of trajectories based on how close people are located to each other at every moment. A follow-up work \citep{trautman2013robot} incorporates goal information into the model: the goal position is added as a training point into the GP. Another approach by \cite{su2017forecast} uses a social-aware LSTM-based crowd descriptor, which is later integrated into a deep Gaussian Process to predict a complete distribution over the future trajectories of all people. Recently, several approaches for non-sequential prediction of vehicle motion using CNNs were presented \citep{djuric2018motion,cui2019multimodal,hong2019rules}. An uncertainty-aware CNN-based vehicle motion prediction approach is presented by \cite{djuric2018motion}. The authors use a high-definition map image with the projected prior motion of the target vehicle and the full surrounding context as input to a CNN, which produces the short-term trajectory of the target vehicle.
The approach is extended by \cite{cui2019multimodal} to infer multi-modal predictions. \cite{hong2019rules} propose two methods for output representation using multi-modal regression with uncertainty or stacks of grid-map crops. \cite{chai2019multipath} use a fixed set of state-sequence ``anchor'' trajectories (clustered from training data), which correspond to possible modes of future behavior, as input to a CNN for mid-level scene feature inference, and predict a discrete distribution over these anchors. For each anchor, the method regresses offsets from the anchor waypoints along with uncertainties, yielding a Gaussian mixture at each time step. \subsection{Cues of the Static Environment} \label{sec:other_classifications:environment_description} Humans adapt their behavior not only to the movements of other agents but also to the shape and structure of the environment, making extensive use of its topology to reason on the possible paths to reach the long-term goal. Many existing prediction algorithms make use of such geometric information of the environment. Some approaches produce \emph{unaware predictions}, assuming an obstacle-free environment. This category includes several physics-based approaches \citep{zhu1991hidden,elnagarTSMC1998,elnagarISCIRA2001,Foka2010,schneider2013gcpr,Bai2015,pettre2009experiment,blaiotta2019learning}.
Pattern-based methods usually model obstacles implicitly, by learning collision-free patterns \citep{tadokoro1993stochastic,kruse1998camera,bennewitz2002using,ellis2009modelling,tay2008modelling,thompson2009probabilistic,kim2011gaussian,jacobsRAL2017,vasquez2008intentional,joseph2011bayesian,ferguson2015real,wang2015modeling,wang2016building,kucner2013conditional,kucner2017enabling,sun20173dof,yoo2016visual,chen2008pedestrian,chen2016augmented,molina2018modelling,saleh2018intent,saleh2018cyclist,xue2019location,xue2017bi,hasan2018mx,huynh2019trajectory,carvalho2019long,sung2012trajectory,han2019pedestrian,piciarelli2005trajectory,makris2002path,makansi2019overcoming}. When the configuration of obstacles changes, however, such learned patterns no longer reflect the environment, effectively making these methods obstacle-unaware. Location-independent motion patterns are usually obstacle-unaware \citep{luberIROS2012,hermesIVS09,xiao2015unsupervised,GoldhammerICPR2014,unhelkar2015human,nikhil2018convolutional}. Pedestrian crossing prediction methods typically assume an obstacle-free environment \citep{gu2016motion,quintero2014pedestrian,roth2016iv,kooij2014eccv,schulz2015controlled,keller2014tits,minguez2018pedestrian,kooij2018ijcv}, as do most of the vehicle prediction methods \citep{kim2017probabilistic,raipuriaIV2017,deo2018multi,suraj2018predicting,park2018sequence,altche2017lstm,ding2019predicting}, which assume the road surface to be free of static obstacles.
Finally, many methods consider only dynamic entities, but no static obstacles in the environment \citep{trautman2010unfreezing, trautman2013robot,bera2016glmp,althoff2013road,althoff2008stochastic,vemula2017modeling,alahi2016social,bartoli2017context,varshneya2017human,kim2015brvo,zanlungo2011social,kuderer2012feature,broadhurst2005monte,kafer2010recognition,vemula2017socialattention,radwan2018multimodal, bahram2016game,pfeiffer2018data,bisagno2018group,zhang2019sr,shi2019pedestrian,su2017forecast,Gupta2018SocialGAN,xu2018encoding,fernando2018soft,li2019coordination,pei2019human,huang2019stgat,amirian2019social,fernando2019neighbourhood,dai2019modeling,ivanovic2019trajectron,eiffert2019predicting}. In several approaches the exact pose of the objects is known and utilized to compute more informed predictions (we refer to such methods as \emph{obstacle-aware} methods). Mainly, social force-based and similar techniques model the interaction between the moving agents and individual static obstacles \citep{van2008reciprocal,Luber2010,elfring2014learning,ferrer2014behavior,kretzschmar2014learning,pellegrini2009you,yamaguchiCVPR2011,pellegrini2010improving,robicquet2016learning,oli2013human,karasev2016intent,karamouzas2009predictive,karamouzas2010velocity,paris2007pedestrian,zechel2019pedestrian,luo2019gamma, Rhinehart_2019_ICCV}. Several location-independent pattern-based methods \citep{antonini2006behavioral,aoude2011mobile} can handle static object avoidance. Still, obstacle-aware methods may fail in very cluttered environments, due to the complexity of representing an environment with a set of individual obstacles. To overcome this difficulty, many prediction approaches use maps, which are a more complete representation of the environment (we call them \emph{map-aware} methods). Occupancy grid maps are the most common representation for these approaches, e.g.
in the physics-based approach by \cite{rehder2015goal} reachability-based transitions are calculated on a binary grid-map. The planning-based approaches in particular use this kind of representation: thanks to the map, they can infer global, intentional behaviors of the agents \citep{ziebart2009planning, vasquez2016novel, pfeiffer2016predicting, xieICCV2013, previtaliICMLA2016,yiTRIP2016,chen2017decentralized,Rudenko2017workshop,Rudenko2018icra,Rudenko2018iros,henry2010learning,bruce2004better,best2015bayesian,ikeda2013modeling,liao2003voronoi,chung2010mobile,yen2008goal,chung2012incremental,gong2011multi,rosmann2017online}. Fig.~\ref{fig:staticcontextualcues} shows the difference between the \emph{pure motion based predictions}, the \emph{obstacle-aware} and the \emph{map-aware} approaches. The latter perform better in terms of global obstacle avoidance behavior during prediction. \emph{Semantic map based} approaches extend the map-aware approaches by considering various semantic attributes of the static environment. A semantic map \citep{karasev2016intent,kitani2012activity,rehder2017pedestrian,coscia2018long,shenTransferable2018,vasishta2017natural,vasishta2018building,Rhinehart_2018_ECCV,rhinehartArxiv2018,tadokoro1993stochastic,ballan2016knowledge,zhao2019multi,vanderHeiden2019safecritic,ridel2019scene,muench2019composable,saleh2019contextual} or extracted features from a top-down image \citep{xue2018ss,sadeghian2018sophie,kosaraju2019social,tang2019mfp} can be used to capture people's preferences for walking on particular types of surfaces. Furthermore, planning-based methods often use prior knowledge on potential goals in the environment \citep{karasev2016intent,Rudenko2017workshop,previtaliICMLA2016,vasquez2016novel,best2015bayesian}. Location- and time-specific information in the particular environment may help to improve prediction quality \citep{sun20173dof,molina2018modelling}.
Due to the high level of structure in the environment, methods in autonomous driving scenarios extensively use available semantic information, such as street layout and traffic rules \citep{kuhnt2016understanding,agamennoni2012estimation,gu2016motion,keller2014tits,lee2017desire,kooij2014eccv,petrich2013map,pool2017iv,srikanth2019infer,djuric2018motion,xie2018vehicle,cui2019multimodal,hong2019rules,jain2019discrete,chai2019multipath,pool2019context,choi2010multiple} or the current state of the traffic lights \citep{karasev2016intent,gu2016motion,jain2019discrete}, also for predicting pedestrian and cyclist motion \citep{habibi2018context,koschiset,kooij2018ijcv}. \subsection{Cues of Other Dynamic Agents} \label{sec:other_classifications:interaction} Most of the time all agents navigate in a shared environment, adapting their actions, timing and route based on the others' presence and behavior. Therefore, it is beneficial to consider interactions between moving agents when predicting motion. We classify the existing approaches in three categories: \emph{unaware predictors}, \emph{individual-aware predictors} and \emph{group-aware predictors}. The class of unaware predictors includes all methods that generate motion prediction for a single agent, considering only the static contextual cues of the environment. Having no need to explicitly define or learn the interaction model, these methods are simpler to set up, require less training data to generalize, and typically have fewer parameters to estimate. Simpler physics-based methods, such as linear velocity projection or constant acceleration models, are unaware predictors \citep{zhu1991hidden,elnagarTSMC1998,elnagarISCIRA2001,Foka2010,Bai2015,coscia2018long,koschiset,vasishta2017natural,vasishta2018building,xie2018vehicle}.
Many pattern-based \citep{tadokoro1993stochastic,bennewitz2002using,bennewitz2005learning,thompson2009probabilistic,kim2011gaussian,wang2016building,kucner2013conditional,kucner2017enabling,unhelkar2015human,xiao2015unsupervised,GoldhammerICPR2014,chen2008pedestrian,chen2016augmented,suraj2018predicting,habibi2018context,hermesIVS09,molina2018modelling,kim2017probabilistic,saleh2018intent,xue2019location,xue2017bi,huynh2019trajectory,nikhil2018convolutional,sung2012trajectory,carvalho2019long,han2019pedestrian,piciarelli2005trajectory,makris2002path,makansi2019overcoming,ridel2019scene} and planning-based methods \citep{yen2008goal,ziebart2009planning, vasquez2016novel, kitani2012activity, karasev2016intent,gong2011multi,Rudenko2017workshop, Rhinehart_2018_ECCV} are unaware predictors, due to the increased complexity of conditioning the learned transition patterns or optimal actions on the presence and positions of other agents. Methods for predicting pedestrians' crossing behavior \citep{kooij2014eccv,quintero2014pedestrian,minguez2018pedestrian,roth2016iv,gu2016motion,keller2014tits,schulz2015controlled} and cyclist motion \citep{zernetsch2016trajectory,pool2017iv,saleh2018cyclist,pool2019context} typically treat each agent individually. Individual-aware predictors consider the interaction between agents by modeling or learning their influence on each other. Physics-based methods that use social forces \citep{zanlungo2011social,Luber2010,elfring2014learning,ferrer2014behavior,oli2013human,karamouzas2009predictive,blaiotta2019learning} or similar local interaction models \citep{paris2007pedestrian,pellegrini2009you,kim2011gaussian,yamaguchiCVPR2011,robicquet2016learning,pellegrini2010improving,karamouzas2010velocity,pettre2009experiment,luo2019gamma, bansal2019hamilton} are classical examples of individual-aware prediction models. A pattern-based approach by \cite{ikeda2013modeling} models deviations from the desired path using social forces.
In general, however, learning joint motion patterns is a considerably harder task. For example, \cite{trautman2010unfreezing,trautman2013robot} learn unaware motion patterns and then evaluate the predicted probability distribution over the joint paths using an explicit interaction potential. \cite{luberIROS2012} learn pairwise joint motion patterns of two humans approaching the spatial point of interaction. The approach by \cite{yoo2016visual} learns which motion patterns are likely to occur at the same time and uses this information for predicting the future motion of several dynamic objects. Some approaches propose to learn a motion policy or reward function that accounts for dynamic objects in the surroundings \citep{chung2010mobile,chung2012incremental,henry2010learning,lee2016predicting,vemula2017modeling}. \cite{Rudenko2018icra} propose an MDP planning-based method, where optimal policies of people are locally modified to account for other dynamic entities. \cite{wuIV18} and \cite{zechel2019pedestrian} discount predicted transition probabilities to states in collision with other agents. \cite{muench2019composable} decompose the interactive planning problem into two policies with the corresponding Q-functions: one for prediction in a static environment, and another for interaction prediction in an obstacle-free environment.
Many deep learning methods consider interactions between participants: explicitly modeling interacting entities \citep{alahi2016social,bartoli2017context,varshneya2017human,vemula2017socialattention,radwan2018multimodal,pfeiffer2018data,shi2019pedestrian,zhao2019multi,hasan2018mx,xue2018ss,su2017forecast,sadeghian2018sophie,Gupta2018SocialGAN,xu2018encoding,fernando2018soft,vanderHeiden2019safecritic,pei2019human,huang2019stgat,amirian2019social,fernando2019neighbourhood,kosaraju2019social,ivanovic2019trajectron,eiffert2019predicting,saleh2019contextual, choi2019drogon, Rhinehart_2019_ICCV}, implicitly as a result of pixel-wise prediction \citep{walker2014patch}, or by learning a joint motion policy \citep{ma2016forecasting,lee2017desire,shalev2016long,zhan2018generative}. Many vehicle prediction methods consider interaction between traffic participants, e.g. \citep{agamennoni2012estimation,kuhnt2016understanding,raipuriaIV2017,deo2018multi,kim2017probabilistic,broadhurst2005monte,kafer2010recognition, bahram2016game,srikanth2019infer,park2018sequence,djuric2018motion, cui2019multimodal, li2019coordination,hong2019rules,altche2017lstm,jain2019discrete,chai2019multipath,dai2019modeling,ding2019predicting}. \cite{kooij2018ijcv} consider whether the ego-vehicle is on a potential collision course when predicting the road user path in their SLDS-based approach. Group-aware predictors also recognize affiliations and relations of individual agents, account for the probability that they travel together, and model an appropriate reaction of other agents to the moving group formation. For example, several physics-based methods model group relations by introducing additional attractive forces between group members \citep{yamaguchiCVPR2011,pellegrini2010improving,singh2009modelling,qiu2010modeling,karamouzas2012simulating,seitz2014pedestrian,moussaid2010walking,choi2010multiple,robicquet2016learning}.
Several learning-based approaches that use LSTMs \citep{alahi2016social,bartoli2017context,varshneya2017human,pfeiffer2018data,zhang2019sr,shi2019pedestrian} may be capable of implicitly learning intra- and inter-group coherence behavior; however, only the work by \cite{bisagno2018group} states this capability explicitly. A planning-based approach which implicitly respects group integrity by increasing the costs of passing between group members is presented by \cite{rosmann2017online}, and an approach that explicitly models group motion constraints by \cite{Rudenko2018iros}. Algorithms using high-level context information about dynamic agents produce more precise predictions in a variety of cases. Learning advanced social features of human motion improves the performance of interactive predictors, for instance different parameters for interactions of heterogeneous agents \citep{ferrer2014behavior}, advanced motion criteria such as \emph{social comfort} of navigation \citep{kuderer2012feature,luberIROS2012,pfeiffer2016predicting} or ``desire to move with the flow'' and ``avoid dense areas'' \citep{henry2010learning}. Some approaches model prior knowledge in terms of the dynamics of moving agents \citep{lee2017desire,rosmann2017online}, human attributes and personal traits \citep{ma2016forecasting}. \cite{chung2012incremental} present a general framework for learning context-related spatial effects that influence human motion, such as avoiding walking through a waiting line or in front of a person who is observing an artwork in a museum. Modeling also the influence of the robot's presence on the agents' paths is another interesting line of research: \cite{trautman2010unfreezing} and \cite{oli2013human} tackle this problem by placing the robot as a peer-interacting agent among moving humans. Several authors \citep{kuderer2012feature,kretzschmar2014learning,pfeiffer2016predicting,rosmann2017online} optimize joint trajectories for all humans and the robot.
A relevant case of modeling the effect of a robotic herder's actions on the location and shape of a flock of animals is studied by \cite{sumpter2000learning}. Similarly, \cite{schmerling2017multimodal} condition the human response on candidate robot actions for modeling pairwise human-robot interaction. \cite{eiffert2019predicting} include the robot as an interacting agent in the LSTM-based predictor. \cite{tang2019mfp} compute a conditional probability density over the trajectories of other agents given the hypothetical rollout for the robot. \section{Contextual Cues} \label{sec:classification_contextualcues} In this section we discuss the categorization of the contextual cues into those dealing with the target agent (Sec.~\ref{sec:other_classifications:target_agent}), the other dynamic agents (Sec.~\ref{sec:other_classifications:interaction}) and the static environment (Sec.~\ref{sec:other_classifications:environment_description}). \subsection{Cues of the Target Agent} \label{sec:other_classifications:target_agent} The most essential cues for predicting the future states of an agent are related to the agent itself. To this end, most algorithms use the current position and velocity of the target agent \citep{ferrer2014behavior, elfring2014learning, pellegrini2009you, kitani2012activity, karasev2016intent, ziebart2009planning, trautman2010unfreezing, kuderer2012feature, bennewitz2005learning, kucner2017enabling,bera2016glmp,wuIV18,habibi2018context,Rudenko2018iros, bahram2016game, Rhinehart_2018_ECCV, luo2019gamma,bansal2019hamilton}, often considering also the history of recent states/velocities. Position and velocity are also the main attributes of the target agent in vehicle motion prediction tasks \citep{hermesIVS09,broadhurst2005monte,kafer2010recognition}.
Considering the head orientation or full articulated pose of the person \citep{quintero2014pedestrian,unhelkar2015human,kooij2014eccv,roth2016iv,schulz2015controlled,minguez2018pedestrian,kooij2018ijcv,hasan2018mx,blaiotta2019learning} may provide valuable insight into the target agent's immediate intentions or their awareness of the environment. Considering additional semantic attributes of the target agent may further refine the quality of predictions: gender and age in \citep{ma2016forecasting}, personality type \citep{bera2017aggressive}, class of the dynamic agent (e.g. a person or a cyclist in pedestrian areas, motorcycle, car or a truck on a highway) \citep{coscia2018long,ballan2016knowledge,altche2017lstm}, the person's attention and awareness of the robot's presence in \citep{oli2013human,kooij2018ijcv,blaiotta2019learning}, a raised arm as a bending intention indicator for cyclists \citep{pool2019context,kooij2018ijcv}. \section{Physics-based Approaches} \label{sec:posterior_distribution:physics_based} Physics-based models generate future human motion considering a hand-crafted, explicit dynamical model $f$ based on Newton's laws of motion. A common form for $f$ is $\dot{\mathbf{s}}_t = f(\mathbf{s}_t, \mathbf{u}_t, t) + w_t$ where $\mathbf{u}_t$ is the (unknown) control input and $w_t$ the process noise. In fact, motion prediction can be seen as inferring $\mathbf{s}_t$ and $\mathbf{u}_t$ from various estimated or observed cues. A large variety of physics-based models have been developed in the target tracking and automatic control communities to describe motion of dynamic objects in ground, marine, airborne or space applications, typically used as building blocks of a recursive Bayesian filter or multiple-model algorithm. These models differ in the type of motion they describe, such as maneuvering or non-maneuvering motion in 2D or 3D, and in the complexity of the target's kinematic or dynamic model and the complexity of the noise model.
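Prediction with such a model amounts to recursively applying $f$ to the current state estimate. A minimal sketch, assuming simple forward Euler integration and omitting the process noise $w_t$:

```python
def rollout(f, s0, controls, dt):
    """Forward-propagate a state s via Euler integration of ds/dt = f(s, u, t)
    (noise term omitted), returning the predicted state sequence."""
    states, s, t = [], list(s0), 0.0
    for u in controls:
        ds = f(s, u, t)
        s = [si + dsi * dt for si, dsi in zip(s, ds)]
        t += dt
        states.append(s)
    return states

# Point mass moving in 1D: state = (x, vx), control = acceleration
f = lambda s, u, t: [s[1], u]
print(rollout(f, [0.0, 1.0], [0.0, 0.0], 1.0))  # [[1.0, 1.0], [2.0, 1.0]]
```

With zero control input this reduces to a constant velocity prediction; richer models only change the definition of `f`, not the rollout itself.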
See \citep{rongliTAES03survey, rongliTAES10survey} for a survey on physics-based motion models for target tracking. We subdivide physics-based models into (1) \emph{single-model approaches} that rely on a single dynamical model $f$ and (2) \emph{multi-model approaches} that involve several modes of dynamics (see Fig.~\ref{fig:pictograms-physics-based}). \subsection{Single-model Approaches} \subsubsection{Early works and basic models} Many approaches to human motion prediction represent the motion state of target agents as position, velocity and acceleration and use different physics-based models for prediction. Among the simplest are kinematic models, which do not consider the forces that govern the motion. Popular examples include the constant velocity model (CV) that assumes piecewise constant velocity with white noise acceleration, the constant acceleration model (CA) that assumes piecewise constant acceleration with white noise jerk, the coordinated turn model (CT) that assumes constant turn rate and speed with white noise linear and white noise turn acceleration, or the more general curvilinear motion model by \cite{bestTAES97}. The bicycle model is often used as an approximation of the vehicle dynamics (see e.g. \citep{schubertFUSION08}). A large number of works across all application domains rely on kinematic models for their simplicity and acceptable performance under mild conditions such as tracking with little motion uncertainty and short prediction horizons. Examples include \citep{mogelmose2015trajectory} for hazard inference from linear motion predictions of pedestrians or \citep{elnagarISCIRA2001} for Kalman filter-based (KF) prediction of dynamic obstacles using a constant acceleration model. \cite{barth2008will} use the coordinated turn model for one-step ahead prediction in an Extended Kalman Filter (EKF) to track oncoming vehicles from point clouds generated by an in-car stereo camera.
\cite{batz2009recognition} use a variant of the coordinated turn model for one-step motion prediction of vehicles within an Unscented KF to detect dangerous situations based on predicted mutual distances between vehicles. Dynamic models account for forces which, following Newton's laws, are the key descriptor of motion. Such models can become complex when they describe the physics of wheels, gearboxes, engines, or friction effects. In addition to their complexity, forces that govern the motion of other agents are not directly observable from sensory data. This makes dynamic models more challenging for motion prediction. \cite{zernetsch2016trajectory} use a dynamic model for trajectory prediction of cyclists that contains the driving force and the resistance forces from acceleration, inclination, rolling and air. The authors show experimentally that long-term predictions up to 2.5 sec ahead are geometrically more accurate when compared to a standard CV model. Autoregressive models (ARM), which, unlike first-order Markov models, account for the history of states, have also been used for motion prediction. \cite{elnagarTSMC1998} employ a third-order ARM to predict the next position and orientation of moving obstacles using maximum-likelihood estimation of the ARM parameters. \cite{caiECCV06} use a second-order ARM for single-step motion prediction within a particle filter for visual target tracking of hockey players. The early work by \cite{zhu1991hidden} uses an autoregressive moving average model as transition function of a Hidden Markov Model (HMM) to predict occupancy probabilities of moving obstacles over multiple time steps with applications to predictive planning. Physics-based models are used for motion prediction by recursively applying the dynamics model $f$ to the current state of the target agent. So far, with the exception of \citep{zhu1991hidden}, the works described above make only one-step ahead predictions and ignore contextual cues from the environment.
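Multi-step prediction with an autoregressive model can be sketched by feeding each prediction back into the state history. The coefficients below are illustrative choices, not parameters estimated from data as in the cited works:

```python
def ar_predict(history, coeffs, steps):
    """Roll an autoregressive model x_t = sum_i c_i * x_{t-i} forward
    for `steps` predictions (noise term omitted)."""
    h = list(history)
    preds = []
    for _ in range(steps):
        x = sum(c * h[-i] for i, c in enumerate(coeffs, start=1))
        preds.append(x)
        h.append(x)  # predictions become part of the history
    return preds

# Hypothetical 2nd-order coefficients that reproduce a constant-velocity trend:
# x_t = 2*x_{t-1} - x_{t-2}
print(ar_predict([0.0, 1.0], [2.0, -1.0], 3))  # [2.0, 3.0, 4.0]
```

Unlike a first-order Markov model, the prediction at each step depends on as many past states as there are coefficients.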
To account for context, the dynamics model $f$ can be extended by additional forces, model parameters or state constraints as discussed hereafter. \begin{figure*}[t!] \centering \includegraphics[width=0.99\linewidth,keepaspectratio]{physics-based-image-pictograms.pdf} \caption{Examples of the physics-based approaches: {\bf (a)} a method with a single dynamical model, {\bf (b)} a reachability-based method, which accounts for all possible transitions from the given motion state, {\bf (c)} an attraction-repulsion approach, which accounts for dynamic environment cues, {\bf (d)} a multi-model method with several modes of dynamics and the DBN switching mechanism.} \label{fig:pictograms-physics-based} \end{figure*} \subsubsection{Models with map-based contextual cues} A number of approaches extend physics-based models to account for information from a map, particularly for the task of tracking ground vehicles on roads. The methods developed to this end differ in how road constraints are derived and incorporated into the state estimation problem, see the survey by \cite{simonCTA10}. \cite{yangJAIF08}, for example, use a regular KF and project the unconstrained state estimate onto the constrained surface for tracking on-road ground vehicles with a surveillance radar. \cite{yangFUSION05} use the technique to reduce the system model parametrization to the constrained surface. They reduce vehicle motion to a 1D curvilinear road representation for filtering. \cite{batkovicARXIV18} predict pedestrian motion along a graph with straight line edges centered on side- and crosswalks. Using a unicycle model and a control approach to keep the predictions along the edges, they evaluate long-term predictions up to 10 sec ahead. When there are several possible turns at a node, i.e. at bifurcations, predictions are propagated along all outgoing edges. Another class of techniques uses the road information as pseudo measurements, pursued e.g.
by \cite{petrich2013map} who use a kinematic bicycle model for $f$ and pseudo measurements from the centerlines of lanes to predict future vehicle trajectories several seconds ahead. When there are several possible turns, e.g. at intersections, the approach generates a new motion hypothesis for each relevant lane by using an EKF. When agents move freely, e.g. do not comply with road constraints, we need different ways to represent free space and account for map information. To this end, several authors propose grid-based \citep{luberIJRR11,rehder2015goal,coscia2018long} and more general graph-based space discretizations \citep{aoude2010threat,koschiset}. \cite{luberIJRR11} use 2D laser data to track people from a mobile robot and learn a so-called spatial affordance map, a grid-based spatial Poisson process from which a walkable area map of the environment can be derived. They predict future trajectories of people during lengthy occlusion events using an auxiliary PF with look-ahead particles obtained by forward-simulation of the curvilinear motion model proposed by \cite{bestTAES97}. This way, long-term predictions (up to 50 steps ahead) stay focused on high-probability regions with the result of improved tracking performance. \cite{rehder2015goal} also choose a regular grid to represent the belief about pedestrian locations in a linear road scenario. They propose a variant of a Bayesian histogram filter to achieve map-aware predictions 3 seconds ahead by combining forward propagation of a unicycle pedestrian model from the start and in backward direction from the goal with prior place-dependent knowledge of motion learned from previously observed trajectories. Similarly, \cite{coscia2018long} use polar grids, centered at the currently predicted agent position, to represent four different local influences: a CV motion model, prior motion knowledge learned from data, semantic map annotations like ``road'' or ``grass'' and the direction to the goal.
The next velocity is then obtained from the normalized product of the four polar distributions and forward propagated for long-term prediction of pedestrians and cyclists in urban scenarios. As in \citep{rehder2015goal}, no planning is involved and the learned prior knowledge is place-dependent. \cite{koschiset} exploit information on road segment connectivity and semantic regions to compute reachability-based predictions of pedestrians, similarly to \citep{rehder2015goal}. The authors formalize several relevant traffic rules, e.g. pedestrian crossing permission on the green light, as additional motion constraints. \cite{aoude2010threat} grow a tree of future trajectories for each target agent using a closed-loop RRT algorithm that samples the controls of a bicycle motion model \citep{kuwataTCST09} avoiding obstacles in the map. Based on the agent's intentions, recognized using an SVM classifier and features from observed trajectories, they bias the tree growth towards areas that are more likely for the agent to enter and determine the best evasive maneuver for the ego-vehicle to minimize threat at intersection scenarios. A reachability-based model, such as \citep{rehder2015goal,koschiset,aoude2010threat}, is illustrated in Fig.~\ref{fig:pictograms-physics-based} (b). So far, we discussed extensions to physics-based motion models that embed different types of map information. All those works, however, consider only a single target agent and neglect local interactions between multiple agents. Hereafter, we will discuss methods that add social situation awareness, predicting several target agents jointly. \subsubsection{Models with dynamic environment cues} There are several ways to incorporate local agent interaction models into physics-based approaches for prediction, one popular example being the social force (SF) model by \cite{helbing1995social}, see Fig.~\ref{fig:pictograms-physics-based} (c).
Developed for the purpose of crowd analysis and egress research, the model superimposes attractive forces from a goal with repulsive forces from other agents and obstacles. Several works extend the dynamics model $f$ to include social forces, e.g. for improved short-term prediction for pedestrian tracking in 2D laser data \citep{Luber2010} or image data \citep{pellegrini2009you}. \cite{elfring2014learning} combine the HMM-based goal estimation method introduced by \cite{vasquez2008intentional} with the basic SF-based human motion prediction by \cite{Luber2010}. For intention estimation, the observed people trajectories are summarized in a sparse topological map of the environment. Each node of the map encodes a state--destination pair, and the goal inference using the observed trajectory is carried out in a maximum-likelihood manner. \cite{ferrer2014behavior} estimate the interaction parameters of the SF for each pair of people in the scene individually. For this purpose, several \emph{behaviors} (i.e. sets of SF parameters) are learned offline, and the observed interaction between any two people is associated with the closest ``behavior''. The approach by \cite{oli2013human} defines the robot operating in social spaces as an interacting agent, affected by the social forces. Each human is flagged as either aware or unaware of the robot, which defines the repulsive force the robot exerts on that person. Such awareness is inferred using visual cues (gaze direction and past trajectory). In order to achieve more realistic behaviors, several extensions to the social force model are proposed. \cite{yan2014modeling} present a model that embeds social relationships in the linear combination of predefined basic social effects (attraction, repulsion and non-interaction). The motion predictor maintains several hypotheses over the social modes in which the pedestrians are involved.
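One integration step of such a social force model can be sketched as follows. The parameter values and the exponential repulsion term are illustrative choices, not the fitted parameters of any cited work:

```python
import math

def social_force_step(pos, vel, goal, others, dt=0.1,
                      v0=1.3, tau=0.5, A=2.0, B=0.3):
    """One Euler step of a minimal social force model: relaxation toward
    the desired velocity plus exponential repulsion from other agents.
    v0 (preferred speed), tau (relaxation time), A, B (repulsion strength
    and range) are illustrative, not calibrated, values."""
    # Desired velocity: unit vector toward the goal scaled by v0
    dx, dy = goal[0] - pos[0], goal[1] - pos[1]
    d = math.hypot(dx, dy) or 1e-9
    fx = (v0 * dx / d - vel[0]) / tau
    fy = (v0 * dy / d - vel[1]) / tau
    # Repulsion from every other agent, decaying with distance
    for ox, oy in others:
        rx, ry = pos[0] - ox, pos[1] - oy
        r = math.hypot(rx, ry) or 1e-9
        mag = A * math.exp(-r / B)
        fx += mag * rx / r
        fy += mag * ry / r
    vel = (vel[0] + fx * dt, vel[1] + fy * dt)
    pos = (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)
    return pos, vel

# A pedestrian heading to a goal while another person stands slightly off-path
pos, vel = social_force_step((0.0, 0.0), (0.0, 0.0), (5.0, 0.0), [(1.0, 0.3)])
```

Repeating the step yields a full predicted trajectory; obstacles can be handled analogously by adding repulsive terms for them.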
Predictive collision avoidance behavior of the SF agents is introduced by \cite{karamouzas2009predictive} and \cite{zanlungo2011social}. In particular, \cite{karamouzas2009predictive} model each agent as adapting its route as early as possible, trying to minimize the amount of interactions with others and the energy required to resolve these interactions. To this end, an evasion force, which depends on the predicted point of collision and the distance to it, is applied to each agent. Updates to the SF model to consider also group motion are proposed by \cite{moussaid2010walking} and \cite{farina2017walking}. Other agent interaction models, not based on the social forces, for example for road vehicles, have also been used. An interactive kinematic motion model for vehicles on a single lane has been proposed by \cite{treiber2000congested} to predict the longitudinal motion of a target vehicle in the presence of preceding vehicles. The model, called Intelligent Driver Model (IDM), was used e.g. by \cite{liebnerITSM13} for driver intent inference at urban intersections. \cite{hoermannIV17} learn the driving style of preceding vehicles by estimating the IDM parameters online using particle filtering and near- and far-range radar observations. Prediction of the longitudinal motion of preceding vehicles, in the experiments up to 10 seconds ahead, is then obtained by forward propagation of the model. Several approaches exploit the \emph{reciprocal velocity obstacles} (RVO) model \citep{van2008reciprocal} for jointly predicting human motions. \cite{kim2015brvo} use the Ensemble Kalman filtering technique together with the Expectation-Maximization algorithm to estimate and improve the human motion model (i.e. RVO parameters).
\cite{bera2016glmp} propose a method that dynamically estimates parameters of the RVO function for each pedestrian moving in a crowd, namely current and preferred velocities per agent and global motion characteristics such as entry points and movement features. A follow-up work \citep{bera2017aggressive} also introduces online estimation of personality traits. Each pedestrian's behavior is characterized as a weighted combination of six personality traits (aggressive, assertive, shy, active, tense and impulsive) based on the observations, thus defining the parameters of the RVO model for this person. Other approaches instead compute joint motion predictions based on the time of possible collision between pairs of agents. \cite{paris2007pedestrian} propose a method for modeling predictive collision avoidance behavior in simulated scenarios. For each pedestrian, the current velocities of their neighbors are extrapolated in the 3D $(x,y,t)$ space, and all actions that result in a collision with dynamic and static obstacles are excluded. A similar problem is addressed by \cite{pettre2009experiment}, who evaluate real people trajectories in an interactive experiment and design a predictive collision avoidance approach, capable of reproducing realistic joint maneuvers, such as giving way and passing first. Other methods compute joint motion predictions based on the expected point of closest approach between pedestrians. \cite{pellegrini2009you} are the first to propose such an approach, called \emph{Linear Trajectory Avoidance} (LTA): the method first computes the expected point of closest approach between different agents, and then uses it as a driving force to perform avoidance between the agents. Based on the LTA, \cite{yamaguchiCVPR2011} formulate a human motion prediction approach as an energy minimization problem. The energy function considers different properties of people motion: damping, speed, direction, attraction, being in a group, avoiding collisions.
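For constant-velocity extrapolation, the expected time and distance of closest approach that drive LTA-style avoidance have a closed-form solution; a minimal sketch:

```python
def closest_approach(p1, v1, p2, v2):
    """Time and minimal distance of closest approach for two agents moving
    with constant velocities (the quantity LTA-style methods react to).
    Positions and velocities are (x, y) tuples; future times only."""
    dpx, dpy = p2[0] - p1[0], p2[1] - p1[1]
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]
    vv = dvx * dvx + dvy * dvy
    # Identical velocities: the separation never changes, closest approach is now.
    # Otherwise minimize |dp + t*dv|^2 over t, clamped to the future (t >= 0).
    t = 0.0 if vv < 1e-12 else max(0.0, -(dpx * dvx + dpy * dvy) / vv)
    dx, dy = dpx + t * dvx, dpy + t * dvy
    return t, (dx * dx + dy * dy) ** 0.5
```

If the minimal distance falls below a comfort threshold, an avoidance term steers the agent away from the predicted point of closest approach.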
The approach of Yamaguchi is further improved by \cite{robicquet2016learning} by considering several different sets of the energy functional parameters, learned from the training data. Each set of parameters represents a distinct behavior (navigation style of the agent). Local interaction modeling methods, as well as approaches for predicting motion in crowds, usually benefit from detecting and considering groups of people who walk together. For example, \cite{pellegrini2010improving} propose an approach to model joint trajectories of people, taking group relations into account. The proposed framework operates in two steps: first, it generates possible trajectory hypotheses for each person, then it selects the best hypothesis that maximizes a likelihood function, taking into account social factors, while at the same time estimating group membership. People and relations are modeled with Conditional Random Fields (CRF). \cite{choi2010multiple} propose an interaction model that incorporates a linear motion assumption, repulsion of nearby people and group coherence via synchronization of velocities. Further group motion models, e.g. \citep{singh2009modelling,qiu2010modeling,karamouzas2012simulating,seitz2014pedestrian}, developed in the simulation and visualization communities, typically address group cohesion with additional forces that attract members to each other, assigning leader and follower roles or imposing a certain group formation. A recent reachability-based pedestrian occupancy prediction method, presented by \cite{zechel2019pedestrian}, accounts for both dynamic objects and the semantics of the static environment. The authors first use a physical model to determine the reachable locations of a person, and then reduce the area based on the intersections with the static environment and the presence probabilities of other dynamic agents.
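The reachability-based occupancy idea can be sketched on a grid: forward-propagate the set of cells reachable at maximum speed, pruning cells blocked by the static environment. The grid resolution and speed bound below are illustrative assumptions, not values from the cited work:

```python
from collections import deque

def reachable_set(start, t_horizon, v_max, obstacles, cell=1.0):
    """Over-approximated set of grid cells a pedestrian can reach within
    t_horizon seconds, pruned by the static environment (in the spirit of
    reachability-based occupancy prediction). Cells are (i, j) pairs;
    obstacles is a set of blocked cells."""
    max_steps = int(v_max * t_horizon / cell)  # cells reachable at top speed
    seen = {start}
    frontier = deque([(start, 0)])
    while frontier:
        (i, j), d = frontier.popleft()
        if d == max_steps:
            continue
        # 4-connected expansion; blocked cells are never entered.
        for nb in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if nb not in seen and nb not in obstacles:
                seen.add(nb)
                frontier.append((nb, d + 1))
    return seen
```

Intersecting such sets with the occupancy of other dynamic agents then yields the reduced prediction area described above.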
Similarly, \cite{luo2019gamma} compute future agent predictions based on an optimization approach that handles physical constraints, i.e. kinematics and geometry of the agents, and behavioral constraints, i.e. intention, attention and responsibility. \subsection{Multi-model Approaches} Complex agent motion is poorly described by a single dynamical model $f$. Although the incorporation of map information and influences from multiple agents renders such approaches more flexible, they remain inherently limited. A common approach to modeling general motion of maneuvering targets is the definition and fusion of different prototypical motion modes, each described by a different dynamic regime $f$. Modes may be linear movements, turn maneuvers, or sudden accelerations that, over time, form sequences able to describe complex motion behavior. Since the motion modes of other agents are not directly observable, we need techniques to represent and reason about motion mode uncertainty. The primary approaches to this end are multi-model (MM) methods \citep{rongliTAES05survey} and hybrid estimation \citep{hofbaur2004hybrid}. MM methods maintain a hybrid system state $\xi=(\mathbf{x},s)$ that augments the continuous-valued $\mathbf{x}$ by a discrete-valued modal state $s$. Following \citep{rongliTAES05survey}, MM methods generally consist of four elements: a fixed or on-line adaptive model set, a strategy to deal with the discrete-valued uncertainties (e.g. model sequences under a Markov or semi-Markov assumption), a recursive estimation scheme to deal with the continuous-valued components conditioned on the model, and a mechanism to generate the overall best estimate from a fusion or selection of the individual filters. For prediction, MM methods are used in several ways: to represent more complex motion, and to incorporate context information from other agents and from the map.
A naive MM approach, presented by \cite{pool2017iv}, predicts the future motion of cyclists using a uniform mixture of five Linear Dynamic Systems (LDS) dynamics-based motion strategies: go straight, turn $45^\circ$ or $90^\circ$ left or right. The probability of each strategy is set to zero if the predicted path does not comply with the road topology in the place of prediction. The interacting multiple model filter (IMM) is a widely used inference technique applied to MM models, with numerous applications in tracking \citep{mazor1998interacting} and prediction. For instance, \cite{kaempchen2004imm} propose a method for future vehicle state estimation that switches between constant acceleration and simplified bicycle dynamical models. Uncertainty in the next transition is explicitly modeled with Gaussian noise. \cite{schneider2013gcpr} introduce an IMM for pedestrian trajectory prediction which combines several basic motion models (constant velocity, constant acceleration and constant turn). Also \cite{schulz2015controlled} propose a method for predicting the future path of a pedestrian using an IMM framework with constant velocity, constant position and coordinated turn models. In this work, model transitions are controlled by an intention recognition system based on Latent-dynamic Conditional Random Fields: based on the features of the person's dynamics (position and velocity) and situational awareness (head orientation), the intention is classified as crossing, stopping or going in the same direction. Joint vehicle trajectory estimation, also using IMMs, is considered by \cite{kuhnt2015towards,kuhnt2016understanding} in a method that adopts pre-defined environment geometry to estimate the possible routes of each individual vehicle. Contextual interaction constraints are embedded in a Bayesian Network that estimates the evolution of the traffic situation.
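The IMM recursion used by these works follows four steps: mixing, mode-matched filtering, mode probability update and fusion. A minimal scalar sketch with hypothetical modes, not tied to any specific cited system:

```python
import math

def gauss_pdf(z, mean, var):
    return math.exp(-0.5 * (z - mean) ** 2 / var) / math.sqrt(2 * math.pi * var)

def imm_step(x, P, mu, Pi, predict_fns, z, R, Q):
    """One cycle of a scalar interacting-multiple-model (IMM) filter.
    x, P, mu: per-model means, variances and mode probabilities;
    Pi[i][j]: Markov mode transition probability; predict_fns: per-model
    state propagation; z, R: measurement and its noise variance; Q: process
    noise. Returns updated (x, P, mu) and the fused estimate."""
    n = len(x)
    # 1. Mixing: each filter restarts from a mode-conditioned mixture.
    c = [sum(Pi[i][j] * mu[i] for i in range(n)) for j in range(n)]
    x0 = [sum(Pi[i][j] * mu[i] * x[i] for i in range(n)) / c[j] for j in range(n)]
    P0 = [sum(Pi[i][j] * mu[i] * (P[i] + (x[i] - x0[j]) ** 2)
              for i in range(n)) / c[j] for j in range(n)]
    # 2. Mode-matched Kalman predict and update.
    x_new, P_new, lik = [], [], []
    for j in range(n):
        xp, Pp = predict_fns[j](x0[j]), P0[j] + Q
        lik.append(gauss_pdf(z, xp, Pp + R))
        K = Pp / (Pp + R)
        x_new.append(xp + K * (z - xp))
        P_new.append((1 - K) * Pp)
    # 3. Mode probability update from the measurement likelihoods.
    mu_un = [c[j] * lik[j] for j in range(n)]
    s = sum(mu_un)
    mu_new = [m / s for m in mu_un]
    # 4. Fusion: moment-matched overall estimate.
    x_fused = sum(mu_new[j] * x_new[j] for j in range(n))
    return x_new, P_new, mu_new, x_fused
```

Iterating this step with, e.g., a "standing" model (`lambda s: s`) and a "walking" model (`lambda s: s + 1.0`) propagates both the state estimate and the mode probabilities; for prediction, the same recursion is forward-simulated without the measurement update.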
Other examples of IMM techniques are variable-structure IMMs for ground vehicles \citep{kirubarajanTAES00,noeSDPST00,pannetierFUSION05,sheaSDPST00}, which account for road constraints. In a recent work, \cite{xie2018vehicle} combine a kinematics-based constant turn rate and acceleration model with IMM-based mixing of lane keeping and lane changing maneuvers. The method is aware of the road geometry and produces results for a varying prediction horizon. An alternative approach to hybrid estimation problems are dynamic Bayesian networks (DBNs), which inherit the broad variety of modeling schemes and the large corpus of exact and approximate inference and learning techniques from probabilistic graphical models \citep{koller2009probabilistic}. An example of a DBN-based multi-model approach is given in Fig.~\ref{fig:pictograms-physics-based} (d). The seminal work of \cite{pentland1999modeling} introduces an approach to model human behaviors by coupling a set of dynamic systems (i.e. a bank of Kalman filters (KF)) with an HMM, which is a special case of the DBN. The authors introduce a dynamic Markov system that infers future human behaviors (a set of macro-actions described by a set of KFs) based on measured dynamic quantities (e.g. acceleration, torque). The approach was used to accurately categorize human driving actions. \cite{agamennoni2012estimation} jointly model the agent dynamics and situational context using a DBN. The vehicle dynamics are described by a bicycle model, whereas the context is defined by a weighted feature function accounting e.g. for closeness between agents or place-dependent information from a map. The model resembles a switched Bayesian filter but considers a more general conditioning of the switch transitions and the case of multiple agents. The authors apply the model to the task of long-term multi-vehicle trajectory prediction of mining vehicles, useful for instance during GPS outages.
\cite{kooij2014eccv} propose a context-aware path prediction method for pedestrians intending to laterally cross a street, which makes use of Switching Linear Dynamical Systems (SLDS) to model maneuvering pedestrians that alternate between motion models (e.g. walking straight, stopping). The approach adopts a Dynamic Bayesian Network (DBN) to infer the next pedestrian movements based on the SLDS model. The latent (context) variables relate to pedestrian awareness of an oncoming vehicle (head orientation), the distance to the curbside and the situation criticality. \cite{kooij2018ijcv} extend this work to cover a cyclist turning scenario. In another extension of \citep{kooij2014eccv}, \cite{roth2016iv} use a second context-based SLDS to model the ``braking'' and ``driving'' behaviors of the ego-vehicle. The two SLDS sub-graphs for modeling pedestrian and vehicle paths are combined into a joint DBN, where the situation criticality latent state is shared. \cite{gu2016motion} propose a DBN-based motion model with particle filter inference to estimate the future position, velocity and crossing intention of a pedestrian. During inference the approach considers standing, walking and running motion modes of pedestrians. \cite{gindele2010probabilistic} jointly model future trajectories of vehicles with a DBN, describing the local context of the interaction between multiple drivers with a set of numerical features. These features are used to classify the current situation of each driver and reason on available behaviors, such as ``follow'', ``sheer in'' or ``overtake'', represented as B\'ezier curves. \cite{blaiotta2019learning} also propose a DBN for pedestrian prediction with two motion modes (walking and standing), a contextual awareness flag for the oncoming vehicle and social force-based motion dynamics for pedestrians.
Techniques derived from stochastic reachability analysis \citep{althoff2010reachability} form another class of hybrid approaches to compute human motion prediction. In general, these methods model agents as hybrid systems (with multiple modes) and infer agents' future motions by computing stochastic reachable sets. The approach by \cite{althoff2008stochastic} generates the stochastic reachable sets for interacting traffic participants using Markov chains, where each chain approximates the behavior of a single agent. Each vehicle has its own dynamics with many modes (e.g. acceleration, deceleration, standstill, speed limit), and its goal is assumed to be known. \cite{althoff2013road} further extend \citep{althoff2008stochastic} with an over-approximative estimation of the occupancy sets. The method is particularly framed for hybrid dynamics (mixed discrete and continuous), where computing the exact reachability sets can be computationally infeasible. To overcome this issue, the method proposes to intersect different occupancy sets for different abstractions of the dynamical model. The work by \cite{bansal2019hamilton} also uses a reachability approach for solving the prediction problem for multi-model systems. Rather than using a probability distribution over the human's next actions, the approach uses a deterministic set of allowable human actions. This reduces the complexity of the predictor and allows for an easy certification process. \section{Planning-based Approaches} \label{sec:posterior_distribution:planning_based} Planning-based approaches solve a sequential decision-making problem by reasoning about the future to infer a model of the agent's motion. These approaches follow the \emph{Sense-Reason-Act} paradigm introduced earlier in Sec.~\ref{sec:taxonomy}. Unlike the previous two modeling approaches, the planning-based approach incorporates the concept of a rational agent when modeling human motions.
Under the assumption of a rational human, the models used to represent human motion must take into account the impact of current actions on the future. As a result, much of the work covered in this section uses objective functions that minimize some notion of the total cost of a sequence of actions (motions), and not just the cost of one action in isolation. \begin{figure}[t!] \centering \includegraphics[width=0.99\linewidth,keepaspectratio]{planning-based-image-pictograms.pdf} \caption{Examples of the planning-based approaches: {\bf (a)} forward planning approach, which uses a predefined cost function (e.g. Euclidean distance), and {\bf (b)} inverse planning approach, which infers the feature-based cost function from observations.} \label{fig:pictograms-planning-based} \end{figure} Here we classify planning-based approaches into two sub-categories, depicted in Fig.~\ref{fig:pictograms-planning-based}. \emph{Forward planning-based approaches} (Sec.~\ref{sec:forwardplanning}) use a pre-defined cost function to predict human motion, and \emph{inverse planning-based approaches} (Sec.~\ref{subsubsection:learningplanning}) infer the cost (or policy) function from observations of human behavior and then use that cost (or policy) function to predict human motion. \subsection{Forward Planning Approaches} \label{sec:forwardplanning} \subsubsection{Motion and path planning methods} To make basic goal-informed predictions, several methods use optimal motion and path planning techniques with a hand-crafted cost function \citep{bruce2004better, gong2011multi,xieICCV2013,yiTRIP2016,vasishta2017natural}. \cite{bruce2004better} propose to use a path planning algorithm to infer how a person would move towards destinations in the environment. Predictions are performed using a set of learned goals.
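The basic goal-informed prediction scheme can be sketched with a hand-crafted cost function: compute the cost-to-go to a hypothesized goal over a grid (here with Dijkstra and unit step costs), then prefer next-step transitions that decrease it. The softmax "rationality" parameter below is an illustrative assumption:

```python
import heapq
import math

def cost_to_go(goal, free_cells):
    """Dijkstra cost-to-go from every free grid cell to the goal, using a
    hand-crafted unit step cost (the simplest forward-planning cost)."""
    dist = {goal: 0.0}
    pq = [(0.0, goal)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if d > dist[(i, j)]:
            continue
        for nb in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)):
            if nb in free_cells and d + 1 < dist.get(nb, float("inf")):
                dist[nb] = d + 1
                heapq.heappush(pq, (d + 1, nb))
    return dist

def transition_probs(cell, dist, beta=2.0):
    """Soft transition distribution over neighbor cells: moves with lower
    cost-to-go are exponentially preferred (softmax with parameter beta)."""
    i, j = cell
    nbs = [nb for nb in ((i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1))
           if nb in dist]
    w = [math.exp(-beta * dist[nb]) for nb in nbs]
    s = sum(w)
    return {nb: wi / s for nb, wi in zip(nbs, w)}
```

Rolling a particle forward through these transition distributions, one per goal hypothesis, produces the probabilistic goal-directed predictions discussed in this section.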
\cite{gong2011multi} use multiple long-term goal-directed path hypotheses from different homotopy classes, generated with a modified A* algorithm \citep{bhattacharya2010search}. \cite{xieICCV2013} describe a Dijkstra-based approach to predict human transitions across \emph{dark energy} fields generated from video data. Every goal location generates an attractive \emph{dark matter} Gaussian force field, while every non-walkable location generates a repulsive one. The dark matter functional objects, the map and the goals are inferred on-line using a Markov chain Monte Carlo technique. For predicting human motion in a crowd, \cite{yiTRIP2016} introduce an energy map to model the traveling difficulty of each location in the scene, accounting for the obstacle layout, moving people and stationary groups. The energy map is personalized for each observed agent, and the Fast Marching Method (FMM) \citep{sethian1996fast} is used to predict the person's path. \cite{vasishta2017natural} use A* search over a potential cost-map function for pedestrian trajectory prediction, aiming to recognize the illegal crossing intention of the observed agent. The potential field accounts for semantic properties of the urban environment. Other methods model the probabilities of future motion based on cost-to-go value estimates \citep{yen2008goal,best2015bayesian,karasev2016intent,vasquez2016novel,Rudenko2017workshop}. \cite{yen2008goal} propose a probabilistic goal-directed motion model that accounts for several goals in the environment. The method computes the cost-to-go function for each goal and evaluates the probabilities of feasible transitions in each state. A person's trajectory is predicted using a particle filter with Monte Carlo sampling. \cite{best2015bayesian} propose a Bayesian framework that exploits a set of path hypotheses to estimate the intended destination and the future trajectory.
To this end, a probabilistic dynamical model is used, which evaluates the next states of the agent based on the decrease of the distance to the intended goal. Hypotheses are generated from a Probabilistic Roadmap (PRM). \cite{karasev2016intent} solve the prediction problem using a jump-Markov Decision Process, modeling the agents' behavior as switching non-linear dynamical systems. A soft MDP policy describes the nonlinear motion dynamics, and the latent goal variable governs the switches. The method uses hand-crafted costs for each surface type (e.g. sidewalk, crosswalk, road, grass), and handles time-dependent information such as traffic signals. Instead of using an MDP formulation, \cite{vasquez2016novel} propose the Fast Marching Method (FMM) to compute the cost-to-go function for a set of goals. The predictor uses a velocity-dependent probabilistic motion model, describes the temporal evolution along the predicted path, and offers gradient-based goal prediction that allows quick recognition of changes in the intended destination. \subsubsection{Multi-agent forward planning} Most planning-based methods discussed so far do not consider interactions between agents in the scene. To account for the presence of other agents, several authors propose to modify individual optimal policies locally with physics-based methods \citep{van2008interactive,Rudenko2018icra,wuIV18} or imitation learning \citep{muench2019composable}. A crowd simulation approach that combines global planning and local collision avoidance is presented by \cite{van2008interactive}. A global path for each agent is computed using a Probabilistic Road Map (PRM), considering only static obstacles. Local collision avoidance along the global path is done jointly for all agents using the Reciprocal Velocity Obstacles (RVO) \citep{van2008reciprocal} method.
\cite{Rudenko2018icra} extend the MDP-based approaches \citep{ziebart2009planning,karasev2016intent} with a fast random-walk-based method to generate joint predictions for all observed people using social forces. The authors further extend their approach with group-based social motion constraints in \citep{Rudenko2018iros}. \cite{wuIV18} extend the gridmap transition-based and reachability-based framework \citep{rehder2015goal,coscia2018long} with automatic inference of local goal points, and calculate the stochastic policy in each cell, augmenting the physics-based dynamics with the optimal motion direction. The motion of pedestrians is predicted jointly with other traffic participants by risk checking of future states based on a gap acceptance model \citep{brewer2006exploration}. Instead of using a physics-based approach (e.g. social forces) for augmenting the MDP-based predictor, \cite{muench2019composable} propose to learn an additional interaction-aware Q-function with imitation learning. A number of approaches consider cooperative planning in a joint state-space that includes all agents \citep{broadhurst2005monte,rosmann2015timed, bahram2016game,chen2017decentralized}. \cite{broadhurst2005monte} use Monte Carlo sampling to generate probability distributions over future trajectories of vehicles and pedestrians jointly. The approach considers several available actions for each agent in the scene: each vehicle executes one of the hand-crafted behaviors, and humans are assumed to move freely in all directions. \cite{rosmann2017online} also consider planning for cooperating agents. A set of topologically distinct candidate trajectories for each person is computed using trajectory optimization techniques \citep{rosmann2015timed}. Among those trajectories, the best candidate is chosen according to a metric that includes group integrity, right versus left motion bias and curvature constraints. Finally, the encounter is resolved jointly in an iterative fashion.
The interaction point of minimal spatial separation is computed between each pair of people, who adjust their trajectories accordingly, possibly switching to a different topological candidate. \cite{mavrogiannis2016decentralized} represent multi-agent interaction through the use of braid groups (topological patterns), which formalize sets of trajectories. At inference time, the problem of predicting joint trajectories is posed as a graph search in a permutation graph. Joint planning for the robot and the human is addressed by several works \citep{bandyopadhyay2013intention,galceran2015multipolicy,chen2017decentralized}. Assuming the availability of a fixed set of goals, \cite{bandyopadhyay2013intention} solve an optimal motion problem for each of them and generate appropriate motion policies. The latter are used to estimate the future evolution of the joint state-space of the robot and the human. \cite{galceran2015multipolicy} introduce a multi-policy decision-making system to generate robot motions based on the predicted movements of other agents in the scene, estimated with a changepoint-based technique \citep{fearnhead2007line}. Likely future actions are sampled from the policies. The final prediction is generated by an exhaustive search of closed-loop forward simulations of these samples. The approach is well suited for predicting future macro-actions (e.g. turn left or right, slow down or speed up). \cite{bahram2016game} generate joint robot and agent motions using a sequential game theory technique. The approach presents an interactive prediction and planning loop where a sequence of predictions (i.e. motion primitives) is generated for the ego-vehicle by considering the sequential evolution of the entire scene. \cite{chen2017decentralized} develop a de-centralized multi-agent collision avoidance algorithm, which resolves local interactions with a learned joint value function that implicitly encodes cooperative behaviors.
\subsection{Inverse Planning Approaches} \label{subsubsection:learningplanning} Forward planning approaches, discussed so far, make an explicit assumption about the optimality criteria (reward or cost function) of an agent's motion. In this section we discuss algorithms that estimate the reward function of agents (or directly a policy) from observations, using statistical and imitation learning techniques (for a survey on imitation learning techniques applied to robotic systems we refer the reader to \citep{osa2018algorithmic}). Inverse planning methods assume that the reward or cost function, which depends on contextual and social features and defines the rational behavior, can be learned from observations (see Fig.~\ref{fig:pictograms-planning-based} (b)). \subsubsection{Single-agent inverse planning} In their influential work, \cite{ziebart2009planning} propose to learn a reward function yielding goal-directed behavior of pedestrians using maximum entropy inverse optimal control (MaxEnt IOC). Humans are assumed to be near-optimal decision makers with stochastic policies, learned from observations, which are used to predict motion as a probability distribution over trajectories. Building upon \citep{ziebart2009planning}, \cite{kitani2012activity} expand it to include a labeled semantic map of the environment. An IOC method takes the semantic map as input and learns a feature-based cost function that captures agents' preferences, e.g. for walking on the sidewalk or keeping some distance from parked cars. \cite{previtaliICMLA2016} propose an approach that adopts a linear programming formulation of IRL. Using a discrete and non-uniform representation of the 2D workspace, it scales linearly with respect to the size of the environment. \cite{chung2010mobile} present an MDP-based model that describes spatial effects between agents and the environment.
The authors use IRL to estimate the cost of each state as a linear combination of trajectory length, static and dynamic obstacle avoidance and steering smoothness. Special context-based spatial effects (SSE) are identified by comparing the costs of the states, learned with IRL, and the actual observed trajectories. A follow-up work \citep{chung2012incremental} introduces a feature-based representation of SSEs, which can be modeled before being naturally observed, as in \citep{chung2010mobile}. Instead of IRL, other works use different techniques to learn the reward function \citep{rehder2017pedestrian,huang2016deep}. \cite{rehder2017pedestrian} solve the problem of intention recognition and trajectory prediction in one single Artificial Neural Network (ANN). The destinations and costly areas are predicted from stereo images using a recurrent Mixture Density Network (RMDN). Planning towards these destinations is performed using fully Convolutional Neural Networks (CNN). Two different architectures for planning are proposed: an MDP network and a forward-backward network, both using contextual features of the environment. \cite{huang2016deep} propose an approach that exploits two CNNs to learn a reward function considering spatial and temporal contextual information from video sequences. A Spatial Matching Network (SMN) learns the spatial context of human motion. An Orientation Network (ON) is used to model the position variation of the object. The Dijkstra algorithm is used to find the minimum-cost solution over a graph whose edge weights are set by considering the reward function and the facing orientation computed by the two networks (SMN and ON). The methods detailed above show that IRL and similar techniques provide powerful tools to learn human behaviors. Furthermore, \cite{shenTransferable2018} show that under some particular requirements (i.e.
when the feature vector, model parameters and output representation are invariant under a rigid body transformation of the world fixed coordinate frame), IRL is suitable for learning location-independent transferable motion models. \subsubsection{Imitation learning} Instead of first learning a reward function and then applying planning techniques to generate motion predictions, imitation learning approaches directly extract a policy from the data. The Generative Adversarial Imitation Learning (GAIL) approach, proposed by \cite{ho2016generative}, aims at matching long-term distributions over states and actions. It uses a GAN-based \citep{goodfellow2014generative} optimization procedure, in which a discriminator tries to distinguish expert observations from ones generated by model rollouts. Afterwards, a model is trained to make predictions that yield similar long-term distributions over states and actions. This method has been successfully applied to learning human highway driving behavior \citep{kuefler2017imitating} and training joint pedestrian motion models \citep{Gupta2018SocialGAN}. \cite{li2017inferring} extend GAIL by introducing a component in the loss function which maximizes the mutual information between the latent structure and the observed trajectories. They test their approach in a simulated highway driving scenario, predicting the driver's actions given an input image and auxiliary information (e.g. velocity, last actions, damage), and show that it is able to imitate human driving, while automatically distinguishing between different types of behaviors. Differently from GAIL, the deep generative technique by \cite{Rhinehart_2018_ECCV} adopts a fully differentiable model, which is easy to train without the need for an expensive policy gradient search.
By minimizing a symmetrized cross-entropy between the distributions of the policy and of the demonstration data, the method learns a policy that generates predictions which balance precision (i.e. avoiding obstacle areas) and diversity (i.e. being multi-modal). \subsubsection{Multi-agent inverse planning} In the following we review several inverse planning approaches that predict multi-agent motions \citep{kuderer2012feature, kretzschmar2014learning, pfeiffer2016predicting, ma2016forecasting, lee2017desire, fernando2019neighbourhood}. \cite{kuderer2012feature} and \cite{kretzschmar2014learning} propose a continuous formulation of the MaxEnt IOC \citep{ziebart2009planning} by considering a continuous spline-based trajectory representation. Their method relies on several features (e.g. travel time, collision avoidance) to capture physical and topological aspects of the pedestrians' trajectories. \cite{pfeiffer2016predicting} extend the latter works by introducing a variable end-position of each trajectory, thus reasoning over the agents' goals. \cite{walker2014patch} present an unsupervised learning approach for visual scene prediction. The approach exploits mid-level elements (i.e. image patches) as building blocks for jointly predicting positions of agents in the scene and changes in their visual appearance. The learned reward function defines the probability of a patch moving to a different location in the image. To generate predictions, the method performs a Dijkstra search on the learned reward function considering several goals. \cite{ma2016forecasting} combine the Fictitious Play \citep{brown1951iterative} game theory method with deep learning-based visual scene analysis. Future path hypotheses are generated jointly and iteratively: each pedestrian adapts their motion based on the predictions of the other pedestrians' actions.
IRL's reward function features encode social compliance, neighborhood occupancy, distance to the goal and body orientation. Gender and age attributes, extracted with a deep network from video, define the possible average velocity of pedestrians. \cite{lee2017desire} formulate the prediction problem as an optimization task. The method reasons on multi-modal future trajectories accounting for agent interactions, scene semantics and an expected reward function, learned using a sampling-based IRL scheme. The model is wrapped into a single end-to-end trainable RNN encoder-decoder network, called DESIRE. The RNN architecture allows incorporating the past trajectory into the inference process, which improves prediction accuracy compared to standard IRL-based techniques. The previously discussed approaches for joint prediction assume multi-agent settings with rational and cooperative behavior of all agents. Differently, several approaches \citep{henry2010learning,lee2016predicting} address the problem by modeling one target person as a rational agent, acting in a dynamic environment. The influence of other agents then becomes part of the stochastic transition model of the environment. For example, \cite{henry2010learning} propose an IRL-based method for imitating human navigation in crowded environments. They conjecture that humans take into account the density and velocity of nearby people, and learn a reward function that weighs these and additional features. Another approach by \cite{lee2016predicting} learns a reward function that explains the behavior of a wide receiver in American football, whose strategy takes into account the behavior of the defenders. Models of the dynamic environment (e.g. linear or Gaussian Processes) are used as transitions in the IRL framework.
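Many of the inverse planning methods above build on the MaxEnt IOC recursion of \cite{ziebart2009planning}: a "soft" value iteration yields a stochastic policy under which lower-cost trajectories to the goal are exponentially more likely. A minimal sketch for deterministic transitions and a known goal (both simplifying assumptions):

```python
import math

def maxent_policy(states, transitions, reward, goal, iters=100):
    """Backward pass of MaxEnt IOC (in the spirit of Ziebart et al.):
    soft value iteration followed by policy extraction.
    transitions[s]: list of successor states; reward[s]: immediate reward
    (negative cost) of entering s; the goal is absorbing with value 0."""
    V = {s: -1e9 for s in states}
    V[goal] = 0.0
    for _ in range(iters):
        for s in states:
            if s == goal:
                continue
            # Soft Bellman backup: log-sum-exp instead of max over successors.
            vals = [reward[sp] + V[sp] for sp in transitions[s]]
            m = max(vals)
            V[s] = m + math.log(sum(math.exp(v - m) for v in vals))
    # The stochastic policy weights each successor by its exponentiated
    # advantage, so low-cost continuations are exponentially preferred.
    policy = {}
    for s in states:
        if s == goal:
            continue
        w = [math.exp(reward[sp] + V[sp] - V[s]) for sp in transitions[s]]
        z = sum(w)
        policy[s] = {sp: wi / z for sp, wi in zip(transitions[s], w)}
    return V, policy
```

In the learning loop, the reward is a weighted feature sum whose weights are adjusted until the expected feature counts under this policy match those of the demonstrations; the resulting policy is then forward-simulated to obtain a distribution over future trajectories.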
\cite{Rhinehart_2019_ICCV} develop a multi-agent forecasting model called Estimating Social-forecast Probabilities (ESP) that uses exact likelihood inference (unlike VAEs or GANs), derived from a deep neural network, to forecast trajectories. In contrast to most standard trajectory forecasting methods, the approach is able to reason conditionally on additional information that it was not trained to use by accepting agent goals at test time. The approach uses a generative multi-agent model in order to perform PREdiction Conditioned On Goals (PRECOG). \section{Taxonomy} \label{sec:taxonomy} In this section we describe our taxonomy to decompose the motion prediction problem based on the modeling approach and the type of contextual cues, see Fig.~\ref{fig:taxonomy-overview} for an overview. In Sec.~\ref{sec:taxonomy:modeling_approach} and \ref{sec:taxonomy:cues} we detail the categories and give representative papers as examples of each category, and in Sec.~\ref{sec:taxonomy:classification_rules} we describe the rules for classifying the methods. \begin{figure*}[t] \centering \includegraphics[width=0.95\linewidth,keepaspectratio]{taxonomy-image-modeling-approach.pdf} \caption{Illustration of the basic working principle of the modeling approaches: {\bf (a)} physics-based methods project the motion state of the agent using explicit dynamical models based on Newton's law of motion. {\bf (b)} pattern-based methods learn prototypical trajectories from observed agent behavior to predict future motion. {\bf (c)} planning-based methods include some form of reasoning about the likely goals and compute possible paths to reach those goals. In order to incorporate internal and external stimuli that influence motion behavior, approaches can be extended to account for different contextual cues.
} \label{fig:modelingapproaches} \end{figure*} \subsection{Modeling Approach} \label{sec:taxonomy:modeling_approach} The motion modeling category subdivides the prediction approaches based on how they represent human motion and formulate the causes thereof. {\em Physics-based methods} define an explicit dynamical model based on Newton's law of motion. {\em Pattern-based methods} learn motion patterns from data of observed agent trajectories. {\em Planning-based methods} reason on the motion intent of rational agents (see Fig.~\ref{fig:modelingapproaches}). The categories also differ in the level of cognition typically involved in the prediction process: physics-based methods follow a reactive sense-predict scheme, pattern-based methods follow a sense-learn-predict scheme, and planning-based methods follow a sense-reason-predict scheme in which agents reason about intentions and possible ways to the goal. \begin{enumerate}[label*=\arabic*.] \item{\textbf{Physics-based methods} (Sense -- Predict): motion is predicted by forward simulating a set of explicitly defined dynamics equations that follow a physics-inspired model. Based on the complexity of the model, we recognize the following subclasses: \begin{enumerate}[label*=\arabic*.] \item \textbf{Single-model methods} define a single dynamical motion model, e.g. \citep{elnagarISCIRA2001,zernetsch2016trajectory,Luber2010,coscia2018long,pellegrini2009you,yamaguchiCVPR2011,aoude2010threat,petrich2013map} \item \textbf{Multi-model methods} include a fixed or on-line adaptive set of multiple dynamics models and a mechanism to fuse or select the individual models, e.g. \citep{agamennoni2012estimation,pool2017iv,kooij2018ijcv,kaempchen2004imm,althoff2008reachability,gindele2010probabilistic} \end{enumerate} } \item{\textbf{Pattern-based methods} (Sense -- Learn -- Predict) approximate an arbitrary dynamics function from training data.
These approaches are able to discover statistical behavioral patterns in the observed motion trajectories and are separated into two categories: \begin{enumerate}[label*=\arabic*.] \item \textbf{Sequential methods} learn conditional models over time and recursively apply learned transition functions for inference, e.g. \citep{kruse1998camera,kucner2017enabling,liao2003voronoi,aoude2011mobile,keller2014tits,vemula2017modeling,alahi2016social,GoldhammerICPR2014} \item \textbf{Non-sequential methods} directly model the distribution over full trajectories without temporal factorization of the dynamics, e.g. \citep{bennewitz2005learning,xiao2015unsupervised,keller2014tits,tay2008modelling,trautman2010unfreezing,kafer2010recognition,luberIROS2012} \end{enumerate} } \item{\textbf{Planning-based methods} (Sense -- Reason -- Predict) explicitly reason about the agent's long-term motion goals and compute policies or path hypotheses that enable an agent to reach those goals. We classify the planning-based approaches into two categories: \begin{enumerate}[label*=\arabic*.] \item {\bf Forward planning methods} make an explicit assumption regarding the optimality criteria of an agent's motion, using a pre-defined reward function, e.g. \citep{vasquez2016novel, xieICCV2013, karasev2016intent, yiTRIP2016, Rudenko2017workshop, galceran2015multipolicy, best2015bayesian, bruce2004better, rosmann2017online} \item {\bf Inverse planning methods} estimate the reward function or action model from observed trajectories using statistical learning techniques, e.g. \citep{ziebart2009planning,kitani2012activity,rehder2017pedestrian,kuderer2012feature, pfeiffer2016predicting, chung2012incremental, shenTransferable2018, lee2017desire, walker2014patch, huang2016deep} \end{enumerate} } \end{enumerate} Figure~\ref{fig:paperstatistics} shows the publication trends over recent years, color-coded by modeling approach.
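As a minimal illustration of the physics-based, single-model class, the sketch below forward-simulates a constant-velocity dynamics model; the state layout, time step, and horizon are illustrative assumptions rather than details of any surveyed method:

```python
import numpy as np

def predict_constant_velocity(state, dt=0.1, horizon=20):
    """Forward-simulate a constant-velocity model (sense -- predict).

    state: [x, y, vx, vy], the 2D position and velocity of the agent.
    Returns an array of shape (horizon, 2) with the predicted positions.
    """
    # Linear transition matrix of the constant-velocity dynamics.
    F = np.array([[1.0, 0.0, dt, 0.0],
                  [0.0, 1.0, 0.0, dt],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, 0.0, 1.0]])
    s = np.asarray(state, dtype=float)
    predictions = []
    for _ in range(horizon):
        s = F @ s                      # no learning, no goal reasoning
        predictions.append(s[:2].copy())
    return np.array(predictions)

# A pedestrian at the origin walking at 1 m/s along x is predicted to
# continue in a straight line:
path = predict_constant_velocity([0.0, 0.0, 1.0, 0.0])
```

Multi-model variants keep a bank of such transition matrices (e.g. constant velocity, constant turn) and fuse or select among them, as in the IMM-style methods cited above.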
The number of related works has increased strongly over the last two years, in particular for the pattern-based methods. \subsection{Contextual Cues} \label{sec:taxonomy:cues} We define contextual cues to be all relevant internal and external stimuli that influence motion behavior and categorize them based on their relation to the target agent, other agents in the scene and properties of the static environment, see Fig.~\ref{fig:dynamiccontextualcues} and Fig.~\ref{fig:staticcontextualcues}. \begin{enumerate}[label*=\arabic*.] \item Cues of the \textbf{target agent} include \begin{enumerate}[label*=\arabic*.] \item \textbf{Motion state} (position and possibly velocity), e.g. \citep{ferrer2014behavior, elfring2014learning, pellegrini2009you, kitani2012activity, karasev2016intent, ziebart2009planning, kooij2018ijcv, trautman2010unfreezing, kuderer2012feature, bennewitz2005learning, kucner2017enabling,bera2016glmp} \item \textbf{Articulated pose} such as head orientation \citep{unhelkar2015human,kooij2014eccv,kooij2018ijcv,roth2016iv,hasan2018mx} or full-body pose \citep{quintero2014pedestrian,minguez2018pedestrian} \item \textbf{Semantic attributes} such as age and gender \citep{ma2016forecasting}, personality \citep{bera2017aggressive}, and awareness of the robot's presence \citep{oli2013human,kooij2018ijcv} \end{enumerate} \item With respect to the \textbf{dynamic environment} we distinguish \begin{enumerate}[label*=\arabic*.] \item \textbf{Unaware methods}, which compute motion predictions for the target agent not considering the presence of other agents, e.g. \citep{zhu1991hidden,elnagarTSMC1998,elnagarISCIRA2001,bennewitz2005learning,thompson2009probabilistic,kim2011gaussian,wang2016building,kucner2013conditional} \item \textbf{Individual-aware methods}, which account for the presence of other agents, e.g.
\citep{Luber2010,elfring2014learning,ferrer2014behavior,kooij2018ijcv,trautman2010unfreezing,vemula2017modeling,kuderer2012feature,alahi2016social} \item \textbf{Group-aware methods}, which account for the presence of other agents as well as social grouping information. This makes it possible to consider agents in groups, formations, or convoys, which move differently than independent agents, e.g. \citep{yamaguchiCVPR2011,pellegrini2010improving,robicquet2016learning,singh2009modelling,qiu2010modeling,karamouzas2012simulating,seitz2014pedestrian} \end{enumerate} \item With respect to the \textbf{static environment} we distinguish \begin{enumerate}[label*=\arabic*.] \item \textbf{Unaware methods}, which assume an open-space environment, e.g. \citep{Foka2010, schneider2013gcpr, kruse1998camera, bennewitz2002using, ellis2009modelling,jacobsRAL2017, vasquez2008intentional, unhelkar2015human, ferguson2015real, luberIROS2012} \item \textbf{Obstacle-aware methods}, which account for the presence of individual static obstacles, e.g. \citep{rehder2015goal, trautman2010unfreezing, bera2016glmp, althoff2008stochastic, vemula2017modeling, alahi2016social, elfring2014learning, ferrer2014behavior} \item \textbf{Map-aware methods}, which account for environment geometry and topology, e.g. \citep{ziebart2009planning, vasquez2016novel, pfeiffer2016predicting, chen2017decentralized, pool2017iv, Rudenko2017workshop, Rudenko2018iros, kooij2018ijcv, henry2010learning, ikeda2013modeling, liao2003voronoi, chung2010mobile, yen2008goal, chung2012incremental, gong2011multi, rosmann2017online} \item \textbf{Semantics-aware methods}, which additionally account for environment semantics or affordances such as no-go-zones, crosswalks, sidewalks, or traffic lights, e.g. \citep{karasev2016intent, kitani2012activity, ballan2016knowledge, ma2016forecasting, zheng2016generating, rehder2017pedestrian, coscia2018long, lee2017desire, kuhnt2016understanding} \end{enumerate} \end{enumerate} \begin{figure}[t!]
\centering \includegraphics[width=0.99\columnwidth]{taxonomy-image-cues-dynamic.pdf} \caption{Dynamic environment cues: {\bf (a)} unaware, {\bf (b)} individual-aware, {\bf (c)} group-aware (accounting for social grouping cues, in green).} \label{fig:dynamiccontextualcues} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.72\columnwidth]{taxonomy-image-cues-static.pdf} \caption{Static environment cues: {\bf (a)} unaware (ignoring any static objects, dashed line), {\bf (b)} obstacle-aware (accounting for unmodeled obstacles, dotted line), {\bf (c)} map-aware (accounting for a topometric environment model avoiding local minima, solid line), {\bf (d)} semantics-aware (solid line).} \label{fig:staticcontextualcues} \end{figure} In the following Sections \ref{sec:posterior_distribution:physics_based},~\ref{sec:posterior_distribution:motion_patterns} and \ref{sec:posterior_distribution:planning_based} we survey the different classes of the motion model category. We detail the contextual cue categories in Section \ref{sec:classification_contextualcues}. In each section we discuss methods in the order of increasing complexity, considering inheritance of ideas and grouped by the similarity of the motion modeling techniques. \subsection{Classification Rules} \label{sec:taxonomy:classification_rules} Some of the surveyed papers may not fall unambiguously into a single class of our taxonomy, especially those using a mixture of different approaches, e.g. the work by \cite{bennewitz2005learning} which combines a non-sequential clustering approach with sequential HMM inference. For those borderline cases, we adopt the following rules: \\ \emph{i)} We classify methods primarily by the category that best describes the modelling approach rather than the inference method, e.g. for \citep{bennewitz2005learning} we give more weight to the clustering technique used for modelling the usual human motion behavior.
\\ \emph{ii)} Some approaches add sub-components from other categories in their main modeling approach, e.g. planning-based approaches using physics-based transition functions \citep{van2008interactive, Rudenko2018icra}, physics-based methods tuned with learned parameters \citep{ferrer2014behavior}, planning-based approaches using inverse reinforcement learning to recover the hidden reward function of human behaviors \citep{ziebart2009planning, kitani2012activity}. We classify such approaches based on their main modeling method. \\ \emph{iii)} Methods that use behavior cloning (imitation of human behaviors with supervised learning techniques), i.e. learn/recover the motion model directly from data, are classified as pattern-based approaches \citep{schmerling2017multimodal, zheng2016generating}. In contrast, imitation learning techniques that reason on policies (e.g. using generative adversarial imitation learning \citep{li2017inferring}) are classified as planning-based methods. Furthermore, each work is categorized into three contextual-cue classes, one each with respect to the target agent, the dynamic environment, and the static environment.
\section{INTRODUCTION} \label{sec:intro} Type Ia supernova (SN Ia) research has advanced in recent years due to new observations and theoretical models. At present there is no consensus on the scenarios that bring white dwarfs (WDs) to experience thermonuclear explosions as SNe Ia (for recent reviews see \citealt{Maozetal2014, LivioMazzali2018, Wang2018, RuizLapuente2019}, in particular \citealt{Soker2018Rev} for a table comparing the five scenarios). For that I list (in alphabetical order) \textit{all} binary scenarios and emphasise the differences between them. (1) In the \textit{core-degenerate (CD) scenario} a CO WD companion merges with the CO (or possibly HeCO) core of a massive asymptotic giant branch (AGB) star during a common envelope evolution (CEE). The CD scenario is a separate scenario (e.g., \citealt{Kashi2011, Ilkov2013, AznarSiguanetal2015}) because (a) at explosion there is one star, (b) it leaves no remnant, and (c) the delay time from CEE to explosion is set by the evolution of a single WD remnant of the merger. (2) In the \textit{double degenerate (DD) scenario} two WDs merge (e.g., \citealt{Webbink1984, Iben1984}), most likely in a violent process (e.g., \citealt{Pakmoretal2011, Liuetal2016}) a long time after the CEE. One major open parameter in the DD scenario is the time delay from merger to explosion (merger to explosion delay, or MED; e.g., \citealt{LorenAguilar2009, vanKerkwijk2010, Pakmor2013, Levanonetal2015}). This is a separate scenario because (a) at explosion or shortly before explosion there are two WDs, (b) the explosion leaves no remnant, and (c) the delay time from CEE to explosion is set mainly by gravitational wave radiation of the two WDs. Note that the two WDs need not be CO WDs, e.g., one of them might be a helium WD or a HeCO hybrid WD (e.g., \citealt{YungelsonKuranov2017, Zenatietal2019}).
(3) In the \textit{double-detonation (DDet) mechanism} the companion transfers mass to a CO WD, and the detonation of the helium-rich layer that was accreted from the companion ignites the CO WD (e.g., \citealt{Woosley1994, Livne1995, Shenetal2018}). This is a separate scenario because (a) there are two stars at explosion, where only one of them explodes, and (b) it leaves a surviving star, either an evolved helium star or a WD. Although there might be two WDs at explosion, because one WD survives the explosion this scenario is different from the DD scenario. In many channels of the DDet scenario the system experiences a CEE to bring the helium-rich companion and the CO WD closer together. (4) In the \textit{single degenerate (SD) scenario} a WD accretes hydrogen-rich material from a non-degenerate companion. The WD reaches close to the Chandrasekhar mass limit ($M_{\rm Ch}$), and explodes (e.g., \citealt{Whelan1973, Han2004, Wangetal2009}), either as soon as it reaches this mass or much later, after it loses some of its angular momentum (e.g., \citealt{Piersantietal2003, DiStefanoetal2011, Justham2011}). This scenario differs from the other scenarios in that the WD reaches a mass of $\simeq M_{\rm Ch}$ by accreting hydrogen-rich gas. If the explosion takes place after a long delay, it might leave behind a subdwarf B star or a WD. In the CEE-wind SD scenario that \cite{MengPodsiadlowski2017} suggested, the explosion might occur shortly after a CEE if the WD is a hybrid CONe WD \citep{MengPodsiadlowski2018}. In this scenario, which \cite{MengPodsiadlowski2018} predict to account for $\approx 5-10 \%$ of all SNe Ia, the WD accretes hydrogen-rich material from the envelope of a giant star while it spirals in and ejects the envelope. This channel of the SD scenario is relevant to the present study.
(5) The \textit{WD-WD collision (WWC) scenario} involves two WDs colliding with each other at about their free-fall velocity (e.g., \citealt{Raskinetal2009, Rosswogetal2009, Kushniretal2013, AznarSiguanetal2014}). \cite{Toonenetal2018} conduct a thorough population synthesis study and conclude, as some earlier studies did, that the SN Ia rate from the WWC scenario might be of the order of $0.1 \%$ of all SNe Ia. Follow-up studies reach the same qualitative conclusion (e.g., \citealt{HallakounMaoz2019, HamersThompson2019}). As this scenario seems incapable of explaining even a small fraction of SNe Ia, and it does not need the CEE, I will not consider it further in the present study. A different classification, more relevant to the explosion mechanism and the nucleosynthesis yield, is into WDs that explode with masses near the Chandrasekhar mass limit, `$M_{\rm Ch}$ explosions', and WDs that explode with masses below that mass, `sub-$M_{\rm Ch}$ explosions' (e.g., \citealt{Maguireetal2018}). Crudely, the DD, DDet, and WWC scenarios belong to the sub-$M_{\rm Ch}$ explosions while the CD and the SD scenarios belong to the $M_{\rm Ch}$ explosions. As there are indications for $M_{\rm Ch}$ explosions (e.g., \citealt{Ashalletal2018, Dhawanetal2018, Diamondetal2018}) there is room to consider the CD and the SD scenarios. In earlier papers I already noted the severe difficulties with the SD scenario (e.g., \citealt{Soker2018Rev}). There are also strong indications from the behaviour of SNe Ia for the presence of sub-$M_{\rm Ch}$ explosions (e.g. \citealt{Scalzo2014b, Blondin2017, Goldstein2018, Wygoda2019a, LevanonSoker2019, KonyvesTothetal2019}). The DD scenario is a promising sub-$M_{\rm Ch}$ scenario (e.g., \citealt{Soker2018Rev} for a review), possibly the hybrid DD scenario where there is one HeCO WD (e.g., \citealt{YungelsonKuranov2017, Zenatietal2019}).
In many cases more than one scenario can account for a specific observation, and so it is mandatory to consider all relevant scenarios. For example, the presence of circumstellar matter (CSM) is expected in some SNe Ia of all scenarios, besides the WWC scenario. A massive CSM with hydrogen is expected only in the CD scenario, in the DD scenario, and in the CEE-wind channel of the SD scenario. In the DD scenario this is the case if the two WDs merge shortly after the CEE. For example, the CSM of SN Ia PTF11kx \citep{Dildayetal2012} is too massive for most channels of the SD scenario, and requires the CEE-wind channel of the SD scenario \citep{MengPodsiadlowski2017}, the CD scenario \citep{Sokeretal2013}, or, with some fine tuning, the DD scenario. In the present study I consider the common envelope to explosion delay (CEED) time of the CD and DD scenarios, some channels of the DDet scenario (those that experience the CEE), and the CEE-wind channel of the SD scenario. In section \ref{sec:delay} I define the CEED time in relation to the delay time distribution (DTD) from star formation to explosion, and the merger/accretion to explosion delay (MED) time. In section \ref{sec:Estimating CEED} I derive a crude expression for the CEED time distribution (CEEDTD) and discuss its implications, and in section \ref{sec:summary} I summarise this short study. \section{The delay times} \label{sec:delay} \subsection{Delay time distribution (DTD)} \label{subsec:delay} The DTD is the distribution of the delay time from star formation to the actual SN Ia explosion, $t_{\rm SF-E}$. Different studies with different techniques (see, e.g., \citealt{Heringeretal2019}) have deduced somewhat different expressions for the DTD from observations (e.g., \citealt{Grauretal2014, Heringeretal2017, MaozGraur2017}).
The two recent studies of \cite{FriedmannMaoz2018} for the rate of SNe Ia in galaxy clusters and that of \cite{Heringeretal2019} for field galaxies derive very similar parameters in the expression for the DTD \begin{equation} \dot N_{\rm DTD} \equiv \left( \frac {d N_{\rm Ia}}{dt} \right)_{\rm DTD} = A \left( \frac{t}{1 {~\rm Gyr}} \right)^{\alpha}. \label{eq:dotN} \end{equation} \cite{FriedmannMaoz2018} derive $A = 5-8 \times 10^{-13} M^{-1}_\odot {~\rm yr}^{-1}$ and $\alpha=-1.30^{+0.23}_{-0.16}$, while \cite{Heringeretal2019} derive $A = 7\pm 2 \times 10^{-13} M^{-1}_\odot {~\rm yr}^{-1}$ and $\alpha=-1.34^{+0.19}_{-0.17}$. I will use these results in what follows (but I note recent different results, e.g., \citealt{Frohmaieretal2019}). Some studies compare this derived DTD to the spiralling-in time due to gravitational wave emission of two WDs in the frame of the DD scenario, $t_{\rm GW}$. But it is important to remember that there are actually two other evolutionary phases that add up to yield the total delay time from star formation to explosion in the DD scenario, $t_{\rm SF-E}({\rm DD})$. These are the times from star formation to the formation of the two WDs in the post-CEE phase, $t_{\rm SF-CE}$, and the time from the merger of the two WDs to explosion, the MED time $t_{\rm MED}$ (section \ref{subsec:MED}). Namely, \begin{equation} t_{\rm SF-E}({\rm DD}) = t_{\rm SF-CE} + t_{\rm CEED} = t_{\rm SF-CE} + t_{\rm GW} + t_{\rm MED} , \label{eq:DDtSFE} \end{equation} where $t_{\rm CEED}$ is the time from the end of the CEE to explosion. If both $t_{\rm SF-CE} \ll t_{\rm GW}$ and $t_{\rm MED} \ll t_{\rm GW}$ then the assumption $t_{\rm SF-E}({\rm DD}) \simeq t_{\rm GW}$ holds. In the CD scenario the WDs merge during the CEE, and so \begin{equation} t_{\rm SF-E}({\rm CD}) = t_{\rm SF-CE} + t_{\rm CEED} = t_{\rm SF-CE} + t_{\rm MED} . 
\label{eq:CDtSFE} \end{equation} I discuss the MED time in section \ref{subsec:MED} and the CEED time in section \ref{subsec:CEED time}. \cite{FriedmannMaoz2018} fit their DTD down to a delay time of $t_{\rm SF-E} = 1.5 {~\rm Gyr}$ and consider SNe Ia to occur from $t_{\rm SF-E} = 0.04 {~\rm Gyr}$ to present $t_{\rm SF-E} = 13.7 {~\rm Gyr}$. They find a production efficiency (defined as Hubble-time-integrated SN Ia number per formed stellar mass) of $n_{\rm Ia}\simeq 0.003-0.008 M^{-1}_\odot$. \cite{Heringeretal2019} consider SNe Ia to occur in the time interval from $t_{\rm SF-E} = 0.1 {~\rm Gyr}$ to $t_{\rm SF-E} = 13.7 {~\rm Gyr}$ and find $n_{\rm Ia} \simeq 0.003-0.006 M^{-1}_\odot$. As for the slope of the DTD, \cite{Heringeretal2019} note that a slope of $\alpha \simeq -1.35$ falls between the expected values for the DD and DDet scenarios (e.g., \citealt{Ruiteretal2011}). \cite{Neunteufeletal2019} argue that the DDet scenario with a non-degenerate helium donor can account for no more than a few percent of all SNe Ia. Indeed, \cite{Ruiteretal2011} find in their population synthesis study that most of their DDet SNe Ia come from WD donors. These systems experience a CEE phase, and are relevant to the present study. \subsection{Merger to explosion delay (MED) time} \label{subsec:MED} In an earlier study \citep{Soker2018Rev} I argued that in a large fraction of SNe Ia there must be a substantial time delay between the end of the merger of the WD with a companion, or the end of mass accretion on to the WD, and the terminal explosion of the WD as a SN Ia. Several observations suggest the existence of a merger/accretion to explosion delay (MED) time, $t_{\rm MED}$. I give here a brief summary before I introduce the motivation for my definition of the CEED time (section \ref{subsec:CEED time}).
(1) If the explosion of the two WDs in the DD scenario occurs as they dynamically interact, then the explosion is asymmetrical (e.g., \citealt{Kashyapetal2017, Pakmor2012, Tanikawaetal2015, vanRossumetal2016}), contradicting the structure of most SN Ia remnants (SNRs Ia), which tend to be spherical or axisymmetrical (e.g., \citealt{Lopezetal2011}). In that respect I note that a surviving WD companion in the DDet scenario also leads to a SNR Ia that possesses non-spherical morphological features (e.g., \citealt{Papishetal2015, Tanikawaetal2018, Tanikawaetal2019}). (2) Several SNe~Ia show early ($\la 5 {~\rm days}$) excess emission in their light curve (e.g., \citealt{Marionetal2016, Hosseinzadehetal2017, Shappee2019, Dimitriadisetal2019a, Jiang2018}). According to the SD scenario such an emission is expected in most SNe Ia (e.g. \citealt{Kasen2010}). However, such emission is also possible in the DD scenario, as the ejecta collide with disk-originated matter (DOM; \citealt{Levanonetal2015, LevanonSoker2017, LevanonSoker2019}). \cite{Levanonetal2015} argued that in the frame of the DD scenario the presence of an early excess emission in only a small fraction of SNe Ia implies that in most cases the MED should be longer than tens of years to allow the DOM to disperse. (3) Another limit concerns the ionising radiation tens of thousands of years before the explosion of some SNe Ia, e.g., Tycho's SN. \cite{Woodsetal2017} find that the medium around Tycho's SNR is not ionised, and hence was not ionised before the explosion. \cite{Woodsetal2018}, for Galactic SNRs, and \cite{Kuuttilaetal2019}, for Large Magellanic Cloud SNRs, constrain the pre-explosion ionisation of more SNe Ia. These studies put limits on the SD scenario. These results also put some limits on the ionisation from the merging WDs in the DD scenario, as merging WDs might emit strong UV radiation (e.g., \citealt{TornambPiersanti2013}).
Overall, I estimated \citep{Soker2018Rev} that the MED time of the DD scenario should in many cases be $t_{\rm MED}({\rm DD}) \ga 10^5 {~\rm yr}$, while in the SD scenario there are cases where $t_{\rm MED}({\rm SD}) \ga 10^7 {~\rm yr}$. The DDet scenario with a WD companion and the WWC scenario allow for no MED time, and this is one of the problems of these scenarios \citep{Soker2018Rev}. In the CD scenario the MED time is built into the scenario, hence it is one of its advantages. In those scenarios where the binary system experiences a CEE, the time from the end of the CEE to explosion, $t_{\rm CEED}$, includes the MED time (section \ref{subsec:delay}). In the DD scenario the MED time is a small fraction of $t_{\rm CEED}$, $t_{\rm MED}({\rm DD}) \ll t_{\rm CEED}$, while in the CD scenario $t_{\rm MED}({\rm CD})=t_{\rm CEED}$, and its value might be up to billions of years if the CD scenario can allow for a long delay time (e.g., \citealt{Ilkov2012}). \subsection{Common envelope to explosion delay (CEED) time} \label{subsec:CEED time} The following considerations motivate me to define the CEED time and study the CEEDTD. (1) The ejecta of the Kepler SNR Ia interact with a CSM (e.g., \citealt{Sankritetal2016}). The non-detection of a giant star or a post-giant star (e.g., \citealt{Kerzendorfetal2014, Medanetal2017}) suggests that the CSM was ejected during a CEE in the frame of either the CD scenario, the DD scenario, the CEE-wind channel of the SD scenario, or the DDet scenario. In the CEE-wind channel of the SD scenario for Kepler the remnant is a subdwarf B (sdB) star that is below observational limits \citep{MengLi2019}, while in the DDet scenario the remnant is a WD that might also be below observational limits and far from the center of the Kepler SNR. (2) The mass of the CSM in the SN Ia-CSM PTF11kx seems to be too large for the SD scenario with a giant donor \citep{Sokeretal2013}, and better fits mass ejection in a CEE.
But I do note that \cite{MengPodsiadlowski2018} claim that their suggested CEE-wind channel of the SD scenario can account for a more massive CSM, such as that in PTF11kx. (3) In the DD scenario and in the DDet scenario with a WD donor the CEE forms the initial setting of two WDs. In the CD scenario the CEE forms the single WD merger product of the core and the WD companion. In the CEE-wind channel of the SD scenario the CEE ensures the right conditions to bring the WD to explode \citep{MengPodsiadlowski2017}. These considerations suggest that an important evolutionary time is the time from the end of the CEE to the explosion itself, i.e., $t_{\rm CEED}$. (4) The recent derivations of parameters for the DTD \citep{FriedmannMaoz2018, Heringeretal2019} and the estimate of the fraction of SNe Ia-CSM \citep{Grahametal2019} allow an attempt to connect the very short post-CEE time with times of $>1 {~\rm Gyr}$. I now attempt such a derivation. \section{Estimating the CEED time distribution (CEEDTD)} \label{sec:Estimating CEED} \subsection{SN Ia rates from observations} \label{subsec:Rate} I turn to derive the CEEDTD in relative numbers, i.e., relative to the total number of SNe Ia. Unlike an expression for the CEEDTD in absolute values, i.e. per stellar mass, the relative CEEDTD is less sensitive to the uncertainties in the observationally derived absolute total number of SNe Ia per stellar mass and to new values of this absolute number that future observations might deduce. To crudely derive the relative CEEDTD I use the following expressions. (1) \textit{Very long DTD.} I take equation (\ref{eq:dotN}) and substitute numbers from \cite{Heringeretal2019} and \cite{FriedmannMaoz2018} (see discussion following equation \ref{eq:dotN}).
This equation now reads \begin{equation} \dot N_{\rm DTD} = 0.19 N_{\rm Ia} F_1(t_{\rm i}) \left( \frac{t }{1 {~\rm Gyr}} \right)^{-1.32} {~\rm Gyr}^{-1}, \label{eq:dotN3} \end{equation} where $t_{\rm i}$ is the first time after star formation when SNe Ia occur, $F_1(t_{\rm i})=1.68 [ (t_{\rm i}/{~\rm Gyr})^{-0.32} - 13.7^{-0.32}]^{-1}$, and with an uncertainty of $\alpha \simeq -1.32 \pm 0.2$. The scaling of $F_1(t_{\rm i})$ is such that $F_1(0.1 {~\rm Gyr})=1$. Integrating the rate in equation (\ref{eq:dotN3}) from $t_{\rm i}$ to $t=13.7 {~\rm Gyr}$ gives a total SNe Ia number of $N_{\rm Ia}$. For $t_{\rm i}=0.1 {~\rm Gyr}$, for example, the maximum rate (at $t=t_{\rm i}$) is $\dot N_{\rm DTD} = 4 N_{\rm Ia} {~\rm Gyr}^{-1}$, while for $t_{\rm i}=0.04 {~\rm Gyr}$ the maximum rate is $\dot N_{\rm DTD}= 9.5 N_{\rm Ia} {~\rm Gyr}^{-1}$. (2) \textit{SNe Ia inside planetary nebulae (SNIP).} \cite{TsebrenkoSoker2015a} estimated that the fraction of SNe Ia that explode within a CSM, i.e., a planetary nebula or a remnant of a planetary nebula, is at least $\simeq 20 \pm 10 \%$ of all SNe Ia. These are termed SNIPs, including SNe Ia that explode inside proto-planetary nebulae \citep{Cikotaetal2017}. \cite{TsebrenkoSoker2015a} assumed that the dispersion time of the planetary nebulae is $t_{\rm SNIP} \approx 10^5 {~\rm yr}$, but it might be as long as $\approx 10^6 {~\rm yr}$. I take here the dispersion time to be $t_{\rm SNIP} \approx 3 \times 10^5 {~\rm yr}$. For example, for an expansion velocity of $10 {~\rm km} {~\rm s}^{-1}$ and an ejecta velocity of $10^4 {~\rm km} {~\rm s}^{-1}$ the ejecta will interact with the CSM at a SN age of $\simeq 300 {~\rm yr}$. As an indication for the presence of a CSM, \cite{TsebrenkoSoker2015a} took the presence of two opposite protrusions, termed `Ears', in the SNR (see also \citealt{Chiotellisetal2016}). They find that, out of their 13 SNRs Ia, two possess ears and four possibly possess ears.
From this they estimated that $\simeq 15-45 \%$ of the SNRs Ia are SNIPs. However, the SNR Ia N103B, which they did not list as a SNIP, does interact with a CSM (e.g., \citealt{Williamsetal2018}). If I take the two SNRs that are known to interact with a CSM from the list of 13 SNRs, Kepler and N103B, I find the fraction of SNIPs to be $\approx 15 \%$. I note that in principle there are two other possibilities for the formation of SNRs with two opposite protrusions. The first one is the formation of a CSM by a giant companion to the WD in the frame of the SD scenario, without going through the planetary nebula phase. This cannot work for the Kepler SNR as there is no giant companion there (e.g., \citealt{Kerzendorfetal2014, Medanetal2017}). The second possibility is that there is an ISM close to the SN, and the interaction of an axisymmetrical explosion with two opposite clumps (jets) forms the two protrusions \citep{TsebrenkoSoker2013}. It is not clear if a regular ISM can form such a massive CSM. Given the above reservations, I consider in the present study only the CEE channel to form the dense CSM. Overall, I take for the SNIP fraction out of all SNe Ia and for the planetary nebula dispersion time $f_{\rm SNIP} \simeq 15-20 \%$ and $t_{\rm SNIP} \simeq 3 \times 10^5 {~\rm yr}$, respectively, from which I estimate the average SN Ia rate in the time interval $0< t_{\rm CEED} < 3 \times 10^5 {~\rm yr}$ to be \begin{equation} {\overline {\dot N}}_{\rm SNIP} = \frac{f_{\rm SNIP} N_{\rm Ia}}{t_{\rm SNIP}} \approx (100-1000) N_{\rm Ia} {~\rm Gyr}^{-1}. \label{eq:SNIPrate} \end{equation} This rate is much larger than the rate that equation (\ref{eq:dotN3}) gives for $0< t_{\rm CEED} < 3 \times 10^5 {~\rm yr}$. For example, if the time from star formation to CEE is $0.1 {~\rm Gyr}$ then the time in equation (\ref{eq:dotN3}) is $t=0.1 {~\rm Gyr} + t_{\rm CEED}$.
I conclude, therefore, that equation (\ref{eq:dotN3}) cannot be used as is to give the SN Ia rate at short times after the CEE. (3) \textit{SNe Ia-CSM.} There are SNe Ia that show signatures of interaction with a CSM within months after explosion, e.g., PTF11kx \citep{Dildayetal2012} and SN~2015cp \citep{Grahametal2019}. Such SNe Ia-CSM are very rare (e.g., \citealt{Szalaietal2019}). From their detection of CSM interaction 686 days after explosion, \cite{Grahametal2019} determine the maximum inner radius of the CSM to be $R_{\rm CSM} \la 10^{17} {~\rm cm}$. For a CSM expansion velocity of $10 {~\rm km} {~\rm s}^{-1}$, the time from the end of the CEE (assuming the CSM was formed in a CEE) to explosion is $t_{\rm CSM} \la 3000 {~\rm yr}$. \cite{Grahametal2019} further estimate that the fraction of SNe Ia-CSM is $f_{\rm CSM} < 0.06$ of all SNe Ia. I crudely estimate the SN explosion rate within the CSM interaction time by taking $t_{\rm CSM} \approx 1000 - 3000 {~\rm yr}$ and $f_{\rm CSM} \approx 0.03-0.05$. This gives for the average SN Ia rate at $t_{\rm CEED}=t_{\rm CSM} \approx 1000 - 3000 {~\rm yr}$ \begin{equation} {\overline {\dot N}}_{\rm CSM} = \frac{f_{\rm CSM} N_{\rm Ia}}{t_{\rm CSM}} \approx (10^4-5 \times 10^4 ) N_{\rm Ia} {~\rm Gyr}^{-1}. \label{eq:CSMrate} \end{equation} I note that the SN Ia fraction $f_{\rm SNIP} \simeq 0.15-0.2$ includes the fraction $f_{\rm CSM}\la 0.06$. Namely, the fraction of SNe Ia that explode inside extended planetary nebulae but show no interaction within a few years of explosion is $f_{\rm SNIP}-f_{\rm CSM} \approx 0.1-0.2$. \subsection{A crude plausible short CEED time distribution} \label{subsubsec:CEED} Equations (\ref{eq:dotN3}), (\ref{eq:SNIPrate}), and (\ref{eq:CSMrate}) show that the SN Ia rates at short times after the CEE, i.e., those of the SNe Ia-CSM and the SNIPs, require a different expression for their rate, and that the time from the CEE to explosion, $t_{\rm CEED}$, is a better measure than the time from star formation.
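Equation (\ref{eq:CSMrate}) is likewise a simple ratio; a minimal check, with the fraction and time ranges as adopted above:

```python
GYR_IN_YR = 1e9

def mean_rate(f, t_yr):
    """Average rate f * N_Ia / t, with t in years; result in N_Ia per Gyr."""
    return f / (t_yr / GYR_IN_YR)

# Extremes of the adopted ranges f_CSM ~ 0.03-0.05 and t_CSM ~ 1000-3000 yr:
rate_lo = mean_rate(0.03, 3000)  # smallest fraction, longest interaction time
rate_hi = mean_rate(0.05, 1000)  # largest fraction, shortest interaction time
# rate_lo and rate_hi reproduce the (10^4 - 5e4) N_Ia/Gyr span of eq. (CSMrate)
```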
The two rates of the two populations, SNIPs and SNe Ia-CSM, do not suffice to derive an expression. I make two more assumptions to derive a plausible expression, but it is definitely not a unique expression. It only serves to emphasise some properties of these populations. (1) I assume that the time from the end of the CEE to explosion, $t_{\rm CEED}$, of a specific system is sensitive to a parameter $\aleph$ (pronounced `aleph') according to \begin{equation} t_{\rm CEED} \propto \aleph ^{\eta} \quad \rightarrow \quad \frac{d \aleph}{d t_{\rm CEED}} \propto \left( t_{\rm CEED} \right)^{\eta^{-1}-1} ; \quad \eta \gg 1, \label{eq:tauexp} \end{equation} and that $\aleph$ decreases with time. Let the formation of systems that will explode, such as WD binary systems in the DD scenario or single WDs in the CD scenario, be distributed with only a weak dependence on $\aleph$ at the end of the CEE. Namely, \begin{equation} \left( \frac {dN_{\rm Ia}}{d \aleph} \right)_{t_{\rm CEED}=0} \propto \aleph ^{\epsilon}; \quad -1 \la \epsilon \la 1. \label{eq:npe} \end{equation} From equations (\ref{eq:tauexp}) and (\ref{eq:npe}) one gets \begin{equation} \begin{aligned} \dot N_{\rm Ia} = & \frac{d N_{\rm Ia}}{d \aleph} \frac{d \aleph}{d t_{\rm CEED}} \propto \left( t_{\rm CEED} \right)^{\epsilon/\eta} \left( t_{\rm CEED} \right)^{\eta^{-1}-1} \\ & \simeq \left( t_{\rm CEED} \right)^{-1}, \quad {\rm for} \quad \eta \gg 1. \label{eq:Neexp} \end{aligned} \end{equation} This derivation is not new, e.g., \cite{Greggio2005}. For the DD scenario, for example, the orbital decay is due to gravitational radiation, so the parameter is the orbital separation, i.e., $\aleph \rightarrow a$ with $\eta=4$ and $\epsilon \simeq -1$, and one obtains $d N_{\rm Ia}/d t_{\rm CEED} \propto (t_{\rm CEED})^{-1}$ (e.g., \citealt{Maoz2010}). But for the specific populations I focus on, the parameter might be a different one, e.g., the angular momentum of the WD that was formed by the WD-core merger in the CD scenario.
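The exponent bookkeeping in equations (\ref{eq:tauexp})--(\ref{eq:Neexp}) can be checked in a few lines (a sketch of the arithmetic only):

```python
def rate_exponent(eps, eta):
    """Exponent of t_CEED in dN/dt, combining dN/d(aleph) ∝ t^(eps/eta)
    with d(aleph)/dt ∝ t^(1/eta - 1), as in equations (tauexp)-(Neexp)."""
    return eps / eta + 1.0 / eta - 1.0

# DD-scenario example from the text: aleph -> a, eta = 4, eps = -1,
# giving dN/dt ∝ t^(-1) exactly.
dd_exponent = rate_exponent(-1.0, 4.0)

# For eta >> 1 the exponent approaches -1 for any -1 <= eps <= 1.
large_eta = [rate_exponent(eps, 100.0) for eps in (-1.0, 0.0, 1.0)]
```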
(2) The second assumption I make is that the rate of equation (\ref{eq:Neexp}) is applicable in a relatively short time range of $t_1 \la t_{\rm CEED} \la t_2$, where $t_1 \approx 1000 {~\rm yr}$ and $t_2 \approx 10^6 -10^7 {~\rm yr}$. The upper limit is similar to what \cite{MengPodsiadlowski2018} argue for in the CEE-wind channel of the SD scenario. I can now use the two rates given in equations (\ref{eq:SNIPrate}) and (\ref{eq:CSMrate}), with the above time limits, to write for the rate shortly after the CEE, i.e., for the CEEDTD, \begin{equation} \begin{aligned} & \dot N_{\rm Ia,short} \approx 10^{3.7 \pm 0.2} N_{\rm Ia} \left( \frac{t_{\rm CEED}}{10^4 {~\rm yr}} \right)^{-1} {~\rm Gyr}^{-1} , \\ & {\rm for} \quad 10^3 {~\rm yr} \approx t_1 \le t_{\rm CEED} \le t_2 \approx 3 \times 10^6 {~\rm yr}, \label{eq:NeexpFinal} \end{aligned} \end{equation} with large uncertainties in the time range and in the rate itself. Despite the large uncertainties in expression (\ref{eq:NeexpFinal}), both in its form and in its numerical values, it emphasises two properties of the SN Ia populations that explode shortly, within $\approx 10^3-10^6 {~\rm yr}$, after the CEE, i.e., SNIPs and SNe Ia-CSM. (1) Integrating equation (\ref{eq:NeexpFinal}) over the time span and for the lower-value coefficient $10^{3.7-0.2}$ gives a total SNe Ia population of $N_{\rm Ia,short} \approx 0.03 N_{\rm Ia} \ln (t_2/t_1)$. For $t_2=3000t_1$ this gives $N_{\rm Ia,short} \approx 0.25 N_{\rm Ia}$, and for $t_2=300t_1$ this gives $N_{\rm Ia,short} \approx 0.18 N_{\rm Ia}$. For the upper value of $10^{3.7+0.2}$ the values are $2.5$ times larger. Overall, I find $N_{\rm Ia,short} \approx {\rm few} \times 0.1 N_{\rm Ia}$. (2) Had we continued equation (\ref{eq:dotN}) to times as short as $t=1000 {~\rm yr}$, it would always be larger than the rate given by equation (\ref{eq:NeexpFinal}) for $t \la 1 {~\rm Gyr}$.
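The integrated counts in property (1) follow from $\int_{t_1}^{t_2} A\, (t/10^4 {~\rm yr})^{-1}\, dt = A \times 10^4 {~\rm yr} \times \ln(t_2/t_1)$; a quick reproduction of the quoted values:

```python
import math

def n_short(coeff_log10, t1_yr, t2_yr):
    """Integral of the rate in equation (eq:NeexpFinal), with coefficient
    10**coeff_log10 N_Ia/Gyr, from t1 to t2 (years); result in N_Ia units."""
    return 10.0 ** coeff_log10 * (1e4 / 1e9) * math.log(t2_yr / t1_yr)

n_3000 = n_short(3.5, 1e3, 3e6)  # t2 = 3000 t1 -> ~0.25 N_Ia
n_300 = n_short(3.5, 1e3, 3e5)   # t2 = 300 t1  -> ~0.18 N_Ia
# The upper coefficient 10**3.9 gives values 10**0.4 ~ 2.5 times larger.
upper_factor = n_short(3.9, 1e3, 3e6) / n_3000
```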
This mismatch hints that the physical processes that determine the delay time to explosion shortly after the CEE are not identical to those that determine the delay time at very long times. Due to the very large uncertainties, this conclusion is only a tentative one. \subsection{Observational tests} \label{subsubsec:Tests} The tentative conclusion that I derived above requires more observational tests, which I suggest below. The first one is to follow SNe Ia for a long time, years to tens of years, after explosion. In cases where the CSM is at distances of $\la 1 {~\rm pc}$, the ejecta-CSM collision within several years might cause re-brightening in different bands, e.g., optical and UV (e.g., \citealt{Grahametal2019}), X-ray, and radio. The analysis of this study suggests that $\approx 10\%$ of all SNe Ia should experience ejecta-CSM interaction within 30 years. We might detect signatures of interaction in nearby SNe Ia. The analysis of re-brightening tens of years after explosion deserves a study of its own. Another observational test might be a light echo. The CSM might contain a large mass of dust that reflects a large portion of the SN light. \cite{Maund2019}, for example, suggests that the re-brightening of the type IIb SN~2011dh in M51 that took place a few years after explosion might come from a light echo. Some SNe Ia also show light echoes (e.g., \citealt{Graur2019}). When resolved, I predict that in some cases the morphology of the echoing dust will have an axisymmetrical structure, such as that of planetary nebulae. In cases where the CSM is large, interaction with the ISM might completely change its geometry. Finally, I predict that in the Local Group, where we can easily detect planetary nebulae, in $\approx 10 \%$ of SNe Ia we might detect a pre-explosion planetary nebula. The planetary nebula might be very faint, so the detection is not easy.
Although the chance of detecting pre-explosion planetary nebulae is very low (as there are not many SNe Ia in the Local Group), it is not zero. \section{SUMMARY} \label{sec:summary} The goal of the present study is a derivation of a crude SNe Ia rate, the CEEDTD, as a function of the time $t_{\rm CEED}$ after the CEE that, according to my assumption, forms the progenitors of most SNe Ia. As such, the present study is relevant to the CD scenario, the DD scenario, the DDet scenario with a WD companion, and to the CEE-wind channel of the SD scenario. While the usual DTD refers to a long time after star formation (equations \ref{eq:dotN} and \ref{eq:dotN3}), in this study I focused on the rate of SNe Ia that interact with a CSM within months after explosion, so-called SNe Ia-CSM (equation \ref{eq:CSMrate}), and the rate of SNe Ia that interact with a CSM that might have been a planetary nebula, so-called SNIPs (equation \ref{eq:SNIPrate}). To derive a plausible expression for the CEEDTD (equation \ref{eq:NeexpFinal}) I made two assumptions (section \ref{subsubsec:CEED}). This expression is crude (and not unique). Despite the very large uncertainties in the parameters and time span of equation (\ref{eq:NeexpFinal}), it emphasises the conclusions of this study. \begin{enumerate} \item There is a large population of SNe Ia, $\approx {\rm few} \times 0.1$ of all SNe Ia, that explode a short time, within $t_{\rm CEED} \approx 10^6 {~\rm yr}$ (and possibly up to $t_{\rm CEED} \approx 3 \times 10^6 {~\rm yr}$), after the CEE. \item The expression for the SNe Ia rate as a function of time after the CEE, the CEEDTD, cannot be the one that is used for the DTD long after star formation.
\item The previous conclusion hints that the physical processes that determine the short delay time from the CEE to explosion, i.e., of the SNe Ia-CSM and of SNIPs that occur at $t_{\rm CEED} \la 10^6 {~\rm yr}$, are different (at least to some extent) from those that determine the DTD at long time scales of $t_{\rm CEED} \ga 10^7 {~\rm yr}$. This very tentative conclusion deserves deeper study. \end{enumerate} I thank an anonymous referee for useful comments. This research was supported by the Israel Science Foundation.
\section{Introduction} Consider an unknown signal $\mathbf{x}_* \in \mathbb{R}^p$ observed via $n$ noisy linear measurements \[\mathbf{y}=\mathbf{A}\mathbf{x}_*+\mathbf{e} \in \mathbb{R}^n.\] We study the problem of estimating $\mathbf{x}_*$, under the assumption that its coordinates correspond to the $p$ vertices of a given graph $G=(V,E)$, and $\mathbf{x}_*$ is gradient-sparse. By this, we mean that \begin{equation}\label{eq:l0norm} \|\nabla \mathbf{x}_*\|_0 \equiv \sum_{(i,j) \in E} \mathbf{1}\{x_{*,i} \neq x_{*,j}\} \end{equation} is much smaller than the total number of edges $|E|$. Special cases of interest include the 1-D line graph, where variables have a sequential order and $\mathbf{x}_*$ has a changepoint structure, and the 2-D lattice graph, where coordinates of $\mathbf{x}_*$ represent pixels of a piecewise-constant image. This problem has been studied since early pioneering works in compressed sensing \cite{candesrobust,candesstable,donoho}. Among widely-used approaches for estimating $\mathbf{x}_*$ are those based on constraining or penalizing the total-variation (TV) semi-norm \cite{rudinosherfatemi}, which may be defined (anisotropically) for a general graph as \[\|\nabla \mathbf{x}\|_1 \equiv \sum_{(i,j) \in E} |x_i-x_j|.\] These are examples of $\ell_1$-analysis methods \cite{eladetal,candescoherent,nametal}, which regularize the $\ell_1$-norm of a general linear transform of $\mathbf{x}$ rather than of its coefficients in an orthonormal basis. Related fused-lasso methods have been studied for different applications of regression and prediction in \cite{tibshiranifused,rinaldo,rjtibshirani}, other graph-based regularization methods for linear regression in \cite{lietal,kimgao}, and trend-filtering methods regularizing higher-order discrete derivatives of $\mathbf{x}$ in \cite{kimetal,wangetal}. 
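As a concrete illustration of the gradient sparsity (\ref{eq:l0norm}), the following sketch (our own helper, not from any reference) evaluates $\|\nabla \mathbf{x}\|_0$ on the 1-D line graph:

```python
import numpy as np

def gradient_sparsity(x, edges):
    """||grad x||_0 as in (eq:l0norm): number of edges whose endpoints differ."""
    return sum(1 for i, j in edges if x[i] != x[j])

# 1-D line graph on p vertices: edges between consecutive coordinates.
p = 8
line_edges = [(i, i + 1) for i in range(p - 1)]

# A changepoint signal with two constant pieces is 1-gradient-sparse,
# even though all of its coordinates are nonzero.
x = np.array([2.0] * 4 + [5.0] * 4)
s = gradient_sparsity(x, line_edges)
```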
Theoretical recovery guarantees for TV-regularization depend on the structure of the graph \cite{needellward,needellward3d,caixu}, and more generally on sparse conditioning properties of the pseudo-inverse $\nabla^\dagger$ for $\ell_1$-analysis methods with sparsifying transform $\nabla$. For direct measurements $\mathbf{A}=\mathbf{I}$, these and related issues were studied in \cite{hutterrigollet,dalalyanetal,fanguan}, which showed in particular that TV-regularization may not achieve the same worst-case recovery guarantees as analogous $\ell_0$-regularization methods on certain graphs including the 1-D line. In this setting of $\mathbf{A}=\mathbf{I}$, different computational approaches exist which may be used for approximately minimizing an $\ell_0$-regularized objective on general graphs \cite{boykovetal,kleinbergtardos,xuetal}. Motivated by this line of work, we propose and study an alternative to TV-regularization for the problem with indirect linear measurements $\mathbf{A} \neq \mathbf{I}$. Our procedure is based similarly on the idea of minimizing a possibly non-convex objective \begin{equation}\label{eq:generalobjective} F(\mathbf{x})=\frac{1}{2}\|\mathbf{y}-\mathbf{A}\mathbf{x}\|_2^2+\lambda \sum_{(i,j) \in E} c(x_i,x_j) \end{equation} for an edge-associated cost function $c$. We will focus attention in this work on the specific choice of an $\ell_0$-regularizer \begin{equation}\label{eq:l0cost} c(x_i,x_j)=\mathbf{1}\{x_i \neq x_j\}, \end{equation} which matches (\ref{eq:l0norm}), although the algorithm may be applied with more general choices of metric edge cost. 
For the above $\ell_0$ edge cost, the resulting objective takes the form \[F(\mathbf{x})=\frac{1}{2}\|\mathbf{y}-\mathbf{A}\mathbf{x}\|_2^2+\lambda \|\nabla \mathbf{x}\|_0.\] We propose to minimize $F(\mathbf{x})$ using an iterative algorithm akin to proximal gradient descent: For parameters $\gamma \in (0,1)$ and $\eta>0$, the iterate $\mathbf{x}_{k+1}$ is computed from $\mathbf{x}_k$ via \begin{align*} \a_{k+1} &\leftarrow \mathbf{x}_k-\eta \mathbf{A}^\mathsf{T}(\mathbf{A}\mathbf{x}_k-\mathbf{y})\\ \mathbf{x}_{k+1} ``&\leftarrow" \argmin_{\mathbf{x}} \frac{1}{2}\|\mathbf{x}-\a_{k+1}\|_2^2+\lambda_k \sum_{(i,j) \in E} c(x_i,x_j)\\ \lambda_{k+1} &\leftarrow \lambda_k \cdot \gamma \end{align*} For general graphs, the second update step for $\mathbf{x}_{k+1}$ is only approximately computable in polynomial time. We apply the alpha-expansion procedure of Boykov, Veksler, and Zabih \cite{boykovetal} for this task, first discretizing the continuous signal domain, as analyzed statistically in \cite{fanguan}. In contrast to analogous proximal methods in convex settings \cite{beckteboulle,parikhboyd}, where typically $\lambda_k \equiv \lambda\eta$ is fixed across iterations, we decay $\lambda_k$ geometrically from a large initial value to ensure algorithm convergence. We call the resulting algorithm ITerative ALpha Expansion, or ITALE. Despite $F(\mathbf{x})$ being non-convex and non-smooth, we provide global recovery guarantees for a suitably chosen ITALE iterate $\mathbf{x}_k$. 
For example, under exact gradient-sparsity $\|\nabla \mathbf{x}_*\|_0=s_*$, if $\mathbf{A}$ consists of \begin{equation}\label{eq:ndependence} n \gtrsim s_*\log (1+|E|/s_*) \end{equation} linear measurements with i.i.d.\ $\mathcal{N}(0,1/n)$ entries, then the ITALE iterate $\mathbf{x}_k$ for the $\ell_0$-regularizer (\ref{eq:l0cost}) and a penalty value $\lambda_k \asymp \|\mathbf{e}\|_2^2/s_*$ satisfies with high probability \begin{equation}\label{eq:heuristicguarantee} \|\mathbf{x}_k-\mathbf{x}_*\|_2 \lesssim \|\mathbf{e}\|_2. \end{equation} More generally, we provide recovery guarantees when $\mathbf{A}$ satisfies a certain cut-restricted isometry property, described in Definition \ref{def:RIP} below. (In accordance with the compressed sensing literature, we state all theoretical guarantees for deterministic and possibly adversarial measurement error $\mathbf{e}$.) Even for i.i.d.\ Gaussian design, we are not aware of previous polynomial-time algorithms which provably achieve this guarantee for either the 1-D line or the 2-D lattice. In particular, connecting with the above discussion, similar existing results for TV-regularization in noisy or noiseless settings require $n \gtrsim s_*(\log |E|)^3$ Gaussian measurements for the 2-D lattice and $n \gtrsim \sqrt{|E|s_*}\log |E|$ measurements for the 1-D line \cite{needellward,caixu}. In contrast, for lattice graphs of dimensions 3 and higher where the Laplacian $\mathbf{L}=\nabla^\mathsf{T}\nabla$ is well-conditioned, as well as for more general $\ell_1$-analysis methods where $\nabla^\dagger$ is a tight frame, optimal recovery guarantees for TV/$\ell_1$-regularization hold with $n \gtrsim s_* \log |E|$ or $n \gtrsim s_* \log (|E|/s_*)$ measurements as expected \cite{candescoherent,needellward3d,caixu}. ITALE provides this guarantee up to a constant factor, irrespective of the graph structure.
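Stripped of the graph-cut machinery, the iteration introduced above is proximal gradient descent with a geometrically decaying penalty. A minimal skeleton (names are ours; the identity denoiser used in the demo is only a placeholder for the alpha-expansion step, reducing the loop to plain gradient descent):

```python
import numpy as np

def italetype_iterates(y, A, denoise, eta=1.0, lam_max=1.0, gamma=0.9, n_iter=10):
    """Gradient step on 0.5*||y - A x||^2, then a penalized denoising step,
    with the penalty lam_k decaying geometrically from lam_max."""
    x, lam = np.zeros(A.shape[1]), lam_max
    iterates = []
    for _ in range(n_iter):
        a = x - eta * A.T @ (A @ x - y)   # surrogate signal a_{k+1}
        x = denoise(a, lam)               # stand-in for AlphaExpansion(a, lam, delta)
        lam *= gamma                      # geometric decay of the penalty
        iterates.append(x.copy())
    return iterates

# Demo with the identity denoiser (plain gradient descent): for n >> p the
# eigenvalues of A^T A lie in (0, 2), so the residual ||A x_k - y|| shrinks.
rng = np.random.default_rng(0)
n, p = 200, 20
A = rng.normal(0.0, 1.0 / np.sqrt(n), (n, p))
y = A @ rng.normal(size=p)
xs = italetype_iterates(y, A, denoise=lambda a, lam: a)
```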
In practice, for $\gamma$ sufficiently close to 1, we directly interpret the sequence of ITALE iterates $\mathbf{x}_k$ as approximate minimizers of the objective function (\ref{eq:generalobjective}) for penalty parameters $\lambda=\lambda_k/\eta$ along a regularization path. We select the iterate $k$ using cross-validation on the prediction error for $\mathbf{y}$, and we use the final estimate $\hat{\mathbf{x}}^\mathrm{ITALE}=\mathbf{x}_k$. Figure \ref{fig:XCAT} compares in simulation $\hat{\mathbf{x}}^\mathrm{ITALE}$ using the $\ell_0$-regularizer (\ref{eq:l0cost}) with $\hat{\mathbf{x}}^\mathrm{TV}$ (globally) minimizing the TV-regularized objective \begin{equation}\label{eq:TVobjective} F^\mathrm{TV}(\mathbf{x})=\frac{1}{2}\|\mathbf{y}-\mathbf{A}\mathbf{x}\|_2^2+\lambda\|\nabla \mathbf{x}\|_1, \end{equation} with $\lambda$ selected also using cross-validation. The example depicts a synthetic image of a human chest slice, previously generated by \cite{gongetal} using the XCAT digital phantom \cite{segarsetal}. The design $\mathbf{A}$ is an undersampled and reweighted Fourier matrix, using a sampling scheme described in Section \ref{sec:theory} and similar to that proposed in \cite{krahmerward} for TV-regularized compressed sensing. In a low-noise setting, a detailed comparison of the recovered images reveals that $\hat{\mathbf{x}}^\mathrm{ITALE}$ provides a sharper reconstruction than $\hat{\mathbf{x}}^\mathrm{TV}$. As noise increases, $\hat{\mathbf{x}}^\mathrm{TV}$ becomes blotchy, while $\hat{\mathbf{x}}^\mathrm{ITALE}$ begins to lose finer image details. Quantitative comparisons of recovery error are provided in Section \ref{sec:2D} and are favorable towards ITALE in lower noise regimes. 
\begin{figure}[t] \begin{minipage}{0.33\textwidth} \includegraphics[width=0.9\textwidth]{XCAT_phantom.png} \end{minipage}% \begin{minipage}{0.65\textwidth} \includegraphics[width=0.45\textwidth]{{XCAT_ITALE_0.2_04_CV}.png}% \includegraphics[width=0.45\textwidth]{{XCAT_ITALE_0.2_16_CV}.png}\\ \includegraphics[width=0.45\textwidth]{{XCAT_TV_0.2_04_CV}.png}% \includegraphics[width=0.45\textwidth]{{XCAT_TV_0.2_16_CV}.png}% \end{minipage} \caption{Left: Original image slice from the XCAT digital phantom. Top row: $\hat{\mathbf{x}}^\mathrm{ITALE}$ from 20\% undersampled and reweighted Fourier measurements, in low noise ($\sigma=4$, left) and medium noise ($\sigma=16$, right) settings. Bottom row: $\hat{\mathbf{x}}^\mathrm{TV}$ for the same measurements.}\label{fig:XCAT} \end{figure} ITALE is similar to some methods oriented towards $\ell_0$-regularized sparse regression and signal recovery \cite{troppgilbert,zhang,bertsimasetal}, including notably the Iterative Hard Thresholding (IHT) \cite{blumensathdavies} and CoSaMP \cite{needelltropp} methods in compressed sensing. We highlight here several differences: \begin{itemize} \item For sparsity in an orthonormal basis, forward stepwise selection and orthogonal matching pursuit provide greedy ``$\ell_0$'' approaches to variable selection, also with provable guarantees \cite{troppgilbert,zhang,elenbergetal}. However, such methods do not have direct analogues for gradient-sparsity in graphs, as one cannot select a single edge difference $x_i-x_j$ to be nonzero without changing other edge differences. \item IHT and CoSaMP enforce sparsity of $\mathbf{x}_{k+1}$ in each iteration by projecting to the $s$ largest coordinates of $\a_{k+1}$, for user-specified $s$. In contrast, ITALE uses a Lagrangian form that penalizes (rather than constrains) $\|\nabla \mathbf{x}_{k+1}\|_0$. 
This is partly for computational reasons, as we are not aware of fast algorithms that can directly perform such a projection step onto the (non-convex) set $\{\mathbf{x}:\|\nabla \mathbf{x}\|_0 \leq s\}$ for general graphs. Our theoretical convergence analysis handles this Lagrangian form. \item In contrast to more general-purpose mixed-integer optimization procedures in \cite{bertsimasetal}, each iterate of ITALE (and hence also the full algorithm, for a polynomial number of iterations) is provably polynomial-time in the input size $(n,p,|E|)$ \cite{fanguan}. On our personal computer, for the $p=360 \times 270=97200$ image of Figure \ref{fig:XCAT}, computing the 60 iterates constituting a full ITALE solution path required about 20 minutes, using the optimized alpha-expansion code of \cite{boykovkolmogorov}. \end{itemize} While our theoretical focus is on $\ell_0$-regularization, we expect that for certain regimes of undersampling and signal-to-noise, improved empirical recovery may be possible with edge costs $c(x_i,x_j)$ interpolating between the $\ell_0$ and $\ell_1$ penalties. These are applicable in the ITALE algorithm and would be interesting to investigate in future work. \section{Model and algorithm}\label{sec:model} Let $G=(V,E)$ be a given connected graph on the vertices $V=\{1,\ldots,p\}$, with undirected edge set $E$. We assume throughout that $p \geq 3$. For a signal vector $\mathbf{x}_* \in \mathbb{R}^p$, measurement matrix $\mathbf{A} \in \mathbb{R}^{n \times p}$, and measurement errors $\mathbf{e} \in \mathbb{R}^n$, we observe \begin{equation}\label{eq:model} \mathbf{y}=\mathbf{A}\mathbf{x}_*+\mathbf{e} \in \mathbb{R}^n. 
\end{equation} Denote by $\nabla \in \{-1,0,1\}^{|E| \times p}$ the discrete gradient matrix on the graph $G$, defined\footnote{Here, we may fix an arbitrary ordering of the vertex pair $(i,j)$ for each edge.} by \[\nabla \mathbf{x}=\big(x_i-x_j:\;(i,j) \in E\big) \in \mathbb{R}^{|E|}.\] We study estimation of $\mathbf{x}_*$, assuming that $\mathbf{x}_*$ has (or is well-approximated by a signal having) small exact gradient sparsity $\|\nabla \mathbf{x}_*\|_0$. Our proposed algorithm is an iterative approach called ITALE, presented as Algorithm \ref{alg:ITALE}. It is based on the idea of minimizing the objective (\ref{eq:generalobjective}). In this objective, the cost function $c:\mathbb{R}^2 \to \mathbb{R}$ must satisfy the metric properties \begin{equation}\label{eq:cost} c(x,y)=c(y,x) \geq 0, \qquad c(x,y)=0 \Leftrightarrow x=y, \qquad c(x,z) \leq c(x,y)+c(y,z), \end{equation} but is otherwise general. Importantly, $c$ may be non-smooth and non-convex. The algorithm applies proximal descent, alternating between constructing a surrogate signal $\a_{k+1}$ in line 3 and denoising this surrogate signal in line 4, discussed in more detail below. Some intuition for $\a_{k+1}$ is provided by considering the setting $\mathbf{e} \approx \mathbf{0}$ and $\eta\mathbf{A}^\mathsf{T}\mathbf{A} \approx \mathbf{I}$, in which case \begin{align*} \a_{k+1}&=\mathbf{x}_k-\eta\mathbf{A}^\mathsf{T}(\mathbf{A}\mathbf{x}_k-\mathbf{y})\\ &=\mathbf{x}_*+(\mathbf{I}-\eta\mathbf{A}^\mathsf{T}\mathbf{A})(\mathbf{x}_k-\mathbf{x}_*)+\eta\mathbf{A}^\mathsf{T}\mathbf{e} \approx \mathbf{x}_*. \end{align*} There are two sources of noise, $(\mathbf{I}-\eta\mathbf{A}^\mathsf{T}\mathbf{A})(\mathbf{x}_k-\mathbf{x}_*)$ and $\eta\mathbf{A}^\mathsf{T}\mathbf{e}$, in $\a_{k+1}$, the former expected to decrease across iterations as the reconstruction error $\|\mathbf{x}_k-\mathbf{x}_*\|$ decreases.
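The decomposition of $\a_{k+1}$ above is an exact algebraic identity, which a short numerical check makes concrete (a sketch with a Gaussian design; the problem sizes are our own choices):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, eta = 2000, 30, 1.0
A = rng.normal(0.0, 1.0 / np.sqrt(n), (n, p))
x_star = rng.normal(size=p)
e = 0.01 * rng.normal(size=n)
y = A @ x_star + e

x_k = np.zeros(p)                          # current iterate
a_next = x_k - eta * A.T @ (A @ x_k - y)   # surrogate signal a_{k+1}

# Exact identity: a_{k+1} = x_* + (I - eta A^T A)(x_k - x_*) + eta A^T e.
decomp = x_star + (np.eye(p) - eta * A.T @ A) @ (x_k - x_star) + eta * A.T @ e

# With eta A^T A ~ I (here n >> p) and small e, the surrogate is close to x_*.
rel_err = np.linalg.norm(a_next - x_star) / np.linalg.norm(x_star)
```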
A tuning parameter $\lambda_k$ is applied to denoise $\a_{k+1}$ in each iteration, where $\lambda_k$ also decreases across iterations to match the noise level. Our theoretical analysis suggests using a geometric rate of decay $\lambda_{k+1}=\lambda_k \cdot \gamma$, starting from a large initial value $\lambda_{\max}$. \begin{algorithm}[t] \caption{Iterative Alpha Expansion}\label{alg:ITALE} \begin{algorithmic}[1] \REQUIRE{$\mathbf{y} \in \mathbb{R}^n$, $\mathbf{A} \in \mathbb{R}^{n \times p}$, and parameters $\gamma \in (0,1)$, $\lambda_{\max}>\lambda_{\min}>0$, and $\eta,\delta>0$.} \STATE{Initialize $\mathbf{x}_0 \gets \mathbf{0}$, $\lambda_0 \gets \lambda_{\max}$} \FOR{$k=0,1,2,\ldots,K$ until $\lambda_K<\lambda_{\min}$} \STATE{$\a_{k+1} \gets \mathbf{x}_k-\eta\mathbf{A}^\mathsf{T}(\mathbf{A}\mathbf{x}_k-\mathbf{y})$} \STATE{$\mathbf{x}_{k+1} \gets \text{AlphaExpansion}(\a_{k+1},\lambda_k,\delta)$} \STATE{$\lambda_{k+1} \gets \lambda_k \cdot \gamma$} \ENDFOR \ENSURE{$\mathbf{x}_1,\ldots,\mathbf{x}_K$} \end{algorithmic} \end{algorithm} ITALE yields iterates $\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_K$, which we directly interpret as recovered signals along a regularization path for different choices of $\lambda \equiv \lambda_k/\eta$ in the objective (\ref{eq:generalobjective}). We choose $\lambda_{\max}$ such that the initial iterates oversmooth $\mathbf{x}_*$, and $\lambda_{\min}$ such that the final iterates undersmooth $\mathbf{x}_*$. We remark that an alternative approach would be to iterate lines 3 and 4 in Algorithm \ref{alg:ITALE} until convergence for each $\lambda_k$, before updating $\lambda_k$ to the next value $\lambda_{k+1}$. However, we find that this is not necessary in practice if $\gamma$ is chosen close enough to 1, and our stated algorithm achieves a computational speed-up compared to this approach.
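For intuition about the denoising step in line 4: on the 1-D line graph with the $\ell_0$ edge cost (\ref{eq:l0cost}), the subproblem over the grid $\delta\mathbb{Z}$ can be solved exactly by dynamic programming. The sketch below (our own code, verified against brute force; it is a reference point only, since general graphs still require the alpha-expansion approximation) shows what line 4 is computing:

```python
import itertools
import numpy as np

def l0_prox_line(a, lam, grid):
    """Exactly minimize 0.5*||x - a||^2 + lam * #{i : x_i != x_{i+1}} over
    signals x with entries in `grid`, on the 1-D line graph, via DP."""
    a, grid = np.asarray(a, float), np.asarray(grid, float)
    p, m = len(a), len(grid)
    cost = 0.5 * (a[:, None] - grid[None, :]) ** 2   # (p, m) data-fit terms
    dp = cost[0].copy()          # dp[j]: best cost of x_0..x_i with x_i = grid[j]
    back = np.zeros((p, m), dtype=int)
    for i in range(1, p):
        switch = dp.min() + lam              # change value across edge (i-1, i)
        use_switch = switch < dp             # elementwise: pay lam or keep value
        back[i] = np.where(use_switch, dp.argmin(), np.arange(m))
        dp = np.where(use_switch, switch, dp) + cost[i]
    idx = np.empty(p, dtype=int)             # backtrack the optimal labels
    idx[-1] = dp.argmin()
    for i in range(p - 1, 0, -1):
        idx[i - 1] = back[i, idx[i]]
    return grid[idx], float(dp.min())

# Brute-force verification on a tiny instance (4^5 = 1024 candidates).
rng = np.random.default_rng(2)
a = rng.normal(size=5)
grid = np.linspace(a.min(), a.max(), 4)
x_hat, val = l0_prox_line(a, 0.3, grid)
brute = min(
    0.5 * sum((xi - ai) ** 2 for xi, ai in zip(xs, a))
    + 0.3 * sum(xi != xj for xi, xj in zip(xs, xs[1:]))
    for xs in itertools.product(grid, repeat=len(a))
)
```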
To perform the denoising in line 4, ITALE applies the alpha-expansion graph cut procedure from \cite{boykovetal} to approximately solve the minimization problem \[\min_{\mathbf{x} \in \mathbb{R}^p} \frac{1}{2}\|\mathbf{x}-\a_{k+1}\|_2^2+\lambda_k\sum_{(i,j) \in E} c(x_i,x_j).\] This sub-routine is denoted as $\text{AlphaExpansion}(\a_{k+1},\lambda_k,\delta)$, and is described in Algorithm \ref{alg:AlphaExpansion} for completeness. At a high level, the alpha-expansion method encodes the above objective function in the structure of an edge-weighted augmented graph, and iterates over global moves that swap the signal value on a subset of vertices for a given new value by finding a minimum graph cut. The original alpha-expansion algorithm of \cite{boykovetal} computes an approximate maximum-a-posteriori estimate in a discrete Potts model with a metric edge-cost satisfying (\ref{eq:cost}). To apply this to a continuous signal domain, we restrict coordinate values of $\mathbf{x}$ to a discrete grid \[\delta\mathbb{Z}=\{k\delta:k \in \mathbb{Z}\}.\] Here, $\delta$ is a small user-specified discretization parameter. As shown in \cite[Lemma S2.1]{fanguan} (see also \cite[Theorem 6.1]{boykovetal}), the output $\mathbf{x}_{k+1}=\text{AlphaExpansion}(\a_{k+1},\lambda_k,\delta)$ has the deterministic guarantee \begin{equation}\label{eq:AEguarantee} \frac{1}{2}\|\mathbf{x}_{k+1}-\a_{k+1}\|_2^2+\lambda_k\sum_{(i,j) \in E} c(x_{k+1,i},x_{k+1,j}) \leq \min_{\mathbf{x} \in (\delta\mathbb{Z})^p} \left(\frac{1}{2}\|\mathbf{x}-\a_{k+1}\|_2^2 +2\lambda_k\sum_{(i,j) \in E} c(x_i,x_j)\right) \end{equation} with the additional factor of 2 applying to the penalty on the right side. This guarantee is important for the theoretical recovery properties that we will establish in Section \ref{sec:theory}.
\begin{algorithm}[t] \caption{AlphaExpansion$(\a,\lambda,\delta)$ subroutine}\label{alg:AlphaExpansion} \begin{algorithmic}[1] \REQUIRE{$\a \in \mathbb{R}^p$, cost function $c:\mathbb{R}^2 \to \mathbb{R}$, parameters $\lambda,\delta>0$.} \STATE{Let $a_{\min},a_{\max}$ be the minimum and maximum values of $\a$. Initialize $\mathbf{x} \in \mathbb{R}^p$ arbitrarily.} \LOOP \FOR{each $z \in \delta\mathbb{Z} \cap [a_{\min},a_{\max}]$} \STATE{Construct the following edge-weighted augmentation $G_{z,\mathbf{x}}$ of the graph $G$:} \begin{ALC@g} \STATE{Introduce a source vertex $s$ and a sink vertex $t$, connect $s$ to each $i \in \{1,\ldots,p\}$ with weight $\frac{1}{2}(a_i-z)^2$, and connect $t$ to each $i \in \{1,\ldots,p\}$ with weight $\frac{1}{2}(a_i-x_i)^2$ if $x_i \neq z$, or weight $\infty$ if $x_i=z$.} \FOR{each edge $\{i,j\} \in E$} \IF{$x_i=x_j$} \STATE{Assign weight $\lambda c(x_i,z)$ to $\{i,j\}$.} \ELSE \STATE{Introduce a new vertex $v_{i,j}$, and replace edge $\{i,j\}$ by the three edges $\{i,v_{i,j}\}$, $\{j,v_{i,j}\}$, and $\{t,v_{i,j}\}$, with weights $\lambda c(x_i,z)$, $\lambda c(x_j,z)$, and $\lambda c(x_i,x_j)$ respectively.} \ENDIF \ENDFOR \end{ALC@g} \STATE{Find the minimum s-t cut $(S,T)$ of $G_{z,\mathbf{x}}$ such that $s \in S$ and $t \in T$.} \STATE{For each $i \in \{1,\ldots,p\}$, update $x_i \leftarrow z$ if $i \in T$, and keep $x_i$ unchanged if $i \in S$.} \ENDFOR \STATE{If $\mathbf{x}$ was unchanged for each $z$ above, then return $\mathbf{x}$.} \ENDLOOP \ENSURE{$\mathbf{x}$} \end{algorithmic} \end{algorithm} We make a few remarks regarding parameter tuning in practice: \begin{itemize} \item Using conservative choices for $\lambda_{\max}$ (large), $\gamma$ (close to 1), and $\delta$ (small) increases the total runtime of the procedure, but does not degrade the quality of recovery. 
In our experiments, we fix $\gamma=0.9$ and set $\delta$ in each iteration to yield 300 grid values for $\delta \mathbb{Z} \cap [a_{\min},a_{\max}]$ in Algorithm \ref{alg:AlphaExpansion}. \item We monitor the gradient sparsity $\|\nabla \mathbf{x}_k\|_0$ across iterations, and terminate the algorithm when $\|\nabla \mathbf{x}_K\|_0$ exceeds a certain fraction (e.g.\ 50\%) of the total number of edges $|E|$, rather than fixing $\lambda_{\min}$. \item The parameter $\eta$ should be matched to the scaling and restricted isometry properties of the design matrix $\mathbf{A}$. For sub-Gaussian and Fourier designs scaled by $1/\sqrt{n}$ as in Propositions \ref{prop:subgaussian} and \ref{prop:2DFourier} below, we set $\eta=1$. \item The most important tuning parameter is the iterate $k$ for which we take the final estimate $\hat{\mathbf{x}}^\mathrm{ITALE}=\mathbf{x}_k$. In practice, we apply cross-validation on the mean-squared prediction error for $\mathbf{y}$ to select $k$. Note that $\eta$ should be rescaled by the number of training samples in each fold, i.e.\ for 5-fold cross-validation with training sample size $0.8n$, we set $\eta=1/0.8$ instead of $\eta=1$ in the cross-validation runs. \end{itemize} \section{Recovery guarantees}\label{sec:theory} We provide in this section theoretical guarantees on the recovery error $\|\hat{\mathbf{x}}^\mathrm{ITALE}-\mathbf{x}_*\|_2$, where $\hat{\mathbf{x}}^\mathrm{ITALE} \equiv \mathbf{x}_k$ for a deterministic (non-adaptive) choice of iterate $k$. Throughout this section, ITALE is assumed to be applied with the $\ell_0$ edge cost $c(x_i,x_j)=\mathbf{1}\{x_i \neq x_j\}$. \subsection{cRIP condition}\label{sec:RIP} Our primary assumption on the measurement design $\mathbf{A}$ will be the following version of a restricted isometry property. \begin{definition}\label{def:RIP} Let $\kappa>0$, and let $\rho:[0,\infty) \to [0,\infty)$ be any function satisfying $\rho'(s) \geq 0$ and $\rho''(s) \leq 0$ for all $s>0$. 
A matrix $\mathbf{A} \in \mathbb{R}^{n \times p}$ satisfies the $(\kappa,\rho)$-{\bf cut-restricted isometry property} (cRIP) if, for every $\mathbf{x} \in \mathbb{R}^p$ with $\|\nabla \mathbf{x}\|_0 \geq 1$, we have \[\left(1-\kappa-\sqrt{\rho(\|\nabla\mathbf{x}\|_0)}\right) \|\mathbf{x}\|_2 \leq \|\mathbf{A}\mathbf{x}\|_2 \leq \left(1+\kappa+\sqrt{\rho(\|\nabla\mathbf{x}\|_0)}\right)\|\mathbf{x}\|_2.\] \end{definition} This definition depends implicitly on the structure of the underlying graph $G$, via its discrete gradient matrix $\nabla$. Examples of the function $\rho$ are given in the two propositions below. This condition is stronger than the usual RIP condition in compressed sensing \cite{candesrobust,candesstable} in two ways: First, Definition \ref{def:RIP} requires quantitative control of $\|\mathbf{A}\mathbf{x}\|_2$ for \emph{all} vectors $\mathbf{x} \in \mathbb{R}^p$, rather than only those with sparsity $\|\nabla \mathbf{x}\|_0 \leq s$ for some specified $s$. We use this in our analysis to handle regularization of $\|\nabla \mathbf{x}\|_0$ in Lagrangian (rather than constrained) form. Second, approximate isometry is required for signals with small gradient-sparsity $\|\nabla \mathbf{x}\|_0$, rather than small sparsity $\|\mathbf{x}\|_0$. For graphs with bounded maximum degree, all sparse signals are also gradient-sparse, so this is indeed stronger up to a relabeling of constants. This requirement is similar to the D-RIP condition of \cite{candescoherent} for general sparse analysis models, and is also related to the condition of \cite{needellward} that $\mathbf{A}\mathcal{H}^{-1}$ satisfies the usual RIP condition, where $\mathcal{H}^{-1}$ is the inverse Haar-wavelet transform on the 2-D lattice. Despite this strengthening of the required RIP condition, Definition \ref{def:RIP} still holds for sub-Gaussian designs $\mathbf{A}$, where $\kappa$ depends on the condition number of the design covariance. 
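For intuition, the cRIP can be probed empirically: if $\mathbf{x}$ is drawn independently of a Gaussian $\mathbf{A}$, then $\|\mathbf{A}\mathbf{x}\|_2/\|\mathbf{x}\|_2$ concentrates near 1. A Monte-Carlo sketch on the 1-D line graph (an illustration of the near-isometry only, not a verification of the uniform statement in Definition \ref{def:RIP}):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 500, 100
A = rng.normal(0.0, 1.0 / np.sqrt(n), (n, p))   # i.i.d. N(0, 1/n) entries

def piecewise_constant(p, pieces, rng):
    """Random line-graph signal with ||grad x||_0 = pieces - 1."""
    cuts = np.sort(rng.choice(np.arange(1, p), size=pieces - 1, replace=False))
    vals = rng.normal(size=pieces)
    return np.repeat(vals, np.diff(np.r_[0, cuts, p]))

# For each gradient-sparse x (independent of A), ||A x||^2 / ||x||^2 is a
# chi-squared_n / n variable, so the ratio stays close to 1.
ratios = []
for _ in range(20):
    x = piecewise_constant(p, pieces=4, rng=rng)
    ratios.append(np.linalg.norm(A @ x) / np.linalg.norm(x))
```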
We defer the proof of the following result to Appendix \ref{appendix:RIP}. For a random vector $\a$, we denote its sub-Gaussian norm as \[\|\a\|_{\psi_2}=\sup_{\u:\|\u\|_2=1} \sup_{k \geq 1}\; k^{-1/2}\mathbb{E}\Big[|\u^\mathsf{T}\a|^k\Big]^{1/k},\] and say that $\a$ is sub-Gaussian if $\|\a\|_{\psi_2} \leq K$ for a constant $K>0$. \begin{proposition}\label{prop:subgaussian} Let $\mathbf{A} \in \mathbb{R}^{n\times p}$ have i.i.d.\ rows $\a_i/\sqrt{n}$, where $\operatorname{Cov}[\a_i]=\mathbf{\Sigma}$ and $\|\a_i\|_{\psi_2} \leq K$. Suppose that the largest and smallest eigenvalues of $\mathbf{\Sigma}$ satisfy $\sigma_{\max}(\mathbf{\Sigma}) \leq (1+\kappa)^2$ and $\sigma_{\min}(\mathbf{\Sigma})\geq (1-\kappa)^2$ for a constant $\kappa \in (0,1)$. Then for any $k>0$ and some constant $C>0$ depending only on $K,\kappa,k$, with probability at least $1-|E|^{-k}$, the matrix $\mathbf{A}$ satisfies $(\kappa,\rho)$-cRIP for the function \[\rho(s)=\frac{Cs\log(1+|E|/s)}{n}.\] \end{proposition} For large 2-D images, using Fourier measurements with matrix multiplication implemented by an FFT can significantly reduce the runtime of Algorithm \ref{alg:ITALE}. As previously discussed in \cite{lustigetal,needellward,krahmerward}, uniform random sampling of Fourier coefficients may not be appropriate for reconstructing piecewise-constant images, as these typically have larger coefficients in the lower Fourier frequencies. We instead study a non-uniform sampling and reweighting scheme similar to that proposed in \cite{krahmerward} for total-variation compressed sensing, and show that Definition \ref{def:RIP} also holds for this reweighted Fourier matrix. For $p=N_1N_2$ and $N_1,N_2$ both powers of 2, let $\mathcal{F} \in \mathbb{C}^{p \times p}$ be the 2-D discrete Fourier matrix on the lattice graph $G$ of size $N_1 \times N_2$, normalized such that $\mathcal{F}\F^*=\mathbf{I}$. 
We define this as the Kronecker product $\mathcal{F}=\mathcal{F}^1 \otimes \mathcal{F}^2$, where $\mathcal{F}^1 \in \mathbb{C}^{N_1 \times N_1}$ is the 1-D discrete Fourier matrix with entries \[\mathcal{F}^1_{jk}=\frac{1}{\sqrt{N_1}} \cdot e^{2\pi \i \cdot \frac{(j-1)(k-1)}{N_1}},\] and $\mathcal{F}^2 \in \mathbb{C}^{N_2 \times N_2}$ is defined analogously. (Thus rows closer to $N_1/2+1$ in $\mathcal{F}^1$ correspond to higher frequency components.) Let $\mathcal{F}_{(i,j)}^*$ denote row $(i,j)$ of $\mathcal{F}$, where we index by pairs $(i,j) \in \{1,\ldots,N_1\} \times \{1,\ldots,N_2\}$ corresponding to the Kronecker structure. We define a sampled Fourier matrix as follows: Let $\nu_1$ be the probability mass function on $\{1,\ldots,N_1\}$ given by \begin{equation}\label{eq:nu} \nu_1(i) \propto \frac{1}{C_0+\min(i-1,N_1-i+1)}, \qquad C_0 \geq 1. \end{equation} Define similarly $\nu_2$ on $\{1,\ldots,N_2\}$, and let $\nu=\nu_1 \times \nu_2$. For a given number of measurements $n$, draw $(i_1,j_1),\ldots,(i_n,j_n) \overset{iid}{\sim} \nu$, and set \begin{equation}\label{eq:weightedfourier} \tilde{\mathbf{A}}=\frac{1}{\sqrt{n}}\begin{pmatrix} \mathcal{F}_{(i_1,j_1)}^*/\sqrt{\nu(i_1,j_1)} \\ \vdots \\ \mathcal{F}_{(i_n,j_n)}^*/\sqrt{\nu(i_n,j_n)} \end{pmatrix} \in \mathbb{C}^{n \times p}. \end{equation} \begin{proposition}\label{prop:2DFourier} Let $G$ be the 2-D lattice graph of size $N_1 \times N_2$, where $N_1,N_2$ are powers of 2 and $1/K<N_1/N_2<K$ for a constant $K>0$. Set $p=N_1N_2$ and let $\tilde{\mathbf{A}}$ be the matrix defined in (\ref{eq:weightedfourier}). Then for some constants $C,t_0>0$ depending only on $K$, and for any $t>t_0$, with probability at least $1-e^{-(\log n) (\log p)^3}-p^{-t}$, $\tilde{\mathbf{A}}$ satisfies the $(\kappa,\rho)$-cRIP with $\kappa=0$ and \[\rho(s)=Cts\frac{(\log p)^8\log n}{n}.\] \end{proposition} We defer the proof also to Appendix \ref{appendix:RIP}. 
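For a concrete sketch of the sampling scheme (\ref{eq:nu})--(\ref{eq:weightedfourier}), the operator $\tilde{\mathbf{A}}$ can be applied with a single 2-D FFT followed by fancy indexing of the sampled frequencies. This illustration is ours, not from the paper; note that NumPy's DFT uses the conjugate sign convention to the definition above, which does not affect the sampling weights or norms:

```python
import numpy as np

def sampling_pmf(N, C0=10.0):
    # nu(i) proportional to 1 / (C0 + min(i - 1, N - i + 1)),  i = 1, ..., N,
    # so that low frequencies (i near 1 or N) receive the most mass.
    i = np.arange(1, N + 1)
    w = 1.0 / (C0 + np.minimum(i - 1, N - i + 1))
    return w / w.sum()

def weighted_fourier_measurements(x2d, n, rng, C0=10.0):
    """Apply the reweighted sampled Fourier operator of eq. (weightedfourier)."""
    N1, N2 = x2d.shape
    nu1, nu2 = sampling_pmf(N1, C0), sampling_pmf(N2, C0)
    i_idx = rng.choice(N1, size=n, p=nu1)       # row frequencies, i.i.d. ~ nu1
    j_idx = rng.choice(N2, size=n, p=nu2)       # column frequencies, i.i.d. ~ nu2
    F = np.fft.fft2(x2d) / np.sqrt(N1 * N2)     # unitary 2-D DFT (F F* = I)
    w = np.sqrt(nu1[i_idx] * nu2[j_idx])        # 1/sqrt(nu) reweighting
    return F[i_idx, j_idx] / (w * np.sqrt(n))

rng = np.random.default_rng(0)
y = weighted_fourier_measurements(np.ones((8, 8)), n=20, rng=rng)
```

Because each measurement reuses the same full FFT, $n$ measurements of an $N_1 \times N_2$ image cost one $O(p\log p)$ transform plus $O(n)$ indexing.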
This proposition pertains to the complex analogue of Definition \ref{def:RIP}, where $\tilde{\mathbf{A}},\mathbf{x}$ are allowed to be complex-valued, and $\|\cdot\|_2$ denotes the complex $\ell_2$-norm. For a real-valued signal $\mathbf{x}_* \in \mathbb{R}^p$, Algorithm \ref{alg:ITALE} may be applied to $\tilde{\mathbf{y}}=\tilde{\mathbf{A}}\mathbf{x}_*+\mathbf{e} \in \mathbb{C}^n$ by separating real and imaginary parts of $\tilde{\mathbf{y}}$ into a real vector $\mathbf{y} \in \mathbb{R}^{2n}$. The corresponding $\mathbf{A} \in \mathbb{R}^{2n \times p}$ satisfies $\|\mathbf{A}\mathbf{x}\|_2^2=\|\tilde{\mathbf{A}}\mathbf{x}\|_2^2$, so the same cRIP condition holds (in the real sense) for $\mathbf{A}$. \subsection{Recovery error bounds} To illustrate the idea of the analysis, we first establish a result showing that ITALE can yield exact recovery in the absence of measurement noise. We require $\mathbf{x}_*$ to be gradient-sparse with coordinates belonging exactly to $\delta\mathbb{Z}$, as the ITALE output has this latter property. Discretization error will be addressed in our subsequent result. \begin{theorem}\label{thm:noiseless} Suppose $\mathbf{e}=\mathbf{0}$ and $\mathbf{x}_* \in (\delta\mathbb{Z})^p$, and denote $s_*=\max(\|\nabla \mathbf{x}_*\|_0,1)$. Suppose $\sqrt{\eta} \cdot \mathbf{A}$ satisfies $(\kappa,\rho)$-cRIP, where $\kappa \in [0,\sqrt{3/2}-1)$. Set $t(\kappa)=1-4\kappa-2\kappa^2 \in (0,1]$, and choose tuning parameters \[\left(1-\frac{t(\kappa)}{4}\right)^2<\gamma<1, \qquad \lambda_{\max}>\|\mathbf{x}_*\|_2^2.\] For some constants $C,c>0$ depending only on $\kappa$, if \[\rho(s_*) \leq c,\] then each iterate $\mathbf{x}_k$ of Algorithm \ref{alg:ITALE} satisfies \begin{equation} \|\mathbf{x}_k-\mathbf{x}_*\|_2 \leq C\sqrt{\lambda_{\max}s_*} \cdot \gamma^{k/2}. \label{eq:recoverynonoise} \end{equation} In particular, $\mathbf{x}_k=\mathbf{x}_*$ for all sufficiently large $k$. 
\end{theorem} Thus, in this noiseless setting, the iterates exhibit linear convergence to the true signal $\mathbf{x}_*$. The required condition $\rho(s_*) \leq c$ translates into a requirement of \[n \gtrsim s_*\log(1+|E|/s_*)\] measurements for $\mathbf{A}$ having i.i.d.\ $\mathcal{N}(0,1/n)$ entries, by Proposition \ref{prop:subgaussian}, or \[n \gtrsim s_*(\log p)^8\log\log p\] weighted Fourier measurements for the 2-D lattice graph, as defined in Proposition \ref{prop:2DFourier}. For these designs, $(\kappa,\rho)$-cRIP holds for $\sqrt{\eta} \cdot \mathbf{A}$ where $\kappa=0$ and $\eta=1$. \begin{proof}[Proof of Theorem \ref{thm:noiseless}] Denote \[s_k=\|\nabla \mathbf{x}_k\|_0, \qquad \r_k=\mathbf{x}_k-\mathbf{x}_*.\] Applying the optimality condition (\ref{eq:AEguarantee}) to compare $\mathbf{x}_{k+1}$ with $\mathbf{x}_*=\mathbf{x}_k-\r_k$, we obtain \begin{equation}\label{eq:noiselessoptimality} \|\mathbf{x}_{k+1}-\a_{k+1}\|_2^2+2\lambda_ks_{k+1} \leq \|\mathbf{x}_k-\r_k-\a_{k+1}\|_2^2+4\lambda_ks_*. \end{equation} Let $\S_k$ be the partition of $\{1,\ldots,p\}$ induced by the piecewise-constant structure of $\mathbf{x}_k$: Each element of $\S_k$ corresponds to a connected subgraph of $G$ on which $\mathbf{x}_k$ takes a constant value. Let $\S_{k+1},\S_*$ similarly be the partitions induced by $\mathbf{x}_{k+1},\mathbf{x}_*$, and denote by $\S$ the common refinement of $\S_k,\S_{k+1},\S_*$. Defining the boundary \[\partial \S=\{(i,j) \in E:\;i,j \text{ belong to different elements of } \S\},\] observe that each edge $(i,j) \in \partial \S$ must be such that at least one of $\mathbf{x}_k$, $\mathbf{x}_{k+1}$, or $\mathbf{x}_*$ takes different values at its two endpoints. Then \begin{equation}\label{eq:boundarysize} |\partial \S| \leq s_*+s_k+s_{k+1}. 
\end{equation} Let $\mathbf{P}:\mathbb{R}^p \to \mathbb{R}^p$ be the orthogonal projection onto the subspace of signals taking a constant value over each element of $\S$, and let $\mathbf{P}^\perp=\mathbf{I}-\mathbf{P}$. Then $\mathbf{x}_{k+1},\mathbf{x}_k,\r_k$ all belong to the range of $\mathbf{P}$, so an orthogonal decomposition yields \begin{align*} \|\mathbf{x}_{k+1}-\a_{k+1}\|_2^2&= \|\mathbf{x}_{k+1}-\mathbf{P}\a_{k+1}\|_2^2+\|\mathbf{P}^\perp\a_{k+1}\|_2^2,\\ \|\mathbf{x}_k-\r_k-\a_{k+1}\|_2^2 &=\|\mathbf{x}_k-\r_k-\mathbf{P}\a_{k+1}\|_2^2 +\|\mathbf{P}^\perp \a_{k+1}\|_2^2. \end{align*} Applying this, the definition (in the noiseless setting $\mathbf{e}=\mathbf{0}$) \[\a_{k+1}=\mathbf{x}_k-\eta \mathbf{A}^\mathsf{T}(\mathbf{A}\mathbf{x}_k-\mathbf{y})=\mathbf{x}_k-\eta\mathbf{A}^\mathsf{T}\mathbf{A}\r_k,\] and the condition $\mathbf{P}\mathbf{x}_k=\mathbf{x}_k$ to (\ref{eq:noiselessoptimality}), we obtain \[\|\mathbf{x}_{k+1}-\mathbf{x}_k+\eta\mathbf{P}\mathbf{A}^\mathsf{T}\mathbf{A}\r_k\|_2^2 \leq \|\eta\mathbf{P}\mathbf{A}^\mathsf{T}\mathbf{A}\r_k-\r_k\|_2^2 +\lambda_k(4s_*-2s_{k+1}).\] Applying the triangle inequality and $\mathbf{x}_{k+1}-\mathbf{x}_k=\r_{k+1}-\r_k$, \begin{equation}\label{eq:tmpbound} \left(\|\r_{k+1}\|_2-\|\r_k-\eta\mathbf{P}\mathbf{A}^\mathsf{T}\mathbf{A}\r_k\|_2\right)_+^2 \leq \|\r_k-\eta\mathbf{P}\mathbf{A}^\mathsf{T}\mathbf{A}\r_k\|_2^2+\lambda_k(4s_*-2s_{k+1}). \end{equation} We derive from this two consequences: First, lower-bounding the left side by 0 and rearranging, \begin{equation}\label{eq:tmpbound2} \lambda_k s_{k+1} \leq \frac{1}{2}\|\r_k-\eta\mathbf{P}\mathbf{A}^\mathsf{T}\mathbf{A}\r_k\|_2^2+ 2\lambda_k s_* \leq \|\r_k\|_2^2+\|\sqrt{\eta}\mathbf{A}\mathbf{P}\|_{\operatorname{op}}^2 \cdot\|\sqrt{\eta}\mathbf{A}\r_k\|_2^2+ 2\lambda_k s_*. \end{equation} The condition (\ref{eq:boundarysize}) and definition of $\mathbf{P}$ imply, for any $\u \in \mathbb{R}^p$, that $\|\nabla(\mathbf{P}\u)\|_0 \leq s_*+s_k+s_{k+1}$. 
The definition of $\r_k$ implies $\|\nabla \r_k\|_0\leq s_*+s_k$. Setting \[\tau_k=\kappa+\sqrt{\rho(s_*+s_k+s_{k+1})},\qquad \zeta_k=\kappa+\sqrt{\rho(s_*+s_k)}\] we deduce from the $(\kappa,\rho)$-cRIP condition for $\sqrt{\eta} \cdot \mathbf{A}$ that \begin{equation}\label{eq:tmpbound3} \|\sqrt{\eta}\mathbf{A}\mathbf{P}\|_{\operatorname{op}}^2=\sup_{\u\in\mathbb{R}^p:\|\u\|_2=1}\|\sqrt{\eta}\mathbf{A}\mathbf{P}\u\|_2^2\leq (1+\tau_k)^2, \qquad \|\sqrt{\eta}\mathbf{A}\r_k\|_2^2\leq (1+\zeta_k)^2\|\r_k\|_2^2. \end{equation} Note that since $\rho(s)$ and $\sqrt{\rho(s)}$ are both nonnegative and concave by Definition \ref{def:RIP}, we have \[\rho'(s) \leq (\rho(s)-\rho(0))/s \leq \rho(s)/s, \qquad \frac{d}{ds}[\sqrt{\rho(s)}] \leq (\sqrt{\rho(s)}-\sqrt{\rho(0)})/s \leq \sqrt{\rho(s)}/s.\] The function \[f_k(s)=\Big(1+\kappa+\sqrt{\rho(s_*+s_k+s)}\Big)^2\] is also increasing and concave, and by the above, its derivative at $s=0$ satisfies \[f_k'(0) \leq d_k/(s_*+s_k), \qquad d_k \equiv 2(1+\kappa)\sqrt{\rho(s_*+s_k)}+\rho(s_*+s_k).\] Thus \begin{equation}\label{eq:taukbound} (1+\tau_k)^2=f_k(s_{k+1}) \leq f_k(0)+f_k'(0) \cdot s_{k+1} \leq (1+\zeta_k)^2+d_ks_{k+1}/s_*. \end{equation} Applying this and (\ref{eq:tmpbound3}) to (\ref{eq:tmpbound2}), we get \begin{align*} \lambda_k s_{k+1} &\leq \left(1+(1+\tau_k)^2(1+\zeta_k)^2\right)\|\r_k\|_2^2+ 2\lambda_k s_*\\ &\leq \left(1+(1+\zeta_k)^4+(1+\zeta_k)^2d_ks_{k+1}/s_*\right)\|\r_k\|_2^2+2\lambda_k s_*. \end{align*} Rearranging gives \begin{equation}\label{eq:sbound} \Big(\lambda_k-(1+\zeta_k)^2d_k\|\r_k\|_2^2/s_*\Big)\cdot s_{k+1} \leq(1+(1+\zeta_k)^4)\cdot\|\r_k\|_2^2+2\lambda_k s_*. 
\end{equation} Second, applying the $(\kappa,\rho)$-cRIP condition for $\sqrt{\eta} \cdot \mathbf{A}$ again, we have for every $\u \in \mathbb{R}^p$ \begin{align*} \Big|\u^\mathsf{T}(\eta\mathbf{P}\mathbf{A}^\mathsf{T}\mathbf{A}\mathbf{P}-\mathbf{P})\u\Big| &=\Big|\|\sqrt{\eta} \mathbf{A}\mathbf{P}\u\|_2^2-\|\mathbf{P}\u\|_2^2\Big|\\ &\leq \max\Big(|1-(1-\tau_k)^2|,|1-(1+\tau_k)^2|\Big)\|\mathbf{P}\u\|_2^2=(2\tau_k+\tau_k^2)\|\mathbf{P}\u\|_2^2, \end{align*} so $\|\eta\mathbf{P}\mathbf{A}^\mathsf{T}\mathbf{A}\mathbf{P}-\mathbf{P}\|_{\operatorname{op}} \leq 2\tau_k+\tau_k^2$. Then, as $\r_k=\mathbf{P}\r_k$, we get from (\ref{eq:tmpbound}) that \[\Big(\|\r_{k+1}\|_2-(2\tau_k+\tau_k^2)\|\r_k\|_2\Big)_+^2 \leq (2\tau_k+\tau_k^2)^2\|\r_k\|_2^2+\lambda_k(4s_*-2s_{k+1}).\] Taking the square-root and applying $\sqrt{x+y} \leq \sqrt{x}+\sqrt{y}$, \begin{align*} \|\r_{k+1}\|_2 \leq (4\tau_k+2\tau_k^2)\|\r_k\|_2 +\sqrt{\lambda_k(4s_*-2s_{k+1})_+}. \end{align*} Applying the definitions of $\tau_k$ and $t(\kappa)$, \[4\tau_k+2\tau_k^2 \leq 1-t(\kappa)+ 4(1+\kappa)\sqrt{\rho(s_*+s_k+s_{k+1})}+2\rho(s_*+s_k+s_{k+1}).\] Thus \begin{align}\label{eq:rbound} \|\r_{k+1}\|_2\leq\Big[1-t(\kappa)+4(1+\kappa)\sqrt{\rho(s_*+s_k+s_{k+1})}+2\rho(s_*+s_k+s_{k+1})\Big]\cdot\|\r_k\|_2+\sqrt{4\lambda_k s_*}. \end{align} We now claim by induction on $k$ that, if $\rho(s_*) \leq c_0$ for a sufficiently small constant $c_0>0$, then \begin{equation}\label{eq:induction} s_k\leq \frac{90}{t(\kappa)^2} s_*, \qquad \|\r_k\|_2 \leq \frac{4\sqrt{\lambda_k s_*}}{t(\kappa)} \end{equation} for every $k$. For $k=0$, these are satisfied as $s_0=0$ and $\lambda_0=\lambda_{\max} \geq \|\r_0\|_2^2=\|\mathbf{x}_*\|_2^2$. Assume inductively that these hold for $k$. Note that for any $t \geq 1$, nonnegativity and concavity yield $\rho(ts_*) \leq t\rho(s_*)$. In particular, assuming (\ref{eq:induction}) and applying $\kappa<\sqrt{3/2}-1$ and $\rho(s_*) \leq c_0$, we get for small enough $c_0$ that $(1+\zeta_k)^2<2$. 
Then applying (\ref{eq:induction}) to (\ref{eq:sbound}), we get for a constant $C \equiv C(\kappa)>0$ not depending on $c_0$ that \begin{align*} \left(1-C\sqrt{c_0}\right)\lambda_k s_{k+1} \leq \left(\frac{80}{t(\kappa)^2}+2\right)\lambda_k s_*. \end{align*} Then for small enough $c_0$, \[s_{k+1} \leq \left(1-C\sqrt{c_0}\right)^{-1}\frac{82}{t(\kappa)^2}s_*< \frac{90}{t(\kappa)^2}s_*.\] Applying (\ref{eq:induction}) and this bound to (\ref{eq:rbound}), for sufficiently small $c_0$, we have \[\|\r_{k+1}\|_2 \leq \left(1-\frac{3}{4}t(\kappa)\right)\|\r_k\|_2 +\sqrt{4\lambda_ks_*} \leq \left(\frac{4}{t(\kappa)}-1\right)\sqrt{\lambda_ks_*}.\] Applying $\sqrt{\lambda_k}=\sqrt{\lambda_{k+1}/\gamma} \leq \sqrt{\lambda_{k+1}}(1-t(\kappa)/4)^{-1}$, we obtain from this \[\|\r_{k+1}\|_2 \leq \frac{4\sqrt{\lambda_{k+1}s_*}}{t(\kappa)}.\] This completes the induction and establishes (\ref{eq:induction}) for every $k$. The bound (\ref{eq:recoverynonoise}) follows from (\ref{eq:induction}), the definition of $\r_k$, and $\lambda_k=\lambda_{\max}\gamma^k$. Since $\mathbf{x}_k,\mathbf{x}_* \in (\delta \mathbb{Z})^p$, any nonzero difference $\mathbf{x}_k-\mathbf{x}_*$ has $\ell_2$-norm at least $\delta$; hence for $k$ large enough such that the right side of (\ref{eq:recoverynonoise}) is less than $\delta$, we must have $\mathbf{x}_k=\mathbf{x}_*$. \end{proof} We now extend this result to provide a robust recovery guarantee in the presence of measurement and discretization error. The proof is an extension of the above argument, which we defer to Appendix \ref{appendix:recovery}. \begin{theorem}\label{thm:oracle} Suppose $\sqrt{\eta} \cdot \mathbf{A}$ satisfies $(\kappa,\rho)$-cRIP, where $\kappa \in [0,\sqrt{3/2}-1)$. Choose tuning parameters $\gamma,\lambda_{\max}$ as in Theorem \ref{thm:noiseless}. 
Then for some constants $C,C',c>0$ depending only on $\kappa$, the following holds: Let $\mathbf{x} \in (\delta\mathbb{Z})^p$ be any vector satisfying \[\rho(s) \leq c, \qquad s \equiv \max(\|\nabla \mathbf{x}\|_0,1).\] Let $D$ be the maximum vertex degree of $G$, and define \[E(\mathbf{x})=\left(1+\sqrt{D\rho(s)}\right)\cdot \left(\|\mathbf{x}-\mathbf{x}_*\|_2+\frac{\|\mathbf{x}-\mathbf{x}_*\|_1}{\sqrt{s}}\right)+\sqrt{\eta} \cdot \|\mathbf{e}\|_2.\] Suppose $\lambda_{\max} \geq CE(\mathbf{x})^2/s \geq \lambda_{\min}$, and let $k_*$ be the last iterate of Algorithm \ref{alg:ITALE} where $\lambda_{k_*} \geq CE(\mathbf{x})^2/s$. Then $\hat{\mathbf{x}} \equiv \mathbf{x}_{k_*}$ satisfies \[\|\hat{\mathbf{x}}-\mathbf{x}_*\|_2 \leq C'E(\mathbf{x}).\] \end{theorem} The quantity $E(\mathbf{x})$ above is the combined measurement error and approximation error of $\mathbf{x}_*$ by a discretized piecewise-constant signal $\mathbf{x}$. For any $\mathbf{A}$ scaled such that it satisfies $(\kappa,\rho)$-cRIP with $\eta=1$, and for $G$ with maximum degree $D \lesssim 1$, we get \[\|\hat{\mathbf{x}}-\mathbf{x}_*\|_2 \lesssim \|\mathbf{x}_*-\mathbf{x}\|_2+\frac{\|\mathbf{x}_*-\mathbf{x}\|_1}{\sqrt{s}}+\|\mathbf{e}\|_2.\] This guarantee is similar to those for compressed sensing of sparse signals in \cite{candesstable,needelltropp,blumensathdavies}. If $\mathbf{x}_*$ has exact gradient-sparsity $\|\nabla \mathbf{x}_*\|_0 \leq s$, then also $\mathbf{x} \in (\delta \mathbb{Z})^p$ obtained by entrywise rounding to $\delta\mathbb{Z}$ satisfies $\|\nabla \mathbf{x}\|_0 \leq s$. Hence choosing $\delta \ll \|\mathbf{e}\|_2/p$ further ensures \[\|\hat{\mathbf{x}}-\mathbf{x}_*\|_2 \lesssim \|\mathbf{e}\|_2,\] i.e.\ the discretization error is negligible in the above bound. The required number of measurements is the same as in Theorem \ref{thm:noiseless} for the noiseless setting, which is $n \gtrsim s_*\log(1+|E|/s_*)$ for i.i.d.\ Gaussian designs. 
This is the claim (\ref{eq:heuristicguarantee}) stated in the introduction. \section{Simulations}\label{sec:simulations} We compare $\hat{\mathbf{x}}^\mathrm{ITALE}$ using the $\ell_0$ edge cost (\ref{eq:l0cost}) with $\hat{\mathbf{x}}^\mathrm{TV}$ minimizing the TV-regularized objective (\ref{eq:TVobjective}), for several signals on the 1-D and 2-D lattice graphs. We used software developed by \cite{boykovkolmogorov} to implement the alpha-expansion sub-routine of Algorithm \ref{alg:AlphaExpansion}. To minimize the TV-regularized objective (\ref{eq:TVobjective}), we used the generalized lasso path algorithm from \cite{rjtibshirani} in the 1-D examples and the FISTA algorithm from \cite{beckteboulle} in the 2-D examples. All parameters were set as described in Section \ref{sec:model} for ITALE. \subsection{1-D changepoint signals}\label{sec:1D} \begin{figure} \begin{minipage}[t]{\textwidth} \includegraphics[width=\textwidth]{spike_150_1.pdf} \caption{Left: True spike signal $\mathbf{x}_*$ (black) and a depiction of $\mathbf{x}_*+\mathbf{A}^\mathsf{T}\mathbf{e}/n$ (red) under low noise $\sigma=1$ for i.i.d.\ measurements $A_{ij} \sim \mathcal{N}(0,1)$ with 15\% undersampling. Middle and right: True signal (black), $\hat{\mathbf{x}}^\mathrm{ITALE}$ (green), and $\hat{\mathbf{x}}^\mathrm{TV}$ (blue) for one simulation.}\label{fig:spike_150_1} \end{minipage} \begin{minipage}[t]{\textwidth} \includegraphics[width=\textwidth]{spike_150_6.pdf} \caption{Same setting as Figure \ref{fig:spike_150_1}, for noise level $\sigma=6$.} \label{fig:spike_150_6} \end{minipage} \end{figure} \begin{figure} \begin{minipage}[t]{\textwidth} \includegraphics[width=\textwidth]{wave_150_1.pdf} \caption{Left: True wave signal $\mathbf{x}_*$ (black) and a depiction of $\mathbf{x}_*+\mathbf{A}^\mathsf{T}\mathbf{e}/n$ (red) under low noise $\sigma=1$ for i.i.d.\ measurements $A_{ij} \sim \mathcal{N}(0,1)$ with 15\% undersampling. 
Middle and right: True signal (black), $\hat{\mathbf{x}}^\mathrm{ITALE}$ (green), and $\hat{\mathbf{x}}^\mathrm{TV}$ (blue) for one simulation.}\label{fig:wave_150_1} \end{minipage} \begin{minipage}[t]{\textwidth} \includegraphics[width=\textwidth]{wave_150_6.pdf} \caption{Same setting as Figure \ref{fig:wave_150_1}, for noise level $\sigma=6$.}\label{fig:wave_150_6} \end{minipage} \end{figure} We tested ITALE on two simulated signals for the linear chain graph, with different changepoint structures: the ``spike'' signal depicted in Figures \ref{fig:spike_150_1} and \ref{fig:spike_150_6}, and the ``wave'' signal depicted in Figures \ref{fig:wave_150_1} and \ref{fig:wave_150_6}. Both signals have $p=1000$ vertices with $s_*=9$ breakpoints. The spike signal consists of short segments of length 10 with elevated mean, while the breaks of the wave signal are equally-spaced. We sampled random Gaussian measurements $A_{ij} \overset{iid}{\sim} \mathcal{N}(0,1)$. The measurement error $\mathbf{e}$ was generated as Gaussian noise $e_k \overset{iid}{\sim} \mathcal{N}(0,\sigma^2)$. To provide intuition for the tested signal-to-noise levels, we plot $\mathbf{x}_*+\mathbf{A}^\mathsf{T}\mathbf{e}/n$ in red in Figures \ref{fig:spike_150_1} to \ref{fig:wave_150_6}, corresponding to two different tested noise levels. Recall that ITALE denoises $\a_{k+1}=\mathbf{x}_*+(\mathbf{I}-\mathbf{A}^\mathsf{T}\mathbf{A}/n)(\mathbf{x}_k-\mathbf{x}_*)+\mathbf{A}^\mathsf{T}\mathbf{e}/n$ in each iteration (corresponding to $\eta=1/n$ for this normalization of $\mathbf{A}$), so that $\mathbf{x}_*+\mathbf{A}^\mathsf{T}\mathbf{e}/n$ represents the noisy signal in an ideal setting if $\mathbf{x}_k \equiv \mathbf{x}_*$ is a perfect estimate from the preceding iteration. 
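A minimal sketch of this 1-D simulation setup follows; the exact positions and heights of the spike segments are illustrative assumptions (the paper's signal fixes a particular layout with $s_*=9$ breakpoints), but the measurement model and the effective noisy signal $\mathbf{x}_*+\mathbf{A}^\mathsf{T}\mathbf{e}/n$ are as described above:

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, sigma = 1000, 150, 1.0           # 15% undersampling, low noise

# Illustrative "spike"-type signal: short elevated segments of length 10.
# (Segment positions and heights here are assumptions, not the paper's exact layout.)
x_star = np.zeros(p)
for start in (100, 300, 500, 700, 900):
    x_star[start:start + 10] = 3.0

A = rng.normal(size=(n, p))            # A_ij ~ N(0, 1)
e = rng.normal(scale=sigma, size=n)    # e_k ~ N(0, sigma^2)
y = A @ x_star + e

# Effective noisy signal denoised by one ITALE step when x_k = x_*
# (eta = 1/n for this normalization of A), plotted in red in the figures:
noisy = x_star + A.T @ e / n
```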
Tables \ref{tab:spike} and \ref{tab:wave} display the root-mean-squared estimation errors \[\text{RMSE} = \sqrt{\|\hat{\mathbf{x}}-\mathbf{x}_*\|_2^2/p},\] for undersampling ratio $n/p$ from 10\% to 50\%, and a range of noise levels $\sigma$ that yielded RMSE values between 0 and roughly 0.2. Each reported error value is an average across 20 independent simulations. In these results, the iterate $k$ in ITALE and penalty parameter $\lambda$ in TV were both selected using 5-fold cross-validation. Best-achieved errors over all $k$ and $\lambda$ are reported in Appendix \ref{appendix:bestachieved}, and suggest the same qualitative conclusions. Standard deviations of the RMSE across simulations are also reported in Appendix \ref{appendix:bestachieved}. In the spike example, ITALE yielded lower RMSE in all of the above settings of undersampling and signal-to-noise. Figures \ref{fig:spike_150_1} and \ref{fig:spike_150_6} display one instance each of the resulting estimates $\hat{\mathbf{x}}^\mathrm{ITALE}$ and $\hat{\mathbf{x}}^\mathrm{TV}$ at 15\% undersampling, illustrating some of their differences and typical features. Under optimal tuning, $\hat{\mathbf{x}}^\mathrm{TV}$ returns an undersmoothed estimate even in a low-noise setting where ITALE can often correctly estimate the changepoint locations. With higher noise, ITALE begins to miss changepoints and oversmooth. In the wave example, with undersampling ranging between 15\% and 50\%, ITALE yielded lower RMSE at most tested noise levels. Figures \ref{fig:wave_150_1} and \ref{fig:wave_150_6} depict two instances of the recovered signals at 15\% undersampling. For 10\% undersampling, the component $(\mathbf{I}-\mathbf{A}^\mathsf{T}\mathbf{A}/n)(\mathbf{x}_k-\mathbf{x}_*)$ of the effective noise was sufficiently high such that ITALE often did not estimate the true changepoint structure, and TV usually outperformed ITALE in this case. 
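The fold logic of the cross-validated choice of the ITALE iterate $k$ can be sketched as follows. Since the alpha-expansion solver is not reproduced here, plain gradient (Landweber) iterations stand in for the ITALE iterates; the selection mechanism, minimizing held-out prediction error for $\mathbf{y}$, is the same:

```python
import numpy as np

def landweber_iterates(A, y, K):
    """Stand-in for the ITALE iterates: K plain gradient steps on ||Ax - y||^2."""
    eta = 1.0 / np.linalg.norm(A, 2) ** 2        # conservative step size
    x, out = np.zeros(A.shape[1]), []
    for _ in range(K):
        x = x - eta * A.T @ (A @ x - y)
        out.append(x.copy())
    return out

def cv_select_iterate(A, y, K=30, n_folds=5, seed=0):
    """Select the iterate index k by n_folds-fold CV on prediction error for y."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(y)), n_folds)
    err = np.zeros(K)
    for hold in folds:
        train = np.setdiff1d(np.arange(len(y)), hold)
        for k, xk in enumerate(landweber_iterates(A[train], y[train], K)):
            err[k] += np.sum((A[hold] @ xk - y[hold]) ** 2)
    return int(np.argmin(err))
```

In the actual ITALE runs, the step size $\eta$ would additionally be rescaled by the training-sample fraction in each fold, as noted in the tuning discussion above.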
The standard deviations of RMSE reported in Appendix \ref{appendix:bestachieved} indicate that the ITALE estimates are somewhat more variable than the TV estimates in all tested settings, but particularly so in this 10\% undersampling regime. \begin{table} \input{spike_CV.tab} \caption{RMSE for the 1-D spike signal, averaged over 20 simulations.}\label{tab:spike} \end{table} \begin{table} \input{wave_CV.tab} \caption{RMSE for the 1-D wave signal, averaged over 20 simulations.}\label{tab:wave} \end{table} \subsection{2-D phantom images}\label{sec:2D} Next, we tested ITALE on three 2-D image examples, corresponding to piecewise-constant digital phantom images of varying complexity: the Shepp-Logan digital phantom depicted in Figure \ref{fig:shepplogan}, a digital brain phantom from \cite{fesslerhero} depicted in Figure \ref{fig:brain}, and the XCAT chest slice from \cite{gongetal} as previously depicted in Figure \ref{fig:XCAT}. \begin{table} \input{SheppLogan_CV.tab} \caption{RMSE for the Shepp-Logan phantom.}\label{tab:shepplogan} \end{table} \begin{table} \input{Brain_CV.tab} \caption{RMSE for the brain phantom.}\label{tab:brain} \end{table} \begin{table} \input{XCAT_CV.tab} \caption{RMSE for the XCAT chest slice phantom.}\label{tab:XCAT} \end{table} Each image $\mathbf{x}_*$ was normalized to have pixel values in $[0,1]$. We sampled random Fourier design matrices as specified in (\ref{eq:weightedfourier}), fixing the constant $C_0=10$ in the weight distribution (\ref{eq:nu}) for this design. This yielded the best recovery across several tested values for both ITALE and TV. The measurement error $\mathbf{e}$ was generated as Gaussian noise $e_k \overset{iid}{\sim} \mathcal{N}(0,\sigma^2)$, applied to the measurements $\mathcal{F}_{(i,j)}^*\mathbf{x}_*/\sqrt{\nu(i,j)}$ before the $1/\sqrt{n}$ normalization. 
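As noted after Proposition \ref{prop:2DFourier}, these complex Fourier measurements are handled by stacking real and imaginary parts into a real system, which preserves $\|\tilde{\mathbf{A}}\mathbf{x}\|_2$ for real $\mathbf{x}$. A minimal sketch of this conversion:

```python
import numpy as np

def complex_to_real(A_tilde, y_tilde):
    """Stack real/imaginary parts so ||A x||_2 = ||A_tilde x||_2 for real x."""
    A = np.vstack([A_tilde.real, A_tilde.imag])       # shape (2n, p), real
    y = np.concatenate([y_tilde.real, y_tilde.imag])  # shape (2n,), real
    return A, y
```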
Tables \ref{tab:shepplogan}, \ref{tab:brain}, and \ref{tab:XCAT} display the RMSE of the estimates $\hat{\mathbf{x}}^\mathrm{ITALE}$ and $\hat{\mathbf{x}}^\mathrm{TV}$ for a single simulation, with tuning parameters selected by 5-fold cross-validation. Best-achieved errors are reported in Appendix \ref{appendix:bestachieved}. For the simpler Shepp-Logan and brain phantom images, which exhibit stronger gradient-sparsity, ITALE yielded lower RMSE in nearly all tested undersampling and signal-to-noise regimes. For the XCAT chest phantom, with undersampling ranging between 15\% and 50\%, ITALE yielded lower RMSE at a range of tested noise levels, and in particular in those settings of higher signal-to-noise. With 10\% undersampling for the XCAT phantom, ITALE was not able to recover some details of the XCAT image even with no measurement noise, and its RMSE was higher than that of TV at all tested noise levels. The results in Appendix \ref{appendix:bestachieved} indicate that this is partially due to sub-optimal selection of the tuning parameter by 5-fold cross-validation, caused by the further reduction of undersampling from 10\% to 8\% in the size of the training data in each fold. \begin{figure} \begin{minipage}{0.25\textwidth} \includegraphics[width=0.9\textwidth]{SheppLogan_phantom.png} \end{minipage}% \begin{minipage}{0.5\textwidth} \includegraphics[width=0.45\textwidth]{{SheppLogan_ITALE_0.15_04_CV}.png}% \includegraphics[width=0.45\textwidth]{{SheppLogan_ITALE_0.15_16_CV}.png}\\ \includegraphics[width=0.45\textwidth]{{SheppLogan_TV_0.15_04_CV}.png}% \includegraphics[width=0.45\textwidth]{{SheppLogan_TV_0.15_16_CV}.png}% \end{minipage} \caption{Left: Original Shepp-Logan phantom. Top row: $\hat{\mathbf{x}}^\mathrm{ITALE}$ from 15\% undersampled and reweighted Fourier measurements, in low noise ($\sigma=4$, left) and medium noise ($\sigma=16$, right) settings. 
Bottom row: $\hat{\mathbf{x}}^\mathrm{TV}$ for the same measurements.}\label{fig:shepplogan} \end{figure} \begin{figure} \begin{minipage}{0.25\textwidth} \includegraphics[width=0.8\textwidth]{Brain_phantom.png} \end{minipage}% \begin{minipage}{0.5\textwidth} \includegraphics[width=0.4\textwidth]{{Brain_ITALE_0.2_16_CV}.png}% \includegraphics[width=0.4\textwidth]{{Brain_ITALE_0.2_40_CV}.png}\\ \includegraphics[width=0.4\textwidth]{{Brain_TV_0.2_16_CV}.png}% \includegraphics[width=0.4\textwidth]{{Brain_TV_0.2_40_CV}.png}% \end{minipage} \caption{Left: Original brain phantom. Top row: $\hat{\mathbf{x}}^\mathrm{ITALE}$ from 20\% undersampled reweighted Fourier measurements, in low noise ($\sigma=16$, left) and medium noise ($\sigma=40$, right) settings. Bottom row: $\hat{\mathbf{x}}^\mathrm{TV}$ for the same measurements.}\label{fig:brain} \end{figure} Examples of recovered signals $\hat{\mathbf{x}}^\mathrm{ITALE}$ and $\hat{\mathbf{x}}^\mathrm{TV}$ are depicted for the Shepp-Logan and brain phantoms in Figures \ref{fig:shepplogan} and \ref{fig:brain}, at 15\% and 20\% undersampling for two low-noise and medium-noise settings. The qualitative comparisons are similar to those in the 1-D simulations, and to those previously depicted for the XCAT chest slice in Figure \ref{fig:XCAT}: As measurement noise increases, ITALE begins to lose finer details, while TV begins to yield an undersmoothed and blotchy image. These observations are also similar to previous comparisons that have been made for algorithms oriented towards $\ell_0$ versus TV regularization for direct measurements $\mathbf{A}=\mathbf{I}$, in \cite{xuetal,fanguan,kimgao}. \section{Conclusion} We have studied recovery of piecewise-constant signals over arbitrary graphs from noisy linear measurements. We have proposed an iterative algorithm, ITALE, to minimize an $\ell_0$-edge-penalized least-squares objective. 
Under a cut-restricted isometry property for the measurement design, we have established global recovery guarantees for the estimated signal, in noisy and noiseless settings. In the field of compressed sensing, for signals exhibiting sparsity in an orthonormal basis, $\ell_1$-regularization \cite{donoho,candesstable,candesrobust} and discrete iterative algorithms \cite{troppgilbert,needelltropp,blumensathdavies} constitute two major approaches for signal recovery. It has been observed that for recovering piecewise-constant signals, regularizing the signal gradient in a sparse analysis framework can yield better empirical recovery than regularizing signal coefficients in such a basis. Whereas $\ell_1$-regularization extends naturally to the sparse analysis setting, iterative algorithms have received less attention. By applying the alpha-expansion idea for MAP estimation in discrete Markov random fields, ITALE provides a computationally tractable approach for ``iterative thresholding'' recovery of gradient-sparse signals, with provable recovery guarantees. In contrast to sparse signal recovery over an orthonormal basis, the comparison of $\ell_1$ versus $\ell_0$ regularization for gradient-based sparsity is graph-dependent. Using an $\ell_0$-based approach, we establish signal recovery guarantees on the 1-D and 2-D lattice graphs with numbers of measurements optimal up to a constant factor, which were not previously available for TV-regularization. This difference is closely connected to slow and fast rates of convergence for lasso and best-subset regression for correlated regression designs \cite{buhlmannetal,zhangwainwrightjordan,dalalyanetal}. ITALE provides a polynomial-time approach for $\ell_0$-regularization in a special graph-based setting, and we believe it is an interesting question whether similar algorithmic ideas may be applicable to other classes of sparse regression problems.
1402.1418
\section{\sc #1}} \def\scss#1{\subsection{\sc #1}} \def\scsss#1{\subsubsection{\sc #1}}
\newcommand{\bin}[2]{{#1 \choose #2}}
\newcommand{\comp}[2]{\phantom{\alpha}^{(#1)}\hspace{-19pt}\alpha_{\phantom{(1)}#2}}
\newcommand{\compt}[2]{\phantom{\alpha}^{(#1)}\hspace{-19pt}\widetilde{\alpha}_{\phantom{(1)}#2}}

\thispagestyle{empty}

\begin{document}

\begin{flushright}
{\today}
\end{flushright}
\vspace{10pt}

\begin{center}
{\Large\sc Pre -- Inflationary Clues from String Theory ?}\\
\vspace{25pt}
{\sc N.~Kitazawa${}^{\; a}$ \ and \ A.~Sagnotti${}^{\; b}$}\\[15pt]
{${}^a$\sl\small Department of Physics, Tokyo Metropolitan University\\
Hachioji, Tokyo 192-0397 \ JAPAN\\
e-mail: {\small \it kitazawa@phys.se.tmu.ac.jp}}\vspace{8pt}

{${}^b$\sl\small Scuola Normale Superiore and INFN\\
Piazza dei Cavalieri, 7\\
56126 Pisa \ ITALY\\
e-mail: {\small \it sagnotti@sns.it}}\vspace{10pt}
\vspace{24pt}
{\sc\large Abstract}\end{center} \noindent ``Brane supersymmetry breaking'' occurs in String Theory when the only available combinations of D--branes and orientifolds are not mutually BPS and yet do not introduce tree--level tachyon instabilities. It is characterized by the emergence of a steep exponential potential, and thus by the absence of maximally symmetric vacua. The corresponding low--energy supergravity admits intriguing spatially--flat cosmological solutions where a scalar field \emph{is forced to climb up} toward the steep potential after an initial singularity, and additional milder terms can inject an inflationary phase during the ensuing descent. We show that, in the resulting power spectra of scalar perturbations, an infrared suppression is typically followed by a pre--inflationary peak that reflects the end of the climbing phase and can lie well apart from the approximately scale invariant profile. A first look at WMAP9 raw data shows that, while the $\chi^2$ fits for the low--$\ell$ CMB angular power spectrum are clearly compatible with an almost scale invariant behavior, they display nonetheless an eye--catching preference for this type of setting within a perturbative string regime. \setcounter{page}{1} \pagebreak \newpage \section{\sc Introduction}\label{sec:intro} \vskip 12pt It would be bizarre if Supersymmetry \cite{SUSY} were not to play a role in the Fundamental Interactions, since its local realization in Supergravity \cite{SUGRA} and its completion in String Theory \cite{strings} contain profound lessons on the links between gravity and the other forces. Yet, the apparent lack of super--partners of the known particles has already survived the first round of experiments at LHC \cite{LHC}. Under these circumstances, the standard recipe to keep Supersymmetry alive is to try and make all putative partners heavy enough, appealing to some mechanism of supersymmetry breaking \cite{SUSY_breaking}. 
However, arriving at a fully satisfactory scenario of this type remains a key challenge in current attempts to combine the Standard Model of Particle Physics with gravity, despite decades of intense effort and a number of important results. A proper understanding of supersymmetry breaking cannot forgo a detailed grasp of general matter couplings in Supergravity, which has long been available for models with ${\cal N}=1$ supersymmetry in four dimensions \cite{cfgv} but is not nearly as complete in the higher--dimensional settings that are so important in String Theory. As a result, perhaps, non--supersymmetric string compactifications are generally expected to spell trouble for the vacuum state and have been explored only to a limited extent. Still, there is a higher--dimensional setting that was identified long ago and stands out for its relative simplicity and rigidity. This is ``brane supersymmetry breaking'' (BSB) \cite{bsb}, which presents itself in some orientifold vacua \cite{orientifolds} where Ramond--Ramond (RR) charge neutrality requires the simultaneous presence, in the vacuum, of combinations of branes and orientifolds that are not mutually BPS and yet do not introduce tree--level tachyon instabilities. These extended objects leave behind a distinctive mark, a steep exponential potential proportional to their overall tension, whose lack of local minima excludes from the outset maximally symmetric geometries, and flat space in particular, for these systems. The phenomenon finds its simplest manifestation in the ten--dimensional Sugimoto model in \cite{bsb}, where supersymmetry appears exact, at tree level, insofar as the closed spectrum is concerned, but is actually non-linearly realized due to the non-supersymmetric brane configuration, whose modes include a singlet spinor that plays the role of a goldstino \cite{10d_bsb_couplings}.
The nine--dimensional vacuum geometry realizing the largest symmetry allowed for this system was presented by Dudas and Mourad in \cite{dm_vacuum}. The ten--dimensional Sugimoto model in \cite{bsb} admits an intriguing spatially flat cosmological solution where the dilaton exhibits a striking behavior. The solution has a long history \cite{lm,exp_sol,dm_vacuum}, but its main lesson was appreciated only recently \cite{dks}. A minimally coupled scalar field ought to possess the two distinct options of emerging from an initial singularity while \emph{descending} or \emph{climbing} mild exponential potentials of the type \begin{equation} {V}(\varphi) \ = \ {V}_0 \, e^{\,2\,\gamma\, \varphi} \ , \label{potonexp_intro} \end{equation} and this is indeed the case. However, \emph{for $\gamma$ large enough only the climbing behavior becomes possible}, and this is what we shall refer to as the climbing phenomenon. In particular, with the convenient non--canonical normalization for $\varphi$ described in the following sections the climbing behavior sets in at $\gamma=1$ for all space--time dimensions $d$. A striking feature of ``brane supersymmetry breaking'' is that its potential lies precisely at the critical point $\gamma=1$ \cite{dks}. Moreover, this property continues to hold for $d<10$ for a special combination of the dilaton and the breathing mode, and under the assumption that this field dominates the early dynamics other branes present in String Theory \cite{branescan,branesugimoto} can contribute milder exponential terms \cite{dks,fss,as13} that could have injected inflation \cite{inflation} after the initial climbing phase.
One is thus led to consider the class of potentials \cite{dks,dkps} \begin{equation} {V}(\varphi) \ = \ {V}_0 \left( e^{\,2\,\varphi} \ + \ e^{\,2\,\gamma\, \varphi} \right) \ \label{potwoexp_intro} \end{equation} in four dimensions, where $\gamma< 1/\sqrt{3}$ in order to allow for the onset of inflation following an initial climbing phase, with the eventual goal of comparing their implications with the low--$\ell$ end of the CMB angular power spectrum. An enticing feature of the climbing phenomenon is that it links two apparently unrelated problems, the breaking of Supersymmetry and the onset of inflation. We are well aware of the limitations of the simple potentials of eq.~\eqref{potwoexp_intro}, which result for one matter in tensor--to--scalar ratios that are too large \cite{inflation}, but we can anticipate that the key phenomenon that we shall come to can be traced to the behavior of the scalar field near the ``hard'' exponential, the main datum that String Theory and ``brane supersymmetry breaking'' contribute to this discussion, so that it occurs in a variety of more realistic potentials, some of which will be touched upon in the following. Some features of the scalar dynamics in the potential \eqref{potwoexp_intro} can be readily anticipated. To begin with, the ``hard'' potential forces the field to emerge from an initial singularity while climbing up from large negative values of $\varphi$, and this early phase is essentially driven by the mild exponential. Notice that the climbing phenomenon constrains the choice of initial conditions, a nice feature for a theory of inflation, so that a single parameter is left in this case, a constant $\varphi_0$ that determines to which extent the scalar feels the ``hard'' exponential. 
Depending on the value of $\varphi_0$, the reversal that opens the descent can be more or less abrupt, but in any case one would expect it to bring along a spurt of exponential expansion for the Universe, before the actual inflationary phase sets in. This generic feature was readily recognized and actually served as a motivation for the analysis in \cite{dks}, but turning it into a quantitative prediction for the spectrum of scalar perturbations proved rather difficult, so much so that it failed to emerge in \cite{dkps}. As we shall see, a pre--inflationary peak does show up in the power spectrum of scalar perturbations that, as discussed in \cite{dkps}, experiences in general a wide infrared depression before merging with an almost scale invariant profile ${\cal P}_\zeta(k) \sim k^{n_s-1}$ as the dynamical evolution finally approaches the Lucchin--Matarrese (LM) attractor \cite{lm}. The reversal becomes less abrupt for lower values of $\varphi_0$, while the peak grows in size, until it eventually turns into the typical feature well described in \cite{destri}. This signals the approach to slow roll in more conventional models, and actually shows up, in the presence of a mild exponential alone, in the whole range of values for $\varphi_0$ that we have explored. As expected, however, the reversal of the scalar motion leaves no signs in the corresponding spectrum of tensor perturbations, consistently with the analysis in \cite{dkps}. It is natural to inquire whether this type of behavior could have some bearing on our current understanding of the CMB angular power spectrum. We shall qualify under which assumptions this might well be the case, and we shall also manage to vindicate, to some extent at least, our expectation via a detailed, if preliminary, comparison with CMB data.
As we have anticipated, the region of interest is the low--$\ell$ tail of the angular power spectrum, where some anomalies with respect to the $\Lambda$CDM setting have long been spotted but where cosmic variance, which reflects our very special observation site for the Universe, adds more than one word of caution to any attempt to interpret their actual meaning. The main low--$\ell$ anomaly in the CMB angular power spectrum is quadrupole reduction, and it is too large to go unnoticed. Indeed, a number of works recently touched upon the subject from different viewpoints \cite{dkps,quadrupole_red}, and they include a detailed re--analysis of the cosmic mask \cite{gruppuso}. Interestingly one can argue, on rather general grounds, that \emph{quadrupole reduction accompanies naturally the emergence from an initial singularity}. Moreover, for low multipoles $\ell \lesssim 35$ the actual CMB observable, the angular power spectrum, is determined essentially by a Bessel--like transform \cite{mukhanov_slow} that follows closely the power spectrum, so that in principle features of the former can reflect themselves in similar features of the latter, and vice versa. In practice, however, this type of correspondence requires an additional assumption that, if correct, would make, by itself, the whole story quite interesting. The assumption, which is not implausible numerically, posits that the largest wavelengths entering the horizon at the present epoch are essentially those that exited around the onset of the inflationary phase. Or, if you will, that the low--$\ell$ CMB data are opening in front of our instruments a small window on the onset of inflation, the very phenomenon that is usually advocated to explain the apparent flatness and homogeneity of our Universe and also explains naturally the slight tilt of the CMB power spectrum \cite{cm} that was recently confirmed to high precision by PLANCK \cite{PLANCK}.
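The Bessel--like correspondence just invoked can be made concrete in the large--scale Sachs--Wolfe approximation, $C_\ell \propto \int \frac{dk}{k}\, {\cal P}_\zeta(k)\, j_\ell^{\,2}(k\,\Delta\eta)$. The following sketch is our own illustration rather than anything taken from the analysis above (the function names, the normalization and the cutoffs are ours); it checks that a scale invariant spectrum yields the familiar flat plateau, $\ell(\ell+1)\,C_\ell = \mathrm{const}$, as follows from the identity $\int_0^\infty \frac{dk}{k}\, j_\ell^{\,2}(k) = \frac{1}{2\,\ell(\ell+1)}$.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import spherical_jn

def sachs_wolfe_cl(ell, power, delta_eta=1.0, kmin=1e-4, kmax=400.0, n=200_000):
    """Low-ell angular power in the Sachs-Wolfe approximation:
    C_ell ~ int dk/k P(k) j_ell(k * delta_eta)^2  (overall normalization dropped)."""
    k = np.geomspace(kmin, kmax, n)
    jl = spherical_jn(ell, k * delta_eta)
    return trapezoid(power(k) * jl**2 / k, k)

# For a scale-invariant spectrum P(k) = const the plateau is exactly flat:
# int_0^inf dk/k j_ell(k)^2 = 1/(2 ell (ell+1)), so ell(ell+1) C_ell = 1/2 here.
flat = lambda k: np.ones_like(k)
plateau = [ell * (ell + 1) * sachs_wolfe_cl(ell, flat) for ell in (2, 5, 10)]
```

Replacing \texttt{flat} by a spectrum with an infrared depression or a peak redistributes power among the lowest multipoles, which is the kind of imprint discussed in the text.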
Working within this assumption, we shall begin to explore how far one can go in relating the available WMAP9 raw data \cite{wmap9} to the models at stake. We shall explore, to this end, the first 31 multipoles starting from the quadrupole, for a range of values of $\varphi_0$ that encompasses the emergence of the pre--inflationary peak, its growth and its eventual coalescence into the attractor profile. As we shall see, the data are apparently not insensitive to the pre--inflationary peak, since centering it around $\ell = 5$ brings about a noticeable reduction of the $\chi^2$ value by two or three units. Amusingly the agreement improves, as we shall see, for values of $\gamma$ that lie below $0.08$, the choice that would tune the large -- $k$ behavior of ${\cal P}_\zeta(k)$ with the observed spectral index $n_s \approx 0.96$. This result resonates with a key indication of the PLANCK experiment, which favors generically concave inflationary potentials \cite{concave}. As we have anticipated, the pre--inflationary peak draws its origin from the region where the ``hard'' exponential begins to dominate and only the nearby behavior, which is naturally flatter in a concave potential, should play a role. We shall also vindicate this claim by displaying some power spectra computed directly in Starobinsky--like potentials \cite{starobinsky} \begin{equation} {V}_S(\varphi) \ = \ {V}_0\,\left[ \left(1- e^{\,-\,\gamma\,(\varphi+\Delta)}\right)^2 \ + \ e^{\,2\,\varphi}\right] \label{starobinsky_intro} \end{equation} that terminate on the same hard exponential, which possess very similar qualitative features. The Starobinsky potentials have aroused some interest lately in connection with Supergravity \cite{starobinsky_supergravity}, and are not foreign, in principle, to ``brane supersymmetry breaking'', if quantum corrections are taken into account. 
Interestingly, as we shall see, the comparison with CMB raw data favors scenarios of this type that appear to fit well within perturbative String Theory. The string coupling is in fact sized by $e^\varphi$, a quantity that stays well below one for the choices of $\varphi_0$ that result in better fits. We plan to return soon to a more detailed comparison with the CMB, modifying the standard $\Lambda$CDM setup to allow for quadrupole reduction and a pre--inflationary peak, the key features suggested by this class of models \cite{gnks}. To reiterate, among a multitude of available vacua, String Theory suggests some peculiar options related to orientifold models where supersymmetry is broken at the string scale \cite{bsb}. Orientifold models generally allow a wide range of values for the string scale \cite{aadd}, all compatible with the standard values of Newton's constant and of gauge couplings, a range that includes the scales typically associated with inflation. And, as we have stressed, these two scales are linked in the simplest realizations of ``brane supersymmetry breaking'' with a ``climbing scalar''. A common origin for the two phenomena of supersymmetry breaking and inflation would represent an economy of principles, and our results can perhaps serve as a motivation in this respect, although they do not arise generically but only within a specific class of string vacua. The structure of the paper is as follows. In Section \ref{sec:climbing} we review the climbing phenomenon starting from the one--exponential case, stressing its generality and illustrating its realization in the relatively simple class of potentials of eq.~\eqref{potwoexp_intro} and in the richer class of potentials of eq.~\eqref{starobinsky_intro}. In Section \ref{sec:powerspectrum}, which focusses on power spectra of scalar perturbations, we pinpoint the origin of the pre--inflationary climbing peak and we illustrate its dependence on $\varphi_0$.
We also discuss briefly, for the sake of comparison, corresponding spectra of tensor perturbations for the same range of values for $\varphi_0$. In Section \ref{sec:observables} we move some first steps toward a quantitative comparison with the CMB, insofar as the first 30 multipoles or so are concerned. Finally, in Section \ref{sec:conclusions} we briefly summarize our main conclusions and some perspectives for future research along these lines, while the Appendix elaborates on the links between the two--exponential potentials \eqref{potwoexp_intro} and String Theory. \vskip 24pt \section{\sc Climbing Scalars and String Theory}\label{sec:climbing} \vskip 12pt The spatially flat cosmologies of interest here correspond to a slight generalization of the Friedmann--Lema\^{\i}tre--Robertson--Walker setting obtained considering the class of four--dimensional metrics \begin{equation} ds^{\,2} \ = \ e^{\,2\,{\cal B}(t)} \, dt^2 \ - \ e^{\,\frac{2\,{\cal A}(t)}{3}}\ d{\bf x} \,\cdot \, d{\bf x} \ , \label{FLRW_gen} \end{equation} where for later convenience we wrote the scale factor $a(t)$ in the form \begin{equation} a(t) \ = \ e^{\,\frac{{\cal A}(t)}{3}} \ . \end{equation} These types of cosmologies emerge naturally when Einstein gravity is minimally coupled to a scalar field subject to a potential $V(\phi)$, so that in a ``mostly negative'' signature, \begin{equation} \mathcal{S} \,= \, \int \, d^{\,4} \,x \, \sqrt{- \, \mbox{det} \,g} \left[ \, \frac{1}{2\, k_N^2} \ R \, + \ \frac{1}{2}\ g^{\mu\nu} \, \partial_\mu \,\phi \ \partial_\nu \, \phi \, - \, V(\phi) \right] \ . \label{scalanorma} \end{equation} The introduction of the gauge function ${\cal B}$ is very convenient, since it allows one to obtain analytic solutions, albeit not in terms of the actual cosmic time measured by comoving observers, in a single--exponential potential \cite{exp_sol} and in a number of similar instructive cases \cite{fss,bo_integrable}.
Here \begin{equation} {R^\mu}_{\nu\rho\sigma} \ = \ \partial_\sigma \Gamma^{\mu}_{\nu\rho} \ - \ \partial_\rho \Gamma^{\mu}_{\nu\sigma} \ + \ \Gamma^{\mu}_{\sigma\tau} \, \Gamma^{\tau}_{\nu\rho} \ - \ \Gamma^{\mu}_{\rho\tau} \, \Gamma^{\tau}_{\nu\sigma} \end{equation} and $R={\delta_\mu}^\rho \, g^{\,\nu\sigma}\, {R^\mu}_{\nu\rho\sigma}$, and with the convenient redefinition \begin{equation} \varphi \ = \ k_N \, \sqrt{\frac{3}{2}} \ \phi \ , \label{redef} \end{equation} for the class of metrics of eq.~\eqref{FLRW_gen} and with $\varphi$ only depending on $t$ the Lagrangian reduces, up to an overall constant, to \begin{equation} {\cal L} \ = \ e^{\,{\cal A} \, -\, {\cal B}} \left[ \, \frac{1}{2} \, \left( \frac{d\varphi}{dt}\right)^2 \ - \ \frac{1}{2} \, \left(\frac{d{\cal A}}{dt} \right)^2 \ - \ \frac{3}{2} \ k_N^2 \ e^{\,2\, {\cal B}} \, V \, \right] \ . \end{equation} If the potential $V$ is always \emph{positive}, one can work in the very convenient gauge determined by the condition \begin{equation} V \, e^{\,2\,{\cal B}} \ = \ V_0 \ , \label{gauge} \end{equation} where $V_0$ denotes its overall scale. In terms of the dimensionless ``parametric time'' \begin{equation} \tau \ = \ t\, \sqrt{3\, V_0\, k_N^2} \ , \end{equation} if $\dot{\cal A}>0$ the equations of motion take the convenient form \cite{dks} \begin{eqnarray} && \dot{\cal A}^{\,2} \ - \ \dot{\varphi}^{\,2} \ = \ 1 \ , \\ && \ddot{\varphi} \ + \ \dot{\varphi} \sqrt{1 \ + \ \dot{\varphi}^{\,2}} \ + \ \frac{V_{\varphi}}{2\, V} \, \left( 1 \ + \ \dot{\varphi}^{\,2} \right) \ = \ 0 \ , \label{eqsgaugeB} \end{eqnarray} where ``dots'' denote derivatives with respect to $\tau$ and $V_{\varphi}$ denotes the derivative of the potential with respect to $\varphi$. Note that in this gauge the driving force originates from the logarithm of the potential. Therefore, it is exactly constant for an exponential potential and remains essentially piecewise constant in the presence of positive combinations of exponentials. 
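As a quick consistency check of eqs.~\eqref{eqsgaugeB}, the sketch below (ours; the variable names and parameter values are merely illustrative) integrates the scalar equation for a single exponential and verifies that $\dot{\varphi}$ relaxes to the constant value $-\,\gamma/\sqrt{1-\gamma^{\,2}}$ of the LM attractor, while $\dot{\cal A}=\sqrt{1+\dot{\varphi}^{\,2}}$ then follows from the constraint.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(tau, y, dlogV):
    """phi' = v;  v' = -v sqrt(1+v^2) - (V_phi/2V)(1+v^2), in the gauge V e^{2B} = V0."""
    phi, v = y
    return [v, -v * np.sqrt(1.0 + v**2) - 0.5 * dlogV(phi) * (1.0 + v**2)]

gamma = 0.25                               # a "mild" exponential, gamma < 1
dlogV = lambda phi: 2.0 * gamma            # d(log V)/dphi for V = V0 exp(2 gamma phi)
sol = solve_ivp(rhs, (0.0, 40.0), [0.0, 5.0], args=(dlogV,), rtol=1e-10, atol=1e-12)

v_late = sol.y[1, -1]
v_attractor = -gamma / np.sqrt(1.0 - gamma**2)
```

The initial velocity is essentially irrelevant: any large $\dot{\varphi}>0$ is rapidly damped by the $\dot{\varphi}\,\sqrt{1+\dot{\varphi}^{\,2}}$ friction, after which the constant logarithmic drive ${V_\varphi}/{2V}=\gamma$ takes over.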
An interesting class of exact solutions exists, in terms of the parametric time $\tau$, for an exponential potential \begin{equation} {V}(\varphi) \ = \ {V}_0 \, e^{\,2\,\gamma\, \varphi}\ . \label{potonexp} \end{equation} For $0< \gamma<1$ there are actually \emph{two} classes of such solutions, which describe respectively a scalar that emerges from the initial singularity while \emph{climbing} or \emph{descending} the potential. To begin with, the \emph{climbing} solutions for the $\tau$--derivatives of $\varphi$ and ${\cal A}$ are \begin{eqnarray} \dot{\varphi} &=& \frac{1}{2} \left[ \sqrt{\frac{1\,-\, \gamma}{1\,+\, \gamma}}\, \coth \left( \frac{\tau}{2}\ \sqrt{1\,-\, \gamma^{\,2}}\, \right) \ - \ \sqrt{\frac{1\,+\, \gamma}{1\,-\, \gamma}}\, \tanh \left( \frac{\tau}{2}\ \sqrt{1\,-\, \gamma^{\,2}}\, \right)\right]\ , \nonumber \\ \dot{\cal A} &=& \frac{1}{2} \left[ \sqrt{\frac{1\,-\, \gamma}{1\,+\, \gamma}}\, \coth \left( \frac{\tau}{2}\ \sqrt{1\,-\, \gamma^{\,2}}\, \right) \ + \ \sqrt{\frac{1\,+\, \gamma}{1\,-\, \gamma}}\, \tanh \left( \frac{\tau}{2}\ \sqrt{1\,-\, \gamma^{\,2}}\, \right)\right] \ , \label{speeds} \end{eqnarray} and the reader should appreciate that these expressions \emph{do not involve any initial--value constants} other than the Big--Bang time, here set for convenience at $\tau=0$. On the other hand, the corresponding fields read \begin{eqnarray} \varphi &=& \varphi_0 \ + \ \frac{1}{1+\gamma} \ \log \sinh \left( \frac{\tau}{2}\ \sqrt{1\,-\, \gamma^{\,2}}\, \right) \ - \ \frac{1}{1-\gamma} \ \log \cosh \left( \frac{\tau}{2}\ \sqrt{1\,-\, \gamma^{\,2}}\, \right)\ , \nonumber \\ {\cal A} &=& \ \frac{1}{1+\gamma} \ \log \sinh \left( \frac{\tau}{2}\ \sqrt{1\,-\, \gamma^{\,2}}\, \right) \ + \ \frac{1}{1-\gamma} \ \log \cosh \left( \frac{\tau}{2}\ \sqrt{1\,-\, \gamma^{\,2}}\, \right) \ ,\label{fields_1exp} \end{eqnarray} and do involve an important \emph{integration constant}, $\varphi_0$. 
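The climbing solution of eqs.~\eqref{speeds} and \eqref{fields_1exp} can be verified symbolically. The sketch below (ours) checks both the constraint $\dot{\cal A}^{\,2}-\dot{\varphi}^{\,2}=1$ and the $\varphi$ equation of eqs.~\eqref{eqsgaugeB}, using ${V_\varphi}/{2V}=\gamma$ and $\sqrt{1+\dot{\varphi}^{\,2}}=\dot{\cal A}$:

```python
import sympy as sp

tau, gamma, phi0 = sp.symbols('tau gamma varphi_0', positive=True)
x = sp.sqrt(1 - gamma**2) * tau / 2
phi = phi0 + sp.log(sp.sinh(x)) / (1 + gamma) - sp.log(sp.cosh(x)) / (1 - gamma)
A = sp.log(sp.sinh(x)) / (1 + gamma) + sp.log(sp.cosh(x)) / (1 - gamma)
phid, Ad = sp.diff(phi, tau), sp.diff(A, tau)

# constraint A'^2 - phi'^2 = 1 and the phi equation of motion: both expected to vanish
constraint = sp.simplify(Ad**2 - phid**2 - 1)
eom = sp.simplify(sp.diff(phid, tau) + phid * Ad + gamma * (1 + phid**2))
```

Both identities hold identically in $\tau$ and for any value of the integration constant $\varphi_0$, which enters the solution only as an additive shift.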
This determines the value of $\varphi$ at a reference ``parametric time'' $\tau>0$ or, what is more interesting for us, bounds from above the largest value that it can attain during the cosmological evolution. Strictly speaking, ${\cal A}$ would also involve an additive constant, but one can set it to zero up to a rescaling of the spatial coordinates. On the other hand, $\varphi_0$ has interesting effects on the dynamics that become particularly pronounced in the two--exponential potentials \begin{equation} {V}(\varphi) \ = \ {V}_0 \left( e^{\,2\,\varphi} \ + \ e^{\,2\,\gamma\, \varphi} \right) \ . \label{potwoexp} \end{equation} Much information on these systems can be extracted from the preceding special case even though a general exact solution is not available, and the one--exponential solutions provide accurate accounts of the behavior close to the initial singularity and at late epochs, where one of the two terms dominates. As we have anticipated, for $\gamma<1$ another class of solutions exists in the potential of eq.~\eqref{potonexp}, which describes a scalar that emerges from the initial singularity \emph{descending} the potential. The corresponding expressions can be simply obtained from eqs.~\eqref{speeds} and \eqref{fields_1exp} with the two replacements $\gamma \to - \gamma$ and $\varphi \to - \varphi$, which are a symmetry of the action of eq.~\eqref{scalanorma} with this potential, and eventually both classes of solutions converge for large $\tau$ on the LM attractor \cite{lm}, for which \begin{equation} \dot{\varphi} \ = \ - \ \frac{\gamma}{\sqrt{1-\gamma^{\,2}}} \ , \qquad \dot{\cal A} \ = \ \frac{1}{\sqrt{1-\gamma^{\,2}}} \ . 
\label{lm} \end{equation} However, only climbing solutions exist for $\gamma \geq 1$, and we should stress that the example has general implications: in any potential that for $\varphi\to +\infty$ is dominated by the first term in eq.~\eqref{potwoexp} the scalar field cannot emerge from an initial singularity descending that end. The two--exponential potentials of eq.~\eqref{potwoexp} find a key motivation in String Theory, in a link between string scale and supersymmetry breaking scale that manifests itself in a class of orientifold models \cite{orientifolds}. The effect is brought about by \emph{classically stable} and yet \emph{non--mutually BPS} combinations of branes and orientifolds that are to be present simultaneously in some orientifold vacua to guarantee RR charge neutrality. It is usually called ``brane supersymmetry breaking'' (BSB) \cite{bsb} and finds its simplest manifestation in the ten--dimensional Sugimoto model in \cite{bsb}. This mechanism is directly responsible for the first contribution present in eq.~\eqref{potwoexp}, a ``hard'' term with exponent $2\,\varphi$ that is left over at the (projective--)disk level by the ${\overline D}9$ branes (anti--BPS objects with tension $T>0$ and RR charge $Q<0$) and the $O9_+$ planes (BPS objects with $T>0$ and $Q>0$) that are present in the vacuum, whose opposite charges cancel one another but whose identical tensions add up. The Polyakov expansion of String Theory \cite{polyakov} \emph{predicts} that in ten dimensions this exponent lies precisely at the ``critical value'' where descending solutions disappear. Let us stress that the uncanceled $D9$--$O9_+$ tension introduces conceptual and technical difficulties in the applications of BSB to Particle Physics, since for one matter flat space is not a solution of the equations of motion in the low--energy effective field theory, and consequently in dealing with this type of models one is inevitably confronted with the presence of uncanceled tadpoles. 
This leads readily to the need for resummations, which are complicated in Field Theory and prohibitive at the string level \cite{resummations}. For this reason in \cite{dks} we started to explore the possible role of these types of models in Cosmology. After all, the typical scales of inflationary models and the typical ranges for the string scale in type-I orientifold models can be close to one another \cite{aadd}, while in Cosmology the issue is not vacua but rather evolving states. Moreover, the nature of the classical solutions of the field equations that we have described gives us some confidence that a full--fledged string embedding might be found eventually, at least in special cases. All in all, in the class of potentials \eqref{potwoexp} the scalar field can only emerge from the initial singularity \emph{climbing up} their left portion, which is essentially determined by the second, ``mild'' exponential. Therefore, it should not come as a surprise that the exact solutions for the one--exponential potential \eqref{potonexp} provide an effective way of setting initial conditions close to the initial singularity when solving numerically for the dynamics in the two--exponential potential of eq.~\eqref{potwoexp}, according to \footnote{The corresponding expressions in the cosmic time $t_c$ are independent of $\gamma$, and are both asymptotic to $1/t_c$.} \begin{eqnarray} \varphi &\ \ \ \thicksim\!\!\!\!\!\!\!\!\!_{{}_{{\tau \to 0}}}& \varphi_0 \ + \ \frac{1}{1+\gamma} \ \log \left( \frac{\tau}{2}\ \sqrt{1\,-\, \gamma^{\,2}}\, \right) \ , \nonumber \\ {\cal A} &\ \ \ \thicksim\!\!\!\!\!\!\!\!\!_{{}_{{\tau \to 0}}}& \ \frac{1}{1+\gamma} \ \log \left( \frac{\tau}{2}\ \sqrt{1\,-\, \gamma^{\,2}}\, \right) \ . \label{earlytimes} \end{eqnarray} The climbing phase ends at a turning point whose location, sensitive to $\varphi_0$, determines to which extent the scalar feels the first, ``hard'' exponential while reverting to a descending phase. 
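The early--time asymptotics \eqref{earlytimes} can indeed serve as initial data for a numerical solution. The sketch below (ours; the function name and the parameter choices are merely illustrative) integrates eqs.~\eqref{eqsgaugeB} for the two--exponential potential \eqref{potwoexp} with $\gamma=\frac{1}{12}$: the scalar climbs, reverts at a $\varphi_0$--dependent turning point, and then relaxes to the attractor speed $-\,\gamma/\sqrt{1-\gamma^{\,2}}$ of the mild exponential.

```python
import numpy as np
from scipy.integrate import solve_ivp

def climb(phi0, gamma=1.0/12.0, tau0=1e-3, tau_end=80.0):
    """Integrate eqs. (eqsgaugeB) for V = V0 (e^{2 phi} + e^{2 gamma phi}),
    starting from the climbing asymptotics (earlytimes) near the singularity."""
    def dlogV(phi):
        hard, mild = np.exp(2.0 * phi), np.exp(2.0 * gamma * phi)
        return (2.0 * hard + 2.0 * gamma * mild) / (hard + mild)
    def rhs(tau, y):
        phi, v = y
        return [v, -v * np.sqrt(1.0 + v**2) - 0.5 * dlogV(phi) * (1.0 + v**2)]
    s = np.sqrt(1.0 - gamma**2)
    y0 = [phi0 + np.log(tau0 * s / 2.0) / (1.0 + gamma),   # eq. (earlytimes)
          1.0 / ((1.0 + gamma) * tau0)]
    return solve_ivp(rhs, (tau0, tau_end), y0, rtol=1e-9, atol=1e-12)

sol = climb(phi0=0.0)
phi, v = sol.y
i_top = int(phi.argmax())                  # end of the climbing phase
```

Raising $\varphi_0$ pushes the turning point closer to the ``hard'' exponential and makes the reversal more abrupt.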
Eventually, if $\gamma< {1}/{\sqrt{3}}$ the Universe will attain an accelerated expansion, again largely under the spell of the mild exponential alone. Clearly, what makes this phenomenon interesting is that \emph{the climbing phase can provide a rationale for the very onset of inflation within perturbative String Theory}. In the ten--dimensional Sugimoto model of \cite{bsb} $\varphi$ is in fact the dilaton $\phi_{10}$, whose expectation value determines the string coupling in terms of the dilaton vacuum value according to \begin{equation} g_s \ = \ e^{\, \langle \phi_{10} \rangle} \end{equation} so that, with $\varphi$ bounded from above during the whole cosmic evolution, the available initial conditions leave naturally room for models with $g_s<1$. Moreover, for $d<10$ the performer changes and yet the music somehow does not: the (non--canonically normalized) field $\varphi$ becomes a $d$--dependent linear combination of dilaton and internal breathing mode \cite{fss,as13}, but the ``hard'' exponential term retains in all cases the ``critical'' form $e^{\,2\,\varphi}$. Insofar as the orthogonal combination of the two fields is somehow stabilized, climbing thus remains an inevitable fate. In terms of a canonically normalized scalar $\phi$, however, the potential changes in a definite fashion with the number of space--time dimensions, becoming in particular $e^{\,\sqrt{6} k_N \phi}$ in four dimensions. \begin{figure}[h] \begin{center}$ \begin{array}{ccc} \epsfig{file=starobinsky_pot.eps, height=1.2in, width=1.2in} & \qquad\quad \epsfig{file=starobinsky_scale.eps, height=1.2in, width=1.2in} & \qquad\quad \epsfig{file=starobinsky_phi.eps, height=1.2in, width=1.2in} \end{array}$ \end{center} \caption{\small A Starobinsky-like potential whose right end terminates on a ``hard'' exponential (left) and the corresponding evolutions of ${\cal A}/3$ (center) and $\varphi$ (right) in cosmic time $t_c$. 
After an initial fast--roll descent of the left end, the scalar climbs up to a point, reverts its motion, attains a slow--roll regime for a while and eventually reaches the bottom of the potential well, where it comes to rest after some oscillations. An enlarged early--time view is provided in fig.~\ref{fig:twoexp}.} \label{fig:starobinsky} \end{figure} \begin{figure}[h] \begin{center}$ \begin{array}{ccc} \epsfig{file=twoexp_phi_short.eps, height=1.2in, width=1.2in} & \qquad\qquad \epsfig{file=starobinsky_phi_short.eps, height=1.2in, width=1.2in} \end{array}$ \end{center} \caption{\small The scalar behaves in qualitatively similar ways (displayed here in cosmic time) near the inversion points of the two--exponential potential of eq.~\eqref{potwoexp} (left) and of the Starobinsky--like potential of fig.~\ref{fig:starobinsky} (right), if the inversion occurs close enough to the ``hard'' exponential.} \label{fig:twoexp} \end{figure} As we have stressed, the second term in eq.~\eqref{potwoexp} plays an essential role both in the early ascent and in the final descent. Yet, its origin is admittedly less compelling. It can be traced, to some extent, to other string $p$--branes, which under some assumptions spelled out in \cite{fss,as13} give rise for space--filling branes to the discrete set of values \begin{equation} \gamma \ = \ \frac{1}{12} \left( p \ + \ 9 \ - \ 6\, \alpha \right) \ , \end{equation} if the dilaton enters their world--volume actions in the string frame via the exponential $e^{\,-\,\alpha \phi_{10}}$. This set includes the non--BPS D3 brane found long ago in \cite{branesugimoto} following the approach of Sen \cite{sen}, but there are clearly two familiar types of branes: $\alpha$ would be one for $D$--branes and two for the Neveu--Schwarz (NS) fivebrane, while the formula has in principle a wider range of applicability since a zoo of exotic branes with higher values of $\alpha$ that are present for $d<10$ was also identified in \cite{branescan}.
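The discrete set of exponents above, and the spectral index that the wrapped NS fivebrane discussed below selects, can be checked with exact rational arithmetic. In the sketch (ours) we combine $\gamma=\frac{1}{12}\,(p+9-6\alpha)$ with the standard exact result for power--law inflation, $n_s=1-2/(p_{\rm exp}-1)$ for $a\propto t^{\,p_{\rm exp}}$, where $p_{\rm exp}=2/\lambda^{\,2}=1/(3\gamma^{\,2})$ follows from the canonical exponent $\lambda=\sqrt{6}\,\gamma$; the use of this power--law formula is our own reconstruction of the comparison, not a derivation given in the text.

```python
from fractions import Fraction

def gamma_brane(p, alpha):
    """gamma = (p + 9 - 6 alpha)/12 for a space-filling p-brane whose
    world-volume action carries e^{-alpha phi_10} in the string frame."""
    return Fraction(p + 9 - 6 * alpha, 12)

# NS fivebrane (alpha = 2) wrapped on a small internal cycle, so that p = 4:
g_ns5 = gamma_brane(4, 2)          # 1/12

def spectral_index(gamma):
    """Exact power-law-inflation tilt for the mild term e^{2 gamma phi}:
    canonical exponent lambda = sqrt(6) gamma, a ~ t^p with p = 2/lambda^2,
    and n_s = 1 - 2/(p - 1)."""
    p = Fraction(1) / (3 * gamma**2)
    return 1 - Fraction(2) / (p - 1)

ns = spectral_index(g_ns5)         # 45/47, i.e. about 0.957
```

The value $45/47 \simeq 0.9574$ reproduces the spectral index $0.957$ quoted for $\gamma=\frac{1}{12}$.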
As argued in \cite{fss,as13}, all these branes ought to have been generically present in the vacuum at very early epochs, close to the initial singularity. In particular, an NS fivebrane wrapped on a small internal cycle, corresponding to $p=4$ and $\alpha=2$, would yield a ``mild'' exponential term with $\gamma=\frac{1}{12}$, while its instability in orientifold models and its consequent decay could perhaps account for the eventual graceful exit of the Universe from the inflationary phase. A brief discussion of the role of these branes is presented in the Appendix. On the other hand, $\gamma=\frac{1}{12}$ would naively translate, in the potentials of eq.~\eqref{potwoexp}, into a spectral index equal to $0.957$, which lies intriguingly within the experimentally allowed range for the CMB, $n_s = 0.9603 \pm 0.0073$. All in all, the two--exponential model of eq.~\eqref{potwoexp} is admittedly somewhat naive, but nonetheless in the next sections we shall hopefully convince the reader that it can convey interesting dynamical lessons, possibly with some bearing on the CMB power spectrum. Let us therefore concentrate on very early epochs in these cosmologies, leaving aside for the moment a detailed account of an eventual graceful exit following a typical inflationary epoch of about 60 $e$--folds. As we have already stressed, we are drawing some motivation from the striking fact that climbing can make inflation an inevitable fate while also linking it to another phenomenon, the breaking of Supersymmetry, which would also occur at very high scales in this context. We would also like to emphasize that, in this whole class of systems, the scalar field spans a given region of its configuration space twice during the cosmic evolution, first moving toward the steep potential in a regime of fast roll and then reverting from it.
An eventual slow--roll regime can be attained for suitable completions of the potential, and the scalar can even be stabilized in potential wells that left no tangible signs on its fast ascent. For instance, the Starobinsky--like potentials \cite{starobinsky} \begin{equation} {V}_S(\varphi) \ = \ {V}_0\,\left[ \left(1- e^{\,-\,\gamma\,(\varphi+\Delta)}\right)^2 \ + \ e^{\,2\,\varphi}\right]\ , \label{starobinsky} \end{equation} \noindent which have received some attention lately in connection with Supergravity \cite{starobinsky_supergravity}, can be combined with the ``hard exponential'' in eq.~\eqref{potwoexp} to yield this type of dynamics. A typical solution for $\varphi(t)$ in this context is displayed in fig.~\ref{fig:starobinsky}, which vindicates some of the preceding claims since the scalar field: \begin{itemize} \item[1. ] emerges in fast roll from the left end of the potential (dominated by a ``mild'' exponential that actually grows rapidly as $\varphi \to -\infty$), moves to the right and climbs up, leaving behind a potential well; \item[2. ] reverts its motion, more or less abruptly depending on how close it comes to the ``hard wall'', before attaining during the ensuing descent a slow--roll regime driven by the milder terms in the potential; \item[3. ] eventually settles at the bottom of the potential well after some oscillations. \end{itemize} As we have anticipated, our aim here is to elucidate how the effects of the transition between the early fast--roll ascent driven by the ``hard'' exponential of BSB and the subsequent descent depend on $\varphi_0$. Transitions to slow roll in conventional inflationary potentials were nicely investigated in \cite{destri}, and were shown to leave a distinctive mark in power spectra of scalar perturbations: a quick growth from a deep infra--red depression followed by a sharp peak and a few damped oscillations before a rapid approach to an almost scale invariant spectrum. 
Both this result and the far wider infrared depression described in detail in \cite{dkps} will emerge again from our analysis for special choices of $\varphi_0$, but taking a closer look will unveil the generic emergence of a new type of spectral distortion. \begin{figure}[h] \begin{center}$ \begin{array}{ccc} \epsfig{file=WMAP9.eps, height=1.5in, width=1.5in} & \qquad\quad \epsfig{file=raw_WMAP9.eps, height=1.5in, width=1.5in} & \qquad\quad \epsfig{file=PLANCK.eps, height=1.5in, width=1.5in} \end{array}$ \end{center} \caption{\small WMAP9 determination \cite{wmap9} of the CMB angular power spectrum (left), the raw data for its low--$\ell$ portion (center) and the corresponding PLANCK determination \cite{PLANCK} (right). The anomalies of interest in this paper concern the low--$\ell$ region detailed in the central portion of the figure, while the shadows in the outer portions are meant to emphasize the role of ``cosmic variance''.} \label{fig:WMAP9-PLANCK} \end{figure} A proper characterization of these phenomena cannot forego some reference to the behavior of the Hubble parameter \begin{equation} H \ = \ k_N\ \sqrt{\frac{V(\varphi)}{3} \ \left(1 \ + \ \dot{\varphi}^2\right)} \ , \label{hubble} \end{equation} and of two familiar slow--roll parameters \begin{eqnarray} \epsilon_\phi &\equiv& - \ \frac{1}{H^2}\ \frac{dH}{dt_c} \ = \ 3 \ \frac{\dot{\varphi}^2}{1 \ + \ \dot{\varphi}^2} \ ,\nonumber \\ \eta_\phi &\equiv& \frac{1}{k_N^2\, V} \ V_{\phi\phi} \ = \ \frac{3}{2\,V} \ V_{\varphi\varphi} \ , \label{eps_eta} \end{eqnarray} where we have used a shorthand notation for the second derivative of $V$ with respect to the scalar field and $t_c$ denotes the cosmic time measured by comoving observers, defined in the two--exponential models according to \begin{equation} dt_c\ = \ e^{\,{\cal B}}\, dt\ .
\end{equation} Restricting attention to the two--exponential potentials of eq.~\eqref{potwoexp} brings about a number of technical simplifications and yet, we believe, can still capture the essential new features that can be largely traced to the epoch when the scalar terminates its ascent. The ensuing analysis refines and corrects to some extent the results in \cite{dkps}, and hopefully it can also convey an overall picture of the potential imprints of BSB on the CMB power spectrum of scalar perturbations. The first of these imprints is a \emph{reduction of power at low frequencies} within a window that, as we shall see, becomes \emph{significantly wider for larger values of $\varphi_0$}, as the scalar feels the ``hard wall'' more intensely. This is the scenario that was elaborated upon at length in \cite{dkps}, but as we shall see the available CMB data do not seem to favor it. Moreover, \emph{a sizable lack of power in wide--angle correlations is a generic feature in cosmologies emerging from an initial singularity}, while the climbing phenomenon can leave behind a more distinctive mark. Lo and behold, current measurements may be confronting us with some pre--inflationary information, since for one thing the low--$\ell$ tails of WMAP9 or PLANCK angular power spectra, if taken at face value, point to a reduction of the quadrupole, and refined and well motivated alternative choices for the cosmic mask enhance the effect \cite{gruppuso} rather than reducing it. However, in the same spirit one can perhaps spot in fig.~\ref{fig:WMAP9-PLANCK} a rather pronounced peak for $\ell \simeq 5$ and some more oscillations. Following the suggestion of \cite{as13}, in this paper we would like to elaborate in detail on the possible lessons that a climbing scalar can provide in this respect, and conversely on how these types of spectra can select preferred values for $\varphi_0$.
The resulting picture will conform to the intuitive idea that a ``hard'' reflecting wall can make the reversal of the scalar motion more or less abrupt depending on how close it gets to it, and thus on $\varphi_0$. We can conclude this section by adding to this positive note some cautionary remarks on the role of climbing in String Theory. To begin with, climbing is not a definite prediction of String Theory, although it occurs in a wide class of cosmologies related to orientifold models with BSB, where the supersymmetry breaking scale is tied to the scale of inflation. Moreover, even in this context the phenomenon is inevitable only in one--field reductions, which are at any rate a familiar choice in Cosmology. Remarkably, as we have already stressed, even after compactification there is always a one--field reduction that leaves behind a ``critical'' scalar $\varphi$, which is purely the dilaton only in ten dimensions \cite{fss,as13}. All in all, larger values of $\varphi$ bring about larger values of the string coupling, so that, as we have stressed, a dynamics where these are subject to an upper bound possesses the attractive feature of being naturally captured by string perturbation theory. Still, one should not forget that, in taking these models seriously, one is pushing our current grasp of String Theory rather far. Curvature corrections become in fact important near the initial singularity, and are naively expected to dominate precisely at epochs where the early climbing would occur. These intricacies were examined in a number of cases in \cite{cd}, with due attention to possible ways of bypassing them, at least insofar as quadratic curvature corrections go, but no definite conclusion was reached in this sense. However, with no better way to proceed at present, one may well explore the possible consequences of this intriguing dynamics while keeping this important proviso well in mind. This is what we shall do in the following sections.
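Before moving on, the slow--roll parameter $\eta_\phi$ of eqs.~\eqref{eps_eta} can be made concrete with a short numerical sketch. It assumes that the two--exponential potential of eq.~\eqref{potwoexp} takes the explicit form $V = V_0\left(e^{\,2\,\varphi} + e^{\,2\,\gamma\,\varphi}\right)$, which is our reading of it and whose precise normalization is therefore an assumption here.

```python
# eta_phi = (3/(2 V)) V_{phi phi} of eq. (eps_eta), evaluated for an assumed
# two-exponential potential V = V0 (e^{2 phi} + e^{2 gamma phi}); the explicit
# form of eq. (potwoexp) is not restated in this section, so this normalization
# is an assumption on our part.
import math

def eta_phi(phi, gamma):
    hard = math.exp(2.0 * phi)            # the "hard" BSB exponential
    mild = math.exp(2.0 * gamma * phi)    # the "mild" exponential
    # V'' = V0 (4 e^{2 phi} + 4 gamma^2 e^{2 gamma phi}), so V0 drops out:
    return 6.0 * (hard + gamma**2 * mild) / (hard + mild)

g = 1.0 / 12.0
print(eta_phi(-50.0, g))  # ~ 6 gamma^2 = 1/24: slow roll is possible far from the wall
print(eta_phi(50.0, g))   # ~ 6: no slow roll close to the "hard wall"
```

The two limits, $6\,\gamma^2$ far down the ``mild'' exponential and $6$ near the ``hard wall'', match the picture of a slow--roll regime that can only set in on the milder side of the potential, after the reversal.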
\vskip 24pt \section{\sc A New Look at the Power Spectra of Climbing Scalars}\label{sec:powerspectrum} \vskip 12pt Following \cite{maldacena}, one can study scalar perturbations in the class of cosmological backgrounds of the preceding section, even at the non--linear level, starting from the ADM decomposition \begin{equation} ds^{\,2} \ = \ N^2\, e^{\,2\,{\cal B}(\tau)}\ d\tau^2 \ - \ h_{ij} \, \left(dx^i \,+\, N^i\, e^{\,{\cal B}(\tau)}\ d\tau \right)\left(dx^j \,+\, N^j\, e^{\,{\cal B}(\tau)}\ d\tau \right) \end{equation} and working with the gauge choice \begin{equation} h_{ij} \ = \ e^{\,\frac{2\,{\cal A}(\tau)}{3}} \, e^{\,2\,\zeta} \ \delta_{ij} \ , \qquad \delta \phi \ = \ 0 \ . \end{equation} The perturbations of the scalar field then disappear and $\zeta({\bf x},\tau)$ becomes the key variable, both for the power spectrum and for the bi--spectrum, and in particular its two--point function at the large ``parametric times'' $\tau_F$ that correspond to the end of inflation determines the power spectrum according to \begin{equation} \langle \zeta({\bf x},\tau_F)\, \zeta({\bf x},\tau_F) \rangle \ = \ \int_0^\infty \frac{dk}{k} \ P_\zeta(k) \ . \label{power_zeta} \end{equation} $\zeta({\bf x},\tau)$ possesses the striking property of being conserved outside the horizon: its very existence opens a window on the Early Universe, since information stored on super--horizon scales during an early inflationary phase is ready to reemerge unabridged, in front of our detection instruments, after a decelerated phase. The Mukhanov--Sasaki (MS) variable \begin{equation} v({\bf x},\tau) \ = \ z(\tau) \, \zeta({\bf x},\tau) \ , \end{equation} where \begin{equation} {z}(\tau) \ = \ \frac{1}{k_N}\ \sqrt{6}\ e^{\,\frac{{\cal A}(\tau)}{3}} \ \frac{d\varphi(\tau)}{d{\cal A}(\tau)} \ , \label{zeta} \end{equation} does not share the property of $\zeta({\bf x},\tau)$, but has the virtue of leading to a very instructive formulation of the quadratic problem.
Actually, the difference between $\zeta$ and $v$ is not marginal in our case, since $z(\tau)$ vanishes at the end of the climbing phase, so that $\zeta$ develops a pole there \cite{kodama}. Since the power spectrum depends only on the large--$\tau_F$ behavior of $\zeta$, this fact does not introduce serious difficulties, even in numerical studies, which can be effected in terms of other related quantities as in \cite{dkps}, but special care should be exercised in studying the bi--spectrum, which depends on the detailed behavior of $\zeta$ over the whole range of $\tau$ that is traced out during the cosmological evolution. We hope to reconsider elsewhere the Schwinger--Keldysh formalism \cite{sk} in models exhibiting the climbing phenomenon. Returning to the power spectrum, let us recall that expanding the quantum MS field as \begin{equation} v({\bf x},\eta) \ = \ \int \frac{d^3 {\bf k}}{(2\pi)^3} \left[ v_k (\eta)\, \alpha({\bf k})\,e^{\,i{\bf k}\cdot {\bf x}} \ + \ v_k^\star (\eta)\, \alpha({\bf k})^\dagger\,e^{\,-\,i{\bf k}\cdot {\bf x}}\right] \end{equation} and working in terms of the (dimensionless) conformal time $\eta$ defined according to \begin{equation} ds^{\,2} \ = e^{\,\frac{2\,{\cal A}(\tau)}{3}} \left( d \eta^2 \ - \ d{\bf x} \,\cdot \, d{\bf x}\right) \ , \end{equation} so that \begin{equation} d \eta \ = \ e^{\,-\,\frac{{\cal A}}{3}} \, \sqrt{\frac{V_0}{V}} \, d \tau \ , \end{equation} the Fourier coefficients $v_k$ that play the role of the flat--space exponentials $e^{\pm i \omega t}$ satisfy the Schr\"odinger--like equation \begin{equation} \left(\frac{d^2}{d \eta^2} \ + \ k^2 \ - \ W_s(\eta) \right) v_k(\eta) \ = \ 0 \end{equation} with the Bunch--Davies condition \begin{equation} v_k(\eta) \ \ \ \thicksim\!\!\!\!\!\!\!\!\!\!_{{}_{{k \to \infty}}} \frac{1}{\sqrt{2k}}\ e^{\,-\,i\,k\,\eta}\ \end{equation} and the Wronskian constraint \begin{equation} v_k \, \frac{\partial}{\partial \, \eta} \ v_k^\star \ - \ v_k^\star \, \frac{\partial}{\partial 
\, \eta} \ v_k \ = \ i \ . \end{equation} $W_s$ is the MS potential, which is determined by the background cosmology via the relation \begin{equation} W_s \ = \ \frac{z^{\prime\prime}(\eta)}{z(\eta)} \ , \label{MS_potential} \end{equation} where ``primes'' denote derivatives with respect to the conformal time $\eta$ and $z$ is defined in eq.~\eqref{zeta}. The power spectrum is then \begin{equation} P_\zeta(k) \ = \ \frac{k^3}{2\,\pi^2} \ \left| \frac{v_k(-\epsilon)}{z(-\epsilon) }\right|^2 \ , \label{power_spec} \end{equation} where the quantities involved are computed at the end of inflation, for small positive values of $\epsilon$, or equivalently for large values of $\tau_F$, when the ratio becomes independent of $\epsilon$ and reduces to a well--defined function of $k$. From the limiting behavior displayed in eqs.~\eqref{earlytimes} one can deduce that \begin{equation} W_s \ \ \ \thicksim\!\!\!\!\!\!\!\!\!\!_{{}_{{\eta \to - \eta_0}}} \ - \ \frac{1}{4}\ \frac{1}{(\eta+\eta_0)^2} \ , \label{early_MS} \end{equation} where the conformal time $\eta\,= \,-\,\eta_0$ corresponds to the initial singularity, and in a similar fashion from the late--time behavior of eqs.~\eqref{lm} one can deduce that \begin{equation} W_s \ \ \ \thicksim\!\!\!\!\!\!\!\!\!\!_{{}_{{\eta \to - 0^-}}} \ \ \frac{\nu^2 - \frac{1}{4}}{\eta^2} \ , \label{attractor_MS} \end{equation} with \begin{equation} \nu \ = \ \frac{3}{2} \ \frac{1\,-\,\gamma^2}{1\,-\,3\,\gamma^2} \ . \label{nugamma} \end{equation} The initial singularity thus translates into a singular attractive behavior for the MS potential, while the final inflationary epoch builds up a ``centrifugal'' barrier. The dynamical properties of different models leave their signature in the intermediate region, which encodes the distinctive features of their power spectra. 
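As a concrete illustration of the initial--value problem just described, one can integrate the MS equation numerically from Bunch--Davies data. The sketch below does so for the pure attractor potential of eq.~\eqref{attractor_MS}; the fixed--step integrator and the truncation of the initial condition to a plane wave are simplifications of ours, and this is not the code behind the figures. For $\nu=3/2$ the combination $k^3\,|v_k|^2$ entering eq.~\eqref{power_spec} should become $k$--independent, the flat limit of eq.~\eqref{cibmukh}.

```python
# Integrate v'' + (k^2 - W_s) v = 0 with W_s = (nu^2 - 1/4)/eta^2 from
# Bunch-Davies data deep inside the horizon up to a small |eta|, using a
# fixed-step RK4 scheme. (Illustration only.)
import cmath, math

def evolve_mode(k, nu, eta_end=-0.05, h=5e-4):
    eta = -40.0 / k                                    # |k eta| >> 1 initially
    v = cmath.exp(-1j * k * eta) / math.sqrt(2.0 * k)  # Bunch-Davies plane wave
    dv = -1j * k * v
    c2 = nu * nu - 0.25

    def f(e, y, dy):                                   # returns (v', v'')
        return dy, (c2 / (e * e) - k * k) * y

    while eta < eta_end:
        s = min(h, eta_end - eta)
        k1v, k1a = f(eta, v, dv)
        k2v, k2a = f(eta + s / 2, v + s / 2 * k1v, dv + s / 2 * k1a)
        k3v, k3a = f(eta + s / 2, v + s / 2 * k2v, dv + s / 2 * k2a)
        k4v, k4a = f(eta + s, v + s * k3v, dv + s * k3a)
        v += s / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        dv += s / 6 * (k1a + 2 * k2a + 2 * k3a + k4a)
        eta += s
    return v

def power(k, nu):
    # k^3 |v_k|^2, up to the common 1/z^2 factor of eq. (power_spec)
    return k**3 * abs(evolve_mode(k, nu))**2

ratio = power(2.0, 1.5) / power(1.0, 1.5)  # ~ 1: flat spectrum for nu = 3/2
```

Starting both modes at the same depth $|k\,\eta| = 40$ makes the residual initialization error largely cancel in the ratio.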
We shall therefore begin our analysis from $W_s$, following \cite{dkps} in establishing a dictionary between some of its key features and the power spectrum, before extending the lessons to the scalar dynamics itself. In fact, one can apply to this problem much of the machinery that is familiar from one--dimensional Quantum Mechanics, but for a key difference that should not be overlooked, since here one is solving an \emph{initial--value} problem, rather than a typical \emph{boundary--value} problem. As a result, the inflationary barrier gives rise to a large \emph{amplification} of (generically ill--tuned) initial signals, which lies at the heart of a slight tilt of the power spectrum of the form \begin{equation} P_\zeta(k) \ \sim \ k^{3-2\,\nu} \ \equiv \ k^{n_s-1} \label{cibmukh} \end{equation} for slow--roll cosmologies that proceed on an attractor curve (on the LM attractor of eq.~\eqref{lm}, in this type of system), and thus of the whole $\Lambda$CDM setup. This early prediction of \cite{cm} was finally confirmed to high precision by PLANCK \cite{PLANCK}, with $n_s = 0.9603 \pm 0.0073$, and is currently regarded as a main lesson of inflation. As we have anticipated in Section \ref{sec:climbing}, our aim here is to depart slightly from this canonical setting under the spell of String Theory and BSB, and to examine in detail the predictions of the two--exponential model of eq.~\eqref{potwoexp}, whose attractor curve is only approached after a pre--inflationary climbing phase. The dynamical problem, as we shall see, has some interest of its own since it reveals novel effects, but our analysis was clearly motivated by the probable discrepancies between the low--$\ell$ CMB tail of fig.~\ref{fig:WMAP9-PLANCK} and the predictions of the $\Lambda$CDM model, which rest after all on the attractor power spectrum of eq.~\eqref{cibmukh}.
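The attractor tilt can also be checked directly: combining $n_s - 1 = 3 - 2\,\nu$ from eq.~\eqref{cibmukh} with $\nu(\gamma)$ from eq.~\eqref{nugamma} reproduces, for the wrapped--fivebrane value $\gamma = 1/12$, the estimate $n_s \simeq 0.957$ quoted in Section \ref{sec:climbing}.

```python
# n_s = 4 - 2 nu, with nu = (3/2)(1 - gamma^2)/(1 - 3 gamma^2) as in
# eqs. (cibmukh) and (nugamma).
def n_s(gamma):
    nu = 1.5 * (1.0 - gamma**2) / (1.0 - 3.0 * gamma**2)
    return 4.0 - 2.0 * nu

print(n_s(1.0 / 12.0))  # ~ 0.9574, within the PLANCK range 0.9603 +/- 0.0073
print(n_s(0.0))         # exactly 1: scale invariance for gamma = 0
```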
Some first steps toward a quantitative comparison between the CMB and the refined spectra that we are about to describe will be the subject of Section \ref{sec:observables}. Some of the results of \cite{dkps} provide a convenient starting point for our current discussion. To begin with, combining the limiting behaviors in eqs.~\eqref{early_MS} and \eqref{attractor_MS} one can readily conclude that \emph{an initial singularity finitely back in the past forces $W_s(\eta)$ to cross the horizontal axis}. The area bounded by its upper portion, which determines the WKB amplification factor, saturates as $k \to 0$, and drawing from standard facts of Quantum Mechanics one can conclude that, in view of the $k^3$ pre--factor in eq.~\eqref{power_spec}, the resulting power spectrum is bound to decrease for low $k$. The lack of power in large--angle correlations that both WMAP9 \cite{wmap9} and PLANCK \cite{PLANCK} apparently see, if their data are taken at face value despite cosmic variance, could thus seemingly translate into an indication that our instruments are capturing some glimpses of an initial singularity. It is actually simple to exhibit a class of MS potentials displaying exactly this type of effect. To this end, it suffices to displace the attractor MS potential \eqref{attractor_MS} according to \begin{equation} \frac{\nu^2 - 1/4}{\eta^2} \ \longrightarrow \ \frac{\nu^2 - 1/4}{\eta^2} \ - \ \frac{\nu^2 - 1/4}{\eta_0^2} \ , \label{lower_Ws} \end{equation} so that it meets the horizontal axis at $\eta\,=\,-\,\eta_0$, which is tantamount to effecting the replacement \begin{equation} k \ \longrightarrow \sqrt{k^2\ + \ \frac{1}{\eta_0^2}\ \left(\nu^2 - \frac{1}{4}\right)} \ , \end{equation} but only in the \emph{second} factor of eq.~\eqref{power_spec}. 
The resulting power spectrum is then exactly \begin{equation} P_\zeta(k) \ \sim \ \frac{\left(k\, \eta_0\right)^{3}}{\big[\left(k\, \eta_0\right)^2\ + \ \nu^2 \ - \ \frac{1}{4}\big]^{\nu}} \ , \label{cibmukh_BB} \end{equation} and exhibits clearly the type of low--frequency depression that we had anticipated. \begin{figure}[h] \begin{center}$ \begin{array}{ccc} \epsfig{file=coulomb_Ws.eps, height=1.5in, width=1.5in} & \qquad\quad \epsfig{file=coulomb_eta.eps, height=1.5in, width=1.5in} & \qquad\quad \epsfig{file=coulomb_c.eps, height=1.5in, width=1.5in} \end{array}$ \end{center} \caption{\small Attractor (orange, dotted) and Coulomb--like $W_s$ (left) \emph{vs} conformal time $\eta$ for $\eta_0=1$ and $c=1$ (red), $2$ (blue), $3$ (green). Coulomb--like spectra (center) for $c=2$ and $\eta_0=0.2$ (red), $1$ (blue), $6$ (green), or (right) with $\eta_0=1$ and $c=1.3$ (red), $2$ (blue), $2.7$ (green). In the last two cases on the horizontal axis we display $x$, where $k=10^x$.} \label{fig:Coulomb} \end{figure} Actually, in an Appendix of \cite{dkps} we presented an exact solution for the family of Coulomb--like MS potentials \begin{equation} W_s(\eta) \ = \ \frac{\nu^2 - \frac{1}{4}}{\eta^2} \ \left[ c\left(1\, +\, \frac{\eta}{\eta_0} \right) \ + \ (1-c)\left( 1\, +\, \frac{\eta}{\eta_0} \right)^2\right] \ , \end{equation} which include the attractor MS potential \eqref{attractor_MS} and, for $c \geq 1$, also displacements as in eq.~\eqref{lower_Ws} and rotations with respect to it.
The corresponding power spectra read \begin{equation} P_\zeta(k) \ \sim \ \frac{\left({k\, \eta_0}\right)^{3}\, \exp\left[{{\frac {\pi \, \left( \frac{c}{2}-1 \right) \left( {\nu}^{2}-\frac{1}{4} \right) }{\sqrt {\left({k\, \eta_0}\right)^{2}+ (c-1)({\nu}^{2}-\frac{1}{4})}}}}\right]}{ \left| \Gamma\left( \nu+\frac{1}{2}+{\frac {i \left( \frac{c}{2}-1 \right) \left( {\nu}^{2}-\frac{1}{4} \right) }{\sqrt {\left({k\, \eta_0}\right)^{2}+ (c-1)({\nu}^{2}-\frac{1}{4})}}} \right)\right|^2\, \big[\left({k\, \eta_0}\right)^{2}+ \left( c-1 \right) \left( {\nu}^{2}-\frac{1}{4} \right) \big]^{\nu}} \ , \label{power_coulomb} \end{equation} and contain as a special case, for $c=2$, the deformed power spectra of eq.~\eqref{cibmukh_BB}. However, modifying $c$ one can also affect the growth rate, lowering it, enhancing it and even introducing an overshoot with respect to eq.~\eqref{cibmukh}, as in fig.~\ref{fig:Coulomb}. Moreover, as described in \cite{dkps}, an additional type of imprint can be associated with local departures of $W_s$ from its attractor shape. First--order perturbation theory \emph{\`a la} Schwinger--Keldysh implies that this superposes an oscillatory behavior, in general, on the preceding effects, but the Coulomb--like potentials deviate from the attractor $W_s$ in an infinite domain, so that these oscillations are somehow washed out in eq.~\eqref{power_coulomb}. Oscillations of this type were clearly seen to accompany the transition from fast roll to slow roll in more conventional inflationary potentials in \cite{destri}, and are also responsible for part of the behavior displayed in \cite{dkps}. Summarizing, an initial singularity thus translates, almost verbatim, into two interesting types of imprints: \begin{itemize} \item[a.
] an \emph{inevitable suppression of the power spectrum for low frequencies with respect to the attractor form}, which is not necessarily ${\cal O}(k^3)$ but can be milder, as in the cases that were the focus of \cite{dkps}, or more pronounced, as in the cases analyzed in \cite{destri}; \item[b. ] a \emph{possible overshoot}, which can present itself when the actual $W_s$ happens to emerge from the horizontal axis more steeply than the attractor curve. We saw clear signs of the overshoot in the tensor spectra analyzed in \cite{dkps}. \end{itemize} \begin{figure}[h] \begin{center}$ \begin{array}{ccc} \epsfig{file=perturbation_1.eps, height=1.5in, width=1.5in} & \qquad\qquad \epsfig{file=perturbation_2.eps, height=1.5in, width=1.5in} & \qquad\qquad \epsfig{file=perturbation_3.eps, height=1.5in, width=1.5in} \end{array}$ \end{center} \caption{\small Oscillations induced by square--well perturbations of a Coulomb--like $W_s$. These plots have an arbitrary normalization and correspond to perturbations acting in intervals of the same length but centered around decreasing values of $\eta$, and on the horizontal axis we display $x$, where $k=10^x$.} \label{fig:local_perturbations} \end{figure} Moreover: \begin{itemize} \item[c. ] localized perturbations of the MS potential translate into \emph{localized oscillations} in $k$--space, as those that were seen in \cite{destri} to accompany transitions from fast roll to slow roll. 
\end{itemize} The emergence of this type of behavior in the presence of a climbing scalar was already discussed in \cite{dkps} \footnote{The relevant example of square--well perturbation displayed in eq.~(3.20) contains however a typo, since the overall $k$ should be replaced with an overall $k^{-2}$.} and is also illustrated qualitatively in fig.~\ref{fig:local_perturbations} with reference to square--well perturbations, but here we can be rely on more accurate numerical calculations, while drawing also a better comparison with similar phenomena that were discussed in the recent literature. \begin{figure}[h] \begin{center}$ \begin{array}{cc} \epsfig{file=single_descending.eps, height=1.5in, width=1.5in} & \qquad\qquad \epsfig{file=single_climbing.eps, height=1.5in, width=1.5in} \end{array}$ \end{center} \caption{\small Power spectra of scalar perturbations for the single--exponential potential of eq.~\eqref{potonexp}, for a \emph{descending} scalar, with $\varphi_0=0$ (left, continuous, red) and $\varphi_0=-4$ (left, dashed, blue) and for a \emph{climbing} scalar, with $\varphi_0=0$ (right, continuous, red) and $\varphi_0=-4$ (right, dashed, blue). The two backgrounds leave very similar imprints, which are recurrent in transitions from fast roll to slow roll in other potentials and were discussed at length in \cite{destri}. In both cases, increasing $\varphi_0$ moves the transitions to slightly larger values of $k$. These plots are COBE normalized at $k=38$, and on the horizontal axis we display $x$, where $k=10^x$.} \label{fig:single} \end{figure} The models typified by the two--exponential potentials of eq.~\eqref{potwoexp} and emerging somehow from BSB are the site, in general, of new type of phenomenon, \emph{incomplete transitions to slow-roll}. This feature was not clearly recognized in \cite{dkps}, although in retrospect it could be regarded as their own distinctive signature. 
Characterizing its effects was a main motivation for the present work, and to this end let us begin by noting that, intuitively, the scalar has a tendency to climb up the two--exponential potential too fast since, as we have already stressed in Section \ref{sec:climbing}, as it emerges from the initial singularity it is largely driven by the ``mild'' exponential alone. As a result, the encounter with the ``hard'' exponential typically occurs somewhat abruptly and brings about a consequent tendency to bounce against it, unless of course the parameter $\varphi_0$ is too large and negative for the scalar to ever feel the first term in eq.~\eqref{potwoexp}. In other words, for $\varphi_0$ sufficiently large and negative the two potentials of eqs.~\eqref{potonexp} and \eqref{potwoexp} should lead to power spectra that are essentially identical. \begin{figure}[h] \begin{center}$ \begin{array}{ccc} \epsfig{file=double_climbing_1.eps, height=1.5in, width=1.5in} & \qquad\qquad \epsfig{file=double_climbing_2.eps, height=1.5in, width=1.5in} & \qquad\qquad \epsfig{file=double_climbing_3.eps, height=1.5in, width=1.5in} \end{array}$ \end{center} \caption{\small Power spectra of scalar perturbations for the double--exponential potential of eq.~\eqref{potwoexp} (in all cases the dotted line is the attractor curve). For $\varphi_0=0$ a featureless power spectrum approaches the attractor curve after four decades in $k$ (left), while for $\varphi_0=-0.5,-1$ a small pre--inflationary peak starts to build up (left). For $\varphi_0=-1.5,-2,-2.5$ the pre--inflationary peak becomes more and more pronounced, but remains well separated from the attractor curve (center). For $\varphi_0=-3,-3.5,-4$ the power spectrum is essentially the same as in fig.~\ref{fig:single}: it rises steeply, and a narrow peak overshoots the attractor curve that is readily reached after a few oscillations (right).
In all cases on the horizontal axis we display $x$, where $k=10^x$, and these power spectra are normalized, in an arbitrary but convenient fashion, so that they all meet at the end of the explored range.} \label{fig:double} \end{figure} The left portion of fig.~\ref{fig:single} displays some typical power spectra for a \emph{descending} scalar in the one--exponential potential of eq.~\eqref{potonexp}. Repeating the exercise for a \emph{climbing} scalar entails, surprisingly, some numerical subtleties, but the reader will not fail to recognize that the end result, in the right portion of fig.~\ref{fig:single}, is almost identical and is again almost independent of $\varphi_0$. Moreover, these spectra are also strikingly similar to those found in \cite{destri}. This universality is very interesting: the structure of these spectra typifies conventional transitions from fast roll to slow roll, and emerges again, nicely enough, for $\varphi_0$ sufficiently large and negative in the double--exponential potentials, as shown in the last portion of fig.~\ref{fig:double}. Turning to the two--exponential potential of eq.~\eqref{potwoexp}, the opposite limit of a scalar impinging on a ``hard wall'' formed the core of the numerical analysis in \cite{dkps}, but the neater results displayed there actually correspond to the upper limit of the perturbative regime. Indeed, for sufficiently large values of $\varphi_0$ the scalar experiences a hard bounce and attains a slow--roll regime far later than in the single--exponential potential, so that the power spectrum is widely depressed with respect to the attractor curve over several decades in $k$--space, as can be seen again in fig.~\ref{fig:double}.
\begin{figure}[h] \begin{center} \epsfig{file=double_climbing_tensor.eps, height=1.5in, width=1.5in} \end{center} \caption{\small Power spectra of tensor perturbations for the double--exponential potential of eq.~\eqref{potwoexp} for $\varphi_0=0$ (red), $\varphi_0=-1$ (blue) and $\varphi_0=-4$ (green). The only feature is the overshoot that was already discussed in \cite{dkps}, which is only prominent for $\varphi_0=0$ but becomes rapidly less pronounced as $\varphi_0$ is reduced. The dotted line is the attractor curve and the normalization is fixed conventionally in order to grant a tensor--to--scalar ratio $r \simeq 0.1$ at the highest scale explored. As above, $k=10^x$.} \label{fig:double_tensor} \end{figure} The intermediate values of $\varphi_0$ in fig.~\ref{fig:double} are most interesting, since they give rise to the new phenomenon that we are addressing here. Indeed, in reverting its motion, the scalar inevitably undergoes a short period of slow roll, giving rise to a spurt of almost exponential expansion for the Universe. A period of this type is always present, but it merges into the slow--roll descent for $\varphi_0$ large and negative while it is too short to leave any tangible signs for sufficiently large values of $\varphi_0$. On the other hand, in the intermediate region the reversal does leave a sign although, after reverting its motion, the scalar returns to a fast--roll regime for a while before finally slowing down. This type of dynamics leaves a striking signature: for intermediate values of $\varphi_0$ the power spectra of the two--exponential model display a \emph{pre--inflationary peak} of variable size, which superposes on the slowly growing spectrum present for large values of $\varphi_0$. The peak is well separated in the vertical scale from the more standard feature signaling the onset of the eventual slow--roll phase and yet both occur essentially for the same range of frequencies, albeit for different values of $\varphi_0$.
No similar phenomena show up in power spectra of tensor perturbations, which reflect the evolution of the scale factor alone, as can be seen in fig.~\ref{fig:double_tensor}. One can be slightly more quantitative, since on general grounds the link between wave numbers of perturbations exiting the horizon and cosmological dynamics rests on the correspondence \begin{equation} k \ \sim \ {{e^{\,\frac{\cal A}{3}} \ H} \over {k_N \ \sqrt{3V_0}}} \ = \ \dot{\cal A} \, e^{\,\frac{\cal A}{3}} \, \frac{\sqrt{V(\varphi)/V_0}}{3} \ . \end{equation} In all cases displayed in fig.~\ref{fig:double} these numbers are very close to $-0.3$, in the log--scale of the plots, for the end of the climbing phase, where $\dot{\cal A}=1$. And indeed all pre--inflationary peaks lie in that region in $k$--space, which confirms the link to the reversal of the scalar motion that we have advocated. \begin{figure}[h] \begin{center}$ \begin{array}{ccc} \epsfig{file=compare_phi.eps, height=1.5in, width=1.5in} & \qquad\qquad \epsfig{file=compare_Ws.eps, height=1.5in, width=1.5in} & \qquad\qquad \epsfig{file=compare_Ws2.eps, height=1.5in, width=1.5in} \end{array}$ \end{center} \caption{\small $\tau$--evolution of the scalar field near the turning point (left), the corresponding MS potentials $W_s$ in conformal time $\eta$ (center) and an enlarged view of the region where the $W_s$ curves cross the horizontal axis (right). The lines for $\varphi_0=0,-2,-4$ are dashed, continuous and dashed--dotted.} \label{fig:phi_Ws} \end{figure} Some additional details can perhaps allow a better grasp of these dynamical effects. A closer look at various dynamical quantities in the three significant cases with $\varphi_0=-4$ (dashed--dotted curves), $\varphi_0=-2$ (continuous curves) and $\varphi_0=0$ (dashed curves) in figs.~\ref{fig:phi_Ws} and \ref{fig:H_eps_eta} can provide a clearer picture of the phenomenon that is taking place in two--exponential systems.
The $\tau$--evolutions of $\varphi$ collected in the left portion of fig.~\ref{fig:phi_Ws} exhibit clearly the sharp change of behavior experienced by the scalar: as it comes closer to the ``hard'' exponential the reversal of its motion becomes more abrupt, so that it is nearly a reflection for $\varphi_0=0$. On the other hand, there are no appreciable differences between the curves for $\varphi_0=-4$ and $\varphi_0=-2$, and yet as we have seen their spectra are qualitatively rather different. The differences between these two cases become more evident, however, in the rest of fig.~\ref{fig:phi_Ws}, which collects the corresponding $W_s(\eta)$. To begin with, the $W_s$ curve for $\varphi_0=0$ lies well to the right of the others, consistently with a wide reduction of the area below it and hence with the wide WKB suppression of power that, as we have seen, occurs in this case. Moreover, one can also see that the scalar reverts its motion well within the region where $W_s$ is negative, so that for $\varphi_0=0$ the phenomenon can leave no tangible signs in the power spectrum, consistently with fig.~\ref{fig:double}. On the other hand, in the other two cases the inversion occurs in regions where $W_s$ is positive, consistently with the presence of tangible imprints in their power spectra. \begin{figure}[h] \begin{center}$ \begin{array}{ccc} \epsfig{file=compare_H.eps, height=1.5in, width=1.5in} & \qquad\qquad \epsfig{file=compare_eps.eps, height=1.5in, width=1.5in} & \qquad\qquad \epsfig{file=compare_eta.eps, height=1.5in, width=1.5in} \end{array}$ \end{center} \caption{\small $\tau$--evolution of $k_N \, H$, or $H$ in reduced Planck units (left), and the corresponding $\tau$--evolutions of $\epsilon_\phi$ (center) and $\eta_\phi$ (right). 
The lines for $\varphi_0=0,-2,-4$ are dashed, continuous and dashed--dotted.} \label{fig:H_eps_eta} \end{figure} The differences among the three cases can be scrutinized in further detail from another vantage point, referring to three quantities whose definitions were recalled in Section \ref{sec:climbing}: the Hubble parameter $H$ and the $\epsilon_\phi$ and $\eta_\phi$ parameters. The corresponding curves for $\varphi_0=-4$ (dashed--dotted), for $\varphi_0=-2$ (continuous) and for $\varphi_0=0$ (dashed) are collected in fig.~\ref{fig:H_eps_eta}. In all cases, the dashed line lies well apart from the others, which are relatively close and yet exhibit appreciable differences. Even for $\varphi_0=0$, $H$ contains a noticeable horizontal portion that signals a quasi--exponential expansion around the inversion point, but the corresponding $\epsilon_\phi$ grows rapidly away from it, consistently with the picture of a fast bounce, while finally the corresponding $\eta_\phi$ is also far larger than in the other cases, consistently with the lack of a sizable growth of perturbations accompanying the reversal of the scalar in this case. The comparison between the $\epsilon_\phi$ curves for $\varphi_0=-2$ and $\varphi_0=-4$ is also very instructive. The former in fact lies first below and then above the latter, since the reversal is more efficient close to the ``wall'', but this makes the intermediate case initially closer to slow roll than the other and then, after the reversal, farther from it, information that is almost tantamount to drawing the pre--inflationary peak in fig.~\ref{fig:double}. For this case, however, a larger $\eta_\phi$ makes it comparatively harder for the perturbations to grow. These details are all consistent with the fact that a pre--inflationary peak is not visible for $\varphi_0=0$, is visible for $\varphi_0=-2$ and is large for $\varphi_0=-4$.
Before coming to our first comparisons with the CMB power spectrum, let us pause to summarize some technical details of our computations. To begin with, the natural option to compute power spectra would seem to rest on the Fourier modes of the variable $\zeta$, whose differential equation in conformal time, \begin{equation} \frac{d^2 \zeta_k}{d \eta^2} \ + \ 2 \ \frac{z^\prime}{z} \ \frac{d \zeta_k}{d\eta} + k^2 \, \zeta_k \ = 0 \ , \end{equation} also takes a nice form. Moreover, $\zeta_k$ is to approach a constant for large $\tau_F$, or equivalently for small negative $\eta$, which could be used as a strong test of the numerics. As we have already stressed, however, this variable develops a pole at the inversion point for the climbing scalar, where $z$ vanishes \cite{kodama}. This makes $\zeta_k$ inconvenient in numerical integrations, and therefore both here and in \cite{dkps} we have actually resorted to a different variable, \begin{equation} Q_k \ = \ e^{\,-\, \frac{{\cal A}}{3}}\ {v_k} \ = \ e^{\,-\, \frac{{\cal A}}{3}}\ z \ {\zeta_k} \ , \end{equation} which combines the virtues of both $\zeta_k$ and $v_k$: it has no pole at the inversion point for the climbing scalar and yet it also attains ($k$--dependent) limiting values for large $\tau$, after several $e$--folds of inflation. Working in terms of the parametric time $\tau$, which is particularly convenient for the two--exponential system as we have seen, we were led to the differential equation \begin{equation} \ddot{Q}_k \, + \, \left( \dot{\cal A} \, + \, \frac{V_\varphi}{2\, V} \, \dot{\varphi} \right) \dot{Q}_k \, + \, \left( \frac{k^2\,e^{\,-\, \frac{2}{3}\, {\cal A}}}{V/V_0} \, + \, \frac{V_{\varphi\varphi}}{2\, V} \, + \, \frac{V_\varphi}{2\, V} \ \frac{4 \dot{\varphi}}{\sqrt{1 \, + \, \dot{\varphi}^2}} \, + \, \frac{2\, \dot{\varphi}^2}{1 \, + \, \dot{\varphi}^2} \right) Q_k \, = \, 0 \ . \label{Qeq} \end{equation} As in the preceding section, here ``dots'' denote $\tau$--derivatives. 
The reader should appreciate that the various terms in eq.~\eqref{Qeq} are indeed manifestly free of singularities at the inversion point for the scalar field $\varphi$. The power spectra were computed in this fashion with Maple programs, exercising special care both with the numerical integration and with the choice of initial conditions. As a result, the present analysis is more precise than that reported in \cite{dkps}, where wide oscillations were present that, in retrospect, reflect in part an imprecise translation of the fixed initial conditions of Section \ref{sec:climbing} into cosmic time. The improved precision was essential to unveil the pre--inflationary peak, and to this end special care was exercised to deal with the oscillatory nature of the complex solutions of the Mukhanov--Sasaki equation. We actually found out, by trial and error, that the numerics is typically more stable if, rather than working with the complex differential equation for $Q_k$, one combines the linear \emph{second--order} differential equation \eqref{Qeq} for its real part $Q_{R,\,k}$ with the \emph{first--order} Wronskian condition \begin{equation} Q_{I,\,k} \ \frac{dQ_{R,\,k}}{d\tau} \ - \ Q_{R,\,k} \ \frac{dQ_{I,\,k}}{d\tau} \ = \ \frac{1}{2\,\sqrt{V(\varphi)/V_0}} \ e^{\,-\, {\cal A}(\tau)} \end{equation} to determine its imaginary part $Q_{I,k}$. All in all, we have gathered the impression that Maple handles numerical instabilities in a clear fashion, so that we have had all the way a good control on which spectra required more sophisticated methods for their determination. Most of the results were obtained with a high--order Runge--Kutta method working with large numbers of digits. As we have stressed, they satisfy the nice consistency condition of converging, for sufficiently large negative $\varphi_0$, to the single--exponential power spectra of fig.~\ref{fig:single}, which are also along the lines of \cite{destri}. 
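The integration strategy described above — a high-order Runge--Kutta evolution of the real part combined with the Wronskian condition as a control on the normalization — can be illustrated in a toy setting where the exact answer is known. The sketch below is our illustration only (Python with SciPy, a stand-in for the Maple programs actually used), and it integrates the Mukhanov--Sasaki equation for a pure de Sitter background in conformal time, $v_k'' + (k^2 - 2/\eta^2)\, v_k = 0$, rather than the full two--exponential system; the exact Bunch--Davies mode and the conserved Wronskian then provide the checks.

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 1.0  # comoving wave number (toy units)

def ms_rhs(eta, y):
    """De Sitter Mukhanov-Sasaki equation v'' + (k^2 - 2/eta^2) v = 0,
    split into real and imaginary parts: y = (v_R, v_R', v_I, v_I')."""
    vr, dvr, vi, dvi = y
    w2 = k**2 - 2.0 / eta**2
    return [dvr, -w2 * vr, dvi, -w2 * vi]

def v_exact(eta):
    """Exact Bunch-Davies mode: v = e^{-ik eta} (1 - i/(k eta)) / sqrt(2k)."""
    return np.exp(-1j * k * eta) * (1.0 - 1j / (k * eta)) / np.sqrt(2.0 * k)

def dv_exact(eta):
    """Conformal-time derivative of the exact mode."""
    return (np.exp(-1j * k * eta)
            * (-1j * k * (1.0 - 1j / (k * eta)) + 1j / (k * eta**2))
            / np.sqrt(2.0 * k))

eta0, eta1 = -200.0, -0.05  # deep inside the horizon -> far outside
y0 = [v_exact(eta0).real, dv_exact(eta0).real,
      v_exact(eta0).imag, dv_exact(eta0).imag]
sol = solve_ivp(ms_rhs, (eta0, eta1), y0, method="DOP853",
                rtol=1e-11, atol=1e-12)
vr, dvr, vi, dvi = sol.y[:, -1]

# Super-horizon amplitude of the numerical mode vs. the exact solution
ratio = abs(vr + 1j * vi) / abs(v_exact(eta1))

# Wronskian v_I v_R' - v_R v_I' = 1/2 for Bunch-Davies normalization,
# the analogue of the first-order condition used in the text
wronskian = vi * dvr - vr * dvi
```

In this toy case the Wronskian combination stays at its Bunch--Davies value $1/2$ throughout, which is exactly the kind of consistency check that makes the mixed second--order/first--order scheme of the text numerically convenient.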
In this fashion we reached relatively handily the border of present observations, which lies around $k=10^3$. All these results were obtained, as in \cite{dkps}, starting from a Bunch--Davies--like vacuum, setting initial conditions close to the singularity (at $\tau=0.01$, in terms of parametric time) as we have explained and working in terms of the parametric time $\tau$ in the gauge of eq.~\eqref{gauge}. Our results thus rest on the choice of an initial Bunch--Davies--like vacuum: as we saw in detail in \cite{dkps}, moving away from it blurs the reduction of power at low frequencies. \vskip 24pt \section{\sc A First Look at the CMB}\label{sec:observables} \vskip 12pt We can now see how the power spectra that we have identified, and in particular their pre--inflationary peaks, translate into some features of angular power spectra that do not seem foreign to what WMAP9 \cite{wmap9} and PLANCK \cite{PLANCK} are observing. The potential relevance of pre--inflationary scenarios for the CMB rests, of course, on the key assumption that our Universe gives us access, via the seven or so observable $e$--folds, to the features present in the power spectra of Section \ref{sec:powerspectrum}. Or, if you will, that the CMB gives us access somehow to the onset of inflation. 
However, a back--of--the--envelope computation shows that this is not implausible, provided inflation did not last too long (not more than 60--70 $e$--folds, in a crude scenario). Given this assumption, features present in the primordial power spectrum do translate into corresponding features of the angular power spectrum, on account of the neat relation discussed in \cite{mukhanov_slow}, which we shall present in the form \begin{equation} A_\ell\;(\varphi_0,{\cal M},\delta) \ = \ {\cal M} \ \ell(\ell+1)\ \int_0^\infty \frac{dk}{k} \ {\cal P}_\zeta \big( k , \varphi_0 \big) \, {j_\ell}^2 \big( k \, 10^\delta \big) \, \label{bessel} \end{equation} where we are emphasizing the dependence on $\varphi_0$, and which is relatively accurate for $\ell \lesssim 35$. Here $j_\ell$ denotes a spherical Bessel function, and nicely enough there is no dependence on complicated plasma effects after recombination. Actually, a further ``plus'' of eq.~\eqref{bessel} is that $j_\ell^{\,2}$ is sizably peaked when its argument is of order $\ell$, so that \begin{equation} A_\ell \ \sim \ {\cal P}_\zeta \left( \ell \ 10^{-\delta} \right) \ . \label{Aelldelta} \end{equation} In other words, the $A_\ell$ encode direct information on the primordial spectral function ${\cal P}_\zeta$, but this remarkable fact unfortunately goes on a par with a big ``minus'' of the whole setting. To begin with, very few independent data, $2\ell+1$ for each value of $\ell$, determine the first few multipoles and thus the large--scale structure of the CMB angular power spectrum, and this brings about correspondingly large error bars. In addition, we are observing the CMB from a very special vantage point, so that ``cosmic variance'' induces a properly conservative attitude, so much so that the sizable reduction of the CMB quadrupole is often flagged as a puzzle, but is not widely regarded, at present, as a critical problem for Cosmology.
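The statement underlying eq.~\eqref{Aelldelta}, that $j_\ell^{\,2}$ is sizably peaked when its argument is of order $\ell$, is readily checked numerically; the short Python sketch below (our illustration, not part of the original analysis) locates the absolute maximum of $j_\ell^{\,2}(x)$ for a few multipoles in the range of interest.

```python
import numpy as np
from scipy.special import spherical_jn

# Locate the absolute maximum of j_l(x)^2 on a fine grid and compare with l.
x = np.linspace(1e-3, 200.0, 400_000)
peaks = {}
for ell in (5, 10, 20, 30):
    peaks[ell] = x[np.argmax(spherical_jn(ell, x) ** 2)]
    # For spherical Bessel functions the first maximum, which is also the
    # highest one, sits slightly above x = l (near l + 0.81 (l+1/2)^{1/3}).
```

The maximum drifts slowly above $\ell$, consistently with the Airy-type behavior of Bessel functions near the turning point, but for the multipoles of interest it indeed lies at $x = {\cal O}(\ell)$, which is what eq.~\eqref{Aelldelta} uses.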
On the other hand, the discussion presented in Section \ref{sec:powerspectrum} has anticipated our idea that the reduced quadrupole might represent a natural shadow of an initial singularity, while the current estimates of cosmic variance might prove too conservative. For the time being, we shall concentrate on the low--$\ell$ tail of the CMB power spectrum and on eq.~\eqref{bessel}, but we plan to perform a more complete analysis in a future publication \cite{gnks}. The reader should have noticed the two parameters that we have inserted in eq.~\eqref{bessel}. The first plays a more evident role: it is an overall normalization ${\cal M}$, which accounts for various constants entering the relation between the angular power spectrum and the primordial power spectrum of scalar perturbations and for the conversion to the proper units, $\mu K^2$, but ultimately reflects the scale of inflation. The second, the exponent $\delta$, is even more interesting. It can be moved, up to a sign, into the argument of ${\cal P}_\zeta$, and controls the horizontal displacement of the features present in the power spectra of Section \ref{sec:powerspectrum} with respect to the main peaks of the Bessel functions. Alternatively, in more physical terms, $\delta$ allows a finer tuning between the largest wavelengths that are entering the cosmic horizon at the present epoch and those that were exiting it at the onset of the inflationary phase, which are assumed to be roughly identical in our setting.
\begin{figure}[h] \begin{center}$ \begin{array}{ccc} \epsfig{file=chi-square-single-climb.eps, height=1.5in, width=1.5in}& \qquad\qquad \epsfig{file=chi-square-single-desc.eps, height=1.5in, width=1.5in} \end{array}$ \end{center} \caption{\small $\chi^2/DOF$ arising from comparisons between WMAP9 raw data and the angular power spectra predicted by the one--exponential potential of eq.~\eqref{potonexp} for a climbing scalar (left, blue dots), for a descending scalar (right, blue dots), and for the attractor (orange, diamonds).} \label{fig:angular_fit_single} \end{figure} Let us begin our analysis from the attractor curve \begin{equation} {\cal P}_\zeta \left( k \right) \ = \ k^{n_s-1} \ , \end{equation} noticing that in this case $\delta$ can be set to zero, since it plays \emph{no independent role} with respect to ${\cal M}$, and that the $A_\ell$ can be computed exactly, with the end result \begin{equation} A_\ell^{\rm (attr)}({\cal M}) \ = \ {\cal M} \ \ \frac{\sqrt {\pi }\, \ell(\ell+1)}{4} \ {\frac { \Gamma \left( \frac{3\,-\,{\it n_s}}{2} \right) \Gamma \left( \frac{2\,l\,-\,1\,+\,{\it n_s}}{2} \right) }{\Gamma \left( \frac{4\,-\,{\it n_s}}{2} \right) \Gamma \left( \frac{2\,l\,+\,5\,-\,{\it n_s}}{2} \right) }} \ . \label{bessel_attractor} \end{equation} We can now compare this expression, computed for the preferred value of the spectral index $n_s \simeq 0.96$, with the first 31 raw WMAP9 data, adjusting the normalization ${\cal M}$ in such a way as to minimize \begin{equation} \chi^2 \ = \ \ \sum_{\ell=2}^{32} \frac{\left(A_\ell\ - \ A_\ell^{\rm WMAP9}\right)^2}{\left(\Delta A_\ell^{\rm WMAP9}\right)^2} \ , \label{chi_squared} \end{equation} where $A_\ell^{\rm WMAP9}$ are the WMAP9 central values and $\Delta A_\ell^{\rm WMAP9}$ are the corresponding errors. 
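As a consistency check, the closed form of eq.~\eqref{bessel_attractor} can be verified by computing the integral of eq.~\eqref{bessel} directly for ${\cal P}_\zeta = k^{n_s-1}$, with ${\cal M}=1$ and $\delta=0$. The Python sketch below is again our illustration (the actual fits were performed in Maple): it evaluates the Gamma-function combination via log-Gammas and compares it with a brute-force quadrature.

```python
import numpy as np
from scipy.integrate import trapezoid
from scipy.special import spherical_jn, gammaln

ns = 0.96  # spectral index

def A_ell_closed(ell):
    """Closed form of eq. (bessel_attractor) with M = 1, via log-Gammas."""
    lg = (gammaln((3.0 - ns) / 2.0) + gammaln((2.0 * ell - 1.0 + ns) / 2.0)
          - gammaln((4.0 - ns) / 2.0) - gammaln((2.0 * ell + 5.0 - ns) / 2.0))
    return np.sqrt(np.pi) * ell * (ell + 1.0) / 4.0 * np.exp(lg)

def A_ell_numeric(ell, kmax=1500.0, n=3_000_000):
    """Brute-force quadrature of l(l+1) int dk/k k^{ns-1} j_l(k)^2;
    the truncated tail beyond kmax contributes at the 1e-6 level."""
    k = np.linspace(1e-6, kmax, n)
    integrand = k ** (ns - 2.0) * spherical_jn(ell, k) ** 2
    return ell * (ell + 1.0) * trapezoid(integrand, k)
```

For $n_s \to 1$ the closed form reduces to the familiar flat Sachs--Wolfe plateau $A_\ell = {\cal M}/2$, independent of $\ell$, which is a quick sanity check on the Gamma-function combination.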
In this fashion, after adjusting ${\cal M}$ one is effectively left with 30 independent degrees of freedom ($DOF$'s), and the end result for the least $\chi^2$ per $DOF$ is \begin{equation} \frac{\chi^2_{\rm (attr,\, min)}}{DOF} \ = \ \frac{25.46095}{30} \ \simeq \ 0.849 \ . \label{chi2_attractor} \end{equation} We can now proceed to the pre--inflationary models of the preceding sections, starting from the one--exponential climbing and descending systems. In the one--exponential systems, as we have seen, the power spectra depend little on $\varphi_0$, so that we contented ourselves with the nine values $\varphi_0=0,-0.5,-1,\ldots,-4$, first minimizing in all cases eq.~\eqref{chi_squared} analytically with respect to ${\cal M}$ and then exploring the result for more than 60 values of $\delta$ belonging to the interval $[-1.2,1.2]$, which encompasses all the features described in Section \ref{sec:powerspectrum}. Let us stress that the behavior of the spherical Bessel functions makes our knowledge of the primordial power spectra, which is limited to the range $10^{-2} < k < 10^3$, widely sufficient to obtain relatively accurate results in eq.~\eqref{bessel} for $\delta \in [-1.2,1.2]$, since we are only interested in values of $\ell \leq 32$. As expected, the lowest values of $\chi^2/DOF$ are essentially the same for the one--exponential climbing and descending cases, and are essentially independent of $\varphi_0$, as can be seen clearly in the point plots of fig.~\ref{fig:angular_fit_single}. Notice that both lie only slightly below the attractor points although they correspond to $\chi^2 \approx 24$, since in both cases one is left with $DOF=29$, after fixing both ${\cal M}$ and $\delta$, for any choice of $\varphi_0$. For the two--exponential system of eq.~\eqref{potwoexp}, whose dependence on $\varphi_0$ is far richer, we performed a more detailed investigation as follows: \begin{itemize} \item[1. 
] we explored a wider sequence of about 25 values including $\varphi_0=0,-0.25,-0.5,\ldots,-3.75,-4$, in order to make the features of the transition region involving the pre--inflationary peak more transparent. Our initial choice for $\gamma$ was motivated by the naive correspondence between the mild exponential and the spectral index, \begin{equation} n_s \ - \ 1 \ = \ 3 \ - \ 2\, \nu \ = \ - \ \frac{6\, \gamma^2}{1\,-\,3\,\gamma^2} \ . \end{equation} This result would hold exactly for power--law inflation and gives $\gamma \simeq 0.08$ for $n_s \simeq 0.96$; \item[2. ] we also repeated the analysis for the potentials of eq.~\eqref{potwoexp} with $\gamma=0.04$ and with $\gamma=0.02$, in order to take a first look at systems where an exponential ``hard wall'' accompanies \emph{concave} potentials like those in eq.~\eqref{starobinsky}. Further terms could in fact complete the two--exponential potentials for lower values of $\varphi_0$, turning them into concave functions capable of associating the proper spectral index to later portions of the angular power spectrum, as in fig.~\ref{fig:starobinsky}. This supplementary analysis may be regarded as a first attempt to take into account the well--known difficulty of power--law inflation with large tensor--to--scalar ratios, but as we anticipated we plan to subject the whole construction to more stringent statistical tests \cite{gnks} over a wider range of frequencies. With this proviso in mind, the $\chi^2$ analysis based on the power spectra in figs.~\ref{fig:single} and \ref{fig:double}, which builds upon the expectation that the key features are well captured by the two--exponential models, represents a natural first step.
\end{itemize} As in the one--exponential system, for any choice of $\varphi_0$ two parameters were determined in order to optimize the comparison with WMAP9 raw data: \begin{itemize} \item we minimized eq.~\eqref{chi_squared} analytically with respect to the normalization factor ${\cal M}$ present in eq.~\eqref{bessel}; \item we then identified optimal choices for the parameter $\delta$ in eq.~\eqref{bessel}, which allows a fine tuning between the largest wave numbers entering the horizon at the present epoch and those exiting it around the onset of the inflationary phase. \end{itemize} \begin{figure}[h] \begin{center}$ \begin{array}{cc} \epsfig{file=chi-square-double_comp.eps, height=1.7in, width=1.7in}& \qquad\qquad \epsfig{file=chi-square-double_spline.eps, height=1.7in, width=1.7in} \end{array}$ \end{center} \caption{\small Comparisons between WMAP9 raw data and the angular power spectra predicted by the two--exponential potential of eq.~\eqref{potwoexp}, in point form (left) and in spline form (right), for $\gamma=0.08$ (red), $\gamma=0.04$ (blue) and $\gamma=0.02$ (green), and by the attractor curve (orange). 
The minima in the three cases are $\chi^2_{\rm min}/{DOF}=0.737$, $0.724$ and $0.718$.} \label{fig:angular_fit_double} \end{figure} \begin{table}[h] \begin{center} \caption{Some values of $(\delta,\chi^2)$ in fig.~\ref{fig:angular_fit_double}} \begin{tabular}{r c c c} \hline \hline $\varphi_0$ & $\gamma=0.08$ & $\gamma=0.04$ & $\gamma=0.02$\\ \hline \hline \\ 0.0 & $(-0.32,23.233)$ & $(-0.75,23.187)$ & $(-0.90,23.667)$ \\ -0.5 & $(-0.21,23.203)$ & $(-0.70,23.121)$ & $(-0.85,23.571)$ \\ -1.0 & $(\ \ 0.00,23.191)$ & $(-0.60,22.949)$ & $(-0.75,23.261)$ \\ -1.5 & $(+1.04,22.212)$ & $(-0.22,22.793)$ & $(-0.50,22.775)$ \\ -2.0 & $(+1.03,21.537)$ & $(+1.00,21.040)$ & $(+0.96,20.834)$ \\ -2.5 & $(+1.04,22.574)$ & $(+1.00,21.796)$ & $(+0.96,21.540)$ \\ -3.0 & $(+1.05,23.326)$ & $(+1.00,22.504)$ & $(+0.96,22.277)$ \\ -3.5 & $(+1.06,23.651)$ & $(+1.00,22.847)$ & $(+0.96,22.628)$ \\ -4.0 & $(+1.08,23.821)$ & $(+1.00,22.983)$ & $(+0.96,22.765)$ \\ \\ \hline \hline \end{tabular} \label{table:norm_delta} \end{center} \end{table} \begin{figure}[h] \begin{center}$ \begin{array}{ccc} \epsfig{file=chi2-quadrupole.eps, height=1.5in, width=1.5in}& \qquad\quad \epsfig{file=chi2-intermediate.eps, height=1.5in, width=1.5in}& \qquad\quad \epsfig{file=chi2-peak.eps, height=1.5in, width=1.5in} \end{array}$ \end{center} \caption{\small After optimizing the normalization ${\cal M}$, for $\varphi_0$ close to $0$ the $\chi^2$--fit is driven by the low CMB quadrupole and there is single minimum for $\delta<0$ (left). For intermediate values of $\varphi_0$ a second extremum emerges for $\delta>0$ (center), which readily becomes the overall minimum as the fit becomes eventually dominated by the pre--inflationary peak. 
The examples refer to $\gamma=0.08$, but the results for $\gamma=0.04$ and for $\gamma=0.02$ are qualitatively similar.} \label{fig:chi_squared_delta} \end{figure} Our best--fit analysis entailed the comparison between spectra for climbing scalars in two--exponential systems and raw WMAP9 data for a number of choices of $\varphi_0$ that follow closely the evolution of the power spectrum from the slow growth that was elaborated upon in \cite{dkps} to the typical peak that reflects conventional transitions from fast to slow roll and was nicely identified in \cite{destri}. As we have said, in all cases we minimized $\chi^2$ of eq.~\eqref{chi_squared}, first analytically with respect to ${\cal M}$ and then, numerically, with respect to $\delta$, exploring to this end more than 60 values in the interval $[-1.2,1.2]$. This process was repeated for more than 20 values of $\varphi_0$ belonging to the interval $[-4,0]$ and including $-4,-3.75,\ldots,-0.25,0$. This rather rich discrete set sufficed to capture clearly the effects of the transition that, as we described in Section \ref{sec:powerspectrum}, accompanies the emergence of the pre--inflationary peak in fig.~\ref{fig:double}. As can be seen in fig.~\ref{fig:angular_fit_double}, starting from $\varphi_0=-4$ and proceeding toward larger values, an initial \emph{decrease} of $\chi^2$ down to a minimum corresponding to $\chi^2_{\rm min}/DOF \simeq 0.737$ is followed by a more rapid \emph{increase} and then essentially by a plateau that extends up to $\varphi_0=0$. This interesting behavior accompanies the transition from a typical pre--inflationary peak terminating on the attractor spectrum of \cite{destri}, to the intermediate pre--inflationary peak of Section \ref{sec:powerspectrum}, and finally to a region where this peak disappears altogether, as we have seen, leaving only the wide infrared depression elaborated upon in \cite{dkps}.
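The two-step minimization — analytic in ${\cal M}$, then a grid scan in $\delta$ — exploits the fact that $\chi^2$ of eq.~\eqref{chi_squared} is quadratic in the overall normalization. The Python sketch below illustrates the procedure on entirely synthetic data (a toy template with a bump standing in for the actual spectra and for the WMAP9 multipoles, which are not reproduced here), using the rough correspondence $A_\ell \sim {\cal P}_\zeta(\ell\, 10^{-\delta})$ of eq.~\eqref{Aelldelta}.

```python
import numpy as np

rng = np.random.default_rng(0)
ells = np.arange(2, 33)  # the first 31 multipoles, as in the fits
ns = 0.96

def P_toy(k):
    """Toy primordial spectrum: a mild tilt plus a 'pre-inflationary peak'."""
    bump = 4.0 * np.exp(-0.5 * ((np.log10(k) - 0.7) / 0.15) ** 2)
    return k ** (ns - 1.0) * (1.0 + bump)

def template(delta):
    """Model A_ell at shift delta, via A_ell ~ P(ell * 10^{-delta})."""
    return P_toy(ells * 10.0 ** (-delta))

def best_M_chi2(a, d, sigma):
    """chi^2(M) = sum ((M a - d)/sigma)^2 is quadratic in M, so the
    analytic minimum is M* = sum(a d / s^2) / sum(a^2 / s^2)."""
    w = 1.0 / sigma ** 2
    M = np.sum(w * a * d) / np.sum(w * a * a)
    return M, np.sum(w * (M * a - d) ** 2)

# Synthetic "data": true normalization 5000, true shift delta = 0.3, 5% errors
truth = 5000.0 * template(0.3)
sigma = 0.05 * truth
data = truth + rng.normal(0.0, sigma)

# Step 1: minimize in M analytically; step 2: scan delta on a grid
deltas = np.linspace(-1.2, 1.2, 241)
chi2s = np.array([best_M_chi2(template(d), data, sigma)[1] for d in deltas])
d_best = deltas[np.argmin(chi2s)]
M_best = best_M_chi2(template(d_best), data, sigma)[0]
```

A feature like the toy bump breaks the near-degeneracy between ${\cal M}$ and $\delta$ that a pure power law would exhibit, which is the same mechanism that makes the pre--inflationary peak so effective in selecting the narrow $\delta$-minimum discussed below.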
We are inclined to regard this rather rich behavior as a \emph{noticeable, if slight, preference of WMAP9 raw data for an infrared depression followed by a pre--inflationary peak}. The transitional behavior finds a neat rationale in the $\delta$--dependence, for any given $\varphi_0$, of the values obtained minimizing $\chi^2$ with respect to ${\cal M}$, as can be seen in fig.~\ref{fig:chi_squared_delta}. Briefly stated, when the pre--inflationary peak is not visible or is too small to play a significant role, this function exhibits wide depressions centered around negative values of $\delta \in (-1,0)$. These clearly signal a tendency to link the slow growth of the corresponding power spectra to the CMB quadrupole. However, as $\varphi_0$ is reduced, a second local depression, quite narrow this time and deeper by 2--3 units, emerges for values of $\delta$ that are now ${\cal O}(1)$. It becomes lower than the other for $\varphi_0 \simeq -1.5$, and clearly reflects a tendency to link the pre--inflationary peak to an oscillation that appears to be present in the CMB angular power spectrum for $\ell \simeq 5$. This discussion has somewhat the flavor of Mean Field Theory so that, borrowing some terminology, one could say that the ``order parameter'' $\delta$ undergoes a first--order transition when the pre--inflationary peak becomes sizable. We have also repeated the analysis for $\gamma = 0.04$ and for $\gamma = 0.02$. These lower values, as we have stated, are meant to simulate the departure from the ``hard'' exponential of concave potentials with lower tensor--to--scalar ratios, like those in eq.~\eqref{starobinsky}. Some of the preferred choices for $\delta$ and the corresponding $\chi^2$ are collected in Table \ref{table:norm_delta}. Notice that \emph{reducing the slope of the mild exponential in eq.~\eqref{potwoexp} leads to slightly improved fits that, as expected, are optimized for slightly lower values of $\varphi_0$}.
This tendency affords a simple explanation: with lower values of $\gamma$ the scalar typically approaches the ``hard wall'' more easily, so that comparable conditions obtain only if $\varphi_0$ is correspondingly lowered. \begin{figure}[h] \begin{center} \epsfig{file=optimal-angular_08_vs_data.eps, height=1.7in, width=1.7in} \end{center} \caption{\small The optimal $A_\ell$ for $\gamma=0.08$ (blue), the attractor $A_\ell$ (orange) and the WMAP9 raw data.} \label{fig:attractor-optimalCl} \end{figure} The differences among the various cases, and the overall preferred status of the two--exponential system, are clearly eye--catching even if, admittedly, they are of limited statistical significance. The values of $\chi^2/DOF$ for the one--exponential potential displayed in fig.~\ref{fig:angular_fit_single} lie indeed rather close to the attractor, while the corresponding values for the two--exponential potential displayed in fig.~\ref{fig:angular_fit_double} lie appreciably farther, especially in the intermediate region. Interestingly, the lowest value of $\chi^2/DOF$ for $\gamma=0.08$, about $0.736$, obtains for $\varphi_0 \simeq -1.8$, and can thus be associated to the perturbative region for String Theory, since as we have stressed $\varphi_0$ sets an upper bound on the string coupling. Moreover, as we have seen lower values of $\gamma$ bring about slightly lower minima that are reached for slightly smaller values of $\varphi_0$. For example, for $\gamma=0.02$ the minimum is about $0.718$ and obtains for $\varphi_0 \simeq -2$. The optimal angular power spectrum for $\gamma=0.08$ is displayed in fig.~\ref{fig:attractor-optimalCl}, together with the optimal attractor angular power spectrum and the corresponding raw WMAP9 data. Notice that the optimal curves for the two--exponential system and for the attractor come together for $\ell \simeq 15$, a value that could be regarded as defining a ``COBE--like'' normalization point for the model. 
\begin{figure}[h] \begin{center}$ \begin{array}{cc} \epsfig{file=p-plot-double.eps, height=1.7in, width=1.7in}& \qquad\qquad \epsfig{file=sigmaplot-double.eps, height=1.7in, width=1.7in} \end{array}$ \end{center} \caption{\small $p$--values for the two--exponential potential \eqref{potwoexp} with $\gamma=0.08$ (red), for $\gamma=0.04$ (blue) and for $\gamma=0.02$ (green) and for the attractor (orange), and corresponding $\sigma$--values.} \label{fig:p_and sigma values} \end{figure} Fig.~\ref{fig:p_and sigma values} collects, for the three double--exponential models with $\gamma=0.08$, $\gamma=0.04$ and $\gamma=0.02$, another characterization of the fits via the $p$--values, which are determined according to \begin{equation} p\left(\chi^2_{\rm min}\ ,\,n\right) \ = \ \frac{1}{2^\frac{n}{2} \ \Gamma\left(\frac{n}{2} \right)}\ \int_{\chi^2_{\rm min}}^\infty dx \ x^{\frac{n}{2}-1} \ e^{\,-\,\frac{x}{2}} \ , \end{equation} where $n$ is the effective number of degrees of freedom (it was called $DOF$ above, and equals 29 in those three fits and 30 in the attractor fit, as we have explained). Larger $p$--values make a model more plausible as a description of the data, and the optimal choices for the double--exponential system clearly result in non--trivial, if still not fully significant, values of $p$ that lie between $0.8$ and $0.9$. One can also recast these considerations in the Gaussian setting. To this end, the starting point is provided by the normalized Gaussian distribution \begin{equation} f(x) \ = \ \frac{1}{\sqrt{2\pi}} \ e^{\,-\,\frac{x^2}{2}} \ , \end{equation} since the $p$--values can be mapped into corresponding $\sigma$--values by inverting the relation \begin{equation} p \ = \ 2 \, \int_\sigma^\infty f(x)\, dx \ . \end{equation} In this fashion, a $p$--value of about $6 \times 10^{-7}$ would translate into five--$\sigma$, while a $p$--value of about $3 \times 10^{-3}$ would translate into three--$\sigma$, just to quote a couple of familiar instances.
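In terms of standard library functions, the $p$-value above is the survival function of the $\chi^2$ distribution, and the $\sigma$-value inverts the two-tailed normal integral; the minimal Python sketch below (our illustration) reproduces the familiar correspondences just quoted.

```python
import numpy as np
from scipy.stats import chi2, norm

def p_value(chi2_min, n):
    """p = integral of the chi-square density with n DOF from chi2_min up."""
    return chi2.sf(chi2_min, n)

def sigma_of_p(p):
    """Invert p = 2 int_sigma^inf N(0,1) dx, the two-tailed Gaussian measure."""
    return norm.isf(p / 2.0)

# The familiar anchors quoted in the text:
five_sigma = sigma_of_p(5.7e-7)   # close to 5
three_sigma = sigma_of_p(2.7e-3)  # close to 3

# A fit in the spirit of the best two-exponential cases: chi2 ~ 21 with 29 DOF
p_best = p_value(21.0, 29)        # lies between 0.8 and 0.9
```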
In our case the $p$--value plots of fig.~\ref{fig:p_and sigma values} can be recast into corresponding plots for $\sigma$, also displayed in fig.~\ref{fig:p_and sigma values}. All in all, none of the different models is statistically excluded, since they all lie within one--$\sigma$ of the raw WMAP9 data, with an eye--catching preference for the double--exponential models, and especially so for those with lower values of $\gamma$, in regions close to $\varphi_0=-2$ where the pre--inflationary peak is visible and yet lies well apart from the attractor curve. While we cannot claim to have uncovered an evident link between the first peak in the CMB angular power spectrum and the climbing phenomenon, we find it hard to dismiss the feeling that something is going on here, insofar as the currently available data are concerned. \vskip 24pt \section{\sc Conclusions}\label{sec:conclusions} \vskip 12pt This work builds on two main inputs. The first, drawn from String Theory \cite{strings}, is the existence of a class of orientifold vacua \cite{orientifolds} with ``brane supersymmetry breaking'' \cite{bsb}. In these models, which admit no maximally symmetric vacuum geometries, supersymmetry is broken at the string scale and is non-linearly realized in the low--energy supergravity \cite{10d_bsb_couplings}, which includes a ``hard'' exponential potential, but no tachyon excitations are present at tree level. The second input is drawn from a striking feature of the spatially flat cosmologies allowed by the corresponding low--energy supergravities. These involve a scalar field that emerges from an initial singularity with no other option than climbing up the steep exponential potential \cite{exp_sol,dks}. The process comes to an end at a turning point, while other branes of String Theory \cite{branescan,branesugimoto} can give rise, in principle, to milder exponentials \cite{fss,as13} that can force an inflationary phase during the ensuing descent.
Therefore, in this setting the breaking of supersymmetry at high scales can provide a rationale for inflation to begin. Intuitively, as the climbing phase ends, in reverting its motion the scalar ought to bring along a spurt of exponential expansion for the Universe and a corresponding peak in the power spectrum of scalar perturbations. Here we showed that this is indeed the case for scalar perturbations, while tensor perturbations, which depend essentially on the scale factor alone, do not exhibit a similar phenomenon. It took some effort to quantify this expectation, but we have provided ample evidence for it here, and we have also started to compare these findings with the low--$\ell$ tail of the CMB angular power spectrum. \emph{Leaving aside cosmic variance, a fair summary of our findings is that the low--$\ell$ WMAP9 raw data tend slightly to favor scenarios of this type over the attractor power spectrum underlying the standard $\Lambda$CDM setup}. The most interesting aspect of the whole setting, however, is perhaps the main assumption on which the comparison rests. This posits an essentially direct correspondence between the largest wavelengths entering the cosmic horizon at the present epoch and those that exited it at the onset of inflation. If true, it would translate into the enticing perspective of drawing from the low--$\ell$ tail of the CMB power spectrum some information on the very early Universe.
\begin{figure}[h] \begin{center}$ \begin{array}{cccc} \epsfig{file=starobinsky_1.eps, height=1.2in, width=1.2in}& \qquad \epsfig{file=starobinsky_2.eps, height=1.2in, width=1.2in}& \qquad \epsfig{file=starobinsky_3.eps, height=1.2in, width=1.2in}& \qquad \epsfig{file=starobinsky_4.eps, height=1.2in, width=1.2in} \end{array}$ \end{center} \caption{\small Power spectra of scalar perturbations computed, in cosmic time, for the Starobinsky--like potentials of eq.~\eqref{starobinsky}, shown here with an arbitrary normalization but with the parameters adjusted in order to guarantee about 60 $e$--folds of inflation and a fair portion of them with $n_s \simeq 0.96$. The slopes of the dotted lines reflect the range of values for $n_s$ that are consistent with current observations.} \label{fig:starobinsky_power} \end{figure} All we have shown so far rests on the two--exponential potentials of eq.~\eqref{potwoexp}, which are relatively simple to analyze but are clearly incomplete in several respects. As we have stressed, however, the pre--inflationary peaks reflect the local behavior in the region where the scalar reverts its motion, and we have also seen that the agreement with WMAP9 data improves slightly as the parameter $\gamma$ of the two--exponential models is reduced below 0.08, consistently with the expected preference for concave potentials. We can actually conclude with a brief discussion of some results obtained directly for the Starobinsky--like potentials of eq.~\eqref{starobinsky}, with parameters adjusted so as to grant about 60 $e$--folds of inflation and a fair portion of them with $n_s \simeq 0.96$. The computation was more subtle since, for one matter, these potentials are negligibly small near the minimum so that we could not resort to the convenient gauge \eqref{gauge}, and moreover the initial values were to be set very close to the singularity. 
Still, the ``lsode'' method available in Maple worked rather efficiently, and the power spectra displayed in fig.~\ref{fig:starobinsky_power} clearly vindicate our claims. As in the two--exponential models, when the impact with the hard exponential is tuned, a slow growth gives way to the birth of a pre--inflationary peak and to its eventual merger with an approximately scale--invariant profile. The curves collected in the figure are indeed very similar to those displayed in Section \ref{sec:powerspectrum}, but turn more gradually toward the slope dictated by the observed spectral index $n_s \simeq 0.96$, which we used as an input. We did not perform a systematic analysis as for the two--exponential case, but for example the power spectra displayed in fig.~\ref{fig:starobinsky_power} yield for $\chi^2_{\rm min}/DOF$ the four values 0.738, 0.724, 0.783 and 0.816. These results are along the lines of those displayed in fig.~\ref{fig:angular_fit_double}, and in particular the second reaches the minimum of the curve for $\gamma=0.04$. Let us stress, to conclude, that only a small portion of the power spectra, centered around the pre--inflationary peak, played a role in our low--$\ell$ analysis via eq.~\eqref{bessel}. There is thus room, in principle, for power spectra of this type to be compatible with the $\Lambda$CDM analysis of higher multipoles, while additional small features in the potential could well account for the other oscillations that are apparently present for $\ell \lesssim 35$ in fig.~\ref{fig:attractor-optimalCl}. Two very legitimate objections could be raised, within String Theory, against our analysis. The first concerns the values of the string coupling that accompany these phenomena, and in this case we could provide an encouraging answer, since the scenarios that are apparently preferred in the comparison with the CMB rest on relatively small values of the string coupling. 
There is a second objection, however, on which we have nothing definite to say, now as in the past. It has to do with curvature corrections, which are ubiquitous in String Theory and are expected to dominate near an initial singularity, casting doubts on a low--energy analysis of the climbing phenomenon. They were examined, insofar as possible, at low orders in \cite{cd}, but it is fair to state that at present we do not understand if and how they could be under control. Time and more detailed studies will tell whether these considerations can find a more rigorous origin in String Theory, and whether the PLANCK data to be soon released and a more refined data analysis taking into account wider portions of the CMB power spectrum \cite{gnks} will confirm some encouraging clues that have emerged from the present work. \vskip 24pt \section*{Acknowledgments} \vskip 12pt We are grateful to E.~Dudas and S.P.~Patil for collaboration at earlier stages of this research, to A.~Gruppuso and P.~Natoli for an ongoing collaboration on detailed likelihood tests and to E.~Akhmedov, P.~Fr\'e, H.~Kodama, K.~Kohri, G.~Rolandi and A.S.~Sorin for discussions. We are also grateful to R.~Barbieri, E.~Dudas and S.~Ferrara for discussions on supersymmetry breaking, to M.~Vietri for his interest in this work and to L.~Sabbatino for computer assistance. This work was supported in part by the ERC Advanced Grants n. 226455 (SUPERFIELDS) and n. 226371 (MassTeV), by Scuola Normale Superiore, by the Tokyo Metropolitan University, by INFN (I.S. ST\&FI), by the MIUR-PRIN contract 2009-KHZKRX and by Grant-in-Aid for Scientific Research on Innovative Areas (\# 24104505) from MEXT Japan. The authors would like to thank Scuola Normale Superiore, the \'Ecole Polytechnique and the Tokyo Metropolitan University for the kind hospitality extended to them while this work was in progress. \vskip 24pt \vskip 24pt \newpage
1402.1201
\section{Approach}\label{approach} Simulated annealing is an approach inspired by statistical mechanics which, by analogy, views the values of a multivariable numeric problem as physical states of particles. Simulated annealing is very useful in practice for problems such as the traveling salesperson problem and other so-called NP-complete problems, which have no known polynomial-time numeric solutions. Let $N$ be the given number to be factored, and use $N$ in base 2 with $n$ digits. We seek numbers $A$ and $B$ (written in binary with respectively $a$ and $b$ digits) such that $A * B = N$ with $A$ the larger number (so $a \geq b$). Now, $a + b = n$ or $n+1$. For a given $N$ there are at most $n-2$ possibilities for $\{a,b\}$. (Remember that $a \geq b$, and the leftmost digit of both $A$ and $B$ must be a 1.) Factor $A$, with $a$ binary digits, can have $1, 2, 3, \ldots, a$ 1s in its binary representation. Factor $B$, with $b$ binary digits, can have $1, 2, 3, \ldots, b$ 1s in its binary representation. So, we formulate the factoring problem as follows: Given a binary number $N$ with $n$ digits, find binary numbers $A$ and $B$ with, respectively, $a$ and $b$ digits, of which $a^\prime$ and $b^\prime$ are 1, such that $A * B = N$. The simulated annealing approach starts with a configuration of particles---here the binary digits of trial factors $A$ and $B$---and an energy $E$, here defined as: \begin{equation}\label{energyDefinition}E = \sum_{i=1}^{n}\left\{ \begin{array}{ll} f(i) & \mbox{if $\{AB\}_i = N_i$} \\ 0 & \mbox{if $\{AB\}_i \ne N_i$} \end{array} \right. \end{equation} where $i$ indexes the digits in the binary representations and $f(i)$ is a function that increases monotonically with $i$. We have tested linear ($f(i)=i$) and quadratic ($f(i)=i^2$) forms. This kind of function favors having as many of the binary digits of the product $AB$ match the corresponding binary digits of $N$ as possible, with weighting increasing for the higher bits. 
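As a concrete sketch in Python (the language of the implementation described later), the energy of eq.~\ref{energyDefinition} for a pair of trial factors can be written as follows; the function name and the choice of indexing bits from the least-significant end, so that the most significant bit ($i=n$) receives the largest weight, are our illustrative conventions:

```python
def energy(A, B, N, n, f=lambda i: i * i):
    """Energy of trial factors A, B against an n-bit target N.

    Sums f(i) over the bit positions i = 1..n where the i-th bit of
    the product A*B matches the i-th bit of N.  Bits are indexed from
    the least-significant end, so the most significant bit (i = n)
    gets the largest weight, matching the stated intent of weighting
    the higher bits more strongly.
    """
    prod = A * B
    E = 0
    for i in range(1, n + 1):
        if ((prod >> (i - 1)) & 1) == ((N >> (i - 1)) & 1):
            E += f(i)
    return E
```

With the quadratic weighting $f(i)=i^2$, a perfect factorization such as $N=15$, $A=3$, $B=5$ ($n=4$) attains the maximal energy $1+4+9+16=30$.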
If all digits match exactly, $A$ and $B$ are factors of $N$, and the value of $E$ is maximal. If not, try a different configuration---here new possible factors $A^\prime$ and $B^\prime$---and look at the energy $E^\prime$ in this configuration. If the energy is increased then $A^\prime$ and $B^\prime$ are accepted as the new factors. Using the Metropolis algorithm,\cite{:/content/aip/journal/jcp/21/6/10.1063/1.1699114} occasionally $A^\prime$ and $B^\prime$ are accepted as the new trial factors even if $E^\prime<E$. The chance of accepting the lower energy state is reduced as time goes on; by analogy, the problem cools according to an annealing schedule. From an initial ``temperature'' value $T_0$, given a cooling factor $F_c$, we iterate through $N_a$ annealing steps, each time reducing the temperature by the cooling factor. That is, moving from step $i$ to step $i+1$, \begin{equation*} T_{i + 1} = T_i * F_c \end{equation*} At each annealing step, try some number of configurations---rearrangements of the bits of binary representations of $A$ and $B$. We allow a number of different mechanisms to generate configurations. For one of the factors, choose from one of the following moves: \begin{description} \item[swap] Choose a pair of distinct bits (a 1 and a 0) at random, and swap them \item[slide] Choose a random contiguous sub-sequence of the bits. Remove the rightmost bit, slide the remaining bits one to the right, then put the removed bit in the hole left behind. \item[reverse] Choose a random contiguous sub-sequence of the bits and reverse its order. \item[random] Randomly permute a random selection of the bits (generally a sparse selection, not a contiguous sequence). \end{description} For each configuration, test whether $A^\prime$ and $B^\prime$ multiply to $N$ (meaning they are factors of $N$). 
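A minimal Python sketch of the four move generators follows; the function name \texttt{propose\_move}, the protection of the leading 1 bit, and the 20\% sparsity used for the \textbf{random} move are our illustrative choices, not details fixed by the text:

```python
import random

def propose_move(bits):
    """Return a new bit list after one randomly chosen move.

    `bits` is a list of 0/1 values for one trial factor; index 0 is
    the leading bit, which is kept fixed at 1.
    """
    new = list(bits)
    kind = random.choice(["swap", "slide", "reverse", "random"])
    if kind == "swap":
        # swap a randomly chosen 1 with a randomly chosen 0
        ones = [i for i, b in enumerate(new) if b == 1 and i > 0]
        zeros = [i for i, b in enumerate(new) if b == 0]
        if ones and zeros:
            i, j = random.choice(ones), random.choice(zeros)
            new[i], new[j] = new[j], new[i]
    elif kind == "slide":
        # rotate a random contiguous sub-sequence right by one
        lo = random.randrange(1, len(new))
        hi = random.randrange(lo, len(new))
        seg = new[lo:hi + 1]
        new[lo:hi + 1] = seg[-1:] + seg[:-1]
    elif kind == "reverse":
        # reverse a random contiguous sub-sequence
        lo = random.randrange(1, len(new))
        hi = random.randrange(lo, len(new))
        new[lo:hi + 1] = new[lo:hi + 1][::-1]
    else:
        # permute a sparse random selection of bits
        idx = [i for i in range(1, len(new)) if random.random() < 0.2]
        vals = [new[i] for i in idx]
        random.shuffle(vals)
        for i, v in zip(idx, vals):
            new[i] = v
    return new
```

All four moves conserve the number of 1s, which keeps each trial factor within the $(a, a_1)$ class currently being searched.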
If so, repeat the whole annealing algorithm recursively on $A^\prime$ and $B^\prime$, finding successive sub-factors, until all endpoint numbers are either prime factors (success) or the algorithm fails to factor one of them. If $A^\prime B^\prime \neq N$, test whether the new energy of the system as defined in Eq. \ref{energyDefinition}, $E^\prime$, is greater than the energy of the previous configuration ($E$). If so, accept the new configuration as current and discard the previous one. If $E^\prime < E$, select a uniform random number between 0 and 1, $r$, and accept the new configuration if \begin{equation*} r < \exp\left(-\frac{E - E^\prime}{kT_i}\right) \end{equation*} where $T_i$ is the temperature at annealing step $i$ and $k$ is a constant analogous to the Boltzmann constant in physics. This approach to factoring $N$ is nondeterministic, meaning that there is no guarantee of successfully finding all (or any) of the prime factors. However, the algorithm executes in polynomial time. If we know in advance that $N$ is {\em semiprime} (the product of two and only two prime factors), we stop the calculation once factors $A$ and $B$ are found such that $AB=N$. The number of configurations tested per annealing step is a tunable parameter, as are the cooling factor, the number of annealing steps, and $k$. Note that our definition of $E$ implies that the optimum value is the largest $E$, not the smallest. The ``cooling'' actually leads to higher and higher values of $E$. At this point, the reader may ask how we determined the number of bits in $A$ and $B$. If the number to be factored, $N$, has $n$ bits in its binary representation, its factors could have any number of bits from 2 to $n-1$. For a given $A$, having $a$ bits, we can compute the possible numbers of bits the other factor $B$ could have, given that $a + b = n$ or $n+1$. 
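The accept/reject rule can be sketched in Python as below. Note the sign convention: since $E$ is being maximized, a downhill move ($E^\prime < E$) should be accepted with probability $\exp(-(E - E^\prime)/kT_i)$, which lies between 0 and 1 (the function name \texttt{accept} is ours):

```python
import math
import random

def accept(E_new, E_old, T, k=1.0):
    """Metropolis test for a proposed move when *maximizing* E.

    Uphill moves are always taken; downhill moves are taken with
    probability exp(-(E_old - E_new) / (k * T)), which shrinks as the
    temperature T is lowered by the annealing schedule.
    """
    if E_new >= E_old:
        return True
    return random.random() < math.exp(-(E_old - E_new) / (k * T))
```

At high $T$ almost every downhill move is accepted, while as $T \to 0$ the rule degenerates into pure hill-climbing.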
Our actual program explicitly factors out any prime factors up to (decimal) 1000; so if $N$ has $n$ bits, $A$ could have anywhere from 10 to $n-9$ bits. The corresponding $B$ could have either $b=n-a$ or $b=n-a+1$ bits. Now the reader may ask how we set the number of 1s in each binary number $A$ and $B$. We specify that the leftmost bit in both is 1 (that is, no leading 0 bits are allowed). With $a_1$ denoting the number of 1s in $A$, and $b_1$ denoting the number of 1s in $B$, we know that their range of values is \begin{equation*} \begin{array}{ll} 1\leq a_1 \leq a \\ 1\leq b_1 \leq b \end{array} \end{equation*} Our algorithm must try all $ab$ combinations. This scales roughly as $n^2$. The whole algorithm then has a deep loop nest: \begin{itemize} \item{loop over the possible values of $a$} \item{loop over the corresponding possible values of $b$} \item{loop over the possible values of $a_1$} \item{loop over the possible values of $b_1$} \item{loop over all annealing steps (temperatures)} \item{loop over configurations} \end{itemize} The scaling of the algorithm is then upper-bounded by \begin{equation*} n * (n-1) * n * (n-1) * N_a * N_c \approx n^4 * N_a * N_c \end{equation*} where $N_a$ is the number of annealing steps and $N_c$ is the number of configurations. Here we ignored the optimization of factoring out all small prime factors less than 1000, and approximated the maximum number of digits in $A$ and $B$ as $n$; their exact limits are lower, as detailed earlier. With $N_a$ and $N_c$ being constants, this is fourth order in $n$, the number of digits in the binary representation of the number we're factoring, $N$. \section{Tests} We tested our algorithm on numbers with up to 31 decimal digits (67 binary digits). 
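The two outermost loops of the nest above enumerate the admissible digit counts; a short sketch follows (the name \texttt{bit\_length\_candidates} and the \texttt{kmin} lower bound, which stands in for the effect of pre-factoring out small primes, are ours):

```python
def bit_length_candidates(n, kmin=2):
    """Admissible (a, b) digit-count pairs for an n-bit target N.

    Requires a >= b and a + b equal to n or n + 1; kmin bounds the
    smallest factor size considered.
    """
    pairs = []
    for a in range(kmin, n):
        for b in (n - a, n - a + 1):
            if kmin <= b <= a:
                pairs.append((a, b))
    return pairs
```

For example, $N = 45$ ($n=6$ bits) factors as $9 \times 5$, i.e. a 4-bit times a 3-bit number, and the pair $(4, 3)$ indeed appears among the candidates.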
Typically, the number of configurations per annealing step was set to \begin{equation*} M * \max(a,b) \end{equation*} where $a$ and $b$ are the number of binary digits in the current configuration of factors $A$ and $B$, and $M$ is an input parameter. We constrain the acceptable configurations to only those for which the number of 1s in the binary representation of the product $AB$ is equal to the (known) number of 1s in the binary representation of $N$. Optionally, we constrain allowed ``bad'' moves to restrict the decrease in energy to be within a specified fraction of the current energy; this avoids ``really bad'' moves. Optionally, we can retain the previous, higher energy configuration prior to an allowed ``bad'' move, run further configurations for a specified number of tries, then revert to the saved prior configuration if the energy has not evolved to exceed the prior energy. Table \ref{testCaseTable} and Table \ref{resultsTable} show results for a few test cases we successfully factored, and the parameter values used. \begin{table} \centering { \small \begin{tabular}{clll} \hline {\bf Case} & {\bf $N$} & {\bf $n$} & {\bf Factors} \\ \hline A & 99999989237606677 & 57 & $316227731*316227767$ \\ B & 999999866000004473 & 60 & $999999929*999999937$ \\ C & 9999999942014077477 & 64 & $3162277633*3162277669$ \\ D & 99999980360000964323 & 67 & $9999999017*9999999019$ \\ \hline \end{tabular} \caption{{\small A few cases we tested. $N$ is the (semiprime) number to be factored. 
$n$ is the number of digits in the base-two representation of $N$.}} \label{testCaseTable} } \end{table} \begin{table} \centering { \small \begin{tabular}{ccrrrc} \hline {\bf Case} & {\bf $N^*_a$} & $F_c$ &$k$ & {\bf $N_c$} & {\bf Time} \\ \hline A & $41 \pm 62$ &0.997 & 63365 & 1,450,000 & $17 \pm 16$ \\ B & $13 \pm 8$ &0.997 & 73810 & 1,500,000 & $12 \pm 8$ \\ C & $144 \pm 161$ &0.997 & 89440 & 1,600,000 & $134 \pm 149$ \\ D & $89 \pm 40$ &0.997 &102510 & 1,700,000 & $96 \pm 45$ \\ \hline \end{tabular} \caption{{\small Results for a few cases we tested. $N^*_a$ is the number of annealing steps before finding the factors. $F_c$ is the temperature factor ($T$ is multiplied by this each annealing step. We always started with initial $T_0=F_c$.) $N_c$ is the maximum number of configurations tried each annealing step. Time is the runtime of the algorithm in minutes. In all cases, $N^*_a$ and Time are averaged over 5 runs, with the standard deviation indicated as error amounts. We used cost function $f(i)=i^2$ (see Eq. \ref{energyDefinition}).}} \label{resultsTable} } \end{table} The current implementation is in Python, using the {\tt bitarray} module for manipulating the binary representation. Python automatically handles arbitrary-precision integers correctly. We ran all the test cases on a laptop. Since we knew the factors in advance for these test cases, we reduced the search space over numbers of bits and the number of 1s in the factors to be consistent with the known factors. This is only an expedient, to be able to demonstrate the algorithm running on a single workstation. To run full tests for these and even larger numbers, with unknown factors, we are implementing a parallel version of the algorithm. \section{Parallelism} In the deep loop nest in Section \ref{approach}, there is ample parallelism to be exploited. All iterates of the outer four loops over numbers of digits and numbers of 1s in the factors can be computed independently, in parallel. 
This work scales as $n^4$, and is appropriate for distributed-memory parallelism with message passing. Memory requirements are very low, so replication of data is not a problem. The parallel algorithm needs periodic, but infrequent, synchronization to test for success and proceed with factorizing the factors, down to the final level of prime factors only. The loop over annealing steps is sequential, but the innermost loop over configurations can be parallelized; this is a candidate for thread parallelization and shared memory. Our initial target architecture will be an IBM Blue Gene/Q system, on which we have Python for the compute nodes. \section{Conclusions} We have shown the feasibility of using a nondeterministic optimization approach such as simulated annealing to work in a controlled and constrained manner toward prime factorization of large integers. Using a binary (base 2) representation allows for simple configuration-changing rules, and a simple energy cost function whose value is optimized. The algorithm has polynomial scaling in $n$, the number of bits in the binary representation of the number to be factored. It is not guaranteed to find an exact solution, which is the only solution of interest in number factorization, but in practice we have found solutions for a wide range of numbers. It seems that the closer (further) the ratio of 1s to 0s in the binary representation of a factor of a semiprime number is to one, the harder (easier) it will be for a simulated annealing based method to factor the number. Thus, in picking semiprime numbers to use for encryption, a consideration should be the ratio of 1s to 0s in the two factors. \bibliographystyle{unsrt}
1212.0994
\section{Introduction} Ultraluminous X-ray sources (ULXs; \cite{max00}) continue to be enigmatic objects since their discovery in the 1980's; see \citet{fab04} and \citet{feng11} for reviews. They are bright, variable, and off-nuclear point-like X-ray sources with luminosities of $>$10$^{39-41}$~erg~s$^{-1}$, which exceed the Eddington luminosity ($L_{\rm{Edd}}$) of Galactic black holes (BHs). Despite intensive multi-wavelength studies over three decades, their central engine remains an open question. ULXs are very likely to contain a BH, but not a super-massive BH, since their X-ray properties are similar to those of Galactic BH binaries (BHBs) in many respects; e.g., they show short-term flux variations and have records of exhibiting at least two spectral states characterized by power-law (PL) shaped spectra and convex-shaped (thermal) spectra \citep{max07,sor11}. Some ULXs have even exhibited spectral transitions between the two \citep{laPar01,kubo01,iso09}. These similarities indicate the same mechanism (i.e., gas accretion onto BHs) being responsible for electromagnetic radiation in ULXs. Some authors (e.g., \cite{kubo02,mizu07,max07,iso09}) claimed that the PL and the convex spectral states of ULXs may correspond to the very high state (VHS; \cite{miyamo91}) and an extremely high luminous disk state \citep{kubo04a,iso12} described by a slim disk model \citep{abr88} of Galactic BHBs, respectively. This interpretation is based on the following important observational facts: (i) Some observed properties of ULXs are consistent with the theoretical study based on the slim disk model \citep{wata00}. For example, the derived temperature profile of the disk is flatter than that of the standard disk \citep{kiki06,tsuno06}, and the apparent innermost disk radius is inversely proportional to the innermost disk temperature \citep{mizu01}. 
(ii) The luminosity difference between the two states of ULXs is consistent with that between the slim disk state and the VHS of Galactic BHBs. Such ULXs were reported to be brighter in the convex state than in the PL state \citep{laPar01,kubo01,iso09}. However, some concerns remain. First, the PL state of ULXs shows a wider range of photon indices; for example, ${\it{\Gamma}}$~=~1.0--1.5 (M\,51 source-69 by \cite{yoshi10}), 1.6--2.0 (NGC\,1365 X-1 by \cite{sor09}), and 2.0--2.7 (NGC\,5204 X-1 by \cite{kaj09}). Such a variety of photon indices has not been reported in the VHS of Galactic BHBs. Regarding this concern, we cannot rule out the possibility that the energy range of typical X-ray instruments (e.g., 0.5--10~keV) samples different portions of the intrinsic X-ray spectrum in various types of ULXs (and BHBs), and that this produces the variety in the photon indices of ULXs. The second concern is that a spectral turnover at 3.5--7~keV (e.g., \cite{kubo02,mizu07,gla09}), which often accompanies the PL spectra of ULXs, is found at a much lower energy than in Galactic BHBs in the VHS (at about several tens~keV; \cite{kubo04b}). These differences seem to imply that the PL state of ULXs and the VHS of Galactic BHBs could be distinct in nature. Obviously, investigating the characteristics of ULXs in various PL states is a key to unveiling the origin of ULXs, though these have so far been discussed less extensively than those in the convex state. A suitable target for achieving our purpose is the unique ULX X-1 in the nearby spiral galaxy IC\,342 (figure~\ref{fig1}a) at 3.3~Mpc \citep{saha02}. While this is the first ULX exhibiting the transition from the PL state to the convex one \citep{kubo01,kubo02}, it stayed in the PL state in most of the previous observations \citep{rob04,feng09,mak11}. 
Two further seemingly distinct sub-states have been reported: the high luminosity PL state and the low luminosity PL state, where the former is brighter than the latter by a factor of 2 to 3 \citep{feng09,mak11}. In order to elucidate the nature of the PL state of the source, we discuss the PL spectral states and the long-term spectral variability of IC\,342 X-1, assembling all the recently available data sets, which include our new data taken with Suzaku and unpublished data taken with Swift. Particular attention is paid to the Suzaku data, since Suzaku has a detector sensitive above 10~keV (see $\S$~2.1 for details), which makes it possible to determine whether a turnover exists in the hard X-ray band in the PL state. \medskip This paper is composed of the following sections: In $\S$~2, we summarize the observations and data reduction. In $\S$~3 and $\S$~4, we describe the results of timing and spectral analysis, respectively. We develop discussion for the PL states of IC\,342 X-1 in $\S$~5, and conclude this paper in $\S$~6. \begin{figure*} \begin{center} \FigureFile(160mm,60mm){./f01.eps} \end{center} \caption{Optical and X-ray images of IC\,342. (a) Second Digitized Sky Survey three-color image. Red, green, and blue represent the 850, 650, and 480~nm band intensities, respectively. Pluses show the positions of the two ULXs. (b)--(c) Smoothed Suzaku XIS three-color images obtained in (b) 2010 August and (c) 2011 March. Red, green, and blue represent the 0.5--2, 2--4, and 4--10~keV bands, respectively. Solid and dashed circles indicate the source and background extraction regions, respectively. The other labels indicate the source names (X-3, sources-6, and 11). Solid squares show the FoV of XIS. \label{fig1}} \end{figure*} \section{Observations and Data Reduction} Table~\ref{table1} shows the assembled data sets. We labeled the observations as Su1--2 for Suzaku, XM1--4 for XMM-Newton, Ch1--2 for Chandra, and Sw1--3 for Swift. 
The list includes all the available data sets except for a Chandra observation (ObsID~=~7069), because the target source suffers severe pile-up, with a pile-up fraction of more than 20\%. Throughout this paper, we use events in the 14--20~keV band for Suzaku PIN, and in 0.5--10~keV for the other instruments. \subsection{Suzaku} The Suzaku observatory \citep{mitsu07} has two operating science instruments. One is the X-ray Imaging Spectrometer (XIS; \cite{koya07}) covering an energy range of 0.2--12~keV, while the other is the Hard X-ray Detector (HXD; \cite{taka07,koku07}) covering 10--600~keV. We observed IC\,342 with Suzaku twice, in 2010 August (Su1) and 2011 March (Su2), for about 70~ks each (table~\ref{table1}). We pointed at the midpoint between X-1 and the other ULX (X-2) in order to observe the two ULXs within the XIS field of view (FoV). The data in these observations were taken with the normal clocking mode with a frame time of 8~s for XIS, and with the normal mode with a time resolution of 61~$\mu$s for HXD. In order to create new cleaned event files, we reprocessed the data using the software package HEASoft\footnote{See http://heasarc.gsfc.nasa.gov/docs/software/lheasoft/ for details.} version~6.10 with the X-ray telescope calibration database (CALDB) version~20100730, the XIS CALDB version~20110210, and the HXD CALDB version~20101202. Data taken during South Atlantic Anomaly (SAA) passages, within 436~s after exiting from the SAA, at Earth elevation angles below 5\arcdeg, at day-Earth elevation angles below 20\arcdeg\ (for XIS), or in low cutoff rigidity regions ($\le$6~GV for HXD) were screened out. 
\begin{table} \begin{center} \caption{Observation log.\label{table1}} \footnotesize \begin{tabular}{ccccc} \hline Data & Observatory & ObsID & Date & $t_{\rm{exp}}$\footnotemark[$*$] \\ label & & & & (ks) \\ \hline Su1 & Suzaku & 705009010 & 2010-08-07 & 74.4/64.1 \\ Su2 & & 705009020 & 2011-03-20 & 68.5/74.3\\ XM1 & XMM-Newton & 0093640901 & 2001-02-11 & \phantom{1}9.7/\phantom{1}5.1 \\ XM2 & & 0206890101 & 2004-02-20 & 23.4/17.8 \\ XM3 & & 0206890201 & 2004-08-17 & 23.4/17.1 \\ XM4 & & 0206890401 & 2005-02-10 & 18.6/15.1 \\ Ch1 & Chandra & 2916 & 2002-04-29 & 9.3 \\ Ch2 & & 2917 & 2002-08-26 & 9.9 \\ Sw1 & Swift & 00035474001 & 2006-01-07 & 7.2 \\ Sw2 & & 00031987001 & 2011-06-14 & 3.0 \\ Sw3 & & 00031987002 & 2011-06-15 & 9.4 \\ \hline \multicolumn{5}{@{}l@{}}{\hbox to 0pt{\parbox{85mm}{\footnotesize \par\noindent \footnotemark[$*$] XIS (left) and HXD PIN (right) exposures for Suzaku, EPIC MOS (left) and EPIC pn (right) exposures for XMM-Newton, ACIS exposure for Chandra, and XRT exposure for Swift. }\hss}} \end{tabular} \end{center} \end{table} \medskip \subsubsection{XIS} XIS is composed of three front-illuminated (FI) and one back-illuminated (BI) CCD devices with a FoV of 17\farcm8$\times$17\farcm8. Since one of the three FI devices became dysfunctional due to a putative micrometeorite hit in 2006 November, we used the remaining two FI and one BI devices for our analysis. We merged the two FI data because the devices have nearly identical responses. Figures~\ref{fig1}~(b) and (c) show Suzaku XIS three-color images of IC\,342. We can see the three bright sources (X-1, X-2, and X-3 named by \cite{oka98}) and two relatively faint sources (sources-6 and 11; \cite{bau03}). X-2 and X-3 are respectively harder and softer than X-1 in the XIS range. Since hard sources may contribute to the PIN spectrum (see $\S$~2.1.2), we also investigated the properties of X-2 in the present paper. 
In the equinox J2000.0, the peak positions of X-1 and X-2 were respectively determined at (R.A., Dec.)~=~(\timeform{03h45m55s}, \timeform{+68D04'59''}) and (\timeform{03h46m15s}, \timeform{+68D11'17''}), which are consistent with previous studies \citep{kubo01,kong03,bau03,mak11}. Source events were extracted from a circle of an adaptively chosen radius of 130\arcsec, 130\arcsec, 140\arcsec, and 120\arcsec, for X-1 in Su1, X-1 in Su2, X-2 in Su1, and X-2 in Su2, respectively (figures~\ref{fig1}b and c). Background events, in contrast, were extracted from a common region. Here, we masked circles with a radius of 1\arcmin\ around the two relatively faint sources-6 and 11. Two other relatively faint sources near each ULX (source-9 for X-1 and source-12 for X-2; \cite{kong03,bau03}) are not resolved in the XIS images, but cause no significant contamination: source-9 has 10\% of the X-ray flux of X-1, while source-12 has 3\% of that of X-2. For the spectral analysis, we selected only the events with the standard ASCA grade set of 0, 2, 3, 4, and 6. We calculated their redistribution matrix functions (RMFs) using the {\texttt{xisrmfgen}} tool, while the ancillary response files (ARFs) were simulated with the {\texttt{xissimarfgen}} tool \citep{ishi07}. The influence of the relatively faint sources was removed. 
In order to deal with this situation, we further screened PIN events by applying the temperature threshold for observations in the epoch~10 (2010 December 1 to 2011 May 24)\footnote{See http://heasarc.nasa.gov/docs/suzaku/analysis/pinepochs.html for details.}. The threshold excludes a noisy energy band of each PIN unit. We used the detector response files distributed by the instrument team. \subsection{XMM-Newton} The details of the instruments and the observations can be found in \citet{feng09}, \citet{kaj09}, and \citet{mak11}. We used the Science Analysis System (SAS) version~11.0.0 \citep{gab04} for reprocessing the data, extracting events, and generating response files. We used the calibration files latest as of 2011 April. We excluded the intervals with a high background rate, which is defined as the 10--15~keV count rate of the entire array larger than the average by more than 3$\sigma$. The events in the energy range are dominated by the background \citep{read03}. A common background region devoid of bright X-ray sources was adopted. We extracted source events from a circle of adaptively chosen radius of 45\arcsec\ for XM1, 55\arcsec\ for XM2, 55\arcsec\ for XM3, and 65\arcsec\ for XM4. For the spectral analysis, we selected the MOS events with PATTERN~$\le$~12, \#XMMEA\_EM, and FLAG~$=$~0, and the pn events with PATTERN~$\le$~4, \#XMMEA\_EP, and FLAG~$=$~0. The RMF and ARF were generated with the SAS tools {\texttt{rmfgen}} and {\texttt{arfgen}}, respectively. \subsection{Chandra} The details of the instrument and the observations can be found in \citet{rob04}. We reduced the data using the Chandra Interactive Analysis of Observations (CIAO) version~4.3 with the CALDB version~4.4.2. We then extracted the events (with grade 0, 2, 3, 4, and 6), and constructed the energy spectra using the ACIS Extract package \citep{bro02} version~2009-12-01. 
The source events were accumulated from a region around each source encircling 90\% of photons of a point-like source. The background events were from an annulus around each source. The RMF and ARF were generated with the CIAO tools {\texttt{mkrmf}} and {\texttt{mkarf}}, respectively. \subsection{Swift} The Swift observatory \citep{geh04} is equipped with several instruments. One of them is the X-ray telescope (XRT; \cite{bur05}) with an X-ray CCD device sensitive at 0.2--10~keV. The FoV (23\farcm6$\times$23\farcm6) covers the whole of IC\,342. In the observations, the photon counting mode with a time resolution of 2.5~s was used. We processed the data sets based on the CALDB version~20110513 using HEASoft version~6.10. In order to increase the photon statistics, two data sets (Sw2 and Sw3) with very close observation dates were merged. Hereafter we call the merged observation Sw2$+$3. Source and background regions were determined by the same method as for XMM-Newton, and we then extracted the spectra from grade 0--12. We used the RMF file in the CALDB, and we created the ARF using the {\texttt{xrtmkarf}} tool. \section{Timing Analysis and Results} Figure~\ref{fig2} shows the background-subtracted Suzaku XIS (FI) light curves and the hardness ratio (the ratio of the 3.0--10.0~keV band count rate to the 0.5--3.0~keV band one) variations for each ULX. No significant short-term variation was found in the light curves or the hardness ratio variations at 99.9\% significance, indicating that the spectra did not change within each individual exposure. We thus stacked all the events in each observation to construct spectra. For X-1, the time-averaged count rate and hardness ratio changed at most by several percent between the two Suzaku observations. The time-averaged count rate was measured to be 0.083$\pm$0.001~counts~s$^{-1}$ (Su1) and 0.086$\pm$0.001~counts~s$^{-1}$ (Su2), while the time-averaged hardness ratio was 0.66$\pm$0.02 (Su1) and 0.71$\pm$0.02 (Su2). 
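Hardness ratios of this kind can be computed from background-subtracted band counts with simple Poisson error propagation; the following Python sketch illustrates the idea (the function name and the quadrature error recipe are our illustrative choices, not necessarily the recipe used in the analysis):

```python
import math

def hardness_ratio(hard_counts, soft_counts):
    """Hardness ratio (hard-band / soft-band count rates) and its error.

    For background-subtracted counts H and S accumulated over the same
    exposure, the exposure time cancels in the ratio; the relative
    Poisson errors (1/sqrt(H), 1/sqrt(S)) are added in quadrature.
    """
    hr = hard_counts / soft_counts
    err = hr * math.sqrt(1.0 / hard_counts + 1.0 / soft_counts)
    return hr, err
```

For instance, 400 hard-band and 800 soft-band counts give a hardness ratio of 0.50 with an uncertainty of about 0.03.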
For X-2, the count rate dropped to about half, from 0.203$\pm$0.002~counts~s$^{-1}$ (Su1) to 0.093$\pm$0.001~counts~s$^{-1}$ (Su2). Its hardness ratio (1.63$\pm$0.03 for Su1 and 1.74$\pm$0.05 for Su2) also changed by only several percent, as for X-1, but was clearly higher than that of X-1, as seen in the three-color images (figure~\ref{fig1}). Here, the errors are the 1$\sigma$ values. The other data sets with XMM-Newton, Chandra, and Swift exhibited no short-term variation in each exposure. We thus stacked all the events in each observation to construct spectra. \section{Spectral Analysis and Results} First, we explain spectral fitting models in $\S$~4.1. Next, we report our Suzaku results based on the following three steps: In $\S$~4.2.1, we fit the spectral models to the XIS spectra of X-1 and X-2, and describe the results. Note that the spectra of X-2 are analyzed in order to examine the contribution to the PIN spectrum. In $\S$~4.2.2, we estimate the source signals above 10~keV with the PIN detector by inspecting the accuracy of the non--X-ray background (NXB). In $\S$~4.2.3, the XIS and PIN spectra are jointly analyzed. Finally, we present the fitting results of the other satellites in $\S$~4.3. \begin{figure} \begin{center} \FigureFile(68mm,107mm){./f02.eps} \end{center} \caption{Background-subtracted XIS FI light curves (black) and hardness ratio variations (gray) of the two ULXs in 0.5--10~keV. The curves are binned with 2000~s~bin$^{-1}$. The hardness is calculated as the ratio of the 3--10~keV count rate to the 0.5--3~keV one. \label{fig2}} \end{figure} \begin{table*} \begin{center} \caption{Best-fit parameters for the Suzaku XIS spectra of the two ULXs.\label{table2}} \small \begin{tabular}{cccccccc} \hline Name & Data & Model & $N^{\prime}_{\rm{H}}$ & ${\it{\Gamma}}$ or $\tau$ & $E_{\rm{cut}}$, $k T_{\rm{e}}$, or $k T_{\rm{in}}$ & Luminosity\footnotemark[$*$] & Red--$\chi^2$ \\ & label & & $(10^{22}\;\rm{cm}^{-2})$ & & (keV) & $(10^{39}\;\rm{erg \; s^{-1}})$ & (d.o.f.) 
\\ \hline X-1 & Su1 & PL & $ 0.36 \pm 0.04 $ & $ 1.83 \pm 0.04 $ & $\cdots$ & $ 5.6 \pm 0.1 $ & 1.16(183) \\ & & cutoff-PL & $ 0.27 \pm 0.06 $ & $ 1.5 \pm 0.2 $ & $ 11^{+12}_{-4} $ & $ 5.1^{+0.3}_{-0.2} $ & 1.11(182) \\ & & compTT & $ 0.32^{+0.05}_{-0.04} $ & $ 7.1 $\footnotemark[$\dagger$] & $ 2.4 $\footnotemark[$\dagger$] & $ 5.3^{+0.2}_{-0.3} $ & 1.12(181) \\ & & MCD & $ <0.02 $ & $\cdots$ & $ 1.85 \pm 0.05 $ & $ 4.50 \pm 0.07 $ & 1.70(183) \\ \cline{2-8} & Su2 & PL & $ 0.22 \pm 0.04 $ & $ 1.67 \pm 0.04 $ & $\cdots$ & $ 4.9 \pm 0.1 $ & 1.15(175) \\ & & cutoff-PL & $ 0.21^{+0.05}_{-0.06}$ & $ 1.63^{+0.08}_{-0.18} $ & $ >17 $ & $ 4.8^{+0.2}_{-0.1} $ & 1.16(174) \\ & & compTT & $ 0.22 \pm 0.02 $ & $ 6.9 $\footnotemark[$\dagger$] & $ 2.8 $\footnotemark[$\dagger$] & $ 4.8 \pm 0.1 $ & 1.17(173) \\ & & MCD & $ <0.003 $ & $\cdots$ & $ 1.94 \pm 0.05 $ & $ 4.33 \pm 0.07 $ & 2.25(175) \\ \hline X-2 & Su1 & PL & $ 3.6 \pm 0.1 $ & $ 2.49 \pm 0.04 $ & $\cdots$ & $ 34 \pm 2 $ & 1.50(396) \\ & & MCD & $ 1.82 \pm 0.08 $ & $\cdots$ & $ 1.78 \pm 0.03 $ & $ 15.0 \pm 0.2 $ & 1.05(396) \\ \cline{2-8} & Su2 & PL & $ 2.2 \pm 0.2 $ & $ 1.71^{+0.07}_{-0.06} $ & $\cdots$ & $ 10.1^{+0.5}_{-0.4} $ & 1.19(175) \\ & & MCD & $ 1.1 \pm 0.1 $ & $\cdots$ & $ 2.7 \pm 0.1 $ & $ 8.9 \pm 0.2 $ & 1.08(175) \\ \hline \multicolumn{8}{@{}l@{}}{\hbox to 0pt{\parbox{180mm}{\footnotesize \par\noindent \footnotemark[$*$] The 0.5--10~keV luminosity for the PL, the cutoff-PL, and the Comptonization models, and the bolometric luminosity for the MCD model. \par\noindent \footnotemark[$\dagger$] Only the best-fit value is described for $\tau$ and $k T_{\rm{e}}$, because both parameters are strongly coupled. }\hss}} \end{tabular} \end{center} \end{table*} \subsection{Fitting Models} We used the X-ray spectral fitting package XSPEC version~12.6.0 for the spectral analysis. 
We applied two interstellar extinction components: one is the Galactic extinction, represented by the \texttt{wabs} model \citep{mor83} with a hydrogen column density fixed at the Galactic value toward IC\,342, $N_{\rm{H}}$~=~3$\times 10^{21}$~cm$^{-2}$ \citep{dic90,sta92}; the other is an additional extinction left free for each ULX (modeled as another \texttt{wabs}). All the spectra are featureless, so we first applied two simple models: the PL model and the multi-color disk blackbody model (MCD; \texttt{diskbb} in XSPEC; \cite{mitsu84}), which represent PL-shaped and convex-shaped spectra, respectively. These models are often employed for Galactic BHBs and ULXs, including the IC\,342 ULXs (e.g., \cite{kubo01,rob04,mak11}). If a spectrum is represented by the PL model, we further applied two other models in order to probe the physical condition of the PL state. One is the cutoff-PL model, which approximates Comptonized emission with two phenomenological parameters: the photon index and the cutoff (turnover) energy. The improvement over the PL model is checked with the $F$-test; we regard an improvement at the $\ge$99.9\% confidence level as significant. The other is a Comptonization model (\texttt{compTT} in XSPEC; \cite{tit94}), which constrains the electron temperature ($kT_{\rm{e}}$) and the optical depth of the electrons ($\tau$) as model parameters. If the spectrum has no turnover, the two parameters are strongly coupled and cannot be constrained independently. \subsection{Suzaku Results} \subsubsection{XIS spectra} We fitted the FI and BI spectra simultaneously. The energy range of 1.8--2.0~keV is excluded due to calibration uncertainties. In addition, for X-2, we did not use the data below 1.5~keV in order to characterize the hard X-ray spectrum, since a soft excess reported by \citet{feng09} and \citet{mak11} was recognized in the Suzaku data.
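The $F$-test criterion above can be evaluated numerically from the reduced-$\chi^2$ values that spectral fits return. A stdlib-only sketch (the tail probability of $F(1,n)$ for large $n$ is approximated here by the $\chi^2_1$ tail, $\mathrm{erfc}(\sqrt{F/2})$, to avoid a SciPy dependency; the input values correspond to the XM2 PL versus cutoff-PL comparison reported in $\S$~4.3):

```python
import math

def ftest_significance(chi2_old, dof_old, chi2_new, dof_new):
    """Approximate F-test p-value for adding (dof_old - dof_new) parameters.

    For dof_new >> 1, the F(1, dof_new) distribution approaches chi-squared
    with 1 dof, whose survival function is erfc(sqrt(F/2)).
    """
    delta = chi2_old - chi2_new
    ddof = dof_old - dof_new
    f_stat = (delta / ddof) / (chi2_new / dof_new)
    p_value = math.erfc(math.sqrt(f_stat / 2.0))  # approximation valid for ddof == 1
    return f_stat, p_value

# Reduced chi^2 of 1.18 (233 dof, PL) improving to 0.97 (232 dof, cutoff-PL)
f_stat, p = ftest_significance(1.18 * 233, 233, 0.97 * 232, 232)
print(f"F = {f_stat:.1f}, p ~ {p:.1e}")  # p << 0.001: significant at >99.9%
```

A p-value below $10^{-3}$ corresponds to the $\ge$99.9\% confidence threshold adopted in the text.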
\begin{figure*} \begin{center} \FigureFile(112mm,150mm){./f03.eps} \end{center} \caption{Suzaku spectra (pluses) and the best-fit models (solid histogram) of (a--b) X-1 and (c--d) X-2. Bottom panels show the residuals from the best fits by the PL and MCD models. \label{fig3}} \end{figure*} The Suzaku XIS spectra and the best-fit models are shown in figure~\ref{fig3}, and all the best-fit parameters (90\% confidence level) are summarized in table~\ref{table2}. The parameters $N^{\prime}_{\rm{H}}$, ${\it{\Gamma}}$, $kT_{\rm{in}}$, and $E_{\rm{cut}}$ indicate the absorption column density additional to that in our Galaxy, the photon index, the innermost disk temperature, and the turnover energy, respectively. The 0.5--10~keV luminosity $L_{\rm{X}}$ for the PL, the cutoff-PL, and the Comptonization models is calculated as $4\pi D^2 f_{\rm{X}}^{\prime}$. Assuming a disk inclination angle of 60\arcdeg, the bolometric luminosity $L_{\rm{bol}}$ for the MCD model is calculated as $2\pi D^2 f_{\rm{bol}}^{\prime} (\cos{60\arcdeg})^{-1}$. Here, $f_{\rm{X}}^{\prime}$ and $f_{\rm{bol}}^{\prime}$ are respectively the absorption-corrected fluxes in the 0.5--10~keV and the 0.01--100~keV ranges, and $D$ is the distance to IC\,342 (3.3~Mpc). \medskip The X-1 spectra could not be reproduced by the MCD model (reduced-$\chi^2$~=~1.70 for Su1 and 2.25 for Su2), whereas they were successfully fitted by the PL model (reduced-$\chi^2$~=~1.15--1.16). Based on the derived parameters (table~\ref{table2}), the spectral shape of X-1 changed slightly between the two observations; in the Su1 observation, the X-ray luminosity and the photon index were respectively $L_{\rm{X}}$~=~(5.6$\pm$0.1)$\times$10$^{39}$~erg~s$^{-1}$ and ${\it{\Gamma}}$~=~1.83$\pm$0.04, while in Su2, they were $L_{\rm{X}}$~=~(4.9$\pm$0.1)$\times$10$^{39}$~erg~s$^{-1}$ and ${\it{\Gamma}}$~=~1.67$\pm$0.04.
These parameters are within the typical range (${\it{\Gamma}}$~=~1.6--2.0) measured previously for this ULX; see, e.g., \citet{kubo01} with ASCA, \citet{feng09} with XMM-Newton, and \citet{rob04} with Chandra. The cutoff-PL model provided a goodness of fit similar to that of the PL model, but we conclude that neither X-1 spectrum (Su1 or Su2) has a turnover at least below 10~keV, because the turnover energy ($E_{\rm{cut}}$~=~11$^{+12}_{-4}$~keV for Su1 and $>$17~keV for Su2; table~\ref{table2}) lies mostly above the XIS fitting range (0.5--10~keV). In $\S$~4.2.3 below, we confirm this conclusion using the PIN data. The Comptonization model (\texttt{compTT}) fitting is also consistent with the cutoff-PL result (i.e., no turnover), because $kT_{\rm{e}}$ and $\tau$ were strongly coupled in both spectra, as detailed in $\S$~5.1.2. The seed photon temperature was $kT_{\rm{seed}}$~$<$~0.4~keV in both spectra. \medskip Unlike X-1, the X-2 spectrum in Su1 cannot be explained by the PL model (reduced-$\chi^2$~=~1.50). Instead, it has a convex shape represented by the MCD model with a reduced-$\chi^2$ of 1.05 (figure~\ref{fig3}c). On the other hand, the X-2 spectrum in Su2 was reproduced by both models, although the MCD model provided a slightly better fit (figure~\ref{fig3}d; reduced-$\chi^2$~=~1.19 for PL and 1.08 for MCD). The source spectrum changed dramatically between the two observations; the luminosity halved from $L_{\rm{bol}}$~=~(1.50$\pm$0.02)$\times$10$^{40}$ to (8.9$\pm$0.2)$\times$10$^{39}$~erg~s$^{-1}$, while the disk temperature increased ($kT_{\rm{in}}$~=~1.78$\pm$0.03 to 2.7$\pm$0.1~keV). Such spectral behavior is opposite to that of Galactic BHBs in the canonical high-soft or slim-disk states. Its physical interpretation is deferred to a forthcoming paper.
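The two luminosity conventions defined for table~2 can be written out explicitly. A minimal sketch ($D$~=~3.3~Mpc and $i$~=~60\arcdeg\ as adopted in the text; the input flux is a hypothetical value chosen to illustrate the scale):

```python
import math

MPC_CM = 3.0857e24           # cm per Mpc
D = 3.3 * MPC_CM             # distance to IC 342 adopted in the text

def lum_isotropic(flux):
    """L_X = 4 pi D^2 f, used for the PL / cutoff-PL / Comptonization models."""
    return 4.0 * math.pi * D**2 * flux

def lum_mcd(flux_bol, incl_deg=60.0):
    """L_bol = 2 pi D^2 f_bol / cos(i), used for the MCD model."""
    return 2.0 * math.pi * D**2 * flux_bol / math.cos(math.radians(incl_deg))

# Hypothetical absorption-corrected 0.5--10 keV flux in erg/cm^2/s
f_x = 4.3e-12
print(f"L_X ~ {lum_isotropic(f_x):.2e} erg/s")   # a few x 10^39 erg/s at 3.3 Mpc
```

Note that for $i$~=~60\arcdeg, $(\cos i)^{-1}$~=~2, so the MCD convention coincides numerically with the isotropic one; the distinction matters only for other inclinations.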
\medskip \subsubsection{Significance of the source signal with PIN} For the detailed PIN spectral analysis, we need to consider two background components: the non-X-ray background (NXB) and the cosmic X-ray background (CXB). We adopted the NXB spectra provided by the instrument team \citep{fuka09}. For the CXB, we simulated spectra based on the HEAO-1 model \citep{bol87}. The count rates of the observed PIN signal, the modeled NXB, the simulated CXB, and the source signal are summarized in table~\ref{table3}. \begin{figure*} \begin{center} \FigureFile(136mm,60mm){./f04.eps} \end{center} \caption{Time-averaged signal (black) and source (blue) spectra with Suzaku PIN in (a) Su1 and (b) Su2. The provided NXB and the simulated CXB are represented in red and green, respectively. The NXB for Su2 is corrected for the rescaling factor ($\S$~4.2.2). The gray shaded range indicates the 14--20~keV energy range used for the spectral analysis. \label{fig4}} \end{figure*} Normally, Earth-occultation data with an elevation angle below $-$5\arcdeg\ are utilized to inspect the reproducibility of the NXB model. However, Suzaku did not undergo Earth occultation during our observations. We instead compared the observed PIN signal in the 30--55~keV range with the modeled NXB in the same range, because the PIN data there are considered to be dominated by NXB events. In the comparison below, we subtracted the CXB contribution from the PIN data in order to derive the correct rescaling factor. For Su1, we found no significant difference between the data and the NXB model in this energy range (an NXB/PIN fractional difference of 1.5$\pm$2.5\% at the 90\% confidence level). This consistency allows us to adopt the NXB model without any correction for the PIN data in Su1. However, the modeled NXB count rate in Su2 is significantly higher than the observed PIN rate, by 5.5$\pm$2.5\%. Such a difference cannot be explained by the current uncertainty of the NXB model (typically 1.3\%; \cite{fuka09}).
We therefore decreased the NXB model count rate by 5.5\% for Su2. \begin{table} \begin{center} \caption{Signal and background count rates from Suzaku PIN.\label{table3}} \small \begin{tabular}{ccc} \hline Name & \multicolumn{2}{c}{Count rate (s$^{-1}$)\footnotemark[$*$]} \\ \cline{2-3} & \multicolumn{2}{c}{Su1} \\ \cline{2-3} & 14--20~keV & 30--55~keV \\ \hline Observed signal & 0.190$\pm$0.002 & 0.082$\pm$0.001 \\ NXB & 0.173 & 0.082 \\ CXB & 0.011 & 0.002 \\ Source net signal & 0.0055$\pm$0.0029 & $\cdots$ \\ \hline & \multicolumn{2}{c}{Su2} \\ \cline{2-3} & 14--20~keV & 30--55~keV \\ \hline Observed signal & 0.187$\pm$0.002 & 0.081$\pm$0.001 \\ NXB & 0.180 (0.170)\footnotemark[$\dagger$] & 0.084 (0.079)\footnotemark[$\dagger$] \\ CXB & 0.011 & 0.002 \\ Source net signal & (0.0068$\pm$0.0028)\footnotemark[$\dagger$] & $\cdots$ \\ \hline \multicolumn{3}{@{}l@{}}{\hbox to 0pt{\parbox{85mm}{\footnotesize \par\noindent \footnotemark[$*$] The errors are the 1$\sigma$ values. \par\noindent \footnotemark[$\dagger$] The parenthetic number indicates the corrected value for the \\ rescaling factor. }\hss}} \end{tabular} \end{center} \end{table} Figure~\ref{fig4} shows the resulting PIN spectra. The source spectrum after subtracting the NXB and CXB is shown in blue. For the spectral analysis, we focus on the 14--20~keV band. The net count rates (after the NXB and CXB were subtracted) in Su1 and Su2 are (5.5$\pm$2.9)$\times$$10^{-3}$~counts~s$^{-1}$ and (6.8$\pm$2.8)$\times$$10^{-3}$~counts~s$^{-1}$, respectively (table~\ref{table3}). These net count rates are de-convolved, through the detector response, to 14--20~keV fluxes of (1.2$\pm$0.6)$\times$$10^{-12}$~erg~cm$^{-2}$~s$^{-1}$ (Su1) and (1.4$\pm$0.6)$\times$$10^{-12}$~erg~cm$^{-2}$~s$^{-1}$ (Su2). Here, the errors on the PIN count rates and the de-convolved PIN fluxes are the 1$\sigma$ values, in which both the statistical and systematic uncertainties of the NXB were taken into account.
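The source-confusion check that follows rests on two small calculations: converting the BAT catalog flux limit to the 14--20~keV band for a ${\it{\Gamma}}$~=~2 power law, and a Poisson estimate of chance coincidences within the PIN field of view. A sketch under stated assumptions (the 34\arcmin$\times$34\arcmin\ FWHM field of view is our assumption for the PIN solid angle; the $\sim$2,000 all-sky source count is taken from the text):

```python
import math

def band_flux_fraction(e1, e2, E1, E2, gamma=2.0):
    """Fraction of the E1--E2 keV energy flux falling in e1--e2 keV for a
    power law with photon index gamma (energy flux density ~ E^(1-gamma))."""
    if gamma == 2.0:
        return math.log(e2 / e1) / math.log(E2 / E1)
    p = 2.0 - gamma
    return (e2**p - e1**p) / (E2**p - E1**p)

# BAT catalog limit of 1e-11 erg/cm^2/s in 15--150 keV -> 14--20 keV band
limit_14_20 = 1e-11 * band_flux_fraction(14, 20, 15, 150)
print(f"14-20 keV catalog limit ~ {limit_14_20:.1e} erg/cm2/s")

# Poisson chance of >= 3 confusing sources in the PIN field of view.
# Assumed FWHM FoV: 34' x 34'; ~2,000 sources over the 41,253 deg^2 sky.
fov_deg2 = (34 / 60.0) ** 2
mu = 2000 * fov_deg2 / 41253.0
p_ge3 = 1.0 - math.exp(-mu) * (1 + mu + mu**2 / 2)
print(f"P(>=3 sources in FoV) ~ {p_ge3:.1e}")  # well below 0.1%
```

The band conversion reproduces the $\sim$1.5$\times$10$^{-12}$~erg~cm$^{-2}$~s$^{-1}$ limit quoted below, and the chance-coincidence probability is comfortably within the "at most 0.1\%" bound.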
\medskip Next, we confirm that the majority of the detected (net) PIN flux comes from the two ULXs by inspecting two X-ray catalogs. First, we used the Swift BAT 54-month catalog \citep{cus10}, which lists no source in our region, and calculated the probability that sources (other than X-1 and X-2) that could contribute to the PIN flux are located within the PIN field of view. The procedure consists of the following two steps: (i) The flux limit of the catalog is 1$\times$10$^{-11}$~erg~cm$^{-2}$~s$^{-1}$ in 15--150~keV. In order to compare it with the PIN flux limit ($\sim$5$\times$10$^{-13}$~erg~cm$^{-2}$~s$^{-1}$ in 14--20~keV), we converted the catalog flux limit to the 14--20~keV band by assuming a PL shape with ${\it{\Gamma}}$~=~2.0. The converted value is 1.5$\times$10$^{-12}$~erg~cm$^{-2}$~s$^{-1}$, which is greater than the PIN limit. This fact indicates that sources (including the two ULXs) with a 14--20~keV flux below the catalog flux limit can contribute to the detected PIN flux. (ii) By extrapolating the logN--logS relation of the catalog (excluding Galactic sources at $|l|$~$<$~10\arcdeg), we estimated the expected number of sources with a 14--20~keV flux between the catalog flux limit and the PIN limit to be about 2,000 over the whole sky. From this number and the ratio of the PIN field of view to the whole sky, we conclude that the probability that three or more such sources are simultaneously located within the PIN field of view is at most 0.1\%. Second, we used the Chandra catalog \citep{liu11} and examined the total 14--20~keV flux of the 56 X-ray point-like sources (other than X-1 and X-2) within the IC\,342 region. Each source shows a 0.3--8.0~keV flux of $10^{-12}$ to $10^{-15}$~erg~cm$^{-2}$~s$^{-1}$. When a PL shape with ${\it{\Gamma}}$~=~2.0 is assumed, the 14--20~keV flux of each source is 10$^{-13}$ to 10$^{-16}$~erg~cm$^{-2}$~s$^{-1}$.
Consequently, their total flux sums to $\sim$2.9$\times$10$^{-13}$~erg~cm$^{-2}$~s$^{-1}$, which corresponds to at most 20\% of the detected PIN flux. \begin{figure*} \begin{center} \FigureFile(160mm,48mm){./f05.eps} \end{center} \caption{XIS plus PIN joint unfolded spectra (pluses). Black and red are respectively the XIS spectra of X-1 and X-2, while green is the PIN spectrum. (a) The spectral models (histograms) are PL for X-1 and MCD for X-2 in Su1, (b) PL for X-1 and PL for X-2 in Su2, and (c) PL for X-1 and MCD for X-2 in Su2. \label{fig5}} \end{figure*} \medskip \subsubsection{XIS + PIN spectra} In the XIS analysis ($\S$~4.2.1), all the XIS spectra were successfully fitted by a phenomenological model (PL, MCD, or cutoff-PL). We now investigate the spectral shapes of the ULXs above 10~keV by extrapolating the successful XIS models (with the best-fit central parameters in table~\ref{table2}) to the PIN band. We note that X-1 and X-2 are treated simultaneously, since the PIN spectrum contains the signals from both ULXs. The PIN effective areas toward the two ULXs are nearly identical\footnote{ See ``The Suzaku Technical Description'' available at http://www.astro.isas.jaxa.jp/suzaku/doc/suzaku\_td\_ao7.pdf for details.}. We took into account the known cross-calibration uncertainty between the XIS and PIN normalizations\footnote{ See http://www.astro.isas.ac.jp/suzaku/doc/suzakumemo/suzakumemo-2008-06.pdf for details.}; namely, the PIN normalization is larger than the XIS one by a factor of 1.164. Table~\ref{table4} shows the spectral model components for X-1 and X-2. We check the validity of the extrapolation by comparing the modeled flux with the de-convolved PIN flux in 14--20~keV. For Su1, we examined the PL(X-1)$+$MCD(X-2) combination and the cutoff-PL$+$MCD one ($E_{\rm{cut}}$~=~11~keV assumed for X-1). Clearly, the PIN spectrum was better reproduced by the former combination (figure~\ref{fig5}a) than by the latter.
This is because the modeled flux of the former combination (0.91$\times$10$^{-12}$~erg~cm$^{-2}$~s$^{-1}$) is comparable to the de-convolved PIN flux of (1.2$\pm$0.6)$\times$10$^{-12}$~erg~cm$^{-2}$~s$^{-1}$, while the flux of the latter (0.52$\times$10$^{-12}$~erg~cm$^{-2}$~s$^{-1}$) falls below the lower limit (1$\sigma$) of the de-convolved flux. Consequently, the absence of a turnover (below 20~keV) in the X-1 spectrum in Su1 is confirmed, as anticipated in $\S$~4.2.1. For Su2, we examined both the PL and MCD models for X-2, since its XIS spectrum was reproduced by both ($\S$~4.2.1), while only the PL model was considered for X-1. Figure~\ref{fig5}~(b) clearly indicates that the PL$+$PL combination, with a flux of 2.9$\times$10$^{-12}$~erg~cm$^{-2}$~s$^{-1}$, exceeds the PIN spectrum of (1.4$\pm$0.6)$\times$10$^{-12}$~erg~cm$^{-2}$~s$^{-1}$. By replacing the PL component for X-2 with the MCD one, the model flux of 1.3$\times$10$^{-12}$~erg~cm$^{-2}$~s$^{-1}$ became consistent with the observed one, as displayed in figure~\ref{fig5}~(c). Based on the present joint analysis, we summarize the spectral properties as follows: (i) In both observations, the X-1 spectra have a PL tail extending up to at least 20~keV. (ii) Neither of the X-2 spectra can be explained by a single PL model; they instead have a convex shape. \subsection{XMM-Newton, Chandra, and Swift Results} We analyzed the spectra obtained with the three satellites using the same method as for the Suzaku XIS ($\S$~4.2.1). For the XMM-Newton EPIC data, we fitted the MOS and pn spectra simultaneously. The MOS spectra were multiplied by a constant factor of 1.0--1.1, because \citet{mat09} reported that the MOS flux is higher than the pn flux by 7--9\% ($<$4.5~keV) and 10--13\% ($\gtrsim$4.5~keV). The best-fit parameters are summarized in table~\ref{table5}. \medskip As with the Suzaku results, most X-1 spectra (XM1, XM3, Ch1, Ch2, Sw1, and Sw2$+$3) were better reproduced by the PL model.
They were also reproduced by the cutoff-PL model, but without significant improvement. Their X-ray luminosities (except for Sw1) are comparable to those in the Suzaku observations ($\sim$5$\times 10^{39}$~erg~s$^{-1}$). In the Comptonization model fits to these spectra, $kT_{\rm{e}}$ and $\tau$ were coupled, and the upper limit of the seed photon temperature was determined to be 0.5~keV. \begin{table} \begin{center} \caption{Modeled and de-convolved fluxes in 14--20~keV.\label{table4}} \footnotesize \begin{tabular}{ccccccc} \hline Data & \multicolumn{2}{c}{Model} & & \multicolumn{2}{c}{Flux in 14--20~keV\footnotemark[$*$]} & Notes \\ \cline{2-3} \cline{5-6} label & X-1 & X-2 & & Model & Net\footnotemark[$\dagger$] & \\ \hline Su1 & PL & MCD & & 0.91 & 1.2$\pm$0.6 & figure~\ref{fig5}~(a) \\ Su1 & cutoff-PL & MCD & & 0.52 & 1.2$\pm$0.6 & --- \\ Su2 & PL & PL & & 2.9 & 1.4$\pm$0.6 & figure~\ref{fig5}~(b) \\ Su2 & PL & MCD & & 1.3 & 1.4$\pm$0.6 & figure~\ref{fig5}~(c) \\ \hline \multicolumn{7}{@{}l@{}}{\hbox to 0pt{\parbox{85mm}{\footnotesize \par\noindent \footnotemark[$*$] The PIN band flux in unit of $10^{-12}\;\rm{erg \; cm^{-2} \; s^{-1}}$. \par\noindent \footnotemark[$\dagger$] The de-convolved PIN flux from the net count rate obtained by subtracting the NXB and CXB from the observed PIN count rate. The errors are the 1$\sigma$ values. }\hss}} \end{tabular} \end{center} \end{table} On the other hand, for the two spectra in XM2 and XM4, the PL model yielded negative residuals, particularly at $\gtrsim$6~keV, indicating the existence of a turnover (figure~\ref{fig6}). Indeed, these residuals were significantly improved by applying the cutoff-PL model (figure~\ref{fig6} and table~\ref{table5}). The reduced-$\chi^2$(d.o.f.) improves from 1.18(233) with PL to 0.97(232) with cutoff-PL in XM2, and from 1.12(147) with PL to 0.93(146) with cutoff-PL in XM4, corresponding to an $F$-test significance of $>$99.9\%.
The derived turnover energies are $E_{\rm{cut}}$~=~$6.1^{+1.9}_{-1.2}$~keV in XM2 and $5.1^{+2.3}_{-1.2}$~keV in XM4. Their X-ray luminosities ($\sim$1$\times 10^{40}$~erg~s$^{-1}$) are higher, by a factor of 2 to 3, than those of most of the other observations, which show no turnover below 10~keV (or 20~keV with Suzaku). These results suggest that a spectral turnover (below 10~keV) is prominent only in the higher luminosity phase. In addition, for these spectra (XM2 and XM4), the Compton parameters were determined to be $kT_{\rm{e}}$~$\sim$~1.8~keV and $\tau$~$\sim$~8.5, although the upper limit of the seed photon temperature was 0.5~keV, as in the other observations. \begin{figure} \begin{center} \FigureFile(68mm,116mm){./f06.eps} \end{center} \caption{Residuals from the best-fit by the PL model and the cutoff-PL model in (a) XM2 and (b) XM4. Black and red represent MOS and pn, respectively. \label{fig6}} \end{figure} \begin{table*} \begin{center} \caption{Best-fit parameters for the XMM-Newton, Chandra, and Swift spectra.\label{table5}} \small \begin{tabular}{cccccccc} \hline Name & Data & Model & $N^{\prime}_{\rm{H}}$ & ${\it{\Gamma}}$ or $\tau$ & $E_{\rm{cut}}$, $k T_{\rm{e}}$, or $k T_{\rm{in}}$ & Luminosity\footnotemark[$*$] & Red--$\chi^2$ \\ & label & & $(10^{22}\;\rm{cm}^{-2})$ & & (keV) & $(10^{39}\;\rm{erg \; s^{-1}})$ & (d.o.f.)
\\ \hline X-1 & XM1 & PL & $ 0.24^{+0.07}_{-0.06} $ & $ 1.63 \pm 0.09 $ & $\cdots$ & $ 4.0^{+0.2}_{-0.1} $ & 1.00(90) \\ & & cutoff-PL & $ 0.22^{+0.07}_{-0.11} $ & $ 1.6^{+0.1}_{-0.4} $ & $ >7.8 $ & $ 4.0^{+0.2}_{-0.4} $ & 1.01(89) \\ & & compTT & $ <0.26 $ & $ 8.5 $\footnotemark[$\dagger$] & $ 2.3 $\footnotemark[$\dagger$] & $ 3.9 \pm 0.3 $ & 0.97(88) \\ & & MCD & $ <0.02 $ & $\cdots$ & $ 1.9 \pm 0.1 $ & $ 3.9 \pm 0.2 $ & 1.29(90) \\ \cline{2-8} & XM2 & PL & $ 0.53^{+0.03}_{-0.02} $ & $ 2.02 \pm 0.03 $ & $\cdots$ & $ 11.1 \pm 0.2 $ & 1.18(233) \\ & & cutoff-PL & $ 0.39 \pm 0.04 $ & $ 1.4^{+0.1}_{-0.2} $ & $ 6.1^{+1.9}_{-1.2} $ & $ 9.4 \pm 0.4 $ & 0.97(232) \\ & & compTT & $ 0.27^{+0.20}_{-0.08} $ & $ 8.5^{+0.6}_{-1.4} $ & $ 1.8 \pm 0.1 $ & $ 8.2^{+1.8}_{-0.4} $ & 0.95(231) \\ & & MCD & $ 0.15 \pm 0.02 $ & $\cdots$ & $ 1.54 \pm 0.03 $ & $ 9.2 \pm 0.1 $ & 1.72(233) \\ \cline{2-8} & XM3 & PL & $ 0.30 \pm 0.03 $ & $ 1.82 \pm 0.05 $ & $\cdots$ & $ 4.8 \pm 0.1 $ & 1.06(123) \\ & & cutoff-PL & $ 0.30 \pm 0.03 $ & $ 1.81^{+0.05}_{-0.08} $ & $ >0.01 $ & $ 4.8 \pm 0.1 $ & 1.07(122) \\ & & compTT & $ 0.29 \pm 0.03 $ & $ 5.0 $\footnotemark[$\dagger$] & $ 4.3 $\footnotemark[$\dagger$] & $ 5.2 \pm 0.1 $ & 1.08(121) \\ & & MCD & $ <0.01 $ & $\cdots$ & $ 1.78 \pm 0.05 $ & $ 4.2 \pm 0.1 $ & 2.69(123) \\ \cline{2-8} & XM4 & PL & $ 0.55 \pm 0.04 $ & $ 1.91 \pm 0.05 $ & $\cdots$ & $ 14.2 \pm 0.4 $ & 1.12(147) \\ & & cutoff-PL & $ 0.37 \pm 0.07 $ & $ 1.2 \pm 0.2 $ & $ 5.1^{+2.3}_{-1.2} $ & $ 11.8^{+0.7}_{-0.6} $ & 0.93(146) \\ & & compTT & $ 0.46 \pm 0.05 $ & $ 8.4 \pm 0.9 $ & $ 1.8^{+0.3}_{-0.2} $ & $ 12.7^{+0.4}_{-0.3} $ & 0.93(145) \\ & & MCD & $ 0.17 \pm 0.03 $ & $\cdots$ & $ 1.71 \pm 0.07 $ & $ 12.1 \pm 0.2 $ & 1.22(147) \\ \cline{2-8} & Ch1 & PL & $ 0.3 \pm 0.1 $ & $ 1.8 \pm 0.2 $ & $\cdots$ & $ 4.7 \pm 0.3 $ & 0.98(25) \\ & & MCD & $ <0.05 $ & $\cdots$ & $ 1.7 \pm 0.2 $ & $ 3.7 \pm 0.2 $ & 1.54(25) \\ & & cutoff-PL & $ 0.3 \pm 0.1 $ & $ 1.8^{+0.2}_{-0.3} $ & $ >0.01 $ & $ 
4.6^{+0.3}_{-0.7} $ & 1.02(24) \\ & & compTT & $ 0.3 \pm 0.1 $ & $ 4.5 $\footnotemark[$\dagger$] & $ 5.4 $\footnotemark[$\dagger$] & $ 4.6 \pm 0.3 $ & 1.07(23) \\ \cline{2-8} & Ch2 & PL & $ 0.3 \pm 0.1 $ & $ 1.7^{+0.2}_{-0.1} $ & $\cdots$ & $ 5.0 \pm 0.3 $ & 0.57(27) \\ & & cutoff-PL & $ 0.3^{+0.1}_{-0.2} $ & $ 1.7^{+0.2}_{-0.7} $ & $ >0.01 $ & $ 5.0^{+0.3}_{-0.8} $ & 0.59(26) \\ & & compTT & $ 0.3 \pm 0.1 $ & $ 4.2 $\footnotemark[$\dagger$] & $ 6.4 $\footnotemark[$\dagger$] & $ 5.0 \pm 0.3 $ & 0.62(25) \\ & & MCD & $ <0.08 $ & $\cdots$ & $ 1.8 \pm 0.2 $ & $ 4.0 \pm 0.2 $ & 0.98(27) \\ \cline{2-8} & Sw1 & PL & $ 0.5^{+0.3}_{-0.2} $ & $ 1.8 \pm 0.3 $ & $\cdots$ & $ 13^{+2}_{-1} $ & 0.69(29) \\ & & cutoff-PL & $ <0.7 $ & $ <1.9 $ & $ 0.001 $--$ 0.002 $ & $ 9^{+4}_{-2} $ & 0.64(28) \\ & & compTT & $ <0.7 $ & $ 11.4 $\footnotemark[$\dagger$] & $ 1.4 $\footnotemark[$\dagger$] & $ 10^{+3}_{-2} $ & 0.66(27) \\ & & MCD & $ <0.3 $ & $\cdots$ & $ 1.8^{+0.4}_{-0.3} $ & $ 10 \pm 1 $ & 0.61(29) \\ \cline{2-8} & Sw2+3 & PL & $ 0.5^{+0.4}_{-0.3} $ & $ 1.8 \pm 0.4 $ & $\cdots$ & $ 4.9^{+1.1}_{-0.7} $ & 0.72(12) \\ & & cutoff-PL & $ <0.9 $ & $ <2.2 $ & $ >2.1 $ & $ 5^{+1}_{-2} $ & 0.78(11) \\ & & compTT & $ <0.8 $ & $ 4.9 $\footnotemark[$\dagger$] & $ 5.3 $\footnotemark[$\dagger$] & $ 4^{+5}_{-1} $ & 0.83(10) \\ & & MCD & $ <0.3 $ & $\cdots$ & $ 1.8^{+0.5}_{-0.4} $ & $ 3.7^{+0.5}_{-0.4} $ & 0.91(12) \\ \hline \multicolumn{8}{@{}l@{}}{\hbox to 0pt{\parbox{180mm}{\footnotesize \par\noindent \footnotemark[$*$] Same as table~\ref{table2}. \par\noindent \footnotemark[$\dagger$] Same as table~\ref{table2}. }\hss}} \end{tabular} \end{center} \end{table*} \section{Discussion} In the present study, we examine the X-ray emission properties of the ULX IC\,342 X-1, aiming at understanding the nature of accretion flow in ULXs during the PL state. 
In contrast to the convex-shaped (thermal) spectra, from which we can readily extract concrete information such as the mass accretion rate and the BH spin (as long as the blackbody approximation holds), it is not as easy to do so for the PL spectra. Nevertheless, we can retrieve information from the PL slope, the existence (or absence) of a turnover, the turnover energy (if one exists), the timing properties, and so on. We pay special attention to the turnover energy, since it provides a key to understanding the nature of the PL state. \subsection{Summary of the Observations of IC\,342 X-1} \subsubsection{Luminosities and spectral shapes} We plot the X-ray luminosity (in the 0.5--10~keV energy range) against the observation date and the photon index in figure~\ref{fig7}. Two groups of data occupy separate regions in these plots, as reported previously \citep{feng09,mak11}: the low luminosity group at (4--6)$\times 10^{39}$~erg~s$^{-1}$ (Su1, Su2, XM1, XM3, Ch1, Ch2, and Sw2$+$3) and the high luminosity group at (1.1--1.4)$\times 10^{40}$~erg~s$^{-1}$ (XM2, XM4, and Sw1). Since both groups have previously been called the ``PL'' state, for convenience we hereafter call them the low luminosity PL state and the high luminosity PL state, respectively, although the spectra in the high luminosity PL state have a turnover (see below). No significant difference was seen in the photon indices between the two PL states. The previously unpublished data (Su1, Su2, Sw1, and Sw2$+$3) also follow these trends. Although a strict duty cycle cannot be estimated with this sparse data sampling, the low luminosity PL state seems to appear more frequently than the high luminosity PL state; the ratio of occurrences is low:high~=~7:3 among the present samples.
\begin{table*} \begin{center} \caption{Basic properties of the two PL states of IC\,342 X-1.\label{table6}} \small \begin{tabular}{lll} \hline \multicolumn{1}{c}{Properties} & \multicolumn{2}{c}{Luminosity State} \\ \cline{2-3} & \multicolumn{1}{c}{Low PL} & \multicolumn{1}{c}{High PL} \\ \hline Data set & Su1, Su2, XM1, XM3, Ch1, Ch2, Sw2$+$3 & XM2, XM4, Sw1 \\ Appearing fraction (present samples) & 70\% & 30\% \\ Range of $L_{\rm{X}}$ (erg~s$^{-1}$) & (4--6)$\times$$10^{39}$ & (1.1--1.4)$\times$$10^{40}$ \\ Turnover energy & Unconstrained ($>$20~keV) & $\sim$6~keV \\ \hline \end{tabular} \end{center} \end{table*} \begin{figure*} \begin{center} \FigureFile(136mm,57mm){./f07.eps} \end{center} \caption{(a) Long-term X-ray variability, and (b) $L_{\rm{X}}$ (X-ray luminosity) versus ${\it{\Gamma}}$ (photon index) diagram of IC\,342 X-1. The parameters are derived by spectral fitting with the PL model. The data taken by the different satellites are indicated by the different colors. \label{fig7}} \end{figure*} Besides the luminosity differences, X-1 also showed somewhat different spectral features in hard X-rays between these two states. Figure~\ref{fig8} displays the unfolded $\nu$$F_{\rm{\nu}}$ spectra in units of keV$^2$~cm$^{-2}$~s$^{-1}$~keV$^{-1}$ for the five observations with good statistics (Su1--2 and XM2--4). Evidently, the spectral shapes differ at $\gtrsim$6~keV. During the high luminosity PL state (XM2 and XM4), the spectrum has a turnover at around 6~keV. We can confirm this property by comparing fits to the data with the PL model and with the cutoff-PL model: the spectra in the high luminosity PL state are significantly improved when the cutoff-PL model is applied, with an $F$-test significance of $>$99.9\% ($\S$~4.3). In contrast, during the low luminosity PL state (Su1, Su2, and XM3), the spectra show no turnover below 10~keV; indeed, they are not improved by the cutoff-PL model ($\S$~4.2.1 and $\S$~4.3).
Moreover, the PL tail extends up to at least 20~keV, which we confirmed for the first time with the Suzaku PIN detector ($\S$~4.2.3). We expect similar hard X-ray emission from this source in the other observations during the low luminosity PL state, since the derived values ($L_{\rm{X}}$ and ${\it{\Gamma}}$ in figure~\ref{fig7}b) are similar, although the source was too faint to be detected above 10~keV with the satellites other than Suzaku. We thus conclude that there exists a clear trend in the spectral changes with increasing $L_{\rm{X}}$. Table~\ref{table6} summarizes the basic properties of the two PL states. \begin{figure} \begin{center} \FigureFile(72mm,51mm){./f08.eps} \end{center} \caption{Unfolded spectra of IC\,342 X-1 in Su1, Su2, XM2, XM3, and XM4 shown with different colors. \label{fig8}} \end{figure} \medskip \subsubsection{Spectral fitting with the Comptonization model} The PL-like spectra of Galactic BHBs during the VHS are usually explained in terms of the Comptonization model \citep{done07}. Given the overall spectral similarity, we may assume that the PL-like spectra in both PL states are generated by inverse Compton scattering of soft photons. However, caution should be taken here, since there are some differences in the observed properties between the PL states of IC\,342 X-1 and the VHS of Galactic BHBs: (i) the photon indices of the PL states (${\it{\Gamma}}$~=~1.6--2.0) are smaller than those of the VHS ($>$2.4; \cite{mcc06}); (ii) the spectral turnover at a low energy, around 6~keV, found in the high luminosity PL state is not observed during the VHS; (iii) the significant variability observed in the VHS is not seen in the PL states. \begin{figure*} \begin{center} \FigureFile(136mm,60mm){./f09.eps} \end{center} \caption{(a) Significance contours of the best-fit physical parameters ($\tau$ versus $kT_{\rm{e}}$) by the Comptonization model for IC\,342 X-1.
The results for the different observations are shown with different colors: Su1 (black), Su2 (magenta), XM2 (red), XM3 (blue), and XM4 (green). The contours are drawn at the 3$\sigma$ confidence level. The dashed lines represent constant Compton $y$-parameters of 0.5, 1, and 2. (b) $\tau$ against $kT_{\rm{e}}$ for Holmberg\,IX X-1 by \citet{kiki10a}. The data in black (solid line) show a general trend of increasing optical depth as the X-ray luminosity increases, while the data in gray (dotted line) do not. The bolometric luminosities in black from top to bottom are 11.1, 9.41, 6.92, and 6.63$\times 10^{39}$~erg~s$^{-1}$, and those in gray from top to bottom are 5.92, 17.3, 9.27, and 11.1$\times 10^{39}$~erg~s$^{-1}$. \label{fig09}} \end{figure*} In order to understand the physical conditions of the two PL states, we show in figure~\ref{fig09}~(a) the best-fit confidence contours between $\tau$ and $kT_{\rm{e}}$ derived from the Comptonization model for the five observations (Su1--2 and XM2--4). Some specific values of the Compton $y$-parameter ($y$~=~0.5, 1, and 2) are also indicated in this figure with the dashed lines. Two important facts are seen in figure~\ref{fig09}~(a): First, the two states occupy different regions in this $\tau$--$kT_{\rm{e}}$ plane: $kT_{\rm{e}}$~$\sim$~1.8~keV and $\tau$~$\sim$~8.5 in the high luminosity PL state, while $kT_{\rm{e}}$ ($>$2~keV) and $\tau$ ($<$8) remain unconstrained in the low luminosity PL state. Second, despite the different values of $kT_{\rm{e}}$ and $\tau$, both states fall in the region of similar $y$-values, i.e., between $y$~=~0.5 and 2. This is a noteworthy feature whose origin should be considered carefully, since the $y$-parameter is closely connected to the physical condition of the Comptonizing corona (see the next subsection).
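The constant-$y$ lines in figure~\ref{fig09}~(a) follow from the standard definition of the Compton $y$-parameter for a thermal, non-relativistic corona, $y = (4kT_{\rm{e}}/m_{\rm{e}}c^2)\,\max(\tau, \tau^2)$. A minimal sketch, evaluated at the best-fit high luminosity PL state values quoted above:

```python
ME_C2_KEV = 511.0  # electron rest energy in keV

def compton_y(kTe_keV, tau):
    """Compton y-parameter for a thermal, non-relativistic corona."""
    return 4.0 * kTe_keV / ME_C2_KEV * max(tau, tau**2)

# Best-fit high luminosity PL state values (kTe ~ 1.8 keV, tau ~ 8.5)
y = compton_y(1.8, 8.5)
print(f"y ~ {y:.2f}")  # lands between the y = 0.5 and y = 2 contours
```

This reproduces the $y$~$\sim$~1 value discussed in the interpretation below.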
Note that PL-like spectra are produced by unsaturated inverse Compton scattering in the energy range $kT_{\rm{seed}}$~$\ll$~$h\nu$~$\ll$~$kT_{\rm{e}}$, on the condition that the seed photon temperature ($kT_{\rm{seed}}$) is much lower than the electron temperature of the Comptonizing corona. This is the case here, since the spectral fitting gives $kT_{\rm{seed}}$~$\lesssim$~0.5~keV, while $kT_{\rm{e}}$~$>$~1~keV in the high luminosity PL state; a similar argument applies to the low luminosity PL state as well. In the following subsection, we interpret the PL states based on these spectral properties. \subsection{Physical Interpretations of the Two PL States} \subsubsection{The high luminosity PL state --- possible scenario invoking a Comptonizing corona} First, we consider the high luminosity PL state, which is characterized by a turnover below 10~keV. \citet{kubo02} first reported the existence of a turnover below 10~keV in the PL-shaped spectrum of a ULX (IC\,342 X-1) with ASCA. \citet{miyawa09} also reported a turnover in the PL-shaped spectrum of another ULX (M\,82 X-1) with Suzaku. They interpreted such ULXs as being in a state corresponding to the ``normal'' VHS of Galactic BHBs. We, however, have already pointed out some distinctions between the VHS of Galactic BHBs and the high luminosity PL state ($\S$~5.1.2). PL-shaped spectra with a turnover below 10~keV were also reported in the high-quality spectra of several ULXs analyzed by \citet{gla09}. They found that such spectra can be explained by emission from a low-temperature, optically thick corona with $kT_{\rm{e}}$~$<$~2.5~keV and $\tau$~$>$~6. They proposed to call this the ultraluminous state (see \cite{sor07}), claiming that the low-temperature, optically thick corona is formed by supercritical accretion flows. This idea has been supported by the global radiation-hydrodynamic simulations of \citet{kawa12}.
\citet{kiki10a} also reported PL-shaped spectra with a turnover, obtained from long-term monitoring observations of the ULX Holmberg\,IX X-1 (one of the ULXs analyzed by \cite{gla09}) with XMM-Newton and Swift. They discovered a general trend of decreasing electron temperature and increasing optical depth as the X-ray luminosity increases, especially at $\tau$~$>$~6 (the black solid lines in figure~\ref{fig09}b). Notably, its $y$-parameter is $\sim$1. This $\tau$--$kT_{\rm{e}}$ relation (including the $y$-parameter) is similar to that of IC\,342 X-1. From these similarities, we suggest that IC\,342 X-1 is in a physical condition similar to that of the ULXs reported by \citet{gla09} and \citet{kiki10a}. In other words, IC\,342 X-1 (at least during the high luminosity PL state) is likely to undergo supercritical accretion, if the interpretation by \citet{gla09} and \citet{kawa12} is correct. However, \citet{kiki10a} also reported somewhat different behavior. First, Holmberg\,IX X-1 showed rather continuous spectral variations, while IC\,342 X-1 seems to stay in two separate states, although the data sampling is rather sparse. Second, the spectral changes of Holmberg\,IX X-1 are not always uniquely determined by its X-ray luminosity (see the gray dotted lines in figure~\ref{fig09}b). It is interesting to point out that Galactic BHBs exhibit hysteretic spectral transitions; their spectral states differ between the rising and decaying phases (see, e.g., \cite{miyamo93}; \cite{fend04}). We may expect similar hysteretic behavior here, and if so, distinct spectral states can appear even at similar luminosities. Certainly, we need more continuous observations to see whether these differences really exist. We briefly comment on the case of the Galactic BHB GRS\,1915$+$105, in which the mass of the BH is known to be 14$\pm$4~$M_{\odot}$ \citep{gre01}.
Near the Eddington luminosity (apparent $L$~$\sim$~0.7$L_{\rm{Edd}}$), under the supercritical accretion situation, the source showed a unique state with a low-temperature ($kT_{\rm{e}}$~$\sim$~3~keV) and optically thick ($\tau$~$\sim$~9) Comptonizing corona \citep{kiki10b}. Such features are quite reminiscent of the PL state with a turnover in ULXs, and this similarity may suggest similar physical conditions; i.e., supercritical accretion may also be occurring in the ULXs. We must point out a problematic fact, however: the seed photon temperatures are distinct. GRS\,1915$+$105 exhibits a disk-dominated spectrum with $kT_{\rm{seed}}$~$\sim$~2~keV, while our data of IC\,342 X-1 show much lower seed photon temperatures, below 0.5~keV. The origin of this discrepancy is left as an open question. \medskip \subsubsection{A possible theoretical interpretation of the high luminosity PL state} Here, we try to understand the physical cause of the $y$~$\sim$~1 observed in the high luminosity PL state within the framework of the disk-corona model. Let us consider the energy balance of the corona and that of the accretion disk. The former can be described by $Q^{+}_{\rm{corona}}$~=~$Q^{-}_{\rm{corona}}$, where $Q^{+}_{\rm{corona}}$ and $Q^{-}_{\rm{corona}}$ represent the heating and cooling rates of the corona, respectively. The cooling rate is approximated by $Q^{-}_{\rm{corona}}$~$\sim$~$(y/2) c U_{\rm{seed}}$, where $U_{\rm seed}= a T_{\rm seed}^4$ is the energy density of the seed photons generated on the surface of the disk, and $a$ and $c$ are the radiation constant and the speed of light, respectively (see, e.g., \cite{mey00}). From the energy balance within the disk, on the other hand, we have $Q^{+}_{\rm{disk}}$~=~$Q^{-}_{\rm{disk}}$~=~$\sigma T_{\rm{seed}}^{4}$~=~$c U_{\rm{seed}}/4$, where $Q^{+}_{\rm{disk}}$, $Q^{-}_{\rm{disk}}$, and $\sigma$ are the heating rate and cooling rate of the disk, and the Stefan-Boltzmann constant, respectively.
Suppose that most of the accretion energy liberated within the disk is transported to the corona by the magnetic field (e.g., \cite{haa91}). Since about half of the radiation scattered in the corona heats up the disk material, the energy balance in the disk leads to $Q^{+}_{\rm{corona}}/2$~$\sim$~$Q^{+}_{\rm{disk}}$. Substituting the expressions above gives $(y/4) c U_{\rm{seed}}$~$\sim$~$c U_{\rm{seed}}/4$, and hence we obtain $y$~$\sim$~1. Note that the above calculation applies only when the Comptonizing corona is optically thick ($\tau$~$\gtrsim$~1). \medskip \subsubsection{The high luminosity PL state --- possible scenario invoking a disk} In the above scenario, the existence of a low-temperature optically-thick corona was assumed. Conversely, we here consider another possibility: the spectral turnover in the high luminosity PL state may be produced by direct disk emission. The origin of the turnover cannot be the standard disk, since both spectra in XM2 and XM4 (namely the high luminosity PL state) cannot be reproduced by the MCD model (table~\ref{table5}). Hence, we examined slim disk emission by applying an extended MCD model (the so-called $p$-free model) with the additional parameter $p$, which parameterizes the flatness of the disk temperature ($T$) profile; i.e., $T(r)$~$\propto$~$r^{-p}$, where $r$ is the radius of the disk. The extended MCD model with $p$~=~0.75 reduces to the ``normal'' MCD model. If the spectra are from a slim disk, $p$ is expected to decrease from 0.75 toward 0.5 as the X-ray luminosity increases (e.g., \cite{wata01}). The same conclusion was derived even when mass loss by a radiation-pressure driven outflow is taken into account \citep{take09}. In fact, such a $p$--$L_{\rm{X}}$ variation was reported in a BHB \citep{kubo04a} and a ULX \citep{iso09} in the slim disk state.
For IC\,342 X-1, the individual spectra can be reproduced by the extended MCD model (reduced-$\chi^2$~=~0.95 in XM2 and 0.92 in XM4), but the derived $p$--$L_{\rm{X}}$ variation ($p$~=~0.54$\pm$0.01 at $L_{\rm{X}}$~=~1.0$\times 10^{40}$~erg~s$^{-1}$ in XM2 to $p$~=~0.57$\pm$0.02 at $L_{\rm{X}}$~=~1.2$\times 10^{40}$~erg~s$^{-1}$ in XM4) does not agree with the expected one. We thus conclude that the IC\,342 X-1 spectra, at least in the high luminosity PL state, are from neither the standard disk nor the slim disk. \medskip \subsubsection{The low luminosity PL state} Next, we discuss the low luminosity PL state, which is characterized by a PL tail extending up to at least 20~keV. Since the low luminosity PL state also shows $y$~$\sim$~1, like the high luminosity PL state, the two PL states can be described by a similar corona model with a similar energy balance. In other words, the distinction between the two PL states can be attributed solely to different coronal temperatures or optical depths. Two unresolved issues remain. First, it is not clear from the spectral fitting results alone which of $kT_{\rm{e}}$ and $\tau$ is the principal (controlling) parameter. If we adopt the supercritical accretion scenario, an increase in $\tau$ would be the cause of the change in the electron temperature, since the amount of outflow gas (i.e., $\tau$) can vary depending on the disk luminosity \citep{kawa09}. The observed behavior of the two PL states is consistent with their calculation. \begin{figure} \begin{center} \FigureFile(72mm,57mm){./f10.eps} \end{center} \caption{Spectral energy distributions for the Comptonization model with the parameters of each PL state. The black solid line shows the high luminosity PL state with $kT_{\rm{e}}$~=~1.8~keV and $\tau$~=~8.5. The gray lines show the low luminosity PL state with $kT_{\rm{e}}$~=~7~keV and $\tau$~=~3.6 (solid) and $kT_{\rm{e}}$~=~20~keV and $\tau$~=~1.8 (dashed).
The seed photon temperature of both states is assumed to be 0.1~keV. The energy ranges used in the present paper are shown with gray shading (0.5--10~keV and 14--20~keV). The energy range of the Astro-H and NuSTAR satellites is shown at top-right. \label{fig10}} \end{figure} Second, the electron temperature of the Comptonizing corona (or outflow, if it exists) in the low luminosity PL state has not yet been precisely measured. If the low luminosity PL state is somewhat similar to the VHS of Galactic BHBs, we expect higher electron temperatures, say, $kT_{\rm{e}}$~$\gtrsim$~20~keV. Alternatively, there may be no VHS-like states in ULXs, which is supported by the absence of large luminosity variations in ULXs. In order to settle this issue, more sensitive, higher energy ($>$20~keV) observations are required. We illustrate the expected spectra of two representative cases in figure~\ref{fig10}. In one case, there is a Comptonizing corona (or outflow) with a coronal electron temperature similar to that of the VHS ($kT_{\rm{e}}$~=~20~keV), and hence the turnover energy is $\sim$60~keV. In the other case, the electron temperature is much lower, $kT_{\rm{e}}$~=~7~keV, leading to a lower turnover energy of $\sim$20~keV. Here, we fixed the optical depths to $\tau$~=~1.8 and 3.6, respectively. Such observations will be made possible by the launch of Astro-H \citep{koku08,taka10} and NuSTAR \citep{har05}. \medskip \subsubsection{Relation with the convex state} Finally, we briefly mention the relation between the PL states and the convex state. In IC\,342 X-1, the convex state was observed in 1993 September with ASCA \citep{oka98,kubo01}. Its spectral shape is clearly different from those in the PL states. In fact, the spectrum in the convex state can be reproduced by the MCD model, which does not reproduce the spectra in the PL states.
However, the 0.5--10~keV luminosity in the convex state ($\sim$1.3$\times 10^{40}$~erg~s$^{-1}$ assuming 3.3~Mpc) is comparable to that in the high luminosity PL state. That is, two distinct states appear at a similar luminosity. These facts imply that the spectral state of ULXs (at least IC\,342 X-1) is not uniquely determined by the luminosity (or the accretion rate) alone but in a more complicated fashion. One possibility is a hysteretic relation like that known in Galactic BHBs, as we mentioned earlier ($\S$~5.2.1). More frequent observations are required to elucidate the complex spectral variations. \section{Summary} We have analyzed X-ray spectra of the unique ULX IC\,342 X-1 by using our two Suzaku data sets and archival data from multiple XMM-Newton, Chandra, and Swift observations. X-1 clearly showed two distinct sub-states: (i) The low luminosity PL state appears in 70\% of all the present observations. The X-ray luminosities are (4--6)$\times 10^{39}$~erg~s$^{-1}$ (0.5--10~keV), and the spectra have a PL tail extending up to at least 20~keV. The coronal parameters derived from a Comptonization model are strongly coupled ($kT_{\rm{e}}$~$>$~2~keV and $\tau$~$<$~8). (ii) The high luminosity PL state appears in 30\%. The X-ray luminosities are (1.1--1.4)$\times 10^{40}$~erg~s$^{-1}$ (0.5--10~keV), and the spectra have a turnover at about 6~keV. The coronal parameters are determined to be $kT_{\rm{e}}$~$\sim$~1.8~keV and $\tau$~$\sim$~8.5. Since the two PL states showed a similar Compton $y$-parameter of $y$~$\sim$~1, we suggest that the two PL states are described by a similar corona model with a similar energy balance. The electron temperature decreases and the optical depth increases with increasing X-ray luminosity.
This behavior of IC\,342 X-1 is similar to that of some ULXs (especially Holmberg\,IX X-1), probably under the supercritical accretion situation, and to that of the Galactic BHB GRS\,1915$+$105 in its unique state with supercritical accretion. These facts imply the possibility that the two PL states of IC\,342 X-1 are formed by supercritical accretion flows. On the other hand, an unresolved issue remains regarding the relation between the high and low luminosity PL states. More sensitive and higher energy observations achieved by the next generation X-ray satellites Astro-H and NuSTAR would unveil the detailed physical condition of the low luminosity PL state. \bigskip We thank Kazuo Makishima for helpful comments on the paper. This research has made use of data obtained from the Data ARchives and Transmission System (DARTS), provided by the Center for Science-satellite Operation and Data Archives (C-SODA) at ISAS/JAXA, and from the High Energy Astrophysics Science Archive Research Center (HEASARC), provided by NASA's Goddard Space Flight Center. The Digitized Sky Survey was produced at the Space Telescope Science Institute under US Government grant NAG W-2166. Images of this survey are based on photographic data obtained using the Oschin Schmidt Telescope on Palomar Mountain and the UK Schmidt Telescope. The Second Digitized Sky Survey image used in this paper was produced with Montage, an image mosaic service supported by the NASA Earth Sciences Technology Office Computing Technologies program. Tessei Yoshida is supported by the Japan Society for the Promotion of Science Research Fellowship for Young Scientists.
\section{Introduction} Neural machine translation (NMT) has set several new state-of-the-art benchmarks \cite{bojar-etal-2018-findings,barrault-etal-2019-findings}. Recently, unsupervised NMT (UNMT) has attracted great interest in the machine translation community~\cite{DBLP:journals/corr/abs-1710-11041,lample2017unsupervised,P18-1005,lample2018phrase,sun-etal-2019-unsupervised}. Typically, UNMT relies solely on monolingual corpora in similar domains, rather than the bilingual parallel data required for supervised NMT (SNMT), to model translation between the source and target languages, and has achieved remarkable results on several translation tasks~\cite{DBLP:journals/corr/abs-1901-07291}. The available training data is ever increasing; however, only related-domain corpora, also called in-domain corpora, are able to improve NMT performance \cite{koehn-knowles-2017-six}. Additional unrelated corpora, also called out-of-domain corpora, are unable to improve, or even harm, NMT performance for some domains such as TED talks and some tasks such as IWSLT \cite{wang-etal-2017-instance}. Domain adaptation methods have been well studied in SNMT \cite{chu-etal-2017-empirical,DBLP:conf/aclnmt/ChenCFL17,wang-etal-2017-sentence,wang-etal-2017-instance,DBLP:conf/emnlp/WeesBM17,DBLP:conf/wmt/FarajianTNF17,DBLP:conf/coling/ChuW18}, while they have not been well studied in UNMT. For UNMT, in addition to the domain inconsistency between training and test data that also arises in SNMT, there can be domain inconsistency between the monolingual training data of the two languages. In fact, for some language pairs it is difficult to obtain sufficient source and target monolingual corpora from the same domain in real-world scenarios. In this paper, we first define and analyze several scenarios for UNMT with specific domains. On the basis of the characteristics of these scenarios, we revisit existing domain adaptation methods, including batch weighting and fine-tuning, in UNMT.
Finally, we propose modified domain adaptation methods to improve the performance of UNMT in these scenarios. To the best of our knowledge, this paper is the first work to explore the domain adaptation problem in UNMT. \begin{table*}[th] \centering \scalebox{.88}{ \begin{tabular}{c|l|cccc} \toprule Scenarios & Abbreviation & $L_1$ in-domain & $L_2$ in-domain & $L_1$ out-of-domain & $L_2$ out-of-domain \\ \midrule \multirow{3}{*}{\begin{tabular}[c]{@{}l@{}}Monolingual corpora \\from same domains\end{tabular}} & $II$ &\checkmark &\checkmark &$\times$& $\times$\\ \cline{2-6} & $OO$ & $\times$& $\times$&\checkmark &\checkmark\\ \cline{2-6} & $IIOO$ &\checkmark &\checkmark &\checkmark &\checkmark\\ \midrule \multirow{6}{*}{\begin{tabular}[c]{@{}l@{}}Monolingual corpora \\ from different domains\end{tabular}} & \multirow{2}{*}{$IOO$} &$\times$ & \checkmark &\checkmark& \checkmark\\ \cline{3-6} & &\checkmark & $\times$ &\checkmark& \checkmark\\ \cline{2-6} & \multirow{2}{*}{$IIO$} &\checkmark &\checkmark & \checkmark& $\times$\\ \cline{3-6} & &\checkmark &\checkmark & $\times$& \checkmark\\ \cline{2-6} & \multirow{2}{*}{$IO$} & $\times$ & \checkmark &\checkmark& $\times$\\ \cline{3-6} & & \checkmark& $\times$ &$\times$ & \checkmark\\ \bottomrule \end{tabular}} \caption{The availability of monolingual training corpora for the different scenarios. $\checkmark$ denotes that the monolingual corpus is available in the scenario; $\times$ denotes that it is not.} \label{tab:scenario} \end{table*} \section{UNMT Domain Adaptation Scenarios} \label{scenarios} In SNMT, all corpora are parallel and the domains of the source and target corpora are the same. Therefore, domain adaptation techniques focus on the domain shift between the training and test corpora. In UNMT, there are only monolingual corpora, and the domains of the source and target corpora are sometimes different. Therefore, there are more scenarios of UNMT domain adaptation.
Given two different languages $L_1$ and $L_2$, we define two main scenarios according to the domains of the two languages in the training set: monolingual training corpora from the same domain, and monolingual training corpora from different domains, as shown in Table \ref{tab:scenario}. Taking monolingual corpora from different domains as an example, we further divide this scenario into three sub-scenarios: $IOO$, $IIO$, and $IO$, where ``$I$" denotes in-domain data for one language and ``$O$" denotes out-of-domain data for one language. For instance, $IOO$ denotes that there are resource-rich out-of-domain monolingual corpora for both languages and a resource-poor in-domain monolingual corpus for language $L_2$. In particular, we regard ``$L_2$ in-domain + $L_1$ out-of-domain" and ``$L_1$ in-domain + $L_2$ out-of-domain" as the same scenario $IO$. Note that scenarios $II$ and $OO$ serve only as baselines for evaluating the other four scenarios. In this paper, we consider the other four scenarios to improve translation performance. \section{Domain Adaptation Methods} According to the introduced scenarios, we revisit two simple domain adaptation methods, namely batch weighting and fine-tuning. \subsection{Batch Weighting for UNMT} \label{Weighting} \textbf{Original:} The batch weighting method \cite{wang-etal-2017-instance} for SNMT is difficult to transfer directly to UNMT training because the training data of the source and target languages are sometimes unbalanced (as in the $IO$ and $IIO$ scenarios). Whether training the cross-lingual language model or the UNMT model, the model over-fits the language with the smaller amount of in-domain monolingual data. In other words, the large-scale out-of-domain monolingual corpus for the other language is not fully utilized.
\noindent \textbf{Modified:} To address this issue, we propose a batch weighting method for UNMT domain adaptation that makes full use of the out-of-domain corpus to build a robust UNMT model when only one large-scale out-of-domain monolingual corpus exists in a scenario. Specifically, we adjust the weight of out-of-domain sentences so as to increase the amount of out-of-domain sentences, rather than the amount of in-domain sentences~\cite{8360031}, in every training batch. In our batch weighting method, the out-of-domain sentence ratio is estimated as \begin{equation} \begin{aligned} \mathcal{R}_{out} &=\frac{\mathcal{N}_{out}}{\mathcal{N}_{out}+\mathcal{N}_{in}}, \label{eq:rout} \end{aligned} \end{equation} where $\mathcal{N}_{in}$ is the number of mini-batches loaded from in-domain monolingual corpora in intervals of $\mathcal{N}_{out}$ mini-batches loaded from out-of-domain monolingual corpora. For the $IO$ and $IIO$ scenarios, we apply the proposed batch weighting method to train the cross-lingual language model and the UNMT model in turn, since the quantities of training data in the two languages are quite different in these scenarios. For the $IOO$ and $IIOO$ scenarios, there are two large-scale out-of-domain monolingual corpora of similar size; therefore, the batch weighting method is not necessary for these scenarios. \subsection{Fine Tuning} \label{FineTuning} \textbf{Original:} For the $IIOO$ and $IIO$ scenarios, we first train the UNMT model on the corresponding corpora until convergence. We then fine-tune the parameters of the UNMT model on the resource-poor in-domain monolingual corpora of both languages. However, the original fine-tuning method is difficult to transfer directly to UNMT training under the $IOO$ and $IO$ scenarios, since only in-domain data for language $L_2$ exist under these scenarios, as shown in Table \ref{tab:scenario}. \noindent \textbf{Modified:} We propose a modified data selection method \cite{moore-lewis-2010-intelligent,axelrod-etal-2011-domain} to select pseudo in-domain data from the out-of-domain data of the other language $L_1$. The traditional data selection for SNMT domain adaptation \cite{wang-etal-2017-sentence,8360031} is not suitable for UNMT because an in-domain language model cannot be trained when no in-domain corpus exists for language $L_1$. To address this issue, we back-translate the language $L_2$ in-domain data into language $L_1$ pseudo in-domain data, using a UNMT baseline system. Then, we use these corpora to train a cross-lingual language model as the in-domain language model. For the $IO$ scenario, in which only an out-of-domain corpus exists for language $L_1$, as shown in Table \ref{tab:scenario}, we randomly select a language $L_1$ out-of-domain corpus that is similar in size to the language $L_2$ in-domain corpus and take the same approach to train a cross-lingual language model as the out-of-domain language model. For the $IOO$ scenario, in which out-of-domain corpora exist for both languages, we randomly select out-of-domain corpora that are similar in size to the language $L_2$ in-domain corpus for each language.
Then we train a cross-lingual out-of-domain language model using these corpora. In practice, we adopt the data selection method \cite{moore-lewis-2010-intelligent,axelrod-etal-2011-domain} and rank an out-of-domain sentence $s$ using: \begin{equation} \begin{aligned} CE_I(s)-CE_O(s), \end{aligned} \end{equation} where $CE_I(s)$ denotes the cross-entropy of the sentence $s$ computed by the in-domain language model, and $CE_O(s)$ denotes the cross-entropy of the sentence $s$ computed by the out-of-domain language model. This measure is biased towards sentences that are both similar to the in-domain corpus and dissimilar to the out-of-domain corpus. We then select the lowest-scoring sentences as the pseudo in-domain corpus. Finally, we fine-tune the parameters of the UNMT model on the resource-poor in-domain monolingual corpus for language $L_2$ and the pseudo in-domain corpus for language $L_1$ obtained by the modified data selection method. \begin{table}[th] \centering \scalebox{1}{ \begin{tabular}{lcc} \toprule Scenarios & Batch weighting & Fine tuning \\ \midrule $IIOO$ &- &\checkmark \\ $IOO$ &- &\checkmark \\ $IIO$ &\checkmark&\checkmark \\ $IO$ & \checkmark& \checkmark \\ \bottomrule \end{tabular}} \caption{The suitability of the proposed methods for the different scenarios. $\checkmark$ denotes that the method is used in this scenario; $-$ denotes that it is not.} \label{tab:method} \end{table} Overall, the batch weighting method is used when there is no out-of-domain monolingual corpus for one language, i.e., scenarios $IIO$ and $IO$; the fine-tuning method is suitable for all considered scenarios, i.e., $IIOO$, $IOO$, $IIO$, and $IO$, as shown in Table~\ref{tab:method}.
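The cross-entropy difference ranking above can be sketched as follows (a minimal illustration; the sentence identifiers and per-sentence cross-entropies are made-up toy values, whereas in practice $CE_I$ and $CE_O$ would come from the in-domain and out-of-domain cross-lingual language models):

```python
def select_pseudo_in_domain(sentences, ce_in, ce_out, k):
    """Rank out-of-domain sentences by CE_I(s) - CE_O(s) and keep the
    k lowest-scoring ones as the pseudo in-domain corpus."""
    ranked = sorted(sentences, key=lambda s: ce_in[s] - ce_out[s])
    return ranked[:k]

# Toy per-sentence cross-entropies (hypothetical values):
ce_in = {"a": 2.1, "b": 5.0, "c": 3.0}
ce_out = {"a": 4.0, "b": 3.5, "c": 3.2}
# "a" and "c" score lowest (most in-domain-like), so they are selected.
selected = select_pseudo_in_domain(["a", "b", "c"], ce_in, ce_out, k=2)
```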
\section{Experiments} \subsection{Datasets} We conducted simulated experiments on two language pairs, French (Fr)$\leftrightarrow$English (En) and German (De)$\leftrightarrow$En. For out-of-domain corpora, we used 50M sentences from the WMT monolingual news crawl datasets for each language. For in-domain corpora, we used 200k sentences from the shuffled IWSLT TED-talk training corpora for each language. To make our experiments comparable with previous work \cite{8360031}, we report results on IWSLT test2010 and test2011 for Fr$\leftrightarrow$En and IWSLT test2012 and test2013 for De$\leftrightarrow$En. For preprocessing, we followed the same method as \newcite{lample2018phrase}; that is, we used a shared vocabulary for both languages with 60k subword tokens based on BPE \cite{sennrich2015neural}. We used the same vocabulary, covering the in-domain and out-of-domain corpora, for the different scenarios. If only one in-domain monolingual corpus exists in a scenario, we chose the Fr/De in-domain monolingual corpus; if only one out-of-domain monolingual corpus exists in a scenario, we chose the En out-of-domain monolingual corpus, for uniform comparison. \subsection{Language Model and UNMT Settings} We used the XLM UNMT toolkit\footnote{\url{https://github.com/facebookresearch/XLM}} and followed the settings of \newcite{DBLP:journals/corr/abs-1901-07291}. We first trained the cross-lingual language model, with 6 layers for the encoder. The dimension of the hidden layers was set to 1024. The Adam optimizer \cite{kingma2014adam} was used to optimize the model parameters, with an initial learning rate of 0.0001, $\beta_1 = 0.9$, and $\beta_2 = 0.98$. We trained a specific cross-lingual language model for each scenario. The cross-lingual language model was used to initialize the encoder and decoder of the whole UNMT model and to select the pseudo in-domain monolingual corpus.
The UNMT model included 6 layers for the encoder and the decoder. The other parameters were the same as those of the language model. We used the case-sensitive 4-gram BLEU score computed by the \texttt{multi-bleu.perl} script from Moses \cite{koehn-etal-2007-moses} to evaluate the test sets. The baselines in the different scenarios are the UNMT systems trained on the mixed monolingual corpora, including the in-domain and out-of-domain data of the corresponding scenarios. \subsection{Main Results} \begin{table} \centering \scalebox{0.72}{ \begin{tabular}{clclcccccccc} \toprule \multirow{2}{*}{\#}&\multirow{2}{*}{Scenario} & \multirow{2}{*}{Supervision} &\multirow{2}{*}{Method} & \multicolumn{2}{c}{De-En} & \multicolumn{2}{c}{En-De} & \multicolumn{2}{c}{Fr-En} & \multicolumn{2}{c}{En-Fr} \\ &&&&test2012 & test2013 &test2012&test2013&test2010 & test2011 &test2010&test2011\\ \midrule 1&\multirow{2}{*}{$II$} &\multirow{2}{*}{Yes}&\newcite{8360031} &n/a &n/a&23.07&25.40 &n/a& n/a&32.11& 35.22\\ 2&&&Base &33.68 &35.41 & 28.09&30.48 &36.13 & 40.07&36.43 &37.58 \\ \midrule 3&$II$&\multirow{2}{*}{No}&Base&24.42&25.65&21.99&22.72&25.94&29.73&25.32&27.06\\ 4&$OO$&&Base & 21.21&21.66&10.25&9.90&24.28&28.77&23.08&26.08\\ \midrule 5&\multirow{2}{*}{$IIOO$} &\multirow{2}{*}{No}&Base&24.87&26.00 &21.64 &22.57 &26.05 &30.18&26.35&30.12\\ 6&&&FT&29.82&31.57&26.48& 28.18 &31.23&35.94&29.08&33.67\\ \midrule 7&\multirow{3}{*}{$IOO$} &\multirow{3}{*}{No}&Base& 20.94 & 21.52 & 16.53 & 16.80 & 25.16 &29.88 & 25.18 & 28.73 \\ 8&&&FT(original)&22.75&23.14&21.09&21.78&28.37&33.57&26.16&30.14\\ 9&&&FT(modified)&24.33 & 24.77 &24.43 & 25.59 & 29.13 & 34.38 & 26.45 &30.69\\ \midrule 10&\multirow{3}{*}{$IIO$}&\multirow{3}{*}{No} &Base&11.11 &10.30 &11.54 &11.95 &17.88 &20.32 & 17.02& 18.16 \\ 11&&&FT+BW(original) &19.91&20.19&17.05&17.23 &26.84&29.61&23.18&25.18 \\ 12&&&FT+BW(modified)&26.12 & 27.33 &22.63 & 23.72 & 27.88 & 32.16 &25.42 &28.05 \\ \midrule 13&\multirow{3}{*}{$IO$} &\multirow{3}{*}{No} &Base& 10.79 & 10.77
& 11.44 & 11.82 &18.00&20.91&16.19&16.84\\ 14&&&BW(original)&8.15&7.05&9.28&9.70&18.00&19.52&16.39&17.72\\ 15&&&FT+BW(modified)&19.76 &20.22 & 18.32 & 18.99 & 22.59 & 26.55 & 20.61 & 22.79 \\ \bottomrule \end{tabular}} \caption{The BLEU scores in the different scenarios for the En-De and En-Fr language pairs. Base denotes the baseline in each scenario; FT denotes the fine-tuning method; BW denotes the batch weighting method. Original denotes the original method for SNMT; modified denotes our modified method for UNMT. \#1 and \#2 are the results of supervised NMT; the others are the results of UNMT. $\mathcal{N}_{in}=10$, $\mathcal{N}_{out} =1$ in the original batch weighting method; $\mathcal{N}_{in}=1$, $\mathcal{N}_{out} =30$ in the modified batch weighting method; and the selected pseudo in-domain corpus size is set to 20K for the fine-tuning method in scenarios $IO$ and $IOO$. Note that the $L_2$ in-domain data and all $L_1$ out-of-domain data were used in the original fine-tuning method for scenario $IOO$.} \label{tab:baseline} \end{table} Table \ref{tab:baseline} shows the detailed BLEU scores of all UNMT systems on the De$\leftrightarrow$En and Fr$\leftrightarrow$En test sets. \#1 and \#2 are the BLEU scores of SNMT, and \#3 to \#15 are the BLEU scores of UNMT. Our observations are as follows: 1) The BLEU scores of the baselines in the $IIOO$, $IOO$, $IIO$, and $IO$ scenarios are presented in rows \#5, \#7, \#10, and \#13, respectively. The BLEU scores of the UNMT systems after introducing our proposed methods in these scenarios are reported in rows \#6, \#9, \#12, and \#15, respectively. Compared with the original methods, our modified methods are beneficial for improving the performance of UNMT in the four defined scenarios. 2) In the scenario where the monolingual training corpora are from the same domain, i.e., $IIOO$, the fine-tuning method could further improve UNMT performance, achieving an average improvement of 4.8 BLEU scores on all test sets.
3) In the scenarios where the monolingual training corpora are from different domains (the scenarios unique to UNMT domain adaptation), our modified methods achieved average improvements of 4.4, 11.9, and 6.6 BLEU scores in scenarios $IOO$, $IIO$, and $IO$, respectively. 4) Our modified batch weighting method improved UNMT performance in the case that there is no out-of-domain monolingual corpus for one language, i.e., scenarios $IIO$ and $IO$. Our modified fine-tuning method could further improve translation performance in the case that there is no in-domain monolingual corpus for one language, i.e., scenarios $IOO$ and $IO$. \section{Discussion} We now further analyze the batch weighting and fine-tuning methods and perform an ablation analysis in the scenarios unique to UNMT domain adaptation. \subsection{Batch Weighting Analysis} \label{BW} In Figure \ref{fig:lambda}, we empirically investigate how the out-of-domain ratio $\mathcal{R}_{out}$ in Eq. (\ref{eq:rout}) affects the UNMT performance on the En$\leftrightarrow$De task in the $IO$ scenario. $\mathcal{N}_{in}$ was set to 1. The choice of $\mathcal{N}_{out}$ determines the weight of out-of-domain sentences in every batch across the entire UNMT training process. Larger values of $\mathcal{N}_{out}$ allow more out-of-domain sentences to be utilized in UNMT training; the smaller the value of $\mathcal{N}_{out}$, the more important the in-domain sentences become. As Figure \ref{fig:lambda} shows, values of $\mathcal{N}_{out}$ ranging from 10 to 100 all enhanced UNMT performance, and a balanced $\mathcal{N}_{out}$~=~30 achieved the best performance.
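The interleaving schedule behind Eq. (\ref{eq:rout}) can be sketched as follows (a minimal illustration, not the actual XLM training loop; the number of cycles is a toy value):

```python
def batch_schedule(n_in, n_out, n_cycles):
    """Interleave mini-batches: n_out out-of-domain batches followed by
    n_in in-domain batches, repeated for n_cycles cycles."""
    schedule = []
    for _ in range(n_cycles):
        schedule += ["out"] * n_out + ["in"] * n_in
    return schedule

# Modified batch weighting setting used in the paper: N_in = 1, N_out = 30
sched = batch_schedule(n_in=1, n_out=30, n_cycles=10)
r_out = sched.count("out") / len(sched)  # = N_out / (N_out + N_in) = 30/31
```

With $\mathcal{N}_{in}$~=~1 and $\mathcal{N}_{out}$~=~30, the realized out-of-domain ratio is 30/31, so almost every gradient step sees out-of-domain data while in-domain batches still appear regularly.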
\begin{figure}[ht] \setlength{\abovecaptionskip}{0pt} \begin{center} \scalebox{1}{ \pgfplotsset{height=5.6cm,width=8.5cm,compat=1.14,every axis/.append style={thick}} \begin{tikzpicture} \tikzset{every node}=[font=\small] \begin{axis} [width=7cm,enlargelimits=0.13, tick align=outside, legend style={cells={anchor=west},legend pos=south east, legend columns=2,every axis legend/.append style={ at={(1,0)}}}, xticklabels={ $1$, $10$,$20$, $30$, $40$,$50$, $100$}, xtick={0,1,2,3,4,5,6}, axis y line*=left, axis x line*=left, ylabel={BLEU score},xlabel={$\mathcal{N}_{out}$ value},font=\small] \addplot+ [sharp plot,densely dashed,mark=square*,mark size=1.2pt,mark options={solid,mark color=cyan}, color=cyan] coordinates { (0,11.44)(1,16.01)(2,16.52)(3,16.65)(4,16.71)(5,16.15)(6,15.22) }; \addlegendentry{\tiny En-De-test2012} \addplot+ [sharp plot,densely dashed,mark=square*,mark size=1.2pt,mark options={solid,mark color=orange}, color=orange] coordinates { (0,11.82)(1,16.60)(2,17.18)(3,17.66)(4,17.10)(5,16.79)(6,15.05) }; \addlegendentry{\tiny En-De-test2013} \addplot+ [sharp plot, mark=*,mark size=1.2pt,mark options={solid,mark color=cyan}, color=cyan] coordinates { (0,10.79)(1,17.78)(2,18.42)(3,18.60) (4,18.50)(5,17.92)(6,17.41)}; \addlegendentry{\tiny De-En-test2012} \addplot+[sharp plot, mark=*,mark size=1.2pt,mark options={solid,mark color=orange}, color=orange] coordinates { (0,10.77)(1,18.00)(2,19.43)(3,19.48) (4,19.24)(5,18.64)(6,17.68)}; \addlegendentry{\tiny De-En-test2013} \end{axis} \end{tikzpicture}} \caption{\label{fig:lambda}Effect of the mini-batch size $\mathcal{N}_{out}$ on UNMT performance after introducing the batch weighting method on the En$\leftrightarrow$De dataset in the $IO$ scenario.} \end{center} \end{figure} Moreover, we compared the performance of two batch weighting methods: the existing batch weighting method \cite{8360031} used in NMT domain adaptation and our modified batch weighting method designed for UNMT domain adaptation.
As shown in Table~\ref{tab:two_batch}, +BW \cite{8360031} ($\mathcal{N}_{in}=10$, $\mathcal{N}_{out} =1$) achieved worse performance than the baseline. Our modified batch weighting method outperformed the baseline by 4.6$\sim$7.2 BLEU scores. This validates that the supervised domain adaptation method proposed by \newcite{8360031} is not suitable for UNMT, while our modified batch weighting method can build a more robust UNMT model. \begin{table}[ht] \centering \scalebox{1}{ \begin{tabular}{lrrrr} \toprule \multirow{2}{*}{Method} & \multicolumn{2}{c}{De-En} & \multicolumn{2}{c}{En-De} \\ &test2012 & test2013 &test2012&test2013\\ \midrule Base&10.79 &10.77 &11.44 &11.82 \\ \midrule \;\;\;+BW \cite{8360031}&8.15&7.05&9.28&9.70\\ \;\;\;+BW (our) &17.78 &18.00 & 16.01 & 16.60\\ \bottomrule \end{tabular}} \caption{The results of the two batch weighting methods in the $IO$ scenario on the En-De language pairs. } \label{tab:two_batch} \end{table} In addition, we compared the training time cost of our batch weighting method and the baseline in the $IO$ scenario. As shown in Figure \ref{fig:time}, both our batch weighting method and the baseline take 30 hours for the whole training process in the $IO$ scenario. The BLEU score of the baseline decreased rapidly after a certain number of epochs due to over-fitting, while our proposed batch weighting method continuously improved translation performance during the training process. Over the course of the training process, our proposed batch weighting method performed significantly better than the baseline. These results demonstrate that our proposed batch weighting method is robust and effective.
\begin{figure}[ht] \setlength{\abovecaptionskip}{0pt} \begin{center} \scalebox{1}{ \pgfplotsset{height=5.6cm,width=8.5cm,compat=1.14,every axis/.append style={thick},every axis legend/.append style={ at={(1.7,0.21)}},legend columns=2} \begin{tikzpicture} \tikzset{every node}=[font=\small] \begin{axis} [width=7cm,enlargelimits=0.13, tick align=outside, xticklabels={ $5$, $10$,$15$, $20$, $25$,$30$}, xtick={0,1,2,3,4,5}, axis y line*=left, axis x line*=left, ylabel={BLEU score},xlabel={Training time (hour)},font=\small] \addplot+ [sharp plot,densely dashed,mark=square*,mark size=1.2pt,mark options={solid,mark color=cyan}, color=cyan] coordinates { (0,13.49)(1,14.51)(2,15.07)(3,15.52)(4,15.95)(5,16.01)}; \addlegendentry{\tiny En-De-BW} \addplot+ [sharp plot,densely dashed,mark=square*,mark size=1.2pt,mark options={solid,mark color=brown}, color=brown] coordinates { (0,9.11)(1,10.39)(2,9.01)(3,7.64)(4,7.42)(5,6.75)}; \addlegendentry{\tiny En-De-Base} \addplot+ [sharp plot, mark=*,mark size=1.2pt,mark options={solid,mark color=cyan}, color=cyan] coordinates { (0,15.46)(1,16.30)(2,16.49)(3,16.91) (4,17.56)(5,17.78)}; \addlegendentry{\tiny De-En-BW} \addplot+[sharp plot, mark=*,mark size=1.2pt,mark options={solid,mark color=brown}, color=brown] coordinates { (0,8.64)(1,10.07)(2,9.21)(3,8.09) (4,7.98)(5,7.60)}; \addlegendentry{\tiny De-En-Base} \end{axis} \end{tikzpicture}} \caption{\label{fig:time}The learning curve between baseline and batch weighting model on the En$\leftrightarrow$De test2012 in $IO$ scenario.} \end{center} \end{figure} \subsection{Fine Tuning Analysis} \label{FT} \begin{figure}[ht] \setlength{\abovecaptionskip}{0pt} \begin{center} \scalebox{1}{ \pgfplotsset{height=5.6cm,width=8.5cm,compat=1.14,every axis/.append style={thick}} \begin{tikzpicture} \tikzset{every node}=[font=\small] \begin{axis} [width=7cm,enlargelimits=0.13, tick align=outside, legend style={cells={anchor=west},legend pos=south east, legend columns=2,every axis legend/.append style={ 
at={(1,0)}}}, xticklabels={$0$, $5K$,$10K$, $20K$,$30K$,$50K$, $1M$,$10M$}, xtick={0,1,2,3,4,5,6,7}, axis y line*=left, axis x line*=left, ylabel={BLEU score},xlabel={Corpus size},font=\small] \addplot+ [sharp plot,densely dashed,mark=square*,mark size=1.2pt,mark options={solid,mark color=cyan}, color=cyan] coordinates { (0,16.01)(1,18.23)(2,18.28)(3,18.32)(4,18.11)(5,17.89)(6,17.2)(7,17) }; \addlegendentry{\tiny En-De-test2012} \addplot+ [sharp plot,densely dashed,mark=square*,mark size=1.2pt,mark options={solid,mark color=orange}, color=orange] coordinates { (0,16.66)(1,18.32)(2,18.72)(3,18.99)(4,18.93)(5,18.83) (6,17.87)(7,17.29)}; \addlegendentry{\tiny En-De-test2013} \addplot+ [sharp plot, mark=*,mark size=1.2pt,mark options={solid,mark color=cyan}, color=cyan] coordinates { (0,17.78)(1,19.01)(2,19.4)(3,19.76) (4,19.7)(5,19.66)(6,19.37)(7,18.67)}; \addlegendentry{\tiny De-En-test2012} \addplot+[sharp plot, mark=*,mark size=1.2pt,mark options={solid,mark color=orange}, color=orange] coordinates { (0,18)(1,19.65)(2,19.7)(3,20.22) (4,20.1)(5,20.05)(6,19.91)(7,19.16)}; \addlegendentry{\tiny De-En-test2013} \end{axis} \end{tikzpicture}} \caption{\label{fig:ds}Effect of the selected in-domain corpus size on the performance of the fine-tuned UNMT model on the En$\leftrightarrow$De dataset in the $IO$ scenario. Corpus size ``0'' indicates the result of the UNMT model with only the batch weighting method.} \end{center} \end{figure} As shown in Figure \ref{fig:ds}, we empirically investigated how the selected pseudo in-domain corpus size for fine tuning affects the performance of the fine-tuned UNMT model on the En$\leftrightarrow$De task in the $IO$ scenario. A larger corpus size allows more pseudo in-domain sentences to participate in further UNMT training; a smaller corpus size makes the selected pseudo in-domain corpus more precise.
Corpus sizes ranging from 5K to 10M all enhanced UNMT performance, and the UNMT model achieved the best performance when the corpus size was set to 20K, as shown in Figure \ref{fig:ds}. This indicates that our modified fine tuning method is robust and effective. Moreover, we evaluated different data selection criteria before fine tuning the UNMT system on the En$\leftrightarrow$De task in the $IO$ scenario. CED outperformed CE by approximately 1 BLEU score, as shown in Table \ref{tab:CE}. This demonstrates that the pseudo in-domain corpus selected by CED is more precise and thus more effective for improving UNMT performance. \begin{table}[ht] \centering \scalebox{1}{ \begin{tabular}{lrrrr} \toprule \multirow{2}{*}{Method} & \multicolumn{2}{c}{De-En} & \multicolumn{2}{c}{En-De} \\ &test2012 & test2013 &test2012&test2013\\ \midrule CED & 19.76 & 20.22 & 18.32 & 18.99 \\ CE &18.53 & 18.87 &17.19 & 17.81\\ \bottomrule \end{tabular}} \caption{Different data selection criteria on the En-De language pairs in the $IO$ scenario. CED denotes the cross entropy difference criterion $CE_I(s)-CE_O(s)$; CE denotes the cross entropy criterion $CE_I(s)$. The pseudo in-domain corpus size is set to 20K.} \label{tab:CE} \end{table} We also investigated the necessity of the denoising auto-encoder during the fine tuning process in the $IIOO$ scenario on the En-De language pairs. As shown in Table \ref{tab:FT}, the fine tuning model with denoising performed slightly better than that without denoising. This demonstrates that the denoising auto-encoder can further enhance model learning ability during fine-tuning on in-domain data. \begin{table}[ht] \centering \scalebox{1}{ \begin{tabular}{lrrrr} \toprule \multirow{2}{*}{Method} & \multicolumn{2}{c}{De-En} & \multicolumn{2}{c}{En-De} \\ &test2012 & test2013 &test2012&test2013\\ \midrule w/o denoising & 29.80 & 30.99 & 26.39 & 27.84 \\ w denoising &29.82 & 31.57 &26.48 & 28.18\\ \bottomrule \end{tabular}} \caption{Denoising analysis on the En-De language pairs in the $IIOO$ scenario.
} \label{tab:FT} \end{table} \subsection{Ablation Analysis} We performed an ablation analysis to understand the importance of our proposed methods in the $IO$ and $IIO$ scenarios (the scenarios unique to UNMT domain adaptation). \begin{table}[ht] \centering \scalebox{1}{ \begin{tabular}{lrrrr} \toprule \multirow{2}{*}{Method} & \multicolumn{2}{c}{De-En} & \multicolumn{2}{c}{En-De} \\ &test2012 & test2013 &test2012&test2013\\ \midrule Base&10.79 &10.77 &11.44 &11.82 \\ \midrule \;\;\;+FT&12.63&12.36&12.22&13.32\\ \;\;\;+BW& 17.78&18.00 &16.01 &16.60\\ \;\;\;+FT+BW & 19.76 & 20.22 & 18.32 &18.99\\ \bottomrule \end{tabular}} \caption{Ablation analysis on the En$\leftrightarrow$De dataset in the $IO$ scenario. +BW denotes that a UNMT system was trained with the batch weighting method; +FT denotes that fine tuning was applied to a UNMT baseline system; +FT+BW denotes that fine tuning was applied to a UNMT system trained with batch weighting. } \label{tab:Ablation3} \end{table} As shown in Table \ref{tab:Ablation3}, both +FT and +BW outperformed the Base in the $IO$ scenario, and +BW was more suitable for this scenario, achieving a much larger improvement in BLEU score. Moreover, FT and BW complement each other: +FT+BW further improved UNMT performance and achieved the best results in the $IO$ scenario. \begin{table}[ht] \centering \scalebox{1}{ \begin{tabular}{lrrrr} \toprule \multirow{2}{*}{Method} & \multicolumn{2}{c}{De-En} & \multicolumn{2}{c}{En-De} \\ &test2012 & test2013 &test2012&test2013\\ \midrule Base&11.11 &10.30 &11.54 &11.95 \\ \midrule \;\;\;+BW &18.96 &18.87 & 20.23 & 20.81\\ \;\;\;+FT& 19.78&20.70 &17.24 &18.02\\ \;\;\;+FT+BW &26.12 & 27.33 &22.63 & 23.72\\ \bottomrule \end{tabular}} \caption{Ablation analysis on the En$\leftrightarrow$De dataset in the $IIO$ scenario.} \label{tab:Ablation} \end{table} As shown in Table~\ref{tab:Ablation}, both +FT and +BW outperformed the Base in the $IIO$ scenario.
In particular, +FT+BW performed better than both +FT and +BW alone. This means that our modified batch weighting and fine tuning methods improve the performance of UNMT in the $IIO$ scenario and, in particular, complement each other to further improve translation performance. \section{Related Work} Recently, UNMT \cite{DBLP:journals/corr/abs-1710-11041,lample2017unsupervised,P18-1005}, trained via bilingual word embedding initialization, denoising auto-encoders, back-translation, and shared latent representation mechanisms, has attracted great interest in the machine translation community. \newcite{lample2018phrase} achieved remarkable results on some similar language pairs by concatenating the two monolingual corpora as one corpus and using the resulting monolingual embeddings to initialize the embedding layer of UNMT. \newcite{wu-etal-2019-extract} proposed an extract-edit approach to extract and then edit real sentences from the target monolingual corpora instead of using back-translation. \newcite{sun-etal-2019-unsupervised} proposed bilingual word embedding agreement mechanisms to improve UNMT performance. More recently, \newcite{DBLP:journals/corr/abs-1901-07291} achieved state-of-the-art UNMT performance by introducing a pretrained cross-lingual language model. However, previous work only focuses on building state-of-the-art UNMT systems on a specific domain and ignores the effect of domain differences on UNMT. Research on domain adaptation for UNMT has been limited, while domain adaptation methods have been well studied for SNMT. \newcite{DBLP:conf/coling/ChuW18} gave a survey of domain adaptation techniques for SNMT. Domain adaptation for SNMT can be divided into two main categories: data optimization and model optimization.
Data optimization methods include synthetic parallel corpus generation using an in-domain monolingual corpus \cite{P16-1009,hu-etal-2019-domain} and data selection from out-of-domain parallel corpora \cite{wang-etal-2017-sentence,DBLP:conf/emnlp/WeesBM17,zhang-etal-2019-curriculum}. Common model optimization methods for domain adaptation include training objective optimization, such as instance weighting \cite{wang-etal-2017-instance,DBLP:conf/aclnmt/ChenCFL17} and fine tuning~\cite{Luong-Manning:iwslt15,P16-1009,DBLP:journals/corr/FreitagA16,DBLP:journals/corr/ServanCS16,chu-etal-2017-empirical}, architecture optimization~\cite{DBLP:conf/ranlp/KobusCS17,britz-etal-2017-effective,gu-etal-2019-improving}, and decoding optimization \cite{DBLP:journals/corr/FreitagA16,khayrallah-etal-2017-neural,saunders-etal-2019-domain}. \section{Conclusion} In this paper, we raise the issue of UNMT domain adaptation, since domain adaptation methods for UNMT had not previously been proposed. We empirically study four scenarios for domain-specific UNMT. Based on these scenarios, we revisit the effect of existing domain adaptation methods, including batch weighting and fine tuning, on UNMT. Experimental results show that our modified methods improve the performance of UNMT in these scenarios. In the future, we will investigate other unsupervised domain adaptation methods to further improve domain-specific UNMT performance. \bibliographystyle{coling}
\section{Introduction}\label{sec:intro} \input{sections/intro} \section{Related work}\label{sec:related} \input{sections/related} \section{Preliminaries} \input{sections/notation}\label{sec:notation} \section{The {{\tt Targeted}} matrix-completion framework}\label{sec:targeted} \input{sections/targeted} \section{Low-rank submatrix discovery}\label{sec:lrsd} \input{sections/lrsd} \section{Experiments}\label{sec:experiments} \input{sections/experiments} \section{Conclusions}\label{sec:conclusions} \input{sections/conclusion} \bibliographystyle{abbrv} \subsection{Evaluation of {targeted} matrix-completion} \input{sections/exp_completion}\label{sec:compexp} \begin{figure*} \centering \captionsetup[subfigure]{justification=centering} \begin{subfigure}{0.24\textwidth} \centering \includegraphics[scale=.5]{final/f1a.pdf} \caption{Rank $\ell$ of {\bS}.} \label{fig:rank} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics[scale=.5]{final/f1b.pdf} \caption{Size of {\bS}} \label{fig:size} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics[scale=.5]{final/f1c.pdf} \caption{Background rank $r$.} \label{fig:backrank} \end{subfigure} \begin{subfigure}{0.24\textwidth} \centering \includegraphics[scale=.5]{final/f1d.pdf} \caption{$\pi$} \label{fig:pi} \end{subfigure} \caption{F-Score achieved by the {\texttt{SVP}}, {\tt BinaryLoops}, and {\tt LRSC} algorithms as: the rank $\ell$ of {\bS}, the size of {\bS}, the background rank $r$, or $\pi$ vary.\label{fig:results1}} \end{figure*} \subsection{Evaluation of {\texttt{SVP}}} In the previous experiments we have demonstrated the improvement the {{\tt Targeted}} framework brings to matrix completion. We now isolate and analyze the algorithmic approach to Step 1 -- the task of finding a low-rank submatrix on \emph{fully known} matrices. 
Our results demonstrate that {\texttt{SVP}} is effective, efficient, and outperforms other heuristics for the same problem on a large variety of instances. For the experiments we set the rank of the matrix {\bM} to $r=1000$ and the rank of the planted submatrix {\bS} to $\ell=5$. Since the objective of our problem is to find the indices $\ensuremath{R_s}$ and $\ensuremath{C_s}$, we evaluate the accuracy of our framework using the standard F-score, which combines precision and recall as F-Score$=2\frac{Pr\times Rcl}{Pr+Rcl}$. The F-score takes values in $[0,1]$, and the higher the value, the better. We compare {\texttt{SVP}} against the algorithm by Rangan~\cite{rangan2012simple} (which we call {\tt BinaryLoops}) and algorithms for Subspace Clustering; the approaches are discussed in detail in Section~\ref{sec:related}. For Subspace Clustering we show results for {\tt LRSC} by Vidal and Favaro~\cite{vidal2014low}, since it offers the best balance of accuracy and efficiency; we used the authors' original implementation. The baseline algorithms required minor modifications, since none of them explicitly outputs the indices of a submatrix. For {\tt BinaryLoops} we set $\gamma_{\text{row}}=\gamma_{\text{col}}=0.75$, according to the author's recommendation. For {\tt LRSC} we set $k=2$ when clustering and select the output $\{\ensuremath{R_s},\ensuremath{C_s}\}$ with the highest accuracy (a user without ground truth would pick the one with the lowest empirical rank). All algorithms were coded in Matlab. \spara{Varying size and rank:}\label{sec:res} First we examine a variety of problem instances by comparing the F-Scores of the different algorithms as we vary (a) the rank $\ell$ of {\bS}, (b) the size of {\bS}, (c) the rank $r$ of the background matrix, and (d) $\pi$. Figure~\ref{fig:results1} shows that {\texttt{SVP}} accurately locates the low-rank submatrix {\bS} for the majority of instances, whether {\bS} is large or small.
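The F-score used in this evaluation can be computed directly from the planted and recovered index sets. The sketch below is ours, and it assumes row and column indices are pooled into a single set; the exact pooling convention is not spelled out here:

```python
def f_score(true_idx, found_idx):
    """F-score between the planted and the recovered index sets.

    Precision and recall are computed over the index sets and combined as
    F = 2 * Pr * Rcl / (Pr + Rcl); returns 0.0 when nothing overlaps.
    """
    true_set, found_set = set(true_idx), set(found_idx)
    if not true_set or not found_set:
        return 0.0
    tp = len(true_set & found_set)  # correctly recovered indices
    if tp == 0:
        return 0.0
    pr = tp / len(found_set)   # precision
    rcl = tp / len(true_set)   # recall
    return 2 * pr * rcl / (pr + rcl)
```

For instance, recovering half of the planted indices with no false positives gives precision 1, recall 0.5, and an F-score of 2/3.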
In Figure~\ref{fig:pi}, we see that, in line with our analysis, {\texttt{SVP}} succeeds precisely as the value of $\pi$ grows larger than one. In contrast to the resilience of {\texttt{SVP}} observed across the problem instances, {\tt LRSC} and {\tt BinaryLoops} are more sensitive to changes. {\tt LRSC} performs best when the rank and size of {\bS} are large and the background rank is small. On the other hand, {\tt BinaryLoops} has the opposite behavior, performing best on instances where the rank of {\bS} is less than five and the background rank is large. These behaviors are in agreement with the analysis and the design of these algorithms in the original papers in which they were introduced. \begin{figure}[H] \centering \captionsetup[subfigure]{justification=centering} \begin{subfigure}{0.22\textwidth} \centering \includegraphics[scale=.5]{final/f3a.pdf} \caption{Incomplete data $\mathbf{M}$.\label{fig:incomplete}} \end{subfigure} \hspace*{\fill} \begin{subfigure}{0.22\textwidth} \centering \includegraphics[scale=.5]{final/f4a.pdf} \caption{Multiple submatrices.\label{fig:many}} \end{subfigure} \caption{F-Scores of {\texttt{SVP}}, {\tt BinaryLoops}, and {\tt LRSC} on (a) incomplete data as a function of the percentage of known entries, and (b) data with multiple low-rank submatrices as a function of their rank $\ell$.\label{fig:twoplots}} \end{figure} \spara{Data with missing entries:} To compare {\texttt{SVP}} to {\tt BinaryLoops} and {\tt LRSC} on incomplete data, we set up the experiment as described in Section~\ref{sec:experiments}. Figure~\ref{fig:incomplete} shows the F-score of each algorithm as a function of the percentage of known entries. The results indicate that when the number of known entries is 20\% or above, the performance of {\texttt{SVP}} is significantly better than that of {\tt BinaryLoops} and {\tt LRSC}.
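For concreteness, a problem instance of the kind used in these experiments (a rank-$r$ background with a planted rank-$\ell$ submatrix and a mask of observed entries) could be generated as follows. This reflects our reading of the setup, with dimensions scaled down from the paper's $r=1000$, $\ell=5$ setting; it is not the authors' exact generator:

```python
import numpy as np

def planted_instance(n=400, sub=80, r=50, ell=2, frac_known=0.5, seed=0):
    """Rank-r background matrix with a planted rank-ell submatrix in its
    top-left sub x sub block, plus a Boolean mask of observed entries."""
    rng = np.random.default_rng(seed)
    # Rank-r background: product of two Gaussian factors.
    M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n)) / np.sqrt(r)
    # Rank-ell planted submatrix, overwritten into the top-left block.
    S = rng.standard_normal((sub, ell)) @ rng.standard_normal((ell, sub)) / np.sqrt(ell)
    M[:sub, :sub] = S
    # Boolean mask: True where the entry is observed.
    mask = rng.random((n, n)) < frac_known
    return M, mask

M, mask = planted_instance()
```

Hiding the planted rows and columns (here the first 80 of each) and varying `frac_known` reproduces the kind of missing-entry sweep shown in the incomplete-data experiment.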
\spara{Multiple submatrices:} Finally, we test whether the different algorithms are affected by the presence of multiple low-rank submatrices. We set up the experiment as described at the start of Section~\ref{sec:experiments} and plant a second submatrix $\bS'$ of the same size and rank as {\bS}, with $\pi=1.2$. Figure~\ref{fig:many} shows the F-Score of each algorithm for the task of finding one submatrix (either {\bS} or $\bS'$) as a function of the rank $\ell$. We observe that {\texttt{SVP}} significantly outperforms {\tt LRSC} and {\tt BinaryLoops}, and is unaffected by the presence of the second submatrix. Further, after removal of the first submatrix discovered, {\texttt{SVP}} retains its high level of accuracy for subsequent submatrices. \spara{Running time:} We compare the running times of {\texttt{SVP}}, {\tt BinaryLoops} and {\tt LRSC} as the size of the input matrix {\bM} increases. The running times of the different algorithms are shown in seconds in Table~\ref{tab:times}; recall that all implementations are in Matlab. \begin{table}[H] \small \centering \begin{tabular}{c|r|r|r|r|r} \toprule $n$ & $400$ & $1000$ & $2000$ & $4000$ & $6000$ \\ \midrule {\texttt{SVP}} & $0.12 $ & $0.23$ & $\mathbf{0.75} $ & $\mathbf{2.67} $ & $\mathbf{11.17} $ \\ \hline {\tt BinaryLoops} & $\mathbf{0.02}$&$\mathbf{0.20 }$ &$1.36$ &$8.68$ & $30.69$ \\ \hline {\tt LRSC} & $0.44$ & $2.19$ & $9.56$ & $83.59 $ & $ 310.87 $ \\ \bottomrule \end{tabular} \caption{The running time of {\texttt{SVP}}, {\tt BinaryLoops}, and {\tt LRSC} in seconds for an $n\times n$ input matrix {\bM}. \label{tab:times}} \end{table} We observe that the difference in the running times is more pronounced for large matrices. For those matrices {\texttt{SVP}} is the most efficient, followed by {\tt BinaryLoops}. The running time of {\tt LRSC} is $10$ times larger than the running time of {\tt BinaryLoops} and $30$ times larger than that of {\texttt{SVP}}.
Note that the running time of {\texttt{SVP}} is dominated by computing the first singular vectors of {\bM}; our implementation of {\texttt{SVP}} uses Matlab's off-the-shelf SVD. In principle, we could improve this running time by using other SVD speedups and approximations, such as the one proposed in~\cite{drineas06fast}. \subsection{Exposing low-rank submatrices.}\label{sec:lowranksubm} \input{sections/lrsd1_lowranksubm} \subsection{Extracting low-rank submatrices.}\label{sec:algo} \input{sections/lrsd2_algo} \subsection{Dealing with incomplete matrices.}\label{sec:incomplete} \input{sections/lrsd3_incomplete}
\section{Introduction} In the last two decades, networks, or graphs, have attracted an enormous amount of attention as an effective way of describing pairwise relations in complex systems \citep{barabasi2002linked}. The ever increasing abundance and variety of graph data have motivated a great deal of applications of statistical models to graphs \citep[see, for example,][for a review]{newman2010networks}. More recently, the availability of time varying network data has stimulated the development of models for temporal networks \citep{hanneke2010discrete, sewell2015latent, giraitis2016estimating, mazzarisi2020dynamic, di2019score}. While the vast majority of this literature focuses on \textit{binary graphs}, i.e. graphs that are defined solely by a set of nodes and a set of links between pairs of nodes, one can often associate a weight with each link. Link weights are typically positive numbers, discrete or continuous, and can represent, for instance, the strength of the relation described by each link. In standard binary descriptions, such relevant information is completely lost. For example, in a network of exposures among financial institutions the weight could be the value of the credit. In this case, a binary network would describe in the same way a link associated with an exposure of $1$ million and one associated with an exposure of $1$ billion. Indeed, it is very common for network data to also have informative weights associated with their links. Additional examples of the importance of weights in networks include the International Trade Network \citep{leamer1995international, fagiolo2010evolution}, migration flows \citep{fagiolo2016revisiting}, scientific collaborations \citep{newman2001scientific}, and transportation networks \citep{barrat2004architecture}, just to mention a few.
In these cases the data are better described by a weighted, or valued, network, which can be associated with a positive, real-valued matrix ${Y_{ij}} \in \mathcal{R}^+ $. ${Y_{ij}}$ is the weight of the directed link from node $i$ to node $j$, and ${Y_{ij}} = 0$ if the link is not present. Moreover, it is well known that real world networks, both binary and weighted, are very often found to be sparse, i.e. their adjacency matrices have an abundance of zero entries. That is the case, for example, of interbank networks \citep{anand2017missing}, a class of weighted temporal networks of paramount importance, which are known to be extremely relevant to financial stability \citep{allen2011networks,haldane2011systemic} and have motivated the application and development of a number of statistical models for networks (see, for example, \cite{bargigli2015multiplex, mazzarisi2020dynamic} and references therein). In spite of their relevance, network weights have received less attention in the literature on models for temporal networks; indeed, there are only a few models for temporal networks that take them into account. In this paper we propose a novel model for sparse and weighted temporal networks that also accommodates the dependency of the network dynamics on external covariates. {Our efforts are originally motivated by the need to properly model weighted temporal network data describing overnight exposures in the Euro interbank market (eMID). In a previous work~\citep{di2019score}, we disregarded the information associated with the links' weights and focused only on the temporal evolution of binary relations. We achieved that by extending the Exponential Random Graph Models (ERGM), a class of statistical models for random networks, whose first and probably most famous example is the Erd{\H{o}}s-R{\'e}nyi model~\citep{erdds1959random}.
In the novel framework, the ERGM parameters change over time following an observation-driven dynamics~\citep{cox1981statistical}. The relevant information for the time evolution is encoded in the filtration $\mathcal{F}_t$, which determines the update of the time varying parameters through an autoregressive process whose innovation term corresponds to the scaled score of the observation probability mass function. Such a specification, known as Dynamic Conditional Score (DCS) or Generalized Autoregressive Score (GAS), was recently introduced in the econometric literature by~\cite{creal2013generalized} and \cite{harvey2013dynamic}. In this paper, we extend the well known fitness model~\citep{Holland81anexponential, PhysRevLett.89.258702,PhysRevE.78.015101, chatterjee2011random, yan2016asymptotics, yan2019statistical} for static binary networks, combining it with a simple generalized linear model to handle the weights and with the DCS approach to time varying parameters. The generalization is non-trivial: we need to explicitly account for the abundance of zeros that follows from the sparse nature of real world networks. We solve the issue by resorting to zero augmentation (ZA). The resulting modeling framework is very general and extremely flexible. It allows us to decouple the probability that a link exists from its expected weight -- a fact that will prove crucial in the forecasting exercise -- leaving full flexibility in the specification of the weight distribution and the possibility of exploring the influence of external covariates on the network's dynamics.} We provide convincing Monte Carlo evidence that the score driven model is an effective filter of the latent fitness dynamics in misspecified settings. We document a clear advantage, in terms of mean squared error and mean absolute difference, with respect to competitor models, also in the presence of external covariates and omitted variables.
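A stylized sampler for a zero-augmented weighted network illustrates the decoupling just described: one set of node fitnesses drives the logistic probability that a link exists, while a second set drives the weight distribution of the links that do exist. The lognormal weight choice and the function below are our own illustrative assumptions, not the exact specification developed in the paper:

```python
import numpy as np

def sample_za_network(theta_bin, theta_w, sigma=0.5, seed=0):
    """Sample one snapshot of a zero-augmented weighted directed network.

    Link presence: logistic fitness model, p_ij = sigmoid(theta_bin_i + theta_bin_j).
    Weight given presence: lognormal with log-mean theta_w_i + theta_w_j.
    Two separate fitness sets decouple link probability from expected weight.
    """
    rng = np.random.default_rng(seed)
    n = len(theta_bin)
    logits = theta_bin[:, None] + theta_bin[None, :]
    p = 1.0 / (1.0 + np.exp(-logits))        # link probabilities
    A = rng.random((n, n)) < p               # binary adjacency draw
    mu = theta_w[:, None] + theta_w[None, :]
    W = rng.lognormal(mean=mu, sigma=sigma)  # candidate positive weights
    Y = np.where(A, W, 0.0)                  # zero augmentation: no link, zero weight
    np.fill_diagonal(Y, 0.0)                 # no self-loops
    return Y
```

Lowering the binary fitnesses sparsifies the network without touching the conditional weight distribution, and vice versa, which is exactly the flexibility the censored-regression alternative lacks.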
In the empirical analysis, we explore from different perspectives the role played by the Euro Overnight Index Average (EONIA) rate in the dynamics of the lending relations. Consistently with similar findings discussed in the literature, we observe that lower interest rates are associated with a reduction of network interconnectedness but an increase of the average liquidity flow for the loans that are present. However, the novelty of our results lies in two important facts: i) the time varying fitness model accounts explicitly for bank specific effects and thus provides a measure of the impact of EONIA rates decoupled from node specific effects; ii) our results leverage the full information available in the description of the eMID network, without the need to collapse the network matrices into a single statistic, as done in \cite{akram2010interbank} and \cite{brunetti2019interconnectedness}. Finally, concerning the link and weight persistence analysis, we complement the work of \cite{hatzopoulos2015quantifying} by highlighting the tendency of banks to form links whose weight is positively related to the weights at previous time steps. We call this aspect of the dynamics of interbank relations \textit{weight persistence}. The rest of the paper is organized as follows. The next section provides an overview of the main contributions proposed in the literature for the modeling of temporal weighted networks. Section~\ref{sec:SDGFM} defines a novel observation driven model for sparse and weighted temporal networks, generalizing the well-known fitness model for binary networks to the sparse weighted case and leveraging the score driven approach to time varying parameters. Section~\ref{sec:num_sim_dwfm} presents an extensive Monte Carlo analysis of the score driven generalized fitness model. Section~\ref{sec:emid_app} details the application of the new model to the eMID market.
Section~\ref{sec:conclusions} draws the main conclusions and provides some ideas for future research. \subsection{Literature Review}\label{sec:lit_rev} An econometric model for weighted temporal networks can be described in terms of a probability distribution $P\pt{{Y_{ij}^{\tonde{t}}}\vert {\mathbf{Y}}\e{t-1},\dots {\mathbf{Y}}\e{1}, {\mathbf{X}}_1\e{t}, \dots, {\mathbf{X}}_{K}\e{t}}$ that describes the probability of the weight ${Y_{ij}^{\tonde{t}}}$ of the link between nodes $i$ and $j$ at time $t$ as potentially depending on previous realizations of the network and on a set of contemporaneous, matrix valued, external variables ${\mathbf{X}}_1 \dots {\mathbf{X}}_K$. The external variables can be different for each link. Even disregarding the dependency on external variables, it is immediately evident that the matrix valued nature of network data makes the problem typically very high dimensional. Formally, observations of a weighted temporal network ${Y_{ij}^{\tonde{t}}}$ are no different from balanced panel data, where the cross sectional index $ij$ runs over all possible $N^2$ links. Thus, one could directly apply non-linear panel regression methods \citep[described, for example, in][]{wooldridge2010econometric} to the sequence of positive valued matrix observations and estimate, for example, the effect of external variables. Such a direct application would nevertheless disregard the network structure and the fact that link observations are strongly influenced by their association with specific nodes. Moreover, the high dimensional nature of the problem complicates the estimation and interpretation of link specific fixed effects. For these reasons, the direct approach of treating sequences of networks as standard panel data, possibly with lagged dependency, is not widespread in the literature, and restrictions to reduce the number of parameters to be estimated are used in most cases.
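The observation that a weighted temporal network is formally a balanced panel with $N^2$ cross-sectional units can be made concrete by flattening the $T \times N \times N$ array of snapshots into long format, one row per link-time observation (an illustrative helper, not taken from the paper):

```python
import numpy as np

def network_to_panel(Y):
    """Flatten a (T, N, N) array of network snapshots into long panel format.

    Returns an array with rows (t, i, j, Y_ij^t): the cross-sectional unit
    is the directed link (i, j), so there are N*N units observed T times.
    """
    T, N, _ = Y.shape
    t, i, j = np.meshgrid(np.arange(T), np.arange(N), np.arange(N), indexing="ij")
    return np.column_stack([t.ravel(), i.ravel(), j.ravel(), Y.ravel()])

Y = np.arange(2 * 3 * 3, dtype=float).reshape(2, 3, 3)
panel = network_to_panel(Y)  # 18 rows, one per (t, i, j) observation
```

In this long format, standard panel estimators apply directly, but they ignore that rows sharing a node index $i$ or $j$ are far from independent, which is the point made above.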
For example, \cite{giraitis2016estimating}, driven by system specific insights, select a set of network statistics $G_i\pt{{\mathbf{Y}}}$ and estimate the dependency of each link ${Y_{ij}^{\tonde{t}}}$ on $G\pt{{\mathbf{Y}^{\tonde{t-1}}}}$ by means of a Tobit model with few covariates, for each link. Moreover, they use a local-likelihood method to estimate time varying coefficients of the regression. The censored regression used in their paper has the downside of requiring a joint modeling of the presence of a link and of its weight. While this choice allows for straightforward estimates and builds on the well known Tobit regression, it models jointly the effect that a covariate has on the probability of observing a link and the one that it has on its expected weight. In this work we propose a more flexible method that, among other things, allows us to disentangle the probability of a link being present from its expected weight. Another widespread approach is that of dynamical latent space models \citep{kim2018review}, where a set of time varying parameters is associated with each node and, at each time, the probability of observing a link between two nodes depends on a measure of their distance in such a latent space. Latent space models have also been considered for sparse weighted networks \citep[for example in][]{sewell2015latent}, with the aim of inferring from the network's dynamics the positions of nodes in a latent space. The resulting embedding of the nodes is typically analyzed and compared with available metadata on the nodes. While very informative, the analysis of networks' embeddings is fundamentally different from the purpose of our work, as one of our main aims is to leverage the availability of data on external variables that are expected to be related with network dynamics and to estimate how much the latter depends on them.
Models that allow each one of the matrix elements ${Y_{ij}^{\tonde{t}}}$ to depend on each of the ${Y_{ij}}^{\tonde{t-1}}$ have also been considered in the literature. For example, \cite{billio2018bayesian} estimate a tensor regression (very similar to a VAR on $vec({\mathbf{Y}})$), with rank restrictions on the (huge) matrix of model's parameters. Differently from our work, they do not take the sparse nature of networks explicitly into account. \cite{doi:10.1080/02664763.2017.1357684} consider a penalized logistic auto-regression model for binary networks (essentially a logistic regression for each link, using all lagged matrix elements, and also products, with a lasso penalization). The same approach can in principle be extended to sparse weighted networks, also within the zero augmented framework adopted here. We conclude this literature review citing some contributions that are relevant to the present work, even if they do not address directly the temporal evolution of weighted networks. Among the many papers that address the issue of modeling static sparse weighted networks, \cite{cimini2014reconstructing} consider a combination of the fitness model and the gravity model of \cite{tinbergen1962shaping} to reconstruct sparse weighted financial networks. Even if they do not investigate temporal networks, their approach has proved to be very effective in modeling sparse weighted networks describing financial systems \citep[as discussed in][]{anand2015filling}. Another interesting recent contribution is~\cite{gandy2021compound}, where the authors propose a modeling framework for weighted network data based on the compound Poisson model. They incorporate the binary fitness model as a special case but use an additional set of fitness parameters to describe the distribution of the weights, in addition to the probability of a link being present. Finally, we mention three recent contributions related to our work that model binary temporal networks.
The first one is that of \cite{mazzarisi2020dynamic}, where the authors consider a model with time varying fitness and combine it with Discrete Auto-regressive Models \citep{jacobs1978discrete} to investigate link persistence in financial networks. Their approach is very much related to ours, as it extends the binary fitness model to the temporal domain by allowing the fitness to evolve in time. In their case the fitness follows a parameter driven dynamics. Moreover, they consider a mechanism to copy links from the past, and explore the possibility of decoupling the link persistence implied by this mechanism from the probability to form new connections captured by the fitness. The second one is the recent \cite{Williams22}, which introduces a non-Markovian model of binary temporal networks based on an extension of Discrete Autoregressive Models. Interestingly, the authors also consider a local likelihood estimation approach to take non-stationarity into account and infer time varying parameters. The third paper, anticipated in the Introduction, is~\cite{di2019score}, where we extend the ERGMs for binary networks -- a class which includes the fitness model -- to the temporal context by allowing its parameters to evolve in time. The updating equation of the time varying parameters follows an autoregressive process whose innovation corresponds to the scaled score of the observation distribution. We show how to exploit the model as an effective filter of misspecified dynamics and how to deal with cases where one does not have full knowledge of the likelihood function. The present paper owes much to the score-driven extension of the ERGM framework (SD-ERGM). Nonetheless, as will be clear in the next section, the generalization of the static fitness model to the temporal sparse weighted case following the steps of the SD-ERGM is far from trivial.
\section{Score Driven Generalized Fitness Model}\label{sec:SDGFM} In this work, we propose a model for sparse and weighted temporal networks that we refer to as the \textit{score driven generalized fitness model}. Our model combines the well known \textit{fitness model} -- first discussed in \cite{zermelo1929berechnung} and here appropriately extended to a weighted zero augmented version to handle the weights -- with the DCS approach to time varying parameter models \citep{creal2013generalized, harvey2013dynamic}. Before presenting the details of our model, we deem it appropriate to review some preliminary concepts on the binary fitness model and the score driven framework. \subsection{Preliminary Concepts} The first concept that we use in our work is the fitness model. Multiple variations of the fitness model have been considered in the literature \citep{Holland81anexponential, PhysRevLett.89.258702,PhysRevE.78.015101, chatterjee2011random, yan2016asymptotics, yan2019statistical}, and the same specification of the probability for binary, directed or undirected, networks is also known as the \textit{beta model} or \textit{configuration model}. Hereafter we will define the fitness model as a model for binary networks where, in the case of directed networks, two parameters are assigned to each node $i$, the in-fitness ${\overleftarrow{\binpar}}_i $ and the out-fitness $ {\overrightarrow{\binpar}}_i$, that describe the tendency of each node to form incoming and outgoing connections, respectively. Because the occurrence of a link is very likely influenced by both intrinsic and external factors, we focus in the following on a specification that allows the links' probability to depend also on external, possibly link specific, covariates \citep{yan2019statistical}.
Hence the probability of a link to exist is described by \begin{equation*} p_{ij} = \frac{1}{1+ e^{ - ( {\overleftarrow{\binpar}}_i + {\overrightarrow{\binpar}}_j + X_{ij} \beta )}}, \end{equation*} where $\beta$ is the coefficient associated with the link specific covariate ${\mathbf{X}}$. This model is very convenient because it allows for a parsimonious modeling of the matrix associated with a network, via node specific fitness. The second key ingredient of our approach is the DCS framework for time varying parameter models, introduced by~\cite{creal2013generalized} and~\cite{harvey2013dynamic}. In order to review it, let us consider a sequence of observations $\graffe{z^{\tonde{t}}}_{t=1}^T$, where each $z^{\tonde{t}} \in\mathbb{R}^M$, and a conditional probability density $P\tonde{z^{\tonde{t}}\vert f^{\tonde{t}}}$, that depends on a vector of time varying parameters $f^{\tonde{t}} \in \mathbb{R}^K$. Defining the score as $\nabla^{\tonde{t}} = \frac{\partial \log{P\tonde{z^{\tonde{t}}\vert f^{\tonde{t}}}}}{\partial f^{\tonde{t}\prime}}$, a score-driven model assumes that the time evolution of $f^{\tonde{t}}$ is ruled by the recursive relation \begin{equation} \label{eq:gasupdaterule} f^{\tonde{t+1}} = w +\boldsymbol{b} f^{\tonde{t}} + \boldsymbol{a} \boldsymbol{\mathcal{I}^{\tonde{t}}} \nabla^{\tonde{t}} , \\ \end{equation} where $w$, $\boldsymbol{a}$ and $\boldsymbol{b}$ are static parameters, $w$ being a $K$ dimensional vector and $\boldsymbol{a}$ and $\boldsymbol{b}$ being $K\times K$ matrices. $\boldsymbol{\mathcal{I}^{\tonde{t}}}$ is a $K\times K$ scaling matrix, often chosen to be the inverse of the square root of the Fisher information matrix associated with $P\tonde{z^{\tonde{t}}\vert f^{\tonde{t}}}$, i.e.
$\boldsymbol{\mathcal{I}^{\tonde{t}}} = \mathbb{E}\quadre{\frac{\partial \log{P\tonde{z^{\tonde{t}}\vert f^{\tonde{t}}}}}{\partial f^{\tonde{t}\prime}} \frac{\partial \log{P\tonde{z^{\tonde{t}}\vert f^{\tonde{t}}}}}{\partial f^{\tonde{t}\prime}}^\prime }^{-\frac{1}{2}}$. However, this is not the only possible specification and different choices for the scaling are discussed in~\cite{creal2013generalized}. A score driven model can be regarded both as a \textit{data generating process} (DGP) or as a filter of an unknown dynamics. In both cases, the most important feature of \eqref{eq:gasupdaterule} is the role of the score as the driver of the dynamics of $f^{\tonde{t}}$. The structure of the conditional observation density determines the score, from which the dependence of $f^{\tonde{t+1}}$ on the vector of observations $z^{\tonde{t}}$ follows. When the model is viewed as a DGP, the update results in a stochastic dynamics thanks to the random occurrence of $z^{\tonde{t}}$. When the score-driven recursion is regarded as a filter, the update rule in \eqref{eq:gasupdaterule} is used to obtain a sequence of filtered parameters $\pg{\hat{f}^{\tonde{t}}}_{t=1}^T$. In this setting, the static parameters are estimated maximizing the log-likelihood of the whole sequence of observations \citep[for a detailed discussion, see ][]{harvey2013dynamic, blasques2014maximum}, \begin{equation*} \pt{\hat{w},\hat{\boldsymbol{b}},\hat{\boldsymbol{a}}} = \argmax{\pt{w,\boldsymbol{b},\boldsymbol{a}}} \sum_{t=1}^T \log{P\pt{ z^{\tonde{t}} \vert f^{\tonde{t}} \pt{w,\boldsymbol{b},\boldsymbol{a},\pg{z^{\pt{t^\prime}}}_{t^\prime=1}^{t-1}} }}. \end{equation*} Score driven models have seen an explosion of interest in recent years due to their flexibility and ease of estimation. Indeed, many state-of-the-art and widely popular econometric models turn out to belong to the family of score driven models.
Examples are the Generalized Autoregressive Conditional Heteroskedasticity (GARCH) model of \cite{BOLLERSLEV1986307}, the Exponential GARCH model of \cite{nelson1991conditional}, the Autoregressive Conditional Duration model of \cite{engle1998autoregressive}, and the Multiplicative Error Model of \cite{engle2002new}. Moreover, there are motivations, originating in information theory, for the optimality of the score-driven updating rule \cite{blasques2015information}. The introduction of this framework in its full generality has been investigated from a theoretical point of view \citep[][]{blasques2014maximum, blasques2015information, blasques2017finite, harvey2020modeling} and opened the way to applications in various contexts~\footnote{Please refer to \url{http://www.gasmodel.com/index.htm} for the updated collection of papers dealing with GAS models.}. \subsection{Definition of the Model}\label{sec:new_mod_def} In order to introduce the \textit{score driven weighted fitness model}, let us describe a sequence of networks with a set of random variables ${Y_{ij}^{\tonde{t}}}$, one for each link. We propose to use Zero Augmentation to model separately the probability of observing a link, ${Y_{ij}^{\tonde{t}}} > 0$, and the probability to observe a specific weight ${Y_{ij}^{\tonde{t}}}$. We prefer Zero Augmentation over censoring, as we believe the former to be much more flexible in the context of network data. With this choice the probability distribution for the link $ij$ is \begin{equation}\label{eq:zero_aug_w_nets_pdf} P\tonde{{Y_{ij}^{\tonde{t}}} = y} = \left\lbrace\begin{array}{ll} 1-p_{ij}^{\tonde{t}} \quad &for \quad y=0 \\ & \\ p_{ij}^{\tonde{t}} g_{ij}^{\tonde{t}}\tonde{y} \quad &for \quad y>0 \, . \end{array}\right. \\ \end{equation} where $ g_{ij}^{\tonde{t}}\tonde{y}$ is the distribution for the positive continuous weight for link $ij$, conditional on the presence of the link. 
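To make \eqref{eq:zero_aug_w_nets_pdf} concrete, the following sketch (our own illustration with hypothetical parameter values; the gamma choice for $g$ anticipates the specification adopted later in this section) draws zero augmented weights, with the gamma scale chosen so that the conditional mean of a present link is a free parameter:

```python
import numpy as np

def sample_za_weight(p, cond_mean, sigma, rng):
    """Draw from the zero augmented distribution: the link is absent
    (weight zero) with probability 1-p; otherwise the weight is gamma
    with shape sigma and scale cond_mean/sigma, so E[Y | Y>0] = cond_mean."""
    if rng.random() > p:
        return 0.0
    return rng.gamma(shape=sigma, scale=cond_mean / sigma)

rng = np.random.default_rng(1)
p, cond_mean, sigma = 0.3, 2.0, 1.5       # hypothetical illustrative values
draws = np.array([sample_za_weight(p, cond_mean, sigma, rng)
                  for _ in range(20000)])
frac_links = (draws > 0).mean()           # close to p
mean_weight = draws[draws > 0].mean()     # close to cond_mean
```

The two summary statistics recover, up to sampling noise, the link probability and the conditional mean weight, which the model controls separately, in contrast with a censored (Tobit) specification.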
We then model the binary temporal network and its weights by means of time varying fitness, allowing also for the dependency on external covariates $X_{ij}$. In our model the probability of a link to exist is described by \begin{equation}\label{eq:gen_fit_bin_prob} p_{ij}^{\tonde{t}} = \frac{1}{1+ e^{ - ( {\overleftarrow{\binpar}}_i^{\tonde{t}} + {\overrightarrow{\binpar}}_j^{\tonde{t}} + X^{\tonde{t}}_{{ij}} \beta_{bin} )}}, \end{equation} where, for simplicity, we consider only one external covariate ${X_{ij}^{\tonde{t}}}$ with its own associated parameter $\beta_{bin}$, while in general nothing prevents us from having multiple covariates. With this choice, the log-likelihood for the single observation $\pt{{\mathbf{Y}}^{\tonde{t}}}$ in \eqref{eq:zero_aug_w_nets_pdf} is \begin{equation*} \begin{aligned} \log{P\tonde{{\mathbf{Y}}^{\tonde{t}} \vert {\overleftarrow{\binpar}}^{\tonde{t}}, {\overrightarrow{\binpar}}^{\tonde{t}}, \beta_{bin}, \beta_{w}, {\mathbf{X}}^{\tonde{t}}}} &= \sum_{ij} \pt{\mathbb{I}\pt{Y_{ij}^{\tonde{t}}} - 1} \pt{{\overleftarrow{\binpar}}_i^{\tonde{t}} + {\overrightarrow{\binpar}}_j^{\tonde{t}} + \beta_{bin} X^{\tonde{t}}_{ij}} \\ &- \log{\tonde{1 + e^{-\pt{{\overleftarrow{\binpar}}_i^{\tonde{t}} + {\overrightarrow{\binpar}}_j^{\tonde{t}} + \beta_{bin} X^{\tonde{t}}_{ij}}} }} + \mathbb{I}\pt{Y_{ij}^{\tonde{t}}} \log{g_{ij}^{\tonde{t}}} \\ &=\sum_i {\overleftarrow{\bindeg}}_i {\overleftarrow{\binpar}}_i^{\tonde{t}} + {\overrightarrow{\bindeg}}_i {\overrightarrow{\binpar}}_i^{\tonde{t}} \\ & + \sum_{ij} \mathbb{I}\pt{Y_{ij}^{\tonde{t}}} \beta_{bin} X^{\tonde{t}}_{ij} - \pt{{\overleftarrow{\binpar}}_i^{\tonde{t}} + {\overrightarrow{\binpar}}_j^{\tonde{t}} + \beta_{bin} X^{\tonde{t}}_{ij}}\\ &- \log{\tonde{1 + e^{-\pt{{\overleftarrow{\binpar}}_i^{\tonde{t}} + {\overrightarrow{\binpar}}_j^{\tonde{t}} + \beta_{bin} X^{\tonde{t}}_{ij}}} }} + \mathbb{I}\pt{Y_{ij}^{\tonde{t}}} \log{g_{ij}^{\tonde{t}}}, \end{aligned} \end{equation*} where the indicator function
$\mathbb{I}$ is zero if its argument is less than or equal to zero and one otherwise. We denote the in and out degrees of vertex $i$ at time $t$ as ${\overleftarrow{\bindeg}}_i = \sum_j\mathbb{I}\pt{{Y_{ij}^{\tonde{t}}}}$ and ${\overrightarrow{\bindeg}}_i = \sum_j\mathbb{I}\pt{{Y_{ji}^{\tonde{t}}}}$, respectively. As in the standard fitness model for binary networks, the time varying \textit{binary fitness} parameters $\pt{{\overleftarrow{\binpar}}^{\tonde{t}}, {\overrightarrow{\binpar}}^{\tonde{t}}}$ describe the tendency of nodes at time $t$ to form links, beyond what is explained by the external covariate ${\mathbf{X}}^{\tonde{t}}$. In order to model the weights of the observed links, we consider a generalized version of the fitness model where we associate with each node $i$, at time $t$, also the parameters $ {\overleftarrow{\wpar}}_i^{\tonde{t}}, {\overrightarrow{\wpar}}_i^{\tonde{t}}$, that we call \textit{weighted fitness}. They describe the propensity of a node to have heavier or lighter weights in incoming and outgoing links respectively, and are related to the distribution of the weights of present links $g_{ij}^{\tonde{t}}$ by \begin{equation}\label{eq:gen_fit_cond_exp} \mathbb{E}\pq{{Y_{ij}}^{\tonde{t}}\vert {Y_{ij}^{\tonde{t}}}>0 } = e^{\tonde{{\overleftarrow{\wpar}}_{i}^{\tonde{t}} + {\overrightarrow{\wpar}}_{j}^{\tonde{t}} + X_{ij}^{\tonde{t}} \beta_{w}}}, \end{equation} where we considered the dependency on a single external covariate ${X_{ij}^{\tonde{t}}}$ and indicated the associated regression coefficient with $\beta_{w}$, to distinguish it from the coefficient for the binary part in \eqref{eq:gen_fit_bin_prob}. This choice of linking the weighted fitness to the mean of the distribution $g$ provides dynamics and heterogeneity only to one parameter of the conditional distribution, as shown in the following with a concrete example.
The score driven weighted fitness model can be defined for a generic distribution $g$, for both continuous and discrete data, as we discuss in Appendix~\ref{sec:weight_distrib_SI}. Nevertheless in the following, for concreteness, we will focus on the gamma distribution to model links' weights \begin{equation*} g_{ij}\tonde{y} = \frac{\pt{\mu_{ij}}^{-\sigma} y\e{\sigma-1}}{\Gamma\pt{\sigma}} e^{-{\frac{y}{\mu_{ij}}}}\,, \end{equation*} where $\sigma$ is a positive constant. Given a sequence of observed weighted adjacency matrices $\graffe{{\mathbf{Y}^{\tonde{t}}}}_{t=1}^T$, we denote by $f^{\tonde{t}}$ a $K$ dimensional vector, where $K=4\times N$, containing all the time varying fitness parameters ${\overleftarrow{\binpar}}^{\tonde{t}}, {\overrightarrow{\binpar}}^{\tonde{t}}, {\overleftarrow{\wpar}}^{\tonde{t}}, {\overrightarrow{\wpar}}^{\tonde{t}}$. With this notation, the model's distribution takes the following form \begin{equation}\label{eq:gen_gamma_fit_mod} P\tonde{{Y_{ij}^{\tonde{t}}} = y \vert f^{\tonde{t}} , \beta_{bin}, \beta_{w}, \sigma } = \left\lbrace\begin{array}{ll} \frac{e^{ - ( {\overleftarrow{\binpar}}_i^{\tonde{t}} + {\overrightarrow{\binpar}}_j^{\tonde{t}} + X^{\tonde{t}}_{{ij}} \beta_{bin} )}}{1+ e^{ - ( {\overleftarrow{\binpar}}_i^{\tonde{t}} + {\overrightarrow{\binpar}}_j^{\tonde{t}} + X^{\tonde{t}}_{{ij}} \beta_{bin} )}} \quad & for \quad y=0 \\ \\ \frac{ \pt{\mu_{ij}^{\tonde{t}}}^{-\sigma} \Gamma\pt{\sigma}^{-1} }{1+ e^{ - ( {\overleftarrow{\binpar}}_i^{\tonde{t}} + {\overrightarrow{\binpar}}_j^{\tonde{t}} + X^{\tonde{t}}_{{ij}} \beta_{bin} )}} y\e{\sigma-1} e^{-{\frac{y}{\mu_{ij}^{\tonde{t}}}}} \quad &for \quad y>0 \, . \end{array}\right. 
\end{equation} with $$\mu_{ij}^{\tonde{t}} = \sigma^{-1} e^{\tonde{{\overleftarrow{\wpar}}_{i}^{\tonde{t}} + {\overrightarrow{\wpar}}_{j}^{\tonde{t}} + X_{ij}^{\tonde{t}} \beta_{w}}}.$$ We let the fitness, both binary and weighted, evolve in time, following the score-driven recursive update rule in \eqref{eq:gasupdaterule}, that in this case takes the form \begin{equation} \label{eq:fit_sd_update} f^{\tonde{t+1}} = w +b f^{\tonde{t}} + a \boldsymbol{\mathcal{I}^{\tonde{t}}} \frac{\partial \log{P\tonde{{\mathbf{Y}^{\tonde{t}}}\vert f^{\tonde{t}}}}}{\partial f^{\tonde{t}\prime}} , \\ \end{equation} where $w$, $a$ and $b$ are three $K$ dimensional vectors of static parameters~\footnote{Hence, in our definition we have three static parameters for each time varying fitness. While in the general formulation of score driven models \citep[as given, for example, in][]{creal2013generalized}, the parameters $b$ and $a$ are defined as matrices, it is very common to impose some restriction on them in order to limit the number of parameters to be estimated, as we do here.}. $\boldsymbol{\mathcal{I}^{\tonde{t}}}$ is a $K\times K$ scaling matrix. Hence, conditionally on the value of the parameters $f^{\tonde{t}}$ at time $t$ and the observed adjacency matrix ${\mathbf{Y}}^{\tonde{t}}$, the parameters at time $t+1$ are deterministic. 
The element $k$ of the score for the in and out binary fitness takes the following form \begin{align}\label{eq:score_fit_bin} \frac{\partial \log{P\tonde{{\mathbf{Y}} \vert f^{\tonde{t}}, \beta_{bin}, \beta_{w}, {\mathbf{X}}^{\tonde{t}}}} }{\partial {\overleftarrow{\binpar}}^{\tonde{t}\prime}_k} &= {\overleftarrow{\bindeg}}_k - \sum_{j} \frac{1}{1+ e^{ - ( {\overleftarrow{\binpar}}_k^{\tonde{t}} + {\overrightarrow{\binpar}}_j^{\tonde{t}} + X^{\tonde{t}}_{{kj}} \beta_{bin} )}} \nonumber \\ \nonumber \\ \frac{\partial \log{P\tonde{{\mathbf{Y}} \vert f^{\tonde{t}}, \beta_{bin}, \beta_{w}, {\mathbf{X}}^{\tonde{t}}}} }{\partial {\overrightarrow{\binpar}}^{\tonde{t}\prime}_k} &= {\overrightarrow{\bindeg}}_k - \sum_{i} \frac{1}{1+ e^{ - ( {\overleftarrow{\binpar}}_i^{\tonde{t}} + {\overrightarrow{\binpar}}_k^{\tonde{t}} + X^{\tonde{t}}_{{ik}} \beta_{bin} )}} \end{align} and does not depend on the choice of $g$. The element $k$ of the score for the weighted in and out fitness are \begin{align}\label{eq:score_fit_w} \frac{\partial \log{P\tonde{{\mathbf{Y}} \vert f^{\tonde{t}}, \beta_{bin}, \beta_{w}, {\mathbf{X}}^{\tonde{t}}}}}{\partial {\overleftarrow{\wpar}}^{\tonde{t}\prime}_k} &= \sum_{j} \mathbb{I}\pt{Y_{kj}^{\tonde{t}}} \frac{\partial \log g_{kj}}{\partial {\overleftarrow{\wpar}}_k} = \sum_j \mathbb{I}\pt{Y_{kj}^{\tonde{t}}} \pt{\frac{Y^{\tonde{t}}_{kj}}{\mu^{\tonde{t}}_{kj}} - \sigma} \nonumber \\ \nonumber \\ \frac{\partial\log{P\tonde{{\mathbf{Y}} \vert f^{\tonde{t}}, \beta_{bin}, \beta_{w}, {\mathbf{X}}^{\tonde{t}}}}}{\partial {\overrightarrow{\wpar}}^{\tonde{t}\prime}_k} &= \sum_{i} \mathbb{I}\pt{Y_{ik}^{\tonde{t}}} \frac{\partial \log g_{ik}}{\partial {\overrightarrow{\wpar}}_k} = \sum_i \mathbb{I}\pt{Y_{ik}^{\tonde{t}}} \pt{\frac{Y^{\tonde{t}}_{ik}}{\mu^{\tonde{t}}_{ik}} - \sigma} . \end{align} In \eqref{eq:score_fit_bin} and \eqref{eq:score_fit_w}, as scaling matrix we use the Hessian of the log-likelihood. 
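As a concrete check of these expressions, the following sketch (our own array-based illustration, no covariates, self loops excluded; `phi` denotes binary and `eta` weighted fitness, and the matrix passed in is assumed to have a zero diagonal) evaluates the four score blocks for a toy two-node network:

```python
import numpy as np

def za_gamma_scores(Y, phi_in, phi_out, eta_in, eta_out, sigma):
    """Scores of the ZA gamma fitness likelihood without covariates:
    binary scores are observed minus expected degrees, weighted scores
    sum I(Y>0) * (Y/mu - sigma) over the relevant row or column."""
    A = (Y > 0).astype(float)
    P = 1.0 / (1.0 + np.exp(-(phi_in[:, None] + phi_out[None, :])))
    np.fill_diagonal(P, 0.0)                       # no self loops
    mu = np.exp(eta_in[:, None] + eta_out[None, :]) / sigma
    resid = np.where(A > 0, Y / mu - sigma, 0.0)
    return (A.sum(axis=1) - P.sum(axis=1),         # score of in binary fitness
            A.sum(axis=0) - P.sum(axis=0),         # score of out binary fitness
            resid.sum(axis=1),                     # score of in weighted fitness
            resid.sum(axis=0))                     # score of out weighted fitness

Y = np.array([[0.0, 2.0], [3.0, 0.0]])
z = np.zeros(2)
s_bin_in, s_bin_out, s_w_in, s_w_out = za_gamma_scores(Y, z, z, z, z, sigma=1.0)
```

With all fitness at zero, each link probability is $1/2$ and $\mu_{ij}=1$, so the binary scores are the degrees minus one half and the weighted scores are the sums of $Y_{ij}-1$ over present links.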
As for any score driven model, our model can be regarded both as a DGP and as a filter of a misspecified dynamics. When used as a DGP, it describes a stochastic dynamics because, at each time $t$, the adjacency matrix is not known in advance. It is randomly sampled from $P\tonde{{\mathbf{Y}^{\tonde{t}}}\vert f^{\tonde{t}}}$ and then used to compute the score that, as a consequence, becomes itself stochastic. When the sequence of networks $\graffe{{\mathbf{Y}^{\tonde{t}}}}_{t=1}^T$ is observed, and the model is applied as a filter of the time varying parameters, the static parameters $\tonde{w,b,a}$ that best fit the data can be estimated maximizing the log-likelihood of the whole time series. Taking into account that each network ${\mathbf{Y}^{\tonde{t}}}$ is independent of all the others \textit{conditionally} on the value of $f^{\tonde{t}}$, the log-likelihood can be written as \begin{equation}\label{eq:sd_fit_mod_likelihood} \log{P\tonde{\graffe{{\mathbf{Y}^{\tonde{t}}}}_{t=1}^T \vert w,b, a}} = \sum_{t=1}^T \log{P\tonde{ {\mathbf{Y}^{\tonde{t}}} \vert f^{\tonde{t}}\tonde{w, b, a,\graffe{{\mathbf{Y}}^{\tonde{t^\prime}}}_{t^\prime=1}^{t-1}} }}. \end{equation} Then the filtered time varying fitness $\widehat{f}^{\tonde{t}}$ are obtained by an iterative application of \eqref{eq:fit_sd_update}, using as parameter values the maximum likelihood estimates (MLEs). For ease of exposition, so far we have introduced a baseline version of our model where the probabilities depend on a single external covariate through the scalars $\beta_{bin}$ and $\beta_w$. This implies a uniform, i.e. equal across all links, dependency on the covariates. In order to explore node specific dependency on the external variables, we could easily consider the specification $X_{ij} \pt{\beta_{bin_i} + \beta_{bin_j}}$, where we associate two covariate coefficients with each node, one for outgoing links and one for incoming links.
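Both uses of the model rest on the same forward pass: run the score driven recursion and, for estimation, accumulate the conditional log density of each observation as in \eqref{eq:sd_fit_mod_likelihood}. A minimal univariate Gaussian sketch (our own illustration with hypothetical parameter values, not the network model itself) makes the mechanics explicit; with $b=1$ and $w=0$ the scaled-score update reduces to an exponentially weighted moving average of the observations:

```python
import numpy as np

def filter_and_loglik(z, w, b, a, sigma2=1.0, f0=0.0):
    """Score driven recursion for z_t ~ N(f_t, sigma2): the score is
    (z_t - f_t)/sigma2 and scaling by the inverse square root of the
    Fisher information (1/sigma2) gives the update
    f_{t+1} = w + b*f_t + a*(z_t - f_t)/sqrt(sigma2).  Each observation
    contributes its conditional log density before the update."""
    f, ll, path = f0, 0.0, []
    for zt in z:
        ll += -0.5 * np.log(2.0 * np.pi * sigma2) - 0.5 * (zt - f) ** 2 / sigma2
        f = w + b * f + a * (zt - f) / np.sqrt(sigma2)
        path.append(f)
    return np.array(path), ll

rng = np.random.default_rng(3)
z = rng.normal(loc=1.0, size=400)        # constant true mean, far from f0 = 0
path, ll_adaptive = filter_and_loglik(z, w=0.0, b=1.0, a=0.1)
_, ll_frozen = filter_and_loglik(z, w=0.0, b=1.0, a=0.0)  # parameter stuck at f0
```

The adaptive filter drifts toward the true mean and attains a higher sequence log-likelihood than the frozen one; maximizing the returned log-likelihood over $(w,b,a)$, e.g. with a generic numerical optimizer, yields the MLEs used to produce the filtered paths.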
\subsubsection{Poisson Distribution for Discrete Weights and Inference from Partial Information} As mentioned before, our method is not restricted to a specific distribution for the weights and we can easily consider a different specification for $g_{ij}$ in \eqref{eq:zero_aug_w_nets_pdf}, instead of the gamma distribution. For example, in some settings it might be more appropriate to model weights as discrete variables. In this section we show how we can describe discrete weights by means of the Poisson distribution, defined as \begin{equation*} g_{ij}\tonde{y} = \frac{ \mu_{ij}^{y} e^{-\mu_{ij}} }{ y!}\,~~~~~y\in {\mathbb N}. \end{equation*} If we consider a specification without external covariates, hence where $\mu_{ij}^{\tonde{t}} = \sigma^{-1} e^{\tonde{{\overleftarrow{\wpar}}_{i}^{\tonde{t}} + {\overrightarrow{\wpar}}_{j}^{\tonde{t}}}}$, the log-likelihood takes the following form \begin{equation*} \begin{aligned} \log{P\tonde{{\mathbf{Y}}^{\tonde{t}} \vert f^{\tonde{t}}}} &=\sum_i {\overleftarrow{\bindeg}}_i^{\tonde{t}} {\overleftarrow{\binpar}}_i^{\tonde{t}} + {\overrightarrow{\bindeg}}_i^{\tonde{t}} {\overrightarrow{\binpar}}_i^{\tonde{t}} - \sum_{ij} \quadre{ {\overleftarrow{\binpar}}_i^{\tonde{t}} + {\overrightarrow{\binpar}}_j^{\tonde{t}} + \log{\tonde{1 + e^{-\pt{{\overleftarrow{\binpar}}_i^{\tonde{t}} + {\overrightarrow{\binpar}}_j^{\tonde{t}} }} }} } + \sum_{ij} \mathbb{I}\pt{{Y_{ij}^{\tonde{t}}}} \log{g_{ij}^{\tonde{t}}}\\ &=\sum_i {\overleftarrow{\bindeg}}_i^{\tonde{t}} {\overleftarrow{\binpar}}_i^{\tonde{t}} + {\overrightarrow{\bindeg}}_i^{\tonde{t}} {\overrightarrow{\binpar}}_i^{\tonde{t}} - \sum_{ij} \quadre{ {\overleftarrow{\binpar}}_i^{\tonde{t}} + {\overrightarrow{\binpar}}_j^{\tonde{t}} + \log{\tonde{1 + e^{-\pt{{\overleftarrow{\binpar}}_i^{\tonde{t}} + {\overrightarrow{\binpar}}_j^{\tonde{t}} }} }} } \\ &+ \sum_i \pt{{\overleftarrow{\wdeg}}_i^{\tonde{t}} {\overleftarrow{\wpar}}_i^{\tonde{t}} + {\overrightarrow{\wdeg}}_i^{\tonde{t}} {\overrightarrow{\wpar}}_i^{\tonde{t}} - {\overleftarrow{\bindeg}}_i^{\tonde{t}} e^{{\overleftarrow{\wpar}}_i^{\tonde{t}}} - {\overrightarrow{\bindeg}}_i^{\tonde{t}} e^{{\overrightarrow{\wpar}}_i^{\tonde{t}}}} - \sum_{ij} \mathbb{I}\pt{{Y_{ij}^{\tonde{t}}}} \log\pt{{Y_{ij}^{\tonde{t}}} !}, \end{aligned} \end{equation*} where we defined the in and out strengths as ${\overleftarrow{\wdeg}}_i^{\tonde{t}} = \sum_j{Y_{ij}^{\tonde{t}}}$ and ${\overrightarrow{\wdeg}}_i^{\tonde{t}} = \sum_j{Y_{ji}^{\tonde{t}}}$, respectively. Interestingly, the log-likelihood of the model based on the Poisson distribution depends only on the degrees $\pt{{\overleftarrow{\bindeg}}^{\tonde{t}}, {\overrightarrow{\bindeg}}^{\tonde{t}}}$ and strengths $\pt{{\overleftarrow{\wdeg}}^{\tonde{t}}, {\overrightarrow{\wdeg}}^{\tonde{t}}}$ and, apart from the last, inessential, additive term, does not depend on the full matrix ${\mathbf{Y}^{\tonde{t}}}$. The score of $\pt{\ibinpar , \obinpar }$ does not depend on the distribution of the weights and reads as in \eqref{eq:score_fit_bin}, while the element $k$ of the score for the weighted in and out fitness is \begin{align*} \frac{\partial \log{P\tonde{{\mathbf{Y}} \vert f^{\tonde{t}}}}}{\partial {\overleftarrow{\wpar}}^{\tonde{t}\prime}_k} &= {\overleftarrow{\wdeg}}_k^{\tonde{t}} - {\overleftarrow{\bindeg}}_k^{\tonde{t}} e^{{\overleftarrow{\wpar}}_k^{\tonde{t}}} \nonumber \\ \nonumber \\ \frac{\partial\log{P\tonde{{\mathbf{Y}} \vert f^{\tonde{t}}}}}{\partial {\overrightarrow{\wpar}}^{\tonde{t}\prime}_k} &= {\overrightarrow{\wdeg}}_k^{\tonde{t}} - {\overrightarrow{\bindeg}}_k^{\tonde{t}} e^{{\overrightarrow{\wpar}}_k^{\tonde{t}}} . \end{align*} Hence we could use the score driven update and estimate the model relying only on the sequences of strengths and degrees, ignoring the full matrix ${\mathbf{Y}^{\tonde{t}}}$. This has interesting implications as it allows for a parsimonious implementation of the inference procedure.
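The compression just described is immediate to implement: following the score expressions reported above, the weighted fitness updates need only the degree and strength sequences, which can be computed once and stored in place of the full matrices. A sketch (our own naming conventions) for a toy two-node network:

```python
import numpy as np

def poisson_weighted_scores(d_in, d_out, s_in, s_out, eta_in, eta_out):
    """Weighted fitness scores for the Poisson specification, computed
    from degrees and strengths alone, as in the expressions above."""
    return s_in - d_in * np.exp(eta_in), s_out - d_out * np.exp(eta_out)

# degrees and strengths can be pre-computed once from the matrix ...
Y = np.array([[0, 2], [3, 0]])
d_in, d_out = (Y > 0).sum(axis=1), (Y > 0).sum(axis=0)
s_in, s_out = Y.sum(axis=1), Y.sum(axis=0)
# ... after which the full matrix Y is no longer needed by the filter
score_in, score_out = poisson_weighted_scores(
    d_in, d_out, s_in, s_out, np.zeros(2), np.zeros(2))
```

With all weighted fitness at zero, each score is simply the strength minus the degree of the corresponding node.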
Indeed, the observations can be compressed from a sequence of $T$ matrices, each with $N\pt{N-1}$ elements~\footnote{Not considering the diagonal elements that would describe self-loops.}, to only $4$ sequences of vectors, each of length $N$, for a total of $4N$ elements per time step. More importantly, it is not uncommon in practice not to have access to the full adjacency matrix, but only to the degree and strength sequences. This circumstance is so relevant that it has stimulated an important stream of literature on the reconstruction of networks from partial information \citep{Mis11, anand2015filling, anand2017missing, gandy2017adjustable}. We do not further investigate this aspect in this paper but leave it for future research. \section{Numerical Simulations in Misspecified Settings}\label{sec:num_sim_dwfm} In this section we discuss the results of extensive numerical simulations~\footnote{The python code used for the simulations is available at \url{https://github.com/domenicodigangi/DynWGraphsPaper}} that we ran to evaluate the performance of the score driven weighted fitness model as a misspecified filter, i.e. when the true DGP of the simulated data is not the same as the score driven update rule. This is the typical situation in practical applications, where the true DGP is unknown. When modeling a sequence of observed weighted networks with time varying fitness as in Eq. \eqref{eq:gen_gamma_fit_mod}, we compare the score driven update rule with two simple alternative filters. The first one is to specify the fitness, both binary and weighted, as static. This amounts to considering a zero augmented generalized linear model, accounting for node specific effects by means of the fitness.
The probability of observing a link is that of the standard fitness model \eqref{eq:gen_fit_bin_prob}, where the parameters $\pt{\ibinpar , \obinpar }$ do not change across the time steps $t=1,\dots, T$, while the distribution of the weights depends on the constant weighted fitness $\pt{\iwpar , \owpar }$ such that \begin{equation*}\label{eq:gen_fit_cond_exp_const} \mathbb{E}\pq{{Y_{ij}}^{\tonde{t}}\vert {Y_{ij}^{\tonde{t}}}>0 } = e^{\tonde{{\overleftarrow{\wpar}}_{i} + {\overrightarrow{\wpar}}_{j} + X_{ij}^{\tonde{t}} \beta_{w}}}\,. \end{equation*} The second alternative consists in estimating a static fitness model for each observed snapshot, i.e. at each time $t$. This procedure results in a sequence of single snapshot estimates for the fitness, where clearly the number of parameters to be estimated grows with the number of networks observed. This sequence of single snapshot estimates provides a filter for the time varying fitness, and has already been discussed, for the binary case, in \cite{di2019score}. In the rest of this section we compare the score driven weighted fitness model, when appropriate, with these two alternatives in three numerical experiments. \subsection{Filtering Time Varying Fitness} In our first numerical experiment we focus on filtering fitness parameters that evolve in time following a DGP different from the score driven dynamics. Specifically, we assess how accurately the score driven methodology allows us to filter the fitness when no external covariates are present, and compare it with the sequence of single snapshot MLEs. Since the focus here is on filtering time varying fitness, we do not consider the alternative version with constant fitness. In \cite{di2019score} we already carried out a similar comparison for the binary part of the model and we showed that the sequence of cross sectional estimates clearly under-performs the score driven fitness model, both in numerical simulations and empirical applications.
Here we repeat a similar exercise, considering also the weighted fitness. Specifically, we analyze sequences of networks sampled from Eq. \eqref{eq:gen_gamma_fit_mod} where each fitness, both binary and weighted, evolves according to an auto-regressive process of order one, $f_i^{\tonde{t+1}} = b_0 + b_1 f_i^{\tonde{t}} + \epsilon^{\tonde{t}} $, where $\epsilon \sim N\pt{0, 0.1}, \; b_1 = 0.98$ and $b_0$ is chosen for each parameter in order to sample networks that resemble a real world realization. We also consider one example of a deterministic DGP where the fitness evolve in time following a sine function, as shown in Figure \ref{fig:dgp_sin}. In practice, in order to obtain realistic parameter values for the DGPs, we first estimate the fitness models, both binary and weighted, on the first observation available for the temporal interbank network data that we describe in Section \ref{sec:app_eonia}~\footnote{The numerical values for the fitness used are available at \url{https://github.com/domenicodigangi/dynwgraphs}. We have checked that similar results hold true using different numerical values for the fitness parameters.}. Then, we sample time series of $150$ observations from the AR(1) processes for each fitness independently. We use the simulated fitness paths as parameters to sample from the model specification in Eq. \eqref{eq:gen_gamma_fit_mod}. Using only the sequence of sampled matrices as input, we filter the time varying evolution of the fitness with the two methods. For each fitness we compute the mean squared error (MSE) over the time series and then average it across nodes, keeping the MSE for binary and weighted fitness separate. Finally, we repeat the sampling and filtering procedure $50$ times and average the obtained MSE across the repetitions. \begin{figure}[h!] \centering \includegraphics[width=0.49\linewidth ]{./missp_filt_w_fit_sin.png} \caption{Filtered paths for one of the time varying weighted fitness parameters.
The path from the true DGP is in black. The blue dots are the single snapshot estimates, and the red lines the paths filtered with the score driven generalized fitness model.} \label{fig:dgp_sin} \end{figure} \begin{table}[ht] \centering \begin{tabular}{l|cc|cc} \rule{0pt}{12pt} DGP & \multicolumn{2}{c}{AR(1)} & \multicolumn{2}{c}{SIN} \\ \hline \rule{0pt}{12pt} Filter & SS & SD & SS & SD \\ \hline \hline \rule{0pt}{12pt} Avg. MSE $\pt{\ibinpar , \obinpar }$ & 0.36 & 0.11 & 0.66 & 0.04 \\ \hline \rule{0pt}{12pt} Avg. MSE $\pt{\iwpar , \owpar }$ & 0.54 & 0.25 & 0.6 & 0.18 \\ \end{tabular} \caption{Results from the first experiment: MSE of the filtered fitness averaged across all nodes, for both the single snapshot (SS) and score driven (SD) filters.} \label{tab:experiment_1} \end{table} The results, shown in Table \ref{tab:experiment_1}, confirm that, also for the weighted fitness, the score driven update rule is clearly a better choice in filtering misspecified paths for the time varying fitness. Having established that, as a misspecified filter of the fitness, the sequence of single snapshot estimates under-performs the score driven alternative also for the weighted fitness, let us mention a second limitation of the single snapshot approach. In the usual empirical setting, only one network realization is available at each time step. In such a situation, we might not be able to jointly estimate the sequence of single snapshot fitness and the coefficients $\beta_{bin}$, or $\beta_{w}$, due to the low number of observations per parameter. This is the case, for instance, if we consider as covariate a variable that is uniform across all links, i.e. $X^{\tonde{t}}_{ij} = x^{\tonde{t}}$ for all $i, j$, a case that we consider in the empirical application of Section \ref{sec:app_eonia}. An identification issue arises for the single snapshot estimates for both the binary and weighted parameters.
In fact, the probability of the sequence of observations, given the sequence of weighted fitness, remains unchanged under the following transformation \begin{align*} {\overleftarrow{\wpar}}_i^{\tonde{t}} &\rightarrow {\overleftarrow{\wpar}}_i^{\tonde{t}} + c_1 x^{\tonde{t}}, \quad \forall i \nonumber\\ {\overrightarrow{\wpar}}_i^{\tonde{t}} &\rightarrow {\overrightarrow{\wpar}}_i^{\tonde{t}} + c_2 x^{\tonde{t}}, \quad \forall i \nonumber \\ \beta_w &\rightarrow \beta_w + c_3 , \end{align*} for any choice of $(c_1, c_2, c_3)$ such that $c_1 + c_2 + c_3 = 0$, since for each $t$ such a transformation does not change the sum ${\overleftarrow{\wpar}}^{\tonde{t}}_i + {\overrightarrow{\wpar}}^{\tonde{t}}_j + \beta_w x^{\tonde{t}}$\footnote{The need to fix an identification condition in fitness models, even without external covariates, is well known, as we discuss in Appendix \ref{sec:identification_SI}. }. Hence, in the model specification with a uniform external covariate, we cannot identify the parameter of interest $\beta_w$, and this prevents us from using sequences of single snapshot estimators. With a simple change of notation we can see that the same issue arises for the binary parameters. We point out that the model with static fitness, which we use for comparison, and our score driven version do not suffer from this identification issue, as in both cases the number of static parameters to be estimated does not increase with the number of time steps observed. For instance, in the score driven model, the sequence of time varying parameters $\pt{{\overleftarrow{\wpar}}^{\tonde{t}}, {\overrightarrow{\wpar}}^{\tonde{t}}}$ for $t=1, \dots, T$, is not estimated directly but follows from the score driven update rule \eqref{eq:fit_sd_update}, which is uniquely identified given the sequence of observations and the static parameters $\tonde{w,b,a}$.
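The invariance above is easy to verify numerically. The following sketch (all variable names are hypothetical) checks that the pairwise linear predictor is unchanged by the transformation whenever $c_1 + c_2 + c_3 = 0$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                                   # number of nodes (toy size)
theta_out = rng.normal(size=n)          # out weighted fitness at time t
theta_in = rng.normal(size=n)           # in weighted fitness at time t
beta_w, x_t = 1.0, 0.7                  # coefficient and uniform covariate value

# Linear predictor theta_out_i + theta_in_j + beta_w * x_t for every pair (i, j)
pred = theta_out[:, None] + theta_in[None, :] + beta_w * x_t

# Shift fitness and coefficient with c1 + c2 + c3 = 0
c1, c2 = 0.4, -0.9
c3 = -(c1 + c2)
pred_shifted = ((theta_out + c1 * x_t)[:, None]
                + (theta_in + c2 * x_t)[None, :]
                + (beta_w + c3) * x_t)

# Predictor, hence likelihood, is identical: beta_w is not identified
assert np.allclose(pred, pred_shifted)
```

Since the single snapshot likelihood depends on the data only through this predictor, an estimator that treats each snapshot separately cannot pin down $\beta_w$.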
For this reason, in the rest of this section, we compare the score driven weighted fitness model only with the alternative having constant fitness. \subsection{External Covariates and Omitted Variables Misspecification}\label{sec:num_sim_dwfm_unobs} We present here two numerical experiments to assess how effective the score driven weighted fitness model can be in estimating the dependency of the network dynamics on external covariates. We show that, in synthetically generated datasets, introducing the binary and weighted time varying fitness reduces errors due to unobserved variables. Our second experiment is designed to show that the score driven model properly describes the effect of external covariates, even when the time varying fitness is generated by a misspecified DGP. Moreover, it highlights the importance of taking into account time varying node specific effects. In fact, assuming the fitness to be constant, when they are actually time varying, results in poor estimates of the dependency on external covariates. To show this, we consider samples from the model in Eq. \eqref{eq:gen_gamma_fit_mod}, where we let the fitness evolve with the AR(1) DGP of the previous experiment. We also assume that the network dynamics depends on the realization of two independent, predetermined, external variables, ${\mathbf{X}}_{bin}$ and ${\mathbf{X}}_w$. The first covariate enters the binary part of the DGP $$ p_{ij}^{\tonde{t}} = \frac{1}{1+ e^{ - ( {\overleftarrow{\binpar}}_i^{\tonde{t}} + {\overrightarrow{\binpar}}_j^{\tonde{t}} + X^{\tonde{t}}_{bin_{ij}} \beta_{bin} )}}, $$ and the second one influences the expected weights in Eq. \eqref{eq:gen_gamma_fit_mod} as follows $$ \mu_{ij}^{\tonde{t}} = \sigma^{-1} e^{\tonde{{\overleftarrow{\wpar}}_{i}^{\tonde{t}} + {\overrightarrow{\wpar}}_{j}^{\tonde{t}} + X_{w_{ij}}^{\tonde{t}} \beta_{w}}}. $$ We fix $\beta_{bin} = 1$, $\beta_{w} = 1$ and consider two possible specifications for the DGP of the synthetic external covariates.
In the first one, both external variables are scalar, $X^{\tonde{t}}_{bin_{ij}} = x_{bin}^{\tonde{t}}$ and $ X^{\tonde{t}}_{w_{ij}} = x_{w}^{\tonde{t}}$ for all $i, j$, and follow an AR(1) process equal to the one followed by the fitness in the previous example~\footnote{We set the parameter $b_0$ so that the unconditional mean of the AR(1) process is equal to $1$.}. In the second specification, we set $X^{\tonde{t}}_{bin_{ij}} = \mathbb{I}\pt{{Y_{ij}^{\tonde{t - 1}}}} $ and $ X^{\tonde{t}}_{w_{ij}} = \mathbb{I}\pt{{Y_{ij}^{\tonde{t - 1}}}} \log\pt{{Y_{ij}^{\tonde{t - 1}}}}$ for all $i, j$. The latter DGP, due to the explicit dependency of the network at time $t$ on its realization at the previous time $t-1$, simulates a temporal network with link persistence, i.e. a higher probability of observing a link at time $t$ if it was present at time $t-1$, and weight persistence, i.e. the weight at time $t$, if the link is present, is affected by its weight at time $t-1$. We sample 50 sequences of networks, and external covariates, each of 150 time steps. For each sampled sequence, we estimate the score driven weighted fitness model by maximizing the likelihood over the static parameters. We then compute the MSE between simulated and estimated values of $\beta_{bin}$ and $\beta_w$ across the sample. In order to test the effect of neglecting time varying effects, we repeat the same procedure for a version of the model with static fitness and report the results in Table \ref{tab:experiment_2}.
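As an illustration, one snapshot from a zero augmented DGP of this kind can be sampled as in the following sketch. All names are hypothetical, and the gamma parametrization (shape $\sigma$, scale $\mu/\sigma$) is one choice consistent with the stated conditional mean; the actual implementation may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sigma = 10, 2.0                       # nodes and gamma shape (toy values)
beta_bin, beta_w = 1.0, 1.0
th_bin_out, th_bin_in = rng.normal(size=n), rng.normal(size=n)  # binary fitness
th_w_out, th_w_in = rng.normal(size=n), rng.normal(size=n)      # weighted fitness
x_bin, x_w = rng.normal(size=(n, n)), rng.normal(size=(n, n))   # covariates

# Link probabilities: logistic in the binary fitness plus covariate effect
logit = th_bin_out[:, None] + th_bin_in[None, :] + beta_bin * x_bin
p = 1.0 / (1.0 + np.exp(-logit))

# Conditional mean of positive weights: exponential of the weighted predictor
mu = np.exp(th_w_out[:, None] + th_w_in[None, :] + beta_w * x_w)

# Zero augmented sample: Bernoulli links times gamma weights
# (shape sigma, scale mu / sigma, so that E[Y | Y > 0] = mu)
links = rng.random((n, n)) < p
weights = rng.gamma(shape=sigma, scale=mu / sigma)
Y = np.where(links, weights, 0.0)
np.fill_diagonal(Y, 0.0)                 # no self loops
```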
\begin{table}[ht] \centering \begin{tabular}{l|cc|cc} DGP & \multicolumn{2}{c|}{\rule{0pt}{15pt} AR(1) Scalar External Regressors } & \multicolumn{2}{c}{\rule{0pt}{15pt} Persistent links and weights } \\ \hline Filter \rule{0pt}{15pt} & Constant Fitness & SD Fitness & Constant Fitness & SD Fitness \\ \hline \hline \rule{0pt}{12pt} MSE $\beta_{bin}$ & 0.14 & 0.06 & 0.23 & 0.02 \\ \hline \rule{0pt}{12pt} MSE $\beta_w$ & 0.34 & 0.02 & 0.18 & 0.015 \\ \end{tabular} \caption{Results for the second experiment: MSE of the estimated regression coefficients (average over 50 samples). The fitness dynamics follow an AR(1) process. Two columns on the left: external regressors are scalar and follow two AR(1) processes. Two columns on the right: regressors induce a persistent dynamics in both links and weights.} \label{tab:experiment_2} \end{table} It emerges clearly that not taking into account the dynamics of the fitness can severely deteriorate the estimation of the effect of external covariates. In order to motivate our third and last numerical experiment, let us mention that the topic of estimation errors due to omitted variables has been discussed widely in the econometric literature \citep[][]{greene2000econometric, barreto2006introductory, wooldridge2010econometric}. The standard approach to mitigate it consists of using control variables. This approach has some known downsides \citep[refer for instance to][]{griliches1977estimating, yatchew1985specification, clarke2005phantom} and, most importantly, it is not always viable since the data on appropriate controls might not be available. Indeed, this is very common for financial networks, such as the one that we consider in Section \ref{sec:app_eonia}, where the variables that one would like to use as controls are likely to be privacy protected and often unavailable to researchers.
Considering, for example, the case of interbank networks, we could very well expect the current leverage of a bank to have a significant influence on the decision to borrow or lend, i.e. to create interbank links. Nevertheless, it is very unlikely for this information to be available at the required frequency. This issue is even more frequent when the datasets are anonymous, and the identity of the nodes is not known. For these reasons, in the third experiment we show that allowing the fitness to vary in time is extremely beneficial in mitigating the errors due to omitted variables, at least in the context of controlled numerical simulations. We assume the network dynamics, both of the links and of the weights, to be determined by two external covariates. We then assume that, for whatever reason, only one of them is observed. Specifically, the considered DGP is similar to the one in Eq. \eqref{eq:gen_gamma_fit_mod} but with parameters defined as $$ p_{ij}^{\tonde{t}} = \frac{1}{1+ e^{ - ( \beta_{1,bin} x^{\tonde{t}}_{1} + \beta_{2,bin} x^{\tonde{t}}_{2} )}}, $$ and $$ \mu_{ij}^{\tonde{t}} = \sigma^{-1} e^{\tonde{ \beta_{1,w} x_{1}^{\tonde{t}} + \beta_{2, w} x_{2}^{\tonde{t}} }}. $$ We assume that $x_2^{\tonde{t}}$ is not observed and assess the effect of omitting it when estimating the coefficients $\beta_{1,bin}$ and $\beta_{1,w}$. We show that the score driven fitness compensates for the impact of the neglected variable on the estimates. The external variables $x_{1}^{\tonde{t}}$ and $x_{2}^{\tonde{t}}$ are independent and follow an AR(1) model with high persistence, $b_1 = 0.98$.
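The two covariate paths can be simulated as in the following sketch (hypothetical names; $b_0$ is set so that the unconditional mean $b_0/(1-b_1)$ equals $1$, as in the previous experiments):

```python
import numpy as np

def ar1_path(T, b0, b1, s, rng):
    """Sample x_{t+1} = b0 + b1 * x_t + eps_t, with eps_t ~ N(0, s^2)."""
    x = np.empty(T)
    x[0] = b0 / (1.0 - b1)               # start at the unconditional mean
    for t in range(T - 1):
        x[t + 1] = b0 + b1 * x[t] + s * rng.normal()
    return x

rng = np.random.default_rng(2)
T, b1 = 150, 0.98
x1 = ar1_path(T, b0=1.0 - b1, b1=b1, s=0.1, rng=rng)  # observed covariate
x2 = ar1_path(T, b0=1.0 - b1, b1=b1, s=0.1, rng=rng)  # omitted covariate

# Both covariates enter the true link probability; an estimate of beta_1
# that ignores x2 is biased unless something else (here, the time varying
# fitness) absorbs the neglected time variation.
beta1_bin, beta2_bin = 1.0, 1.0
p = 1.0 / (1.0 + np.exp(-(beta1_bin * x1 + beta2_bin * x2)))
```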
\begin{table}[ht] \centering \begin{tabular}{l|ccc} DGP & \multicolumn{3}{c}{\rule{0pt}{15pt} No Fitness and two regressors} \\ \hline Filter \rule{0pt}{15pt} & No Fitness & Constant Fitness & SD Fitness \\ \hline \hline \rule{0pt}{12pt} MSE $\beta_{1,bin}$ & 5.66 & 0.53 & 0.01 \\ \hline \rule{0pt}{12pt} MSE $\beta_{1,w}$ & 133 & 0.39 & 0.08 \\ \hline \end{tabular} \caption{Misspecified filtering of a DGP with two covariates and no fitness. First column: MSE for the estimates of $\beta_{1,bin}$ and $\beta_{1,w}$, when no fitness is used (average over 50 samples). Second column: MSE for estimates with constant fitness. Third column: MSE using a model with score driven fitness.} \label{tab:experiment_3} \end{table} We sample the DGP and estimate the coefficients $50$ times for a sequence of networks $150$ time steps long. We then compare the MSE for the estimates of the parameters $\beta_{1,bin}$ and $\beta_{1,w}$, obtained using three different specifications of the model in \eqref{eq:gen_gamma_fit_mod}: the first without node heterogeneity, hence no fitness, where the probability of observing a link depends only on the observed covariate; the second with constant fitness; and the third with score driven fitness. All three specifications use the observed external covariate. From the results that we report in Table \ref{tab:experiment_3}, it follows that considering a model with time varying fitness is extremely beneficial when the DGP is misspecified and does not feature node specific effects. We believe that the results presented in this section strongly support the choice to describe temporal weighted networks by means of time varying fitness. Moreover, they give us clear insights for interpreting the results of Section \ref{sec:emid_app}, where we find a clear advantage, in terms of goodness of fit, in using the score driven weighted fitness model to describe the empirical data.
\section{Link and Weight Dynamics in the Italian e-MID}\label{sec:emid_app} We apply our model to the interbank overnight loans market, described as a temporal weighted network. Interbank markets are an important point of encounter for banks' supply and demand of extra liquidity, and have received much attention in the literature \citep[see][ for a review]{green2016overnight}. In particular, e-MID has been investigated in many papers \citep[see, for example][and references therein]{iori2008network,finger2013network,mazzarisi2020dynamic,barucca2018organization}. We use data from the e-MID, a market where banks can extend loans to one another for a specified term and/or collateral. Our dataset contains the list of all credit transactions in each day from June 6, 2009 to February 27, 2015. In our analysis, we investigate the interbank network of overnight loans, aggregated weekly. The standard approach in the literature to model temporal interbank networks is to disregard the size of the exposures and consider only the presence or absence of links, i.e. to consider only the binary network. Thanks to the flexibility of the score driven weighted fitness model, we are able to take into account and explicitly model the strength of the links. We consider a link from bank $j$ to bank $i$ as present at week $t$ if bank $j$ lent money overnight to bank $i$, and the associated weight as the total amount lent over that period. This results in a set of $T = 298 $ weekly aggregated weighted networks. For a detailed description of the dataset, we refer the reader to \cite{barucca2018organization}. \begin{figure}[h!] \centering \includegraphics[width=0.49\linewidth ]{./number_of_links.png} \includegraphics[width=0.49\linewidth ]{./avg_weight_mln.png} \caption{Left panel: number of links present in the data at each time step.
Right panel: average weight in Millions of Euro of the present links.} \label{fig:emid_stats} \end{figure} As is evident from the left panel of Figure \ref{fig:emid_stats}, the number of links in e-MID is significantly lower in the second half of the dataset. In particular, it started declining in $2011$, most likely as a consequence of the European sovereign debt crisis and of the ECB unconventional measures, namely the long term refinancing operations (LTROs) that took place on the 22nd of December 2011 and on the 29th of February 2012 (see \cite{barucca2018organization} for an in-depth discussion). Since the beginning of $2012$, the number of links has fluctuated around a new, lower level. As discussed in \cite{barucca2018organization}, the decreased number of links corresponds to a lower number of banks being active in the market. Neither the network density~\footnote{Number of links present divided by the number of possible links, given the number of active banks.} nor the average weight of present links has followed a similarly clear transition to a different level. \subsection{Link and Weight Prediction}\label{sec:weight_pred} As a first empirical application, we explore the possibility of using our approach to predict the presence of future links and the value of their weights. The problem of link prediction in temporal networks is extremely relevant in practical applications and has been discussed widely in the literature on binary networks \citep[][]{lu2011link, wang2015link, martinez2016survey, haghani2017systemic}. It can be defined in multiple ways depending on the context, the type of data at hand, and whether we want to predict the existence of a link in a partially observed network or the presence of a link in a future network. We focus on the latter case, which is often referred to as temporal link prediction.
On the other hand, the prediction of weights in weighted temporal networks has received much less attention so far, mainly due to the lack of models suited to describe both links and weights. For this reason, here we run an exercise using the full dataset described in the previous section and focus on forecasting the weights of the network at time $t+1$, using only a subset of the information available at time $t$. This means that, when forecasting observations at time $t+1$, we use only observations from $t-T_{train}$ to $t$ for training, extremes included. This is very much in line with what we did in \cite{di2019score}, with the important difference that there we only considered binary link prediction, i.e. the sole prediction of links' presence. For a detailed discussion of how to employ the binary component of our model for link prediction, we refer the reader to our previous work. Our first goal is to assess how effective a model with time varying fitness, as in Eq. \eqref{eq:gen_gamma_fit_mod}, can be in forecasting the weights and to benchmark the proposed score driven approach against an alternative model. To this end, we compare the forecasts obtained using two methods, both based on Eq. \eqref{eq:gen_gamma_fit_mod} and differing only in their dynamics for the fitness. The first approach uses the score driven model to forecast the fitness at time $t+1$ with the update rule of Eq. \eqref{eq:fit_sd_update}. This is very easy to do in practice since, as mentioned in Section \ref{sec:new_mod_def}, the score driven fitness at time $t+1$ are deterministic conditionally on the observations at time $t$. The second approach is a combination of the sequence of single snapshot estimates, described at the beginning of Section \ref{sec:num_sim_dwfm}, and a set of AR(1) models, one for each fitness. In the latter approach, we first obtain the sequence of single snapshot estimates on the training set and then use them to estimate an AR(1) process for each fitness.
Then, we use the estimated AR(1) models to forecast one value of the fitness at time $t+1$. Practically, we repeatedly estimate the two models on rolling windows of length $T_{train} = 100$ time steps and, for each training window, we forecast the first out-of-sample observation. We then compare the weights of present links at time $t+1$ with the expected values obtained from the two models and quantify the error by the mean squared error between the logarithms of observed and predicted weights $$ \text{MSE Log.} = \frac{\sum_{ij} \mathbb{I}\pt{{Y_{ij}}^{\tonde{t+1}}} \pt{\log\pt{\mathbb{E}\pq{{Y_{ij}} | \widehat{{\overleftarrow{\wpar}}}^{\tonde{t+1}} , \widehat{{\overrightarrow{\wpar}}}^{\tonde{t+1}}} } - \log \pt{{Y_{ij}}^{\tonde{t+1}}} }^2}{\sum_{ij} \mathbb{I}\pt{{Y_{ij}}^{\tonde{t+1}}}}, $$ where $\widehat{{\overleftarrow{\wpar}}}^{\tonde{t+1}} $ and $ \widehat{{\overrightarrow{\wpar}}}^{\tonde{t+1}}$ are the forecasts for the in and out weighted fitness, obtained using only observations up to time $t$. Similarly, we compute the mean absolute difference (MAD) $$ \text{MAD Log.} = \frac{\sum_{ij} \mathbb{I}\pt{{Y_{ij}}^{\tonde{t+1}}} \left\vert \log\pt{\mathbb{E}\pq{{Y_{ij}} | \widehat{{\overleftarrow{\wpar}}}^{\tonde{t+1}} , \widehat{{\overrightarrow{\wpar}}}^{\tonde{t+1}}} } - \log \pt{{Y_{ij}}^{\tonde{t+1}}} \right\vert }{\sum_{ij} \mathbb{I}\pt{{Y_{ij}}^{\tonde{t+1}}}}. $$ We compare the logarithms of predicted and observed weights because the distribution of observed weights is quite heterogeneous. It roughly spans five orders of magnitude, and directly comparing the weights would result in a measure of goodness of fit mainly describing the fit of the largest weights~\footnote{Similar results hold using measures of relative error for the comparison between predicted and observed weights.}.
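In code, both error measures reduce to averaging log differences over the links present at time $t+1$; a minimal sketch, with hypothetical inputs standing in for the observed matrix and the model forecasts:

```python
import numpy as np

def log_errors(Y_next, mu_hat):
    """MSE and MAD between logarithms of observed and predicted weights,
    restricted to the links present at time t + 1 (Y_next > 0)."""
    present = Y_next > 0
    d = np.log(mu_hat[present]) - np.log(Y_next[present])
    return np.mean(d ** 2), np.mean(np.abs(d))

# Hypothetical stand-ins for the observed weight matrix at t + 1 and the
# model's forecast of the conditional expected weights
rng = np.random.default_rng(3)
Y_next = np.where(rng.random((20, 20)) < 0.1, rng.lognormal(size=(20, 20)), 0.0)
mu_hat = rng.lognormal(size=(20, 20))
mse_log, mad_log = log_errors(Y_next, mu_hat)
```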
\begin{table}[ht] \centering \begin{tabular}{l|ccc} Method \rule{0pt}{15pt} & SD & SS - AR(1) & Diebold-Mariano (p-value) \\ \hline \rule{0pt}{12pt} MSE Log. & 0.859 & 0.882 & $1.73 \times 10^{-7}$ \\ \hline \rule{0pt}{12pt} MAD Log. & 0.726 & 0.737 & $1.21 \times 10^{-7}$\\ \hline \end{tabular} \caption{ Weight prediction exercise: MSE and MAD between the logarithms of observed and predicted weights. First and second columns: results from the score driven (SD) model and the approach based on single snapshot estimates and AR(1) processes ((SS) - AR(1)), respectively. Third column: p-values of a Diebold-Mariano test for the null hypothesis that the two forecasts are equivalent.} \label{tab:weights_prediction} \end{table} From the results reported in Table \ref{tab:weights_prediction} we conclude that, similarly to the binary case, the score driven approach is a better choice to predict the weights with respect to a prediction based on the sequence of single snapshot estimates, both in terms of MSE and MAD for the logarithms. The Diebold-Mariano \citep[][]{diebold2002comparing,harvey1997testing} test rejects the null hypothesis that the two forecasts are statistically equivalent. \subsubsection{Comparison With Link Specific Regressions} Among the models for weighted temporal networks reviewed in Section \ref{sec:lit_rev}, \cite{giraitis2016estimating} is the most relevant reference for the current work. The authors run a forecasting exercise on an interbank network of overnight loans. Differently from our case, they focus on the sterling interbank market, which is very dense and somewhat small when compared with the e-MID dataset. This is due to the structure of the UK interbank market that they consider. Indeed, the fraction of links present at every time step for the full temporal network is always above $40\%$, while in our data it is typically equal to $6\%$.
Furthermore, \cite{giraitis2016estimating} run a weight forecasting exercise restricted to the sub-network composed of the $4$ largest banks in the system, thus increasing the density of the actual considered network. They propose to model each link by means of a Tobit regression, estimating the parameters with a local likelihood approach. Aside from their methodological contributions, a key element of their work is the accurate selection of regressors to capture relevant network properties that are expected to influence the future network structure. They design a set of simple functions of the network at the previous time step and use them as regressors. Specifically, for the link going from bank $i$ to bank $j$, they consider as regressors the weight of that same link at the previous time step ({\it lagged}) plus the following quantities \begin{itemize} \item lagged total daily amount lent by $i$ to all other banks except $j$; \item lagged total daily amount borrowed by $j$ from all other banks except $i$; \item lagged total daily amount lent by $j$ to all other banks except $i$; \item lagged total daily amount borrowed by $i$ from all other banks except $j$; \item lagged total daily lending and borrowing not involving either $i$ or $j$. \end{itemize} Here the goal is to compare their approach to link forecasting with ours, in an application to the e-MID data introduced in the previous sections, and to investigate the reasons for the different performances in link and weight prediction. In order to guide our intuition, in the following we also consider a version of their regression based on a simple Zero Augmented approach instead of the censoring that defines the Tobit approach. For the sake of simplicity, in this third approach, we separately consider a logistic regression on the binary part and a linear regression for the weights, both estimated with a standard, non local, maximum likelihood approach.
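For concreteness, these regressors can be computed from the previous-period weight matrix as in the following sketch, where we adopt the (hypothetical) convention that entry $(a, b)$ holds the amount lent by bank $a$ to bank $b$:

```python
import numpy as np

def lagged_regressors(Y, i, j):
    """Regressors for the link i -> j from the previous-period weight
    matrix Y, with the convention Y[a, b] = amount lent by a to b."""
    lent = Y.sum(axis=1)       # total lent by each bank
    borrowed = Y.sum(axis=0)   # total borrowed by each bank
    # total activity involving i or j (rows/columns, minus double counting)
    inv_ij = (lent[i] + lent[j] + borrowed[i] + borrowed[j]
              - Y[i, j] - Y[j, i] - Y[i, i] - Y[j, j])
    return np.array([
        Y[i, j],                # lagged weight of the link itself
        lent[i] - Y[i, j],      # lent by i to all banks except j
        borrowed[j] - Y[i, j],  # borrowed by j from all banks except i
        lent[j] - Y[j, i],      # lent by j to all banks except i
        borrowed[i] - Y[j, i],  # borrowed by i from all banks except j
        Y.sum() - inv_ij,       # lending and borrowing not involving i or j
    ])
```

Stacking these vectors over time gives the design matrix of one link specific regression; note that $N(N-1)$ such regressions are needed for the full network.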
As before, we repeatedly estimate the models on rolling windows of length $T_{train}$ and report the results in Table \ref{tab:link_pred_regression_comparison}. \begin{table}[ht] \centering \begin{tabular}{l|rrrr} Model & $T_{train}$ & MSE Log. & MAD Log. & AUC \\ \hline \hline Localized Tobit & 100 & 2.351 & 1.067 & 0.714 \\ ZA Regression & 100 & 2.785 & 1.240 & 0.830 \\ Localized Tobit & 200 & 2.267 & 1.043 & 0.795 \\ ZA Regression & 200 & 2.912 & 1.282 & 0.867 \\ SD Generalized Fitness & 100 & 0.859 & 0.726 & 0.896 \\ \end{tabular} \caption{Results of the link and weights prediction exercise on e-MID data. We compare the Localized Tobit model of \cite{giraitis2016estimating} with a simple Zero Augmented regression that uses the same regressors. We also report the results of running the same exercise with the score driven generalized fitness model.} \label{tab:link_pred_regression_comparison} \end{table} It emerges clearly that the score driven generalized fitness model achieves better results in forecasting both links and weights, for the dataset that we considered, with respect to the localized Tobit regression of \cite{giraitis2016estimating}. We believe this result to be mainly driven by two factors. The first one is that the Tobit model uses censoring to assign a finite probability to zero observations, i.e. missing links. Thus, the probability of observing a link and the expected values of the weights are modelled by the same set of parameters. In contrast, the generalized score driven fitness model employs zero augmentation to separately model the probability of observing a link and the expected weight of present links.
Indeed, this intuition is confirmed by looking at the performances of the simple Zero Augmented regression, which significantly improves the prediction of the existence of links -- as assessed by the area under the curve (AUC) measure -- but achieves slightly worse performance in predicting the weights of existing links with respect to the local likelihood Tobit. We empirically found the performances of both models to deteriorate significantly for links that are rarely observed in the training set. In fact, as we show in Figure \ref{fig:density_train}, both the out-of-sample log-MSE and AUC deteriorate when computed for subsets of links with a low fraction of non zero observations in the training set. Interestingly, the log MSE of the ZA regression seems to be monotonically decreasing as the density in the training set increases, while that of the Tobit regression is not. We interpret this as an additional indication of the advantages of separating the modelling of links' presence from their expected weight. \begin{figure}[h!] \centering \includegraphics[width=0.9\linewidth ]{./pred_err_regr_methods.png} \caption{ Out-of-sample forecast accuracy as a function of the density in the training set for the models based on link specific regressions. Each line is obtained by averaging the log MSE (top panel) and the AUC (bottom panel) over rolling subsets of $1000$ links sorted by the fraction of non zero observations in the training sample. } \label{fig:density_train} \end{figure} The second factor is that approaches based on running one regression for each link are extremely over-parametrized. They require the estimation of a large number of parameters that scales with the number of links ($N\pt{N -1}$), as opposed to our approach, which requires the estimation of $6$ parameters per node, thus scaling as $N$.
While the idea of running a separate model for each link has clear advantages for parallelizing the execution, such a high number of parameters can result in overfitting and poor out-of-sample generalization, when the dataset at hand does not allow for a large number of time steps to be used in training. Indeed, we corroborate this intuition by noting that the results of both the localized Tobit and the Zero Augmented regression improve if we consider a training set of $200$ time steps instead of $100$ (see the third and fourth lines of Table \ref{tab:link_pred_regression_comparison}). \subsection{The Effect of Interest Rates on Interbank Lending}\label{sec:app_eonia} In this section, by means of the score driven model, we investigate the effect of interest rates on the dynamics of the interbank network data introduced above. To track average interest rates, we use the EONIA benchmark. EONIA ``\textit{is a measure of the effective interest rate prevailing in the Euro interbank overnight market. It is computed as a weighted average of the interest rates on unsecured overnight contracts on deposits denominated in Euro, as reported by a panel of contributing banks}"~\footnote{Definition from \url{https://stats.oecd.org/}.}. Intuitively, we expect that banks' funding rates and the topology of the interbank market are deeply related. This relation is of clear interest from the point of view of policymakers and has received much attention in the literature \citep[see, for instance,][]{akram2010interbank, iori2015bank, arciero2016measure, temizsoy2017network, brunetti2019interconnectedness}. Of particular relevance for the results discussed in this section is the work of \cite{akram2010interbank}, which investigated the effects of banks' characteristics and the conditions of the network as a whole on the interest rates that each bank faces on the interbank market.
They exploited a remarkable dataset, obtained from Norges Bank's real time gross settlement system, that allowed them to model bank specific interest rates as dependent on a set of variables and controls, including the overall market liquidity. From basic supply and demand reasoning, one would expect excess liquidity in the market to put negative pressure on interest rates on average, and indeed they found that interest rates tend to be lower when the overall liquidity available on the market is higher. A second relevant work for the purposes of this section is \cite{brunetti2019interconnectedness}, where the authors considered data on the e-MID interbank market for a period ranging from the beginning of 2006 to the end of 2012, focusing solely on the binary part of the daily temporal network of overnight loans. They computed various aggregated network statistics for each time step, thus obtaining one univariate time series for each statistic. Among other quantities, they computed the density of each network -- defined as the number of connections as a proportion of all possible connections -- and, using a standard linear regression, found it to be positively related to EONIA. In the following, we explore the impact of interest rates on the probability of observing each link and on the expected weight of observed links by applying the score driven weighted fitness model to the e-MID dataset, using the EONIA rate as the external covariate. For our estimates, we use a training set comprising the first $80\%$ of time steps and leave the last $20\%$ to assess goodness of fit out-of-sample. Moreover, similarly to what was done in the numerical simulations discussed in Section \ref{sec:num_sim_dwfm}, we compare the score driven time varying fitness with two alternative specifications: a model without fitness and one with constant fitness, as defined in Section \ref{sec:num_sim_dwfm}, both with EONIA as the only external variable.
We then compare their goodness of fit both in-sample and out-of-sample. In Table \ref{tab:emid_eonia_gof} we show the results, which clearly confirm the importance of including time varying fitness to improve goodness of fit. We report the Bayesian Information Criterion (BIC) for each model, computed separately for the likelihood of observing links and the likelihood of their weights, to compare goodness of fit in-sample. The binary and weighted parts of each model are also evaluated separately out of sample. We quantify out of sample accuracy in predicting links' presence by means of the AUC, while for the weights we compute the MSE of the logs of the weights, only for the present links. \begin{table}[ht] \centering \begin{tabular}{l|ccc} Model \rule{0pt}{15pt} & No Fitness & Constant Fitness & SD Fitness \\ \hline \hline \rule{0pt}{12pt} BIC Bin & $1.75 \times 10^6$& $0.53\times 10^6$ & $0.45 \times 10^6$ \\ \hline \rule{0pt}{12pt} BIC Weight & $1.90 \times 10^6$ & $1.46 \times 10^6$ & $1.46 \times 10^6$ \\ \hline \rule{0pt}{12pt} AUC - Test Set & 0.48 & 0.82 & 0.92 \\ \hline \rule{0pt}{12pt} MSE Log. - Test Set & 58.15 & 1.01 & 0.78 \\ \end{tabular} \caption{EONIA effect on e-MID: in sample BIC, for both the binary and weighted part of model (\ref{eq:gen_gamma_fit_mod}), AUC for the out of sample evaluation of the binary part, and the MSE of the logarithms for the out of sample evaluation of the weighted part. First and second columns: results for a model without and with constant fitness, respectively. Third column: score driven fitness.} \label{tab:emid_eonia_gof} \end{table} Since the model without fitness is clearly not a good fit for the data, we do not discuss it further, and in Table \ref{tab:emid_eonia_par} we report the estimated regression coefficients using the models with constant and score driven fitness.
\begin{table}[ht] \centering \begin{tabular}{l|cc} Model \rule{0pt}{15pt} & Constant Fitness & SD Fitness \\ \hline \hline \rule{0pt}{12pt} $\beta_{bin}$ & $0.69\pm 0.06$& $0.29\pm 0.05$\\ \hline \rule{0pt}{12pt} $\beta_{w}$ & $0.022\pm 0.029$ & $-0.13 \pm 0.02$ \\ \end{tabular} \caption{EONIA effect on e-MID. Estimates of the regression coefficients using models with constant or score driven fitness.} \label{tab:emid_eonia_par} \end{table} The model with score driven fitness is clearly the best fit for the data, both in sample and out of sample, as measured by the metrics reported in Table \ref{tab:emid_eonia_gof}. Moreover, the parameters estimated by the score driven fitness model are always statistically significant, while the $\beta_w$ estimated from the constant fitness model is not. We interpret this discrepancy as a sign that disregarding the fitness dynamics can also lead to misguided qualitative interpretations, consistent with the numerical results discussed in Section \ref{sec:num_sim_dwfm_unobs}. From the estimate $\beta_{bin} = 0.29\pm 0.05$, we can deduce that, in the considered period, the probability of observing a link in the network is positively related with the interest rates, hence the lowering of interest rates tends to reduce the overall market interconnectedness, even taking into account bank specific effects captured by the fitness. This result is consistent with the relation between network density and EONIA found in \cite{brunetti2019interconnectedness}, although the approach based on standard regression on aggregated network statistics is different from ours. In fact, thanks to the time varying binary fitness in our model, the estimated effect of EONIA is decoupled from bank specific effects that are instead accounted for by the fitness. Such a separation of bank specific effects from the impact of a covariate is not possible when considering the density of the whole network as done in \cite{brunetti2019interconnectedness}.
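For concreteness, the two out-of-sample metrics of Table \ref{tab:emid_eonia_gof} can be sketched as follows (an illustrative implementation of ours, assuming NumPy and SciPy; a fitted model would supply, for each pair of banks and each test-set time step, the predicted link probability as \texttt{scores} and the predicted conditional expected weight as \texttt{w\_pred}):

```python
import numpy as np
from scipy.stats import rankdata

def auc(y_true, scores):
    """Probability that a randomly chosen present link receives a higher
    score than a randomly chosen absent one (ties count one half),
    computed via the rank-sum (Mann-Whitney) statistic."""
    y_true = np.asarray(y_true, dtype=bool).ravel()
    ranks = rankdata(np.asarray(scores, dtype=float).ravel())
    n_pos, n_neg = y_true.sum(), (~y_true).sum()
    return (ranks[y_true].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def mse_log_weights(w_true, w_pred):
    """MSE of the logarithms of the weights, computed only over the links
    that are actually present (w_true > 0)."""
    w_true = np.asarray(w_true, dtype=float)
    w_pred = np.asarray(w_pred, dtype=float)
    present = w_true > 0
    err = np.log(w_true[present]) - np.log(w_pred[present])
    return np.mean(err**2)
```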
For what concerns the effect on the liquidity exchanged through the observed links, i.e. the links' weights, the estimated $\beta_{w} = -0.13 \pm 0.02$ indicates that the weight of the observed overnight loans is negatively related with the average interest rate in the market. Our result is consistent with the work~\cite{akram2010interbank} on the Norwegian interbank market, but our methodology allows us to explore a different aspect of the relation between liquidity and interest rates. In fact, that result concerns the relation between bank specific rates and aggregated liquidity, while we explore the relation between average rates on the market and the weight of present links, controlling for time varying bank specific effects by means of the time varying fitness. Thanks to zero augmentation and the separate modelling of links and weights, our finding is directly related to the average magnitude of the overnight loans that are actually present, more than to the total liquidity in the market. In summary, the data considered indicate that lower interest rates are related with a reduction of network interconnectedness but an increase of the average liquidity flow for the loans that are present. Finally, we mention that, as we show in Appendix \ref{sec:filtered_fitness_dyn}, if we do not include EONIA as external covariate the dynamical fitnesses tend to correlate with it. This corroborates the fact that the estimated coefficients are found to be statistically significant. We point out that our results on the relation between the dynamics of the e-MID interbank network and EONIA are obtained leveraging the full information available in the description of a temporal network as a temporal sequence of matrices, and considering the impact of the covariate both on the probability of each individual link and the expected weight of observed links.
Differently from \cite{akram2010interbank} and \cite{brunetti2019interconnectedness}, we do not need to collapse the matrices into a single network statistic to estimate the effects of external variables. We directly use matrix valued network data and, thanks to the time varying latent fitness parameters, we can decouple the impact of EONIA from unobserved time varying node specific effects. The advantage of using matrix valued network data will become even more evident in the next section, where we consider link specific covariates and carry out an analysis that would be impossible with standard regression methods on univariate network statistics. \subsection{Link and Weight Persistence} As a final application, we use our model to contribute to the literature on persistence in interbank networks \citep{weisbuch2000market, cocco2009lending, hatzopoulos2015quantifying, mazzarisi2020dynamic} by exploring both the persistence of links and that of the weights. The existence of privileged lending relations between pairs of banks is a well known phenomenon and is often referred to as \textit{preferential trading} \citep{weisbuch2000market}. Its motivation lies in the relevance of strong lending relationships between banks as a way to ease the monitoring of creditworthiness and limit the risk of counter-party default \citep{cocco2009lending}. The existence of preferential trading behaviours has been assessed quantitatively, in the case of binary networks, by means of statistical methods specifically developed for the purpose \citep{hatzopoulos2015quantifying}. Additionally, models for binary temporal networks have been proposed that explicitly take it into account \citep{mazzarisi2020dynamic}. In this section we exploit the flexibility of our model and estimate the effect of two predetermined covariates that are meant to capture the persistence of links and weights.
For what concerns link persistence of the binary network, we explore how the presence of a link at time $t-1$ influences the probability of observing a link at time $t$. That amounts to using $\mathbb{I}\pt{{Y_{ij}^{\tonde{t - 1}}}}$ as covariate in Eq. \eqref{eq:gen_fit_bin_prob}. To assess persistence in the links' weights, we estimate the effect of the weight of a link at $t-1$ in determining its weight at time $t$ by using $\log \pt{{Y_{ij}^{\tonde{t - 1}}}}$ as covariate in Eq. \eqref{eq:gen_fit_cond_exp}. Let us recall that we have numerically tested the possibility of estimating such effects on synthetically generated data in Section \ref{sec:num_sim_dwfm}. \begin{table}[ht] \centering \begin{tabular}{l|ccc} Model \rule{0pt}{15pt} & No Fitness & Constant Fitness & SD Fitness \\ \hline \hline \rule{0pt}{12pt} $\beta_{bin}$ & $0.064 \pm 0.038$& $2.875\pm 0.045$ & $2.048 \pm 0.045$ \\ \hline \rule{0pt}{12pt} $\beta_{w}$ & $1.050 \pm 0.003 $ & $0.073\pm 0.002$& $0.064\pm 0.002$ \\ \hline \hline \rule{0pt}{12pt} BIC Bin & $5.66 \times 10^6$& $4.46\times 10^6$ & $4.16 \times 10^6$ \\ \hline \rule{0pt}{12pt} BIC Weight & $2.61 \times 10^6$ & $1.45 \times 10^6$ & $1.45 \times 10^6$ \\ \hline \rule{0pt}{12pt} AUC - Test Set &0.748 & 0.882 & 0.932 \\ \hline \rule{0pt}{12pt} MSE Log. - Test Set & 21.22 & 0.90 & 0.76\\ \end{tabular} \caption{ Results on estimates of link persistence in e-MID. One column for each of the three alternative model specifications. In the third and fourth rows, we show the in sample BIC for the binary and weighted parts of the model in \eqref{eq:gen_gamma_fit_mod}, respectively. The last two rows are out of sample measures of goodness of fit. The fifth one is the out of sample AUC for the binary part.
The last row is the out of sample MSE of the logarithms of the weights.} \label{tab:emid_persist} \end{table} As in the previous section, we compare three models: a model without fitness, one with constant fitness, and the score driven fitness model, all using the same external covariates and two scalar coefficients, $\beta_{bin}$ and $\beta_w$, which quantify the persistence of links and weights respectively. The results in Table \ref{tab:emid_persist} confirm that neglecting node specific time varying effects results in a worse fit of the data. This is evident from the superior performance, both in-sample and out-of-sample, of the model with score driven fitness with respect to those without or with constant fitness. The three model specifications all result in positive coefficients both for the binary and the weighted covariates. With the best performing model among the three, the model with score driven fitness, we estimate $\beta_{bin} = 2.048 \pm 0.045$. This indicates that globally the presence of a link at time $t-1$ positively impacts the probability of observing that same link at time $t$. This is in agreement with the current consensus in the literature supporting preferential trading behaviours in e-MID, which has been validated empirically only on the binary part of temporal interbank networks, for example by \cite{hatzopoulos2015quantifying} and \cite{mazzarisi2020dynamic}. The novel aspect of our analysis lies in the estimated $\beta_{w} = 0.064 \pm 0.002$, which highlights a weight persistence effect. This result complements the analysis of \cite{hatzopoulos2015quantifying}, as they considered the weighted networks of the number of loans between each pair of banks, neglecting altogether the amount lent for each loan. By design, our model allows us to highlight the tendency of banks to form links whose weight is positively related with the weight at previous steps, a tendency that we might refer to as weight persistence.
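The two persistence covariates are straightforward to build from the sequence of weight matrices. The sketch below is ours and purely illustrative: given a $(T, N, N)$ array of weighted adjacency matrices, it constructs the lagged indicator used in the binary part and the lagged log weight used in the weighted part; since the covariates at $t=0$ are undefined, the returned arrays are aligned with $t \geq 1$.

```python
import numpy as np

def persistence_covariates(Y):
    """Y: array of shape (T, N, N), Y[t, i, j] = weight of link i -> j at t.
    Returns (x_bin, x_w), each of shape (T-1, N, N), aligned with t >= 1:
      x_bin[t-1] = 1{Y[t-1] > 0}   (link-persistence covariate)
      x_w[t-1]   = log Y[t-1] where a link was present, 0 otherwise
                                   (weight-persistence covariate)
    """
    Y = np.asarray(Y, dtype=float)
    prev = Y[:-1]
    present = prev > 0
    x_bin = present.astype(float)
    # avoid log(0): fill absent entries with 1 before taking logarithms
    x_w = np.where(present, np.log(np.where(present, prev, 1.0)), 0.0)
    return x_bin, x_w
```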
\section{Conclusions}\label{sec:conclusions} In this work, we proposed a model for the description of sparse and weighted temporal networks that extends the well known fitness model for static binary networks. In the new score driven weighted fitness model, we also model links' weights with an additional set of fitnesses. Both binary and weighted fitnesses follow a stochastic dynamics driven by the score of the conditional likelihood. Additionally, we also considered the possibility for the network dynamics to depend on a set of external covariates. Our numerical simulations demonstrated the advantages of the score driven fitness over static fitness and over a sequence of standard cross sectional estimates. As an empirical application, we investigated the determinants of the dynamics of links and weights in the e-MID interbank network. We showed that there is a significant advantage in using score driven time varying parameters to forecast weights, with respect to single snapshot estimates. We exploited the flexibility of the new model to estimate the impact of the EONIA rate in determining the links and weights dynamics, and we used it to inform the discussion on persistence in interbank networks, providing empirical evidence of weights' persistence. We ran an empirical analysis on the prediction of links and weights that highlighted the superior performance of our approach with respect to alternatives based on single snapshot estimation and link specific regressions. Most notably, for the dataset that we considered, we found our model to attain clearly superior performance with respect to the local likelihood Tobit model by \cite{giraitis2016estimating}. In short, we believe this to be a direct consequence of our modelling choices, which allow for a flexible description with a moderate number of parameters.
Specifically, using zero augmentation we describe separately the probability of a link to exist and its expected weight, and leveraging the score driven approach we only need to estimate of order $N$ parameters instead of $N^2$. Our work provides several perspectives for future research. First, the possibility of jointly modelling and predicting the presence of a link and the associated weight could find relevant applications in the financial stability literature. Weighted financial networks are known to be among the determinants of systemic risk, and their dynamical description has so far neglected the role of the weights. Second, the score driven weighted fitness model could be applied to multiple instances of real world sparse and weighted temporal networks, where the standard approach of ignoring the weights might result in significant information loss. Finally, we plan to investigate the temporal evolution of the community structure of real world networks. Community detection has attracted an enormous amount of attention in various streams of literature \citep{javed2018community}, in particular in the context of temporal networks \citep{rossetti2018community}. We believe that the score driven fitness approach could provide a flexible modeling framework and offer valuable support to assess the degree of persistence of a given partition of nodes into groups. \newpage
\section{Introduction} \label{Sec_Intro} \indent The current paradigm in the description of the gravitational interaction is founded on Einstein's general relativity (GR), which describes gravity as a classical field theory for the space-time dynamics. The other known fundamental interactions are very well described in terms of quantum field theory (QFT), culminating in the standard model of particle physics. Combining gravity with the other fundamental interactions remains one of the most challenging tasks in theoretical physics. In particular, a completely (self-)consistent theory of quantum gravity is still missing. Since the space-time metric plays the role of a dynamical variable in GR, a direct approach entails a QFT treatment of the quantization of metric fluctuations around a fixed background \cite{Kiefer_book,Percacci_book}. This approach, sometimes referred to as covariant quantum gravity, was readily identified as a problematic QFT due to the appearance of ultraviolet (UV) divergences that could not be absorbed by standard (perturbative) renormalization techniques. This problem, however, should not be taken as a dead end for the covariant quantum gravity approach. \begin{itemize} \item The most immediate way out of this problem relies on the interpretation of this approach as an effective field theory (EFT) \cite{Donoghue_EFT_QG}, which provides a consistent framework for quantum gravity calculations valid below some cutoff scale $\Lambda_{\textmd{QG}}$. \item The problem of perturbatively non-renormalizable interactions can be circumvented by the inclusion of curvature squared terms in the action describing the gravitational dynamics \cite{Stelle}. This approach, however, seems to imply unitarity violation (and instabilities, at the classical level) due to the appearance of higher-derivative terms.
In the last few years, the interest in theories with higher curvature terms was renewed by some interesting ideas that might reconcile unitarity and (perturbative) renormalizability within this framework (see, for example, Refs. \cite{Holdom,Modesto,ModestoShapiro,Donoghue+Menezes,Anselmi}). \item Beyond the perturbative paradigm, the asymptotic safety program for quantum gravity \cite{Weinberg_ASQG,Reuter_PRD} has been investigated as a candidate for a consistent UV complete scenario for covariant quantum gravity. In this context, UV completion is achieved as a consequence of quantum scale-symmetry emerging as the result of a possible fixed point in the renormalization group flow. By now, there is a vast collection of results indicating the viability of this scenario \cite{Percacci_book,Reuter_Book}, including possible phenomenological consequences (see the reviews \cite{Astrid_review1,Astrid_review2} and references therein). \end{itemize} A consistency check in quantum gravity models based on standard QFT techniques is the investigation of quantum corrections to the Newtonian potential. This question was originally addressed in the seminal paper by Donoghue within the EFT approach for quantum gravity \cite{Donoghue_EFT_QG}. Since then, EFT and other methods have been used by several authors to compute quantum gravitational corrections to the inter-particle potentials (see, for example, Refs. \cite{MV_PRD52,HL_PLB357,ABS_PLB395,KK_JETP95,Bjerrum_PRD66,BDH_PRD67,Faller_PRD}). Although the usual research on non-relativistic potentials concentrates on the monopole-monopole sector, a series of works in the literature also consider the contributions of spin and velocity. In this case, spin-orbit and spin-spin interactions may appear. For instance, in Ref. \cite{gupta1966} the authors calculated the potentials related to one-graviton exchange between particles with different spins.
Long-range gravitational potentials and their spin-dependent interactions were obtained in Refs. \cite{KK_JETP98,K_NPB728,RH_JPA,HR_0802.0716} by taking into account gravitational scattering at the one-loop approximation within the EFT formalism. In a similar way, the spin contributions of one-loop diagrams with mixed gravitational-electromagnetic scattering were investigated in Refs. \cite{Butt_PRD74,HR_0802.0717}. For reviews of theoretical and experimental research on the role of spin in gravity, we point out Refs. \cite{wtNi, wtNi2}. In this work we investigate spin- and velocity-dependent contributions to the gravitational inter-particle potential within a framework motivated by quantum gravity models. Our main goal is to present a detailed discussion on the structure of possible quantum corrections to each sector beyond the monopole-monopole interaction. For this purpose, we combine the effective action formalism with an expansion in terms of form factors to introduce quantum corrections at the level of the graviton propagator. This strategy allows us to explore structural aspects of spin- and velocity-dependent contributions without relying on any specific perturbative calculation. This paper is organized as follows: in Section \ref{Sec_Potentials}, we present our methodology and compute the inter-particle gravitational potentials for interactions involving spin-0 or spin-1/2 external particles in terms of general form factors. After that, we analyse each sector beyond the monopole-monopole interaction and discuss the comparative aspects between the spin-0 and spin-1/2 cases. In addition, we also establish comparisons with the inter-particle potentials mediated by the electromagnetic interaction. In Section \ref{Sec_App}, we apply our results to particular examples motivated by non-perturbative approaches to quantum gravity.
Next, in Section \ref{Singularities}, we discuss some aspects related to the cancellation of Newtonian singularities in higher-derivative gravity models. Finally, in Section \ref{Sec_Concluding}, we present our concluding remarks and perspectives. In the Appendix, we display some useful integrals and definitions. Throughout this work we adopted natural units where $\hbar = c = 1$ and the Minkowski metric with signature $(+,-,-,-)$. The Riemann and Ricci curvature tensors were defined as $R^{\mu}_{\,\,\,\nu\alpha\beta} = \partial_\alpha \Gamma^\mu_{\nu\beta} + \Gamma^{\mu}_{\alpha\lambda} \Gamma^{\lambda}_{\nu\beta} - (\alpha\leftrightarrow\beta)$ and $R_{\mu \nu} = R^{\alpha}_{\,\,\,\mu\alpha\nu}$, respectively. \section{Non-relativistic potentials} \label{Sec_Potentials} \indent Let us initially introduce the methodology adopted for computing inter-particle potentials and present the approximations we are dealing with. In order to obtain spin- and velocity-dependent contributions to non-relativistic (NR) potentials mediated by gravity, we employ the first Born approximation, namely \begin{equation} V(r)= -\int \frac{d^3 \vec{q}}{(2\pi)^3} \mathcal{M}_{_{\textmd{NR}}} (\vec{q}) \, e^{i \vec{q} \cdot \vec{r}} \, , \label{pot_def} \end{equation} where $\mathcal{M}_{_{\textmd{NR}}} (\vec{q})$ indicates the NR limit of the Feynman amplitude, $\mathcal{M}$, associated with the process $1+2 \to 1^\prime + 2^\prime$ represented in Fig. \ref{Scattering-Process}. Following Ref. \cite{livro_Maggiore}, we note that the NR limit involves an appropriate normalization factor such that \begin{equation} \mathcal{M}_{_{\textmd{NR}}} (\vec{q}) = {\lim}_{_{\textmd{NR}}} \prod_{i=1,2} (2E_i)^{-1/2} \prod_{j=1,2} (2E'_j)^{-1/2} \, \mathcal{M} (\vec{q}) \, .
\label{prescription} \end{equation} \begin{figure}[ht] \begin{center} \leavevmode \includegraphics[width=0.3\textwidth]{Diagram_1.eps} \put(-159,-5){(1)} \put(-152,20){$p_1$} \put(3,-5){(2)} \put(0,20){$p_2$} \put(-159,120){($1^\prime$)} \put(-152,95){$p_1^\prime$} \put(3,120){($2^\prime$)} \put(0,95){$p_2^\prime$} \end{center} \caption[Feynman diagram 1]{\footnotesize{Representation of a process with particles labeled by $1$ and $2$ scattering into final states labeled by $1^\prime$ and $2^\prime$. The arrows indicate the momenta assignments adopted in this paper.}} \label{Scattering-Process} \end{figure} The most direct way to include quantum corrections to the NR gravitational potential relies on the perturbative approach. In this case, the amplitude associated with the process in Fig. \ref{Scattering-Process} involves all the connected Feynman diagrams up to a fixed order in perturbation theory. This approach has been successfully applied to the computation of quantum corrections to the gravitational inter-particle potential in the context of EFT \cite{Donoghue_EFT_QG,HL_PLB357,ABS_PLB395,KK_JETP95,Bjerrum_PRD66,BDH_PRD67,Faller_PRD,KK_JETP98,K_NPB728,RH_JPA,HR_0802.0716}. Alternatively, one can think in terms of the effective action formalism. In this case, the amplitude associated with the process represented in Fig. \ref{Scattering-Process} is constructed as a sum over connected ``tree-level'' diagrams with propagator and vertices extracted from the effective action $\Gamma$. The typical evaluation of the effective action $\Gamma$ relies on perturbative methods and, therefore, produces results equivalent to the approach described in the previous paragraph. The effective action formalism might be useful in order to access information beyond the perturbative approach. For example, in Ref.
\cite{Knorr}, Knorr and Saueressig proposed the reconstruction of an effective action for quantum gravity starting from non-perturbative data obtained via causal dynamical triangulation. Furthermore, the effective action is expanded in terms of form factors carrying (non-)perturbative quantum corrections. For a recent discussion on form factors for quantum gravity in connection with functional renormalization group methods, see Refs. \cite{Bosma_PRL_123, Knorr_Form_Factors}. In this paper we combine the effective action formalism with an expansion in terms of form factors in order to include quantum corrections on the NR inter-particle gravitational potential beyond monopole-monopole interactions. As a first approach we include only quantum corrections to the graviton propagator. In this case, the relevant contribution to the process depicted in Fig. \ref{Scattering-Process} corresponds to the diagram represented in Fig. \ref{Diagrams_3}. Within this approximation, quantum corrections to the vertices are not considered and the relativistic amplitude takes the form \begin{align} \label{Scattering_Amp_General} i\mathcal{M} = i\,T^{\mu\nu}(p_1,p_1^\prime) \, \langle h_{\mu\nu}(-q) h_{\alpha\beta}(q)\rangle \, i\,T^{\alpha\beta}(p_2,p_2^\prime) \,, \end{align} where $T^{\mu\nu}$ stands for the tree-level energy-momentum tensor associated with the scattered particles and $\langle h_{\mu\nu}(-q) h_{\alpha\beta}(q)\rangle$ denotes the graviton full-propagator. The fact that we are not taking into account quantum corrections to the vertex imposes some limitation on the range of validity of our results. In particular, there is no \textit{a priori} reason to argue that vertex corrections should be suppressed in our investigation. In this sense, the approach adopted here should be interpreted as a first step towards the inclusion of non-perturbative effects, encoded in a form factor expansion, to the NR gravitational potential with contributions beyond the static regime.
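As a point of reference for the Fourier transform in Eq. \eqref{pot_def}: whenever the amplitude develops a simple massive pole, $\mathcal{M}_{_{\textmd{NR}}}(\vec{q}) \propto 1/(\vec{q}^{\,2} + \mu^2)$, the transform yields a Yukawa-type term $\propto e^{-\mu r}/(4\pi r)$, which is the typical shape of the corrections induced by form factors with massive poles. The snippet below (our illustrative sketch, assuming SciPy is available; the scale $\mu$ is arbitrary) checks this numerically after performing the angular integration analytically.

```python
import numpy as np
from scipy.integrate import quad

def fourier_pole(r, mu):
    """Compute I(r) = ∫ d^3q/(2π)^3 e^{i q·r} / (q² + μ²).
    The angular integration gives (1/2π²r) ∫_0^∞ dq q sin(qr)/(q² + μ²);
    the oscillatory radial integral is handled by quad's Fourier weight."""
    radial, _ = quad(lambda q: q / (q**2 + mu**2),
                     0.0, np.inf, weight='sin', wvar=r)
    return radial / (2.0 * np.pi**2 * r)

# check against the analytic Yukawa result e^{-μr}/(4πr)
r, mu = 2.0, 0.5
assert abs(fourier_pole(r, mu) - np.exp(-mu * r) / (4 * np.pi * r)) < 1e-6
```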
In principle, vertex corrections can also be implemented in a form factor expansion \cite{Knorr_Form_Factors,Draper}; however, this goes beyond the purposes of the present work. \begin{figure}[ht] \begin{center} \leavevmode \includegraphics[width=0.5\textwidth]{Diagram_3.eps} \put(0,0){(2)} \put(0,20){$p_2$} \put(-247,0){(1)} \put(-244,20){$p_1$} \put(0,90){($2^\prime$)} \put(0,70){$p_2^\prime$} \put(-250,90){($1^\prime$)} \put(-244,70){$p_1^\prime$} \put(-75,30){$q$} \put(-165,30){$q$} \end{center} \caption[Feynman diagram 3]{Diagrammatic representation of the approximation done in this paper. The arrows indicate the momentum assignments adopted in the calculation of the scattering process.} \label{Diagrams_3} \end{figure} Our purpose is not to compute the effective action for quantum gravity. Instead, we assume a ``template'' for the effective action expanded in terms of form factors and motivated by symmetry arguments. In general gauge theories, the effective action typically takes the form $\Gamma = \bar{\Gamma} + \hat{\Gamma}$, where $\delta_{\textmd{gauge}} \bar{\Gamma} = 0$ and $\delta_{\textmd{gauge}} \hat{\Gamma} \neq 0 $. Nevertheless, the ``symmetry breaking'' contribution $\hat{\Gamma}$ is controlled by Slavnov-Taylor identities for $\Gamma$. The covariant approach to quantum gravity, thought of as a QFT for the fluctuation field $h_{\mu\nu}$ around a fixed background with metric $\bar{g}_{\mu\nu}$, can be regarded as a gauge theory for diffeomorphism transformations. In this case, a template for the effective action in quantum gravity should take the form \begin{align} \Gamma[h;\bar{g}] = \bar{\Gamma}[g] + \hat{\Gamma}[h;\bar{g}] \,, \end{align} where $\delta_{\textmd{diff.}} \bar{\Gamma} = 0$ and $\delta_{\textmd{diff.}} \hat{\Gamma} \neq 0$. We note that the symmetric part, $\bar{\Gamma}[g]$, depends only on the full metric $g_{\mu\nu}$, while the ``symmetry breaking'' sector presents separated dependence on $\bar{g}_{\mu\nu}$ and $h_{\mu\nu}$.
In the present paper the fluctuation field $h_{\mu\nu}$ is defined in terms of the linear split $g_{\mu\nu} = \bar{g}_{\mu\nu}+\kappa h_{\mu\nu}$ (with $\kappa = \sqrt{32\pi G}$). In this case, there is an additional local symmetry, namely split symmetry, corresponding to the combined transformation $\delta_{\textmd{split}} h_{\mu\nu}(x)=\kappa^{-1} \epsilon_{\mu\nu}(x)$ and $\delta_{\textmd{split}} \bar{g}_{\mu\nu}(x)=-\epsilon_{\mu\nu}(x)$ that leaves the full metric invariant, $\delta_{\textmd{split}} g_{\mu\nu}= 0$, and, therefore, $\delta_{\textmd{split}}\bar{\Gamma}[g] = 0$. However, the separated dependence of $\bar{g}_{\mu\nu}$ and $h_{\mu\nu}$ in $\hat{\Gamma}[h;\bar{g}]$ implies $\delta_{\textmd{split}}\Gamma[h;\bar{g}] \neq 0$, leading to non-trivial Nielsen identities (or split Ward identities). For the symmetric part, we consider a template for the effective action organized in terms of a curvature expansion, given by \begin{equation} \bar{\Gamma}[g_{\mu \nu}] = \frac{2}{\kappa^2} \int d^4 x \, \sqrt{-g} \left( -2 \Lambda - R - \frac{1}{3} R F(\Box) R + C_{\mu \nu \alpha \beta} W(\Box) C^{\mu \nu \alpha \beta} \right) + \mathcal{O}(\mathcal{R}^3) \,, \label{eff_action} \end{equation} where $\Lambda$ and $C^{\mu \nu \alpha \beta}$ denote the cosmological constant and Weyl tensor, respectively, while $F(\Box)$ and $W(\Box)$ correspond to form factors encoding quantum corrections contributing to the curvature squared sector. Furthermore, $\mathcal{O}(\mathcal{R}^3)$ indicates all other contributions composed of curvature invariants with powers higher than two. For the explicit computations performed in this paper, we consider a flat background metric, i.e., $g_{\mu \nu} = \eta_{\mu \nu} + \kappa \, h_{\mu \nu}$. In this case, the relevant contributions for the full graviton propagator come exclusively from terms up to $\mathcal{O}(\mathcal{R}^2)$.
For the symmetry breaking sector, we use a template with the same functional form as a typical gauge fixing term added to the classical action, namely \begin{align} \hat{\Gamma}[h_{\mu \nu};\bar{g}] = \frac{1}{2\alpha} \int d^4x \sqrt{-\bar{g}} \,\bar{g}^{\mu\nu} F_\mu [h;\bar{g}] F_\nu[h;\bar{g}] \,, \end{align} where $F_\mu[h;\bar{g}] = \bar{\nabla}^\nu h_{\mu\nu} - \frac{1}{2} \bar{\nabla}_\mu h $. One can argue that a different choice in this sector would not affect our results since we are computing gauge-independent quantities (on-shell amplitudes) and, therefore, any gauge-dependence should drop out in the final results. Bearing in mind our template for the effective action, the graviton ``full''-propagator (around a flat background) is readily computed as the inverse of the 2-point function $\delta^2 \Gamma/\delta h^2|_{h=0}$, resulting in the following expression \begin{eqnarray} \langle h_{\mu \nu} (-q) h_{\alpha \beta} (q) \rangle = \frac{i}{q^2} \Bigg[ \frac{1}{Q_2 (q^2)} \mathcal{P}_{\mu\nu\alpha\beta}^{(2)} - \frac{1}{2Q_0 (q^2)} \mathcal{P}_{\mu\nu\alpha\beta}^{(0)} \Bigg] \, + \, i\Delta_{\mu \nu \alpha \beta}(q) \,, \label{prop_simpl} \end{eqnarray} where we define \begin{subequations} \begin{align}\label{def_Q2} Q_2 (q^2 ) = 1 + \frac{2 \Lambda}{q^2} + 2 q^2 \, W(-q^2) \, , \end{align} \begin{align}\label{def_Q0} Q_0 (q^2 ) = 1 + \frac{2 \Lambda}{q^2} + 2 q^2 \, F(-q^2) \, . \end{align} \end{subequations} In addition, the tensor structures $\mathcal{P}_{\mu\nu\alpha\beta}^{(2)} $ and $\mathcal{P}_{\mu\nu\alpha\beta}^{(0)} $ are defined as \begin{subequations} \begin{align} \mathcal{P}_{\mu\nu\alpha\beta}^{(2)} = \frac{1}{2} (\eta_{\mu \alpha} \eta_{\nu \beta}+ \eta_{\mu \beta} \eta_{\nu \alpha}) - \frac{1}{3}\eta_{\mu \nu}\eta_{\alpha \beta} \,, \end{align} \begin{align} \mathcal{P}_{\mu\nu\alpha\beta}^{(0)} = \frac{1}{3}\eta_{\mu \nu}\eta_{\alpha \beta} \,.
\end{align} \end{subequations} The remaining terms in the graviton propagator, represented by $i\Delta_{\mu \nu \alpha \beta}(q)$, vanish when contracted with the energy-momentum tensor of the scattered particles. It is worth mentioning that, using the effective action \eqref{eff_action}, where the form factors $F(\Box)$ and $W(\Box)$ are introduced with the scalar curvature and the Weyl tensor, we obtain the propagator \eqref{prop_simpl} in which the contributions of these form factors are decoupled. In other words, from Eqs. \eqref{def_Q2} and \eqref{def_Q0}, we observe that $F(\Box)$ and $W(\Box)$ contribute only to scalar and graviton modes, respectively. In what follows we present our results for the NR gravitational potential, taking into account the scattering of both massive spin-0 and spin-1/2 particles, with quantum corrections being included in terms of general form factors $F(\Box)$ and $W(\Box)$. As usually done in the literature of spin- and velocity-dependent potentials, we adopt the center-of-mass (CM) reference frame, described in terms of the $3-$momentum transfer $\vec{q}$ and average momentum $\vec{p}$. The CM variables are related to the momentum assignments depicted in Fig. \ref{Diagrams_3} in terms of the following expressions \begin{equation} \vec{p}_1 = -\vec{p}_2 = \vec{p} - \frac{\vec{q}}{2} \, , \qquad \vec{p'}_1 = -\vec{p'}_2 = \vec{p} + \frac{\vec{q}}{2} \, . \label{momentum_CM} \end{equation} Since we are dealing with an elastic scattering, the total energy of the system is conserved. With this assumption and using the momentum attributions in \eqref{momentum_CM}, it is possible to show that $ \vec{q} \cdot \vec{p} = 0$; indeed, since the energies depend only on the moduli of the momenta, energy conservation requires $|\vec{p} - \vec{q}/2| = |\vec{p} + \vec{q}/2|$. This result implies $ E_1 = E'_1 $ and $E_2 = E'_2 $ or, equivalently, $ q^\mu = (0,\vec{q})$. In the non-relativistic limit we take $m_i^2 \gg \vec{q}^{\, 2}, \vec{p}^{\, 2}$, leading to the approximation $E_i \approx m_i + \frac{1}{2m_i} (\vec{p}^{\,2} + \vec{q}^{\,2}/4)$.
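As a sanity check (a sketch of ours, assuming SymPy is available), one can verify that for $Q_2 = Q_0 = 1$, i.e. $\Lambda = 0$ and vanishing form factors, the combination $\mathcal{P}^{(2)}_{\mu\nu\alpha\beta} - \tfrac{1}{2}\mathcal{P}^{(0)}_{\mu\nu\alpha\beta}$ appearing in Eq. \eqref{prop_simpl} reduces to the familiar tensor structure $\tfrac{1}{2}(\eta_{\mu\alpha}\eta_{\nu\beta} + \eta_{\mu\beta}\eta_{\nu\alpha} - \eta_{\mu\nu}\eta_{\alpha\beta})$ of the GR graviton propagator in the de Donder gauge.

```python
import sympy as sp
from itertools import product

eta = sp.diag(1, -1, -1, -1)
half, third = sp.Rational(1, 2), sp.Rational(1, 3)

def P2(m, n, a, b):  # spin-2 structure P^(2)
    return half*(eta[m, a]*eta[n, b] + eta[m, b]*eta[n, a]) \
        - third*eta[m, n]*eta[a, b]

def P0(m, n, a, b):  # spin-0 structure P^(0)
    return third*eta[m, n]*eta[a, b]

def de_donder(m, n, a, b):  # GR propagator numerator in harmonic gauge
    return half*(eta[m, a]*eta[n, b] + eta[m, b]*eta[n, a]
                 - eta[m, n]*eta[a, b])

# component-by-component check of P^(2) - P^(0)/2 = de Donder structure
for m, n, a, b in product(range(4), repeat=4):
    assert P2(m, n, a, b) - half*P0(m, n, a, b) == de_donder(m, n, a, b)
```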
In what follows we apply these conditions and approximations to the energy-momentum tensor appearing in Eq. \eqref{Scattering_Amp_General} to arrive at the non-relativistic amplitude. This prescription, defined directly in terms of CM variables, is equivalent to an expansion in powers of $\vec{p}_i/m_i$ and $\vec{p}_i^{\,\,\prime}/m_i$. In our calculations we consider contributions up to second order in these expansion variables. \subsection{Spin-0 external particles} \label{spin_0_subsection} \indent Within the working setup described above, we first investigate the case of gravitationally interacting spin-0 particles. The investigation performed in this paper takes into account an approach where quantum corrections to the vertices are neglected. In this sense, our template for the effective action in the spin-0 sector essentially corresponds to the classical action of a scalar field minimally coupled to gravity, \begin{eqnarray} \Gamma_{\textmd{scalar}}[\phi,g] = \int{d^4x \, \sqrt{-g} \Bigg( \frac{1}{2} g^{\mu\nu} \partial_\mu \phi \partial_\nu \phi - \frac{1}{2}m^2 \phi^2} \Bigg) \, . \end{eqnarray} By expanding up to first order in the fluctuation field $h_{\mu\nu}$, we can directly extract the Fourier representation of the energy-momentum tensor associated with the external legs in Fig. \ref{Diagrams_3}, namely \begin{align}\label{scalar_vertex} T_{\mu \nu}(p, p') = -\frac{\kappa}{2} \Big(\, p_\mu p'_\nu + p_\nu p'_\mu - \eta_{\mu \nu} \left( p \cdot p' - m^2 \right) \Big) \, . \end{align} We adopt conventions where the momenta $p$ and $p'$ are respectively assigned as incoming and outgoing with respect to the vertex. The relativistic scattering amplitude is computed in terms of Eq. \eqref{Scattering_Amp_General} along with Eqs. \eqref{prop_simpl} and \eqref{scalar_vertex}. We note that the external legs in Fig. \ref{Diagrams_3} are considered to be on-shell and, therefore, $p_1^2 = p_1^{\prime 2} = m_1^2$ and $p_2^2 = p_2^{\prime 2} = m_2^2$.
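For on-shell external legs, the vertex \eqref{scalar_vertex} is conserved, $q_\mu T^{\mu\nu}(p, p') = 0$ with $q = p' - p$. A quick symbolic verification of this property (a sketch of ours, assuming SymPy; it uses generic spatial momenta with energies fixed by the mass shell) reads:

```python
import sympy as sp

kappa, m = sp.symbols('kappa m', positive=True)
ps = sp.symbols('p1:4', real=True)   # spatial momentum of the incoming leg
ks = sp.symbols('k1:4', real=True)   # spatial momentum of the outgoing leg

E_in = sp.sqrt(m**2 + sum(c**2 for c in ps))    # on-shell energies
E_out = sp.sqrt(m**2 + sum(c**2 for c in ks))
p = (E_in,) + ps     # contravariant components p^mu
pp = (E_out,) + ks   # contravariant components p'^mu

eta = sp.diag(1, -1, -1, -1)
dot = lambda a, b: sum(eta[i, i]*a[i]*b[i] for i in range(4))

# tree-level vertex of Eq. (scalar_vertex), with indices raised
T = lambda mu, nu: -kappa/2*(p[mu]*pp[nu] + p[nu]*pp[mu]
                             - eta[mu, nu]*(dot(p, pp) - m**2))

q = [pp[i] - p[i] for i in range(4)]  # momentum transfer q = p' - p

# conservation: q_mu T^{mu nu} = 0 for every free index nu
for nu in range(4):
    contraction = sum(eta[mu, mu]*q[mu]*T(mu, nu) for mu in range(4))
    assert sp.expand(contraction) == 0
```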
In fact, using the conservation of the energy-momentum tensor ($q_\mu \, T^{\mu\nu}(p_i,p_i^\prime) = 0$, for on-shell in- and out-states) we arrive at the intermediate result \begin{eqnarray}\label{amp_esc_1} i \mathcal{M}^{(s=0)} = \frac{i}{q^2} \left[ \left( \frac{1}{3 Q_2} + \frac{1}{6 Q_0} \right) T_{1 \, \, \mu}^\mu T_{2 \, \, \beta}^\beta - \frac{1}{Q_2} \, T_1^{\mu \nu} T_{2 \, \mu \nu} \right] \, , \end{eqnarray} where we work with the shorthand notations $ T^{\mu \nu}_i \equiv T^{\mu \nu}(p_i, p'_i) $ and $Q_i \equiv Q_i (q^2)$. After some simple algebraic manipulations using the explicit expression for the energy-momentum tensor, we find the following result for the scattering amplitude \begin{eqnarray}\label{amp_esc_4} \mathcal{M}^{(s=0)} &=& \frac{\kappa^2}{6 q^2 \, Q_2} \Bigg( 2 m_1^2 m_2^2 - 3 (p_1 \cdot p_2) (p_1' \cdot p_2') - 3 (p_1 \cdot p_2') (p_1' \cdot p_2) \nonumber \\ &+& 2 (p_1 \cdot p_1') (p_2 \cdot p_2') - m_1^2 \, p_2 \cdot p_2' - m_2^2 \, p_1 \cdot p_1' \Bigg) \nonumber \\ &+& \frac{\kappa^2}{6 q^2 \, Q_0} \Bigg( (p_1 \cdot p_1') (p_2 \cdot p_2') - 2 m_1^2 \, p_2 \cdot p_2' - 2 m_2^2 \, p_1 \cdot p_1' + 4 m_1^2 m_2^2 \Bigg) \, . \end{eqnarray} In order to obtain the NR description, we use the prescription \eqref{prescription}.
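The resulting NR expansion can be reproduced symbolically. The sketch below (SymPy) expands the $Q_2$ part of the amplitude using the CM dot products implied by \eqref{momentum_CM}; it assumes, as a working hypothesis on our side, that the prescription \eqref{prescription} amounts to dividing the covariant amplitude by the relativistic normalization $4E_1E_2$ (the precise statement of \eqref{prescription} is given earlier in the paper):

```python
import sympy as sp

# s = |p|^2, t = |q|^2 in the CM frame (q . p = 0); eps is an NR bookkeeping parameter
m1, m2, s, t, eps = sp.symbols('m1 m2 s t epsilon', positive=True)

E1 = sp.sqrt(m1**2 + s + t/4)  # on-shell CM energies, since |p +- q/2|^2 = s + t/4
E2 = sp.sqrt(m2**2 + s + t/4)
u = E1 * E2

# four-momentum dot products implied by the CM assignments
p1p1 = m1**2 + t/2   # p1 . p1'
p2p2 = m2**2 + t/2   # p2 . p2'
p12  = u + s + t/4   # p1 . p2  = p1' . p2'
p12x = u + s - t/4   # p1 . p2' = p1' . p2

# numerator of the Q2 part of the amplitude, stripped of kappa^2/(6 q^2 Q2)
N2 = (2*m1**2*m2**2 - 3*p12**2 - 3*p12x**2
      + 2*p1p1*p2p2 - m1**2*p2p2 - m2**2*p1p1)

# assumed NR prescription: divide by the normalization 4 E1 E2; note q^2 = -t
M2 = -N2 / (4 * E1 * E2)

# NR coefficients of the Q2 sector (velocity and q^2/8 corrections)
target = m1*m2*(1 + s*(3/(m1*m2) + 1/m1**2 + 1/m2**2)
                + (t/8)*(1/m1**2 + 1/m2**2))

diff = (M2 - target).subs({s: eps**2 * s, t: eps**2 * t})
expansion = sp.series(diff, eps, 0, 4).removeO()
assert sp.simplify(expansion) == 0  # agreement up to and including second order
```

The same procedure applies verbatim to the $Q_0$ part; only the numerator changes.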
In the CM reference frame with the momentum assignments \eqref{momentum_CM}, we have \begin{eqnarray} \mathcal{M}^{(s=0)}_{\textrm{NR}} &=& \frac{ \kappa^2 m_1 m_2 }{6 \, Q_2 \, \vec{q}^{\,2}} \, \left\{ 1 + \vec{p}^{\,2} \left( \frac{3}{m_1 m_2} + \frac{1}{m_1^2} + \frac{1}{m_2^2} \right) + \frac{ \vec{q}^{\,2} }{8} \, \left( \frac{1}{m_1^2} + \frac{1}{m_2^2} \right) + \mathcal{O}(3) \right\} \nonumber \\ &-& \frac{ \kappa^2 m_1 m_2 }{24 \, Q_0 \, \vec{q}^{\, 2} } \, \left\{ 1 - \frac{\vec{p}^{\, 2} }{2} \left( \frac{1}{m_1^2} + \frac{1}{m_2^2} \right) - \frac{5 \, \vec{q}^{\, 2}}{8} \left( \frac{1}{m_1^2} + \frac{1}{m_2^2} \right) + \mathcal{O}(3)\right\} \, ,\label{amp_NR_esc} \end{eqnarray} with $\mathcal{O}(3)$ indicating terms higher than second order in $|\vec{p}|/m_{1,2}$ and/or $|\vec{q}|/m_{1,2}$, which we shall neglect. Finally, by taking the Fourier integral, Eq. \eqref{pot_def}, we promptly obtain the inter-particle gravitational potential with contributions beyond the monopole-monopole sector \begin{eqnarray} V^{(s=0)}(r) &=& - \frac{\kappa^2 m_1 m_2}{6} \Bigg\{ I_1^{(2)}(r) + \vec{p}^{\,2} \left( \frac{3}{m_1 m_2} + \frac{1}{m_1^2} + \frac{1}{m_2^2} \right) I_1^{(2)}(r) \nonumber \\ &+& \frac{1}{8} \left( \frac{1}{m_1^2} + \frac{1}{m_2^2} \right) I_0^{(2)}(r) \Bigg\} + \frac{\kappa^2 m_1 m_2}{24} \Bigg\{ I_1^{(0)}(r) \nonumber \\ &-& \frac{ \vec{p}^{\,2} }{2} \left( \frac{1}{m_1^2} + \frac{1}{m_2^2} \right) I_1^{(0)}(r) - \frac{5}{8} \left( \frac{1}{m_1^2} + \frac{1}{m_2^2} \right) I_0^{(0)}(r) \Bigg\} , \label{spin_0_pot} \end{eqnarray} where the integrals $I_n^{(a)}(r)$ are defined in the Appendix, Eq. \eqref{I_a_n}, with $n=0,1$ and $a=0,2$. \subsection{Spin-1/2 external particles} \label{spin_meio_subsection} \indent In the present subsection we describe the gravitational interaction between two spin-1/2 particles.
Since we do not take into account any vertex correction, our template for this sector basically corresponds to the classical action of the Dirac field minimally coupled to gravity, namely \begin{eqnarray} \Gamma_{\textmd{ferm}}[\bar{\psi},\psi,g] = \int{d^4x \, \sqrt{-g} \left( \frac{i}{2} (\bar{\psi} \, \gamma^\mu_g \, \nabla_\mu \psi - \nabla_\mu \bar{\psi} \, \gamma^\mu_g \, \psi ) - m\bar{\psi} \psi \right)} \, . \end{eqnarray} To define the fermion in a curved space-time we use the spin-base formalism, where the covariant derivative is defined according to $\nabla_\mu \psi = \partial_\mu \psi + \Gamma_\mu \psi$ and $\nabla_\mu \bar{\psi} = \partial_\mu \bar{\psi} - \bar{\psi}\,\Gamma_\mu$, with $\bar{\psi}$ and $\Gamma_\mu$ representing the Dirac conjugate and an appropriate connection, respectively (see Refs. \cite{spinbase1,spinbase2,spinbase3} for more details). In addition, the matrices $\gamma^\mu_g$ satisfy the Clifford algebra $\left\{ \gamma^\mu_g , \gamma^\nu_g \right\} = 2 g^{\mu \nu} \bm{1}$. The tree-level vertex involving two fermions and one graviton is extracted by expanding $\Gamma_{\textmd{ferm}}[\bar{\psi},\psi,g]$ up to first order in the fluctuation field $h_{\mu\nu}$. Based on the resulting expression, we can obtain the energy-momentum tensor \begin{align}\label{fermion_vertex} T_{\mu \nu}(p, p') =& \, \frac{\kappa}{8} \Big( 2 \eta_{\mu \nu} \big( (p +p')_\alpha \,\mathcal{J}^\alpha(p, p') - 2m \, \rho(p, p')\big) \nonumber \\ &\,\,- (p + p')_\mu \mathcal{J}_\nu(p, p') - (p + p')_\nu \mathcal{J}_\mu(p, p') \Big) \, . \end{align} We define the bi-linear structures $\mathcal{J}^\mu(p, p') = \bar{u}(p') \gamma^\mu u(p)$ and $\rho(p, p') = \bar{u}(p') u(p)$, where $u(p)$ denotes the free positive energy solution for the four-component spinor and $\bar{u}(p)={u}^\dagger(p) \gamma^0$. 
Here, it should be noted that $\gamma^\mu$ corresponds to the usual gamma matrices in a flat background, satisfying $\left\{ \gamma^\mu, \gamma^\nu \right\} = 2 \eta^{\mu \nu} \bm{1}$. Then, combining Eqs. \eqref{Scattering_Amp_General} and \eqref{prop_simpl}, we arrive at an expression similar to that of the spin-0 case, namely \begin{eqnarray}\label{amp_fermion} i \mathcal{M}^{(s=1/2)} = \frac{i}{q^2} \left[ \left( \frac{1}{3 Q_2} + \frac{1}{6 Q_0} \right) T_{1 \, \mu}^{\,\, \mu} T_{2 \, \beta}^{\,\, \beta} - \frac{1}{Q_2} T_{1}^{\, \mu \nu} T_{2 \, \mu \nu} \right] \, . \end{eqnarray} Expanding the energy-momentum tensor in terms of the bi-linears $\mathcal{J}^\mu$ and $\rho$, we find the relativistic scattering amplitude \begin{eqnarray} \mathcal{M}^{(s=1/2)} &=& \frac{ \kappa^2}{q^2 Q_2} \Bigg\{ \frac{1}{16} (p_1 + p_1')_\mu (p_2 + p_2')_\nu \mathcal{J}_1^{\mu} \mathcal{J}_2^{\nu} - \frac{m_1}{8} \rho_1 (p_2 + p_2')_\mu \mathcal{J}_2^{\mu} - \frac{m_2}{8} \rho_2 (p_1 + p_1')_\mu \mathcal{J}_1^{\mu}\nonumber \\ &-& \frac{1}{32} (p_1 + p'_1)^\nu (p_2 + p'_2)_\nu \mathcal{J}_1^{\mu} \mathcal{J}_{2\mu}- \frac{1}{32} (p_1 + p_1')_\mu (p_2 + p_2')_\nu \mathcal{J}_2^{\mu} \mathcal{J}_1^{\nu} + \frac{m_1 m_2}{3} \rho_1 \rho_2 \Bigg\} \nonumber \\ &+& \frac{ \kappa^2}{q^2 Q_0} \, \Bigg\{ \frac{3}{32} (p_1 + p_1')_\mu (p_2 + p_2')_\nu \mathcal{J}_1^{\mu} \mathcal{J}_2^{\nu} + \frac{2 m_1 m_2}{3} \rho_1 \rho_2 \nonumber \\ &-& \frac{m_1}{4} \rho_1 (p_2 + p_2')_\mu \mathcal{J}_2^{\mu} - \frac{m_2}{4} \rho_2(p_1 + p_1')_\mu \mathcal{J}_1^{\mu} \Bigg\} \, , \label{amp_fermion_rel} \end{eqnarray} where we use the shorthand notation $\rho_j = \rho (p_j, p_j')$ and $\mathcal{J}_j^{\mu} =\mathcal{J}^{\mu} (p_j, p_j')$. In order to extract the NR scattering amplitude, we first recall that $u(p)$ satisfies the on-shell condition $\left[ \gamma^\mu p_\mu - m \bm{1} \right] \, u(p) = 0$.
In the standard Dirac representation, we obtain \begin{align}\label{ferm_field} u(p) = \sqrt{E + m} \left( \begin{array}{c}\xi \\ \frac{\vec{\sigma} \cdot \vec{p}}{E + m} \, \xi\end{array} \right) , \end{align} with $\xi$ and $\vec{\sigma}$ being a two-component spinor and the Pauli matrices, respectively. In the NR limit, the relevant bi-linear structures $\rho$ and $\mathcal{J}^\mu$ are written as (in the CM frame) \begin{subequations} \begin{align} \rho_{j}|_{\textmd{NR}} = 2\,m_j \bigg[ 1 + \frac{1}{8m_j^2} \bigg( \vec{q}^{\,2} - 4i(\vec{q} \times \vec{p}\,) \cdot \vec{S}_j \bigg) + \mathcal{O}(3) \bigg] \,, \end{align} \begin{align} \mathcal{J}^0_{j}|_{\textmd{NR}} = 2\,m_j \bigg[ 1 + \frac{1}{2m_j^2} \bigg( \vec{p}^{\,2} + i(\vec{q} \times \vec{p}\,) \cdot \vec{S}_j \bigg) + \mathcal{O}(3) \bigg] \,, \end{align} \begin{align} \vec{\mathcal{J}}_{j}|_{\textmd{NR}} = 2\, \chi_j \bigg[ \vec{p} - i (\vec{q} \times \vec{S}_j) \bigg] \label{noapp} \,, \end{align} \end{subequations} where $j$ indicates the particle label and we have defined $\chi_1 = 1$, $\chi_2 = -1$ and the spin $\vec{S}_j = \frac{1}{2} \, \xi'^\dagger_j \vec{\sigma}\xi_j$. In addition, factors of $\xi'^\dagger_j \xi_j$ have been omitted. As in the scalar case, the terms in $\mathcal{O}(3)$ are neglected. We highlight that there is no further approximation in Eq. (\ref{noapp}).
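The reduction of these bi-linears rests on the standard Pauli-matrix identity $(\vec{\sigma}\cdot\vec{a})(\vec{\sigma}\cdot\vec{b}) = \vec{a}\cdot\vec{b}\,\bm{1} + i\,\vec{\sigma}\cdot(\vec{a}\times\vec{b})$, which splits the spinor products into spin-independent and spin-dependent pieces. A minimal dependency-free numerical check of the identity (with arbitrary test vectors):

```python
# 2x2 complex matrices as nested lists; no external dependencies
def mat(c):  # c times the 2x2 identity
    return [[c, 0], [0, c]]

def add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

# the three Pauli matrices
sigma = ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])

def sdot(v):  # sigma . v
    return [[sum(v[k] * sigma[k][i][j] for k in range(3)) for j in range(2)] for i in range(2)]

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

a, b = (0.3, -1.2, 0.7), (2.0, 0.4, -0.5)   # arbitrary test vectors
lhs = mul(sdot(a), sdot(b))
dotab = sum(x * y for x, y in zip(a, b))
rhs = add(mat(dotab), [[1j * z for z in row] for row in sdot(cross(a, b))])
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
```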
After some algebraic manipulations, we find that \begin{eqnarray} \mathcal{M}^{(s=1/2)}_{\textrm{NR}} &=& \frac{ \kappa^2 m_1 m_2 }{6 Q_2 \, \vec{q}^{\,2}} \, \Bigg\{ 1 + \vec{p}^{\,2} \left( \frac{3}{m_1 m_2} + \frac{1}{m_1^2} + \frac{1}{m_2^2} \right) \nonumber \\ &+& i \left[ \left( \frac{1}{m_1^2} + \frac{3}{2} \frac{1}{m_1 m_2} \right) \vec{S}_1 + \left( \frac{1}{m_2^2} + \frac{3}{2} \frac{1}{m_1 m_2} \right) \vec{S}_2 \right] \cdot \left( \vec{q} \times \vec{p} \, \right) \nonumber \\ &-& \frac{3}{4} \frac{\vec{q}^{\, 2}}{m_1 m_2} \vec{S}_1 \cdot \vec{S}_2 + \frac{3}{4} \frac{1}{m_1 m_2} \left( \vec{q} \cdot \vec{S_1} \right) \left( \vec{q} \cdot \vec{S_2} \right) + \mathcal{O}(3) \Bigg\} \nonumber \\ &-& \frac{ \kappa^2 m_1 m_2 }{24 Q_0 \, \vec{q}^{\,2} } \, \Bigg\{ 1 - \frac{\vec{p}^{\,2}}{2} \left( \frac{1}{m_1^2} + \frac{1}{m_2^2} \right) \nonumber\\ &-& \frac{i}{2} \left[ \frac{1}{m_1^2} \vec{S}_1 + \frac{1}{m_2^2} \vec{S}_2 \right] \cdot \left( \vec{q} \times \vec{p} \, \right) + \mathcal{O}(3) \Bigg\} \, . 
\label{amp_NR_ferm} \end{eqnarray} The NR gravitational potential associated with the scattering of spin-1/2 particles is obtained by performing the Fourier integral \eqref{pot_def}, resulting in the following expression \begin{align} &V^{(s=1/2)}(r) = -\frac{\kappa^2 m_1 m_2}{6} \, \Bigg\{ I^{(2)}_1(r) + \vec{p}^{\,2} \left( \frac{3}{m_1 m_2} + \frac{1}{m_1^2} + \frac{1}{m_2^2} \right) I^{(2)}_1(r) \nonumber\\ &\quad+ \left[ \left( \frac{1}{m_1^2} +\frac{3}{2} \frac{1}{m_1 m_2} \right) \vec{S}_1 + \left( \frac{1}{m_2^2} +\frac{3}{2} \frac{1}{m_1 m_2} \right) \vec{S}_2 \right] \cdot \frac{\vec{L}}{r} \frac{d}{dr} I^{(2)}_1(r) \nonumber \\ &\quad- \frac{3}{4} \frac{\vec{S}_1 \cdot \vec{S}_2}{m_1 m_2} \, I^{(2)}_0(r) + \frac{3}{4} \sum_{i,j=1}^{3} \frac{(\vec{S}_1)_i \, (\vec{S}_2)_j}{m_1 m_2} \, I_{ij}^{(2)}(r) \Bigg\} \nonumber \\ &\quad+ \frac{\kappa^2 m_1 m_2}{24} \, \Bigg\{ I^{(0)}_1(r) - \frac{\vec{p}^{\,2}}{2} \left( \frac{1}{m_1^2} + \frac{1}{m_2^2} \right) I_1^{(0)}(r) - \frac{1}{2} \left[ \frac{\vec{S}_1}{m_1^2} + \frac{\vec{S}_2}{m_2^2} \right] \cdot \frac{\vec{L}}{r} \frac{d}{dr} I^{(0)}_1(r) \Bigg\} \, , \label{spin_meio_pot} \end{align} where $\vec{L} = \vec{r} \times \vec{p}$ stands for the orbital angular momentum and the anisotropic integral $I_{ij}^{(2)}(r)$ is defined in the Appendix, Eq. \eqref{I_ij}. The derivative appearing in the spin-orbit interactions results from manipulations of the Fourier integral under spherical symmetry; for more details, see Eq. \eqref{int_A}. \subsection{Comparative aspects of spin- and velocity-dependent potentials} \label{partial_conclusions_subsection} At this stage, it is relevant to compare structural aspects of the potentials for the spin-0 and spin-1/2 cases.
First of all, we note that the potential for spin-0 particles is characterized by two different sectors, monopole-monopole and velocity-velocity contributions, namely \begin{subequations} \begin{align} V_{\textmd{mon-mon}}^{(s=0)}(r) = &-\frac{\kappa^2 m_1 m_2}{6} \bigg( I^{(2)}_1(r) - \frac{1}{4} I^{(0)}_1(r) \bigg) \, \nonumber\\ &-\frac{\kappa^2 m_1 m_2}{48} \left( \frac{1}{m_1^2} + \frac{1}{m_2^2} \right) \bigg[I^{(2)}_0(r) +\frac{5}{4} I^{(0)}_0(r)\bigg] \,, \label{pot_monopole_s=0} \end{align} \begin{align} V_{\textrm{vel-vel}}^{(s=0)}(r) = -\frac{\kappa^2 m_1 m_2}{6} \, \vec{p}^{\, 2} \Bigg\{ \left( \frac{1}{m_1^2} + \frac{1}{m_2^2} \right) \left[ I^{(2)}_1(r) + \frac{1}{8} I^{(0)}_1(r) \right] + \frac{3}{m_1 m_2} I^{(2)}_1(r) \Bigg\} \, . \label{pot_vel_s=0} \end{align} \end{subequations} The potential associated with spin-1/2 particles, on the other hand, receives contributions from four different sectors: monopole-monopole, velocity-velocity, spin-orbit and spin-spin interactions. 
These contributions are given by \begin{subequations} \begin{align} V_{\textmd{mon-mon}}^{(s=1/2)}(r) = -\frac{\kappa^2 m_1 m_2}{6} \bigg( I^{(2)}_1(r) - \frac{1}{4} I^{(0)}_1(r) \bigg) \,, \label{pot_monopole_s=1/2} \end{align} \begin{align} V_{\textrm{vel-vel}}^{(s=1/2)}(r) = -\frac{\kappa^2 m_1 m_2}{6} \, \vec{p}^{\, 2} \Bigg\{ \left( \frac{1}{m_1^2} + \frac{1}{m_2^2} \right) \left[ I^{(2)}_1(r) + \frac{1}{8} I^{(0)}_1(r) \right] + \frac{3}{m_1 m_2} I^{(2)}_1(r) \Bigg\} \, , \label{pot_vel_s=1/2} \end{align} \begin{align} V_{\textrm{spin-orbit}}^{(s=1/2)}(r) = &-\frac{\kappa^2 m_1 m_2}{6} \, \left[ \left( \frac{1}{m_1^2} +\frac{3}{2} \frac{1}{m_1 m_2} \right) \vec{S}_1 + \left( \frac{1}{m_2^2} +\frac{3}{2} \frac{1}{m_1 m_2} \right) \vec{S}_2 \right] \cdot \frac{\vec{L}}{r} \frac{d}{dr} I^{(2)}_1(r) \nonumber \\ & -\frac{\kappa^2 m_1 m_2}{48} \bigg(\frac{1}{m_1^2} \vec{S}_1+\frac{1}{m_2^2} \vec{S}_2\bigg) \cdot \frac{\vec{L}}{r} \frac{d}{dr} I^{(0)}_1(r) \,, \end{align} \begin{align} V_{\textrm{spin-spin}}^{(s=1/2)}(r) = -\frac{\kappa^2 m_1 m_2}{6} \bigg[ \!-\! \frac{3}{4} \frac{\vec{S}_1 \cdot \vec{S}_2}{m_1 m_2} \, I^{(2)}_0(r) + \frac{3}{4} \sum_{i,j=1}^{3} \frac{(\vec{S}_1)_i \, (\vec{S}_2)_j}{m_1 m_2} \, I_{ij}^{(2)}(r) \bigg] \,. \end{align} \end{subequations} We first note that the static limit is obtained by taking the combined limit \begin{align} \frac{1}{m_1 m_2} V_{\textmd{stat.}}^{(s)}(r) = \lim_{ \substack{\vec{p}\to 0\\m_i\to \infty} } \,\frac{1}{m_1 m_2} V^{(s)}(r) . \end{align} In this case, the only remaining contribution comes from the monopole-monopole sector, which results in \begin{align} V_{\textmd{stat.}}^{(s)}(r) = -\frac{\kappa^2 m_1 m_2}{6} \bigg( I^{(2)}_1(r) - \frac{1}{4} I^{(0)}_1(r) \bigg) \,, \label{pot_static} \end{align} both for spin-0 and spin-1/2 particles. In the particular case of vanishing form factors, i.e. 
without deviations from the classical Einstein-Hilbert action, we recover the usual Newtonian potential $V_{\textmd{stat.}}^{(s)}(r) = -\frac{\kappa^2 m_1 m_2}{32\pi r} \equiv -\frac{G m_1 m_2}{r}$. Moving away from the static regime, we note the similarities and differences between the spin-0 and spin-1/2 cases. The monopole-monopole sectors, Eqs. \eqref{pot_monopole_s=0} and \eqref{pot_monopole_s=1/2}, contain universal contributions appearing in both the spin-0 and spin-1/2 cases. However, as we can observe from Eq. \eqref{pot_monopole_s=0}, $V_{\textmd{mon-mon}}^{(s=0)}(r)$ has an additional term which is not present in $V_{\textmd{mon-mon}}^{(s=1/2)}(r)$. This additional term is sub-leading, as we shall see from explicit examples in the next section. Beyond the monopole-monopole terms, we observe that the velocity-dependent sector $V_{\textrm{vel-vel}}^{(s)}(r)$ has the same form for both the spin-0 and spin-1/2 cases. On the other hand, spin-orbit and spin-spin interactions are present only in the potential associated with spin-1/2 particles. While spin-orbit terms ($\sim \vec{L}\cdot \vec{S}_i$) interact via spin-2 and spin-0 graviton modes, spin-spin contributions ($\sim (\vec{S}_1)_i \, (\vec{S}_2)_j I_{ij}^{(2)} $ and $\sim \vec{S}_1 \cdot \vec{S}_2$) exhibit only interactions via spin-2 graviton modes. It is worth highlighting that our methodology is applicable to modified (classical) theories of gravity with higher-order derivatives and other non-local terms. Once the potentials have been derived for arbitrary form factors, we only need to reinterpret the effective action as a classical one and redefine the $Q_0$ and $Q_2$ factors. We shall return to this point in Section \ref{Singularities}.
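The $1/(32\pi)$ coefficient in the Newtonian limit quoted above follows from the static potential \eqref{pot_static} with $Q_2 = Q_0 = 1$, in which case both integrals reduce to $1/(4\pi r)$; the arithmetic can be checked with exact fractions (the $\pi$ is kept implicit in the comment):

```python
from fractions import Fraction

# V_stat = -(kappa^2 m1 m2 / 6) (1 - 1/4) * 1/(4 pi r)  when I_1^{(2)} = I_1^{(0)} = 1/(4 pi r)
coeff = Fraction(1, 6) * (1 - Fraction(1, 4)) * Fraction(1, 4)
assert coeff == Fraction(1, 32)  # -> V = -kappa^2 m1 m2/(32 pi r), i.e. kappa^2 = 32 pi G
```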
Furthermore, we comment that, for the gravitational interaction of spin-0 particles, it is possible to generalize our results to arbitrary dimensions, as already discussed in the literature on modified theories of gravity in the monopole-monopole sector (see \cite{Accioly_el_al_CQG_2015,Accioly_et_al_PRD2018} and references therein). However, for the spin-$\frac{1}{2}$ case and its spin-dependent contributions, this extension is a non-trivial task, since the definition of spin depends on the dimension under consideration. For instance, in space-times of odd dimension with parity symmetry (typical of electromagnetic and gravitational interactions), a reducible representation can be adopted in order to reconcile parity symmetry with massive fermions. In these cases, new spin-dependent effects have been discussed \cite{Dorey_Mavromatos_NPB,Leo_Helayel_PRD}. In other words, the inclusion of spin-dependent interactions should be done carefully for each particular dimension, especially when discrete symmetries are desired. \subsection{Comparisons with NR electromagnetic potentials} \label{partial_conclusions_subsection_2} Before we proceed with specific form factors motivated by quantum gravity models, it is interesting to compare our results with the case of NR potentials mediated by the electromagnetic interaction. Adopting the same strategy as in the gravitational case, we consider the following template for the electromagnetic effective action \begin{align} \Gamma_{\textmd{EM}}[A] = -\frac{1}{4}\int d^4x \, F_{\mu\nu}(1+H(\Box))F^{\mu\nu} - \frac{1}{2\alpha}\int d^4x \,(\partial_\mu A^\mu)^2 + \mathcal{O}(F^3) , \end{align} where $H(\Box)$ denotes a form factor modeling quantum corrections up to $\mathcal{O}(A^2)$.
In this case, the ``full'' photon propagator takes the parameterized form \begin{align} \langle A_\mu(-q) A_\nu(q) \rangle = -\frac{i}{q^2(1+H(-q^2))} \eta_{\mu\nu} + i\Delta_{\mu\nu}(q)\,, \end{align} where $\Delta_{\mu\nu}(q)$ indicates those contributions that vanish when contracted with external vector currents. This propagator maps onto the quantities defined in Ref. \cite{Gustavo_Pedro_Leo_PRD}; therefore, we can readily import the results from that reference, leading to the following expressions \begin{subequations} \begin{align}\label{EM_pot_mon-mon_s=0} V_{\textmd{EM, mon-mon}}^{(s=0)}(r) = e_1 e_2 I^{\textmd{EM}}_1(r) \,, \end{align} \begin{align} V_{\textmd{EM, vel-vel}}^{(s=0)}(r) = \frac{e_1 e_2 }{m_1 m_2} \, \vec{p}^{\,2} I^{\textmd{EM}}_1(r) \,, \end{align} \end{subequations} for spin-0 particles, and \begin{subequations} \begin{align}\label{EM_pot_mon-mon_s=1/2} V_{\textmd{EM, mon-mon}}^{(s=1/2)}(r) = e_1 e_2 \left[ I^{\textmd{EM}}_1(r) - \frac{1}{8} \left( \frac{1}{m_1^2} + \frac{1}{m_2^2} \right) I^{\textmd{EM}}_0(r) \right]\,, \end{align} \begin{align} V_{\textmd{EM, vel-vel}}^{(s=1/2)}(r) = \frac{e_1 e_2 }{m_1 m_2} \, \vec{p}^{\,2} I^{\textmd{EM}}_1(r)\,, \end{align} \begin{align} V_{\textmd{EM, spin-orbit}}^{(s=1/2)}(r) = e_1 e_2 \, \left[ \left( \frac{1}{2m_1^2}+\frac{1}{m_1 m_2} \right) \vec{S}_1 + \left( \frac{1}{2m_2^2} +\frac{1}{m_1 m_2} \right) \vec{S}_2 \right] \cdot \frac{\vec{L}}{r} \frac{d}{dr} I_1^{\textmd{EM}}(r) , \end{align} \begin{align} V_{\textrm{EM, spin-spin}}^{(s=1/2)}(r) = e_1 e_2 \bigg[ \!-\! \frac{\vec{S}_1 \cdot \vec{S}_2}{m_1 m_2} \, I_0^{\textmd{EM}}(r) + \sum_{i,j=1}^{3} \frac{(\vec{S}_1)_i \, (\vec{S}_2)_j}{m_1 m_2} \, I_{ij}^{\textmd{EM}}(r) \bigg] \,, \end{align} \end{subequations} in the case of spin-1/2 particles. The integrals $I^{\textmd{EM}}_n(r)$ and $I_{ij}^{\textmd{EM}}(r)$ follow the same definition as Eqs.
\eqref{I_a_n} and \eqref{I_ij}, but replacing $Q_a(\vec{q}^{\,2})$ by $1+H(\vec{q}^{\,2})$. As we can observe, the NR potentials mediated by the electromagnetic interaction exhibit some similarities with the gravitational case. In the monopole-monopole sector, Eqs. \eqref{EM_pot_mon-mon_s=0} and \eqref{EM_pot_mon-mon_s=1/2}, we note the appearance of universal leading-order contributions (terms involving $I^{\textmd{EM}}_1(r)$) for both spin-0 and spin-1/2 scattered particles. On the other hand, in contrast with the gravitational case, the additional non-universal contribution (involving $I^{\textmd{EM}}_0(r)$) appears only in the spin-1/2 case. Beyond the monopole-monopole contribution, we note that the velocity-velocity sector, as in the gravitational case, exhibits the same result for both spin-0 and spin-1/2 particles. For the spin-orbit and spin-spin contributions, present only in the case of spin-1/2 particles, we observe the same kind of interaction structures (terms with $\vec{S}_i\cdot \vec{L}$, $\vec{S}_1 \cdot \vec{S}_2$ and $(\vec{S}_1)_i \, (\vec{S}_2)_j I_{ij}^{(2)} $) for both the electromagnetic and gravitational potentials. \section{Form Factors Motivated by Quantum Gravity Models} \label{Sec_App} \indent The results presented in the previous section carry model-independent features at the level of the graviton propagator, which allows us to study structural aspects of quantum contributions to the gravitational potential beyond the monopole-monopole sector. However, a more detailed analysis depends on the evaluation of the basic integrals defined in Appendix \ref{Appendix_int} for specific form factors. In what follows, we work out some examples with form factors motivated by recent investigations in the context of non-perturbative approaches to quantum gravity.
\subsection{Form factors motivated by CDT data} \label{quadratic_case} \indent As a first example we consider form factors motivated by an approach to reconstructing the effective action for quantum gravity based on data obtained via Causal Dynamical Triangulation (CDT). In Ref. \cite{Knorr}, the authors put forward a reverse-engineering procedure to reconstruct the effective action, starting from a Euclidean template of the form \begin{align}\label{eff_Action_Knorr} \Gamma = \frac{2}{\kappa^2} \int d^4x\, \sqrt{g} \bigg( 2\Lambda - R - \frac{b^2}{6} R \,\Box^{-2} R \bigg) \,, \end{align} and adjusting the free parameter $b$ by matching the autocorrelation of the 3-volume operator with CDT data. The same class of effective action has also been motivated by cosmological considerations. In fact, in Ref. \cite{Maggiore_1} the authors proposed an effective model with non-localities of the type $R \, \Box^{-2} R$ as an alternative model for dark energy. An extended version involving a non-local term of the type $C_{\mu\nu\alpha\beta} \, \Box^{-2} C^{\mu\nu\alpha\beta}$ can be found in Ref. \cite{Maggiore_2}. For an up-to-date overview of the various aspects of cosmological evolution driven by this class of non-localities, see Ref. \cite{Maggiore_3}. Furthermore, contributions like $R \, \Box^{-2} R$ and $C_{\mu\nu\alpha\beta} \, \Box^{-2} C^{\mu\nu\alpha\beta}$ were obtained earlier as a consequence of a decoupling mechanism in a renormalization group analysis \cite{Shapiro_Ex1}. In the present paper we consider the same type of non-locality appearing in Eq. \eqref{eff_Action_Knorr}, but we also include the term $C_{\mu\nu\alpha\beta} \, \Box^{-2} C^{\mu\nu\alpha\beta}$. In this sense, we consider the following class of form factors \begin{eqnarray}\label{Form_factor_Ex1} F(\Box) = - \frac{\rho_0}{\Box^2} \, , \qquad \textmd{and} \qquad W(\Box) = - \frac{\rho_2}{\Box^2} \, , \end{eqnarray} with $\rho_0$ and $\rho_2$ being positive parameters.
Before proceeding, we must clarify some points regarding these form factors. First of all, we note that the reconstruction approach proposed in \cite{Knorr} is, in the context of this paper, simply used as a motivation for choosing the functional form of $F(\Box)$ and $W(\Box)$. Thus, we do not impose any restriction on the parameters $\rho_0$ and $\rho_2$ coming from the matching-template approach discussed in Ref. \cite{Knorr}. It is also important to point out that the effective action in Eq. \eqref{eff_Action_Knorr} is written with Euclidean signature, and the passage to Lorentzian signature is done by means of a ``naive'' Wick rotation. We emphasize, however, that a completely well-defined Wick rotation in quantum gravity remains an open problem and is not addressed here. Considering the class of form factors introduced above, as well as the $Q$-factors defined in Eqs. \eqref{def_Q2} and \eqref{def_Q0}, the relevant integrals contributing to the NR gravitational potential are given by \begin{subequations} \begin{align} I^{(s)}_1(r) = \int \frac{d^3\vec{q}}{(2 \pi)^3} \, \frac{1}{\vec{q}^{\,2} + \mu_s^2}\, e^{i \vec{q} \cdot \vec{r}} = \frac{e^{-\mu_s r}}{4 \pi r} \label{Knorr_I_1} , \end{align} \begin{align} I^{(s)}_0(r) = \int \frac{d^3\vec{q}}{(2 \pi)^3} \, \frac{ \vec{q}^{\,2} }{\vec{q}^{\,2} + \mu_s^2}\, e^{i \vec{q} \cdot \vec{r}} = \delta^3 (\vec{r})- \mu_s^2 \, \frac{e^{-\mu_s r}}{4 \pi r} \label{Knorr_I_0} , \end{align} \begin{align} I_{ij}^{(s)}(r) &= \int \frac{d^3\vec{q}}{(2 \pi)^3} \, \frac{ \vec{q}_i \vec{q}_j}{ \vec{q}^{\,2} + \mu_s^2 } \,e^{i \vec{q} \cdot \vec{r}} \nonumber \\ &= \frac{1}{3} \delta_{ij} \delta^3(\vec{r}) + \bigg\{ (1 + \mu_s r) \delta_{ij} - (3 + 3 \mu_s r + \mu_s^2 r^2) \frac{x_i x_j}{r^2} \bigg\} \frac{e^{-\mu_s r}}{4 \pi r^3} \, , \end{align} \end{subequations} where we define $\mu_s^2 = 2(\rho_s - \Lambda)$.
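Equation \eqref{Knorr_I_1} is the standard Yukawa transform. As a numerical cross-check (plain Python, with illustrative parameter values of our own choosing), one can transform $e^{-\mu r}/(4\pi r)$ back to momentum space, where the angular integration reduces the transform to an exponentially damped one-dimensional integral:

```python
import math

def fourier_of_yukawa(q, mu, rmax=40.0, n=40000):
    """int d^3r e^{-i q.r} e^{-mu r}/(4 pi r) = (1/q) int_0^inf e^{-mu r} sin(q r) dr,
    evaluated with a simple trapezoidal rule (the integrand decays exponentially)."""
    h = rmax / n
    total = sum(math.exp(-mu * k * h) * math.sin(q * k * h) for k in range(1, n))
    return h * total / q

mu, q = 0.7, 1.3  # illustrative values, in units of an arbitrary mass scale
assert abs(fourier_of_yukawa(q, mu) - 1.0 / (q * q + mu * mu)) < 1e-6
```

In the limit $\mu \to 0$ the momentum-space expression reduces to $1/\vec{q}^{\,2}$, i.e. $I_1^{(s)}(r) \to 1/(4\pi r)$, recovering the Newtonian profile.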
We shall consider $\rho_s > \Lambda$ such that the non-local form factors \eqref{Form_factor_Ex1} introduce mass terms in the graviton propagator. The resulting contributions to the NR potential are written as follows (throwing away Dirac delta terms) \begin{subequations} \begin{align} V_{\textmd{mon-mon}}^{(s=0)}(r) = &-\frac{\kappa^2 m_1 m_2}{24 \pi \,r} \left( e^{-\mu_2 r} - \frac{1}{4}e^{-\mu_0 r}\right) \nonumber \\ &+\frac{\kappa^2 m_1 m_2}{192\pi \,r} \bigg(\frac{1}{m_1^2} + \frac{1}{m_2^2}\bigg) \left( \mu_2^2 \, e^{-\mu_2 r} + \frac{5}{4} \mu_0^2 \,e^{-\mu_0 r} \right) \,, \end{align} \begin{align} V_{\textrm{vel-vel}}^{(s=0)}(r) = -\frac{\kappa^2 m_1 m_2\,\vec{p}^{\,2}}{24\pi r} \bigg\{ \bigg(\frac{1}{m_1^2} + \frac{1}{m_2^2}\bigg) \left( e^{-\mu_2 r} + \frac{1}{8} e^{-\mu_0 r}\right) + \frac{3}{m_1 m_2} e^{-\mu_2 r} \bigg\} \,, \end{align} \end{subequations} for spin-0 particles, and \begin{subequations} \begin{align} V_{\textmd{mon-mon}}^{(s=1/2)}(r) = -\frac{\kappa^2 m_1 m_2}{24 \pi \,r} \left( e^{-\mu_2 r} - \frac{1}{4}e^{-\mu_0 r}\right) \,, \end{align} \begin{align} V_{\textrm{vel-vel}}^{(s=1/2)}(r) = -\frac{\kappa^2 m_1 m_2\,\vec{p}^{\,2}}{24\pi r} \bigg\{ \bigg(\frac{1}{m_1^2} + \frac{1}{m_2^2}\bigg) \left( e^{-\mu_2 r} + \frac{1}{8} e^{-\mu_0 r}\right) + \frac{3}{m_1 m_2} e^{-\mu_2 r} \bigg\} \,, \end{align} \begin{align} V_{\textrm{spin-orbit}}^{(s=1/2)}(r) &= \frac{\kappa^2 m_1 m_2}{24\pi r^3} \bigg(\frac{1}{m_1^2} \vec{S}_1 \cdot\vec{L} + \frac{1}{m_2^2} \vec{S}_2\cdot\vec{L} + \frac{3(\vec{S}_1+\vec{S}_2)\cdot\vec{L}}{2\,m_1 m_2} \bigg) \, (1+r\mu_2)e^{-\mu_2 r} \nonumber\\ &\,+\frac{\kappa^2 m_1 m_2}{192\pi r^3} \bigg(\frac{1}{m_1^2} \vec{S}_1 \cdot\vec{L} + \frac{1}{m_2^2} \vec{S}_2\cdot\vec{L}\bigg) \, (1+r\mu_0)e^{-\mu_0 r} \, , \end{align} \begin{align} V_{\textrm{spin-spin}}^{(s=1/2)}(r) = &-\frac{\kappa^2}{32\pi \,r^3} \vec{S}_1 \cdot \vec{S}_2\, (1+r\mu_2+r^2 \mu_2^2) e^{-\mu_2 r} \nonumber \\ &+\frac{\kappa^2}{32\pi \,r^3} 
(\hat{r}\cdot\vec{S}_1 )\, (\hat{r}\cdot \vec{S}_2)\, (3+3r\mu_2+r^2 \mu_2^2) e^{-\mu_2 r} \,, \end{align} \end{subequations} in the case of spin-1/2 scattered particles. As we observe, both the monopole-monopole and velocity-velocity sectors are composed exclusively of terms with the usual $r^{-1}$ scaling, but with an additional exponential damping resulting from the mass-like terms in the graviton propagator. By a simple comparison of $V_{\textmd{mon-mon}}^{(s)}(r)$ and $V_{\textmd{vel-vel}}^{(s)}(r)$ we quickly infer the suppression of the velocity-velocity contribution due to the ``overall'' ratio $\vec{p}^{\,2}/(m_im_j)$ ($\ll 1$ in the NR limit). Since both sectors exhibit similar $r$-dependencies, the dominance of $V_{\textmd{mon-mon}}^{(s)}(r)$ over $V_{\textmd{vel-vel}}^{(s)}(r)$ is valid for all distance scales (at least, within our approximations). Before we move on to spin-dependent contributions, let us have a closer look at the monopole-monopole sector associated with spin-0 particles. As anticipated in the previous section, $V_{\textmd{mon-mon}}^{(s=0)}(r)$ shows an additional contribution beyond the usual terms appearing in the static limit. In the present example, this extra contribution is given by \begin{align} \Delta V_{\textmd{mon-mon}}^{(s=0)}(r) = \frac{\kappa^2 m_1 m_2}{192\pi \,r} \bigg(\frac{1}{m_1^2} + \frac{1}{m_2^2}\bigg) \left( \mu_2^2 \, e^{-\mu_2 r} + \frac{5}{4} \mu_0^2 \,e^{-\mu_0 r} \right) \,. \end{align} The suppression of this term is readily understood from physical considerations involving the static limit, namely \begin{align} V_{\textmd{static}}(r) = -\frac{\kappa^2 m_1 m_2}{24 \pi \,r} \left( e^{-\mu_2 r} - \frac{1}{4}e^{-\mu_0 r}\right) . \end{align} In order to avoid significant deviations from the usual Newtonian potential within regions where the latter has been experimentally verified, we impose upper bounds on the mass parameters $\mu_2$ and $\mu_0$.
A rough estimate is obtained by assuming the Newtonian potential to be a faithful description up to the solar-system radius. Taking the solar-system radius as $r_\textmd{S} \sim 10 \,\textmd{AU}$, we recover the appropriate Newtonian potential (for $r<r_\textmd{S}$) provided that $\mu_i \,r_\textmd{S} \ll 1$, leading to the rough limit $\mu_{i} \ll 10^{-25} \,\textmd{MeV} $. In this case, the suppression of the extra term $\Delta V_{\textmd{mon-mon}}^{(s=0)}(r)$ occurs as a consequence of the ratios $\mu_i^2/m_j^2$ being much smaller than one, even for the scattering of elementary particles (with masses of order $\sim \textmd{MeV}$). Concerning the spin-dependent contributions, $V_{\textrm{spin-orbit}}^{(s=1/2)}(r)$ and $V_{\textrm{spin-spin}}^{(s=1/2)}(r)$, we observe the appearance of terms with different scaling behaviors in comparison with the previously discussed sectors. In particular, we note that the spin-orbit sector involves interactions proportional to $r^{-2}$ and $r^{-1}$ (recall that $\vec{L} \sim \vec{r}$), while spin-spin interactions also involve terms scaling with $r^{-3}$. In all cases we observe the exponential damping as well. The long-range potential is dominated by $r^{-1}$-terms, which receive contributions from all the sectors investigated in the paper. Nevertheless, even in the set of interactions scaling with $r^{-1}$, the leading-order long-range contribution corresponds to the usual static term in the monopole-monopole sector. In this case, the remaining $r^{-1}$-terms are suppressed by factors involving $\vec{p}^{\,2}/(m_im_j)$ and $\mu_i^2/m_j^2$. The situation turns out to be more intriguing in the short-distance regime, since in this case we observe different dominant sectors for spin-0 and spin-1/2 particles.
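The order of magnitude of the bound quoted above can be reproduced from $\mu_i \, r_\textmd{S} \ll 1$ in natural units; the constants below are the standard CODATA value of $\hbar c$ and the IAU astronomical unit:

```python
hbar_c = 197.3269804e-15   # hbar * c in MeV . m (CODATA)
AU = 1.495978707e11        # astronomical unit in m (IAU)

r_S = 10 * AU              # solar-system radius adopted in the text
mu_at_rS = hbar_c / r_S    # mass scale satisfying mu * r_S = 1, expressed in MeV
assert 1e-25 < mu_at_rS < 2e-25   # ~1.3e-25 MeV, hence mu_i << 1e-25 MeV
```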
In the spin-0 case, the leading-order short-range contribution comes from the usual static terms in the monopole-monopole sector, \begin{align} V_{\textmd{short-range}}^{(s=0)}(r) = -\frac{\kappa^2 m_1 m_2}{32 \pi \,r} + \cdots \end{align} In the case of spin-1/2 particles, on the other hand, the dominant contribution appears in the spin-spin interactions, namely \begin{align} V_{\textmd{short-range}}^{(s=1/2)}(r) = -\frac{\kappa^2}{32\pi \,r^3} \left( \vec{S}_1 \cdot \vec{S}_2 -3 (\hat{r}\cdot\vec{S}_1 )\, (\hat{r}\cdot \vec{S}_2) \right) + \cdots\,. \end{align} In both cases, the leading-order short-range contribution does not involve any parameter associated with the form factors considered in this example, see Eq. \eqref{Form_factor_Ex1}. This fact may be interpreted as a direct consequence of the infrared nature of this class of form factors. \subsection{Form factors motivated by FRG approach for quantum gravity} \label{linear_case} \indent In the second explicit example we consider form factors motivated by a recent strategy employed in the functional renormalization group (FRG) approach to asymptotically safe quantum gravity \cite{Bosma_PRL_123}. The main idea is to adopt an expansion of a coarse-grained version of the effective action, $\Gamma_k$, in terms of $k$-dependent form factors, where $k$ stands for an infrared cutoff scale introduced in the realm of the FRG framework. Within this formulation, it is possible to use the FRG equation to derive (integro-differential) flow equations for the form factors \cite{Bosma_PRL_123,Knorr_Form_Factors}. This strategy was applied in the search for an asymptotically safe solution in terms of form factors. After some approximations, the authors of Ref.
\cite{Bosma_PRL_123} found a fixed point solution that could be fitted into a simple functional dependence of the form factor $W(\Box)$, namely \begin{align} W (\Box) = \frac{\rho}{ \Box + \beta} + w \, , \label{W_factor_Bosma} \end{align} with the parameters $\rho$, $\beta$ and $w$ being adjusted according to numerical solutions of the fixed point equations. It is worth mentioning that, due to the approximations employed in Ref. \cite{Bosma_PRL_123}, the form factor associated with the sector $R F(\Box) R$ decouples from the flow equation and is set to zero at the level of the flowing effective action $\Gamma_k$. Keeping this in mind, in this section we mainly focus on the contribution of $W(\Box)$ to the NR potentials. For the sake of simplicity, in this example we set the cosmological constant to zero ($\Lambda = 0$). Furthermore, we should also emphasize that our analysis involves two important assumptions: (i) while the result obtained in Ref. \cite{Bosma_PRL_123} is based on a non-perturbative Euclidean approach, we consider a naive continuation to Minkowski space-time; (ii) we assume that the shape of the form factor $W(\Box)$ remains the same once we integrate down to $k=0$. For these reasons, we explore other regions of the parameter space $\rho$, $\beta$ and $w$ instead of restricting ourselves to the particular values obtained in Ref. \cite{Bosma_PRL_123}. Taking into account this class of form factors, the relevant integrals contributing to the spin-2 sector of the NR potential involve the following term \begin{align} \frac{1}{Q_2} = - \frac{1}{2w} \frac{\vec{q}^{\,2} + \beta}{\left( \vec{q}^{\, 2} + A_+ \right) \left( \vec{q}^{\, 2} + A_- \right)} \, , \label{inverse_Q2_Bosma} \end{align} which can be mapped, by means of partial fraction decomposition, onto the standard integrals reported in Appendix \ref{Appendix_int}.
Note that we define \begin{align} A_{\pm} = \frac{\left( - 1 + 2 \rho + 2 \,w \beta \right) \pm \sqrt{ \left( - 1 + 2 \rho + 2 w \beta \right)^2 + 8 w \beta } }{4 w} \, . \label{def_A_pm} \end{align} Before we discuss the main results of this section, it is important to observe that an appropriate mapping in terms of the standard integrals \eqref{int_1}-\eqref{int_3} requires some restrictions on $A_\pm$. Therefore, one must have a closer look at the dependence of $A_{\pm}$ on the parameters $\rho$, $w$ and $\beta$. In particular, we want to probe the existence of regions in the parameter space $\rho$, $w$ and $\beta$ where one of the following conditions is verified \begin{itemize} \item[(i)] $A_{\pm} \in \mathbb{R} $, with $A_\pm > 0$, \item[(ii)] $A_{\pm} \in \mathbb{C} $, such that $\textmd{Re}(A_\pm) > 0$ and $A_\pm^* = A_\mp$. \end{itemize} In the first case, the resulting potential is composed of a sum of terms with $r$-dependence characterized by $1/ r^\alpha$ and $e^{- \sqrt{A_\pm}\,r}/ r^\alpha$ (with $\alpha=1,2,3$). When $A_{\pm}$ takes complex values we also observe oscillatory terms (modulated by an exponential damping) coming from the imaginary part of $A_{\pm}$. In this case, the additional restriction $A_\pm^* = A_\mp$ appears as a reality condition for the resulting potential. In Fig. \ref{RegionPlots} we show the existence of regions in the parameter space defined by $\rho$, $w$ and $\beta$ where the aforementioned conditions are verified. We note that, since the non-trivial dependence of $A_\pm$ occurs with respect to $\rho$ and the quantity $\beta w$, we summarize the results in terms of two region-plots in the plane $(\rho,\beta \,|w|)$. Apart from $\rho$ and $\beta \,|w|$, the shape of the viable regions depends on the sign of $w$. Both signs of $w$ admit dense regions satisfying conditions of type-i (red) or type-ii (blue).
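The classification into type-i and type-ii regions can be scripted directly from Eq. \eqref{def_A_pm}. A minimal sketch (the sample parameter values are illustrative only, not the fitted values of Ref. \cite{Bosma_PRL_123}; the consistency checks use the fact that $A_\pm$ are the roots of $2w A^2 + (1 - 2\rho - 2w\beta)A - \beta = 0$, so that $A_+ A_- = -\beta/2w$ and $A_+ + A_- = (-1 + 2\rho + 2w\beta)/2w$):

```python
import cmath

def A_pm(rho, w, beta):
    """A_pm of Eq. (def_A_pm); complex sqrt so both regions are handled."""
    B = -1 + 2 * rho + 2 * w * beta
    s = cmath.sqrt(B * B + 8 * w * beta)
    return (B + s) / (4 * w), (B - s) / (4 * w)

def classify(rho, w, beta, tol=1e-12):
    """Return 'i' (both real and positive), 'ii' (conjugate pair with
    positive real part), or None, following the conditions in the text."""
    Ap, Am = A_pm(rho, w, beta)
    if abs(Ap.imag) < tol and abs(Am.imag) < tol:
        return "i" if Ap.real > 0 and Am.real > 0 else None
    if Ap.real > 0 and abs(Ap - Am.conjugate()) < tol:
        return "ii"
    return None

# Consistency checks against the quadratic satisfied by A_+ and A_-:
rho, w, beta = 0.2, -1.0, 0.5
Ap, Am = A_pm(rho, w, beta)
assert abs(Ap * Am + beta / (2 * w)) < 1e-12
assert abs(Ap + Am - (-1 + 2 * rho + 2 * w * beta) / (2 * w)) < 1e-12
print(classify(-0.5, -1.0, 0.5), classify(0.2, -1.0, 0.5))  # -> i ii
```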
\begin{figure}[ht] {\includegraphics[width = 2.5in]{RegionPlot_w_pos}} \qquad {\includegraphics[width = 2.5in]{RegionPlot_w_neg}} \caption{Regions in the space of parameters ($\rho$, $\beta |w|$) for positive and negative values of $w$. The red regions correspond to values for which $A_{\pm}$ is real and positive (type-i). The blue region (with horizontal dashing) indicates values where the type-ii restriction is verified ($A_{\pm} \in \mathbb{C} $, such that $\textmd{Re}(A_\pm) > 0$ and $A_\pm^* = A_\mp$).} \label{RegionPlots} \end{figure} Since the complete expressions are quite long, here we shall not report the full results for $V^{(s=0)}(r)$ and $V^{(s=1/2)}(r)$. Nonetheless, we note that they are directly obtained in terms of Eqs. \eqref{spin_0_pot} and \eqref{spin_meio_pot} by putting together the explicit integrals \begin{subequations} \begin{align} I^{(2)}_1(r) =& - \frac{1}{2w} \, \frac{1}{\left( m_+^2 - m_-^2 \right)} \Bigg\{ \frac{ \left( m_+^2 - m_-^2 \right)}{m_+^2 m_-^2} \frac{\beta}{4 \pi r} \nonumber \\ &- \left( 1 - \frac{\beta}{m_+^2} \right) \frac{e^{- m_+ r}}{4 \pi r} + \left( 1 - \frac{\beta}{m_-^2} \right) \frac{e^{- m_- r}}{4 \pi r} \Bigg\} \, , \label{I2_1_Bosma} \end{align} \begin{align} I^{(2)}_0(r) = -\frac{1}{2w} \, \frac{1}{\left( m_+^2 - m_-^2 \right)} \Bigg\{ \left( m_+^2 - \beta \right) \frac{e^{- m_+ r}}{4 \pi r} - \left( m_-^2 - \beta \right) \frac{e^{- m_- r}}{4 \pi r} \Bigg\} \, , \label{I2_0_Bosma} \end{align} \begin{align} I^{(2)}_{ij}(r) = &- \frac{1}{2w} \, \frac{1}{\left( m_+^2 - m_-^2 \right)} \Bigg\{ \beta \, \frac{ \left( m_+^2 - m_-^2 \right)}{m_+^2 m_-^2} \left( \delta_{ij} - 3 \frac{x_i x_j}{r^2} \right) \frac{1}{4 \pi r^3} \nonumber \\ &- \left( 1 - \frac{\beta}{m_+^2} \right) \bigg[ \left( 1 + m_+ r\right) \delta_{ij} -\left( 3 + 3 m_+ r + m_+^2 r^2 \right) \frac{x_i x_j}{r^2} \bigg] \frac{e^{- m_+ r}}{4 \pi r^3} \nonumber \\ &+ \left( 1 - \frac{\beta}{m_-^2} \right) \bigg[ \left( 1 + m_- r\right) \delta_{ij} -
\left( 3 + 3 m_- r + m_-^2 r^2 \right) \frac{x_i x_j}{r^2} \bigg] \frac{e^{- m_- r}}{4 \pi r^3} \Bigg\} \, ,\label{I2_ij_Bosma} \end{align} \end{subequations} in which $m_\pm = \sqrt{A_\pm}$ and we assume $m_+^2-m_-^2 \neq 0$. In this case, the contact terms ($\sim \delta^3(\vec{r})$) resulting from integrals of the form \eqref{int_2} and \eqref{int_3} completely cancel out in the final expression. The scaling of the different sectors contributing to the NR potentials is summarized in Table \ref{Table_1}. \begin{table}[h] \begin{tabular}{|c|c|c|c|c|c|c|} \hline \hline & $\quad\,\, 1/r \quad\,\,$ & $\quad\, 1/r^{2} \quad\,$ & $\quad\, 1/r^{3} \quad\,$ & $\,\,e^{-m_\pm r}/r\,\,$ & $\,\,e^{-m_\pm r}/r^2\,\,$ & $\,\,e^{-m_\pm r}/r^3\,\,$ \\\hline mon-mon & $\checkmark_{0,\,1/2}$ & & & $\checkmark_{0,\,1/2}$ & & \\\hline vel-vel & $\checkmark_{0,\,1/2}$ & & & $\checkmark_{0,\,1/2}$ & & \\\hline $\vec{L}\cdot\vec{S}_{1,2}$ & & & $\checkmark_{1/2}$ & & $\checkmark_{1/2}$ & $\checkmark_{1/2}$ \\\hline $\vec{S}_1\cdot\vec{S}_2$ & & & $\checkmark_{1/2}$ & $\checkmark_{1/2}$ & $\checkmark_{1/2}$ & $\checkmark_{1/2}$ \\\hline $(\hat{r}\cdot\vec{S}_1)(\hat{r}\cdot\vec{S}_2)$ & & & $\checkmark_{1/2}$ & $\checkmark_{1/2}$ & $\checkmark_{1/2}$ & $\checkmark_{1/2}$ \\ \hline \hline \end{tabular} \caption{Scaling behavior of the different sectors contributing to the NR potentials $V^{(s=0)}(r)$ and $V^{(s=1/2)}(r)$. 
The subscript indicates whether the corresponding behavior appears in the spin-0 and/or spin-1/2 case.} \label{Table_1} \end{table} In the static regime the only remaining contribution comes from the monopole-monopole sector, resulting in the following expression \begin{align}\label{Static_Pot_Ex2} V_{2,\textmd{static}}(r) = -\frac{\kappa^2 m_1 m_2}{24\pi r} \left( 1 - \frac{1}{2w} \frac{1-\beta/m^2_{-}}{m^2_{+}-m^2_{-}} e^{-m_{-}r} + \frac{1}{2w} \frac{1-\beta/m^2_{+}}{m^2_{+}-m^2_{-}} e^{-m_{+}r} \right) \,, \end{align} with the subscript ``2'' indicating that we count only spin-2 contributions in the graviton propagator. Deviations from the $1/r$-behavior within experimentally tested scales are avoided when $m_{\pm} r_{\textmd{min}} \gg 1$, with $r_{\textmd{min}}$ being the smallest distance at which the Newtonian $1/r$-law has been tested (see Refs. \cite{Short_distance_1,Short_distance_2,Short_distance_3} for short-distance probes of the Newtonian potential). In this case, the exponential factors strongly suppress the second and third terms in Eq. \eqref{Static_Pot_Ex2} and the large-distance behavior is dominated by the $1/r$-contribution. As noted in Ref. \cite{Bosma_PRL_123}, the static potential in Eq. \eqref{Static_Pot_Ex2} has a particularly interesting behavior at small distances. In this regime, the $1/r$ terms cancel out among different contributions, resulting in a finite potential at $r=0$. Beyond the static limit, the NR potential receives multiple contributions scaling with different $r$-dependencies, as summarized in Table \ref{Table_1}. In the large-distance regime, even if we include contributions beyond the monopole-monopole sector, the leading-order term, in both the spin-0 and spin-1/2 cases, corresponds to the usual $1/r$ decay, \begin{align}\label{Long_range_Ex2} V_{2,\,\textmd{long-range}}(r) = -\frac{\kappa^2 m_1 m_2}{24\pi r} + \cdots \,.
\end{align} In this limit, all remaining terms are suppressed either by exponential decay (with $m_{\pm} r \gg 1$) or by the sub-leading behavior of $1/r^3$ in comparison with $1/r$. In the short-distance regime we observe more intriguing features once contributions beyond the monopole-monopole sector are taken into account. In this case, the leading-order terms are given by \begin{subequations} \begin{align}\label{Short-range_Ex2_s=0} V_{\textmd{short-range}}^{(s=0)}(r) = \frac{\kappa^2\, m_1 m_2}{384 \pi w \,r} \left( \frac{1}{m_1^2} + \frac{1}{m_2^2}\right) + \cdots \,, \end{align} \begin{align}\label{Short-range_Ex2_s=1/2} V_{\textmd{short-range}}^{(s=1/2)}(r) = -\frac{\kappa^2}{128\pi w\,r} \left( \vec{S}_1 \cdot \vec{S}_2 + (\hat{r}\cdot\vec{S}_1)(\hat{r}\cdot\vec{S}_2) \right) + \cdots . \end{align} \end{subequations} Similarly to the example explored in the previous section, we also note different leading-order contributions to $V_{\textmd{short-range}}^{(s=0)}(r)$ and $V_{\textmd{short-range}}^{(s=1/2)}(r)$. However, a different aspect in the present case is that the leading-order terms at small distances exhibit a dependence on the form factor $W(\Box)$ through the parameter $w$ in Eqs. \eqref{Short-range_Ex2_s=0} and \eqref{Short-range_Ex2_s=1/2}. This fact indicates that the form factor studied in this section plays an important role in the UV behavior of the NR potential, even in the presence of terms beyond the monopole-monopole sector. Despite this fact, our results for the short-range regime point out an important difference with respect to the static case, Eq. \eqref{Static_Pot_Ex2}, namely, the cancellation of the Newtonian singularity at $r=0$ does not survive beyond the static limit. \section{Remarks on the cancellation of Newtonian singularities} \label{Singularities} The observation at the end of the previous section triggers a question regarding the cancellation of Newtonian singularities.
In particular, it would be interesting to investigate whether the regular behavior at $r=0$, observed in higher-derivative models of gravity \cite{Stelle,Accioly_et_al_PRD2018,Cancellation_1,Cancellation_2}, persists after the inclusion of contributions beyond the static limit. This particular test is easily addressed in terms of the results presented in Section \ref{Sec_Potentials}; however, it requires a slight modification in the way we interpret our framework. In the case of higher-derivative models, the form factor expansion appears at the level of the classical action \cite{Accioly_et_al_PRD2018,Cancellation_1,Cancellation_2}, given by \begin{align} S_{\textmd{HD}}[g_{\mu \nu}] = \frac{2}{\kappa^2} \int d^4 x \, \sqrt{-g} \left( - R - \frac{1}{3} R F(\Box) R + C_{\mu \nu \alpha \beta} W(\Box) C^{\mu \nu \alpha \beta} \right) + \mathcal{O}(\mathcal{R}^3) \,, \label{Class_action_HD} \end{align} with polynomial form factors ($p,q \in \mathbb{N}$) \begin{align}\label{Form_Factor_Poly} F(\Box) = \sum_{n=0}^p f_n \,(-\Box)^n \qquad \textmd{and} \qquad W(\Box) = \sum_{n=0}^q w_n \,(-\Box)^n \,. \end{align} In this case, all the results presented in Section \ref{Sec_Potentials} remain unchanged, keeping in mind, however, that Eq. \eqref{prop_simpl} should now be interpreted as the tree-level graviton propagator. The relevant integrals appearing in Eqs. \eqref{spin_0_pot} and \eqref{spin_meio_pot} are computed by means of the partial fraction decomposition \begin{align}\label{Decomposition_HD} \frac{1}{\vec{q}^{\,2}\,Q_a(-\vec{q}^{\,2})} = \frac{1}{\vec{q}^{\,2}} + \sum_{i=1}^{\mathcal{N}_a} \frac{\mathcal{R}_{i}^{(a)}}{\vec{q}^{\,2}+\mu^2_{a,i}} \, , \qquad \textmd{with $a=0,2$}\,, \end{align} where $\mathcal{N}_a = p\,\delta_{a,0}+q\,\delta_{a,2}+1$ and we define the residues \begin{align} \mathcal{R}_{n}^{(a)} = -\prod_{ \substack{l = 1\\ l \neq n} }^{\mathcal{N}_a} \frac{ \mu^2_{a,l} }{ \mu^2_{a,l} - \mu^2_{a,n} } \,.
\end{align} The mass parameters $\mu_{a,l}$ are defined as the zeros of the $Q_a$-factors, namely $Q_a(\mu^2_{a,l})=0$. In order to avoid complications with degenerate poles we assume $\mu^2_{a,i} \neq 0$ and $ \mu^2_{a,i} \neq \mu^2_{a,j}$ if $i\neq j$. In such a case, we can decompose $I_n^{(a)}(r)$ and $I_{ij}^{(a)}(r)$ in terms of the standard integrals \eqref{int_1}-\eqref{int_3}, as displayed below \begin{subequations} \begin{align}\label{Int_HD_1} I_n^{(a)}(r) = \mathcal{I}_n(r,0)+ \sum_{l=1}^{\mathcal{N}_a} \mathcal{R}_{l}^{(a)} \,\mathcal{I}_n(r,\mu_{a,l})\,, \end{align} \begin{align}\label{Int_HD_2} I_{ij}^{(a)}(r) = \mathcal{I}_{ij}(r,0)+ \sum_{l=1}^{\mathcal{N}_a} \mathcal{R}_{l}^{(a)} \,\mathcal{I}_{ij}(r,\mu_{a,l}) \,. \end{align} \end{subequations} The explicit NR potentials are obtained by using Eqs. \eqref{Int_HD_1} and \eqref{Int_HD_2}. The scaling dependence of the different sectors exhibits the same behavior as in the previous subsection (see Table \ref{Table_1}). The static limit (Eq. \eqref{pot_static}) has been computed before, e.g. see Refs. \cite{Accioly_et_al_PRD2018,Cancellation_1,Cancellation_2}, resulting in the following expression \begin{align} V_{\textmd{static}}(r) &= -\frac{\kappa^2 m_1 m_2}{32\pi \,r} \bigg( 1 + \frac{4}{3} \sum_{l=1}^{q+1} \mathcal{R}_{l}^{(2)} \,e^{-\mu_{2,l} r} - \frac{1}{3} \sum_{l=1}^{p+1} \mathcal{R}_{l}^{(0)} \,e^{-\mu_{0,l} r} \bigg) \,, \nonumber\\ &\!\! \underset{r\to 0}{=} -\frac{\kappa^2 m_1 m_2}{32\pi \,r} \bigg( 1 + \frac{4}{3} \sum_{l=1}^{q+1} \mathcal{R}_{l}^{(2)} - \frac{1}{3} \sum_{l=1}^{p+1} \mathcal{R}_{l}^{(0)} \bigg) + \textmd{finite} \,. \end{align} The cancellation of the $1/r$ singularity follows from the property $\sum_{l=1}^{\mathcal{N}_a} \mathcal{R}_{l}^{(a)} = -1$ (see Ref. \cite{Cancellation_2}). Taking into account contributions beyond the static sector, the regularity of the NR potential at $r=0$ becomes more subtle.
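The property $\sum_{l=1}^{\mathcal{N}_a} \mathcal{R}_{l}^{(a)} = -1$ is a purely algebraic identity of the residues, valid for any non-degenerate set of mass parameters, and can be verified numerically. A minimal sketch (the $\mu^2$ values are arbitrary distinct numbers, not physical masses):

```python
import random

def residues(mu2):
    """R_n = -prod_{l != n} mu2_l / (mu2_l - mu2_n), for distinct, nonzero mu2_l."""
    R = []
    for n, mn in enumerate(mu2):
        prod = 1.0
        for l, ml in enumerate(mu2):
            if l != n:
                prod *= ml / (ml - mn)
        R.append(-prod)
    return R

# The residues always sum to -1 (the algebraic fact behind the cancellation
# of the 1/r singularity in the static potential), whatever the spectrum:
random.seed(1)
for N in (2, 3, 5):
    mu2 = [float(m) for m in random.sample(range(1, 50), N)]
    assert abs(sum(residues(mu2)) + 1.0) < 1e-9
print("sum of residues = -1 for all tested spectra")
```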
As an example, we consider the particular case corresponding to Stelle's Quadratic Gravity ($p=q=0$) \cite{Stelle}. In such a case, the leading-order short-distance contribution is given by \begin{subequations} \begin{align}\label{Short-range_Stelle_s=0} V_{\textmd{Stelle}}^{(s=0)}(r) = - \frac{\kappa^2\, m_1 m_2 }{192 \pi \,r} \left( \frac{1}{m_1^2} + \frac{1}{m_2^2}\right) \left( \mu_{2}^2 + \frac{5}{4} \mu_{0}^2 \right) + \textmd{finite} \,, \end{align} \begin{align}\label{Short-range_Stelle_s=1/2} V_{\textmd{Stelle}}^{(s=1/2)}(r) = \frac{\kappa^2\,\mu_{2}^2}{64 \pi \,r} \left( \vec{S}_1 \cdot \vec{S}_2 + (\hat{r}\cdot\vec{S}_1)(\hat{r}\cdot\vec{S}_2) \right) + \textmd{finite} . \end{align} \end{subequations} We note quite similar behavior in comparison with the example of the previous section. As we can observe, the $1/r$ singularity reappears once we include contributions beyond the static limit. This result indicates that additional UV modifications should be included in order to keep the NR potential finite at $r=0$. Indeed, this is the case, as one can easily see by taking into account higher terms in the polynomial form factors defined in Eq. \eqref{Form_Factor_Poly}. A simple example is sixth-order higher-derivative gravity ($p=q=1$), which results in a singularity-free potential, even after the inclusion of contributions beyond the static limit. The same behavior is also observed for any $p,q\geq1$. A similar conclusion for the cancellation of singularities at $r=0$, but at the level of the Kretschmann scalar, was obtained in Ref. \cite{Breno_Tiberio_1}. It is important to emphasize that the discussion presented here is restricted to the cancellation of the Newtonian singularity in the classical (tree-level) contribution to the NR potential and, therefore, our results should not be interpreted as a definitive claim concerning the problem of singularity resolution.
\section{Concluding Comments} \label{Sec_Concluding} In this paper, we investigate quantum effects in the NR gravitational inter-particle potential, including contributions beyond the static regime. We consider the gravitational scattering of both spin-0 and spin-1/2 particles. Our results are based on the form factor expansion of the effective action in the covariant approach for quantum gravity. Within this formalism, the quantum corrections are encoded in the form factors $F(\Box)$ and $W(\Box)$ associated with curvature-squared terms in the effective action. Considering metric fluctuations around a flat background, these form factors capture all the relevant information concerning the (flat) graviton propagator. Our main results are summarized as follows: \begin{itemize} \item In the monopole-monopole sector, the NR potentials associated with spin-0 and spin-1/2 particles exhibit a universal leading-order contribution but differ with respect to a sub-leading term. The velocity-velocity sector exhibits the same result for spin-0 and spin-1/2 particles. \item The NR potential associated with the scattering of spin-1/2 particles also involves spin-orbit and spin-spin interactions. We observe that the form factors $F(\Box)$ and $W(\Box)$ may contribute to spin-orbit interactions, while only $W(\Box)$ can generate corrections to spin-spin interactions. \item Comparing our results with previous investigations concerning the electromagnetic NR potential, we observe similar interaction structures appearing in both cases. \end{itemize} We apply the results obtained in Sec. \ref{Sec_Potentials} to explicit examples of form factors motivated by non-perturbative approaches to quantum gravity. In the first example, we consider form factors motivated by an approach in which the effective action was obtained by matching a predefined template with CDT data \cite{Knorr}.
In the second example, we explore a form factor obtained in the context of the FRG approach for asymptotically safe quantum gravity \cite{Bosma_PRL_123}. In both cases, the contributions to the NR potentials reduce to the form $1/r^{\alpha}$ or $e^{-m r}/r^{\alpha}$ with $\alpha = 1,2,3$. Furthermore, the dominant short-range contributions depend on the type of particle being scattered. The example studied in Sec. \ref{linear_case} exhibits the reappearance of the singularity at $r=0$ once contributions beyond the static regime are taken into account. Motivated by this result, in Sec. \ref{Singularities} we revisit the cancellation of Newtonian singularities in higher-derivative models. Within this class of models, our results indicate that the cancellation of singularities at $r=0$ requires a higher number of derivatives than in the static approximation. The analysis performed here only includes quantum corrections at the level of the graviton propagator, while the vertices are taken to be the tree-level ones. This is an important approximation in our approach and deserves further investigation. In principle, we could also adopt a form factor expansion in order to capture quantum corrections at the level of gravity-matter systems (see, for example, Ref. \cite{Knorr_Form_Factors}). However, this approach considerably complicates the calculation of the inter-particle potentials. In a recent work \cite{Draper}, remarkable progress was made by taking into account the most general parameterized relativistic amplitude for gravity-mediated scattering of scalar particles. In addition, we only consider the scattering of spin-0 and spin-1/2 particles, but we could also include the scattering of spin-1 particles. As discussed in Ref. \cite{HR_0802.0716}, the NR potentials associated with scattered spin-1 particles exhibit new interactions involving the polarization, besides the velocity- and spin-dependent contributions. These points remain to be investigated in future work.
\section*{Acknowledgements} \noindent We would like to thank J.A. Helayël-Neto, J.T. Guaitolini Junior and P.C. Malta for reading the manuscript and the constructive comments. GPB is grateful for the support by CNPq (Grant no.~142049/2016-6) and thanks the DFQ Unesp-Guaratinguet\'a for the hospitality. LPRO is supported by the PCI-DB funds (CNPq/MCTIC). MGC is supported by CNPq funds.
\section{Introduction}\label{sec1} The putative tori surrounding the accretion disks of active galactic nuclei (AGNs) play a fundamental role in the unification scheme of AGNs \citep{ant93}. These tori most likely provide the material reservoir that feeds the accretion disk (e.g., \citealt{kro88}). Infrared interferometric observations can resolve these mas-scale tori in the near-infrared (NIR) (e.g., \citealt{swa03,wit04,kis09a,pot10,kis11a}) and mid-infrared (MIR) (e.g., \citealt{jaf04,mei07,tri07,tri09,bec08,kis09b,rab09,bur09,bur10,tri11}, and \citealt[][Paper~I]{kis11b}). In this paper, we present the first NIR interferometric observation of an AGN with the AMBER/VLTI instrument. \begin{table*} \caption{Observation log of our AMBER LR observations of NGC~3783 and its calibrator CD-37 7391. The data in the first 11 lines were obtained in the two-telescope mode and the data in the last 4 lines in the three-telescope mode. The table lists the names, times of observations, projected baseline lengths, position angles PA, detector integration times DIT, seeing, number of interferograms, and derived target visibilities (errors $\pm0.09$).} {\tiny \begin{tabular}{lllllrrrr} \hline \hline Name & date & Time of & Telescopes/ & PA & DIT & Seeing & Number &Target \\ & & observation & proj.
baseline & (\degr) &(ms) & (arcsec) & of frames &visibility\\ & & (UTC) & lengths (m) & & & & \\ \hline NGC 3783 & 09/04/14 & 01:19 - 01:31 & UT2-3/46.6 & 28.8 & 800 & 0.93 & 11x70 &0.89\\ CD-37 7391 & 09/04/14 & 01:45 - 01:55 & UT2-3 & & 800 & 0.84 & 10x70 &\\ NGC 3783 & 09/04/14 & 03:16 - 03:29 & UT2-3/45.2 & 44.8 & 800 & 0.82 & 11x70 &0.96\\ NGC 3783 & 09/04/14 & 03:32 - 03:41 & UT2-3/44.8 & 46.3 & 400 & 0.79 & 10x70 &0.93\\ CD-37 7391 & 09/04/14 & 04:31 - 04:34 & UT2-3 & & 400 & 0.78 & 4x120 &\\ \hline NGC 3783 & 09/04/14 & 03:52 - 04:03 & UT3-4/62.0 & 120.9 & 800 & 0.68 & 10x70 &0.97\\ CD-37 7391 & 09/04/14 & 02:11 - 02:16 & UT3-4 & & 800 & 1.45 & 5x70 &\\ \hline CD-37 7391 & 09/04/14 & 02:00 - 02:05 & UT2-4 & & 800 & 0.99 & 5x120 &\\ NGC 3783 & 09/04/14 & 03:44 - 03:49 & UT2-4/87.1 & 90.0 & 800 & 0.72 & 5x70 &0.86\\ NGC 3783 & 09/04/14 & 04:12 - 04:20 & UT2-4/84.6 & 94.7 & 800 & 0.81 & 8x70 &0.85\\ CD-37 7391 & 09/04/14 & 04:41 - 04:45 & UT2-4 & & 800 & 0.87 & 3x120 &\\ \hline NGC 3783 & 11/05/18 & 02:39 - 02:44 & UT1-2-4/51.2/113.8/80.3 & 38.8/77.3/100.7 & 400 & 0.63 & 7x120 &\\ NGC 3783 & 11/05/18 & 02:50 - 02:55 & UT1-2-4/50.6/111.2/78.7 & 39.8/78.9/102.8 & 400 & 0.68 & 7x120 &\\ CD-37 7391 & 11/05/18 & 02:00 - 02:06 & UT1-2-4/ & & 400 & 0.75 & 7x120 &\\ CD-37 7391 & 11/05/18 & 02:09 - 02:17 & UT1-2-4/ & & 400 & 0.63 & 7x120 &\\ \hline \end{tabular} } \label{tab:obslog} \end{table*} \section{Observations and data reduction} \label{sec2} We observed the Seyfert 1.5 AGN NGC~3783 in 2009 and 2011 (IDs 083.B-0212 and 087.B-0578) with the ESO VLTI and the AMBER instrument \citep{pet07}. For these observations in the $K$-band (see Table~\ref{tab:obslog}), the AMBER low spectral resolution mode (LR) was employed. Figure~\ref{fig:obs} (top) presents examples of target interferograms to illustrate the noise problem ($K \sim$ 10.1). 
Long detector integration times (DIT) of 400 and 800~ms were chosen to be able to recognize the faint fringes during data recording and correct drifts of the optical path differences (OPDs) between the telescope beams. The interferograms shown in Fig.~\ref{fig:obs} are two-telescope interferograms, not AMBER-standard three-telescope interferograms. In 2009 (see Table~\ref{tab:obslog}), we recorded two-telescope interferograms since it is easier to correct OPD drifts in two-telescope than in three-telescope interferograms. In 2011, we recorded three-telescope interferograms, which provide closure phases. For data reduction of the two-telescope interferograms, we used our own software (developed by one of us, KHH), which is able to reduce the non-standard two-telescope interferograms. It is based on the same P2VM algorithm \citep{tat07,che09} as the AMBER \textit{amdlib}\footnote{The AMBER - reduction package \textit{amdlib} is available at: http://www.jmmc.fr/data\_processing\_amber.htm} software. To reduce the effect of the instantaneous OPDs on the visibility, we applied a preprocessing method that equalizes the OPD histograms of the target and calibrator interferograms \citep{kre12}. Figure~\ref{fig:obs} and Table~\ref{tab:obslog} show the derived visibilities, which are wavelength averages over the wavelength range of 2.0--2.40~$\mu$m. For data reduction of the 2011 three-telescope data, we used the standard AMBER data reduction package \textit{amdlib} version 3.0. Figure~\ref{fig:obs} (bottom) shows the closure phases of NGC~3783 derived from the three-telescope 2011 interferograms. The average closure phase is $3.3 \pm 26\degr$. Closure phases are a measure of asymmetry. However, the large errors do not allow us to detect any asymmetry. We also tried to derive calibrated visibilities from the 2011 data, but without success since the transfer function was unstable. 
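The closure-phase measurement quoted above exploits the fact that telescope-based phase errors (e.g., atmospheric piston) cancel in the sum of the three baseline phases around a closed triangle, so that only the object's own (asymmetry-sensitive) phase survives. A minimal Python sketch of this cancellation (all phase values are arbitrary illustrations, not measured data):

```python
import math, random

# Object phases on the three baselines; a point-symmetric source has zero
# closure phase, so here the object phases are chosen to sum to zero.
phi_obj = {(1, 2): 0.4, (2, 3): -0.1, (3, 1): -0.3}

random.seed(0)
piston = {t: random.uniform(-math.pi, math.pi) for t in (1, 2, 3)}

# Measured baseline phase = object phase + difference of telescope errors.
phi_meas = {(i, j): phi_obj[(i, j)] + piston[i] - piston[j]
            for (i, j) in phi_obj}

# Closure phase: sum around the closed triangle; the piston terms cancel.
cp = phi_meas[(1, 2)] + phi_meas[(2, 3)] + phi_meas[(3, 1)]
assert abs(cp - sum(phi_obj.values())) < 1e-12
print(f"closure phase = {cp:.3f} rad (telescope errors cancelled)")
```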
\begin{table*} \caption[]{NGC~3783 torus radius $R_{\rm torus}$, 2MASS fluxes of the nuclear core, and flux contributions of the host galaxy and the AD. } {\tiny \begin{tabular}{lccccccccccccccccccc} \hline \hline $J$ flux &$H$ flux &$K$ flux &$J$ &$H$ &$K$ &host & AD &$R_{\rm torus}$ & $R_{\rm torus}$ & $R_{\rm \tau_K} ^c$\\ (mJy) & (mJy) & (mJy) & (mag.) & (mag.) & (mag.) & fraction$^a$ & fraction$^b$ & (mas) & (pc) & (pc) \\ \hline 18.8 & 34.2 & 61.8 & 12.3 & 11.2 & 10.1 & 0.005$\pm$0.002 & 0.21$\pm$0.07 & $0.74\pm0.23$ & $0.16\pm0.05$ &0.071$\pm$0.025 \\ \hline \end{tabular} } \\ $^a$$K$-band flux contribution in the AMBER FOV. $^b$AD flux contribution to the point-like core in the 2MASS $K$-band image. $^c$Reverberation radius $R_{\rm \tau_K}$ \citep{gla92}. \label{tab_2mass} \end{table*} \section{Geometric model fits} To interpret our $K$-band visibilities (Fig.~\ref{fig:obs} middle), we first fitted a geometric thin-ring model (i.e., ring width = outer radius minus inner radius = 0) to the visibilities and derived a ring-fit radius of $\sim$0.67~mas. This is only a very rough estimate of the torus size since the observed visibilities may depend not only on the torus but also on the underlying galaxy within the 60\,mas field-of-view (FOV) of AMBER and on the accretion disk (AD), which is thought to remain unresolved (\citealt{kis07,kis09a,kis09b}). Therefore, we have to estimate the flux contributions from the host galaxy and the AD point source and take these contributions into account when fitting the visibilities. From the $K$-band image of NGC~3783 in the 2MASS catalog, we estimated the flux contribution of the host galaxy within the 60~mas AMBER FOV to be $0.5 \pm 0.2$\,\%. To obtain the NIR flux from the torus and the AD, we used two-dimensional fits to separate the point-like core component in the 2MASS $J$-, $H$-, and $K$-band images from the underlying host galaxy (see Table~\ref{tab_2mass}), following the same procedure as described by \citet{kis09a}.
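The composite model implied above (a thin ring for the torus, an unresolved AD point source, and an over-resolved host) can be sketched numerically: an infinitesimally thin ring of angular radius $\theta$ has visibility $J_0(2\pi\theta B/\lambda)$. A minimal Python illustration, assuming the flux fractions and fitted ring radius of Table~\ref{tab_2mass} (the function names are ours; $J_0$ is evaluated from its integral representation to stay self-contained):

```python
import math

def j0(x, n=2000):
    """Bessel J0 via the integral representation (1/pi) * int_0^pi cos(x sin t) dt."""
    h = math.pi / n
    return sum(math.cos(x * math.sin((k + 0.5) * h)) for k in range(n)) * h / math.pi

MAS = math.pi / (180 * 3600 * 1000)  # one milliarcsecond in radians

def model_visibility(baseline_m, theta_ring_mas,
                     f_ad=0.21, f_host=0.005, lam=2.2e-6):
    """Thin ring + unresolved AD point source + fully resolved host galaxy."""
    f_ring = 1.0 - f_ad - f_host
    v_ring = j0(2 * math.pi * theta_ring_mas * MAS * baseline_m / lam)
    return f_ad * 1.0 + f_host * 0.0 + f_ring * v_ring

# With the fitted torus radius of 0.74 mas, the ~85 m UT2-4 baseline gives
# V ~ 0.85, consistent with the measured visibilities in Table 1.
print(round(model_visibility(85.0, 0.74), 2))
```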
Using the derived NIR core fluxes, we can estimate the flux contribution of the AD component in the $K$ band. We assume here that the core component flux originates from the hot dust and from the AD. Therefore, we fitted a power-law spectrum for the AD and a blackbody for the dust emission, as described in \citet{kis09a}. We also applied a small correction for Galactic reddening with $E_{B-V}$ = 0.119. By assigning an uncertainty of 0.3 to the NIR AD spectral index, we also obtained the uncertainty of the $K$-band AD flux contribution. The AD flux fraction in the $K$ band was estimated to be as small as $21 \pm 7$\%, which is similar to the values reported for several other AGNs by \citet{kis07,kis09a}. If we now take into account these estimated flux contributions of $\sim$21\% from the unresolved AD and of $\sim$0.5\% from the host galaxy, we can derive the visibilities of the torus itself and can fit the radius of the torus (i.e., this radius is the only fit parameter; the AD contributes just a constant of 0.21 to the total visibilities). We derive a torus radius $R_{\rm torus}$ of $0.74 \pm 0.23$~mas or $0.16\pm0.05$~pc (thin-ring fit; see Fig.~\ref{fig:obs} middle, red curve). \section{Interpretation and discussion} \subsection{NIR interferometric and reverberation radii} Figure~\ref{fig_rev_ring} compares the derived ring-fit radius $R_{\rm torus} \sim 0.16$~pc of NGC~3783 (red) with eight interferometric $K$-band radii (blue) reported by \citet{kis09a,kis11a}. These radii are plotted against the UV luminosity $L$, defined as a scaled $V$-band luminosity of $6\ \nu f_{\nu} (V)$, with the $V$ flux extrapolated from the flux at 1.2 $\mu$m \citep{kis07}. We can compare these torus radii with reverberation radii $R_{\rm \tau_K}$ (black) derived from the light traveling distances corresponding to the time lag between the $K$-band and the UV/optical \citep{sug06}.
They are known to be proportional to $\sim$$ L^{1/2}$ and are likely probing the dust sublimation radius. The dotted line is the fit curve of the reverberation radii (different luminosity values are obtained for the same object because of variability and uncertainties of the luminosity derivation). The reverberation radius of NGC~3783 is $\sim$0.071~pc, which is smaller than the interferometric torus radius $R_{\rm torus} \sim 0.16$~pc (Sect. 3). Figure~\ref{fig_rev_ring} shows that several interferometric torus radii are larger than the reverberation radii. Our interpretation is that the interferometric torus radii are averages over the radial dust distribution that emits the $K$-band light, whereas the reverberation radii probably trace the dust closer to the inner dust torus boundary radius (\citealt{kis09a}). Furthermore, we note that $R_{\rm torus} \sim 0.16$~pc is a fit radius calculated with a thin-ring model (i.e., ring width = outer radius minus inner radius = 0). If a dust distribution with a certain thin-ring fit radius is ring-like and has a ring width larger than zero, then the inner ring radius would be smaller than the thin-ring fit radius. We have not fitted a ring model with a larger ring width, since the ring width cannot be constrained with the available visibilities. \begin{figure} \centering \includegraphics[width=7.8cm]{n3783-relation.eps} \caption{$K$-band torus radii of NGC~3783 (red dot) and eight other AGNs (blue; from \citealt{kis11a}) versus their UV luminosities. The black symbols and the dotted line are the reverberation radii $R_{\rm \tau_K}$ \citep{gla92,sug06} and their fit curve, respectively.} \label{fig_rev_ring} \end{figure} \begin{figure} \centering \includegraphics[width=6.0cm]{n3783-model.eps} \par\vspace{-0pt} \caption{Temperature/density-gradient model including an additional inner hot 1400~K ring component. 
{\it Top:} SED observations (black and gray symbols), model SEDs (black solid line: model including the 1400~K ring component; black dotted line: model without the 1400~K component), and correlated fluxes (yellow, green, and blue; see \citetalias{kis11b} for more details). The different colors (see top color bar) correspond to different spatial wavelengths measured in units of the dust sublimation radius $R_{\rm in}$. {\it Bottom:} New NIR visibilities (purple symbols) and our MIR visibilities from \citetalias{kis11b} (the symbols with colors from green to red correspond to 8.5 to 13~$\mu$m; see color-coding bar; note that the spatial frequency is given in units of cycles per $R_{\rm in}$). The solid and dashed lines are the visibilities of the temperature/density-gradient model including an additional inner 1400~K ring. The red, green, and purple lines are the model visibilities at 13, 8.5, and 2.2~$\mu$m, respectively (solid lines: model curves for the PA along the equatorial axis; dashed line: along the polar direction; see \citetalias{kis11b}). The dotted lines are the visibilities and SED of the same temperature/density-gradient model, but without an inner hot 1400~K ring component. } \label{fig_model} \end{figure} \subsection{Simultaneous modeling of the NIR AMBER visibilities, the MIR MIDI visibilities, and the SED} Mid-infrared (MIR) MIDI interferometry of NGC~3783 was reported by \citet{bec08}, \citet{kis09b}, and \citet{kis11b} (= \citetalias{kis11b}). For the interpretation of these observations, we used a temperature/density-gradient model including an additional hot inner ring component with a temperature of 1400~K (\citetalias{kis11b}). This simple model assumes that the face-on surface brightness distribution of the torus is dominated by the IR radiation from dust clouds directly illuminated and heated by the AD. These dust clouds are probably located near the torus surface since clouds deep inside the torus are not directly illuminated. 
The surface brightness distribution of this temperature/density-gradient model depends on two distributions, namely a radial temperature and a radial surface density distribution (see Eq. 8 in \citetalias{kis11b}). The maximum dust temperature $T_{\rm max}(r)$ at radial distance $r$ is assumed to be proportional to $(r/R_{\rm in})^{\beta}$, where ${\beta}$ is the power-law index and $R_{\rm in}$ is the dust sublimation radius empirically given by the NIR reverberation radius $R_{\rm \tau_K}$ \citep{gla92}, i.e., we define $R_{\rm in} = R_{\rm \tau_K}$ (\citetalias{kis11b}, Eq. 1). Furthermore, the surface brightness distribution depends on the surface density function $f_{\epsilon} (r) = f_0 (r/R_{\rm in})^{\alpha}$ of the heated dust clouds near the surface (a power law with index $\alpha$). The emissivity factor $f_{0}$ is equal to $f_{\epsilon} (r)$ at $r$ equal to the sublimation radius $R_{\rm in}$. Our IR observations are only sensitive to the dust clouds near the surface, which have the temperature $T_{\rm max}(r)$, and not to the cold dust inside the torus. The factor $f_{\epsilon} (r)$ can be regarded as a surface filling factor multiplied by the emissivity (\citetalias{kis11b}). If the emissivity of optically thick illuminated clouds does not depend sensitively on the radial distance from the illuminating source or the observing wavelength (e.g., see Fig. 3 of \citealt{hoe10}), the factor $f_{\epsilon} (r)$ is roughly proportional to the radial surface density distribution of the heated dust. Interestingly, the application of this temperature/density-gradient model to several AGNs in \citetalias{kis11b} and to the NGC~3783 observations reported in this paper (see Fig.~\ref{fig_model}) has shown that an additional inner hot model component with a temperature of 1400~K and a radius of one or a few dust sublimation radii is required in order to explain all observations.
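The two power laws of this model are straightforward to evaluate numerically. The following is a minimal illustrative sketch, not the fit code used here; the function name is ours, and the default parameters are the fit values of the power-law component quoted in this section ($\beta = -0.37$, $\alpha = 0.07$, $f_0 = 0.12$, inner temperature 700~K):

```python
import numpy as np

def torus_profiles(r, r_in, t_in=700.0, beta=-0.37, f0=0.12, alpha=0.07):
    """Radial profiles of the temperature/density-gradient torus model:
    T_max(r) = T_in (r/R_in)^beta and f_eps(r) = f_0 (r/R_in)^alpha."""
    r = np.asarray(r, dtype=float)
    t_max = t_in * (r / r_in) ** beta    # maximum dust temperature
    f_eps = f0 * (r / r_in) ** alpha     # surface density / emissivity factor
    return t_max, f_eps

# example: profiles from 1 to 100 sublimation radii (R_in = 0.071 pc)
r_in = 0.071
r = np.geomspace(r_in, 100.0 * r_in, 5)
t, f = torus_profiles(r, r_in)
```

With a negative $\beta$ the temperature falls off outward while the shallow positive $\alpha$ lets the surface density factor rise slowly, as in the fits discussed below.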
This hot component might play a role similar to that of the puffed-up inner rim discovered in several young stellar objects near the dust sublimation radius. We simultaneously fitted this temperature/density-gradient model including an inner 1400~K ring to our new $K$-band data as well as the MIR data and the SED from \citetalias{kis11b}. The goal of this modeling is to further constrain physical parameters of the dust distribution. Figure~\ref{fig_model} shows that this model is able to simultaneously reproduce all NIR and MIR visibilities as well as the SED. In Fig.~\ref{fig_model} (bottom), the NIR visibilities (purple) and the MIR visibilities are shown (from green to red, wavelengths 8.5 to 13~$\mu$m; see color-coding bar). The purple, red, and green curves are the model visibilities at 2.2, 13, and 8.5~$\mu$m, respectively. The solid lines are the model curves along the PA of the equatorial axis, and the dashed line along the polar direction, as defined by optical polarization measurements (see \citetalias{kis11b}, where this elliptical and a circular symmetric model are presented). If no hot inner ring component is added to the above temperature/density-gradient model, then the $K$-band model visibilities (blue dotted line in Fig.~\ref{fig_model}, bottom) are systematically higher than observed and the model SED (black dotted line in Fig.~\ref{fig_model}, top) has a deficiency in the NIR. Therefore, the above inner 1400~K ring component is required to explain the data. This new modeling including $K$-band visibilities (Fig.~\ref{fig_model}) is more detailed than that in \citetalias{kis11b}. Some of the parameters are similar to those in \citetalias{kis11b}: a temperature power-law index $\beta = -0.37$, density index $\alpha = 0.07$, and emissivity factor $f_0 = 0.12$ for the power-law component of the elliptical model with an inner temperature of 700~K (\citetalias{kis11b}, Table 8).
However, in our new modeling, the emissivity factor of the inner 1400~K ring is $0.038 \pm 0.016$ and the radius of the hot 1400~K ring is $2.29 \pm 0.47 R_{\rm \tau_K}$ or $\sim$0.16\,pc (with the above $R_{\rm \tau_K} = 0.071$~pc), which is no longer fixed to 1~$R_{\rm \tau_K}$ as in \citetalias{kis11b}. This 1400~K ring radius of $\sim 2.29 R_{\rm \tau_K}$, which is relatively large compared to the reverberation radius $R_{\rm \tau_K}$, is a representative thin-ring radius and not the inner radius of an extended ring (see discussion in Sect. 4.1). This large radius probably indicates a relatively shallow, extended innermost dusty structure (\citetalias{kis11b}) in NGC~3783. \section{Conclusion} We have derived a torus radius of $0.74 \pm 0.23$~mas or $0.16 \pm 0.05$~pc (thin-ring fit). To derive this NGC~3783 torus radius, we took into account an estimated relative flux contribution of 0.5\% from the underlying host galaxy in the 60~mas AMBER FOV and 21\% from the unresolved accretion disk. This torus radius is approximately 2.3 times larger than the $K$-band reverberation radius $R_{\rm \tau_K}$~$\sim$ 0.071~pc (see discussion in Sect. 4.1). For the interpretation of the observations, we employed a temperature/density-gradient model including a hot inner 1400~K ring. We simultaneously fitted our new NIR AMBER visibilities, the MIR MIDI visibilities from \citetalias{kis11b}, and the SED to constrain physical parameters of the dust distribution. For the power-law component of the model, we derived a temperature power-law index $\beta \sim -0.37$ and a surface density index $\alpha \sim$ 0.07. For the required 1400\,K ring component, a radius of $\sim$2.3 reverberation radii or $\sim$0.16\,pc was found, whereas in the modeling in \citetalias{kis11b}, the 1400~K ring radius was assumed to be one reverberation radius. 
This 1400~K ring radius of $\sim$0.16\,pc, which is relatively large compared to the reverberation radius, is a representative thin-ring radius and not the inner radius of an extended ring. This radius probably indicates a relatively shallow, extended inner dusty structure. Our study of NGC~3783 and the results in \citetalias{kis11b} show that the simultaneous modeling of both NIR and MIR interferometric observations is a powerful tool for future detailed studies of AGN tori. \begin{acknowledgements} We thank the ESO VLTI team on Paranal for the excellent collaboration and the referee for his valuable comments. This publication makes use of the SIMBAD database operated at CDS, Strasbourg, France. Some of the data presented here were reduced using the publicly available data-reduction software package {\it amdlib} kindly provided by the Jean-Marie Mariotti Center (http://www.jmmc.fr/data\_processing\_amber.htm). \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Recent work has demonstrated the potential usefulness of employing the surface or fundamental $f$-mode in local helioseismology for detecting subsurface solar magnetism \citep{HBBG08,DABCG11,FBCB12,FCB13}. While turbulence generally tends to lower the $f$-mode frequency \citep{FSTT92,MR93b,DKM98} relative to its theoretical value given by $\omega_f=\sqrt{gk}$, where $g$ is the gravitational acceleration and $k$ is the horizontal wavenumber, horizontal magnetic fields can increase the frequency \citep{MR93a}, whereas vertical or inclined fields lead to a nonuniform behavior, depending on the value of the horizontal wavenumber \citep{SBCR15}. More importantly, however, horizontal {\em variability} of the subsurface magnetic field leads to a fanning of the $f$-mode, where changes in the integrated mode amplitude and position give clues about the depth of such a field \citep{SBR14}. While these investigations demonstrated a number of previously unknown effects of the $f$-mode, they were restricted to the idealized conditions of an isothermal layer. In this Letter, we use observations with the Helioseismic and Magnetic Imager (HMI) on board the {\em Solar Dynamics Observatory} ({\em SDO}) to search for possible similarities between observations and simulations. We focus on the possibility of using changes in the $f$-mode to predict the emergence of active regions (ARs) days before they can be seen in magnetograms. Because the $f$-mode is confined to the proximity of the surface, our approach is most sensitive to magnetic fields at shallow depths of just a few megameters (Mm), and ceases to be sensitive when the AR begins to become fully developed. Earlier attempts at predicting the emergence of ARs employed time-distance seismology using $p$-modes and have suggested the occurrence of perturbations at larger depths of $40$--$75\,{\rm Mm}$ \citep{Ilo11,Kho13}.
On the other hand, the rising flux tube scenario suggests a retrograde flow at a depth of $30\,{\rm Mm}$ \citep{BBF10}, which has not been observed \citep{Birch16}. Morphological studies in the case of AR~11313 have also suggested incompatibilities with the rising flux tube model \citep{Get16}. By contrast, in the distributed dynamo scenario \citep{B05}, magnetic flux concentrations form spontaneously near the surface \citep{BKKMR11,BKR13}, which might explain the aforementioned field concentrations at shallow depths. Spontaneous surface flux concentrations have also been seen in the deep hydromagnetic convection simulations of \cite{SN12}, where an unstructured magnetic field is allowed to enter the bottom of their computational domain. Such near-surface magnetic concentrations are expected to affect the $f$-mode, as its eigenfunction peaks only a few $\,{\rm Mm}$ below the solar surface \citep[cf.][]{Sch99}. It is possible that these perturbations could manifest themselves through detectable signatures. Readers familiar with the conventional picture of buoyant flux tube emergence \citep[as reviewed by, e.g.,][]{Cha10} might be concerned about depths as shallow as just a few Mm, because buoyant tubes of several kilogauss would reach the surface within an hour \citep[$\sim$~3 hours from the depth of 7.5 Mm in the simulations of][]{CRTS10}, but this picture ignores the formation process and implants flux tubes as alien objects within the turbulent convection zone. By contrast, ARs and sunspots might instead be generated by the subsurface turbulence in ways similar to what has so far only been seen in idealized simulations \citep{BKR13,WLBKR13,MBKR14}. The point here is not to defend this idea, but to raise awareness of alternative viewpoints that may facilitate the understanding of the results presented below. Once the AR has been detected in magnetograms and becomes fully developed, the $f$-mode amplitude begins to be suppressed.
This might be explained by the fact that the interaction of both $f$- and $p$-modes with ARs or sunspots leads to mode conversion, resulting in the absorption of mode power \citep{TCN82,CBZ94,CB97}. This would explain the observed reduction of the mode amplitude \emph{after} the analyzed AR has been formed. However, what was not discussed earlier is that the mode amplitude from the same region can undergo a {\em transient} growth phase prior to the actual flux emergence. This results in a nonmonotonic temporal variation in the normalized mode power, which first rises, reaches a maximum value a few days before there is any sign of flux emergence, and then decreases as the strength of the magnetic field in that region increases. Although a proper explanation of this is not yet available, one might speculate that this could also be due to some kind of scattering, whereby $p$-modes would scatter off the magnetic flux concentrations and leak into enhanced $f$-mode power. \section{Data analysis} We use line-of-sight (LoS) Dopplergrams and magnetograms from observations with HMI, mostly in the cylindrical equal-area projection mappings that are publicly available at the Joint Science Operations Center at Stanford\footnote{\url{http://jsoc.stanford.edu/}}. Our analysis is based on 45-second cadence data with a projection scale of $0.03\hbox{$^\circ$}$ per pixel, where the data represent the LoS Doppler velocity $v(x,y,t)$ as a function of horizontal position $(x,y)$ and time $t$. For each of the regions of interest, we consider a patch of $512^2$ pixels covering an area of about $(180\,{\rm Mm})^2$ $\approx(15\hbox{$^\circ$})^2$ on the solar surface. We track these patches for several days using a frame of reference corotating with the mean (Carrington) rotation rate $\Omega_0$ with $\Omega_0/2\pi=424\,{\rm nHz}$. To capture transient signatures, we use data cubes $v(x,y,t)$ of only 8 hours duration for the entire tracking period of our target region.
To reduce the noise level arising from solar convection \citep{Zha15} and effects from latitudinal differential rotation (J.\ Zhao, private communication), we apply a running difference to the original images before storing $v(x,y,t)$. We divide our five or six day stretches into 15 or 18 intervals of 8~hr, each resulting in a data cube of $512^2\times640$ points of $v(x,y,t)$ that is Fourier transformed to give $\hat{v}(k_x,k_y,\omega)$, which also has the dimension $\,{\rm m}\,{\rm s}^{-1}$ in our normalization. We then construct power spectra from $P=|\hat{v}|^2$ and select $k_x=0$ in the subsequent analysis. Thus, we ignore longitudinal variations that could be affected by the cylindrical equal-area projection, as the latitudinal directions are expected to be the least sensitive to artifacts resulting from projection and also differential rotation. Also, our target regions were chosen such that the patches were always far from the limb during the entire tracking period. The power spectra $P(k_x=0,k_y,\omega)$ obtained in this way are then used to construct a diagnostic $k\omega$ diagram in the $k_y$-$\omega$ plane; see \Figp{QS2010_kyo_psi}{a}, which displays the $f$- and $p$-ridges where the horizontal wavenumber is $k=k_y$. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{QSMAY2010_kyo_psi.eps} \end{center} \caption[]{ (a) Typical $k\omega$ diagram where the lowest ridge is the $f$-mode, here for the quiet sun during 2010 May 14; (b) example of a vertical cut at a specified value of $k_y R_\odot$ (plus symbols) together with the model fit (solid, red curve) and $P_{\rm cp}$ (dashed, blue line); (c) $f$-mode ridge ($P_f$, plus symbols) and the corresponding fit (solid, red curve); (d) $\psi(k_y)$ for the full range enclosed within the vertical dashed lines in (a).} \label{QS2010_kyo_psi} \end{figure} We now take a cut parallel to the frequency axis at a fixed $k_y R_\odot$ to get the line profiles of the $f$- and the lowest two $p$-ridges.
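The steps above (running difference, Fourier transform, power spectrum, cut at $k_x=0$) can be summarized in a short sketch. The function below is illustrative only: it operates on a toy-sized random cube rather than on HMI data, and its name and array sizes are our own choices:

```python
import numpy as np

def kw_power(v, dt, dx):
    """Sketch of the k-omega power spectrum construction: running
    difference in time, 3-D FFT, power P = |v_hat|^2, cut at k_x = 0.
    v has shape (nt, ny, nx); the returned plane has axes (omega, k_y)."""
    dv = np.diff(v, axis=0)                    # running difference in time
    v_hat = np.fft.fftn(dv) / dv.size          # normalized Fourier transform
    p = np.abs(v_hat) ** 2                     # power spectrum
    omega = 2.0 * np.pi * np.fft.fftfreq(dv.shape[0], d=dt)
    ky = 2.0 * np.pi * np.fft.fftfreq(dv.shape[1], d=dx)
    return omega, ky, p[:, :, 0]               # keep only the k_x = 0 plane

# toy stand-in for an 8 hr HMI cube (real cubes are 512^2 x 640 points)
rng = np.random.default_rng(0)
v = rng.normal(size=(64, 32, 32))              # v(t, y, x) in m/s
omega, ky, p_cut = kw_power(v, dt=45.0, dx=0.35e6)
```

The $k_x=0$ cut corresponds to retaining only latitudinal variations, as discussed above.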
We then apply boxcar smoothing along the frequency axis with a box width of $0.24\,{\rm mHz}$. To determine the strength of the $f$-mode, we first remove the continuum and the lowest two $p$-ridges, which are represented by a superposition of parabolic and Lorentzian fits, respectively, and denoted by $P_{\rm cp}=|\hat{v}|^2_{\rm cp}$, where the subscript cp stands for the sum of continuum and $p$-modes; see \Figsp{QS2010_kyo_psi}{b}{c}. In most cases we repeat the same procedure at all wavenumbers in the range $k_y R_\odot \in [1200, 2000]$, and determine the $f$-mode power as $P_f(k_y,\omega)=|\hat{v}_f|^2=P-P_{\rm cp}$. We may define the integrated $f$-mode amplitude assuming circularly symmetric rings in the $k_x$-$k_y$ plane as \begin{equation} \bra{v^2}_f=2AT\int_0^\infty\int_0^\infty k P_f(k,\omega)\, {{\rm d} {} k\over2\pi}\, {{\rm d} {}\omega\over2\pi}, \label{v2f} \end{equation} where $k^2=k_x^2+k_y^2$, and write $\bra{v^2}_f$ as \begin{equation} \bra{v^2}_f=L\sum_{k}\; k P_{f,k} \; \mbox{with}\; P_{f,k} = 2 \sum_{\omega} P_f(k,\omega), \label{v2f2} \end{equation} where $A=L^2$ is the area of the chosen patch, $L$ is the side length, and $T$ is the tracking time of the data cube. Thus, we can determine the energy of the $f$-mode, $E_f$, characterizing its strength, as \begin{equation} E_f(t)\equiv{1\over2}\bra{v^2}_f(t)={1\over2}\left(L\over R_\odot\right) \sum_{k}\; \psi(k) \label{ef} \end{equation} with $\psi(k)=k R_\odot\, P_{f,k}$; see \Figp{QS2010_kyo_psi}{d}. Note that we determine the above quantities by setting $k_x=0$ and choosing a high-wavenumber range, $k_y R_\odot \in [1200,2000]$, unless otherwise specified. Although this choice of considering only high wavenumbers in assessing the strength of the mode is not a standard procedure, we nevertheless focus on this regime because the ``precursor signal'' appears to be localized at such large wavenumbers; see \Sec{results} below.
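The discretized energy estimate of \Eq{ef} amounts to a weighted sum over the selected wavenumber bins. The short sketch below (function name ours, toy placeholder values in place of the measured $P_{f,k}$) illustrates the bookkeeping:

```python
import numpy as np

R_SUN = 700.0e6   # solar radius in m, as adopted in the text

def f_mode_energy(k, p_f, length):
    """E_f = (1/2) (L/R_sun) sum_k psi(k), with psi(k) = k R_sun P_{f,k}
    and P_{f,k} = 2 sum_omega P_f(k, omega)."""
    p_fk = 2.0 * p_f.sum(axis=1)        # sum the mode power over omega bins
    psi = k * R_SUN * p_fk              # psi(k) = k R_sun P_{f,k}
    return 0.5 * (length / R_SUN) * psi.sum()

# toy example: 5 bins spanning k_y R_sun in [1200, 2000], 10 omega bins
k = np.linspace(1200.0, 2000.0, 5) / R_SUN
p_f = np.full((5, 10), 1.0e-4)               # placeholder mode power
e_f = f_mode_energy(k, p_f, length=180.0e6)  # L ~ 180 Mm patch
```

In practice $P_f(k,\omega)$ is the residual power after the continuum and $p$-ridge removal described above.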
The time dependence of $E_f$ may now be determined by computing the above quantities from the sequence of $8\,{\rm h}$ data cubes prepared for all tracked regions of interest. Even in the quiet phase during solar minimum, $E_f$ shows a systematic dependence on the angular distance $\alpha$ from the disk center, given by \begin{equation} \cos\alpha=\cos\vartheta\cos\varphi\;;\quad \varphi=\varphi_*-\varphi_0+\Omega_{\rm syn}t, \label{cos} \end{equation} where $\vartheta$ and $\varphi$ are respectively the latitude and longitude of the point of interest, $\varphi_*$ is the corresponding Carrington longitude, $\varphi_0$ is the Carrington longitude of the disk center at the time when we began tracking the target patch, and $\Omega_{\rm syn}=2\pi/27.275\,{\rm days}$ is the mean synodic Carrington rotation rate of the Sun (i.e., the apparent rotation rate as viewed from the Earth). As suggested by earlier work \citep{SBR14}, we focus on $E_f$ for fairly large $k_y$. We track a particular position on the solar surface in time using the average (Carrington) rotation rate. Normalizing by the solar radius $R_\odot=700\,{\rm Mm}$ gives the spherical harmonic degree $k_y R_\odot$. For a fixed range of $k_y R_\odot$, we compute the dependence of $E_f$ on $t$. Empirically, the value of $E_f$ for the quiet sun (a position where no AR emerges within the next few days) shows a systematic variation that is approximately of the form \begin{equation} \zeta(\cos\alpha)=\cos\alpha\left[q+(1-q)\cos\alpha\right]\; \mbox{with}\;q=0.5. \end{equation} This function obeys $\zeta=1$ at $\alpha=0$ (disk center). It is then useful to define \begin{equation} \widetilde{E}_f\equiv E_f/\zeta, \end{equation} which fluctuates moderately about some average value in the quiet phase of the Sun. However, several days {\em prior} to the emergence of an AR, our studies show elevated values of $\widetilde{E}_f$ at the corotating patch where this AR later emerges.
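The empirical center-to-limb correction above can be expressed compactly; a minimal sketch (function name ours, angles in radians):

```python
import numpy as np

def zeta(theta, phi, q=0.5):
    """Empirical center-to-limb correction zeta(cos alpha), with
    cos alpha = cos(theta) cos(phi); zeta = 1 at disk center."""
    mu = np.cos(theta) * np.cos(phi)   # cosine of the angular distance alpha
    return mu * (q + (1.0 - q) * mu)

# normalized f-mode energy: E_f_tilde = E_f / zeta(theta, phi)
e_f, theta, phi = 2.0, np.deg2rad(20.0), np.deg2rad(10.0)
e_f_tilde = e_f / zeta(theta, phi)
```

Since $\zeta<1$ away from disk center, the normalization raises off-center values of $E_f$ back to a common disk-center scale.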
It would be interesting to see whether there are other indicators, for example in the magnetic field itself, which could also give early indications of AR formation. Magnetic properties from regions of interest on the solar disk might offer insight into the process of developing ARs. The LoS magnetic field ($B$) varies randomly in space and time, and has a narrow distribution with positive and negative polarities nearly balancing themselves out when the localized patch is magnetically quiet. Let us denote by $f_B$ the normalized probability distribution function (PDF) of $B$ in a chosen patch at any given time, such that \begin{equation} \int_{-\infty}^\infty f_B \, {\rm d} {} B =1. \end{equation} The kurtosis, $\mbox{\rm kurt\,} B$, of the distribution $f_B$ is defined as, \begin{equation} \mbox{\rm kurt\,} B=\frac{1}{\sigma_B^4}\int_{-\infty}^\infty \left(B-\overline{B}\right)^4 f_B \, {\rm d} {} B, \label{kurt} \end{equation} where the mean ($\overline{B}$) and the variance ($\sigma_B$) of $f_B$ are \begin{equation} \overline{B}=\int_{-\infty}^\infty B f_B \, {\rm d} {} B\;\;\mbox{and}\;\; \sigma_B^2=\int_{-\infty}^\infty (B-\overline{B})^2 f_B \, {\rm d} {} B, \end{equation} respectively. For a normal distribution, $\mbox{\rm kurt\,} B=3$, while excess kurtosis, $\mbox{\rm kurt\,} B\gg3$, indicates a heavy-tailed distribution. We monitor the temporal evolution of $\mbox{\rm kurt\,} B$ from the localized patches that we track on the solar disk as the Sun rotates. It is useful to make a simultaneous comparison with the value for relatively quiet patches under otherwise identical local conditions. This may be realized as follows: corresponding to each target region at $(\vartheta, \varphi)$, we consider a (quiet) mirror region at $(\vartheta^\dag, \varphi)$ in the opposite hemisphere with the same dimensions, and track both these patches simultaneously, where $\vartheta^\dag=-\vartheta$ for the entire tracking period. 
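\Eq{kurt} can be evaluated directly from the pixel values of a magnetogram patch. A schematic version follows (function name ours; random numbers stand in for real LoS field values):

```python
import numpy as np

def kurt(b):
    """Kurtosis of the LoS field values in a patch:
    kurt B = <(B - <B>)^4> / sigma_B^4; equals 3 for a Gaussian."""
    b = np.asarray(b, dtype=float)
    mu = b.mean()
    var = ((b - mu) ** 2).mean()
    return ((b - mu) ** 4).mean() / var ** 2

rng = np.random.default_rng(1)
quiet = rng.normal(0.0, 10.0, 100_000)                 # near-Gaussian patch
active = np.concatenate([quiet, np.full(200, 200.0)])  # few strong-field pixels
```

For the near-Gaussian quiet patch, `kurt(quiet)` is close to 3, whereas the few strong-field pixels in `active` drive the kurtosis far above 3, mimicking the heavy-tailed distributions seen prior to AR emergence.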
We refer to the $f$-mode energy from such a mirror region as $E_f^\dag$. We find that, while the rms magnetic field $B_{\rm rms}$ rises when the AR emerges, the value in the mirror region, $B_{\rm rms}^\dag$, remains close to a constant background value. \section{Sample selection} We have selected a number of ARs, which may be broadly classified under the following two categories: \begin{itemize} \item \emph{Isolated ARs:} In these examples, ideally a single AR emerges in isolation, with the rest of the Sun being nearly magnetically quiet. As the seismic signals may well be nonlocal, we first need to study isolated ARs to assess the $f$-mode perturbation due to subsurface magnetic fields associated with newly developing ARs. This allows us to avoid contamination that might be caused by the presence of already existing ARs in the neighborhood of the patch where a new AR is going to form later. There are not many instances since the launch of {\em SDO} where only a single AR appears on the entire solar disk, and therefore we have included a few more cases wherein the other ARs are at least far ($>300\,{\rm Mm}$) from the AR in emergence. The chosen examples in this class include ARs 11130, 11158, 11242, 11105, 11072, and 11768. \item \emph{Crowded ARs:} It would be a serious limitation if the proposed technique applied only to isolated ARs, and therefore we have also studied the effects of a newly forming AR on a patch that lies in close proximity to one or several already existing ARs. To highlight the challenges one might face in extracting the signal from such an AR, we have studied ARs 12051 and 11678. \end{itemize} Furthermore, in order to avoid systematic effects close to the limb, we have restricted our sample to only those cases which lie within $\pm60\hbox{$^\circ$}$ in both latitude and longitude of the disk center.
Yet another requirement limiting our sample size is that the corresponding mirror patches in the opposite hemisphere must be magnetically quiet for the entire tracking period, thus offering an easy and simultaneous control. We also studied four magnetically quiet patches at two different phases of the solar magnetic activity cycle. Two such patches, symmetrically located in the northern and the southern hemispheres, were chosen when the Sun was just coming out of its minimum during 2010 May. This offers another control when the Sun did not show much magnetic activity for a few days. We then chose a magnetically quiet patch lying next to AR~12529 during 2016 April, and also followed simultaneously its mirror counterpart. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{QS2010_sef_1200-2000_time_trace} \includegraphics[width=\columnwidth]{QS2010_MGs} \end{center} \caption[]{ Time traces of $\widetilde{E}_f$ (solid red; $\vartheta=+20\hbox{$^\circ$}$) and $\widetilde{E}_f^\dag$ (dashed blue; $\vartheta=-20\hbox{$^\circ$}$) as a function of $t-t_{\rm CM}$ in panel (a), evolutions of the kurtosis, $\mbox{\rm kurt\,} B$ (solid red) and $\mbox{\rm kurt\,} B^\dag$ (dashed blue) in panel (b), $B_{\rm rms}$ (solid line with shaded area underneath) together with $B_{\rm rms}^\dag$ (dashed blue line) in panel (c), as well as magnetograms at $t=t_{\rm CM}-2{\rm d}$ (d) and $t=t_{\rm CM}$ (e) for the quiet sun during 2010 May 14--19. The dash-dotted (red) and triple-dot-dashed (orange) lines denote the time traces of $0.08\,B_{\rm max}$ and $-0.08\,B_{\rm min}$, respectively, from the patch in the northern hemisphere.
}\label{t-trace_QS2010} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{11130_sef_1200-2000_time_trace} \includegraphics[width=\columnwidth]{11130_MGs} \end{center} \caption[]{ Time traces of $\widetilde{E}_f$ (solid red line) and $\widetilde{E}_f^\dag$ (dashed blue line) as a function of $t-t_{\rm AR}$, with $t_{\rm CM}$ marking the time of central meridian crossing in panel (a), evolutions of the excess kurtosis, $\gamma_B$ (solid red) and $\gamma_B^\dag$ (dashed blue) in panel (b), $B_{\rm rms}$ (solid line with shaded area underneath) together with $B_{\rm rms}^\dag$ (dashed blue line) as a function of $t-t_{\rm AR}$ (c), as well as magnetograms at $t=t_{\rm AR}-2{\rm d}$ (d) and $t=t_{\rm AR}$ (e) for AR~11130. The dash-dotted (red) and triple-dot-dashed (orange) lines denote the time traces of $0.08\,B_{\rm max}$ and $-0.08\,B_{\rm min}$, respectively, from the patch where AR~11130 develops. }\label{t-trace_11130} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{11130_coadd_tAR-2d_ky_1200-2000} \vskip0.05in \includegraphics[width=\columnwidth]{11130_coadd_tAR_ky_1200-2000} \end{center} \caption[]{ Images of $\widetilde{E}_f$ for a relatively quiet phase of the Sun in 2010 when the isolated AR~11130 emerged on 2010 November 29. Top and bottom panels show images of $\widetilde{E}_f$ two days before the AR emergence and at $t=t_{\rm AR}$, respectively. The red filled circle denotes the location of AR~11130. Postel projection mapping was used in constructing these images. }\label{images} \end{figure} \section{Results} \label{results} \subsection{Isolated ARs} \label{iar} In \Figs{t-trace_QS2010}{t-trace_11130} we compare the quiet sun during 2010 May 14--19 with the active sun during 2010 November 26--30. We show the time traces of $\widetilde{E}_f$ for corotating patches.
In \Fig{t-trace_QS2010}, $t_{\rm CM}$ denotes the time of central meridian crossing while in \Fig{t-trace_11130}, $t_{\rm AR}$ is the time at which later the AR emerges. We compare with the time traces of the mirror region $\widetilde{E}_f^\dag$ in panels (a) of those and subsequent figures, $\mbox{\rm kurt\,} B$ from those patches in panels (b), the rms magnetic field $B_{\rm rms}$ within those patches in panels (c), as well as the corresponding full disk LoS magnetograms either at $t=t_{\rm AR}-2{\rm d}$ or at $t=t_{\rm AR}-1{\rm d}$ in panels (d) and at $t=t_{\rm AR}$ in panels (e), which are the times when the ARs emerge and were assigned their numbers. The dash-dotted (red) and triple-dot-dashed (orange) lines in panels (c) denote respectively the time-traces of $0.08\,B_{\rm max}$ and $-0.08\,B_{\rm min}$ from the patch of interest where an AR develops. All six ARs show similar characteristics: an early rise of $\widetilde{E}_f$ with a maximum $1$--$2$ days prior to $t_{\rm AR}$, followed by a decline at and after $t_{\rm AR}$, as well as a delayed increase of $\widetilde{E}_f^\dag$, sometimes with a maximum near $t_{\rm AR}$. We speculate that the delayed increase of $\widetilde{E}_f^\dag$ might be caused by a correlated response at a distant mirror patch. This would indicate that the early $f$-mode strengthening, i.e., the precursor signal, appears to have an associated causal response at later times, at distant mirror patches. Interestingly enough, in most cases, the $\mbox{\rm kurt\,} B$ from the patch where an AR forms also shows a peak before the AR is fully developed, and thus offers yet another advance indication of AR formation. By contrast, during a suitably chosen time in 2010, the Sun was nearly completely quiet, and both $\widetilde{E}_f$ and $\widetilde{E}_f^\dag$ follow each other rather closely (\Fig{t-trace_QS2010}), although their time traces still show considerable variability. 
This might be caused by fluctuations in the subsurface turbulence and small-scale magnetic fields even for the quiet sun, or perhaps by instrumental effects. The fact that $\widetilde{E}_f$ and $\widetilde{E}_f^\dag$ remain close to each other at all times shows that in the quiet phase of the Sun, the integrated $f$-mode amplitudes in the two hemispheres evolve {\em symmetrically}, so that the difference is small and therefore not significant. Note also that, since no AR has emerged during that time, we replaced $t_{\rm AR}$ by the time of central meridian crossing $t_{\rm CM}$ of an arbitrarily chosen comoving patch in \Fig{t-trace_QS2010}. Based on these findings, the following hypotheses may be formulated. In regions with low or no surface magnetic activity, a nearly flat time trace without systematic differences between $\widetilde{E}_f$ and $\widetilde{E}_f^\dag$ suggests low subsurface magnetic activity, while a gradual and systematic enhancement of $\widetilde{E}_f$ relative to $\widetilde{E}_f^\dag$ is suggestive of a build-up of subsurface magnetic activity. In already established ARs, on the other hand, $\widetilde{E}_f$ is visibly depressed and $\widetilde{E}_f^\dag$ may or may not show a marked rise, depending on the complexity of the already established surface activity. We adopt a root-mean-square error estimation for $\widetilde{E}_f$ based on the results shown in \Fig{t-trace_QS2010} for a magnetically quiet sun. The mean error ($\sigma_{\rm m}$) is obtained from: \begin{equation} \sigma_{\rm m}=\frac{1}{2}\sqrt{\sigma_E^2+ \sigma_{E^\dag}^2}\;;\; \sigma_E=\sqrt{\left\langle\left(\widetilde{E}_f- \overline{\widetilde{E}}_f\right)^2\right\rangle}. \end{equation} Here, $\langle \widetilde{E}_f \rangle \equiv \overline{\widetilde{E}}_f$ denotes the mean value of $\widetilde{E}_f(t)$. We use $\sigma_{\rm m}$ to display error bars in figures showing $\widetilde{E}_f(t)$. 
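This error estimate amounts to combining the rms scatter of the two quiet-sun time traces in quadrature; a brief sketch (function name ours, toy trace values):

```python
import numpy as np

def mean_error(e_f, e_f_dag):
    """sigma_m = (1/2) sqrt(sigma_E^2 + sigma_E_dag^2), where each sigma is
    the rms deviation of a quiet-sun time trace E_f_tilde(t) from its mean."""
    sig_e = np.std(e_f)        # rms scatter about the mean
    sig_ed = np.std(e_f_dag)
    return 0.5 * np.sqrt(sig_e ** 2 + sig_ed ** 2)

# toy quiet-sun time trace used in both hemispheres
trace = np.array([10.0, 12.0, 11.0, 9.0, 13.0])
sigma_m = mean_error(trace, trace)
```

For identical scatter in both hemispheres this reduces to $\sigma_{\rm m}=\sigma_E/\sqrt{2}$.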
\begin{figure} \begin{center} \includegraphics[width=\columnwidth]{11158_sef_1200-2000_time_trace} \includegraphics[width=\columnwidth]{11158_MGs} \end{center} \caption[]{ Same as \Fig{t-trace_11130}, but for AR~11158. }\label{11158} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{11242_sef_1200-2000_time_trace} \includegraphics[width=\columnwidth]{11242_MGs} \end{center} \caption[]{ Same as \Fig{t-trace_11130}, but for AR~11242. }\label{11242} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{11105_sef_1200-2000_time_trace} \includegraphics[width=\columnwidth]{11105_MGs} \end{center} \caption[]{ Same as \Fig{t-trace_11130}, but for AR~11105. }\label{11105} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{11072_sef_1200-2000_time_trace} \includegraphics[width=\columnwidth]{11072_MGs} \end{center} \caption[]{ Same as \Fig{t-trace_11130}, but for AR~11072. }\label{11072} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{11768_sef_1200-2000_time_trace} \includegraphics[width=\columnwidth]{11768_MGs} \end{center} \caption[]{ Similar to \Fig{t-trace_11130}, but for AR~11768. }\label{11768} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{11768_sef_1200-1300_time_trace} \end{center} \caption[]{ Similar to \Figp{11768}{a}, but using $k_y R_\odot \in [1200,1300]$ instead of $[1200,2000]$ in determining $E_f$ using \Eq{ef}. }\label{11768_a2} \end{figure} Let us now discuss the individual examples in more detail. AR~11130 was a solitary AR during 2010 when the overall solar activity was still rather low. It is therefore an example where interference from other locations on the Sun is minimal. Indeed, it displays most strikingly the ``symmetry breaking'' between $\widetilde{E}_f$ and $\widetilde{E}_f^\dag$, with $\widetilde{E}_f$ showing a maximum about 1.5 days before this AR emerges; see \Fig{t-trace_11130}.
Also, $\mbox{\rm kurt\,} B$ shows a peak more than one day before this AR is fully developed; see the solid red line in \Figp{t-trace_11130}{b}. As an extension of this work, we also calculate images of $\widetilde{E}_f$ for the solar disk. This gives more explicit information of where the next AR might form; see \Fig{images} showing images at times when AR~11130 was forming. It is remarkable that the maximum in $\widetilde{E}_f$ at time $t=t_{\rm AR}-2{\rm d}$ coincides with the location (marked by a red filled circle) where AR~11130 is going to form later. We also note from the top image in \Fig{images} that the strengthening of the $f$-mode about two days prior to the emergence of the AR is nonlocal in space, with patches progressively farther from the predicted location of AR~11130 showing almost monotonically decreasing $\widetilde{E}_f$. Although we see a moderate degree of fluctuation, there are also systematic effects---especially near the limb. Whether or not these are caused by instrumental effects such as variations of the modulation transfer function \citep{Wachter} is unclear. If so, the remaining variations may either also be related to instrumental effects or they could be caused by weaker subsurface magnetic fields that must always be present---even during solar minimum. Next, we consider AR~11158 (\Fig{11158}), which was a rapidly growing AR that produced the first X-class flare of solar cycle 24 on 2011 February~15 \citep{MVA12} with an Earth-directed halo coronal mass ejection \citep{Sch11}. It also produced several M-class flares during February 13--16 \citep{Ino13}, after being assigned its number on February~13. Also in this case, $\widetilde{E}_f$ shows a clear increase with $\widetilde{E}_f-\widetilde{E}_f^\dag\sim200\,{\rm m}^2\,{\rm s}^{-2}$ about a day before $B_{\rm rms}$ reaches a plateau of about $220\,{\rm G}$. 
The energy increase of about $300\,{\rm m}^2\,{\rm s}^{-2}$ seen about three days prior to the AR emergence appears to be indicative of a subsurface concentration of the magnetic field resulting in a rapid growth of $B_{\rm rms}$ in the photosphere. Thus, the same general trend is found here too, although the potential for using $\widetilde{E}_f(t)$ as a precursor was less clear in the sense that it showed a maximum only about a day in advance. The subsequent increase in $\widetilde{E}_f^\dag$ is noticeable here as well. In this case, $\mbox{\rm kurt\,} B$ shows a peak already at $t=t_{\rm AR}-2{\rm d}$. AR~11242 (\Fig{11242}) was assigned its NOAA number on 2011 June 29, a day before it fully emerged in isolation. Here we find elevated values of $\widetilde{E}_f$ relative to $\widetilde{E}_f^\dag$ for all times during our tracking period, with $\widetilde{E}_f$ showing a maximum about 1--2 days prior to AR formation. Again, early strengthening of $\widetilde{E}_f$ at $t=t_{\rm AR}-3{\rm d}$ appears as a precursor to the rise of $B_{\rm max}$ and the peak in $\mbox{\rm kurt\,} B$ at $t=t_{\rm AR}-1{\rm d}$. In this case too, we have strong evidence of $f$-mode strengthening about 1--3 days before there is any visible magnetic activity at the patch where AR~11242 develops later. \cite{Smi13} reported long-period oscillations of 200--400 min associated with this AR, using simultaneous data from HMI and ground-based radio emission measurements at 37 GHz from the Mets\"ahovi radio observatory at Aalto University in Finland. They interpreted their results based on the shallow sunspot model of \cite{SK09}, which may even show some resemblance to the magnetic flux concentrations that form spontaneously in strongly stratified turbulence simulations \citep{BKR13}. Now we consider the case of AR~11105 (\Fig{11105}), which was assigned its NOAA number nearly at the time of onset of $B_{\rm rms}$ on 2010 September 3.
For AR~11105, similar to the previous example, $\widetilde{E}_f$ remains larger than $\widetilde{E}_f^\dag$ during the tracking, and shows the usual post-emergence damping. Unlike the other examples, the time trace of $\mbox{\rm kurt\,} B$ is in this case featureless. Next we turn to AR~11072 (\Fig{11072}), which was identified on 2010 May 23 when $B_{\rm rms}$ had reached its peak value, although $B_{\max}$ from the same region showed an early growth already about two days earlier. About four days prior to $t_{\rm AR}$, the patch in this case was much closer to the limb than in the other examples, and the data might have suffered some systematic effects, as discussed above. However, we do find weak signatures of relative strengthening of $\widetilde{E}_f$ at $t\approx t_{\rm AR}-3.5{\rm d}$, although the damping of the $f$-mode after the flux emergence is not seen. This might be due to the episodic flux emergences in this case, as is apparent from \Fig{11072}. Interestingly, $\mbox{\rm kurt\,} B$ shows a sharp rise at about the same time as we find signs of $f$-mode strengthening, and it exhibits a double-peaked feature, all of which occurs well before $B_{\rm rms}$ saturates in this region. For AR~11768, we now perform the following experiment to highlight the significance of the wavenumber dependence of the proposed precursor signal, i.e., the $f$-mode strengthening; our results for this case are presented in \Figs{11768}{11768_a2}. We considered two different wavenumber intervals in determining $E_f$ using \Eq{ef}: while \Figp{11768}{a} corresponds to the same range, $k_y R_\odot \in [1200,2000]$, as used in the other cases, for \Fig{11768_a2} we chose a much narrower wavenumber range, $k_y R_\odot \in [1200,1300]$, which explains the lower values of $\widetilde{E}_f$.
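For orientation, the normalized wavenumbers used here translate into horizontal scales via $\lambda = 2\pi R_\odot/(k_y R_\odot)$; a minimal numerical sketch (the value of $R_\odot$ is the standard one, not quoted in the text):

```python
import math

R_SUN_KM = 696000.0  # solar radius in km (standard value; assumption, not quoted in the text)

def horizontal_scale_km(ky_Rsun):
    """Horizontal wavelength corresponding to a normalized wavenumber k_y R_sun."""
    return 2.0 * math.pi * R_SUN_KM / ky_Rsun

# Edges of the two wavenumber windows used in determining E_f:
for k in (1200, 1300, 2000):
    print(k, round(horizontal_scale_km(k)))
```

The narrow window $[1200,1300]$ thus corresponds to scales of roughly 3400--3600 km, while the wide window $[1200,2000]$ extends down to about 2200 km.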
As shown in \Fig{11768_a2}, there is again the characteristic symmetry breaking between $\widetilde{E}_f$ and $\widetilde{E}_f^\dag$, with $\widetilde{E}_f$ showing a maximum at $t=t_{\rm AR}-2{\rm d}$. In this case the initial rise is sharper than, say, for AR~11130. However, no such relative strengthening of $\widetilde{E}_f$ is seen before emergence when the larger wavenumber range is considered, although the usual rise of $\widetilde{E}_f$ followed by the post-emergence damping is clearly visible; see \Figp{11768}{a}. This provides a hint of a possible wavenumber dependence of the effect causing the $f$-mode strengthening prior to AR formation. The perturbed wavenumbers of the $f$-mode correspond to horizontal scales of around $3000\,{\rm km}$, and we speculate that these might be the typical scales of magnetic structures that are gradually growing in both strength and size, while leaving their imprint in the form of the observed $f$-mode strengthening at such high wavenumbers. Note that in this case, large-scale patches of weak magnetic fields are present in the opposite hemisphere, with $B_{\rm rms}^\dag \approx 2B_{\rm rms}$ at early times, the highest among all the cases considered here. This could affect the values of $\widetilde{E}_f^\dag$. Here too, $\mbox{\rm kurt\,} B$ shows a peak about a day before the magnetic flux associated with this AR has fully emerged. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{12529} \end{center} \caption[]{ Panel (a) shows AR~12529, with colors indicating the strength of the LoS magnetic field $B$ in $\,{\rm kG}$; panel (b) shows the temporal evolution of its rms strength $B_{\rm rms}$ (solid line with shaded area underneath) together with $B_{\rm rms}^\dag$ (dashed blue line). The dash-dotted (red) and triple-dot-dashed (orange) lines denote time-traces of $0.08\,B_{\rm max}$ and $-0.08\,B_{\rm min}$, respectively, from the patch shown in panel (a).
}\label{QSnear12529_fg} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{QS1_near_12529_sef_1200-2000_time_trace} \includegraphics[width=\columnwidth]{QS_near_12529_MGs} \end{center} \caption[]{ Similar to \Fig{t-trace_11130}, but for a magnetically quiet patch lying next to AR~12529. Here, $t=t_{\rm AR}$ corresponds to a maximum in $|B_{\rm min}|$. }\label{QSnear12529} \end{figure} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{12051_sef_1200-2000_time_trace} \includegraphics[width=\columnwidth]{12051_MGs} \end{center} \caption[]{ Same as \Fig{t-trace_11130}, but for AR~12051, which is in close proximity to already existing ARs. }\label{12051} \end{figure} Comparing now all six ARs in our sample, we see that the three ARs that appeared in the north (ARs~11130, 11242, and 11105) had slightly larger values of $\widetilde{E}_f$ (2600--2800~${\rm m^2s^{-2}}$) than the three in the south (ARs~11158, 11072, and 11768, with $\widetilde{E}_f$ in the range 2200--2400~${\rm m^2s^{-2}}$). This is consistent with the strong north-south asymmetry of cycle~24, with stronger activity and an earlier maximum in the north and weaker activity and a later maximum in the south \citep{CCG13,SHLZ15}. This shows that the value of $\widetilde{E}_f$ reflects the general subsurface magnetic activity even over the time scale of the solar cycle. \subsection{Crowded ARs} \label{car} As argued above, ARs cause damping of the $f$-mode after their emergence, and this might influence the signal from a newly forming AR in the neighborhood. In order to extract precursor signals from a developing AR in a crowded environment, we perform an experiment demonstrating the nonlocality of the high-wavenumber $f$-mode damping caused by already established ARs.
Here, we consider a magnetically quiet patch lying just above AR~12529, which was an already existing strong AR during 2016 April; see \Figp{QSnear12529_fg}{a}, which shows a close-up of this AR, with the temporal evolution of its $B_{\rm rms}$ in panel (b). Similar to, say, \Fig{t-trace_11130}, we show time traces of $\widetilde{E}_f$ (corresponding to the quiet patch above AR~12529) and $\widetilde{E}_f^\dag$ in \Fig{QSnear12529}. Note that both patches being tracked in this experiment are magnetically quiet and that their corresponding kurtoses are essentially featureless. We find a significant damping of $\widetilde{E}_f$ as compared to $\widetilde{E}_f^\dag$ at $t\approx t_{\rm AR}+1{\rm d}$, after which there are some data gaps in the observations. Both $\widetilde{E}_f$ and $\widetilde{E}_f^\dag$ attain similar values at late stages. We now turn to the case of AR~12051, which lies next to bigger and stronger ARs that had already appeared in the southern hemisphere; see the magnetograms in panels (d) and (e) of \Fig{12051}. Here too we find that the evolution of $\widetilde{E}_f$ obtained from the patch where AR~12051 later emerges is not flat; see \Fig{12051}. It rises from a level of $\sim$2150 ${\rm m^2s^{-2}}$ and attains a maximum of $\sim$2400 ${\rm m^2s^{-2}}$ more than two days before the AR was assigned its number on 2014 May 2, and nearly three days before $B_{\rm rms}$ reached its maximum value of about $150\,{\rm G}$. On May 3, this AR developed a so-called $\delta$-class spot, with M-class flares a few days later. However, the essential difference here is that $\widetilde{E}_f^\dag$ from the relatively quiet mirror patch remains larger than $\widetilde{E}_f$ at all times.
This might well be expected based on our experiments and results presented above, and may be understood as follows: as the southern hemisphere is already ``polluted'' by many ARs, the $f$-mode is expected to be damped in this hemisphere, and therefore the time-trace of $\widetilde{E}_f$ for AR~12051, while showing early precursor signatures, does not exceed $\widetilde{E}_f^\dag$ from the northern hemisphere, where the $f$-mode remains undamped and shows a much smaller variation. Having discussed the possible difficulties in predicting a new AR emerging in a crowded environment, we now wish to describe plausible procedures that might be useful in extracting the precursor signals in such a ``polluted'' medium. One may find guidance from a standard technique of optical astronomy, where one routinely subtracts the emission from a bright foreground star in order to detect and study a faint background source. In the present context, this would require a more detailed knowledge of the $f$-mode damping mechanism caused by existing ARs on the solar disk, so that one could apply a similar \emph{cleaning} procedure. We make such an attempt for our final case of AR~11678, which emerged next to a group of compact ARs on 2013 February 19; see \Fig{11678}. Although the time-trace of $\widetilde{E}_f$ shows a peak about a day before this AR emerges, it remains smaller than $\widetilde{E}_f^\dag$ at all times during the tracking, as would be expected in this case. Based on the other cases discussed earlier, we find that the amount of observed damping of $\widetilde{E}_f$ can be as large as about $25\%$ of its peak value. Therefore, we applied a uniform boost of $25\%$ to the original $\widetilde{E}_f$ in an attempt to correct for the expected damping, and show the boosted time-trace $1.25\,\widetilde{E}_f$ as a dashed red line with filled circles in \Figp{11678}{a}.
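The uniform-boost cleaning amounts to a one-line rescaling before the precursor criterion is applied; a minimal sketch with synthetic time traces (the array names, the flat mirror-patch trace, and the peak-detection criterion are illustrative assumptions; only the $25\%$ factor is taken from the text):

```python
import numpy as np

def boosted_precursor(E_f, E_f_dag, t, t_AR, boost=1.25):
    """Apply a uniform boost to E_f (to correct for damping by nearby ARs) and
    report whether the boosted trace exceeds the mirror-patch trace somewhere
    and peaks before the emergence time t_AR."""
    E_corr = boost * np.asarray(E_f)      # uniform 25% boost
    exceeds = np.any(E_corr > np.asarray(E_f_dag))
    t_peak = t[np.argmax(E_corr)]         # time of maximum of the boosted trace
    return exceeds and (t_peak < t_AR), t_peak

# Synthetic example: a damped trace peaking 1 d before emergence at t_AR = 0.
t = np.linspace(-4.0, 2.0, 61)                # days relative to t_AR
E_f = 2000.0 + 300.0 * np.exp(-(t + 1.0)**2)  # peaks at t = -1 d, max 2300
E_f_dag = np.full_like(t, 2400.0)             # quiet mirror patch, flat
found, t_peak = boosted_precursor(E_f, E_f_dag, t, t_AR=0.0)
```

Without the boost the synthetic trace never exceeds the mirror-patch level; after the $25\%$ correction the precursor peak is recovered a day before $t_{\rm AR}$.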
This immediately reveals the relative strengthening---similar to what is observed in the case of isolated ARs. This experiment with a uniform boost is only meant to highlight the necessary correction procedure. Clearly, we need better knowledge of the post-emergence effects on the $f$-mode, not only locally but also in the surrounding medium, to be able to apply a realistic, non-uniform boost that depends on the magnetic activity in the neighborhood. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{11678_sef_1200-2000_time_trace} \includegraphics[width=\columnwidth]{11678_MGs} \end{center} \caption[]{ Same as \Fig{12051}, but for AR~11678. Here, the dashed (red) line in panel (a) corresponds to $1.25\widetilde{E}_f$. }\label{11678} \end{figure} \section{Implications} If we accept that $\widetilde{E}_f(t)$ can be used as a precursor to AR formation, we must ask about its possible physical origin and relevance. Earlier idealized simulations \citep{SBR14,SBCR15} have demonstrated that, while uniform magnetic fields lead to a frequency shift and a weakening of the $f$-mode, a nonuniform subsurface field can lead to a fanning and associated strengthening of the $f$-mode, provided the magnetic field is at least one or two pressure scale heights below the surface. While such studies should be repeated with more realistic models, they do confront us with the question of how a magnetic field can remain undetected once it is only a few Mm below the surface. The fact that the $f$-mode resides near the top few ${\rm Mm}$ of the Sun is suggestive of a gradual build-up of the AR near the surface, instead of a buoyant rise, which would happen in just a few hours \citep{CRTS10}. This is in stark contrast to the conventional picture of an $\Omega$-shaped flux tube rising from the bottom of the convection zone and forming an AR as it pierces the surface \citep{Fan01}.
Earlier simulations of \cite{CRTS10}, with a magnetic field implanted at a depth of nearly $10\,{\rm Mm}$ below the surface, have produced surface manifestations just a few hours later. Such simulations do not address the physics of the {\em formation} of magnetic flux concentrations. By contrast, simulations in large enough domains performed by several groups \citep{SN12,WLBKR13,MBKR14,KBKKR16,MS16} have demonstrated the spontaneous emergence of magnetic flux concentrations right at the surface. This highlights the potential significance of $f$-mode-related precursors in constraining our still very sketchy understanding of the solar dynamo \citep{Ossen03,B05,Cha10}. Yet another important quantity to be investigated in dynamo models is the kurtosis of the magnetic field. Models exhibiting a peak in $\mbox{\rm kurt\,} B$ well before $B_{\rm rms}$ saturates are expected to be better constrained and might become more favorable. \section{Conclusions} All six examples of isolated ARs presented in \Sec{iar} show that, several days prior to magnetic field emergence, the strength of the $f$-mode, as represented by the value of $\widetilde{E}_f$, rises and then reaches a maximum before displaying the known post-emergence damping. Also, prior to AR emergence, the value of $\widetilde{E}_f$ remains larger than the value obtained from the corresponding quiet-Sun location, $(\vartheta^\dag, \varphi)$, for long times and with a significant energy difference. For the two examples of crowded ARs presented in \Sec{car}, however, this is different; as explained above, this behavior is in fact expected. We summarise our findings as follows: \begin{itemize} \item The solar $f$-mode is perturbed and shows a strengthening at high wavenumbers, caused by the subsurface magnetic fields associated with emerging ARs, about 1--2 days before there is any visible magnetic activity in the photosphere. This appears to be independent of the phase within the solar cycle.
\item We discussed the wavenumber dependence of the precursor signal and showed that the $f$-mode strengthening occurs at fairly large wavenumbers. \item In many cases, the kurtosis of the magnetic field from the patch in which the AR develops shows a peak well before $B_{\rm rms}$ from that region saturates. \item As discussed in earlier works, we find that the $f$-mode suffers damping after the emergence of the AR. \item The $f$-mode strengthening prior to AR formation, as well as its post-emergence damping, are nonlocal in space, and could thus influence neighboring patches. \item We proposed a plausible cleaning procedure to extract precursor signals from patches in a crowded environment with one or more pre-existing ARs. \end{itemize} Calculating images of $\widetilde{E}_f$ for the solar disk, as shown by an example in \Fig{images}, appears to provide explicit information about where the next AR might form. But we need more studies to better understand the post-emergence damping of the $f$-mode and its effects on the surrounding medium, in order to calibrate the necessary correction/cleaning that must be applied to the data to extract precursor signals from a polluted medium. \acknowledgments We thank Charles Baldner, Aaron Birch, Rick Bogart, Robert Cameron, Brad Hindman, Maarit K\"apyl\"a, Charlie Lindsey, Matthias Rheinhardt, Jesper Schou, Hannah Schunker, Sami Solanki, Junwei Zhao, and the referee for their comments and suggestions. This work has been supported in part by the Swedish Research Council grant No.\ 621-2011-5076 as well as a startup grant from CU-Boulder.
\section{Introduction} \label{s-intro} The Ultra--Luminous X--ray sources (ULXs) are a class of very bright ($L_X\simeq 10^{39}-10^{41}{{~\rm erg}\; \rm s^{-1}}$), point--like X--ray sources detected off the nuclei of several galaxies \citep{Fab06}. Although they were discovered more than 20 years ago (by the {\sl Einstein} X--ray satellite: \citealp{Lon83,Fab89}), their nature remains unclear. If the ULXs are powered by accretion, their luminosity exceeds the Eddington limit for a stellar--sized object, sometimes by a factor~$\sim1000$. This has led some authors \citep{Col99} to postulate that the ULXs are black holes with masses $10^2-10^4{~M_\odot}$, intermediate between the black holes expected from the final evolution of massive stars and the super--massive black holes of $10^6-10^9{~M_\odot}$ powering the Active Galactic Nuclei. Possible formation scenarios of these intermediate mass black holes (IMBH) are discussed in \citet{Mil02} (formation in globular clusters) and \citet{Por04} (in young super--massive star clusters). An alternative view places the ULXs in the more familiar realm of stellar--sized black holes. According to \citet{Kin01}, the ULXs are black holes of $M_\bullet \simeq 10{~M_\odot}$, accreting mass from a disc at a rate above their Eddington limit. In this regime, the inner accretion disc is thick, and may collimate the outgoing radiation. The anisotropy leads the observer to overestimate the luminosity (and the mass of the hole) by a large factor, up to $10-100$. Alternatively, some models suggest that the ULXs may actually radiate above the Eddington limit: these include slim disc models \citep{Ebi03}, photon bubble dominated discs \citep{Beg02, Beg06} and two--phase radiatively efficient discs \citep{Soc06}. A combination of beaming and super--Eddington accretion is also possible \citep{Pou07, Kin08, Kin09}.
More recently, a third possibility has been emerging, namely that the ULXs are black holes of $30-90{~M_\odot}$ produced by the final evolution of massive stars in a metal poor environment \citep{Zam09}. In such an environment the black hole's progenitor does not lose much of its original mass through the action of line--driven winds, and might collapse directly into a black hole without exploding as a supernova, producing more massive black holes than in a metal rich environment. The accretion--powered models do not exhaust the possible scenarios able to explain the nature of the ULXs. In a sample of $154$ ULXs observed with {\sl Chandra}, \citet{Swa04} argue that about~$20\%$ of them may be well described by a thermal model, consistent with young supernova remnants strongly interacting with the surrounding environment. In this interpretation, the Eddington limit is clearly not an issue, and indeed these ULXs have X--ray luminosities comparable to those of the brightest supernovae ($L_X\sim 10^{41}{{~\rm erg}\; \rm s^{-1}}$, \citealp{Imm03}). The Cartwheel galaxy is a peculiar nursery of ULXs. This galaxy underwent a collision with a smaller galaxy about $10^8{~\rm yr}$ ago \citep{Hig96, Map08}. This episode not only conferred on the galaxy its peculiar shape, but also triggered massive star formation in its ring. The Cartwheel harbours more ULXs than any other galaxy: 15 sources are more luminous than $10^{39}{{~\rm erg}\; \rm s^{-1}}$ \citep{Wol04}, and at least some of them are known to be variable \citep{Cri09}. In this paper we address the properties of the ULX labelled N10 by \cite{Wol04}. In the {\sl Chandra} observation of~2001 it was the brightest ULX in the Cartwheel (and also among the brightest known ULXs), but a couple of subsequent {\sl XMM--Newton} observations taken in 2004 and 2005 showed that its luminosity is fading \citep{Wol06}.
Two additional {\sl Chandra} observations were performed in~2008 to follow the variability pattern of this ULX. We present here the {\sl Chandra} observations taken in 2008 and compare them with that of 2001. The {\sl ACIS} angular resolution is instrumental in reducing to a minimum the possible contamination of N10 from the surrounding diffuse gas and the neighbouring sources crowding the Cartwheel ring. In our analysis we do not use the {\sl XMM--Newton} observations, since the analysis of the variability with {\sl XMM--Newton} data was already published by \citet{Wol06}. In addition, the use of {\sl XMM--Newton} data would require specific modelling of the contamination from the neighbouring sources and the Cartwheel's ring due to the relatively large PSF of {\sl XMM--Newton}, increasing the uncertainties in the derived spectral parameters. Our main goal is to compare the spectral parameters of N10 to the theoretical models put forward to explain the ULXs. In particular, we shall discuss several accretion models and the supernova model. Although N10 is a bright source, its large distance ($D=122{~\:\rm Mpc}$, \citealp{Wol04} and references therein) limits the number of collected photons, preventing us from rejecting or confirming a spectral model on purely statistical grounds. For this reason, we shall assess the likelihood of any model more in the light of its own self--consistency than of its statistical evidence. The outline of the paper is the following. In section~\ref{s-prep} we describe the {\sl Chandra} data and their preparation. In section~\ref{s-plaw} we present a simple spectral model of N10, which will be a thread for a more detailed analysis. The accretion and supernova models are presented in sections~\ref{s-accretion} and~\ref{s-supernova}, respectively. Finally, we discuss and summarise our results in section~\ref{s-sum}. \section{Data Preparation} \label{s-prep} {\sl Chandra} observed the Cartwheel once in 2001 and twice in 2008.
All the observations were carried out with the back--illuminated (BI) chip {\sl ACIS-S3}. This detector was operated in {\sl FAINT} mode in the first observation (no.~2019) and in {\sl VFAINT} mode in the remaining observations (nos.~9531 and~9807). New level=2 event files have been re-created with {\sl CIAO-4.1.2} in order to work with a homogeneous data set. The script {\sl wavdetect} was used to detect the sources within the CCD field of view. No observation was seriously contaminated by flares; to check this we first extracted a light curve from the source--free areas in the spectral band $0.3-10{~\rm keV}$. The {\sl Chandra} script {\sl lc\_sigma\_clip} used this curve to excise the periods with a count rate more than $3$~sigmas above the average. This correction excludes only a small fraction of the observations, as shown in the summary Table~\ref{t-data}. The spectra of N10 were extracted from a circle of radius $R=2.7''$, and the background spectrum from a source--free region near the source. The chosen extraction circle includes virtually all the photons from the source. In all our analysis we fit the spectra in the window $0.3-7{~\rm keV}$, where the signal to noise ratio is higher. In this window the first spectrum (observation no.~2019) contains $400$~photons, the second one (observation no.~9531) $173$ photons and the third (observation no.~9807) only $59$ photons before background subtraction. There are two contributions to the background: an instrumental one, and one due to the diffuse gas of the galaxy's ring and to unresolved low--luminosity sources. Both give a negligible contamination on account of the small extraction area: we estimate that the instrumental background contributes less than~$2$ photons in each observation, and that $3-5$ photons come from the gas of the Cartwheel galaxy's ring.
On account of the relatively poor statistics of our data, we would not like to lose spectral resolution by binning the data to the standard minimum value of $\sim 25$~counts/bin required for a consistent use of the $\chi^2$ statistics (see e.g. \citealp{Cas79}). For this reason we choose to bin the spectra with a minimum of $5$~counts/bin and analyse them with the implementation of the Cash statistics (called {\em cstat}) provided in version~12.5.1 of {\sc XSPEC}. The Cash statistics assumes that the counts are distributed according to a Poisson law, which does not allow the background to be subtracted from the spectra \citep{Cas79}. In our case the background contribution is small, and we simply neglect it. The {\em cstat} statistics differs from the canonical Cash statistics in that the value of {\em cstat} provides a goodness of fit (similar to the $\chi^2$ statistics), if each spectral bin contains at least $5$ counts. The fit is considered acceptable if the value of the ``reduced'' {\em cstat} (i.e. the ratio of {\em cstat} to the number of degrees of freedom of the model) is close to one (\citealp{Arn96} \footnote{See also the {\sc XSPEC} manual page http://heasarc.gsfc.nasa.gov/docs/xanadu/xspec/manual/XSappendixCash.html.}). We also check that the distribution of residuals in any spectral model is not skewed and does not show peculiar features. All the statistical uncertainties of the best--fitting parameters are quoted at the $90\%$ confidence level. The X--ray luminosities of the source are consistently calculated in the spectral band $0.5-10{~\rm keV}$, and they are corrected for absorption. \section{The Power Law Model} \label{s-plaw} The count rate varies between the three observations. In order to avoid complications in comparing count rates of observations taken in {\sl FAINT} and {\sl VFAINT} modes, we evaluate this variability with a simple absorbed power law model (${\tt wabs*powerlaw}$).
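The goodness-of-fit criterion described in the previous section can be sketched numerically; the expression below is the standard {\sc XSPEC} form of the Cash statistic for Poisson data, $C = 2\sum_i \left[m_i - d_i + d_i\ln(d_i/m_i)\right]$ (with the logarithmic term dropped for empty bins), and the example bins are illustrative, not the actual N10 spectrum:

```python
import numpy as np

def cstat(data, model):
    """XSPEC-style Cash statistic for Poisson-distributed counts.
    data: observed counts per bin; model: predicted counts per bin."""
    d = np.asarray(data, dtype=float)
    m = np.asarray(model, dtype=float)
    terms = m - d                                    # contribution of every bin
    pos = d > 0
    terms[pos] += d[pos] * np.log(d[pos] / m[pos])   # d*ln(d/m), skipped for empty bins
    return 2.0 * terms.sum()

# Illustrative spectral bins (>= 5 counts each, as in the binning used here):
d = np.array([6, 5, 9, 7, 5, 8])
m = np.array([6.2, 5.5, 8.1, 7.3, 4.6, 7.9])
c = cstat(d, m)
dof = len(d) - 2          # e.g. two free parameters (n_H, Gamma)
reduced = c / dof         # a value close to 1 indicates an acceptable fit
```

The statistic vanishes for a perfect fit and grows as the model departs from the data, which is why the ``reduced'' value is read analogously to a reduced $\chi^2$.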
Despite its simplicity, the power law model is informative about some basic parameters useful for our further analysis. Separate fits of each observation return comparable values of the absorption column $n_H$ and the spectral photon index $\Gamma$. For this reason we tie $n_H$ and $\Gamma$ of each data set to a common value and fit them simultaneously. Table~\ref{t-plaw} summarises the results of the fit, and Figure~\ref{f-pl} shows the model and its residuals. The value of the C-statistic is $C/{\rm dof}=100.8/101$. The best fitting absorption column is $n_H=3.7_{-1.0}^{+1.0}\times 10^{21}{~\rm cm}^{-2}$, and the spectral photon index is $\Gamma=1.88_{-0.24}^{+0.25}$, both consistent with those presented by \citet{Wol04}. The value of $n_H$ is higher than Galactic, but consistent both with other X-ray measurements in the Cartwheel and with the intrinsic absorption in the optical band (see \citealp{Wol04}). The intrinsic luminosities in the spectral band $0.5-10{~\rm keV}$ are $1.2\times 10^{41}{{~\rm erg}\; \rm s^{-1}}$, $8.6\times 10^{40}{{~\rm erg}\; \rm s^{-1}}$ and $2.8\times 10^{40}{{~\rm erg}\; \rm s^{-1}}$ for the first, second and third observation, respectively. Compared to the Eddington luminosity of a $1{~M_\odot}$ collapsed object, these luminosities seem to imply a black hole of $\gtrsim 200{~M_\odot}$. It is perhaps worth remarking that the Eddington luminosity is {\em bolometric}, while in the present paper we refer to the luminosity in the X--ray band; the latter approximates the bolometric luminosity within a factor $\gtrsim 2$. Formally, a power law model provides an adequate description of the {\sl Chandra} data sets, and more refined models are not statistically required. Nevertheless, we would like to know whether these observations may somehow constrain some of the theoretical models suggested for the ULXs. The following sections aim at this goal.
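The $\gtrsim 200{~M_\odot}$ bound quoted above follows from requiring $L \le L_{\rm Edd}$ for each observation; a minimal numerical check (the Eddington coefficient $1.26\times10^{38}\,{\rm erg\,s^{-1}}$ per solar mass is the standard hydrogen-accretion value, an assumption not quoted in the text):

```python
# Minimum black-hole mass if each observed luminosity is Eddington-limited.
L_EDD_PER_MSUN = 1.26e38  # erg/s per solar mass (standard value; assumption)

# Intrinsic 0.5-10 keV luminosities from the simultaneous power law fit:
luminosities = {
    "obs 2019": 1.2e41,
    "obs 9531": 8.6e40,
    "obs 9807": 2.8e40,
}

min_masses = {k: L / L_EDD_PER_MSUN for k, L in luminosities.items()}
# Even the faintest observation requires M >~ 220 Msun;
# the brightest pushes the bound towards ~1000 Msun.
```

Note that these are conservative bounds, since the X--ray luminosity underestimates the bolometric one, as remarked above.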
\section{The Accretion Models} \label{s-accretion} In the framework of an accretion scenario, the engine of N10 is a black hole accreting mass from a donor star through a disc. In this section we present the results of the spectral analysis of three accretion models: i)~a multi--colour disc ({\tt diskbb} in the language of {\sc XSPEC}), ii)~a model of a disc around a maximally rotating Kerr black hole ({\tt kerrd} in {\sc XSPEC}) and finally iii)~a \citet{Kaw03} slim disc model \footnote{ Slim disc models are not available in the main release of the package, but only as tabular models to be downloaded separately from the URL \url{http://heasarc.gsfc.nasa.gov/docs/xanadu/xspec/models/slimdisk.html}.}. All models are acceptable on statistical grounds and they all return similar values of the C-stat. For an easier comparison, we have summarised the results of these models in Table \ref{t-acc}. We shall also discuss the hyperaccretion model suggested by \citet{Kin01}, but since no spectral model is available for it, our discussion will be mainly qualitative. \subsection{The multicolour disc} \label{s-multicolour} The first model we consider is a multicolour disc (MCD) modified by the interstellar absorption. The two free parameters of the model are the effective temperature of the inner rim $T_{\rm in}$ and the apparent inner disc radius $R_{\rm in}$, related to the actual inner radius $R_{\rm disc}$ through the relation \citep{Kub98} \begin{equation} \label{e-corr} R_{\rm disc} = \xi \; f^2 \; R_{\rm in}, \end{equation} where $\xi\simeq 0.412$ is a correction factor that takes into account that the maximum temperature $T_{\rm in}$ does not occur exactly at the inner edge of the disc \citep{Kub98}. For typical parameters, the disc optical opacity is dominated by electron scattering. The electron scattering may also Comptonise the emerging spectrum, and this effect is taken into account by introducing the so--called hardening factor $f$, i.e.
the ratio $f\equiv T_{\rm col} /T_{\rm eff}$ between the colour temperature $T_{\rm col}$ and the effective temperature $T_{\rm eff}$. The value of $f$ is insensitive to the disc viscosity and the mass of the accretor, but is sensitive to the mass accretion rate. For a mass accretion rate close to the Eddington limit the value of $f$ lies in the range $f\simeq 1.7-2.0$ \citep{Shi95}. \citet{Kaw03} argues that if the accretion rate largely exceeds the Eddington limit (${\dot M}\, c^2/L_{\rm Edd}> 100$), the hardening factor is somewhat larger, in the range $f\simeq 2.3-6.5$. Introducing the radiation efficiency \begin{equation} \label{e-eff} \eta=L/{\dot M} c^2, \end{equation} this limit translates into \begin{equation} \label{e-kaw} L>100\, \eta\: L_{\rm Edd}. \end{equation} For a disc accreting on a Schwarzschild black hole $\eta\simeq 0.06$ \citep{Fra02}, so \citeauthor{Kaw03}'s hardening parameter applies if $L\gtrsim 6\, L_{\rm Edd}$. The normalisation of the model only involves geometrical parameters, so we link the normalisation of the three spectra to a common value, and leave the disc temperatures free to vary. Since the absorption columns inferred from each spectrum are consistent, we also link them to a common value. The interstellar absorption column is $n_H=1.8_{-0.6}^{+0.6}\times 10^{21}{~\rm cm}^{-2}$. The disc inner temperature is $T_{\rm in}=1.33_{-0.17}^{+0.22}{~\rm keV}$ for the first observation; $T_{\rm in}=1.21_{-0.16}^{+0.17}{~\rm keV}$ for the second observation and $T_{\rm in}=0.89_{-0.11}^{+0.14}{~\rm keV}$ for the third observation. 
The normalisation of the model is $K = 7.4_{-3.3}^{+5.2}\times 10^{-4}$, and is related to the apparent inner radius $R_{\rm in}$ through the expression \begin{equation} R_{\rm in} = 1.22\times 10^2 {~\rm km}\; \left(\frac{K}{10^{-4}}\right)^{1/2} \; \left(\frac{D}{122{~\:\rm Mpc}}\right) \; \bigl(\cos\theta\bigr)^{-1/2}, \end{equation} where $\theta$ is the inclination of the disc with respect to the line of sight ($\theta=0$ is face--on), and $D$ is the distance to the source. Inserting the appropriate value of $K$ we find \begin{equation} R_{\rm disc} \simeq 3.3_{-0.8}^{+1.0} \times 10^2 \; \xi\; f^2 / (\cos\theta)^{1/2}\; {~\rm km}. \end{equation} If the inner edge of the disc is located at the last stable orbit of a Schwarzschild black hole, this implies \begin{equation} \label{e-mbh} M_\bullet \simeq 61_{-12}^{+18} {~M_\odot}\; \left(\frac{f}{2.0}\right)^2 \; \left(\frac{\xi}{0.41}\right) \; \frac{1}{(\cos\theta)^{1/2}}. \end{equation} The resulting disc luminosities are $L_1=4.1\times 10^{40}{{~\rm erg}\; \rm s^{-1}}$, $L_2=2.7\times 10^{40}{{~\rm erg}\; \rm s^{-1}}$ and $L_3=8.0\times 10^{39}{{~\rm erg}\; \rm s^{-1}}$, for the first, second and third observation, respectively. All these values exceed the Eddington luminosity $L_{\rm Edd}\simeq 7.9 \times 10^{39}{{~\rm erg}\; \rm s^{-1}}$ for a black hole of~$M_\bullet\simeq 60{~M_\odot}$ by a factor $L/L_{\rm Edd} \lesssim 5$, consistent with our choice of the hardening parameter $f=2$. A long--standing problem of the application of MCD models to the ULXs is that they tend to overestimate the disc's inner temperature $T_{\rm in}$. Indeed, the luminosity of a MCD is \citep{Kub98} \begin{equation} \label{e-lbol} L_{\rm bol} = 4\,\pi\: \left(\frac{R_{\rm disc}}{\xi}\right)^2 \: \sigma_{\rm SB} \; \left(\frac{T_{\rm in}}{f}\right)^4 \end{equation} \citep{Mak00}. 
If the temperature is too high (for the same luminosity and the other parameters), the disc radius $R_{\rm disc}$ is underestimated, and so is the black hole's mass. A possible explanation for this effect is the presence of a hot corona embedding the disc \citep{Sto06}: a single MCD model applied to such a system would return spuriously high disc temperatures. We investigate how the presence of a hot corona would affect our estimate~\eqref{e-mbh} of the black hole mass. We describe N10 with a Comptonised disc emission, where the temperature of the Compton seed photons is set equal to the disc inner temperature. On account of our rather low statistics, the temperature of the hot corona is not well constrained, and it has been fixed to $k_B T_C=50{~\rm keV}$, while its optical depth has been left free to vary. The best--fitting value of the Compton thickness is $\tau\simeq 0.5_{-0.2}^{+0.4}$: this result strongly depends on the temperature $T_C$, with cooler coronae returning larger optical depths. The temperature of the disc, however, is not sensitive to the precise values of $T_C$ and $\tau$. As noted by \citet{Sto06}, a Compton--thick ($\tau>1$) corona obscures the inner region of the disc, and in this case the MCD parameters cannot provide reliable estimates of the black hole's mass (see also \citealp{Gla09}). The disc temperatures of the three observations are, respectively, $T_{\rm in}=0.53_{-0.12}^{+0.11}{~\rm keV}$, $T_{\rm in}=0.36_{-0.09}^{+0.16} {~\rm keV}$ and $T_{\rm in}=0.09_{-0.04}^{+0.05} {~\rm keV}$. 
The normalisation of the disc component is $K= 7.6_{-3.9}^{+10.6}\; \times 10^{-3}$ (in {\sc XSPEC} units), yielding \begin{equation} \label{e-mbh3} M_\bullet = 141_{-42}^{+78}{~M_\odot} \; \left(\frac{\xi}{0.41}\right) \; \left(\frac{f}{1.7}\right)^2. \end{equation} Using the above disc inner temperatures we infer the luminosity of the disc component to be $L_1=9.0\times 10^{39}{{~\rm erg}\; \rm s^{-1}}$ (first observation), $L_2=1.5\times 10^{39}{{~\rm erg}\; \rm s^{-1}}$ (second observation) and $L_3=5\times 10^{35}{{~\rm erg}\; \rm s^{-1}}$ (third observation). The ratio between $L_1$ and the Eddington luminosity of a black hole of $M_\bullet\simeq 141{~M_\odot}$ is~$\sim0.5$, which justifies our choice $f=1.7$ of the hardening parameter. As expected (see \citealp{Sto06}), a hot corona lowers our estimate of the disc's temperature and increases the black hole's mass with respect to a simple MCD model. \subsection{The Kerr disc} \label{s-kerr} The second model we consider is the spectrum emitted by a disc orbiting a maximally rotating Kerr black hole. We fix the distance of the Cartwheel to its known value, and the hardening parameter to~$f=1.7$: the quality of our fit is independent of this choice. The inclination of the disc with respect to the line of sight has been fixed to $\theta=0^\circ$; the value of $\theta$ affects the determination of $M_\bullet$, as we shall discuss at the end of this section. We also set the inner radius of the accretion disc to the last stable orbit of a maximally rotating Kerr BH ($\simeq 1.235\, G \,M_\bullet/c^2$), and the disc outer radius to its default value (the model is insensitive to this parameter). The only free parameters of the model (apart from the hydrogen column density) are the mass $M_\bullet$ of the central object and its mass accretion rate $\dot{M}$. In the fitting procedure we link all the parameters for the three observations except the mass accretion rates. 
The fit returns the absorption column $n_H=2.2_{-0.6}^{+0.6}\times 10^{21}{~\rm cm}^{-2}$, and the mass of the central object is \begin{equation} \label{e-mass} M_\bullet = 92.8_{-27.7}^{+32.4}{~M_\odot}. \end{equation} The luminosity of the source is $L_1=4.3\times 10^{40}{{~\rm erg}\; \rm s^{-1}}$ in the first observation, $L_2=2.9\times 10^{40}{{~\rm erg}\; \rm s^{-1}}$ in the second, and $L_3=8.5\times 10^{39}{{~\rm erg}\; \rm s^{-1}}$ in the last. The Eddington luminosity associated with the mass~\eqref{e-mass} is $L_{\rm Edd}=1.2 \times 10^{40}{{~\rm erg}\; \rm s^{-1}}$. These values must be checked for their consistency with the adopted hardening parameter. A maximally rotating Kerr black hole has an accretion efficiency $\eta\sim 0.4$ (\citealp{Fra02}, see Equation~\ref{e-eff}). \citeauthor{Kaw03}'s larger values of $f$ then apply if $L\gtrsim 40\, L_{\rm Edd}$. In all our observations, the Kerr disc model returns $L/L_{\rm Edd}\lesssim 3.5$, so the adopted hardening parameter is consistent with the inferred $M_\bullet$ and the relatively low mass accretion regime. Before ending this section we need to address the effect of the disc inclination $\theta$ on the estimate of the mass $M_\bullet$. The value of $\theta$ does not affect the quality of the fit, but the mass is sensitive to it, as shown in Figure~\ref{f-i_m} (see also \citealp{Hui08}). Low inclination angles imply lower $M_\bullet$, which (for $f=1.7$) attains a minimum value of~$\sim 77{~M_\odot}$ for $\theta=20^\circ$. If the disc is seen almost edge--on, on the other hand, $M_\bullet$ grows to $M_\bullet\simeq 10^3{~M_\odot}$. This trend of $M_{\bullet}$ with the viewing angle is also found in the multicolour discs considered in the previous section (in particular, see Equation~\ref{e-mbh}). To summarise, the Kerr disc model is consistent with a black hole of $M_\bullet\simeq 90{~M_\odot}$, similar (within the uncertainties) to that inferred from the Comptonised multicolour disc model. 
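The Eddington consistency check of the Kerr fit amounts to simple arithmetic, sketched below (not part of the original analysis); the coefficient $1.26\times 10^{38}$ erg s$^{-1}$ per solar mass is the standard electron-scattering Eddington luminosity for hydrogen.

```python
# Sketch: Eddington-ratio check of the Kerr-disc fit. For M ~ 92.8 Msun
# the fitted luminosities should stay well below ~40 L_Edd, the threshold
# above which the larger hardening factors of Kaw03 would apply (eta ~ 0.4).
L_EDD_COEFF = 1.26e38  # erg/s per solar mass (electron-scattering opacity)

def eddington(M_msun):
    return L_EDD_COEFF * M_msun

M = 92.8
L_obs = [4.3e40, 2.9e40, 8.5e39]   # erg/s, the three observations
ratios = [L / eddington(M) for L in L_obs]
print("%.1e" % eddington(M))       # ~ 1.2e40 erg/s
print(round(max(ratios), 1))       # ~ 3.7, far below the ~40 threshold
```

The maximum ratio of about $3.5$--$3.7$ (depending on rounding of $L_{\rm Edd}$) confirms the low accretion regime quoted in the text.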
Higher masses are not excluded, if the disc is highly tilted with respect to the line of sight. A higher hardening factor would also increase $M_\bullet$, but higher values of $f$ are not required by the inferred accretion regime. \subsection{Slim Discs} \label{s-slim} The multicolour disc model assumes that the disc radiates locally as a black body. If the accretion rate largely exceeds the Eddington limit, this hypothesis is not appropriate. In such an accretion regime the heat content of the disc is trapped and dragged to the black hole's horizon before it is radiated. For this reason, {\em slim discs} are radiatively inefficient, and their properties significantly differ from those of the standard thin discs (see e.g. \citealp{Abr88, Fra02}). In this section we investigate whether the data support the possibility that N10 is powered by a slim disc accreting above its Eddington limit. In {\sc XSPEC} two slim disc models are available: we adopted the model based on the calculation by \citet{Kaw03}, with the mass of the central object, the accretion rate and the viscosity~$\alpha$ as free parameters. The spectrum is calculated taking into account the local Comptonisation and the relativistic effects. \citeauthor{Kaw03}'s model has been applied to fit the {\sl XMM--Newton} spectra of some ULXs (see e.g. \citealp{Oka06, Vie06}), returning black hole masses of a few tens of~${~M_\odot}$ accreting at super--Eddington rates. In our fit we have linked the interstellar absorption column, the black hole mass and the $\alpha$~parameter between the observations. The mass accretion rate is independent for each observation. The fitting procedure is unable to constrain the black hole's mass. The confidence regions are open for $M_\bullet\gtrsim 80{~M_\odot}$ (Figure~\ref{f-kawmass}), and the model is consistent with any value of $M_\bullet$ above this limit. 
The absorption column is $n_H=3.6_{-0.9}^{+0.7}\times 10^{21}{~\rm cm}^{-2}$, while the viscosity parameter and the BH mass are $\alpha=0.56_{-0.37}$ and $M_\bullet= 495_{-340}{~M_\odot}$ respectively. The mass accretion rates for the three observations are ${\dot M}_1=39.4_{-23}$, ${\dot M}_2=21.4_{-12.0}^{+523}$, ${\dot M}_3=5.1_{-1.2}^{+79}$, all in units of ${\dot M}_{\rm Edd}\equiv L_{\rm Edd}/c^2$. Assuming the conversion factor ${\dot M}_{\rm Edd}=1.3\times 10^{19}~{\rm g}{~\rm s}^{-1} (M_\bullet/100{~M_\odot})$ used by \citet{Kaw03}, the best--fitting accretion rates are $\dot{M}_1=2.5\times 10^{21}~{\rm g}{~\rm s}^{-1}$, $\dot{M}_2=1.4\times 10^{21}~{\rm g}{~\rm s}^{-1}$, $\dot{M}_3=3.3\times 10^{20}~{\rm g}{~\rm s}^{-1}$. It is not possible to set upper limits on the parameters $\alpha$, $M_\bullet$ and $\dot{M}_1$, since the error calculation hits the limits of their tabulated values ($\alpha=1$, $M_\bullet=1000{~M_\odot}$ and $\dot{M}=1000~{\dot M}_{\rm Edd}$, respectively). One {\em caveat} is in order about the consistency of the application of slim disc models to N10. The radial temperature profile of slim discs varies as $T\propto R^{-1/2}$, instead of the $T\propto R^{-3/4}$ characteristic of thin multicolour discs \citep{Wat00}. The radial dependence of $T_{\rm in}$ in N10 may be checked with a multicolour disc ({\tt diskpbb} in {\sc XSPEC}) where the temperature index $p$ (defined by $T(R)\propto R^{-p}$) is a free parameter. The index $p\simeq 0.5$ has been found in some ULX spectra successfully fitted with a slim disc model \citep{Oka06, Vie06}. In our case the model {\tt wabs*diskpbb} fits comparably well to the others; since its best--fitting parameters are very similar to those derived from a standard multicolour disc, we do not present them. Our best--fitting value is $p\simeq 0.74_{-0.19}^{+3.66}$, close to the standard value~$0.75$ but slightly inconsistent with the slim disc value $p=0.5$. 
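The conversion of the fitted accretion rates from Eddington units to physical units can be sketched as follows (illustrative only; the conversion factor is the one adopted from \citealp{Kaw03}).

```python
# Sketch: convert the slim-disc accretion rates from Eddington units to g/s,
# using Mdot_Edd = 1.3e19 g/s * (M / 100 Msun) as in Kaw03.
def mdot_edd_gs(M_msun):
    return 1.3e19 * (M_msun / 100.0)

M = 495.0                       # best-fitting black-hole mass (Msun)
mdot_units = [39.4, 21.4, 5.1]  # fitted rates, in units of Mdot_Edd
mdot_gs = [m * mdot_edd_gs(M) for m in mdot_units]
print(["%.1e" % m for m in mdot_gs])  # ['2.5e+21', '1.4e+21', '3.3e+20'] g/s
```

The output reproduces the three physical accretion rates quoted in the text.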
\subsection{The ``Hyperaccretion'' model} \label{s-hyper} The last accretion model we consider for N10 is the so--called ``hyperaccretion'' scenario \citep{Kin01, Kin08, Kin09}. This model explains the high luminosity of the ULXs with a combination of a high accretion rate (close to the Eddington limit) on a stellar mass black hole, and a mechanical beaming due to the accretion stream itself. The apparent X--ray luminosity (Equation~(10) in \citealp{Kin09}) of the source is \begin{equation} \label{e-hya} L\simeq 2.2 \times 10^{36} {{~\rm erg}\; \rm s^{-1}} \; \left(\frac{M_\bullet}{{~M_\odot}}\right) \; \left(\frac{\dot{M}}{\dot{M}_{\rm Edd}}\right)^2 \; \left[1 + \ln\left(\frac{\dot{M}}{\dot{M}_{\rm Edd}}\right)\right] \end{equation} where $\dot{M}$ is the mass transfer rate from the donor star and $ \dot{M}_{\rm Edd} = L_{\rm Edd} / \eta \, c^2 $ is the Eddington--limited mass transfer rate, dependent on the efficiency $\eta$ of the X--ray production in the accretion process, defined by Equation~\eqref{e-eff}. One of the most relevant features of this model is that the beaming factor scales as ${\dot M}^{-2}$. This causes the bolometric luminosity to scale with the disc's temperature as \begin{equation} \label{e-kingcorr} L\propto T_{\rm in}^{-4}, \end{equation} opposite to the standard correlation $L\propto T_{\rm in}^4$. The prediction~\eqref{e-kingcorr} has been found consistent with the $L-T_{\rm in}$ relation of the soft excess observed in some ULXs \citep{Fen07, Kaj09, Gla09}. We assume that the count rate of N10 correlates with its intrinsic luminosity in all our observations. A strong spectral change could, in principle, alter (or even reverse) this correlation, but such a variation of the spectrum of N10 is not supported by our data. Under this working hypothesis, we measure a positive correlation between the inner temperature and the luminosity, at odds with the prediction of the hyperaccretion model. 
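To illustrate Equation~\eqref{e-hya} numerically: the pairing of a $10{~M_\odot}$ black hole with a mass transfer rate of $\sim 21\,\dot{M}_{\rm Edd}$ below is our own illustrative choice, not a fit result, chosen to show that a stellar-mass accretor with modest super-Eddington transfer can reach the observed $L_1$.

```python
# Sketch: apparent luminosity in the hyperaccretion scenario, Eq. (e-hya).
# M_msun and mdot (in units of Mdot_Edd) are illustrative inputs.
import math

def L_apparent(M_msun, mdot):
    """Apparent X-ray luminosity in erg/s; mdot in units of Mdot_Edd."""
    return 2.2e36 * M_msun * mdot**2 * (1.0 + math.log(mdot))

print("%.1e" % L_apparent(10.0, 21.0))  # ~ 3.9e40 erg/s, comparable to L1
```

Because of the $\dot{M}^2$ scaling, the required transfer rate grows only slowly as the assumed black hole mass decreases.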
The hyperaccretion scenario is probably not the correct explanation for the nature of N10. \section{The Supernova Model} \label{s-supernova} As discussed in the introduction, accretion power is not the only viable explanation for the ULXs' engine. Alternative models are possible, and young SNe interacting with their surroundings may explain an appreciable fraction of the known ULXs \citep{Swa04}. In this section we investigate the SN model for N10, aiming at deriving the spectral parameters. The X--ray emission from supernovae may start up to about one year after the explosion, and it is powered by the interaction of the ejecta with the circumstellar medium (CSM, see e.g. \citealp{Imm03}). Shock fronts form as the ejecta expand through the CSM. The leading (or ``forward'') shock heats the CSM up to $10^9-10^{10}~\rm K$, and gradually an inner (or ``reverse'') shock wave starts to propagate. The density behind the reverse shock is $5-10$ times higher than that behind the forward shock, and the temperature is lower, around $10^7-10^8~\rm K$ \citep{Che94}. For this reason, the soft X--ray emission ($\lesssim 5{~\rm keV}$) is dominated by the reverse shock, and is well modelled by thermal models, while the hard emission (say, above $10{~\rm keV}$) is dominated by the forward shock. The luminosities of X--ray supernovae lie in the range $10^{37}-10^{41}{{~\rm erg}\; \rm s^{-1}}$, with the so--called SN~IIn supernovae at the bright end of the interval. These are core--collapse supernovae, owing the ``n'' in their name to the presence of several narrow emission lines in their optical spectrum \citep{Tur03}. These lines are most likely due to the interaction of the ejecta with a dense, slow wind emitted by the SN progenitor. Other supernovae explode in thinner environments, resulting in dimmer X--ray luminosities. For this reason, if N10 is actually a young SN, it is most likely a SN~IIn. 
We fit the spectra of N10 with the thermal models {\tt apec} and {\tt nei}, corrected for interstellar absorption. The {\tt apec} model assumes that the emitting plasma is in full collisional equilibrium: there is a dynamic balance between the collisional ionisation and the electronic recombination. The {\tt nei} model does not assume collisional equilibrium, allowing a mismatch between the collisional ionisation and the (longer) recombination process. This mismatch may prevail in shocked plasmas, where the electrons have been energised but have not yet settled to collisional equilibrium with the ions (see e.g. \citealp{Dop03}). The parameters of the {\tt nei} model are very similar to those derived for the {\tt apec} model (the fit is quite insensitive to the ionisation age), so we present only the {\tt apec} results. In our model ({\tt wabs*apec}) the absorption column and the metal abundances of the three observations are linked together. Since there is no compelling statistical evidence of a variation of the temperature, we also link together the temperatures of the three observations. The results of this model are presented in Table~\ref{t-apec}. The fit statistic is $C/{\rm dof}=95.9/100$, and the values of the best--fitting parameters are $n_H=2.8_{-0.6}^{+0.8}\times 10^{21} {~\rm cm}^{-2}$, $k_B T=5.1_{-1.6}^{+3.1} {~\rm keV}$, $N_1 = 4.3_{-1.2}^{+0.9} \times 10^{-5}$ (normalisation of the first observation), $N_2 = 3.1_{-0.9}^{+0.7} \times 10^{-5}$ (normalisation of the second observation), and $N_3 = 1.0_{-0.2}^{+0.3} \times 10^{-5}$ (normalisation of the third observation). The units of these quantities are (for sources at low redshift) $10^{-14}\; \int dV n_e\; n_H / 4\,\pi\, D^2$, where $D$ is the distance to the source, $n_e$ and $n_H$ are the electron and proton densities of the emitting plasma, and the integral (known as the {\em emission integral}) is extended over the volume occupied by the source. 
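Inverting the normalisation for the emission integral, and checking it against a simple emitting volume, can be sketched as follows. The uniform pure-hydrogen sphere (with $n_e=n_H$) and the radius $R=3\times 10^{15}{~\rm cm}$ are illustrative assumptions, not fitted quantities.

```python
# Sketch: recover the emission integral EI = int(n_e n_H dV) from the apec
# normalisation N = 1e-14 * EI / (4 pi D^2), then infer the mass density of
# an assumed uniform hydrogen sphere (n_e = n_H) of radius R reproducing it.
import math

MPC_CM = 3.086e24     # cm per Mpc
M_P = 1.67e-24        # g, proton mass

def emission_integral(N, D_mpc=122.0):
    D = D_mpc * MPC_CM
    return N * 4.0 * math.pi * D**2 / 1e-14   # cm^-3

def density_for(N, R_cm, D_mpc=122.0):
    V = 4.0 / 3.0 * math.pi * R_cm**3
    n = math.sqrt(emission_integral(N, D_mpc) / V)
    return n * M_P   # g cm^-3

print("%.1e" % emission_integral(4.3e-5))       # ~ 7.7e63 cm^-3
print("%.1e" % density_for(4.3e-5, R_cm=3e15))  # ~ 4.3e-16 g cm^-3
```

For sizes of a few $10^{15}$ cm the implied density falls in the $10^{-15}$--$10^{-16}$ g cm$^{-3}$ range discussed below.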
The emission integrals are consistent with the size (a few~$10^{15}{~\rm cm}$) and the density ($\sim10^{-15}-10^{-16}~\rm g{~\rm cm}^{-3}$) expected from the reverse shock of a young supernova remnant a few years after the explosion (see e.g. \citealp{Che03}). The unabsorbed light curve of N10, calculated according to the power law model, is plotted in Figure~\ref{f-snlc} (see also \citealp{Wol06}). The evolution of the observed flux cannot be fitted by a power law, as observed in other X--ray supernovae (1986J, \citealp{Hou98}; 1988Z, \citealp{Sch06}; 1998S, \citealp{Poo02}). This is not an argument against the SN nature of N10, though, since a simple power--law decline of the X--ray luminosity has not been reported in other confirmed SN~IIn. SN~1995N, for instance, shows a complex light curve, possibly resulting from the interaction of the ejecta with a complex circumstellar medium \citep{Zam05}. The sudden decrease of the X--ray luminosity of N10 in 2008 might show that the ejecta have passed the dense region of the pre--supernova wind, and have finally reached a low density outer environment. SN~IIn are usually found in metal--rich environments (1986J, \citealp{Hou98}; 1998S, \citealp{Poo02}). Indeed, high metal abundances favour the blowing of the strong, line--driven pre--supernova winds required to set up the dense circumstellar environment that powers a stronger X--ray emission. The metallicity of the Cartwheel, as measured from the HII regions, is low \citep{Fos77}; the X--ray measurement is poorly constrained, and consistent with zero. Therefore the SN progenitor had to be quite massive to blow dense enough winds without the aid of a high metallicity. \section{Discussion and Summary} \label{s-sum} In this paper we considered the spectral modelling of the ULX N10, located in the Cartwheel galaxy, in order to assess the nature of this source. 
Although the source is intrinsically very bright, the spectra have few counts on account of the large distance ($122{~\:\rm Mpc}$). For this reason, we are unable to reject (or confirm) a model on purely statistical grounds. We first discussed several accretion models (multicolour disc around a Schwarzschild or Kerr black hole, slim disc, hyperaccretion). All models indicate a black hole of~$\sim 100{~M_\odot}$, at the high end of the mass distribution of black holes generated by stellar evolution, possibly in a metal poor environment. This result is consistent with the conclusions of~\citet{Map09} and~\citet{Zam09} that stellar evolution in a metal depleted environment may produce black holes of $30-90{~M_\odot}$. The theoretical studies of the evolution of very massive stars have been rekindled by the discovery of the ULXs, with the result that black holes of $\sim 100{~M_\odot}$ are more common than earlier works suggested. Isolated solar--abundance stars may have masses up to $150{~M_\odot}$, but in dense environments they may coalesce to form stars as massive as $1000{~M_\odot}$ \citep{Yun08}. Recent studies on the mass loss from hot, massive stars (\citealp{Pul08}, and references therein) have shown that the occurrence of weak and/or clumpy winds may reduce the mass loss by up to a factor of $10$, thus allowing the star to retain a higher fraction of its original mass at the end of its life. Lower metallicities also entail weaker wind mass losses \citep{Heg03}. Zero metallicity (Population~III) stars may be significantly more massive than solar--abundance stars \citep{Ohk06}. All these factors determine the mass of the final black hole, which may range from $\sim 70{~M_\odot}$ for a solar--abundance progenitor \citep{Yun08}, up to $500{~M_\odot}$ for low metallicity stars \citep{Ohk06}. 
Observations indicate that the low metallicity scenario is more likely for N10: the Cartwheel galaxy is metal--poor, and although the available measurements \citep{Fos77} are not at the exact location of N10, they should nevertheless be representative of the average abundance in the ring. We therefore conclude that the interpretation of N10 as a black hole of $\sim 100{~M_\odot}$ suggested by the X--ray data is consistent with the theoretical predictions of the evolution of massive stars in a dense environment. In summary, if ordinary stellar evolution is the correct scenario to form the black hole, N10 could be an extreme High Mass X--ray Binary (HMXB). The mass accretion rate $\dot M$ on the black hole may be inferred from the luminosity $L$ via Equation~\eqref{e-eff}, and is of the order of ${\dot M}\simeq10^{-6}~{~M_\odot}{~\rm yr}^{-1}$. This value is comparable with the mass loss rate of massive (i.e., a few tens of solar masses; \citealp{Fra02}) donor stars on a thermal time scale. The accretion flow from the donor star to the BH most likely occurs through Roche lobe overflow; the BH capture of a wind blown by the companion is less favoured, since the small capture radius would require an implausibly strong mass loss from the companion. Can the observed decay of the luminosity of the source be explained in the framework of the accretion model? The longest characteristic time of an accretion disc is the so--called ``viscous time'' $t_{\rm visc}$, i.e. the characteristic time taken by the disc to adapt to new conditions. For a standard $\alpha$--disc orbiting a black hole of $100{~M_\odot}$ with an accretion rate ${\dot M}\simeq10^{-6}~{~M_\odot}{~\rm yr}^{-1}$, $t_{\rm visc}\simeq 10^2-10^3{~\rm s}$ \citep{Fra02}, much shorter than the time scale of the variability of N10 (a few years). Therefore, the observed variability cannot be due to a disc instability, which would affect the disc and the luminosity of the source over a time scale $t_{\rm visc}$. 
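The accretion-rate estimate above follows directly from Equation~\eqref{e-eff}; the sketch below (not part of the original analysis) shows how the result depends on the assumed efficiency, using the Kerr value $\eta\sim 0.4$ and the Schwarzschild value $\eta\simeq 0.06$ quoted earlier.

```python
# Sketch: mass-accretion rate implied by the luminosity, Mdot = L / (eta c^2),
# converted to solar masses per year. eta is the radiative efficiency.
c = 2.998e10       # cm/s
M_SUN = 1.989e33   # g
YR = 3.156e7       # s

def mdot_msun_yr(L_erg_s, eta):
    return L_erg_s / (eta * c**2) / M_SUN * YR

print("%.1e" % mdot_msun_yr(4.3e40, 0.4))   # ~ 1.9e-6 Msun/yr (Kerr eta)
print("%.1e" % mdot_msun_yr(4.3e40, 0.06))  # ~ 1.3e-5 Msun/yr (Schwarzschild)
```

The Kerr efficiency reproduces the $\sim 10^{-6}{~M_\odot}{~\rm yr}^{-1}$ order of magnitude quoted in the text; a lower efficiency raises the required rate roughly in proportion.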
The simplest alternative is that the variability is due to a change of the mass transfer rate from the donor star. This may be due to an intrinsic decay of the mass loss from the donor, but also to other effects occurring at the inner Lagrangian point, since the instantaneous mass flow there is very sensitive to the relative sizes of the donor star and the Roche lobe \citep{Fra02}. New observations of this interesting source would help to settle the question. \bigskip We also explored the possibility that N10 is a young supernova strongly interacting with the circumstellar medium. Both the light curve and the best--fitting spectral parameters are consistent with this view. A new flux brightening of the source in the future would rule out this hypothesis. \section*{Acknowledgments} We acknowledge financial support from INAF through grant PRIN-2007-26. We thank Luca Zampieri for a careful reading of the manuscript and for his useful observations. We also thank an anonymous referee for her/his comments that helped to improve the paper. \clearpage \bibliographystyle{mn2e}
1003.4720
\section{Introduction} \label{sec:intro} The physics of electroweak symmetry breaking (EWSB) will soon be probed directly at the Large Hadron Collider (LHC). One logical possibility is that the sector responsible for electroweak symmetry breaking will involve new, nonperturbative dynamics. Historically, technicolor models have represented an attempt at constructing viable theories of this type~\cite{technicolor}. Conventional technicolor models, however, suffer from a number of well-known problems. As originally proposed, the technicolor sector was assumed to be a scaled-up version of QCD, leading to estimates for the $S$ parameter that are unacceptably large~\cite{PT}. In addition, an extended technicolor (ETC) sector must be added to generate the operators needed to account for Standard Model fermion masses~\cite{etc}. In many ETC models, it is impossible to account for a heavy top quark (which requires the ETC scale to be low) and suppress other ETC operators that contribute to flavor-changing-neutral-current (FCNC) processes (which requires the ETC scale to be high). Viable and elegant ETC models have been few and far between. Developments over the last decade in the physics of higher-dimensional and conformal field theories, however, have led to new possibilities in technicolor model building~\cite{hong,sanz,cet1,acgr,cesh,hirayama,haba,piai,dandk,mint,kit,round,belitsky,supertc,luty,also}. For example, the magnitude of the $S$ parameter in QCD-like technicolor theories suggests one should not exclusively study theories that are exactly like QCD (as, for example, in Refs.~\cite{sanz} or \cite{cet1}). A decade or so ago, this would have been a fruitless effort. Now, the AdS/CFT correspondence~\cite{adscft} provides a means of constructing a perturbative, five-dimensional (5D) theory that is dual to a strongly coupled technicolor theory localized on a four-dimensional (4D) boundary~\cite{hong,sanz,cet1,acgr,cesh,hirayama,haba,piai,dandk,mint,kit,round,belitsky}. 
For some values of the parameters that define the 5D theory, the dual theory can model a scaled-up version of QCD. However, for other parameter choices it does not. In either case, observables can be computed reliably in the 5D theory, which we can think of as defining its strongly coupled dual. The freedom to deviate from the QCD-like limit only presumes the validity of a gauge/gravity correspondence. The evidence for this is not insignificant, and includes holographic models of QCD phenomenology that agree remarkably well with the low-energy data~\cite{adsqcd,eandw}. The problem with fermion mass generation in the conventional ETC framework, on the other hand, may suggest something about the form of the relevant low-energy effective theory. It was observed long ago that a techniquark bound state with the same quantum numbers as a Standard Model Higgs doublet can form in ETC models in which the ETC gauge coupling becomes strong~\cite{setc}; this bound state has Yukawa couplings to the Standard Model fermions and may develop a vacuum expectation value, producing fermion masses. The low-energy effective theory, taken by itself, has no problems with FCNC effects, since these originate from scalar-exchange diagrams that are no larger than in conventional two-Higgs doublet models. A significant number of phenomenological studies on such ``bosonic technicolor" scenarios were motivated by the simplicity of this low-energy effective theory~\cite{bt1,bt2,bt3, bt4,bt5,bt6,bt7,bt8,bt9}. While bosonic technicolor can arise from a (fine-tuned) strongly coupled ETC model, the low-energy effective theory is by no means linked uniquely to that ultraviolet completion. For example, the same effective theory can arise in a warped, 5D theory with a Higgs field localized near the Planck brane and symmetry-breaking boundary conditions on the bulk gauge fields~\cite{acgr}. In this setting, the presence of a scalar in the spectrum of the theory seems far from scandalous. 
The value of working with the low-energy effective description is that one can extract robust, low-energy predictions of the theory without being hindered unnecessarily by one's ignorance of the physics that has decoupled in the ultraviolet. It is such robust predictions of the low-energy effective theory that are of interest to us in this paper. Using the holographic approach to define our strongly coupled sector, and the associated freedom to deviate from the limit in which this sector is like QCD, we show that the parameter space of the theory is nonetheless significantly constrained. Combining the bounds on the $S$ parameter (evaluated holographically), partial-wave unitarity in longitudinal $W$ boson scattering (with the technirho couplings evaluated holographically) and the bound on the mass of the light Higgs-like scalar (with a relevant chiral Lagrangian parameter evaluated holographically), we find that there is a relatively narrow region of parameter space in which the model is currently viable. In deforming the theory away from its QCD-like limit, we focus primarily on varying the ratio of the chiral symmetry breaking scale to the confinement scale, as well as the amount of explicit chiral symmetry breaking originating from the current techniquark masses. In addition, bosonic technicolor models allow one to vary the symmetry breaking scale associated with the strongly interacting sector, while holding the electroweak scale fixed. Within the allowed parameter region, the ranges of observable quantities are substantially restricted, suggesting the qualitative features of the model that may be relevant at the LHC. The new results presented here suggest that the simplest version of holographic bosonic technicolor may be sufficiently constrained that upcoming collider data could soon render debates on the possible origins of the effective theory largely irrelevant. Our paper is organized as follows. 
In the next section, we review the relevant low-energy effective theory. In Sections~III and IV, we discuss the holographic calculations of the observable quantities that we use to constrain the parameter space of the theory. In Section~V, we present our numerical results and in Section~VI we summarize our conclusions. \section{The Model} \label{sec:two} The gauge group of the model is $G_{\rm TC} \times \textrm{SU}(3)_C \times \textrm{SU}(2)_W \times \textrm{U}(1)_Y$, where $G_{\rm TC}$ represents the technicolor group. We will assume that $G_{TC}$ is asymptotically free and confining. We assume two flavors of technifermions, $p$ and $m$, that transform in the $N$-dimensional representation of $G_{\rm TC}$. In addition, these fields form a left-handed $\textrm{SU}(2)_W$ doublet and two right-handed singlets, \begin{equation} \Upsilon_L \equiv \left( \begin{array}{c} p \\ m \end{array} \right)_L, \,\,\,\,\, p_R,\,\,\,\,\, m_R, \end{equation} with hypercharges $Y(\Upsilon_L)=0$, $Y(p_R)=1/2$, and $Y(m_R)=-1/2$. With these assignments, the technicolor sector is free of gauge anomalies. With $N$ even, the SU(2) Witten anomaly is also absent. The technicolor sector has a global $\textrm{SU}(2)_L \times \textrm{SU}(2)_R$ symmetry that is spontaneously broken when the technifermions form a condensate \begin{equation} \label{eq:condensate} \left< \bar pp + \bar mm \right> = \sigma_0 \,\,\,. \end{equation} The electroweak gauge group of the Standard Model is a subgroup of the chiral symmetry; $\textrm{SU}(2)_W$ is isomorphic to $\textrm{SU}(2)_L$, while $\textrm{U}(1)_Y$ is identified with the third generator of $\textrm{SU}(2)_R$. The condensate breaks $\textrm{SU}(2)_W \times \textrm{U}(1)_Y$ to $\textrm{U}(1)_{\rm EM}$ and generates $W$ and $Z$ masses. However, additional physics is required to communicate this symmetry breaking to the Standard Model fermions. 
Bosonic technicolor models utilize the simplest possibility, a scalar field $\phi$ that transforms as an $\textrm{SU}(2)_W$ doublet with hypercharge $Y(\phi) = 1/2$. The scalar has Yukawa couplings to both the technifermions, \begin{equation}\label{eq:techniyuk} \mathcal L_{\phi T} = -\bar\Upsilon_L \tilde\phi \, h_+ p_R - \bar\Upsilon_L\phi \, h_- m_R + {\rm h.c.}, \end{equation} and to the ordinary fermions, \begin{equation}\label{eq:smfyuk} \mathcal L_{\phi f} = -\bar L_L \phi h_l E_R - \bar Q_L \tilde\phi h_U U_R - \bar Q_L \phi h_D D_R + {\rm h.c.}, \end{equation} where $\tilde\phi = i \sigma^2 \phi^*$. Unlike the Standard Model Higgs doublet, the $\phi$ field is assumed to have a {\em positive} squared mass. When the technifermions condense, Eq.~(\ref{eq:techniyuk}) produces a $\phi$ tadpole term in the scalar potential, and $\phi$ develops a vacuum expectation value, as we will see in a more convenient parameterization below. Standard Model fermion masses then follow from the Yukawa couplings in Eq.~(\ref{eq:smfyuk}). To be more explicit, we use the conventional nonlinear representation of the Goldstone bosons to construct the electroweak chiral Lagrangian. We define \begin{equation}\label{eq:sigdef} \Sigma = \exp(2 i \Pi/f), \,\,\,\,\, \Pi = \left( \begin{array}{cc} \pi^0/2 & \pi^+/\sqrt{2} \\ \pi^-/\sqrt{2} & -\pi^0/2 \end{array}\right) \, , \end{equation} where $\Pi$ represents an isotriplet of technipions, and $f$ is their decay constant. The $\Sigma$ field transforms under $\textrm{SU}(2)_L \times \textrm{SU}(2)_R$ as \begin{equation} \Sigma \rightarrow L\,\Sigma\, R^\dagger \,, \end{equation} which dictates the form of the pion interactions. To include the scalar doublet consistently in the effective theory, it is convenient to use the matrix form \begin{equation} \Phi = \left( \begin{array}{cc} \overline{\phi^0} & \phi^+ \\ -\phi^- & \phi^0 \end{array} \right). 
\end{equation} The technifermion Yukawa couplings can be re-expressed as \begin{equation} \overline{\Upsilon}_L \left( \begin{array}{cc} \overline{\phi^0} & \phi^+ \\ -\phi^- & \phi^0 \end{array} \right) \left( \begin{array}{cc} h_+ & 0 \\ 0 & h_- \end{array} \right) \Upsilon_R \equiv \overline{\Upsilon}_L \Phi H \Upsilon_R. \end{equation} Since the underlying theory would be invariant if the combination $\Phi H$ transformed as \begin{equation} (\Phi H) \rightarrow L \, (\Phi H)\, R^\dagger, \end{equation} one may correctly include this combination in the effective theory by assuming it transforms in this way. The lowest-order term in the electroweak chiral Lagrangian that involves $\Phi H$ is \begin{equation} \label{eq:PhiHmixing} \mathcal L_H = c_1 4\pi f^3 \textrm{Tr}(\Phi H \Sigma^\dagger) + h.c. \; , \end{equation} where $c_1$ is an unknown, dimensionless coefficient; one expects $c_1$ to be of order one by naive dimensional analysis~\cite{Manohar:1983md} in a QCD-like theory. Henceforth, we assume that $h_+=h_-\equiv h$, to simplify the parameter space of the model. It is convenient to re-express the $\Phi$ field using a nonlinear field redefinition, similar to Eq.~(\ref{eq:sigdef}). Expanding about the true vacuum, \begin{equation} \label{eq:pisigma} \Phi = \frac{\sigma+f'}{\sqrt{2}}\Sigma', \,\,\,\,\,\Sigma' = \exp(2 i \Pi'/f')\,, \end{equation} where $f'$ is the vev of $\phi$ and $\Pi'$ represents its isotriplet components. The kinetic terms for the $\Phi$ and $\Sigma$ fields can be written \begin{equation} \label{eq:chirallagrangian} \mathcal L_{KE} = \frac{1}{2}\partial_\mu\sigma\partial^\mu\sigma +\frac{f^2}{4}\textrm{Tr}(D_\mu\Sigma^\dagger D^\mu\Sigma) +\frac{(\sigma+f')^2}{4}\textrm{Tr}(D_\mu\Sigma'^\dagger D^\mu\Sigma'), \end{equation} where the covariant derivative is given by \begin{equation} D^\mu\Sigma = \partial^\mu\Sigma-igW_a^\mu\frac{\tau^a}{2}\Sigma+ig'B^\mu\Sigma\frac{\tau^3}{2}. 
\end{equation} In the expansion of Eq.~(\ref{eq:chirallagrangian}), there are quadratic terms that mix the gauge fields with derivatives of a specific linear combination of the pion fields: \begin{equation} \label{eq:absorbedmixing} \pi_a = \frac{f\,\Pi+f'\,\Pi'}{\sqrt{f^2+f'^2}}. \end{equation} The mixing indicates that the components of $\pi_a$ are unphysical and can be gauged away. On the other hand, the orthogonal linear combination, \begin{equation} \label{eq:physicalmixing} \pi_p = \frac{-f'\,\Pi+f\,\Pi'}{\sqrt{f^2+f'^2}}\, , \end{equation} represents physical states in the low-energy theory. The physical pion mass is determined from Eq.~(\ref{eq:PhiHmixing}): \begin{equation} \label{eq:mpi} m_\pi^2 = 8 \sqrt{2} \pi c_1 h \frac{f}{f'} v^2 \,. \end{equation} In unitary gauge, the remaining quadratic terms give the masses of $W$ and $Z$ bosons, \begin{equation} m_W^2 = \frac{1}{4}g^2v^2,\,\,\,\,\,\,\,\,\,\, m_Z^2=\frac{1}{4}(g^2+g'^2)v^2, \end{equation} where $v$ represents the electroweak scale \begin{equation} \label{eq:vff} v \equiv \sqrt{f^2+f'^2} = 246 \textrm{ GeV}. \end{equation} In the absence of a technicolor sector, with $f'=v$, the $\sigma$ field corresponds to the Higgs boson of the Standard Model. Away from this limit, the $\sigma$ field is similar to a Standard Model Higgs boson, but with different couplings. Expanding the third term of Eq.~(\ref{eq:chirallagrangian}), we find that the coupling between $\sigma$ and the gauge bosons is given by \begin{equation}\label{eq:siggb} \mathcal L_{\sigma WZ} = 2\frac{f'}{v}\frac{m_W^2}{v} \sigma W^{+\mu}W_\mu^- +\frac{f'}{v} \frac{m_Z^2}{v} \sigma Z^\mu Z_\mu \, , \end{equation} which is reduced by a factor of $f'/v$ compared to the result in the Standard Model. 
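As a sanity check on these mass relations, the short sketch below evaluates $m_W$ and $m_Z$ from $v = 246$~GeV. The numerical inputs for $\alpha_{EM}$ and $\sin^2\theta_W$ are illustrative assumptions, not outputs of the model; note that any split of $v^2$ into $f^2 + f'^2$ leaves the gauge-boson masses unchanged, while the $\sigma$ couplings scale with $f'/v$.

```python
import math

# Hedged numeric check: m_W = g v / 2 and m_Z = sqrt(g^2 + g'^2) v / 2
# should land near the physical gauge-boson masses for v = 246 GeV.
v = 246.0                     # GeV, from f^2 + f'^2 = v^2
alpha = 1.0 / 128.0           # assumed alpha_EM near the weak scale
s2 = 0.231                    # assumed sin^2(theta_W)
e = math.sqrt(4.0 * math.pi * alpha)
g = e / math.sqrt(s2)         # SU(2)_W coupling
gp = e / math.sqrt(1.0 - s2)  # hypercharge coupling

m_W = 0.5 * g * v
m_Z = 0.5 * math.sqrt(g**2 + gp**2) * v
print(m_W, m_Z)  # roughly 80 GeV and 91 GeV
```

The check also confirms the custodial relation $m_Z = m_W/c_\theta$ built into the chiral Lagrangian at this order.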
The couplings of the $\Phi$ field to the quarks are given by \begin{equation} \label{eq:sigmaquark} \mathcal L_{\Phi \bar q q}= - \bar\psi_L \Phi \left(\begin{array}{cc} h_U&0\\ 0& V_{CKM}h_D \end{array}\right) \psi_R + \textrm{h.c.} \ , \end{equation} where $\psi_L = (U_L, V_{CKM}D_L)$, $\psi_R = (U_R, D_R)$, $h_U = \textrm{diag}(h_u,h_c,h_t)$, and $h_D = \textrm{diag}(h_d,h_s,h_b)$. Using Eq.~(\ref{eq:pisigma}), this may be written \begin{equation} \mathcal L_{\Phi \bar q q} = - \frac{\sigma + f'}{\sqrt{2}} \bar\psi_L \Sigma' \left(\begin{array}{cc} h_U&0\\ 0& V_{CKM} h_D \end{array}\right) \psi_R + \textrm{h.c.} \end{equation} Taking into account the leptons, the coupling of the $\sigma$ field to fermions is given by \begin{equation}\label{eq:sff} \mathcal L_{\sigma \bar f f} = - \sum_{\textrm{fermions}} \frac{v}{f'} \frac{m_f}{v} \sigma \bar f f \, . \end{equation} The coupling in Eq.~(\ref{eq:sff}) is larger than the corresponding result in the Standard Model by a factor of $v/f'$; this enhancement corresponds to the larger Yukawa couplings that are required when electroweak symmetry breaking comes mostly from the strongly coupled sector. \section{Holographic Calculations} \label{sec:three} We model the technicolor sector using the AdS/CFT correspondence~\cite{adscft}, which allows us to numerically evaluate the otherwise undetermined coefficients of the electroweak chiral Lagrangian, such as the parameter $c_1$ of Eq.~(\ref{eq:PhiHmixing}). The AdS/CFT correspondence conjectures a duality between a 5D theory in anti-de Sitter (AdS) space and a 4D conformal field theory (CFT) living on its boundary. For theories like QCD that are confining, the metric is a slice of AdS space: \begin{equation} ds^2 = \frac{1}{z^2}\left(-dz^2+dx^\mu dx_\mu\right), \ \epsilon \leq z \leq z_m. 
\end{equation} The position in the fifth dimension $z$ corresponds to the energy scale in the 4D theory; branes at $z=\epsilon$ and $z_m$ correspond to the ultraviolet (UV) and infrared (IR) cutoffs of the dual theory. The AdS/CFT correspondence dictates that operators in the boundary theory correspond to bulk fields in the 5D theory. To be more precise, given an operator $\mathcal O$ with the source $\phi_0(x)$, the generating functional in the 4D quantum field theory, $W_{4D}$, is given by the classical action of the 5D theory written in terms of the boundary value of the corresponding bulk field, $\phi(x,z)$: \begin{equation} \label{eq:correspondence} W_{4D}[\phi_0(x)] = S_{5D}^{\rm class} |_ {\phi(x,\epsilon) = \phi_0(x)}. \end{equation} In addition, the AdS/CFT correspondence identifies each global symmetry in the boundary theory with a gauge symmetry in the 5D theory. The 5D action that describes the technicolor sector of our model is \begin{equation} \label{eq:5Daction} S_{5D} = \int d^5x \sqrt{g} \; \textrm{Tr} \left\{ -\frac{1}{2g_5^2}(F_R^2+F_L^2) + |DX|^2 + 3|X|^2 \right\} \,. \end{equation} The chiral symmetry of the technicolor sector corresponds to the SU(2)$_L \times $SU(2)$_R$ gauge symmetry of the 5D theory, with gauge fields $A_{L,R} = A^a_{L,R}t^a$, where the $t^a$ are generators of SU(2) with ${\rm Tr} \, t^a t^b = \delta^{ab}/2$. The covariant derivative and field strength tensors are defined by $D_\mu X = \partial_\mu X - i A_{L\mu}X + iXA_{R\mu}$ and $F_{L,R\;\mu\nu} = \partial_\mu A_{L,R\;\nu} - \partial_\nu A_{L,R\;\mu} - i \left[A_{L,R\;\mu},A_{L,R\;\nu}\right]$. The scalar field $(2/z)X$ corresponds to the operator $\bar q_R q_L$, while the gauge fields $A_{L,R\;\mu}^a$ correspond to the chiral currents $\bar q_{L,R}\gamma^\mu t^a q_{L,R}$. 
The equations of motion for the $X$ field may be solved, subject to the UV boundary condition $2/\epsilon \, X(\epsilon) = m_q$: \begin{equation}\label{eq:theprof} X(z) = \frac{1}{2} (m_q z + \sigma_c z^3) \equiv \frac{1}{2} X_0(z). \end{equation} The techniquark mass $m_q$ is related to the parameter $h$ by $m_q = h \, f'/\sqrt{2}$. The coefficient $\sigma_c$ is equal to the condensate $\sigma_0$ (defined in Eq.~(\ref{eq:condensate})) when $m_q=0$, as can be shown by varying the action with respect to $m_q$ and then taking the chiral limit. More generally, $\sigma_c$ is a parameter of the holographic theory that we will eliminate in favor of the technipion decay constant $f$, as we discuss below. We work with the vector and axial vector fields $V = (A_L+A_R)/\sqrt{2}$ and $A = (A_L-A_R)/\sqrt{2}$, respectively. The bulk-to-boundary propagator $V(q,z)$ is defined as the solution to the transverse equations of motion with $V_\mu(q,z)_\perp \equiv V(q,z) V_\mu(q)_\perp$ and $V(q,\epsilon)=1$, where $\epsilon$ is the UV boundary. From Eq.~(\ref{eq:5Daction}), it follows that the bulk-to-boundary propagators satisfy \begin{eqnarray} \label{eq:btbv} \partial_z\left(\frac{1}{z}\partial_z V(q,z)\right)+\frac{q^2}{z}V(q,z)&=&0 \, ,\\ \label{eq:btba} \partial_z\left(\frac{1}{z}\partial_z A(q,z)\right)+\frac{q^2}{z}A(q,z)-\frac{g_5^2 X_0(z)^2}{2z^3}A(q,z)&=&0. \end{eqnarray} In accordance with Eq.~(\ref{eq:correspondence}), the vector and axial vector two-point functions are given holographically by~\cite{adsqcd} \begin{equation} \label{eq:pifunctions} \Pi_V(-q^2) = \left.\frac{2}{g_5^2}\frac{1}{z}\frac{\partial V(q,z)}{\partial z}\right|_{z=\epsilon}, \ \ \ \Pi_A(-q^2) = \left.\frac{2}{g_5^2}\frac{1}{z}\frac{\partial A(q,z)}{\partial z}\right|_{z=\epsilon}, 
\end{equation} where \begin{eqnarray} \int d^4x \, e^{iq\cdot x} \left<J_V^{a\,\mu}(x)J_V^{b\,\nu}(0)\right> &\equiv& \delta^{ab} \left(\frac{q^\mu q^\nu}{q^2}-g^{\mu\nu}\right) \Pi_V(-q^2) \, ,\nonumber \\ \int d^4x \, e^{iq\cdot x} \left<J_A^{a\,\mu}(x)J_A^{b\,\nu}(0)\right> &\equiv& \delta^{ab} \left(\frac{q^\mu q^\nu}{q^2}-g^{\mu\nu}\right) \Pi_A(-q^2). \end{eqnarray} Comparing $\Pi_V$ with the known perturbative result for an SU($N$) gauge theory, valid at high $q^2$, one finds~\cite{adsqcd} \begin{equation} \label{eq:matching} g_5^2 = \frac{24\pi^2}{N} \, . \end{equation} We discuss this assumption further in the section on our numerical results\footnote{The equations in the published version of Ref.~\cite{cet1} corresponding to Eqs.~(\ref{eq:pifunctions}) and (\ref{eq:matching}) above are off by a factor of $2$. These errors are corrected in arXiv:hep-ph/0612242 (v4).}. With the holographic self-energies determined, we may compute the technipion decay constant using the observation that $\Pi_A \rightarrow -f^2$ as $q^2 \rightarrow 0$ in the chiral limit, as in Ref.~\cite{adsqcd}. Since we treat $f$ as an input parameter, this computation may be inverted to solve for the parameter $\sigma_c$ defined in Eq.~(\ref{eq:theprof}). The holographic model of the technicolor sector is then determined by three free parameters: $h$, $f$, and $z_m$. The IR cutoff $z_m$, however, may be eliminated in terms of a single physical observable, the technirho mass. The technirho corresponds to the lowest normalizable mode of the 5D vector field. The technirho wave function $\psi_\rho(z)$ satisfies the same equation of motion as the bulk-to-boundary propagator in Eq.~(\ref{eq:btbv}), but with different boundary conditions: $\psi_\rho(\epsilon)=0$ and $\partial_z\psi_\rho(z_m)=0$. These boundary conditions are satisfied when $q^2=m_\rho^2$ (or the squared mass of any higher vector mode). 
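The discrete spectrum implied by these boundary conditions is easy to check numerically: the vector equation of motion is solved by $\psi_\rho(z) \propto z\, J_1(qz)$, and the Neumann condition at $z_m$ reduces to $J_0(q\,z_m)=0$. A minimal pure-Python sketch (power series plus bisection, no external libraries):

```python
import math

def J0(x):
    """Bessel J0 via its power series (accurate for x of order a few)."""
    term, total = 1.0, 1.0
    for k in range(1, 40):
        term *= -(x * x / 4.0) / (k * k)
        total += term
    return total

def bisect_zero(f, a, b, tol=1e-12):
    """Bisection for a zero of f on [a, b], assuming one sign change."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0.0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

x1 = bisect_zero(J0, 2.0, 3.0)  # first zero of J0
print(x1)                        # about 2.4048
```

For a given IR scale, the lightest technirho therefore sits at $m_\rho \approx 2.405/z_m$, and the higher zeros of $J_0$ give the excited vector modes.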
The vector equation of motion may be solved analytically, and one finds that $z_m$ is determined by \begin{equation} J_0(m_\rho \, z_m) =0 \,\,\,. \end{equation} Hence, for fixed values of $h$, the $f$-$m_\rho$ plane provides a convenient visual representation of the parameter space of the model. For our subsequent analysis, we will need the coupling of the technirho to the physical pion states. Couplings between modes may be obtained by substituting properly normalized wave functions into the appropriate interaction terms of the 5D theory, and then integrating over the extra dimension. Requiring that the 4D kinetic terms of the technirho are canonical gives us the normalization condition \begin{equation} \int(dz/z)\psi_\rho(z)^2=1 \, . \end{equation} For the pions, the situation is slightly more complicated. There is an isotriplet component $\pi$ of the $X$ field, $X=X_0\exp(2 \, i\, \pi^a \, t^a)$; the longitudinal component of the axial vector field, $\varphi$, is also an isotriplet, $A_M = A_{M\perp} + \partial_M\varphi$. These fields satisfy the coupled equations of motion \begin{eqnarray} \label{eq:pioneom} \partial_z \left(\frac{1}{z}\partial_z\varphi^a\right)+\frac{g_5^2X_0^2}{\sqrt{2}z^3} \left(\pi^a-\frac{\varphi^a}{\sqrt{2}}\right)&=&0 \, ,\nonumber \\ -\sqrt{2}q^2\partial_z\varphi^a+\frac{g_5^2X_0^2}{z^2}\partial_z\pi^a&=&0 \, . \end{eqnarray} The technipion state $\Pi$ corresponds to an eigensolution of the form $\pi(q,z) = \pi(z)\Pi(q)$ and $\varphi(q,z)=\varphi(z)\Pi(q)$, subject to the boundary conditions $\varphi'(z_m)=\varphi(\epsilon)=\pi(\epsilon)=0$~\cite{adsqcd}. Again, requiring a canonical 4D kinetic term for the $\Pi$ field gives the desired normalization condition \begin{equation} \int dz \; \left(\frac{\varphi'(z)^2}{g_5^2\;z}+\frac{X_0(z)^2\left(\pi(z)-\varphi(z)/\sqrt{2}\right)^2}{z^3}\right) = 1. 
\end{equation} The $\rho \,\Pi \,\Pi$ coupling originates from $V\varphi\varphi$, $V\varphi\pi$ and $V\pi\pi$ interactions in the 5D theory, evaluated on the lowest modes: \begin{equation} \label{eq:grhoPiPi} g_{\rho \,\Pi\,\Pi} = \frac{g_5}{\sqrt{2}}\int dz \; \psi_\rho(z) \left(\frac{\varphi'(z)^2}{g_5^2\;z} +\frac{X_0(z)^2\left(\pi(z)-\varphi(z)/\sqrt{2}\right)^2}{z^3}\right). \end{equation} This result is not quite what we need since we have not taken into account that physical pion states in the bosonic version of the theory involve mixing between the $\Pi$ and $\Pi'$ fields. The mass of the $\Pi$ field that follows from Eq.~(\ref{eq:pioneom}) corresponds to the $\Pi^2$ part of the chiral Lagrangian term in Eq.~(\ref{eq:PhiHmixing}), allowing us to fix the coefficient $c_1$. It follows that the physical pion mass and the $\Pi$ mass are related by \begin{equation} \label{eq:mPi} m_\Pi^2 = m_\pi^2 \frac{f'^2}{v^2} \end{equation} where $m_\Pi^2$ is the $q^2$ eigenvalue of Eq.~(\ref{eq:pioneom}). Following from the 5D Lagrangian, the generic interaction between the technirho and the $\pi_a$ and/or $\pi_p$ fields is of the form \begin{equation} \label{eq:rhopion} \mathcal L_{\rho XY} = ig_{\rho XY} \rho_0^\mu\left[(\partial_\mu X^+)Y^- - Y^+(\partial_\mu X^-)\right], \end{equation} where $X$ and $Y$ are either a physical or absorbed pion. Taking into account the mixing in Eqs.~(\ref{eq:absorbedmixing}-\ref{eq:physicalmixing}), it follows from Eq.~(\ref{eq:grhoPiPi}) that \begin{eqnarray} \label{eq:gcoupling} g_{\rho\pi_a\pi_a} &=& \frac{f^2}{v^2} g_{\rho\Pi\Pi} \, , \nonumber \\ g_{\rho\pi_a\pi_p} =g_{\rho\pi_p\pi_a} &=& \frac{ff'}{v^2} g_{\rho\Pi\Pi} \, , \nonumber \\ g_{\rho\pi_p\pi_p} &=& \frac{f'^2}{v^2} g_{\rho\Pi\Pi} \, . \end{eqnarray} For the masses of the technirho of interest to us later, the $\pi_a$ couplings will accurately describe the coupling of the technirho to longitudinal $W$ bosons via the Goldstone boson equivalence theorem. 
\section{Constraints on the model}\label{sec:four} \subsection{$S$ Parameter} The size of the $S$ parameter represents a significant challenge for most technicolor theories~\cite{PT}. Electroweak precision tests favor a value smaller than $0.09$~\cite{Amsler:2008zzb}; we use this fact to exclude regions of the $f$-$m_\rho$ plane. The $S$ parameter may be defined in terms of the self-energies $\Pi_V$ and $\Pi_A$, which are computed holographically via Eq.~(\ref{eq:pifunctions}): \begin{equation}\label{eq:sdefine} S = 4\pi \frac{d}{dq^2} \left.\left(\Pi_V(-q^2)-\Pi_A(-q^2)\right)\right|_{q^2\rightarrow 0} \, . \end{equation} Note that the dependence of the self-energies on the ultraviolet cutoff, $1/\epsilon$, cancels between the two terms in Eq.~(\ref{eq:sdefine})\footnote{Given that we extract $f$ from $\Pi_A$ alone, it is worth noting that the $\ln\epsilon$ dependence in this self-energy vanishes for $q^2=0$ in the chiral limit. However, for $m_q \neq 0$, there is a divergence proportional to $m_q^2 \ln \epsilon$ in $\Pi_A(0)$ that we subtract. In the language of chiral perturbation theory, this is equivalent to adding a counterterm whose unknown finite part is of order $m_q^2$. For the techniquark masses we consider, this is a negligible correction.}. It was shown in Ref.~\cite{cet1} that the value of the $S$-parameter may be reduced by decreasing $f$ or increasing $m_\rho$. (The same effect has been described in a different context in Ref.~\cite{acp}.) In the first case, one approaches the limit where electroweak symmetry breaking is accomplished almost entirely by the $\phi$ field, so the presence of technihadronic resonances is irrelevant; in the second case, the technihadronic resonances are decoupled, which also reduces the result for fixed $f$. 
\subsection{Unitarity} \begin{figure}[t] \centering \includegraphics[width=75mm,angle=0]{UnitaryHiggs.eps} \caption{Gauge and $\sigma$-boson contributions to $WW\rightarrow WW$ scattering.} \label{fig:Unitary Higgs} \end{figure} In the Standard Model, unitarity of the $W^+W^- \rightarrow W^+W^-$ scattering amplitude can be used to obtain a constraint on the Higgs boson mass~\cite{Lee:1977eg}. In the present model, the $\sigma$ field is analogous to the Standard Model Higgs boson, but its coupling to $W^+W^-$ is reduced by a factor of $f'/v$, as we saw in Eq.~(\ref{eq:siggb}). The Feynman diagrams involving gauge fields and the $\sigma$ boson are shown in Fig.~\ref{fig:Unitary Higgs}. The corresponding amplitude is given at leading order by \begin{equation} \mathcal M_{\rm gauge} + \mathcal M_{\sigma} = \frac{1}{v^2}\left(s+t\right) -\frac{1}{v^2}\left(\frac{f'}{v}\right)^2\left(\frac{s^2}{s-m_\sigma^2}+\frac{t^2}{t-m_\sigma^2}\right), \end{equation} for momenta large compared to $m_W$. The scattering amplitude also receives important contributions from diagrams involving technirho exchanges, shown in Fig.~\ref{fig:Unitary Rho}. To evaluate these, we use the Goldstone boson equivalence theorem and compute the technirho-exchange contributions to $\pi_a^+ \pi_a^- \rightarrow \pi_a^+ \pi_a^-$, where $\pi_a$ is the linear combination of isotriplet fields that would be absent in unitary gauge. This will give the longitudinal $W$ boson scattering amplitude accurately for external momenta large compared to the $W$ boson mass, a criterion that will always be satisfied in the regions of parameter space that are of interest to us. The $\rho\pi_a\pi_a$ coupling is given by Eqs.~(\ref{eq:rhopion}) and (\ref{eq:gcoupling}), with $g_{\rho\Pi\Pi}$ computed holographically using Eq.~(\ref{eq:grhoPiPi}). 
Thus, we find the technirho contribution to the scattering amplitude \begin{equation} \mathcal M_\rho = g_{\rho \pi_a\pi_a}^2 \left(\frac{s+2t}{s-m_\rho^2}+\frac{2s+t}{t-m_\rho^2}\right), \end{equation} and the total amplitude \begin{equation} \mathcal M = \mathcal M_{\rm gauge} + \mathcal M_{\sigma} + \mathcal M_\rho. \end{equation} Note that the total amplitude is gauge invariant, as is $\mathcal M_\rho$ separately. \begin{figure}[t] \centering \includegraphics[width=75mm,angle=0]{UnitaryRho.eps} \caption{Technirho contributions to $WW\rightarrow WW$ scattering.} \label{fig:Unitary Rho} \end{figure} The most significant constraint from unitarity can be obtained by considering the $J=0$ partial wave \begin{equation} a_0(s) = \frac{1}{16\pi s} \int_{-s}^0 \mathcal M \, dt. \end{equation} Following Ref.~\cite{hhg}, we require $|\textrm{Re } a_0(s)| \leq 1/2$ over the range of energies in which our holographic calculation is valid. Based on what is known from holographic models of QCD, the holographic construction is trustworthy up to the mass of the lowest vector and axial-vector resonances, but becomes increasingly less accurate when properties of heavier hadronic resonances are considered. Thus, we take the mass scale of the second vector resonance ({\em i.e.}, the first excited state of the technirho) as a cutoff for our effective theory. If unitarity is violated above this scale, no conclusion can be drawn because the calculational framework is suspect. If unitarity is violated below this scale, the effective theory is excluded in its minimal form. Even with the $\sigma$ boson taken as light as possible, the technirho cannot be made arbitrarily heavy without violating this constraint. However, the lower bound on the technirho mass is relaxed when $f$ is made small, since the model mimics the Standard Model in this limit. 
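To illustrate the partial-wave machinery, the sketch below integrates the amplitude numerically. The inputs are illustrative assumptions rather than a holographic fit: $f' = v$ (the Standard-Model-like limit), the technirho decoupled ($g_{\rho\pi_a\pi_a} = 0$), and $m_\sigma$ at the LEP bound; in this limit a light $\sigma$ keeps $|a_0|$ well below $1/2$ at TeV energies.

```python
import math

V = 246.0          # GeV, electroweak scale
m_sigma = 114.4    # GeV, sigma mass at the LEP bound
fp_over_v = 1.0    # illustrative: Standard-Model-like limit f' = v
g_rho = 0.0        # illustrative: technirho decoupled in this limit
m_rho = 2000.0     # GeV (irrelevant when g_rho = 0)

def amplitude(s, t):
    """Gauge + sigma + technirho pieces of the W_L W_L amplitude."""
    m_g = (s + t) / V**2
    m_s = -(fp_over_v**2 / V**2) * (s**2 / (s - m_sigma**2)
                                    + t**2 / (t - m_sigma**2))
    m_r = g_rho**2 * ((s + 2*t) / (s - m_rho**2)
                      + (2*s + t) / (t - m_rho**2))
    return m_g + m_s + m_r

def a0(s, n=2000):
    """J = 0 partial wave: (1/16 pi s) * integral of M over t in [-s, 0]."""
    h = s / n
    total = 0.0
    for i in range(n):
        t = -s + (i + 0.5) * h   # midpoint rule
        total += amplitude(s, t) * h
    return total / (16.0 * math.pi * s)

print(a0(1000.0**2))  # small in magnitude: a light sigma unitarizes W_L W_L
```

Restoring $f' < v$ and a finite $g_{\rho\pi_a\pi_a}$ reproduces the competition described in the text, in which the technirho partially takes over the unitarizing role of the $\sigma$.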
\subsection{Higgs Mass} As mentioned in the previous section, the $\sigma$ field is similar to the Standard Model Higgs boson, but with modified couplings. If light enough, the $\sigma$ boson would have been produced at LEP via the Higgsstrahlung process $e^+e^- \rightarrow Z^* \rightarrow \sigma Z$. In the region of parameter space left viable after the consideration of the unitarity and $S$ parameter bounds (discussed in the next section), the $\sigma Z Z$ coupling is not less than $\sim 90$\% of its Standard Model value. In this case, the LEP bound is modified in accordance with Fig.~10 of Ref.~\cite{Barate:2003sz}, and differs negligibly from the Standard Model result. Note that the partial decay widths to two fermions are slightly enhanced, while the partial decay widths to two fermions and one gauge boson (via an off-shell gauge boson) are slightly suppressed. Hence, the branching fraction to the primary decay channels at LEP, namely $\bar b b$ and $\bar \tau \tau$, will remain practically unaffected. Thus, we will apply the bound $m_\sigma \geq 114.4 \textrm{ GeV}$ to constrain the parameter space of the model. The possible perturbative interactions of the $\phi$ field allow us to construct a potential for $\sigma$. Following the conventions of Ref.~\cite{bt3}, \begin{equation} \label{eq:V} V(\sigma_s) = \frac{M^2}{2}\sigma_s^2 + \frac{\lambda}{8}\sigma_s^4 - \frac{1}{64\pi^2}\left[3h_t^4 + 2Nh^4\right]\sigma_s^4\ln\left(\frac{\sigma_s^2}{\mu^2}\right) -8\sqrt{2}\pi c_1 f^3 h\sigma_s \, , \end{equation} where $M^2 \geq 0$ and $\sigma_s = \sigma + f'$. The third term represents the one-loop radiative corrections from the top quark and the techniquarks, though only the former is substantial. All other radiative corrections can be neglected for the values of the couplings that are relevant in the next section. 
In order to remove the dependence on the renormalization scale $\mu$, we define a renormalized coupling $\lambda_r \equiv 1/3 \, V''''(f')$, where primes refer to derivatives with respect to $\sigma_s$. It is convenient for us to work with a redefined coupling $\tilde{\lambda}$, where \begin{equation} \label{eq:Mlambda} \tilde \lambda \equiv \lambda_r + \frac{11}{24\pi^2}\left[ 3h_t^4+2Nh^4 \right] \, . \end{equation} Since the $\sigma$ field has no vacuum expectation value, $V'(f')=0$, from which it follows that \begin{equation} \label{eq:novevrelation} M^2f' + \frac{1}{2}\tilde \lambda f'^3 = 8\sqrt{2}c_1\pi f^3 h \, . \end{equation} The mass of $\sigma$ is given by $V''(f')$: \begin{equation} m_\sigma^2 = M^2 + \left( \frac{3}{2}\tilde \lambda - \frac{1}{8\pi^2}\left[3h_t^4+2Nh^4\right] \right)f'^2 \, . \end{equation} We can eliminate $\tilde \lambda$ using Eq.~(\ref{eq:novevrelation}), as well as the chiral Lagrangian parameter $c_1$ using Eqs.~(\ref{eq:mpi}) and (\ref{eq:Mlambda}): \begin{equation} m_\sigma^2 = 3 m_\pi^2 \frac{f^2}{v^2} - \frac{1}{8\pi^2}\left[3h_t^4+2Nh^4\right]f'^2 - 2 M^2 \, . \end{equation} Since $M^2$ is no smaller than zero in the models of interest\footnote{If $M^2<0$, EWSB occurs whether or not there is a technicolor condensate, and the model is different in spirit (and arguably less interesting) than the model we consider here. We do not discuss this possibility further.}, the last term is nonpositive, and we conclude that \begin{equation}\label{eq:msigbnd} m_\sigma^2 \leq 3 m_\pi^2 \frac{f^2}{v^2} - \frac{1}{8\pi^2}\left[3h_t^4+2Nh^4\right]f'^2 \, . \end{equation} The physical pion mass $m_\pi$ is computed holographically following the discussion of Sec.~\ref{sec:three}. For any region of the $f$-$m_\rho$ plane where the right-hand side of Eq.~(\ref{eq:msigbnd}) is less than the LEP bound, the $\sigma$ mass can never be any larger, for any positive $M^2$. 
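For clarity, the two relations used in this elimination are \begin{equation} \frac{3}{2}\tilde \lambda f'^2 = \frac{24\sqrt{2}\pi c_1 f^3 h}{f'} - 3M^2 \, , \,\,\,\,\, \frac{8\sqrt{2}\pi c_1 h f^3}{f'} = m_\pi^2\,\frac{f^2}{v^2} \, , \end{equation} the first following from Eq.~(\ref{eq:novevrelation}) and the second from Eq.~(\ref{eq:mpi}); substituting both into the expression for $V''(f')$ gives the form of $m_\sigma^2$ quoted above.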
\section{Numerical Results}\label{sec:five} \subsection{Allowed Regions} In this subsection, we present our results for the allowed region of the model's parameter space. We first assume $h=0.01$ and that the value of $g_5$ is the same as in an SU($4$) technicolor sector. For the unitarity calculation, we fix the $\sigma$ mass at the LEP bound, $114.4$~GeV; taking the $\sigma$ mass higher only makes the unitarity bound on the $\rho$ mass stronger. The excluded regions are plotted on the $f/v$ versus $m_\rho$ plane. We later consider how the excluded regions change as $h$ and $g_5$ are varied. Our results are presented in Fig.~\ref{fig:Exclusionh001}. The bound from the $S$ parameter eliminates the portion of the plot with large $f$ and small technirho masses. In this region, the mass scale of the technihadrons is low and electroweak symmetry breaking is primarily a consequence of technicolor dynamics; one would expect this to correspond to a problematic value of the $S$ parameter. On the other hand, the unitarity constraint excludes the region with large $f$ and large technirho masses. Here the theory is more technicolor-like, and the technirho has a greater impact than the $\sigma$ boson in unitarizing the theory. Finally, for small values of $f$ and small technirho masses, there is no value of $M^2 \geq 0$ for which the $\sigma$ boson mass is as large as the LEP bound. The allowed region is represented by the narrow band in the central region of Fig.~\ref{fig:Exclusionh001}. The intersection of the boundaries from the $S$ parameter and $\sigma$ mass bounds gives us a lower bound on the technirho mass: \begin{equation} m_\rho \geq 1.6 \textrm{ TeV}\, , \,\,\,\,\, (h=0.01) \,. 
\end{equation} \begin{figure} \centering \includegraphics[width=15cm,angle=0]{Exclusionh001.eps} \caption{The allowed region for $h = 0.01$.} \label{fig:Exclusionh001} \end{figure} The allowed region and the lower bound on the technirho mass can change if we vary the assumed values of $h$ and $g_5$. In Fig.~\ref{fig:varh}, we show the effect of increasing the techniquark Yukawa coupling $h$ from $0.01$ to $0.05$. While the unitarity and $S$ parameter exclusion regions are only slightly affected, the boundary of the LEP-excluded region is shifted noticeably. For a fixed point in the $f/v$-$m_\rho$ plane, taking $h$ larger ({\em i.e.}, making the techniquarks heavier) increases the minimum possible mass of the $\sigma$ boson, by increasing the technipion mass in the first term of Eq.~(\ref{eq:msigbnd}). Hence, the exclusion line shifts to lower values of the technirho mass, where $m_\pi$ is again reduced. A consequence of the enlarged allowed region is that the absolute lower bound on the technirho mass is relaxed: \begin{equation} m_\rho \geq 960 \textrm{ GeV}\, , \,\,\,\,\, (h=0.05)\,. \end{equation} One may reasonably ask what happens to the allowed parameter space as one varies $h$ further. For larger values of $h$, $m_q/\sigma_c^{1/3}$ quickly becomes of order one: the assumption of approximate chiral symmetry is lost and the predictions of the holographic theory are no longer trustworthy. If one requires $m_q/\sigma_c^{1/3} <1/3$ everywhere in the allowed region, then the largest possible techniquark Yukawa coupling is $h=0.18$, and one finds $m_\rho \geq 630$~GeV. On the other hand, if $h$ is taken smaller than $0.01$, the exclusion line from the LEP bound moves toward larger $m_\rho$, while the others do not change appreciably. For $h<1.0\times 10^{-3}$, no allowed region remains. \begin{figure} \centering \includegraphics[width=15cm,angle=0]{varh.eps} \caption{The allowed region as the value of $h$ is varied. 
The solid (dashed) lines correspond to $h=0.01$ $(0.05)$. The unitarity exclusion lines for $h=0.01$ and $h=0.05$ coincide and are represented by a single solid line.} \label{fig:varh} \end{figure} In constructing the holographic model of the technicolor sector, the 5D gauge coupling was chosen so that current-current correlators would have the same high-$q^2$ behavior as in an SU($N$) gauge theory. The same approach is used in successful holographic models of QCD~\cite{adsqcd}, where one knows with certainty that the gauge theory of interest is SU(3). In our case, this choice simply defines the class of models that we choose to study, and allows us to make definite phenomenological predictions. While predictivity requires us to make some well-motivated choice for $g_5$, it is still useful to study whether our predictions are sensitive to the precise value chosen. To do so, we allow $g_5$ to vary between half and twice the value given in Eq.~(\ref{eq:matching}). The results are shown in Fig.~\ref{fig:varg5}. One can see that the qualitative changes in the shape of the allowed region are not particularly dramatic. \begin{figure} \centering \includegraphics[width=15cm,angle=0]{varg5.eps} \caption{The allowed region as the value of $g_5$ is varied. The solid, dashed and dot-dashed lines correspond to $r=1$, $0.5$ and $2.0$ respectively, where $g_5 = r \sqrt{24\pi^2/N}$ with $N=4$.} \label{fig:varg5} \end{figure} \subsection{Technirho Decays} Since the allowed parameter space of Fig.~\ref{fig:Exclusionh001} that is within the reach of the LHC is limited, it is interesting to see whether observable quantities vary appreciably within this region. We focus on the $\rho$ and technipion masses, as well as the dominant $\rho$ branching fractions. The technirho couples most strongly to the technipion field $\Pi$, which is a linear combination of $\pi_p$ and $\pi_a$. 
Since the technipion mass is considerably smaller than the technirho mass, the Goldstone boson equivalence theorem implies that the decay width to absorbed technipions is equal to the decay width to longitudinal $W$ bosons. The interaction Lagrangian is defined in Eqs.~(\ref{eq:rhopion}) and (\ref{eq:gcoupling}), with the coupling $g_{\rho\,\Pi\,\Pi}$ calculated holographically using Eq.~(\ref{eq:grhoPiPi}). The associated decay widths are given by~\cite{cet1} \begin{eqnarray} \Gamma_{\pi_p\pi_p} &=& \frac{1}{48\pi}m_\rho g^2_{\rho\pi_p\pi_p}\left(1-4\frac{m_\pi^2}{m_\rho^2}\right)^{3/2}, \nonumber \\ \Gamma_{W_L W_L} &=& \frac{1}{48\pi}m_\rho g^2_{\rho\pi_a\pi_a}\left(1-4\frac{m_W^2}{m_\rho^2}\right)^{3/2}, \nonumber \\ \Gamma_{W_L^{\pm}\pi_p^{\mp}} &=& \frac{1}{48\pi}m_\rho g^2_{\rho\pi_p\pi_a}\left(1+\frac{m_\pi^4}{m_\rho^4} +\frac{m_W^4}{m_\rho^4}-2\frac{m_W^2}{m_\rho^2}-2\frac{m_\pi^2}{m_\rho^2}-2\frac{m_\pi^2m_W^2}{m_\rho^4}\right)^{3/2}. \end{eqnarray} There are many subleading decay modes that one could also consider. Each could be evaluated by a holographic calculation, in some cases requiring the modification of the 5D theory to include additional fields. A complete analysis goes beyond the scope of the present work. However, we will consider the decay to dileptons here, since this represents a particularly clean channel for searches at the LHC. This decay proceeds via the vector-meson dominance couplings of the technirho to the photon and the $Z$. In the 5D theory, the gauge fields of a weakly gauged subgroup of the global chiral symmetry of the boundary theory appear as coefficients of the non-normalizable modes of the bulk gauge fields~\cite{ss}. Substituting these into the 5D Lagrangian and integrating over the extra dimension yields the desired couplings: \begin{equation} \mathcal L = -\frac{m_\rho^2}{f_\rho} \left[ e A_\mu + \frac{e}{2 s_\theta c_\theta} (c_\theta^2-s_\theta^2) Z_\mu\right] \rho^\mu \, . 
\end{equation} Here $s_\theta$ ($c_\theta$) represents the sine (cosine) of the weak mixing angle. The technirho decay constant is given by \begin{equation} f_\rho = \frac{1}{2} \, g_5 \, (m_\rho z_m)\, J_1(m_\rho\, z_m) \,\,. \end{equation} Since the product $m_\rho \,z_m$ is fixed for the lowest vector resonance, one finds $f_\rho \approx 4.8$. The decay width to a single flavor of lepton is then straightforward to compute: \begin{eqnarray} \Gamma_{e^+ e^-} &=& \frac{4\pi\alpha_{EM}^2}{3f_\rho^2} m_\rho \nonumber \\ && \; \times\left[\left( Q_e +c_{V,e}\frac{c_\theta^2-s_\theta^2} {4s_\theta^2 c_\theta^2}\frac{m_\rho^2}{m_\rho^2-m_Z^2}\right)^2+\left( c_{A,e} \frac{c_\theta^2-s_\theta^2} {4s_\theta^2 c_\theta^2}\frac{m_\rho^2}{m_\rho^2-m_Z^2}\right)^2\right]. \end{eqnarray} Here $Q_e = -1$, $c_{V,e} = -1/2 + 2s_\theta^2$, and $c_{A,e} = -1/2$. The total decay width is obtained by summing the partial widths for all decay channels. Ignoring some of the possible subleading modes only provides small corrections to the branching fractions that we consider here. \begin{table} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline No. 
& $m_\rho$ (TeV) & $f/v$ & $m_\pi/m_\rho$ & $\Gamma_\rho/m_\rho$ & $\textrm{BR}_{WW}$ (\%) & $\textrm{BR}_{W\pi}$ (\%)& $\textrm{BR}_{\pi\pi}$ (\%)& $\textrm{BR}_{e^+e^-}$ (\%)\\ \hline 1& 1.59& 0.36& 0.13& 0.23& 1.8& 23.3& 74.9& $5\times 10^{-3}$\\ 2& 1.90& 0.36& 0.12& 0.24& 1.8& 23.3& 74.9& $5\times 10^{-3}$\\ 3& 2.21& 0.36& 0.12& 0.24& 1.8& 23.3& 74.8& $5\times 10^{-3}$\\ 4& 2.21& 0.29& 0.13& 0.25& 0.78& 16.1& 83.1& $5\times 10^{-3}$\\ 5& 2.21& 0.22& 0.15& 0.24& 0.27& 9.8& 89.9& $5\times 10^{-3}$\\ 6& 2.90& 0.22& 0.15& 0.24& 0.27& 9.8& 89.8& $5\times 10^{-3}$\\ 7& 3.50& 0.22& 0.15& 0.25& 0.27& 9.8& 89.8& $5\times 10^{-3}$\\ 8& 3.50& 0.16& 0.18& 0.23& 0.08& 5.5& 94.4& $5\times 10^{-3}$\\ 9& 3.50& 0.11& 0.22& 0.20& 0.01& 2.4& 97.6& $6\times 10^{-3}$\\ 10& 5.00& 0.15& 0.18& 0.23& 0.06& 4.9& 95.0& $5\times 10^{-3}$\\ 11& 5.00& 0.10& 0.22& 0.21& 0.01& 2.4& 97.6& $6\times 10^{-3}$\\ 12& 5.00& 0.05& 0.31& 0.14& $0.001$& 0.76& 99.1& $8\times 10^{-3}$\\ \hline \end{tabular} \caption{Technirho decay table for $h=0.01$} \label{tab:TechnirhoDecay} \end{table} \begin{figure} \centering \includegraphics[width=15cm,angle=0]{h001points.eps} \caption{Twelve sample points considered in Table \ref{tab:TechnirhoDecay}. The allowed region assumes $h = 0.01$.} \label{fig:h001points} \end{figure} Table~\ref{tab:TechnirhoDecay} presents our results for the case $h=0.01$, over a set of sample points within the allowed region of the model's parameter space. The location of the sample points is shown in Fig.~\ref{fig:h001points}. From the table, we see that the total decay width depends almost solely on $m_\rho$ for $m_\rho<5$~TeV (we don't consider larger masses, which are not likely to be within the reach of the LHC). The branching fractions, on the other hand, depend mostly on $f/v$. 
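The dilepton entries of the table can be cross-checked with a few lines of arithmetic. The sketch below uses illustrative inputs ($\alpha_{EM} \approx 1/128$ and $\sin^2\theta_W \approx 0.231$ are assumed values, with $f_\rho \approx 4.8$ from the text and $\Gamma_\rho/m_\rho \approx 0.24$ read off the table) and reproduces a dilepton branching fraction of order $5\times 10^{-5}$:

```python
import math

# Illustrative inputs (assumed values, not the holographic fit):
alpha = 1.0 / 128.0      # assumed alpha_EM near the TeV scale
s2 = 0.231               # assumed sin^2(theta_W)
c2 = 1.0 - s2
f_rho = 4.8              # technirho decay constant quoted in the text
m_rho = 2210.0           # GeV, sample point 3 of the table
m_Z = 91.19              # GeV

# Electron charges and Z couplings: Q_e, c_{V,e}, c_{A,e}
Q_e, cV, cA = -1.0, -0.5 + 2.0 * s2, -0.5

pref = 4.0 * math.pi * alpha**2 / (3.0 * f_rho**2)
zprop = m_rho**2 / (m_rho**2 - m_Z**2)      # Z propagator factor
mix = (c2 - s2) / (4.0 * s2 * c2)           # (c^2 - s^2)/(4 s^2 c^2)
bracket = (Q_e + cV * mix * zprop)**2 + (cA * mix * zprop)**2
gamma_ee_over_m = pref * bracket            # Gamma(e+e-)/m_rho

br_ee = gamma_ee_over_m / 0.24              # Gamma_rho/m_rho ~ 0.24 (table)
print(br_ee)                                # of order 5e-5, as in the table
```

The near constancy of this branching fraction across the table reflects the fact that $f_\rho$ and the total width in units of $m_\rho$ vary little over the allowed region.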
As $f/v$ becomes smaller, the branching fraction of the technirho to two physical pions increases, since the other dominant decay channels, $W_L\pi$ and $W_L W_L$, are suppressed by factors of $f/v$ and $f^2/v^2$, respectively. Everywhere in the allowed parameter space the decay mode to $\pi_p \pi_p$ is dominant. The branching fraction to dileptons varies only between $5\times 10^{-5}$ and $8 \times 10^{-5}$, always significantly suppressed compared to the leading modes. We have estimated that detection of the technirho via its decays to dileptons at the LHC would be feasible only if the dominant modes to technipions were kinematically forbidden. However, this favorable situation only occurs in regions of parameter space that are excluded by the bounds we have considered. \section{Conclusions}\label{sec:concl} We have shown how the combined constraints from the $S$ parameter, partial-wave unitarity and searches for a light Higgs-like scalar meaningfully limit the viable parameter space of a simple holographic bosonic technicolor model. The parameter space of the model itself indicates an important difference between this model and conventional, QCD-like technicolor theories: different points in the $f$-$m_\rho$ plane have different ratios of the chiral symmetry breaking scale to the confinement scale. In QCD-like technicolor, this ratio is fixed. The $S$ parameter eliminates the region where $f$ is large and $m_\rho$ is small. Here, technicolor dynamics is the primary agent responsible for electroweak symmetry breaking. Perturbative unitarity eliminates the region where $f$ is large and $m_\rho$ is large. Here, the technirho is more important than the Higgs-like scalar in unitarizing the theory. Finally, the LEP bound on the mass of the Higgs-like scalar eliminates the region where both $m_\rho$ and $f$ are small. For $m_\rho < 5$~TeV, a limited region of allowed parameter space remains.
We pointed out a number of physical quantities (for example, the ratio of the physical pion to technirho mass and the technirho branching fraction to two pions) that do not vary strongly within this region. We also studied how the allowed region changes as the techniquark mass and the 5D gauge coupling are varied. In the near future, data from the LHC may make it possible to rule out this type of model, without recourse to philosophical or aesthetic arguments. For example, something as simple as a tighter lower bound on the neutral scalar mass could substantially squeeze or eliminate the allowed band in Fig.~\ref{fig:Exclusionh001}. As another example, we have found that within the allowed region, the branching fraction of the technirho to two physical pions varies between 75\% and 100\%, suggesting a channel for future collider studies. Other decay modes that we have not considered may be of value in excluding additional parameter space, but these require additional holographic analysis as well as dedicated collider studies. \begin{acknowledgments} We thank Josh Erlich for valuable discussions. This work was supported by the NSF under Grant PHY-0757481. \end{acknowledgments}
1003.4823
\section{Introduction} The possibility that the propagation of light through our universe might suffer from chiral effects, which could rotate the plane of polarization, arises in a variety of important contexts, such as the presence of a cosmological pseudo-scalar condensate, Lorentz invariance violation, charge-parity-time (CPT) violation, neutrino number asymmetry, and Einstein equivalence principle (EEP) violation (see \citet{Nio07} for a review). The simplest form for modeling cosmological birefringence - a frequency-independent rotation of the plane of linear polarization - is described by the interaction of a pseudo-scalar field $\phi$ with photons through a term \citep{Kol90,Raf96}: \begin{equation} \label{eq:1} \mathcal{L}_{int}=-\frac{g_\phi}{4}\phi F_{\mu\nu}\tilde{F}^{\mu\nu}\,, \end{equation} where $g_\phi$ is the coupling constant, $F^{\mu\nu}$ is the electromagnetic tensor and $\tilde{F}^{\mu\nu}\equiv\frac{1}{2}\epsilon^{\mu\nu\rho\sigma} F_{\rho\sigma}$ its dual. $\phi$ could be a fundamental field or an effective description for cosmological birefringence due to Lorentz violation \citep{Car89}. Indeed, several efforts have been devoted to searching for evidence of rotation of the plane of polarization: since laboratory experiments lead us to expect tiny effects, cosmological distances are required for the effects to become measurable, and the obvious approach has therefore been to look for rotation in the most distant sources in the universe. What is required for this test is a polarized distant source for which the polarization orientation can be predicted: the predicted orientation is then compared with the measured one, looking for a rotation between the two. Radio galaxies (RG) are very good candidates, since these astrophysical objects are often polarized, both at radio and at UV-optical wavelengths, and are found at very high redshifts \citep{Mil08}.
Since the first successful detection of anisotropies in the polarization of the cosmic microwave background (CMB) by DASI in 2002 \citep{dasi}, the CMB polarization pattern has also become an important test for cosmological birefringence, which could probe the propagation of light back to the recombination surface, i.e. up to a redshift as high as $z \sim 1100$. Cosmological birefringence was first constrained from RG observations, since these were the first cosmological sources providing information on polarization. \citet{Car89} have used the fact that the distribution of the difference between the position angle (P.A.) of the radio axis and the P.A. of the E vector of linear radio polarization in distant RG ($0.4<z<1.5$) peaks around $90^o$ to argue that this phenomenon is intrinsic to the source and therefore to put limits ($|\theta| \le 6.0^o$ at the 95\% confidence level) on the rotation of the plane of polarization for radiation traveling over cosmic distances. Later \citet{Cim93} used the perpendicularity between the optical/UV axis and the linear optical/UV polarization of distant RG --- this perpendicularity is expected since the polarization and the elongation are due to scattering of anisotropic nuclear radiation --- to show that the plane of polarization is not rotated by more than $10^o$ for any distant RG with a polarization measurement up to $z=2.63$. The advantage of the test using the optical/UV polarization over that using the radio one is that it is based on a physical prediction of the orientation of the polarization due to scattering, which is lacking in the radio case, and that it does not require a correction for the Faraday rotation, which is considerable in the radio but negligible in the optical/UV. A few years later \citet{Nod97} claimed to have found a rotation, independent of the Faraday one, in the radio polarization of distant RG.
However, several authors \citep{War97,Eis97,Car97,Lor97} have independently and convincingly argued against this claim, and additional unpublished data \citep{Lea97} on the lack of rotation of the radio polarization of distant RG have been reported \citep{Car98}. As noted above, the observed polarization of the CMB has recently been used to put stringent constraints on cosmological birefringence, which would modify the linear polarization pattern created first by Thomson scattering and then by reionization, and generate correlations between temperature and B-mode polarization and between E-mode and B-mode polarization, which are otherwise zero in a standard cosmological scenario. By using the constant angle approximation - we denote the rotation angle by $\bar \theta$ in the following - for the integrated rotation of the linear polarization plane along the line of sight \citep{Lue98}, the observed power spectra are proportional to the power spectra on the last scattering surface through trigonometric functions of $\bar \theta$. Several constraints, summarized in Table 1, have been obtained within this approximation (see however \citet{Fin08} for the limits of the constant angle approximation). In this paper we report on an update of the test using the UV polarization of distant RG, since substantial new polarization data have become available for very distant RG after this test was last performed \citep{Cim93}, and discuss its implications in various contexts. Our paper is organized as follows: after this introduction, we describe the set of observations of UV polarization and the constraints on the rotation angle. We then discuss the implications of these constraints for cosmological birefringence caused by a pseudo-scalar field (playing the role of dark matter or dark energy) and by a Chern-Simons term in Sections 3 and 4, respectively. In Section 5 we conclude.
\section{Limits on the rotation of UV linear polarization of radio galaxies at $z>2$} The birefringence test based on the UV polarization of RG is independent of, complementary to, and at a different frequency from those based on the radio polarization of distant RG and on the CMB polarization. The UV polarization test also has some advantages over the other tests. The main advantage over the test based on the radio polarization is that the UV and the CMB tests are based on a clear prediction of the polarization angle, given by the scattering physics, while a clear prediction is lacking for the radio polarization angle, which is only phenomenologically found to peak at about $90^o$ and $0^o$ from the radio axis, without a clear understanding of the physics behind it \citep{Cla80}. Distant RG observations provide a snapshot integrated up to a much smaller redshift ($z \simeq {\rm few}$) with respect to the CMB one: as occurs for the CMB and SNe Ia in probing the expansion history, the combination of CMB and RG may be very useful to constrain cosmological birefringence. Being based at short wavelengths, the UV test is practically immune to Faraday rotation by intervening magnetic fields along the line of sight, which instead is relevant for radio and - to a smaller extent - for microwave observations \citep{Sca97}; we note, however, that Faraday rotation can be corrected for, since it depends on frequency, while birefringence does not. After the first birefringence test based on the UV polarization of distant RG by \citet{Cim93}, the test has been repeated by other authors.
In particular, the RG 3C 265 at $z=0.811$ is a suitable source, because its misalignment between the radio and optical/UV axes provides a crucial check of the scattering hypothesis \citep{diS96} and because its bright extensions allow one to build up a good polarization map \citep{Tra98}, in which the perpendicularity of the polarization vectors can be tested for each of the several tens of independent measurements at different locations. Indeed, the spectacular polarization pattern of 3C 265 has been used by \citet{War97} to rule out the birefringence claimed by \citet{Nod97}. Since then, several new polarization measurements for distant RG have become available and an update of the birefringence test has become desirable, in particular using the most distant sources, as a complement to the similar test performed using the CMB polarization. In order to perform the best test now possible with RG, we have selected all RG with $z>2.0$, with a degree of linear polarization $P$ larger than 5\% in the far UV (at $\sim$ 1300 \AA, rest frame), and with elongated optical morphology at these wavelengths, since these are the defining characteristics of the presence of scattered nuclear radiation \citep{diS94}, and can therefore lead to a safe test of the polarization rotation \citep{diS95}. The relevant data are collected from the literature in Table 2. The second-to-last column of the table lists the difference between the P.A. of the linear UV polarization and the P.A. of the UV axis, which we have measured on the available images in the rest-frame UV, and is shown in Figure 1. According to the scattering model, these two directions should be perpendicular for every object in our sample. The fact that the P.A.
difference is close to $90^o$ for every object, actually compatible with $90^o$ within the accuracy of the measurements, puts stringent constraints on any possible rotation $\theta$ of the polarization plane for light traveling to us from each RG, as listed in the last column of the table. Assuming that the rotation of the polarization plane should be the same in every direction (as is done in the CMB case), we can derive the average constraint $\theta = -0.8^o\pm 2.2^o$, as listed in the last row of the table. \section{Constraint on Cosmological Pseudo-Scalar Fields} Upper limits on the linear polarization rotation angle $\theta$ can be used to constrain cosmological birefringence caused by the coupling of the electromagnetic field to pseudo-scalar fields, suggested to solve the strong charge-parity (CP) problem \citep{Pec77}. The existence of light pseudo-scalar particles \citep{Wei77} is very relevant in cosmology, since these are viable candidates either for dark matter \citep{Kol90} or for dark energy \citep{Fri95}, depending on their (effective) mass. A pseudo-scalar field $\phi$ is predicted to couple to photons, as can be read from the Lagrangian of the electromagnetic-$\phi$ sector: \begin{equation} \mathcal{L}= -\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-\frac{1}{2}\nabla_{\mu}\phi\nabla^{\mu}\phi - V(\phi) -\frac{g_{\phi}}{4}\phi F_{\mu\nu}\tilde{F}^{\mu\nu} \label{total} \end{equation} where $V(\phi)$ is the potential for the pseudo-scalar field. At the lowest order in fluctuations, the photon is coupled to the time derivative of the cosmological value of $\phi$, which is governed by the potential.
Different time evolutions of $\phi$ lead to different amounts of cosmological birefringence; in the following two subsections we therefore consider representative cosmological scenarios with very different time variations of $\phi$ \footnote{Note that the CMB polarization auto and cross spectra depend on the time variation of $\phi$ and in many cases the constant angle approximation is a poor description of cosmological birefringence in CMB anisotropies \citep{Fin08}.}. \subsection{Dark Matter Pseudo-scalar Field} \label{Sect:DM} We consider as potential in Equation (\ref{total}): \begin{equation} V(\phi)=m^2 f_a^2 \left(1-\cos\frac{\phi}{f_a}\right) \label{potential_DM} \end{equation} where $m$ is the mass and $f_a$ is the energy scale at which the Peccei-Quinn symmetry is broken. In the dark matter regime the pseudo-scalar field oscillates near the minimum of the potential, so that $V(\phi)\simeq m^2\phi^2/2$. The evolution of the field as a function of cosmic time $t$ is \citep{Fin08} \begin{eqnarray} \phi(t) &\simeq&\sqrt{\frac{3\Omega_\mathrm{ MAT}}{\pi}}\frac{H_0 M_\mathrm{ pl}}{2 m a^{3/2}(t)}\nonumber\\ \label{E:phi_t_1} & & \sin\left[m t \sqrt{1-\left(1-\Omega_\mathrm{ MAT}\right)\left(\frac{3H_0}{2m}\right)^2}\right]\,, \label{phi:EVOL} \end{eqnarray} where $\Omega_\mathrm{ MAT}$ is the density parameter for $\phi$ nowadays (which we take equal to the dark matter one), $H_0$ is the Hubble constant, and $M_\mathrm{ pl}$ is the Planck mass. Averaging over the oscillations, the evolution of the scale factor is given by \citep{Fin08}: \begin{eqnarray} \label{a:LCDM} a(t) & \simeq &\left(\frac{\Omega_\mathrm{ MAT}}{1-\Omega_\mathrm{ MAT}}\right)^\frac{1}{3}\nonumber\\ & &\left[\sinh \left(\frac{3}{2}\sqrt{1-\Omega_\mathrm{ MAT}} H_0 t\right)\right]^\frac{2}{3}\,.
\end{eqnarray} Considering photon propagation in a homogeneous pseudo-scalar background ($\phi=\phi(\eta)$), the Fourier transform of the electromagnetic vector potential in the basis of left and right circular polarized modes in the plane transverse to the direction of propagation in the Coulomb gauge ($\nabla\cdot{\bf A}=0$) obeys: \begin{equation} \label{eq:Apm} \tilde{A}_{\pm}^{\prime\prime}(k,\eta)+\left[ k^2 \pm g_{\phi} \phi^{\prime} k \right] \tilde{A}_{\pm}(k,\eta)=0\,, \end{equation} where $\prime$ denotes the derivative with respect to conformal time $\eta$ ($d\eta=dt/a(t)$, \citet{Fin08}). The linear polarization rotation angle is given by: \begin{eqnarray} \theta_{\rm DM}(z)&=&\frac{g_\phi}{2}\left[\phi\left(\eta_0\right)-\phi\left(\eta\right)\right]\nonumber\\ &=&\frac{1}{4}\sqrt{\frac{3\Omega_\mathrm{ MAT}}{\pi}}\frac{g_\phi M_\mathrm{ pl}H_0}{m} \left(\frac{1}{a_0^{3/2}}-\frac{1}{a^{3/2}}\right)\nonumber\\ &=&- \frac{1}{4}\sqrt{\frac{3\Omega_\mathrm{ MAT}}{\pi}}\frac{g_\phi M_\mathrm{ pl}H_0}{m } \left[1-(1+z)^{3/2}\right]\,. \end{eqnarray} Fixing the average redshift ($\bar{z}=3$), $H_0=72\, \mathrm{km \, s}^{-1} \, \mathrm{Mpc}^{-1}$, $M_\mathrm{ pl}\simeq 1.22\times10^{19}$ GeV, and requiring $\Delta\theta<5.0^o$, we obtain a constraint in the plane $(\log_{10} m\,\left[\mbox{eV}\right],\, \log_{10} g_\phi\, \left[\mbox{eV}^{-1}\right])$, as shown in Figure~\ref{mg01_rg}, which we superimpose on the one obtained in \citet{Fin08}. \subsection{Dark Energy pseudo-scalar field} \label{Sect:DE} An ultralight pseudo Nambu-Goldstone boson could drive an accelerated expansion of the universe, as proposed by \citet{Fri95}, by considering a simple shift of the potential in Equation (\ref{potential_DM}): \begin{equation} V(\phi) = M^4 \left[1+\cos(\phi/f)\right] \label{potential_DE} \end{equation} where $M$ and $f$ are the mass and energy scales for the dark energy case, respectively (note that these numbers and $g_\phi$ may be quite different from the dark matter case).
When $\phi$ acts as dark energy, it is presently rolling toward the bottom of the potential (located at $\phi = \pi f$) with small velocity: in the future, $\phi$ will oscillate around the bottom of the potential and will become another matter component added to cold dark matter (CDM). The linear polarization angle $\theta$ is related to the variation of $\phi(\eta)$: \begin{equation} \label{theta:DE} \theta(\eta)=\frac{g_\phi}{2} \left[\phi(\eta_0)-\phi(\eta) \right]\,, \end{equation} and the evolution of $\phi$ is determined by solving the following system of equations: \begin{equation} \left\{ \begin{array}{l} \ddot{\phi}+3H\dot{\phi}-\frac{M^4}{f}\sin\frac{\phi}{f}=0\,,\\ H^2=\frac{8\pi}{3 M_\mathrm{ pl}^2}\left(\rho_\mathrm{ RAD}+\rho_\mathrm{ MAT}+\rho_{\phi}\right)\,. \end{array} \right. \end{equation} We solve it numerically, fixing $M=8.5\times10^{-4}$ eV, $f=0.3 M_\mathrm{ pl}/\sqrt{8\pi}$, $\phi_i/f=0.25$ and $\dot{\phi}_i=0$ \citep{Abr08}: see Figure~\ref{plot::modelPNGBb} for the evolution of the critical densities for matter ($\Omega_\mathrm{ MAT}$) and dark energy ($\Omega_{\phi}$) and for the parameter $w_\phi\equiv p_\phi/\rho_\phi$ of the dark energy equation of state. Figure~\ref{plot::ThetaPNGB2b} shows the variation of $\phi/f$ as a function of $\ln\,a/a_0$. In the region probed by high-redshift RG ($\bar{z}=3$) the pseudo-scalar field varies by $\Delta\phi/f\sim1.1$. Therefore Equation~(\ref{theta:DE}) can be used to obtain an upper limit on $g_\phi$: \begin{eqnarray} & &-5.0^o<\theta<3.4^o\nonumber\\ & &\Longrightarrow -2.2 \times 10^{-28} {\rm eV}^{-1} < g_\phi < 1.5 \times 10^{-28} {\rm eV}^{-1} \end{eqnarray} Let us also consider a runaway potential: \begin{equation} V(\phi) = V_0 \exp \left( - \lambda \sqrt{8 \pi} \frac{\phi}{M_{\rm pl}} \right) \,. \label{expotential_DE} \end{equation} The above potential has $M_{\rm pl}$ as its only physical scale, unlike the one in Equation (8).
The resulting dark energy model is stable for $\lambda < \sqrt{2}$ and has an equation of state $p_\phi=w_\phi \rho_\phi$, with $w_\phi = -1 + \lambda^2/3$, constant in time \citep{CLW}. The evolution of the scale factor in this cosmological model can therefore be found analytically \citep{GF}, as can the evolution of the scalar field. We therefore give the analytical formula for the rotation angle: \begin{eqnarray} \theta_{\rm DE}(z) &=& \frac{g_\phi}{2}\left[\phi\left(\eta_0\right)-\phi\left(\eta\right)\right]\nonumber\\ &=& g_\phi M_{\rm pl} \sqrt{\frac{1+w_\phi}{3}} \frac{1}{-w_\phi} \left[ \mbox{arcsinh} \left( \sqrt{\frac{\Omega_\phi}{1-\Omega_\phi}} \right) \right. \nonumber \\ & & \left. - \mbox{arcsinh} \left( \sqrt{\frac{\Omega_\phi}{1-\Omega_\phi}} a^\frac{- 3 w_\phi}{2} \right) \right] \,, \end{eqnarray} where $\Omega_\phi$ is the dark energy fraction at the present time. Figure 5 shows the value of $\theta_{\rm DE}(z=2.80)$ as a function of $(\Omega_\phi, w_\phi )$. By considering $\theta_{\rm DE}(z=2.80) \simeq 0.2 g_\phi M_{\rm pl}$ as a representative value, we obtain $|g_\phi | \lesssim {\rm few} \times 10^{-29}\, {\rm eV}^{-1}$. \section{Constraints on Chern-Simons Theory} \label{Sect:CS} We consider the following Lagrangian: \begin{equation} \mathcal{L}=-\frac{1}{4}F_{\mu\nu}F^{\mu\nu} -\frac{1}{2}p_\mu A_\nu \tilde{F}^{\mu\nu} \end{equation} where $p_\mu=(p_0,\mathbf p)$ is a constant 4-vector and $A_\nu$ the vector potential \citep{Car89}. The corresponding dispersion relation for an electromagnetic wave $k^\mu=\left(\omega,\mathbf{k}\right)$ is \citep{Car89}: \begin{equation} \omega^2-k^2=\pm\left(p_0 k-\omega p \cos\alpha\right)\left[1-\frac{p^2 \sin^2\alpha}{\omega^2-k^2}\right]^{-\frac{1}{2}} \end{equation} where $\alpha$ is the angle between $\mathbf p$ and $\mathbf k$.
The angle by which the plane of polarization rotates is half of the phase difference accumulated between the two circular polarizations. Since $p_\mu$ is expected to be small, the dispersion relation can be expanded at first order in $p_\mu$: \begin{equation} k=\omega\mp \frac{1}{2} \left(p_0-p\cos \alpha\right)\,. \end{equation} For a wave traveling a distance $L$ the linear polarization vector rotates by: \begin{equation} \theta=-\frac{1}{2}\left(p_0-p \cos\alpha\right)L \end{equation} independent of wavelength. In a $\Lambda$CDM universe the evolution of the scale factor in terms of cosmic time is given by Equation~(\ref{a:LCDM}), therefore the relation between $t$ and redshift is: \begin{equation} t=\frac{2}{3 H_0\sqrt{1-\Omega_\mathrm{MAT}}}\mbox{arcsinh}\left[\sqrt{\frac{1-\Omega_\mathrm{MAT}}{\Omega_\mathrm{MAT}}}\left(\frac{1}{1+z}\right)^{\frac{3}{2}}\right] \end{equation} Setting $L=t$ for the distance traveled by photons, the linear polarization plane of radiation emitted at redshift $z$ rotates, by today, by: \begin{eqnarray} \theta&=&-\frac{1}{2}\left(p_0-p \cos\alpha\right) t \nonumber\\ &=&-\frac{p_0-p \cos\alpha}{3 H_0\sqrt{1-\Omega_\mathrm{MAT}}} \left\{\mbox{arcsinh}\left[\sqrt{\frac{1-\Omega_\mathrm{MAT}}{\Omega_\mathrm{MAT}}}\right]\right.\nonumber\\ & &\left.- \mbox{arcsinh}\left[\sqrt{\frac{1-\Omega_\mathrm{MAT}}{\Omega_\mathrm{MAT}}}\left(\frac{1}{1+z}\right)^{\frac{3}{2}}\right] \right\} \end{eqnarray} Fixing $\Omega_\mathrm{MAT}=0.3$, $H_0 = 100 \, h \, \mathrm{ km \, s}^{-1} \, \mathrm{Mpc}^{-1}=2.13\,h\,\times10^{-33}$ eV and $\bar{z}=3$: \begin{equation} \left|p_0-p \cos\alpha\right|<\theta\times5.2\, h\,\times 10^{-33}\,\mbox{eV} \end{equation} For $h=0.72$ and $\theta<5.0^o$: \begin{equation} \left|p_0-p \cos\alpha\right|<3.2\times 10^{-34}\,\mbox{eV} \,, \end{equation} which updates the constraint given in \citet{Car89} for a matter-dominated universe to one valid for the present cosmological concordance model.
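The Chern-Simons bound quoted above can be verified directly. The short script below (illustrative; it simply inverts the relation between $\theta$ and $|p_0 - p\cos\alpha|$ using the quoted cosmological inputs) reproduces the stated number:

```python
import math

# Invert theta = |p0 - p cos(a)| * [asinh(r) - asinh(r (1+z)^{-3/2})] / (3 H0 sqrt(1-Omega))
# to bound |p0 - p cos(a)| for |theta| < 5 deg at z-bar = 3, h = 0.72, Omega_MAT = 0.3.
Omega_mat = 0.3
h = 0.72
H0 = 2.13 * h * 1e-33            # Hubble constant in eV, as given in the text
z = 3.0
theta_max = math.radians(5.0)    # 5 degrees in radians

r = math.sqrt((1.0 - Omega_mat) / Omega_mat)
# light travel time t0 - t(z) in eV^-1, from the quoted a(t) relation
travel = (math.asinh(r) - math.asinh(r * (1.0 + z) ** -1.5)) \
         / (3.0 * H0 * math.sqrt(1.0 - Omega_mat))
p_bound = theta_max / travel
# p_bound comes out at ~3.3e-34 eV, consistent with the quoted 3.2e-34 eV
```

The small residual difference from the quoted value reflects rounding of the $5.2\,h\times10^{-33}$ coefficient in the text.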
\section{Conclusions} Every single existing measurement of the UV linear polarization in RG at $z>2$, due to scattering of anisotropic nuclear radiation, excludes a rotation of the polarization plane by more than a few degrees while the light travels from the source to us, over more than 3/4 of the lifetime of the universe, confirming previous results at lower redshifts \citep{Cim93, War97}. The all-sky-average constraint derived on the rotation of the polarization from the set of observations considered in this paper ($\theta = -0.8^o\pm 2.2^o$) is independent of, but consistent with, the constraints derived from CMB observations. We have studied the implications of this constraint for physical models of cosmological birefringence, showing how observations at high redshifts such as those of RG are complementary to CMB anisotropies, as already occurs for SN Ia and the CMB in measuring the expansion history. In the framework of theoretical models associating cosmological birefringence with a variation of the Newton constant, our results increase our confidence in the validity of the EEP, on which all metric theories of gravity are based. An improvement in both the quantity and quality of the measurements of the UV linear polarization in RG at high redshift should be possible in the future with the coming generation of giant optical telescopes \citep{Gil08,Nel08,Joh08}, and would narrow the constraint on $\theta$ beyond what is now possible with RG and the CMB. \section*{Acknowledgements} We thank Wei-Tou Ni for his encouragement to publish this work. We also thank Marc Kamionkowski and the two referees for helpful comments.
2112.02305
\section{Introduction} \label{sec:intro} Intelligent reflecting surface (IRS)~\cite{IRSintro1,IRSintro2,IRSintro3} can enhance the network throughput and has received great attention recently. Specifically, an IRS is a planar meta-surface composed of a large number of tunable reflecting elements, each of which is able to reflect the incident signal while tuning its phase and/or amplitude. Based on the channel state information (CSI) and/or the channel statistics, the central controller of the IRS can collaboratively adjust its reflection coefficients such that the desired and interfering signals are enhanced and suppressed, respectively, thus substantially improving the wireless system performance. Moreover, compared to conventional techniques, such as active relaying/beamforming, an IRS does not require any transmit/receive radio frequency (RF) chains. Hence, it is also more energy and hardware efficient. Full-duplex (FD), another powerful wireless technique, can significantly improve the spectral efficiency (SE) \cite{FDintro1,FDintro2,FDintro3,FDintro4}. Compared with the conventional half-duplex (HD) scheme, it fully utilizes the spectrum by enabling signal transmission and reception over the same frequency at the same time, which can theoretically double the SE. However, the self-interference (SI) caused by the simultaneous downlink (DL) and uplink (UL) transmission is a challenging issue in FD systems. Fortunately, there have been key advances to address this issue~\cite{FDintro4,FDintro5,FDintro6}, such as passive suppression and analog and digital cancellation. Therefore, the FD technique has many applications in wireless communications, such as bidirectional communications \cite{FDbidirectional}, relays \cite{FDrelay1,FDrelay2}, and multi-user systems \cite{FDmultiuser1,FDmultiuser2}.
\vspace{-1.0em} \subsection{Prior works} Lately, there have been a number of applications of IRS in various wireless communication scenarios, such as IRS-aided secure communication~\cite{IRSsecure1,IRSsecure2,IRSsecure3}, simultaneous wireless information and power transfer~\cite{IRSswipt1,IRSswipt2}, millimeter wave communication~\cite{IRSmmWave1,IRSmmWave2}, and mobile edge computing~\cite{IRSMEC1}. More recently, several works on IRS-aided FD systems have been proposed~\cite{IRSFD1,IRSFD2,IRSFD3,IRSFD4,IRSFD5,IRSFD6}. In \cite{IRSFD1}, an IRS is used to enhance FD two-way communication systems, where the source precoders and the IRS passive beamforming matrix are jointly optimized to maximize the system capacity based on the Arimoto-Blahut algorithm. For the same scenario, an algorithm with a faster convergence speed has been proposed in \cite{IRSFD2}. Moreover, a novel hybrid communication network that utilizes both an FD decode-and-forward relay and an IRS to enhance the data transmission rate has been investigated in \cite{IRSFD3}. In addition, passive beamforming and deployment design have been investigated in \cite{IRSFD4} for an IRS-aided cellular FD system. To ensure user fairness, the minimum weighted rate is maximized in \cite{IRSFD5} in an IRS-aided multi-user cellular FD system. Furthermore, the resource allocation problem for an IRS-assisted FD cognitive radio system has been studied in \cite{IRSFD6}. It is worth mentioning that in the above studies, the beamforming matrices are designed based on the instantaneous CSI, which incurs high computational complexity and large signaling overhead in practice. Recently, more practical schemes have been developed by exploiting the channel statistics~\cite{IRSCDI1,IRSCDI2,IRSCDI3}. The angle domain framework in \cite{IRSCDI1} designs the beamforming matrices at the access point (AP) and IRS based on the derived effective angles, which approaches the performance with full CSI.
By utilizing historical channel samples, the authors of \cite{IRSCDI2} proposed two stochastic optimization algorithms to configure the IRS phase shifts. In \cite{IRSCDI3}, a two-timescale protocol has been exploited to design the passive beamforming matrix based on the channel statistics and the active beamforming matrices based on the effective CSI. Although the aforementioned algorithms that utilize channel statistics can significantly reduce the CSI overhead, they still suffer from high computational complexity since complex manipulations, such as matrix inversion, are involved in each iteration. In order to tackle this problem, machine learning based techniques, such as deep neural networks (DNNs), have been employed for beamforming design in IRS-aided systems~\cite{blackbox1,blackbox2,blackbox3}. A DNN consists only of linear operations and simple non-linear activation functions, which can potentially meet the real-time requirement~\cite{fastbeamforming}. However, black-box NNs generally have poor interpretability and require a large number of training samples. To this end, a deep-unfolding NN~\cite{unfoldingintro1} unfolds an iterative optimization algorithm into a layer-wise structure and learns the key parameters. Deep-unfolding NNs take advantage of both model-driven optimization algorithms and data-driven learning-based algorithms. They are more interpretable and efficient than black-box NNs and can achieve performance comparable to conventional optimization algorithms with dramatically reduced computational complexity. Hence, deep-unfolding has attracted great research interest and has a wide range of applications in communications, such as signal detection~\cite{unfolding1,unfolding2,unfolding3}, resource allocation~\cite{unfolding4,unfolding5}, and precoding~\cite{unfolding6,unfolding7,unfolding8,unfolding9}.
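To make the deep-unfolding idea concrete, the toy sketch below treats a fixed number of gradient-descent iterations for a least-squares problem as network layers, each with its own step-size parameter. All names and the objective are illustrative, not taken from the cited works; in an actual deep-unfolding NN the per-layer step sizes would be learned by back propagation rather than hand-set.

```python
def unfolded_gd(A, b, steps, x0=None):
    """Run len(steps) unfolded layers of x <- x - step_k * A^T (A x - b)."""
    n = len(A[0])
    x = list(x0) if x0 is not None else [0.0] * n
    for step in steps:
        # residual r = A x - b
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(len(A))]
        # gradient g = A^T r of the least-squares objective 0.5 * ||A x - b||^2
        g = [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(n)]
        x = [x[j] - step * g[j] for j in range(n)]
    return x

# Small well-conditioned example: solve A x = b with A = diag(2, 1), b = [2, 3].
A = [[2.0, 0.0], [0.0, 1.0]]
b = [2.0, 3.0]
x = unfolded_gd(A, b, steps=[0.2] * 50)
# x approaches the exact solution [1.0, 3.0]
```

With a small fixed number of layers and trained (rather than constant) step sizes, this structure trades a long iterative loop for a short, differentiable computation graph, which is the source of the complexity savings discussed above.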
\vspace{-1.0em} \subsection{Main Contributions} Inspired by the above works, we investigate a multi-user MIMO IRS-assisted FD system in this paper, which consists of an AP, an IRS, multiple UL users and DL users as shown in Fig. \ref{fig:structure}. The AP operates in an FD mode and the users operate in an HD mode. We jointly optimize the active beamforming matrices at the AP and UL users and the passive beamforming matrix at the IRS to maximize the weighted sum-rate of the system. Since it is practically difficult to acquire the CSI for IRS-related links due to its passive operation and large number of elements, we conceive a mixed-timescale scheme. Specifically, the high-dimensional passive beamforming matrix at the IRS is updated based on the channel statistics while the active beamforming matrices are optimized based on the low-dimensional real-time effective CSI at each time slot. The proposed scheme avoids estimating the high-dimensional IRS-related channels in each time slot and saves the heavy overhead required by the conventional single-timescale algorithm, thus alleviating the performance degradation caused by CSI delay. However, the mixed-timescale design brings new challenges to the algorithm design since the objective function becomes stochastic and the long-term and short-term variables are highly coupled. To address these issues, we develop an efficient stochastic successive convex approximation (SSCA)-based optimization algorithm. More precisely, for the short-term active beamforming design, we equivalently convert the original problem into a more tractable form and propose a block coordinate descent (BCD)-type algorithm. Then, with the optimized short-term active beamforming matrices, the long-term passive beamforming matrix is designed based on SSCA~\cite{cssca}.
Specifically, we construct a convex surrogate function based on the collected full CSI samples\:\!\footnote{In this work, channel statistics refer to the moments or distribution of the channel fading realizations. By observing the collected full channel samples (possibly outdated), the proposed SSCA-based design algorithm can automatically learn the channel statistics (in an implicit way) and converge to a stationary point of the stochastic optimization problem considered in our design.} to approximate the objective function and iteratively optimize the IRS passive beamforming matrix until convergence. The proposed algorithm is guaranteed to converge to a stationary point of the original problem. Furthermore, we design a novel deep-unfolding NN that unfolds the proposed SSCA-based optimization algorithm into a layer-wise structure. The proposed deep-unfolding NN consists of a long-term passive beamforming network (LPBN) and a short-term active beamforming network (SABN). In the forward propagation stage, the collected full channel samples are first fed into the LPBN, which outputs the low-dimensional effective CSI. Note that we directly set the IRS passive beamforming matrix as the learnable parameter of the LPBN. Then, the effective CSI passes through the SABN, which outputs the active beamforming matrices. The SABN maintains the structure of the proposed BCD-type active beamforming algorithm but employs a novel non-linear activation function and some learnable parameters induced by the first-order Taylor expansion to approximate the matrix inversion, which significantly reduces the computational complexity. In the back propagation stage, the learnable parameters of the deep-unfolding NN are updated based on the stochastic gradient descent (SGD) method. The main contributions of this paper are summarized as follows.
\begin{itemize} \item We study a multi-user MIMO IRS-assisted FD system, which has not been well investigated in the literature, and propose a practical mixed-timescale beamforming scheme to reduce the CSI overhead and mitigate the CSI mismatch caused by delay. \item To maximize the weighted average sum-rate in the IRS-assisted FD system, we propose an efficient mixed-timescale joint active and passive beamforming algorithm based on the framework of SSCA, which can guarantee convergence. \item To further reduce the computational complexity, we develop a novel deep-unfolding NN that unfolds the proposed mixed-timescale SSCA-based algorithm into a layer-wise structure. The deep-unfolding NN exploits a novel non-linear activation function and some learnable parameters induced by the first-order Taylor expansion to approximate the matrix inversion. \item Simulation results show that the proposed mixed-timescale beamforming algorithm outperforms the single-timescale counterpart in the presence of CSI delay, and the proposed deep-unfolding NN approaches the performance of the SSCA-based algorithm with much reduced computational complexity when deployed online. \end{itemize} \vspace{-1.0em} \subsection{Organization and Notations} The paper is organized as follows. Section II describes the system model and formulates the investigated problem. The proposed SSCA-based mixed-timescale beamforming optimization algorithm is presented in Section III. Then, Section IV introduces the proposed deep-unfolding NN based algorithm. Section V presents the simulation results and Section VI concludes this paper. Scalars, vectors and matrices are denoted by lower case, boldface lower case and boldface upper case letters, respectively. $\mathbf{I}$ represents an identity matrix and $\mathbf{0}$ denotes an all-zero matrix.
For a matrix $\mathbf{A}$, ${{\mathbf{A}}^T}$, $\textrm{conj}(\mathbf{A})$, ${{\mathbf{A}}^H}$, and $\|\mathbf{A}\|$ denote its transpose, conjugate, conjugate transpose, and Frobenius norm, respectively. For a square matrix $\mathbf{A}$, $\textrm{Tr} \{\mathbf{A}\}$ and $\mathbf{A}^{-1}$ denote its trace and inverse, respectively, while ${\mathbf{A}} \succeq {\mathbf{0}}~({\mathbf{A}} \preceq {\mathbf{0}})$ means that $\mathbf{A}$ is positive (negative) semi-definite. For a vector $\mathbf{a}$, $\|\mathbf{a}\|$ represents its Euclidean norm. $\Re e\{\cdot\}$ ($\Im m\{\cdot\}$) denotes the real (imaginary) part of a variable. $| \cdot |$ denotes the absolute value of a complex scalar. ${\mathbb{C}^{m \times n}}\;({\mathbb{R}^{m \times n}})$ denotes the space of ${m \times n}$ complex (real) matrices and $\angle$ represents the phase of complex vectors/matrices. $\text{diag}(\cdot)$ extracts the diagonal elements of a square matrix and $\text{Diag}(\cdot)$ constructs a diagonal matrix based on the input vector. The key notations used in this paper are summarized in Table I.
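As an implementation note (an assumption about the reader's toolchain, not part of the paper), NumPy's \texttt{np.diag} plays both roles of $\text{diag}(\cdot)$ and $\text{Diag}(\cdot)$ depending on whether its argument is a matrix or a vector:

```python
import numpy as np

A = np.arange(9.0).reshape(3, 3)   # a 3x3 matrix
v = np.array([1.0, 2.0, 3.0])

d = np.diag(A)    # diag(.): extracts the diagonal of a square matrix
Dv = np.diag(v)   # Diag(.): builds a diagonal matrix from a vector
```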
\vspace{-0.2em} \begin{table}[htbp] \centering \caption{List of notations.} \label{table1} \begin{tabular}{|c|c|} \hline Symbol&Representation\\ \hline $K$ ($k$)&Number of UL users (index for UL users)\\ \hline $L$ ($l$)&Number of DL users (index for DL users) \\ \hline $N_t$ ($N_r$)&Number of transmit (receive) antennas at the AP\\ \hline $M_\text{U}$ ($M_\text{D}$) &Number of antennas at the UL (DL) users \\ \hline $D_{\text{U}}$ ($D_{\text{D}}$)&Number of data flows at the UL (DL) users\\ \hline $T$ &Number of reflecting elements at the IRS \\ \hline $\mathbf{\Phi}$&Passive beamforming matrix at the IRS (long-term)\\ \hline $\mathbf{P}$ &Active beamforming matrix at UL user (short-term) \\ \hline $\mathbf{F}$&Active beamforming matrix for DL user (short-term)\\ \hline $\mathcal{H}$ ($\mathcal{H}_{ef}$) &Set of full (effective) channels \\ \hline $R_{\text{U}}$ ($R_{\text{D}}$)&Achievable rate of UL (DL) user \\ \hline $\alpha$ ($\beta$) &Weight of UL (DL) user \\ \hline $\boldsymbol{\phi}$&Diagonal elements of $\mathbf{\Phi}$ \\ \hline $\boldsymbol{\theta}$ &Phase of $\boldsymbol{\phi}$ \\ \hline \end{tabular} \end{table} \vspace{-1.6em} \section{System Model and Problem Formulation} In this section, we first introduce the IRS-assisted FD system and then mathematically formulate the optimization problem. \vspace{-1.6em} \subsection{System Model} As depicted in Fig.~\ref{fig:structure}, we consider an IRS-assisted FD system, which consists of an AP, an IRS, $K$ UL users, and $L$ DL users. The AP operates in an FD mode and it is equipped with $N_t$ transmit antennas and $N_r$ receive antennas. The $K$ UL users and the $L$ DL users operate in an HD mode and they are equipped with $M_{\text{U}, k}$ and $M_{\text{D}, l}$ antennas, respectively, where $k\in\mathcal{K}\triangleq\{1, \ldots, K\}$ and $l\in\mathcal{L}\triangleq\{1,\ldots, L\}$ denote the user indexes. The IRS is equipped with $T$ reflecting elements. 
Assuming that the IRS is deployed near the users and far away from the AP, the signals through the AP-IRS-AP link can be neglected due to the high path loss~\cite{IRSFD4}. Then, the received data vector at the AP $\mathbf{y}_\text{U}\in\mathbb{C}^{N_r\times 1}$ is given by \begin{equation} \mathbf{y}_\text{U}=\sum^{K}_{k=1}\mathbf{H}_{\text{U},k}\mathbf{P}_{k}\mathbf{b}_{k}+\sum^{K}_{k=1}\mathbf{V}_{\text{U}}\mathbf{\Phi}\mathbf{G}_{\text{U},k}\mathbf{P}_{k}\mathbf{b}_{k}+\sum^{L}_{l=1}\mathbf{\tilde{H}}\mathbf{F}_{l}\mathbf{s}_{l}+\mathbf{n}_\text{U}, \end{equation} where $\mathbf{b}_{k}\in\mathbb{C}^{D_{\text{U},k}\times 1}$ ($D_{\text{U},k}\leq M_{\text{U},k}$) denotes the transmit data vector of the $k$-th UL user, $\mathbf{P}_{k}\in\mathbb{C}^{M_{\text{U},k}\times D_{\text{U},k}}$ is the beamforming matrix of the $k$-th UL user, and $\mathbf{H}_{\text{U},k}\in\mathbb{C}^{N_r\times M_{\text{U},k}}$ denotes the channel matrix between the $k$-th UL user and the AP. $\mathbf{\Phi}\in\mathbb{C}^{T\times T}$ denotes the passive beamforming matrix at the IRS, which is diagonal since the passive reflecting elements perform no signal coupling or joint processing, $\mathbf{G}_{\text{U},k}\in\mathbb{C}^{T\times M_{\text{U},k}}$ denotes the channel matrix between the $k$-th UL user and the IRS, and $\mathbf{V}_{\text{U}}\in\mathbb{C}^{N_r\times T}$ denotes the channel matrix between the IRS and the AP. Note that $\mathbf{s}_{l}\in\mathbb{C}^{D_{\text{D}, l}\times 1}$ ($D_{\text{D}, l}\leq M_{\text{D}, l}$) denotes the data vector for the $l$-th DL user, $\mathbf{F}_{l}\in\mathbb{C}^{N_t\times D_{\text{D},l}}$ denotes the beamforming matrix at the AP for serving the $l$-th DL user, and $\mathbf{\tilde{H}}\in\mathbb{C}^{N_r\times N_t}$ denotes the residual SI channel matrix at the AP\:\!\footnote{Since the CSI of the SI link can be obtained at the AP, based on certain interference cancellation techniques~\cite{FDintro3,FDintro6}, we assume that the SI at the AP can be greatly eliminated.}.
$\mathbf{n}_\text{U}\in\mathbb{C}^{N_r\times 1}$ denotes the complex circular Gaussian noise vector at the AP with zero mean and variance $\sigma^2_\text{U}$. \begin{figure}[!t] \centering \scalebox{0.50}{\includegraphics{system_v3-eps-converted-to}} \caption{ IRS-assisted full-duplex system.}\label{fig:structure} \end{figure} The received data vector at the $l$-th DL user $\mathbf{y}_{\text{D},l}\in\mathbb{C}^{M_{\text{D},l}\times 1}$ is given by \begin{equation} \begin{split} \mathbf{y}_{\text{D},l}=&\sum^{L}_{l=1}\mathbf{H}_{\text{D},l}\mathbf{F}_{l}\mathbf{s}_{l}+\sum^{L}_{l=1}\mathbf{G}_{\text{D},l}\mathbf{\Phi}\mathbf{V}_\text{D}\mathbf{F}_{l}\mathbf{s}_{l}\\ &+\underbrace{\sum^{K}_{k=1}\mathbf{J}_{k,l}\mathbf{P}_{k}\mathbf{b}_{k}+\sum^{K}_{k=1}\mathbf{G}_{\text{D},l}\mathbf{\Phi}\mathbf{G}_{\text{U},k}\mathbf{P}_{k}\mathbf{b}_{k}}_{interference\,\, from\,\, the\,\, UL\,\, users}+\mathbf{n}_{\text{D},l}, \end{split} \end{equation} where $\mathbf{H}_{\text{D},l}\in\mathbb{C}^{M_{\text{D},l}\times N_t}$ is the channel matrix between the AP and the $l$-th DL user, $\mathbf{V}_{\text{D}}\in\mathbb{C}^{T\times N_t}$ denotes the channel matrix between the AP and the IRS, $\mathbf{G}_{\text{D},l}\in\mathbb{C}^{M_{\text{D},l}\times T}$ is the channel matrix between the IRS and the $l$-th DL user, $\mathbf{J}_{k,l}\in\mathbb{C}^{M_{\text{D},l}\times M_{\text{U},k}}$ denotes the channel matrix between the $k$-th UL user and the $l$-th DL user, and $\mathbf{n}_{\text{D},l}\in\mathbb{C}^{M_{\text{D},l}\times 1}$ denotes the complex circular Gaussian noise vector at the $l$-th DL user with zero mean and variance $\sigma^2_{\text{D},l}$. 
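As a sanity check on the signal model, the following minimal NumPy sketch (illustrative dimensions and Rayleigh placeholder channels, not the paper's simulation setup) assembles the UL received vector above term by term:

```python
import numpy as np

rng = np.random.default_rng(0)
cg = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)

# Illustrative dimensions (assumed, not the paper's simulation setup).
K, L = 2, 2            # UL / DL users
Nt, Nr, T = 4, 4, 8    # AP transmit/receive antennas, IRS elements
MU, MD, DU, DD = 2, 2, 2, 2

# Placeholder Rayleigh channels, IRS phases, beamformers, and data.
H_U = [cg(Nr, MU) for _ in range(K)]   # UL user -> AP (direct)
G_U = [cg(T, MU) for _ in range(K)]    # UL user -> IRS
V_U = cg(Nr, T)                        # IRS -> AP
H_si = 0.1 * cg(Nr, Nt)                # residual SI channel
Phi = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, T)))
P = [cg(MU, DU) for _ in range(K)]
F = [cg(Nt, DD) for _ in range(L)]
b = [cg(DU) for _ in range(K)]
s = [cg(DD) for _ in range(L)]
n_U = 0.01 * cg(Nr)

# UL received vector: direct paths + IRS-reflected paths + residual SI + noise.
y_U = (sum(H_U[k] @ P[k] @ b[k] for k in range(K))
       + sum(V_U @ Phi @ G_U[k] @ P[k] @ b[k] for k in range(K))
       + sum(H_si @ F[l] @ s[l] for l in range(L)) + n_U)
```

The DL received vector follows the same pattern with the AP-side and IRS-side channels swapped accordingly.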
The transmission rate for user $k$ in the uplink is given by \vspace{-0.4em} \begin{equation} \vspace{-0.5em} \begin{split} &\mathcal{R}_{\text{U},k}\triangleq \log \det \bigg( \mathbf{I}+\mathbf{\bar{H}}_{\text{U},k}\mathbf{P}_{k}\mathbf{P}^H_{k}\mathbf{\bar{H}}^H_{\text{U},k}\times\\ &(\sum^{L}_{l=1}\mathbf{\tilde{H}}\mathbf{F}_{l}\mathbf{F}^H_{l}\mathbf{\tilde{H}}^H+\sum^{K}_{k^{'}\neq k} \bar{\mathbf{H}}_{\text{U},k^{'}}\mathbf{P}_{k^{'}}\mathbf{P}^H_{k^{'}}\bar{\mathbf{H}}^H_{\text{U},k^{'}} +\sigma^2_\text{U}\mathbf{I})^{-1} \bigg), \end{split} \end{equation} where $\mathbf{\bar{H}}_{\text{U}, k}\triangleq \mathbf{H}_{\text{U},k}+\mathbf{V}_\text{U}\mathbf{\Phi}\mathbf{G}_{\text{U},k}$. The transmission rate for user $l$ in the downlink is given by \vspace{-0.4em} \begin{equation} \vspace{-0.4em} \begin{split} &\mathcal{R}_{\text{D},l}\triangleq\log \det \bigg( \mathbf{I}+\mathbf{\bar{H}}_{\text{D},l}\mathbf{F}_{l}\mathbf{F}^H_{l}\mathbf{\bar{H}}^H_{\text{D},l}\times \\ &(\sum^{K}_{k=1}\mathbf{\bar{J}}_{k,l}\mathbf{P}_{k}\mathbf{P}^H_{k}\mathbf{\bar{J}}^H_{k,l}+\sum^{L}_{l^{'}\neq l}\mathbf{\bar{H}}_{\text{D},l}\mathbf{F}_{l^{'}}\mathbf{F}^H_{l^{'}}\mathbf{\bar{H}}^H_{\text{D},l}+\sigma^2_{\text{D},l}\mathbf{I})^{-1} \bigg), \end{split} \end{equation} where $\mathbf{\bar{H}}_{\text{D},l}\triangleq \mathbf{H}_{\text{D},l}+\mathbf{G}_{\text{D},l}\mathbf{\Phi}\mathbf{V}_\text{D}$ and $\mathbf{\bar{J}}_{k,l}\triangleq \mathbf{J}_{k,l}+\mathbf{G}_{\text{D},l}\mathbf{\Phi}\mathbf{G}_{\text{U},k}$. \vspace{-0.2em} \subsection{Mixed-Timescale Protocol} \begin{figure*}[!t] \centering \includegraphics[width=0.76\linewidth,height=0.25\linewidth]{Timescale1-eps-converted-to} \caption{ Proposed mixed-timescale beamforming scheme.}\label{fig:timescale} \vspace{-0.6em} \end{figure*} In practice, the acquisition of the real-time IRS-related high-dimensional CSI matrix is very challenging due to the large number of reflecting elements and the passive architecture of
the IRS, while estimating the low-dimensional effective channels $\mathcal{H}_{ef} \triangleq \{\bar{\mathbf{H}}_{\text{U},k},\bar{\mathbf{H}}_{\text{D},l},\bar{\mathbf{J}}_{k,l},\tilde{\mathbf{H}}\}$ for a given IRS passive beamforming matrix is much easier. Based on this observation, we propose a mixed-timescale transmission protocol. Specifically, we focus on a sufficiently large time block during which the channel statistics are constant, as shown in Fig. \ref{fig:timescale}. In the first stage, the AP estimates a small number of high-dimensional full CSI samples $\{\mathcal{H}(n)\}_{n=\{1,...,N_{s}\}}$ (possibly outdated) using standard IRS-related channel estimation methods~\cite{IRS_CM1,IRS_CM2,IRS_CM3}, where $N_{s}$ denotes the number of collected full CSI samples, and $\mathcal{H} \triangleq \{\mathbf{H}_{\text{U},k},\mathbf{H}_{\text{D},l},\mathbf{G}_{\text{U},k},\mathbf{G}_{\text{D},l},\mathbf{V}_{\text{U}},\mathbf{V}_{\text{D}},\mathbf{J}_{k,l},\tilde{\mathbf{H}}\}$ denotes the set of all CSI matrices. Then, the AP designs the long-term passive beamforming matrix $\mathbf{\Phi}$ based on these collected full CSI samples and sends it to the IRS. In the second stage, the long-term passive beamforming matrix at the IRS is fixed. In each time slot (channel coherence time) $i$, the AP obtains the low-dimensional effective channels $\mathcal{H}_{ef}(i)$ via conventional MIMO channel estimation methods and designs the short-term active beamforming matrices $\mathbf{P}_{k}$ and $\mathbf{F}_{l}$ accordingly. Then, the beamforming matrices $\mathbf{P}_{k}$ are sent to the UL users. As we can see, the proposed mixed-timescale scheme avoids estimating the high-dimensional IRS-related CSI matrix in each time slot. By contrast, the conventional single-timescale algorithm requires a tremendous amount of full CSI samples in each coherence time block, which incurs a huge overhead.
\vspace{-0.5em} \subsection{Problem Formulation} In this work, we aim at jointly designing the short-term active beamforming matrices $\mathbf{P}_{k}$ and $\mathbf{F}_{l}$, and the long-term IRS passive beamforming matrix $\mathbf{\Phi}$ in order to maximize the average weighted sum-rate over a coherence time block. Hence, the problem can be formulated as follows \vspace{-1.8em} \begin{subequations} \label{eq:problem1} \begin{align}\vspace{-1.6em} (\mathcal{P}1): \quad \max_{\boldsymbol{\phi}} \mathbb{E}_{\mathcal{H}}\{\max_{\{\mathbf{P}_{k},\mathbf{F}_{l}\}} & \sum^{K}_{k=1}\alpha_{k} \mathcal{R}_{\text{U},k}+ \sum^{L}_{l=1}\beta_{l} \mathcal{R}_{\text{D},l}\} \label{objective}\\ \mbox{s.t.}\quad &\|\mathbf{P}_{k}\|^2\leq P_{\text{U},k}, \, \forall k, \label{transmitpower}\\ &\sum^{L}_{l=1}\|\mathbf{F}_{l}\|^2\leq P_{AP},\label{transmitpower2}\\ & |\boldsymbol{\phi}(n)|=1, \forall n, \label{constantmodulus} \end{align} \end{subequations} where $\boldsymbol{\phi} \triangleq \textrm{diag}(\mathbf{\Phi}) \in\mathbb{C}^{T\times 1} $ denotes the long-term passive beamforming vector. $P_{\text{U},k}$ and $P_{AP}$ denote the limited transmit power budgets of the UL users and the AP, respectively, and \eqref{constantmodulus} denotes the constant modulus constraint imposed on the elements of the IRS passive beamforming vector. \vspace{-1.0em} \section{Mixed-Timescale Beamforming Algorithm} In this section, we propose the mixed-timescale beamforming algorithm for solving $\mathcal{P}1$. Firstly, by fixing the long-term passive beamforming matrix at the IRS, we optimize the short-term active beamforming matrices at the AP and UL users, where a BCD-type algorithm is proposed to tackle this problem. Then, we develop an efficient SSCA-based algorithm for designing the long-term passive beamforming matrix at the IRS. 
\vspace{-1.0em} \subsection{Short-Term Active Beamforming Design} With a fixed long-term IRS passive beamforming matrix, the optimization problem of the short-term active beamforming design is given by \begin{subequations} \label{eq:short_problem} \begin{align} (\mathcal{P}2): \quad \max_{\{\mathbf{P}_{k},\mathbf{F}_{l}\} }\quad & \sum^{K}_{k=1}\alpha_{k} \mathcal{R}_{\text{U},k}+ \sum^{L}_{l=1}\beta_{l} \mathcal{R}_{\text{D},l} \label{shortobjective}\\ \mbox{s.t.}\quad &\|\mathbf{P}_{k}\|^2\leq P_{\text{U},k}, \, \forall k, \label{shorttransmitpower}\\ &\sum^{L}_{l=1}\|\mathbf{F}_{l}\|^2\leq P_{AP}.\label{shorttransmitpower2} \end{align} \end{subequations} The objective function of $\mathcal{P}2$ is difficult to handle due to its highly nonlinear form and the coupled optimization variables. Hence, we first transform $\mathcal{P}2$ into an equivalent but more tractable form via the celebrated WMMSE method~\cite{WMMSE}. Specifically, we introduce auxiliary variables $\mathbf{W}_{\text{U},k} \in \mathbb{C}^{D_{\text{U},k}\times D_{\text{U},k}}$, $\mathbf{W}_{\text{D},l} \in \mathbb{C}^{D_{\text{D},l}\times D_{\text{D},l}}$, $\mathbf{U}_{\text{U},k} \in \mathbb{C}^{N_r\times D_{\text{U},k}}$, and $\mathbf{U}_{\text{D},l} \in \mathbb{C}^{M_{\text{D},l}\times D_{\text{D},l}}$, and the converted problem can be expressed as \vspace{-0.2em} \begin{subequations} \vspace{-0.2em} \begin{align} (\mathcal{P}3): \min_{\Omega}\quad &\sum_{k=1}^{K} \alpha_k\left(\text{Tr}\left(\mathbf{W}_{\text{U},k}\mathbf{E}_{\text{U},k}\right)-\log \det\left(\mathbf{W}_{\text{U},k}\right)\right) \nonumber\\ +\sum_{l=1}^{L} &\beta_l\left(\text{Tr}\left(\mathbf{W}_{\text{D},l}\mathbf{E}_{\text{D},l}\right)-\log \det\left(\mathbf{W}_{\text{D},l}\right)\right) \\ \mbox{s.t.}\quad& \eqref{transmitpower}, \eqref{transmitpower2}, \end{align} \end{subequations} where $\Omega \triangleq
\{\mathbf{P}_k,\mathbf{F}_l,\mathbf{U}_{\text{U},k},\mathbf{U}_{\text{D},l},\mathbf{W}_{\text{U},k},\mathbf{W}_{\text{D},l}\}$ is the set of optimization variables, and \vspace{-0.3em} \begin{equation} \vspace{-0.3em} \begin{split} &\mathbf{E}_{\text{U},k} \triangleq (\mathbf{U}_{\text{U},k}^{\rm H}\bar{\mathbf{H}}_{\text{U},k}\mathbf{P}_k-\mathbf{I})(\mathbf{U}_{\text{U},k}^{\rm H}\bar{\mathbf{H}}_{\text{U},k}\mathbf{P}_k-\mathbf{I})^{\rm H} + \mathbf{U}_{\text{U},k}^{\rm H}\times\\ &\left(\sum_{k^{'}\neq k}^{K}\bar{\mathbf{H}}_{\text{U},k^{'}}\mathbf{P}_{k^{'}}\mathbf{P}_{k^{'}}^{\rm H}\bar{\mathbf{H}}_{\text{U},k^{'}}^{\rm H}+\sum_{l=1}^{L}\tilde{\mathbf{H}}\mathbf{F}_l\mathbf{F}_l^{\rm H}\tilde{\mathbf{H}}^{\rm H}+\sigma_{\text{U}}^2\mathbf{I}\right)\mathbf{U}_{\text{U},k}, \end{split} \end{equation} \begin{equation} \begin{split} &\mathbf{E}_{\text{D},l} \triangleq (\mathbf{U}_{\text{D},l}^{\rm H}\bar{\mathbf{H}}_{\text{D},l}\mathbf{F}_l-\mathbf{I})(\mathbf{U}_{\text{D},l}^{\rm H}\bar{\mathbf{H}}_{\text{D},l}\mathbf{F}_l-\mathbf{I})^{\rm H} + \mathbf{U}_{\text{D},l}^{\rm H}\times \\ &\left(\sum_{l^{'}\neq l}^{L}\bar{\mathbf{H}}_{\text{D},l}\mathbf{F}_{l^{'}}\mathbf{F}_{l^{'}}^{\rm H}\bar{\mathbf{H}}_{\text{D},l}^{\rm H}+\sum_{k=1}^{K}\bar{\mathbf{J}}_{k,l}\mathbf{P}_k\mathbf{P}_k^{\rm H}\bar{\mathbf{J}}_{k,l}^{\rm H}+\sigma_{\text{D},l}^2\mathbf{I}\right)\mathbf{U}_{\text{D},l}. \end{split} \end{equation} $\mathcal{P}2$ and $\mathcal{P}3$ are equivalent since they share the same global optimal solution \cite{WMMSE}. Then, we develop a BCD-type algorithm for solving $\mathcal{P}3$. Specifically, the set of optimization variables $\Omega$ is divided into four blocks, i.e., $\{\mathbf{U}_{\text{U},k},\mathbf{U}_{\text{D},l}\}$, $\{\mathbf{W}_{\text{U},k},\mathbf{W}_{\text{D},l}\}$, $\{\mathbf{P}_k\}$, and $\{\mathbf{F}_l\}$. Each block is optimized in turn with the other blocks of variables fixed.
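A minimal special case makes this BCD sweep concrete. The sketch below (single UL user, DL users and cross interference set aside, illustrative dimensions, and the power-constraint Lagrange multiplier found by bisection; all of these are simplifying assumptions for illustration, not the paper's full algorithm) runs the MMSE-receiver, weight, and precoder updates and exhibits the monotone rate improvement guaranteed by the WMMSE framework:

```python
import numpy as np

rng = np.random.default_rng(1)
cg = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)

Nr, M, D, sigma2, Pmax = 4, 2, 2, 1.0, 1.0   # assumed dimensions
H = cg(Nr, M)                                 # placeholder effective UL channel
P = cg(M, D)
P *= np.sqrt(Pmax) / np.linalg.norm(P)        # feasible starting point

def rate(P):
    S = H @ P @ P.conj().T @ H.conj().T / sigma2
    return np.linalg.slogdet(np.eye(Nr) + S)[1]

def precoder(lam, U, W):
    # Regularized precoder update; lam is the power-constraint multiplier.
    A = H.conj().T @ U @ W @ U.conj().T @ H
    return np.linalg.inv(A + lam * np.eye(M)) @ H.conj().T @ U @ W

rates = [rate(P)]
for _ in range(30):
    # (i) MMSE receiver U; (ii) weight W = E^{-1} (MMSE form of E).
    A = H @ P @ P.conj().T @ H.conj().T + sigma2 * np.eye(Nr)
    U = np.linalg.inv(A) @ H @ P
    W = np.linalg.inv(np.eye(D) - U.conj().T @ H @ P)
    # (iii) precoder P, with the multiplier found by bisection.
    if np.linalg.norm(precoder(0.0, U, W)) ** 2 <= Pmax:
        lam = 0.0
    else:
        lo, hi = 0.0, 1.0
        while np.linalg.norm(precoder(hi, U, W)) ** 2 > Pmax:
            hi *= 2
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if np.linalg.norm(precoder(mid, U, W)) ** 2 > Pmax:
                lo = mid
            else:
                hi = mid
        lam = hi
    P = precoder(lam, U, W)
    rates.append(rate(P))
```

With the MMSE receiver, $\log\det(\mathbf{W})$ equals the user rate, which is why each full sweep cannot decrease the objective.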
The proposed BCD-type algorithm for optimizing the short-term active beamforming matrices is summarized in Algorithm \ref{BCDtype}. The details on solving the subproblems w.r.t. each block of variables are given in Appendix A. \begin{algorithm}[t]\caption{Proposed BCD-type short-term active beamforming design algorithm.} \label{BCDtype} \begin{algorithmic}[1] \footnotesize \begin{small} \STATE Initialize the beamforming matrices $\mathbf{P}_k$ and $\mathbf{F}_l$ with feasible values. Set the maximum iteration number $I_{max}$ and the threshold value $\delta$. \REPEAT \STATE Update $\mathbf{U}_{{\rm U},k} = \mathbf{A}_{\text{U},k}^{-1} \bar{\mathbf{H}}_{\text{U},k}\mathbf{P}_k$ and $\mathbf{U}_{{\rm D},l} = \mathbf{A}_{\text{D},l}^{-1} \bar{\mathbf{H}}_{\text{D},l}\mathbf{F}_l$, where $\mathbf{A}_{\text{U},k}$ and $\mathbf{A}_{\text{D},l}$ are defined in~\eqref{AU} and~\eqref{AD}, respectively. \STATE Update $\mathbf{W}_{\text{U},k} = \mathbf{E}_{\text{U},k}^{-1}$ and $\mathbf{W}_{\text{D},l} = \mathbf{E}_{\text{D},l}^{-1}$. \STATE Update $ \mathbf{P}_k = \alpha_k(\mathbf{A}_{\text{P},k}+\lambda_k\mathbf{I})^{-1} \bar{\mathbf{H}}_{\text{U},k}^{\rm H}\mathbf{U}_{\text{U},k}\mathbf{W}_{\text{U},k}$, where $\mathbf{A}_{\text{P},k}$ is defined in~\eqref{APk} and $\lambda_k$ is the Lagrange multiplier. \STATE Update $\mathbf{F}_l = \beta_l(\mathbf{A}_{\text{F}}+\mu \mathbf{I})^{-1}\bar{\mathbf{H}}_{\text{D},l}^{\rm H}\mathbf{U}_{\text{D},l}\mathbf{W}_{\text{D},l}$, where $\mathbf{A}_{\text{F}}$ is defined in~\eqref{AF} and $\mu$ is the Lagrange multiplier. \UNTIL the maximum iteration number is reached or the difference between successive objective function values is less than $\delta$. \end{small} \end{algorithmic} \end{algorithm} \vspace{-0.4em} \subsection{Long-Term IRS Passive Beamforming Design} In this subsection, we introduce the proposed long-term IRS passive beamforming design algorithm.
With the optimized short-term variables, the stochastic optimization problem w.r.t. the passive beamforming vector can be formulated as \begin{equation} (\mathcal{P}4)\,\,\min_{\boldsymbol{\theta}} \quad f (\bm{\theta}, \{\mathbf{P}_k^*,\mathbf{F}_{l}^*\} ) = \mathbb{E}_{\mathcal{H}}\{g(\bm{\theta},\{\mathbf{P}_k^*,\mathbf{F}_{l}^*\};\mathcal{H})\}, \label{long-term-object} \vspace{-1.5mm} \end{equation} where $\boldsymbol{\theta} \triangleq \angle{\boldsymbol{\phi}}$, $\{\mathbf{P}_k^*,\mathbf{F}_l^*\}$ is the optimal solution of the proposed short-term active beamforming algorithm and $g(\bm{\theta},\{\mathbf{P}_k^*,\mathbf{F}_{l}^*\};\mathcal{H})$ denotes the sum-rate given by \vspace{-0.2em} \begin{equation} \vspace{-0.2em} \begin{split} g(\bm{\theta},\{\mathbf{P}_k^*,\mathbf{F}_{l}^*\};\mathcal{H}) \triangleq &\sum^{K}_{k=1}\alpha_{k} \mathcal{R}_{\text{U},k}(\bm{\theta},\{\mathbf{P}_k^*,\mathbf{F}_{l}^*\};\mathcal{H})\\ &+\sum^{L}_{l=1}\beta_{l}\mathcal{R}_{\text{D},l}(\bm{\theta},\{\mathbf{P}_k^*,\mathbf{F}_{l}^*\};\mathcal{H}). \end{split} \label{gfunction} \end{equation} Note that $\mathcal{P}4$ is hard to solve directly since the objective function is highly non-convex and it is difficult to obtain a closed-form expression via computing the expectation over $g(\bm{\theta},\{\mathbf{P}_k^*,\mathbf{F}_{l}^*\};\mathcal{H})$. Hence, by leveraging the stochastic optimization framework in~\cite{cssca}, we approximate \eqref{long-term-object} by using a quadratic surrogate function. 
Specifically, at the $t$-th iteration of the proposed algorithm, $B$ channel samples, where $B$ is the batch size, denoted as $\{\mathcal{H}^t(m)\}_{m=\{1,...,B\}}$, are randomly selected from the collection of high-dimensional CSI samples $\{\mathcal{H}(n) \}_{n=\{1,...,N_s\}}$ and the surrogate function is updated as \vspace{-0.2em} \begin{equation} \vspace{-0.2em} \begin{split} \bar{f}^t(\boldsymbol{\theta}) = (\mathbf{f}^t)^T(\boldsymbol{\theta}-\boldsymbol{\theta}^t)+\varpi\|\boldsymbol{\theta}-\boldsymbol{\theta}^t\|^2, \label{surrogate function} \end{split} \end{equation} where $\boldsymbol{\theta}^t$ denotes the current value of $\boldsymbol{\theta}$ and $\varpi>0$ is a constant. Note that $\mathbf{f}^t$ denotes the approximation of the partial derivatives $\frac{\partial f}{\partial \boldsymbol{\theta}}$, which is updated as \vspace{-0.2em} \begin{equation} \vspace{-0.2em} \mathbf{f}^t = (1-\varrho^{t})\mathbf{f}^{t-1}+\varrho^t\sum_{m=1}^{B}\frac{\partial g(\boldsymbol{\theta},\{\mathbf{P}_k^*,\mathbf{F}_{l}^*\};\mathcal{H}^t(m)) }{\partial \boldsymbol{\theta}}\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^{t}}, \label{surrogate_gradient} \end{equation} where $\{\varrho^t\}$ is a sequence to be properly chosen and the expression of $\frac{\partial g(\boldsymbol{\theta},\{\mathbf{P}_k^*,\mathbf{F}_{l}^*\};\mathcal{H}^t(m))}{\partial \boldsymbol{\theta}}$ is omitted here for brevity. Subsequently, we aim to solve the approximated problem at the $t$-th iteration, which is given by \vspace{-0.2em} \begin{equation} \vspace{-0.2em} \min_{\boldsymbol{\theta}} \quad \bar{f}^t(\boldsymbol{\theta}). \end{equation} This is a convex quadratic problem and the optimal solution can be readily derived as \vspace{-0.2em} \begin{equation} \vspace{-0.2em} \bar{\boldsymbol{\theta}}^t = \boldsymbol{\theta}^t-\frac{\mathbf{f}^t}{2\varpi}.
\label{solution_to_surrogate} \end{equation} Then, the long-term variable is updated as \vspace{-0.2em} \begin{equation} \vspace{-0.2em} \boldsymbol{\theta}^{t+1}=(1-\gamma^t)\boldsymbol{\theta}^{t} + \gamma^t\bar{\boldsymbol{\theta}}^t, \label{theta_update} \end{equation} where $\{\gamma^t\}$ denotes a sequence of parameters and the convergence can be guaranteed if we choose $\varrho^t$ and $\gamma^t$ following the conditions: $\lim_{t\rightarrow \infty} \varrho^t = 0, \sum_{t} \varrho^t = \infty, \sum_{t} (\varrho^t)^2 < \infty, \lim_{t\rightarrow \infty} \gamma^t = 0, \sum_{t} \gamma^t = \infty, \sum_{t} (\gamma^t)^2 < \infty$, and $\lim_{t\rightarrow \infty} \frac{\gamma^t}{\varrho^t} = 0$~\cite{cssca,CRAN}. The proposed long-term passive beamforming design algorithm is summarized in \textbf{Algorithm 2}. \vspace{-0.2em} \begin{remark} The proposed SSCA-based algorithm can be guaranteed to converge to a stationary solution of $\mathcal{P}4$~\cite{cssca}. Moreover, combined with the convergence property of the BCD-type short-term active beamforming algorithm \cite{CRAN}, the proposed overall mixed-timescale joint active and passive beamforming algorithm converges to a stationary point of $\mathcal{P}1$. \end{remark} \vspace{-0.4em} For the single-timescale algorithm, the number of required CSI signaling bits in a coherence time block is given by $Q_s = qT_s(N_\text{U}KM_\text{U}+N_\text{D}LM_\text{D}+KLM_\text{U}M_\text{D}+T(N_\text{U}+N_\text{D}+2KM_\text{U}+2LM_\text{D}-3))$~\cite{IRS_CM2}, where $q$ is the number of quantization bits for each element of the CSI matrices and $T_s$ denotes the number of time slots in a coherence time block, while that of the mixed-timescale scheme is given by $Q_m = qT_s(N_\text{U}KM_\text{U}+N_\text{D}LM_\text{D}+KLM_\text{U}M_\text{D}) + qA_sT(N_\text{U}+N_\text{D}+2KM_\text{U}+2LM_\text{D}-3)$.
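The recursions \eqref{surrogate_gradient}--\eqref{theta_update} and the overhead expressions $Q_s$ and $Q_m$ can be sketched and cross-checked numerically. In the snippet below (an illustrative sketch, not the paper's setup), a toy placeholder gradient stands in for the batch-averaged sum-rate derivative, and the step-size sequences are chosen to satisfy the conditions stated above:

```python
import numpy as np

rng = np.random.default_rng(3)

# --- SSCA recursions for the long-term phase vector (illustrative sketch) ---
T_irs, varpi = 8, 1.0                  # IRS size and surrogate curvature (assumed)
theta = rng.uniform(0, 2 * np.pi, T_irs)
f_hat = np.zeros(T_irs)

def sample_grad(theta):
    # Placeholder for the batch gradient of the objective w.r.t. theta;
    # in the paper it is computed from B sampled full-CSI realizations.
    return np.cos(theta) + 0.1 * rng.standard_normal(T_irs)

for t in range(1, 301):
    rho, gamma = t ** -0.6, t ** -0.9     # satisfy the step-size conditions
    f_hat = (1 - rho) * f_hat + rho * sample_grad(theta)  # gradient tracking
    theta_bar = theta - f_hat / (2 * varpi)               # surrogate minimizer
    theta = (1 - gamma) * theta + gamma * theta_bar       # convex combination

# --- CSI-overhead expressions Q_s and Q_m (parameters assumed for illustration) ---
q, Ts, As = 8, 10000, 30
K, L, NU, ND, MU, MD = 2, 2, 32, 32, 4, 4
C1 = NU * K * MU + ND * L * MD + K * L * MU * MD
C2 = NU + ND + 2 * K * MU + 2 * L * MD - 3
Q_s = lambda T: q * Ts * (C1 + T * C2)
Q_m = lambda T: q * Ts * C1 + q * As * T * C2
```

Since $T_s \gg A_s$, the ratio $Q_s/Q_m$ grows with $T$, which is the source of the overhead savings.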
Fig.~\ref{fig:CSI_overhead} compares the single-timescale scheme and the proposed mixed-timescale scheme in terms of CSI overhead, where we set $q = 8, T_s = 10000$~\cite{FDrelay2}, $A_s = 30, K = L = 2, N_\text{U} = N_\text{D} = 32, M_\text{U} = M_\text{D} = 4$. As shown in the figure, the proposed mixed-timescale algorithm can significantly reduce the CSI overhead, especially when $T$ is large. \begin{figure}[h] \centering \scalebox{0.60}{\includegraphics{CSI_overhead-eps-converted-to}} \caption{CSI overhead versus the number of reflecting elements.}\label{fig:CSI_overhead} \vspace{-0.4em} \end{figure} \begin{algorithm}[t]\caption{Proposed SSCA-based algorithm for the long-term passive beamforming design.} \begin{algorithmic}[1] \footnotesize \begin{small} \STATE Initialize $\boldsymbol{\theta}^0$ with a feasible point. Select proper sequences $\{\varrho^t\}$ and $\{\gamma^t\}$. Set an appropriate value for $\varpi$ and let $t=0$. \REPEAT \STATE Randomly select $B$ samples $\{\mathcal{H}^t(m)\}_{m=\{1,...,B\}}$ from the collection of full CSI samples. Compute the surrogate function \eqref{surrogate function} based on \eqref{surrogate_gradient}. \STATE Obtain the optimal solution $\bar{\boldsymbol{\theta}}^t$ via \eqref{solution_to_surrogate}. \STATE Update $\boldsymbol{\theta}^{t+1}$ based on \eqref{theta_update}. \STATE Update the iteration number $t=t+1$. \UNTIL the convergence condition is satisfied or the maximum number of iterations is reached.
\end{small} \end{algorithmic} \end{algorithm} \begin{figure*}[!t] \centering \scalebox{0.85}{\includegraphics{Architecture-eps-converted-to}} \caption{Architecture of the proposed deep-unfolding NN consisting of the LPBN and SABN.}\label{fig:joint_structure} \end{figure*} \begin{figure*}[!t] \centering \scalebox{0.93}{\includegraphics{DeepUnfolding1-eps-converted-to}} \caption{Structure of the SABN.}\label{fig:short_unfolding} \vspace{-0.9em} \end{figure*} \section{Deep-Unfolding Beamforming} In this section, we introduce the proposed deep-unfolding NN that unfolds the SSCA-based mixed-timescale beamforming algorithm. \vspace{-0.6em} \subsection{Architecture of Deep-Unfolding NN} The framework of our proposed deep-unfolding NN is shown in Fig. \ref{fig:joint_structure}. It consists of an LPBN and an SABN, which correspond to the long-term passive beamforming algorithm and the short-term active beamforming algorithm, respectively. \subsubsection{Forward Propagation} In the forward propagation stage, the full CSI samples, $\mathcal{H}$, are first input into the LPBN, which then outputs the effective CSI samples, $\mathcal{H}_{ef}$. Note that we set the phase of the IRS passive beamforming vector $\boldsymbol{\theta}$ as the learnable parameter of the LPBN and the operation $e^{j(\cdot)}$ ensures that the unit-modulus constraint is satisfied.
Moreover, the function that computes the effective channels is given by \vspace{-0.1em} \begin{equation} \vspace{-0.1em} \begin{split} \mathcal{H}_{ef} &= \Pi(\mathcal{H},e^{j\boldsymbol{\theta}})\triangleq\{\mathbf{\bar{H}}_{\text{U}, k}= \mathbf{H}_{\text{U},k}+\mathbf{V}_\text{U}\mathbf{\Phi}\mathbf{G}_{\text{U},k},\mathbf{\bar{H}}_{\text{D},l}= \\& \mathbf{H}_{\text{D},l}+\mathbf{G}_{\text{D},l}\mathbf{\Phi}\mathbf{V}_\text{D}, \mathbf{\bar{J}}_{k,l}= \mathbf{J}_{k,l}+\mathbf{G}_{\text{D,l}}\mathbf{\Phi}\mathbf{G}_{\text{U},k},\tilde{\mathbf{H}} = \tilde{\mathbf{H}}\}, \end{split} \end{equation} where $\mathbf{\Phi} = \textrm{Diag}(e^{j\boldsymbol{\theta}})$. Then, the effective CSI samples pass through the SABN that outputs the active beamforming matrices $\mathbf{P}_k$ and $\mathbf{F}_l$. The detailed structure of the SABN will be introduced in Section \ref{ArchiSABN}. Denote $\mathcal{F}(\cdot)$ as the whole forward propagation stage of our proposed deep-unfolding NN, that is, \begin{equation} \{\mathbf{P}_k,\mathbf{F}_l\} = \mathcal{F}(\{\boldsymbol{\theta},\mathbf{\Psi}\};\mathcal{H}), \end{equation} where $\boldsymbol{\theta}$ and $\mathbf{\Psi}$ are the learnable parameters of the LPBN and SABN, respectively, and $\{\mathbf{P}_k,\mathbf{F}_l\}$ are the output active beamforming matrices of the deep-unfolding NN. \subsubsection{Loss Function} Since we aim to maximize the weighted sum-rate of the system, the loss function of the deep-unfolding NN can be expressed as \begin{equation} \mathcal{L}(\{\boldsymbol{\theta},\mathbf{\Psi}\};\mathcal{H}) \triangleq g(\boldsymbol{\theta},\mathcal{F}(\{\boldsymbol{\theta},\mathbf{\Psi}\};\mathcal{H});\mathcal{H}), \label{loss_function} \end{equation} where $g(\cdot)$ is the sum-rate function defined in \eqref{gfunction}. 
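The LPBN forward mapping $\Pi(\mathcal{H},e^{j\boldsymbol{\theta}})$ amounts to a few matrix products; a minimal sketch with placeholder channels and assumed dimensions (one UL and one DL user for brevity):

```python
import numpy as np

rng = np.random.default_rng(2)
cg = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)

Nr, Nt, T, MU, MD = 4, 4, 8, 2, 2          # assumed dimensions
theta = rng.uniform(0, 2 * np.pi, T)       # learnable parameter of the LPBN
Phi = np.diag(np.exp(1j * theta))          # unit modulus by construction

# Placeholder full-CSI matrices for one UL and one DL user.
H_U, V_U, G_U = cg(Nr, MU), cg(Nr, T), cg(T, MU)
H_D, G_D, V_D = cg(MD, Nt), cg(MD, T), cg(T, Nt)

# Forward pass of the LPBN: full CSI -> low-dimensional effective CSI.
Hbar_U = H_U + V_U @ Phi @ G_U
Hbar_D = H_D + G_D @ Phi @ V_D
```

Because $\boldsymbol{\theta}$ enters only through $e^{j\boldsymbol{\theta}}$, gradients w.r.t. $\boldsymbol{\theta}$ flow through this map during back propagation while the constraint stays satisfied for free.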
\subsubsection{Back Propagation} In the back propagation stage, the gradients of the learnable parameters, $\frac{\partial \mathcal{L}(\{\boldsymbol{\theta},\mathbf{\Psi}\};\mathcal{H})}{\partial \boldsymbol{\theta}}$ and $\frac{\partial \mathcal{L}(\{\boldsymbol{\theta},\mathbf{\Psi}\};\mathcal{H})}{\partial \boldsymbol{\Psi}}$, are computed based on the chain rule. \subsubsection{Update of Learnable Parameters} We update $\{\boldsymbol{\theta},\mathbf{\Psi}\}$ based on the gradients of the learnable parameters. Specifically, in the $t$-th round of the learning process, the learnable parameters are updated as \begin{equation} \boldsymbol{\theta}^{t+1} = \boldsymbol{\theta}^{t} - \eta \frac{\partial \mathcal{L}(\{\boldsymbol{\theta},\mathbf{\Psi}\};\mathcal{H}^t)}{\partial \boldsymbol{\theta}}, \end{equation} \begin{equation} \mathbf{\Psi}^{t+1} = \mathbf{\Psi}^{t} - \eta \frac{\partial \mathcal{L}(\{\boldsymbol{\theta},\mathbf{\Psi}\};\mathcal{H}^t)}{\partial \boldsymbol{\Psi}}, \end{equation} where $\eta$ denotes the learning rate. Since the LPBN is an approximation of the SSCA-based algorithm, we can also update $\boldsymbol{\theta}$ based on~\eqref{surrogate_gradient},~\eqref{solution_to_surrogate}, and~\eqref{theta_update}, correspondingly. \subsection{Structure of the SABN} \label{ArchiSABN} The LPBN sets the IRS passive beamforming vector $\boldsymbol{\theta}$ as a learnable parameter and its forward propagation computes the effective channels $\mathcal{H}_{ef}$. In this subsection, we introduce the detailed structure of the SABN, which unfolds Algorithm \ref{BCDtype} into a layer-wise structure. We first define a novel element-wise non-linear operation, denoted by $\mathbf{A}^{\dagger}$, which takes the reciprocal of each diagonal element of matrix $\mathbf{A}$ while setting the off-diagonal elements to $0$.
We take a $3\times 3$ matrix as an example, \begin{equation} \mathbf{A} = \left[ \begin{array}{ccc} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33}\\ \end{array} \right], \quad \mathbf{A}^{\dagger} = \left[ \begin{array}{ccc} \frac{1}{a_{11}} & 0 & 0\\ 0 & \frac{1}{a_{22}} & 0\\ 0 & 0 & \frac{1}{a_{33}} \\ \end{array} \right]. \end{equation} Note that $\mathbf{A}^{-1}=\mathbf{A}^{\dagger}$ when $\mathbf{A}$ is a diagonal matrix. We observe that in the proposed BCD-type algorithm, the diagonal elements of the involved matrices are much larger than the off-diagonal elements. Hence, $\mathbf{A}^{\dagger}$ is a good approximation of $\mathbf{A}^{-1}$. Since computing the matrix inversion $\mathbf{A}^{-1}$ requires high computational complexity, we approximate it by combining the following two lower-complexity architectures: (i) $\mathbf{A}^{\dagger}\mathbf{X}$ with the element-wise non-linear operation $\mathbf{A}^{\dagger}$ and learnable parameter $\mathbf{X}$; (ii) recalling the first-order Taylor expansion of $\mathbf{A}^{-1}$ around $\mathbf{A}_0$, i.e., $\mathbf{A}^{-1}\approx2\mathbf{A}_{0}^{-1}-\mathbf{A}_{0}^{-1}\mathbf{A}\mathbf{A}_{0}^{-1}$, we apply $\mathbf{A}\mathbf{Y} + \mathbf{Z}$ with learnable parameters $\mathbf{Y}$ and $\mathbf{Z}$. Note that the learnable parameters $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$ are introduced to improve the performance of the deep-unfolding NN. Thus, we apply $\mathbf{A}^{\dagger}\mathbf{X}+\mathbf{A}\mathbf{Y}+\mathbf{Z}$ to approximate the matrix inversion $\mathbf{A}^{-1}$.
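A minimal numerical sketch of the $\mathbf{A}^{\dagger}$ operation and of the approximation $\mathbf{A}^{\dagger}\mathbf{X}+\mathbf{A}\mathbf{Y}+\mathbf{Z}$; the hand-picked $\mathbf{X}=\mathbf{I}$, $\mathbf{Y}=\mathbf{Z}=\mathbf{0}$ stand in for trained parameters and are purely illustrative of the diagonally dominant case.

```python
import numpy as np

def dagger(A):
    """A-dagger: reciprocal of the diagonal of A, zeros elsewhere.
    Equals A^{-1} exactly when A is diagonal."""
    return np.diag(1.0 / np.diag(A))

def approx_inverse(A, X, Y, Z):
    """Learnable approximation A^{-1} ~ A_dagger X + A Y + Z used in the SABN.
    X, Y, Z play the role of trained parameters."""
    return dagger(A) @ X + A @ Y + Z

# a strongly diagonally dominant matrix: A-dagger is already close to A^{-1}
A = np.array([[10.0, 0.1], [0.2, 8.0]])
I = np.eye(2)
approx = approx_inverse(A, X=I, Y=np.zeros((2, 2)), Z=np.zeros((2, 2)))
err = np.linalg.norm(approx @ A - I)   # small residual for dominant diagonals
```

In training, $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$ would be learned per layer rather than fixed as above.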
Note that $\mathbf{\Xi}^m \triangleq \{ \mathbf{X}_{{\rm U},k}^{u,m}, \mathbf{Y}_{{\rm U},k}^{u,m}, \mathbf{Z}_{{\rm U},k}^{u,m} \}\cup\{ \mathbf{X}_{{\rm D},l}^{u,m}, \mathbf{Y}_{{\rm D},l}^{u,m}, \mathbf{Z}_{{\rm D},l}^{u,m} \}\cup\{ \mathbf{X}_{{\rm U},k}^{w,m}, \mathbf{Y}_{{\rm U},k}^{w,m},\mathbf{Z}_{{\rm U},k}^{w,m} \}\cup\{ \mathbf{X}_{{\rm D},l}^{w,m}, \mathbf{Y}_{{\rm D},l}^{w,m}, \mathbf{Z}_{{\rm D},l}^{w,m} \}\cup\{ \mathbf{X}_{k}^{p,m}, \mathbf{Y}_{k}^{p,m}, \mathbf{Z}_{k}^{p,m} \}\cup\{ \mathbf{X}_{l}^{f,m}, \mathbf{Y}_{l}^{f,m}, \mathbf{G}_{\text{D},l}^{f,m} \}$\footnote{Note that here the subscripts $\text{U}$ and $\text{D}$ correspond to the uplink and downlink, respectively. Subscripts $k$ and $l$ represent the $k$-th UL user and the $l$-th DL user, respectively. Superscripts $u$, $w$, $p$, and $f$ denote the corresponding unfolding matrices and superscript $m$ denotes the layer index.} is the set of learnable parameters introduced to approximate the inversions of the matrix variables $\mathbf{U}_{{\rm U},k}^{m}$, $\mathbf{U}_{{\rm D},l}^{m}$, $\mathbf{W}_{{\rm U},k}^{m}$, $\mathbf{W}_{{\rm D},l}^{m}$, $\mathbf{P}_{k}^{m}$, and $\mathbf{F}_{l}^{m}$ in the $m$-th layer, respectively, and $\{ \mathbf{O}_{{\rm U},k}^{u,m}, \mathbf{O}_{{\rm D},l}^{u,m}, \mathbf{O}_{k}^{p,m}, \mathbf{O}_{l}^{f,m} \}$ denotes the learnable offsets. The architecture of the SABN is designed as: \begin{figure*}[t] \begin{subequations} \label{network} \begin{eqnarray} & & \!\!\!\!\! \mathbf{U}_{{\rm U},k}^{m} = \bigg( (\mathbf{A}_{{\rm U},k}^{m-1})^{\dagger}\mathbf{X}_{{\rm U},k}^{u,m} + \mathbf{A}_{{\rm U},k}^{m-1}\mathbf{Y}_{{\rm U},k}^{u,m} + \mathbf{Z}_{{\rm U},k}^{u,m} \bigg) \bar{\mathbf{H}}_{{\rm U},k}\mathbf{P}_{k}^{m-1} + \mathbf{O}_{{\rm U},k}^{u,m}, \label{UU} \\ & & \!\!\!\!\!
\mathbf{U}_{{\rm D},l}^{m} = \bigg( (\mathbf{A}_{{\rm D},l}^{m-1})^{\dagger}\mathbf{X}_{{\rm D},l}^{u,m} + \mathbf{A}_{{\rm D},l}^{m-1}\mathbf{Y}_{{\rm D},l}^{u,m} + \mathbf{Z}_{{\rm D},l}^{u,m} \bigg) \bar{\mathbf{H}}_{{\rm D},l}\mathbf{F}_{l}^{m-1} + \mathbf{O}_{{\rm D},l}^{u,m}, \label{UD} \\ & & \!\!\!\!\! \mathbf{W}_{{\rm U},k}^{m} = (\mathbf{E}_{{\rm U},k}^{m})^{\dagger}\mathbf{X}_{{\rm U},k}^{w,m} + \mathbf{E}_{{\rm U},k}^{m}\mathbf{Y}_{{\rm U},k}^{w,m} + \mathbf{Z}_{{\rm U},k}^{w,m}, \label{WU} \\ & & \!\!\!\!\! \mathbf{W}_{{\rm D},l}^{m} = (\mathbf{E}_{{\rm D},l}^{m})^{\dagger}\mathbf{X}_{{\rm D},l}^{w,m} + \mathbf{E}_{{\rm D},l}^{m}\mathbf{Y}_{{\rm D},l}^{w,m} + \mathbf{Z}_{{\rm D},l}^{w,m}, \label{WD} \\ & & \!\!\!\!\! \mathbf{P}_{k}^{m} \!\!=\!\! \alpha_k \bigg( (\mathbf{A}_{\text{P},k}^{m} \!+\! \lambda_k^m\mathbf{I})^{\dagger}\mathbf{X}_{k}^{p,m} \!+\! (\mathbf{A}_{\text{P},k}^{m} \!+\! \lambda_k^m\mathbf{I})\mathbf{Y}_{k}^{p,m} \!+\! \mathbf{Z}_{k}^{p,m} \bigg) \bar{\mathbf{H}}_{{\rm U}, k}^{\rm H}\mathbf{U}_{{\rm U},k}^{m}\mathbf{W}_{{\rm U},k}^{m} \!+\! \mathbf{O}_{k}^{p,m}, \label{PK} \\ & & \!\!\!\!\! \mathbf{F}_{l}^{m} \!\!=\!\! \beta_l \bigg( (\mathbf{A}_{\text{F},l}^{m} \!+\! \mu^m \mathbf{I})^{\dagger}\mathbf{X}_{l}^{f,m} \!+\! (\mathbf{A}_{\text{F},l}^{m} \!+\! \mu^m \mathbf{I})\mathbf{Y}_{l}^{f,m} \!+\! \mathbf{G}_{\text{D},l}^{f,m} \bigg) \bar{\mathbf{H}}_{{\rm D}, l}^{\rm H}\mathbf{U}_{{\rm D},l}^{m}\mathbf{W}_{{\rm D},l}^{m} \!+\! \mathbf{O}_{l}^{f,m}. \label{FL} \end{eqnarray} \end{subequations} \vspace{-1em} \end{figure*} The architecture of the SABN is presented in Fig.~\ref{fig:short_unfolding}, where $\mathcal{U}^{m}_{\rm U}$, $\mathcal{U}^{m}_{\rm D}$, $\mathcal{W}^{m}_{\rm U}$, $\mathcal{W}^{m}_{\rm D}$, $\mathcal{P}^{m}$, and $\mathcal{F}^{m}$ represent the layers of the deep-unfolding NN, i.e., \eqref{UU}-\eqref{FL}. In addition, the Lagrange multipliers, $\lambda_k^m$ and $\mu^m$, are also set as learnable parameters.
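For instance, the $\mathbf{W}$-update layer \eqref{WU} reduces to a few matrix products once $\mathbf{A}^{\dagger}$ is available; the random $\mathbf{E}$ and the identity-initialized learnable matrices below are illustrative assumptions, not trained values.

```python
import numpy as np

def dagger(E):
    """E-dagger: reciprocal of the diagonal of E, zeros elsewhere."""
    return np.diag(1.0 / np.diag(E))

def w_layer(E, X, Y, Z):
    """One W-update layer of the SABN: W = E_dagger X + E Y + Z,
    where X, Y, Z are the learnable parameters of this layer."""
    return dagger(E) @ X + E @ Y + Z

# illustrative 2x2 example with identity/zero-initialized learnable parameters
E = np.array([[4.0, 0.2], [0.1, 5.0]])
W = w_layer(E, X=np.eye(2), Y=np.zeros((2, 2)), Z=np.zeros((2, 2)))
```

With this initialization the layer simply returns $\mathbf{E}^{\dagger}$; training then adjusts $\mathbf{X}$, $\mathbf{Y}$, $\mathbf{Z}$ to better approximate $\mathbf{E}^{-1}$ on the channel distribution.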
Hence, the learnable parameters of the SABN can be denoted as $\mathbf{\Psi} \triangleq \bigcup_{m=1}^{I_u}\mathbf{\Xi}^m\cup\{\mathbf{O}_{{\rm U},k}^{u,m}, \mathbf{O}_{{\rm D},l}^{u,m}, \mathbf{O}_{k}^{p,m}, \mathbf{O}_{l}^{f,m},\lambda_k^m,\mu^m\}$, where $I_u$ is the number of layers. Moreover, to avoid gradient explosion and ensure that the power constraints \eqref{shorttransmitpower} and \eqref{shorttransmitpower2} are satisfied, we scale each $\mathbf{F}_{l}^{m}$ as $\frac{ \sqrt{P_{AP}} }{ \big( \sum\limits_{l} \textrm{Tr}(\mathbf{F}_{l}^{m}(\mathbf{F}_{l}^{m})^{H}) \big)^{\frac{1}{2}} }\mathbf{F}_{l}^{m}$. Note that $\mathbf{P}_{k}^{m}$ can be scaled in the same way. The output layer is a single-layer BCD iteration, as in~\cite{unfolding8}.\vspace{-0.3em} \begin{remark} Let us investigate the connection between the deep-unfolding NN and the SSCA-based algorithm. First, the SABN has a structure similar to that of the BCD-type short-term active beamforming algorithm, except that the SABN introduces learnable parameters to approximate the matrix inversion. Second, regarding the relation between the LPBN and the SSCA-based long-term passive beamforming algorithm, let us focus on the surrogate functions of these two approaches.
Specifically, the surrogate function of the SSCA-based algorithm is constructed based on the sample gradient $\frac{\partial g(\boldsymbol{\theta},\{\mathbf{P}_k^*,\mathbf{F}_{l}^*\};\mathcal{H})}{\partial \boldsymbol{\theta}}\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^{t}}$, while that of the deep-unfolding NN is constructed based on \begin{equation} \begin{split} \frac{\partial \mathcal{L}(\{\boldsymbol{\theta},\mathbf{\Psi}\};\mathcal{H})}{\partial \boldsymbol{\theta}}\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^{t}} &= \frac{\partial g(\boldsymbol{\theta},\mathcal{F}(\{\boldsymbol{\theta},\mathbf{\Psi}\};\mathcal{H});\mathcal{H})}{\partial \boldsymbol{\theta}}\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^{t}} \\ & = \frac{\partial g(\boldsymbol{\theta},\mathcal{F}(\{\boldsymbol{\theta}^t,\mathbf{\Psi}\};\mathcal{H});\mathcal{H})}{\partial \boldsymbol{\theta}}\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^{t}}\\ &+\Big(\frac{\partial g}{\partial \mathcal{F}}\Big)^T\frac{\partial \mathcal{F}(\{\boldsymbol{\theta},\mathbf{\Psi}\};\mathcal{H})}{\partial \boldsymbol{\theta}}\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^{t}}. \end{split} \label{gradient_comparsion} \end{equation} The first term in the last row of \eqref{gradient_comparsion} is the same as the sample gradient of the SSCA-based algorithm, except that the active beamforming matrices are obtained by the network $\mathcal{F}(\{\boldsymbol{\theta}^t,\mathbf{\Psi}\};\mathcal{H})$ instead of the BCD-type algorithm $\{\mathbf{P}_k^*,\mathbf{F}_{l}^*\}$. The second term, which represents the gradient of the network, only appears in the deep-unfolding NN and is not included in the SSCA-based algorithm. This is because the jointly designed structure of the proposed deep-unfolding NN ties the long-term IRS passive beamforming matrix and the short-term active beamforming matrices more tightly.
\vspace{-0.2em} \end{remark} \vspace{-0.8em} \subsection{Black-box NN for Beamforming Design} In this subsection, we propose a black-box NN for comparison. As shown in Fig. \ref{fig:blackbox}, the black-box NN also consists of two parts. The long-term passive beamforming part is the same as that of our proposed deep-unfolding NN. The short-term active beamforming part is composed of conventional black-box layers, such as convolutional layers (CLs) and fully connected layers (FCLs). Specifically, the full channel samples $\mathcal{H}$ first pass through the long-term passive beamforming part, which outputs the effective channels $\mathcal{H}_{ef}$. Then, the effective channels pass through a CL, a batch normalization (BN) layer, and a non-linear function in series, and this process is repeated several times. Subsequently, the outputs are flattened and pass through several FCLs followed by a non-linear function. In particular, we adopt the Leaky ReLU as the non-linear function, i.e., $y=\max\{x,x/a\}$, where $a>1$ is a constant. The final outputs of the black-box NN are the active beamforming matrices $\mathbf{P}_k$ and $\mathbf{F}_l$, which we scale to satisfy the power constraint. The weighted sum-rate in \eqref{loss_function} is employed as the loss function. \begin{figure*}[!t] \centering \scalebox{0.5}{\includegraphics{blackbox-eps-converted-to}} \caption{Structure of the black-box NN.}\label{fig:blackbox} \vspace{-0.8em} \end{figure*} \vspace{-0.8em} \subsection{Computational Complexity} \vspace{-0.1em} In this subsection, we analyze the computational complexity of the proposed SSCA-based optimization algorithm, the proposed deep-unfolding NN, the benchmark black-box NN, and the conventional single-timescale algorithm.
The computational complexity of the SSCA-based algorithm is dominated by the short-term BCD-type active beamforming algorithm, which is given by $\mathcal{O}\big( I_m \big( K(N_r^3 + M_U^3) + L(N_t^3 +M_D^3) + K^2N_r^2M_U + L^2N_t^2M_D \big) \big)$, where $I_m$ denotes the number of iterations. The computational complexity of the deep-unfolding NN is given by $\mathcal{O}\big( I_u \big( K(N_r^{2.37} + M_U^{2.37}) + L(N_t^{2.37} +M_D^{2.37}) + K^2N_r^2M_U + L^2N_t^2M_D \big) \big)$, where $I_u \ll I_m$ is the number of layers. Compared with the iterative BCD-type algorithm, the deep-unfolding NN has much lower complexity for two reasons: (i) the number of layers in the deep-unfolding NN is much smaller than the number of iterations of the BCD-type algorithm; (ii) the iterative BCD-type algorithm involves matrix inversions with complexity $\mathcal{O}(N^3)$, while the deep-unfolding NN only requires matrix multiplications with complexity $\mathcal{O}(N^{2.37})$. The computational complexity of the black-box NN is $\mathcal{O}\big( \sum_{l=1}^{L_{c}-1}Q_l^2S_l^2C_{l-1}C_l + C_{L_c}Q_{L_{c}}F_1 + \sum_{l=2}^{L_{f}}F_{l-1}F_{l} + F_{L_f}(KM_\text{U}D_\text{U} + LN_tD_\text{D}) \big)$, where $L_c$ is the number of CLs, $L_f$ is the number of FCLs, $S_l$ represents the size of the convolutional kernel, $C_l$ denotes the number of channels in the $l$-th CL, $Q_l$ denotes the output size of the $l$-th CL, which depends on the input size, padding number, and stride, and $F_l$ is the output size of the $l$-th FCL. Moreover, we analyze the computational complexity of the single-timescale algorithm for comparison. Specifically, the single-timescale algorithm collects real-time high-dimensional full CSI samples and optimizes the active beamforming matrices and the IRS passive beamforming matrix employing a BCD-type algorithm in each time slot. The procedure for updating the active beamforming matrices is given in Algorithm \ref{BCDtype}.
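The complexity expressions above can be compared numerically; the parameter values below mirror the simulation defaults used later ($N=32$, $K=L=2$, $M_U=M_D=4$, $I_m=100$, $I_u=8$), and the flop-count functions are only order-of-magnitude sketches of the dominant terms.

```python
def bcd_complexity(I_m, K, L, Nr, Nt, M_U, M_D):
    """Dominant cost of the iterative BCD-type algorithm
    (cubic terms come from matrix inversions)."""
    return I_m * (K * (Nr**3 + M_U**3) + L * (Nt**3 + M_D**3)
                  + K**2 * Nr**2 * M_U + L**2 * Nt**2 * M_D)

def unfolding_complexity(I_u, K, L, Nr, Nt, M_U, M_D):
    """Dominant cost of the deep-unfolding NN, where inversions are
    replaced by multiplications with exponent roughly 2.37."""
    return I_u * (K * (Nr**2.37 + M_U**2.37) + L * (Nt**2.37 + M_D**2.37)
                  + K**2 * Nr**2 * M_U + L**2 * Nt**2 * M_D)

# I_m = 100 BCD iterations vs I_u = 8 unfolding layers at N = 32
ratio = (bcd_complexity(100, 2, 2, 32, 32, 4, 4)
         / unfolding_complexity(8, 2, 2, 32, 32, 4, 4))
```

Under these settings the BCD-type algorithm is well over an order of magnitude more expensive than the unfolded network.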
Regarding the optimization of the IRS passive beamforming matrix, a one-iteration BCD algorithm is adopted~\cite{oneiterationBCD}, whose computational complexity is given by $\mathcal{O}(T^3)$. Hence, the overall computational complexity of the single-timescale algorithm is given by $\mathcal{O}\big( I_s \big( T^3 + K(N_r^3 + M_U^3) + L(N_t^3 +M_D^3) + K^2N_r^2M_U + L^2N_t^2M_D \big) \big)$. It is readily seen that our proposed mixed-timescale scheme can significantly reduce the computational complexity compared with the single-timescale algorithm since $T$ is generally large. \vspace{-0.3em} \section{Simulation Results} In this section, we present simulation results to evaluate the performance of our proposed algorithms. The simulation setting is shown in Fig.~\ref{fig:setting}. The AP is located at $(0 \,\text{m}, 0\,\text{m}, 0 \,\text{m})$ and the position of the IRS is $(0\, \text{m}, d_1 = 80\, \text{m}, 3\, \text{m})$. We consider $K = 2$ UL users and $L = 2$ DL users, located at the corners of a square centered at $(0\,\text{m}, 80\, \text{m}, 0\, \text{m})$ with a side length of $20\, \text{m}$. Both the AP and the users are equipped with uniform linear arrays (ULAs) and the IRS is equipped with a uniform planar array (UPA). The distance-dependent path loss is modeled as $L(d) = C_0(\frac{d_{link}}{D_{0}})^{-a}$, where $C_0$ is the path loss at the reference distance $D_0 =1\, \text{m}$, $d_{link}$ represents the individual link distance, and $a$ denotes the path loss exponent. As for the small-scale fading, we assume the Rician fading channel model, which is given by \begin{equation} \mathbf{H} = \sqrt{\frac{b}{1+b}}\mathbf{H}^{\text{Los}} + \sqrt{\frac{1}{1+b}}\mathbf{H}^{\text{NLos}}, \end{equation} where $b$ is the Rician factor, and $\mathbf{H}^{\text{Los}}$ and $\mathbf{H}^{\text{NLos}}$ represent the deterministic line-of-sight (LoS) and random Rayleigh fading components, respectively.
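The path-loss and Rician fading models above can be sketched as follows; the value of $C_0$, the matrix dimensions, and the all-ones LoS matrix are placeholder assumptions for illustration only.

```python
import numpy as np

def path_loss(d, C0=1e-3, D0=1.0, a=2.2):
    """Distance-dependent path loss L(d) = C0 (d / D0)^(-a)."""
    return C0 * (d / D0) ** (-a)

def rician_channel(H_los, b, rng):
    """Small-scale fading H = sqrt(b/(1+b)) H_LoS + sqrt(1/(1+b)) H_NLoS,
    with Rician factor b (linear scale) and unit-power Rayleigh NLoS part."""
    H_nlos = (rng.standard_normal(H_los.shape)
              + 1j * rng.standard_normal(H_los.shape)) / np.sqrt(2)
    return np.sqrt(b / (1 + b)) * H_los + np.sqrt(1 / (1 + b)) * H_nlos

rng = np.random.default_rng(2)
H_los = np.ones((4, 4), dtype=complex)             # toy deterministic LoS part
H = rician_channel(H_los, b=10 ** (3 / 10), rng=rng)  # Rician factor b = 3 dB
```

As $b\to\infty$ the channel degenerates to the pure LoS component, and $b=0$ gives pure Rayleigh fading.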
In particular, we let $a_{AI}$, $a_{Au}$, $a_{Iu}$, and $a_{uu}$ denote the path loss exponents of the AP-IRS link, AP-user link, IRS-user link, and user-user link, respectively, and let $b_{AI}$, $b_{Au}$, $b_{Iu}$, and $b_{uu}$ represent the Rician factors of these links, respectively. The residual SI channel matrix $\tilde{\mathbf{H}}$ is generated based on the model described in \cite{SIchannel}, and the average power of the SI channel is denoted by $\sigma_{SI}^2$. The system parameters are set as follows unless otherwise stated: $N_t = N_r = N =32$, $M_{\text{U},k} = M_{\text{D},l} = 4, D_{\text{U},k} = D_{\text{D},l} = 4, \forall k,l$, $T = 200$, $\sigma_{\text{U}}^2 = \sigma_{\text{D},l}^2 = -76\,\text{dBm}, \forall l$, $\sigma_{SI}^2 = -60\,\text{dB}$, $P_{\text{U},k} = 24\,\text{dBm}, \forall k$, $P_{AP} = 44\,\text{dBm}$, $\alpha_{k} = \beta_{l} = 1, \forall k,l$, $a_{AI} = 2.4$, $a_{Au} = 3.8$, $a_{Iu} = 2.2$, $a_{uu} = 3.0$, $b_{AI} = b_{Iu} = 3\,\text{dB}$, $b_{Au} = -3\,\text{dB}$, and $b_{uu} = 0\,\text{dB}$. For the algorithm parameters, we set $I_{max} = 100$, $\delta = 10^{-4}$, $\varrho^t = \frac{10}{(10+t)^{0.6}}$, $\gamma^t = \frac{15}{15+t}$, and $\varpi = 0.5$. The default number of layers of the proposed deep-unfolding NN is $8$ and the learning rate is chosen as $0.001$. The black-box NN consists of $3$ CLs and $5$ FCLs, and we adopt the Adam optimizer with the same learning rate of $0.001$. Moreover, the batch sizes of the SSCA-based algorithm, the deep-unfolding NN, and the black-box NN are all set to $5$. All the experiments are conducted on a desktop Intel CPU (i5-8400 with 6 cores) with 8\,GB RAM. The benchmarks are provided as follows: \begin{figure}[!t] \centering \scalebox{0.66}{\includegraphics{setup80-eps-converted-to}} \caption{Simulation setup.}\label{fig:setting} \vspace{-1em} \end{figure} \begin{itemize} \item SSCA: The proposed SSCA-based mixed-timescale joint active and passive beamforming algorithm. \item Deep-unfolding NN: The proposed deep-unfolding NN introduced in Section IV.
\item Black-box NN: The benchmark black-box NN. \item Full CSI: The single-timescale algorithm that collects high-dimensional full CSI in each time slot, optimizes the active beamforming matrices using Algorithm \ref{BCDtype}, and optimizes the IRS passive beamforming matrix using the one-iteration BCD algorithm. \item No IRS: The conventional scheme without IRS, which directly employs Algorithm \ref{BCDtype} to optimize the active beamforming matrices. \item Random IRS: In this algorithm, the IRS passive beamforming matrix is randomly generated and Algorithm \ref{BCDtype} is adopted to optimize the active beamforming matrices. \item HD: The conventional HD scheme, where the WMMSE algorithm is adopted for optimizing the active beamforming matrices and the IRS passive beamforming matrix is randomly generated. \end{itemize} \begin{figure}[t] \centering \subfloat[]{\label{fig:short-term_convergence}{\includegraphics[width=0.25\textwidth]{BCD_convergence_a-eps-converted-to}}} \subfloat[]{\label{fig:long-term_convergence}{\includegraphics[width=0.25\textwidth]{convergence_b-eps-converted-to}}} \caption{(a) Convergence performance of the BCD-type short-term active beamforming algorithm. (b) Convergence performance of the SSCA-based long-term passive beamforming algorithm.} \label{fig:fig3} \vspace{-0.4em} \end{figure} \begin{figure}[!t] \centering \scalebox{0.63}{\includegraphics{convergence_unfolding-eps-converted-to}} \caption{Convergence performance of the proposed deep-unfolding NN.}\label{fig:unfolding_convergence} \vspace{-0.8em} \end{figure} Fig.~\ref{fig:fig3}(\subref*{fig:short-term_convergence}) shows the convergence behavior of the proposed short-term active beamforming algorithm. It is observed that the BCD-type algorithm converges monotonically within 100 iterations. Fig.~\ref{fig:fig3}(\subref*{fig:long-term_convergence}) shows the value of the objective function versus the number of iterations for the SSCA-based long-term passive beamforming algorithm.
From this figure, we observe that the weighted sum-rate converges within around 100 iterations. Moreover, we also observe some fluctuations of the objective function, which are due to the randomness of the sampled channels. Fig.~\ref{fig:unfolding_convergence} presents the impact of the learning rate on the convergence performance of the proposed deep-unfolding NN. As we can see, the sum-rate performance improves as the learning rate decreases from $0.1$ to $0.001$. When further decreasing the learning rate to $0.0001$, there is no evident performance improvement but the convergence speed slows down significantly. Hence, we choose $0.001$ as the learning rate in our experiments. \begin{table*}[htbp] \centering \caption{Weighted sum-rate performance versus the number of collected/training samples.} \begin{tabular}{ccccccccc} \hline Collected samples&5&10&15&20&25&30&35&40 \\ \hline SSCA&$88.79\%$&$93.73\%$&$97.02\%$&$98.32\%$&$99.57\%$&$99.92\%$&$100\%$&$100\%$ \\ \hline \end{tabular} \vspace{0.4mm} \begin{tabular}{ccccccccc} \hline Training samples &100&200&300&400&500&600&700&800\\ \hline Deep-unfolding NN&$90.02\%$&$94.13\%$&$96.04\%$&$97.53\%$&$97.68\%$&$97.70\%$&$97.71\%$&$97.71\%$ \\ \hline \end{tabular} \vspace{0.4mm} \begin{tabular}{ccccccccc} \hline Training samples&500&1000&1500&2000&2500&3000&3500&4000 \\ \hline Black-box NN&$75.03\%$&$79.59\%$&$83.20\%$&$85.52\%$&$86.27\%$&$86.42\%$&$86.62\%$&$86.64\%$ \\ \hline \label{table:sample} \end{tabular} \vspace{-0.4em} \end{table*} \begin{table*}[htbp] \centering \caption{Weighted sum-rate performance versus the number of layers.} \begin{tabular}{c|ccccccccc} \hline Layers&2&3&4&5&6&7&8&9&10 \\ \hline SSCA&$78.02\%$&$80.01\%$&$83.08\%$&$85.41\%$&$86.55\%$&$87.34\%$&$88.46\%$&$89.35\%$&$90.21\%$ \\ \hline Deep-unfolding NN&$90.76\%$&$92.56\%$&$95.70\%$&$96.31\%$&$97.57\%$&$97.69\%$&$97.72\%$&$97.71\%$&$97.72\%$ \\ \hline Black-box NN&$86.09\%$&$86.23\%$&$86.50\%$&$86.83\%$&$86.18\%$&$86.81\%$&$86.21\%$&$86.64\%$&$86.17\%$ \\
\hline \end{tabular} \vspace{0.1em} \label{table:layer} \end{table*} Table~\ref{table:sample} shows the weighted sum-rate performance versus the number of collected/training samples. Note that the results are all normalized by a reference value, which is the weighted sum-rate of the SSCA-based algorithm that collects $100$ samples for optimizing $\boldsymbol{\theta}$. As we can see, the SSCA-based algorithm is very efficient and only about 30 samples are sufficient for it to learn the channel statistics. We also observe that the black-box NN requires the most channel samples for training, while the proposed deep-unfolding NN needs far fewer training samples since it fully exploits the structure of our proposed SSCA-based mixed-timescale beamforming algorithm\;\!\footnote{Note that the numbers of training samples required for the deep-unfolding NN and the black-box NN correspond to the offline training stage. For the case of online deployment, since the channel statistics vary continuously between adjacent time blocks, transfer learning and meta learning can be used to significantly reduce the required training samples and time in each coherence time block.}. Table~\ref{table:layer} presents the weighted sum-rate performance versus the number of layers. Similarly, the results are all normalized by a reference value, which is the weighted sum-rate of the SSCA-based algorithm with 100 layers. Note that the numbers of layers of the SSCA-based algorithm and the black-box NN refer to the maximum iteration number of the BCD-type short-term active beamforming algorithm $I_{max}$ and the number of FCLs (each layer with $1000$ neurons), respectively. We observe that the performance of the SSCA-based algorithm monotonically increases with $I_{max}$, and when $I_{max}=10$, it achieves $90.21\%$ of its converged performance. We can also see that the performance of the deep-unfolding NN increases with the number of layers $I_{u}$ when it is small.
When $I_{u}$ is greater than $7$, the performance fluctuates. Hence, $I_{u}$ can be set to $7$ or $8$, which strikes a good balance between performance and computational complexity. For the black-box NN, increasing the number of layers does not significantly improve the performance. \begin{table*}[t] \centering \caption{Weighted sum-rate performance versus different numbers of AP antennas ($N$).} \begin{tabular}{c|cccccc} \hline $N$&8&16&32&64&128&256 \\ \hline SSCA (bits/s/Hz)&$26.49$&$30.70$&$34.73$&$39.19$&$43.89$&$48.65$ \\ \hline Deep-unfolding NN&$98.94\%$&$98.32\%$&$97.71\%$&$96.97\%$&$96.13\%$&$95.08\%$ \\ \hline Black-box NN&$90.95\%$&$88.65\%$&$86.01\%$&$83.19\%$&$80.42\%$&$77.94\%$ \\ \hline \end{tabular} \label{table:antenna} \vspace{0.1em} \end{table*} \begin{table*}[htbp] \centering \caption{The CPU running time of the analyzed schemes.} \begin{tabular}{c|cc|cccc} \hline \multirow{2}*{(N,T)} &\multicolumn{2}{c|}{CPU training time (min)}& \multicolumn{4}{c}{CPU testing time (s)} \\ ~ & deep-unfolding & black-box&SSCA & deep-unfolding & black-box&full CSI \\ \hline (8,100) &10.80&35.77&2.82&0.052&0.016&18.62 \\ \hline (16,100) &11.53&38.85&2.90&0.054&0.017&18.71 \\ \hline (32,100) &15.89&41.87&3.07&0.056&0.019&18.89 \\ \hline (64,100) &35.75&57.00&3.65&0.058&0.021&19.47 \\ \hline (128,100) &52.37&126.54&6.21&0.071&0.027&22.03 \\ \hline (256,100) &102.15&267.31&21.45&0.15&0.071&37.33 \\ \hline (256,200) &119.14&289.63&21.45&0.15&0.071&66.26 \\ \hline (256,300) &137.46&312.34&21.45&0.15&0.071&250.58 \\ \hline (256,400) &159.23&343.77&21.45&0.15&0.071&1085.18 \\ \hline \end{tabular} \label{table:time} \end{table*} Table~\ref{table:antenna} shows the weighted sum-rate performance versus the number of antennas at the AP ($N$). The sum-rate performance of the deep-unfolding NN and the black-box NN is normalized by the corresponding sum-rate of the SSCA-based algorithm.
When $N$ is small, the deep-unfolding NN achieves performance very close to that of the SSCA-based algorithm. It suffers from a slight performance degradation as $N$ increases, but it still achieves more than $95\%$ of the performance of the SSCA-based algorithm and is significantly better than the black-box NN. Table~\ref{table:time} compares the CPU training time and the testing time of different schemes when the number of antennas at the AP ($N$) and the number of reflecting elements at the IRS ($T$) change. We observe that the training time of the deep-unfolding NN is less than that of the black-box NN since it fully exploits the structure of the proposed SSCA-based mixed-timescale beamforming design algorithm. In terms of the testing time, the proposed deep-unfolding NN and the black-box NN provide a significant advantage over the SSCA-based algorithm and the full CSI scheme, which demonstrates the efficiency of the learning-based approaches. Moreover, the testing time of the full CSI scheme increases dramatically with $T$ while the testing time of the other mixed-timescale algorithms remains the same. This validates that the mixed-timescale beamforming scheme is much more suitable for practical design. \begin{figure}[!t] \centering \scalebox{0.63}{\includegraphics{performance_csidelay-eps-converted-to}} \caption{The weighted sum-rate performance versus the CSI delay $\tau$. }\label{fig:performance_csidelay} \vspace{-0.3em} \end{figure} Then, we investigate the impact of the CSI delay $\tau$ on different schemes. We adopt the delay model in \cite{CSI_delay_model} and assume that the CSI delay is proportional to the number of CSI signaling bits~\cite{FDrelay2}. If the CSI delay of the single-timescale algorithm is given by $\tau$, then that of our proposed mixed-timescale algorithm can be computed as $\tau_{m} = \frac{Q_m}{Q_s}\tau$.
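The delay scaling $\tau_{m} = \frac{Q_m}{Q_s}\tau$ can be sketched directly; the bit counts $Q_m$ and $Q_s$ below are hypothetical values chosen only to illustrate the proportionality.

```python
def mixed_timescale_delay(tau, Q_m, Q_s):
    """CSI delay of the mixed-timescale scheme, tau_m = (Q_m / Q_s) * tau,
    assuming the delay is proportional to the number of CSI signaling bits.
    Q_m, Q_s: signaling bits of the mixed- and single-timescale schemes."""
    return (Q_m / Q_s) * tau

tau = 1.0                                               # single-timescale delay in ms (hypothetical)
tau_m = mixed_timescale_delay(tau, Q_m=200, Q_s=2000)   # ten times fewer bits -> 0.1 ms
```

Since the mixed-timescale scheme signals far fewer bits per slot, its effective CSI delay is proportionally smaller.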
Fig.~\ref{fig:performance_csidelay} shows the weighted sum-rate performance of different schemes versus the CSI delay $\tau$. As we can see, the proposed mixed-timescale algorithms are insensitive to the CSI delay, while the single-timescale scheme relying on the full CSI suffers from severe performance degradation. When the CSI delay is greater than $0.6$ ms, the proposed SSCA-based mixed-timescale beamforming algorithm and deep-unfolding NN outperform the single-timescale algorithm. Moreover, compared with the conventional optimization-based algorithm, the deep-unfolding NN and black-box NN are even more robust to the CSI delay. This is because the NN-based algorithms can learn CSI errors from the data and alleviate the performance deterioration. \begin{figure}[!t] \centering \scalebox{0.63}{\includegraphics{performance_element-eps-converted-to}} \caption{The weighted sum-rate performance versus the number of reflecting elements $T$. }\label{fig:performance_element} \vspace{-0.3em} \end{figure} \begin{figure}[!t] \centering \scalebox{0.63}{\includegraphics{performance_power-eps-converted-to}} \caption{The weighted sum-rate performance versus different transmit power $P$.}\label{fig:performance_power} \vspace{-0.3em} \end{figure} Fig.~\ref{fig:performance_element} presents the weighted sum-rate of different schemes versus the number of reflecting elements of the IRS. As we can see, the proposed deep-unfolding NN approaches the SSCA-based optimization algorithm and significantly outperforms the other schemes. It is also observed that the weighted sum-rate performance achieved by the SSCA-based algorithm, the deep-unfolding NN, and the black-box NN increases more rapidly with $T$ compared with the random IRS scheme. This is due to the fact that the joint active and passive beamforming design provides a remarkable gain. The full CSI scheme suffers from severe performance degradation due to CSI mismatches and is only comparable with the random IRS scheme.
Moreover, the FD scheme achieves a substantial gain over the HD scheme, and the gain increases with $T$. Furthermore, the scheme without IRS provides the worst performance among all the analyzed algorithms, which demonstrates that the IRS can tremendously enhance the spectral efficiency of conventional FD systems. Fig.~\ref{fig:performance_power} illustrates the performance under different transmit power $P$. Note that we set the power budget of the UL users as $P_{\text{U},k} = P,\forall k$ and that of the AP as $P_{AP} = P+20$ dB. As we can see, the performance of different schemes increases almost linearly with the transmit power (except for the full CSI scheme, whose performance is deteriorated by the CSI error). We also observe that the proposed SSCA-based algorithm and deep-unfolding NN both achieve better performance compared with the other schemes under different values of transmit power, which validates the effectiveness of our proposed design. \begin{figure}[!t] \centering \scalebox{0.63}{\includegraphics{quantization-eps-converted-to}} \caption{The weighted sum-rate performance versus different numbers of quantization bits.}\label{fig:quantization} \vspace{-0.0em} \end{figure} Fig.~\ref{fig:quantization} shows the sum-rate performance versus different numbers of quantization bits of the IRS. From the figure, the proposed SSCA-based algorithm, the proposed deep-unfolding NN, and the black-box NN are not sensitive to the number of quantization bits and can achieve near-optimal performance with only a few quantization bits. The quantization of the IRS phase shifters has little effect on the HD scheme and the random IRS scheme because their IRS phase shifters are random. It is also observed that the full CSI scheme is the most sensitive to the number of quantization bits; its sum-rate performance is very poor when there are few quantization bits.
This is because in the single-timescale scheme, $\boldsymbol{\theta}$, $\mathbf{P}_{k}$, and $\mathbf{F}_{l}$ are optimized alternately based on the full CSI. Thus, the optimality of $\mathbf{P}_{k}$ and $\mathbf{F}_{l}$ relies on $\boldsymbol{\theta}$ having infinite precision. When $\boldsymbol{\theta}$ is quantized, the derived $\mathbf{P}_{k}$ and $\mathbf{F}_{l}$ are no longer optimal. In comparison, in the mixed-timescale scheme, $\mathbf{P}_{k}$ and $\mathbf{F}_{l}$ are optimized based on the effective CSI, which consists of the full CSI and the quantized $\boldsymbol{\theta}$. The optimality of $\mathbf{P}_{k}$ and $\mathbf{F}_{l}$ thus holds for the quantized $\boldsymbol{\theta}$. Therefore, the proposed mixed-timescale scheme is more robust to the quantization error of the IRS phase shifters. \begin{figure}[!t] \centering \scalebox{0.63}{\includegraphics{SI_antenna8-eps-converted-to}} \caption{The weighted sum-rate performance versus the self-interference power $\sigma_{SI}^2$ ($N = 8$).}\label{fig:SI} \end{figure} Fig.~\ref{fig:SI} illustrates the sum-rate performance under different levels of self-interference power when $N=8$. From the figure, the overall sum-rate performance of the SSCA-based algorithm does not change much as the self-interference increases. However, when $\sigma_{SI}^2$ is larger than $-40$ dB, the gap between the UL and DL users becomes larger. When the self-interference increases, the overall sum-rate performance of the proposed deep-unfolding NN decreases slightly, but the deep-unfolding NN still achieves a relatively balanced sum-rate performance between the UL and DL users. Therefore, the proposed deep-unfolding NN can provide a better quality of service (QoS) for the UL users, especially when the self-interference is strong. As for the black-box NN, the overall sum-rate performance suffers from a severe deterioration when $\sigma_{SI}^2$ becomes larger.
\begin{table*}[htbp] \setlength\tabcolsep{2.8pt} \centering \caption{Weighted sum-rate performance versus random locations of users.} \begin{tabular}{c|cccccc} \hline $R_0\, \text{(m)}$&0&2&4&6&8&10 \\ \hline SSCA&$34.73\, (100\%)$&$34.45\,(99.16\%)$&$34.14\,(98.26\%)$&$33.92\,(97.62\%)$&$33.73\,(97.09\%)$&$33.64\,(96.81\%)$ \\ \hline Deep-unfolding NN&$33.94\,(100\%)$&$33.81\,(99.61\%)$&$33.68\,(99.23\%)$&$33.57\,(98.89\%)$&$33.50\,(98.69\%)$&$33.44\,(98.51\%)$ \\ \hline Black-box NN&$29.87\,(100\%)$&$29.84\,(99.89\%)$&$29.80 \,(99.76\%)$&$29.76\,(99.63\%)$&$29.65\,(99.27\%)$&$29.51\,(98.77 \%)$ \\ \hline \end{tabular} \label{table:location} \end{table*} Table~\ref{table:location} shows the weighted sum-rate performance when the locations of the users are random. Specifically, the users are randomly located in circles centered at their original positions with radius $R_0$. Note that the percentages in the brackets denote the sum-rate values normalized by the first column. From the table, the weighted sum-rate of all schemes decreases with $R_0$. The weighted sum-rate of the SSCA-based algorithm declines the fastest: when $R_0 = 10$ m, it achieves $96.81\%$ of its fixed-location performance. The deep-unfolding NN and the black-box NN have more learnable parameters and can adapt well to the randomness of the channels. When $R_0 = 10$ m, they achieve $98.51\%$ and $98.77\%$ of their fixed-location performance, respectively. \begin{figure}[!t] \centering \scalebox{0.63}{\includegraphics{channel_error-eps-converted-to}} \caption{The weighted sum-rate performance versus the CSI error variance $\sigma_{CE}^2$.}\label{fig:channel_error} \end{figure} Fig.~\ref{fig:channel_error} presents the achievable weighted sum-rate performance versus the channel estimation error.
Specifically, the estimated channel is modeled as $\bar{\mathbf{H}} = \mathbf{H}+\Delta \mathbf{H}$, where $\mathbf{H}$ is the true channel matrix and $\Delta \mathbf{H}$ is the channel error matrix. We assume that the elements of $\Delta \mathbf{H}$ are independent and follow the Gaussian distribution with zero mean and variance $p_{H}\sigma_{CE}^2$, where $p_{H}$ denotes the average power of the elements in $\mathbf{H}$ and $\sigma_{CE}^2$ indicates the strength of the channel estimation error. From Fig.~\ref{fig:channel_error}, the performance of all schemes degrades with the channel estimation error. Moreover, the weighted sum-rate performance achieved by the conventional optimization-based algorithms deteriorates severely as $\sigma_{CE}^2$ increases, while the learning-based algorithms are much more robust. This is because the learning-based algorithms can learn the channel estimation errors from the training samples and alleviate the performance degradation. As we can see, the proposed deep-unfolding NN starts to outperform the SSCA-based algorithm when $\sigma_{CE}^2$ is larger than $-30$ dB, which further demonstrates the benefits of the proposed deep-unfolding design. \section{Conclusion} In this paper, we have investigated a MIMO IRS-assisted FD system and formulated a mixed-timescale beamforming design problem to cut down the heavy CSI overhead. To tackle this highly non-convex optimization problem, an efficient mixed-timescale SSCA-based optimization algorithm has been developed. Moreover, to further reduce the computational complexity of the SSCA-based algorithm, we have developed a novel deep-unfolding beamforming algorithm. The deep-unfolding NN consists of an LPBN and an SABN, which maintains the structure of the SSCA-based algorithm but introduces a novel non-linear activation function and some learnable parameters induced by the first-order Taylor expansion to approximate the matrix inversion.
It also ties the long-term passive beamforming matrix and the short-term active beamforming matrices together more tightly than the SSCA-based optimization algorithm does. Simulation results verified that the proposed deep-unfolding NN achieves the performance of the SSCA-based optimization algorithm with significantly reduced complexity. \vspace{-0.0em} \begin{appendices} \section{Solutions to the Subproblems of the BCD-Type Short-Term Active Beamforming Design} \subsubsection{Subproblem w.r.t. $\mathbf{U}_{{\rm U},k},\mathbf{U}_{{\rm D},l}$} The subproblem w.r.t. $\mathbf{U}_{{\rm U},k}$ is given by \begin{equation} \min_{\mathbf{U}_{{\rm U},k}} {\rm Tr} (\mathbf{W}_{\text{U},k}\mathbf{U}_{\text{U},k}^\text{H}\mathbf{A}_{\text{U},k}\mathbf{U}_{\text{U},k}) - 2\Re e \{{\rm Tr}(\mathbf{W}_{\text{U},k}\mathbf{U}_{\text{U},k}^\text{H}\bar{\mathbf{H}}_{\text{U},k}\mathbf{P}_k)\}, \vspace{-0.1em} \end{equation} where \begin{equation} \mathbf{A}_{\text{U},k} \triangleq \left(\sum_{k^{'} = 1}^{K}\bar{\mathbf{H}}_{\text{U},k^{'}}\mathbf{P}_{k^{'}}\mathbf{P}_{k^{'}}^{\rm H}\bar{\mathbf{H}}_{\text{U},k^{'}}^{\rm H}+\sum_{l=1}^{L}\tilde{\mathbf{H}}\mathbf{F}_l\mathbf{F}_l^{\rm H}\tilde{\mathbf{H}}^{\rm H}+\sigma_{\text{U}}^2\mathbf{I}\right). \label{AU} \vspace{-0.1em} \end{equation} By applying the first-order optimality condition, the solution of $\mathbf{U}_{{\rm U},k}$ is given by\vspace{-0.1em} \begin{equation} \mathbf{U}_{{\rm U},k} = \mathbf{A}_{\text{U},k}^{-1} \bar{\mathbf{H}}_{\text{U},k}\mathbf{P}_k.
\label{UU_update} \vspace{-0.1em} \end{equation} Similarly, we obtain the solution of $\mathbf{U}_{{\rm D},l}$ as \vspace{-0.1em} \begin{equation} \mathbf{U}_{{\rm D},l} = \mathbf{A}_{\text{D},l}^{-1} \bar{\mathbf{H}}_{\text{D},l}\mathbf{F}_l, \label{UDupdate} \vspace{-0.1em} \end{equation} where \begin{equation} \mathbf{A}_{{\rm D},l} \triangleq \left(\sum_{l^{'}=1}^{L}\bar{\mathbf{H}}_{\text{D},l}\mathbf{F}_{l^{'}}\mathbf{F}_{l^{'}}^{\rm H}\bar{\mathbf{H}}_{\text{D},l}^{\rm H}+\sum_{k=1}^{K}\bar{\mathbf{J}}_{k,l}\mathbf{P}_k\mathbf{P}_k^{\rm H}\bar{\mathbf{J}}_{k,l}^{\rm H}+\sigma_{\text{D},l}^2\mathbf{I}\right). \label{AD} \vspace{-0.1em} \end{equation} \vspace{-0.1em} \subsubsection{Subproblem w.r.t. $\mathbf{W}_{\text{U},k},\mathbf{W}_{\text{D},l}$} The subproblem w.r.t. $\mathbf{W}_{\text{U},k}$ is given by \vspace{-0.1em} \begin{equation} \min_{\mathbf{W}_{\text{U},k}} \quad \text{Tr}\left(\mathbf{W}_{\text{U},k}\mathbf{E}_{\text{U},k}\right)-\log \det\left(\mathbf{W}_{\text{U},k}\right). \vspace{-0.1em} \end{equation} By checking the first-order optimality condition, we obtain the optimal solution as \vspace{-0.1em} \begin{equation} \mathbf{W}_{\text{U},k} =\mathbf{E}_{\text{U},k}^{-1}. \label{WUupdate} \vspace{-0.1em} \end{equation} Similarly, the optimal solution for $\mathbf{W}_{\text{D},l}$ can be derived as \vspace{-0.1em} \begin{equation} \mathbf{W}_{\text{D},l} =\mathbf{E}_{\text{D},l}^{-1}. \label{WDupdate} \vspace{-0.1em} \end{equation} \subsubsection{Subproblem w.r.t. $\mathbf{P}_k$} After appropriate rearrangement, we can write the subproblem w.r.t.
$\mathbf{P}_k$ as \begin{subequations} \begin{align} \min_{\mathbf{P}_k} \quad &{\rm Tr} (\mathbf{P}_k^{\rm H} \mathbf{A}_{\text{P},k} \mathbf{P}_k) - 2\alpha_k\Re e \{\text{Tr}(\mathbf{P}_k^{\rm H} \bar{\mathbf{H}}_{\text{U},k}^{\rm H}\mathbf{U}_{\text{U},k}\mathbf{W}_{\text{U},k})\}\\ \text{s.t.} \quad & {\rm Tr} (\mathbf{P}_k^{\rm H} \mathbf{P}_k) \leq P_{\text{U},k}, \end{align} \label{subproblemP} \end{subequations} where \begin{equation} \begin{split} \mathbf{A}_{\text{P},k} &\triangleq \sum_{k^{'}=1}^{K}\alpha_{k^{'}}\bar{\mathbf{H}}_{\text{U},k}^{\rm H}\mathbf{U}_{\text{U},k^{'}}\mathbf{W}_{\text{U},k^{'}}\mathbf{U}_{\text{U},k^{'}}^{\rm H}\bar{\mathbf{H}}_{\text{U},k} \\ &\qquad+ \sum_{l=1}^{L}\beta_l \bar{\mathbf{J}}_{k,l}^{\rm H}\mathbf{U}_{\text{D},l}\mathbf{W}_{\text{D},l}\mathbf{U}_{\text{D},l}^{\rm H}\bar{\mathbf{J}}_{k,l}. \end{split}\label{APk} \vspace{-0.2em} \end{equation} It is readily seen that \eqref{subproblemP} is a convex optimization problem. Therefore, by introducing Lagrange multipliers $\lambda_k \geq 0, \forall k$ and applying the Karush–Kuhn–Tucker (KKT) conditions, we can express the optimal solution for $\mathbf{P}_k$ as \vspace{-0.1em} \begin{equation} \mathbf{P}_k = \alpha_k(\mathbf{A}_{\text{P},k}+\lambda_k\mathbf{I})^{-1} \bar{\mathbf{H}}_{\text{U},k}^{\rm H}\mathbf{U}_{\text{U},k}\mathbf{W}_{\text{U},k}. \label{Pupdate} \vspace{-0.1em} \end{equation} Denote $Q(\lambda_k) = {\rm Tr} (\mathbf{P}_k^{\rm H} \mathbf{P}_k)- P_{\text{U},k}$. If $Q(0) \leq 0$, then we have $\lambda_k = 0$; otherwise, we have $\lambda_k = \lambda_k^*$, where $\lambda_k^*$ is obtained by solving the equation $Q(\lambda_k^*) = 0$ via the bisection search. \subsubsection{Subproblem w.r.t. $\mathbf{F}_l$} Similar to the problem w.r.t. $\mathbf{P}_{k}$, after appropriate rearrangement, we express the subproblem w.r.t.
$\mathbf{F}_l$ as \vspace{-0.1em} \begin{subequations} \label{PBS} \begin{align} \min_{\mathbf{F}_{l}} \quad & {\rm Tr} (\mathbf{F}_l^{\rm H} \mathbf{A}_{\text{F}} \mathbf{F}_l) - 2\beta_l\Re e \{\text{Tr}(\mathbf{F}_l^{\rm H} \bar{\mathbf{H}}_{\text{D},l}^{\rm H}\mathbf{U}_{\text{D},l}\mathbf{W}_{\text{D},l})\} \\ \text{s.t.} \quad & {\rm Tr} (\mathbf{F}_l^{\rm H} \mathbf{F}_l) \leq P_{AP}, \vspace{-0.1em} \end{align} \end{subequations} where \vspace{-0.1em} \begin{equation} \begin{split} \mathbf{A}_{\text{F}} &\triangleq \sum_{l^{'}=1}^{L}\beta_{l^{'}}\bar{\mathbf{H}}_{\text{D},l^{'}}^{\rm H}\mathbf{U}_{\text{D},l^{'}}\mathbf{W}_{\text{D},l^{'}}\mathbf{U}_{\text{D},l^{'}}^{\rm H}\bar{\mathbf{H}}_{\text{D},l^{'}} \\ &\qquad +\sum_{k=1}^{K}\alpha_k \tilde{\mathbf{H}}^{\rm H}\mathbf{U}_{\text{U},k}\mathbf{W}_{\text{U},k}\mathbf{U}_{\text{U},k}^{\rm H}\tilde{\mathbf{H}}. \end{split} \label{AF} \vspace{-0.1em} \end{equation} By introducing a Lagrange multiplier $\mu \geq 0$ to problem (\ref{PBS}) and employing the KKT conditions, we obtain the optimal $\mathbf{F}_l$ as \vspace{-0.1em} \begin{equation} \mathbf{F}_l = \beta_l(\mathbf{A}_{\text{F}}+\mu \mathbf{I})^{-1}\bar{\mathbf{H}}_{\text{D},l}^{\rm H}\mathbf{U}_{\text{D},l}\mathbf{W}_{\text{D},l}, \label{Fupdate} \vspace{-0.1em} \end{equation} where $\mu$ can be found similarly via the bisection search. \end{appendices} \section{Introduction} \label{sec:intro} Intelligent reflecting surface (IRS)~\cite{IRSintro1,IRSintro2,IRSintro3} can enhance the network throughput and has received great attention recently. Specifically, an IRS is a planar meta-surface composed of a large number of tunable reflecting elements, each of which reflects the incident signal while adjusting its phase and/or amplitude.
Based on the channel state information (CSI) and/or the channel statistics, the central controller of the IRS can collaboratively adjust its reflection coefficients such that the desired and interfering signals can be enhanced and suppressed, respectively, thus substantially improving the wireless system performance. Moreover, compared to conventional techniques, such as active relaying/beamforming, IRS does not require any transmit/receive radio frequency (RF) chains. Hence, it is also more energy- and hardware-efficient. Full-duplex (FD), another powerful wireless technique, can significantly improve the spectral efficiency (SE) \cite{FDintro1,FDintro2,FDintro3,FDintro4}. Compared with the conventional half-duplex (HD) scheme, it fully utilizes the spectrum by enabling signal transmission and reception over the same frequency at the same time, which can theoretically double the SE. However, the self-interference (SI) caused by the simultaneous downlink (DL) and uplink (UL) transmission is a challenging issue in FD systems. Fortunately, there have been key advances to address this issue~\cite{FDintro4,FDintro5,FDintro6}, such as passive suppression and analog and digital cancellation. Therefore, the FD technique has many applications in wireless communications, such as bidirectional communications \cite{FDbidirectional}, relays \cite{FDrelay1,FDrelay2}, and multi-user systems \cite{FDmultiuser1,FDmultiuser2}. \vspace{-1.0em} \subsection{Prior Works} Lately, there have been a number of applications of IRS in various wireless communication scenarios, such as IRS-aided secure communication~\cite{IRSsecure1,IRSsecure2,IRSsecure3}, simultaneous wireless information and power transfer~\cite{IRSswipt1,IRSswipt2}, millimeter wave communication~\cite{IRSmmWave1,IRSmmWave2}, and mobile edge computing~\cite{IRSMEC1}. More recently, several works on IRS-aided FD systems have been proposed~\cite{IRSFD1,IRSFD2,IRSFD3,IRSFD4,IRSFD5,IRSFD6}.
In \cite{IRSFD1}, an IRS is used to enhance FD two-way communication systems, where the source precoders and the IRS passive beamforming matrix are jointly optimized to maximize the system capacity based on the Arimoto-Blahut algorithm. Under the same scenario, a faster-converging algorithm has been proposed in \cite{IRSFD2}. Moreover, a novel hybrid communication network that utilizes both an FD decode-and-forward relay and an IRS to enhance the data transmission rate has been investigated in \cite{IRSFD3}. In addition, the passive beamforming and deployment design of an IRS-aided cellular FD system have been investigated in \cite{IRSFD4}. To ensure user fairness, the minimum weighted rate is maximized in \cite{IRSFD5} in an IRS-aided multi-user cellular FD system. Furthermore, the resource allocation problem for an IRS-assisted FD cognitive radio system has been studied in \cite{IRSFD6}. It is worth mentioning that in the above studies, the beamforming matrices are designed based on the instantaneous CSI, which incurs high computational complexity and large signaling overhead in practice. Recently, more practical schemes have been developed by exploiting the channel statistics~\cite{IRSCDI1,IRSCDI2,IRSCDI3}. The angle domain framework in \cite{IRSCDI1} designs the beamforming matrices at the access point (AP) and IRS based on the derived effective angles, which approaches the performance achieved with full CSI. By utilizing historical channel samples, the authors of \cite{IRSCDI2} proposed two stochastic optimization algorithms to configure the IRS phase shifters.
In \cite{IRSCDI3}, a two-timescale protocol has been exploited to design the passive beamforming matrix based on the channel statistics and the active beamforming matrices based on the effective CSI. Although the aforementioned algorithms that utilize channel statistics can significantly reduce the CSI overhead, they still incur high computational complexity since complicated operations, such as matrix inversion, are involved in each iteration. To tackle this problem, machine learning-based techniques, such as deep neural networks (DNNs), have been employed for beamforming design in IRS-aided systems~\cite{blackbox1,blackbox2,blackbox3}. A DNN consists only of linear operations and simple non-linear activation functions, which can potentially meet the real-time requirement~\cite{fastbeamforming}. However, black-box NNs generally have poor interpretability and require a large number of training samples. To address these drawbacks, deep-unfolding NNs~\cite{unfoldingintro1} unfold iterative optimization algorithms into layer-wise structures and learn the key parameters. Deep-unfolding NNs take advantage of both model-driven optimization algorithms and data-driven learning-based algorithms. They are more interpretable and efficient than black-box NNs and can achieve performance comparable to that of conventional optimization algorithms with dramatically reduced computational complexity. Hence, deep-unfolding has attracted great research interest and has a wide range of applications in communications, such as signal detection~\cite{unfolding1,unfolding2,unfolding3}, resource allocation~\cite{unfolding4,unfolding5}, and precoding~\cite{unfolding6,unfolding7,unfolding8,unfolding9}. \vspace{-1.0em} \subsection{Main Contributions} Inspired by the above works, we investigate a multi-user MIMO IRS-assisted FD system in this paper, which consists of an AP, an IRS, multiple UL users and DL users, as shown in Fig. \ref{fig:structure}.
The AP operates in an FD mode and the users operate in an HD mode. We jointly optimize the active beamforming matrices at the AP and UL users and the passive beamforming matrix at the IRS to maximize the weighted sum-rate of the system. Since it is practically difficult to acquire the CSI for IRS-related links due to its passive operation and large number of elements, we conceive a mixed-timescale scheme. Specifically, the high-dimensional passive beamforming matrix at the IRS is updated based on the channel statistics, while the active beamforming matrices are optimized based on the low-dimensional real-time effective CSI at each time slot. The proposed scheme avoids estimating the high-dimensional IRS-related channels in each time slot and saves the heavy overhead required by the conventional single-timescale algorithm, thus alleviating the performance degradation caused by CSI delay. However, the mixed-timescale scheme brings new challenges to the algorithm design since the objective function becomes stochastic and the long-term and short-term variables are highly coupled. To address these issues, we develop an efficient stochastic successive convex approximation (SSCA)-based optimization algorithm. More precisely, for the short-term active beamforming design, we equivalently convert the original problem into a more tractable form and propose a block coordinate descent (BCD)-type algorithm. Then, with the optimized short-term active beamforming matrices, the long-term passive beamforming matrix is designed based on SSCA~\cite{cssca}. Specifically, we construct a convex surrogate function based on the collected full CSI samples\:\!\footnote{In this work, channel statistics refer to the moments or distribution of the channel fading realizations.
By observing the collected full channel samples (possibly outdated), the proposed SSCA-based design algorithm can automatically learn the channel statistics (in an implicit way) and converge to a stationary point of the stochastic optimization problem considered in our design.} to approximate the objective function and iteratively optimize the IRS passive beamforming matrix until convergence. The proposed algorithm can be guaranteed to converge to a stationary point of the original problem. Furthermore, we design a novel deep-unfolding NN that jointly unfolds the proposed SSCA-based optimization algorithm into a layer-wise structure. The proposed deep-unfolding NN consists of a long-term passive beamforming network (LPBN) and a short-term active beamforming network (SABN). In the forward propagation stage, the collected full channel samples are first fed into the LPBN and it outputs the low-dimensional effective CSI. Note that we directly set the IRS passive beamforming matrix as the learnable parameter of the LPBN. Then, the effective CSI passes through the SABN and it outputs the active beamforming matrices. The SABN maintains the structure of the proposed BCD-type active beamforming algorithm but employs a novel non-linear activation function and some learnable parameters induced by the first-order Taylor expansion to approximate the matrix inversion, which significantly reduces the computational complexity. In the back propagation stage, the learnable parameters of the deep-unfolding NN are updated based on the stochastic gradient descent (SGD) method. The main contributions of this paper are summarized as follows. \begin{itemize} \item We study a multi-user MIMO IRS-assisted FD system, which has not been well investigated in the literature, and propose a practical mixed-timescale beamforming scheme to reduce the CSI overhead and mitigate the CSI mismatch caused by delay. 
\item To maximize the weighted average sum-rate in the IRS-assisted FD system, we propose an efficient mixed-timescale joint active and passive beamforming algorithm based on the framework of SSCA, which guarantees convergence. \item To further reduce the computational complexity, we develop a novel deep-unfolding NN that unfolds the proposed mixed-timescale SSCA-based algorithm into a layer-wise structure. The deep-unfolding NN exploits a novel non-linear activation function and some learnable parameters induced by the first-order Taylor expansion to approximate the matrix inversion. \item Simulation results show that the proposed mixed-timescale beamforming algorithm outperforms its single-timescale counterpart in the presence of CSI delay, and that the proposed deep-unfolding NN approaches the performance of the SSCA-based algorithm with much reduced computational complexity when deployed online. \end{itemize} \vspace{-1.0em} \subsection{Organization and Notations} The paper is organized as follows. Section II describes the system model and formulates the investigated problem. The proposed SSCA-based mixed-timescale beamforming optimization algorithm is presented in Section III. Then, Section IV introduces the proposed deep-unfolding NN based algorithm. Section V presents the simulation results and Section VI concludes this paper. Scalars, vectors and matrices are denoted by lower case, boldface lower case and boldface upper case letters, respectively. $\mathbf{I}$ represents an identity matrix and $\mathbf{0}$ denotes an all-zero matrix. For a matrix $\mathbf{A}$, ${{\mathbf{A}}^T}$, $\textrm{conj}(\mathbf{A})$, ${{\mathbf{A}}^H}$, and $\|\mathbf{A}\|$ denote its transpose, conjugate, conjugate transpose, and Frobenius norm, respectively.
For a square matrix $\mathbf{A}$, $\textrm{Tr} \{\mathbf{A}\}$ and $\mathbf{A}^{-1}$ denote its trace and inverse, respectively, while ${\mathbf{A}} \succeq {\mathbf{0}}~({\mathbf{A}} \preceq {\mathbf{0}})$ means that $\mathbf{A}$ is positive (negative) semi-definite. For a vector $\mathbf{a}$, $\|\mathbf{a}\|$ represents its Euclidean norm. $\Re e\{\cdot\}$ ($\Im m\{\cdot\}$) denotes the real (imaginary) part of a variable. $| \cdot |$ denotes the absolute value of a complex scalar. ${\mathbb{C}^{m \times n}}\;({\mathbb{R}^{m \times n}})$ denotes the space of ${m \times n}$ complex (real) matrices and $\angle$ represents the phase of complex vectors/matrices. $\text{diag}(\cdot)$ extracts the diagonal elements of a square matrix and $\text{Diag}(\cdot)$ constructs a diagonal matrix based on the input vector. The key notations used in this paper are summarized in Table~\ref{table1}. \vspace{-0.2em} \begin{table}[htbp] \centering \caption{List of notations.} \label{table1} \begin{tabular}{|c|c|} \hline Symbol&Representation\\ \hline $K$ ($k$)&Number of UL users (index for UL users)\\ \hline $L$ ($l$)&Number of DL users (index for DL users) \\ \hline $N_t$ ($N_r$)&Number of transmit (receive) antennas at the AP\\ \hline $M_\text{U}$ ($M_\text{D}$) &Number of antennas at the UL (DL) users \\ \hline $D_{\text{U}}$ ($D_{\text{D}}$)&Number of data flows at the UL (DL) users\\ \hline $T$ &Number of reflecting elements at the IRS \\ \hline $\mathbf{\Phi}$&Passive beamforming matrix at the IRS (long-term)\\ \hline $\mathbf{P}$ &Active beamforming matrix at UL user (short-term) \\ \hline $\mathbf{F}$&Active beamforming matrix for DL user (short-term)\\ \hline $\mathcal{H}$ ($\mathcal{H}_{ef}$) &Set of full (effective) channels \\ \hline $R_{\text{U}}$ ($R_{\text{D}}$)&Achievable rate of UL (DL) user \\ \hline $\alpha$ ($\beta$) &Weight of UL (DL) user \\ \hline $\boldsymbol{\phi}$&Diagonal elements of $\mathbf{\Phi}$ \\ \hline $\boldsymbol{\theta}$ &Phase of $\boldsymbol{\phi}$ \\
\hline \end{tabular} \end{table} \vspace{-1.6em} \section{System Model and Problem Formulation} In this section, we first introduce the IRS-assisted FD system and then mathematically formulate the optimization problem. \vspace{-1.6em} \subsection{System Model} As depicted in Fig.~\ref{fig:structure}, we consider an IRS-assisted FD system, which consists of an AP, an IRS, $K$ UL users, and $L$ DL users. The AP operates in an FD mode and it is equipped with $N_t$ transmit antennas and $N_r$ receive antennas. The $K$ UL users and the $L$ DL users operate in an HD mode and they are equipped with $M_{\text{U}, k}$ and $M_{\text{D}, l}$ antennas, respectively, where $k\in\mathcal{K}\triangleq\{1, \ldots, K\}$ and $l\in\mathcal{L}\triangleq\{1,\ldots, L\}$ denote the user indexes. The IRS is equipped with $T$ reflecting elements. Assuming that the IRS is deployed near the users and far away from the AP, the signals through the AP-IRS-AP link can be neglected due to the high path loss~\cite{IRSFD4}. Then, the received data vector at the AP $\mathbf{y}_\text{U}\in\mathbb{C}^{N_r\times 1}$ is given by \begin{equation} \mathbf{y}_\text{U}=\sum^{K}_{k=1}\mathbf{H}_{\text{U},k}\mathbf{P}_{k}\mathbf{b}_{k}+\sum^{K}_{k=1}\mathbf{V}_{\text{U}}\mathbf{\Phi}\mathbf{G}_{\text{U},k}\mathbf{P}_{k}\mathbf{b}_{k}+\sum^{L}_{l=1}\mathbf{\tilde{H}}\mathbf{F}_{l}\mathbf{s}_{l}+\mathbf{n}_\text{U}, \end{equation} where $\mathbf{b}_{k}\in\mathbb{C}^{D_{\text{U},k}\times 1}$ ($D_{\text{U},k}\leq M_{\text{U},k}$) denotes the transmit data vector of the $k$-th UL user, $\mathbf{P}_{k}\in\mathbb{C}^{M_{\text{U},k}\times D_{\text{U},k}}$ is the beamforming matrix of the $k$-th UL user, $\mathbf{H}_{\text{U},k}\in\mathbb{C}^{N_r\times M_{\text{U},k}}$ denotes the channel matrix between the $k$-th UL user and the AP.
$\mathbf{\Phi}\in\mathbb{C}^{T\times T}$ denotes the passive beamforming matrix at the IRS, which is diagonal since its passive reflecting elements perform no signal coupling or joint processing, $\mathbf{G}_{\text{U},k}\in\mathbb{C}^{T\times M_{\text{U},k}}$ denotes the channel matrix between the $k$-th UL user and the IRS, and $\mathbf{V}_{\text{U}}\in\mathbb{C}^{N_r\times T}$ denotes the channel matrix between the IRS and the AP. Note that $\mathbf{s}_{l}\in\mathbb{C}^{D_{\text{D}, l}\times 1}$ ($D_{\text{D}, l}\leq M_{\text{D}, l}$) denotes the data vector for the $l$-th DL user, $\mathbf{F}_{l}\in\mathbb{C}^{N_t\times D_{\text{D},l}}$ denotes the beamforming matrix at the AP for serving the $l$-th DL user, and $\mathbf{\tilde{H}}\in\mathbb{C}^{N_r\times N_t}$ denotes the residual SI channel matrix at the AP\:\!\footnote{Since the CSI of the SI link can be obtained at the AP, we assume that the SI at the AP can be largely suppressed by certain interference cancellation techniques~\cite{FDintro3,FDintro6}.}. $\mathbf{n}_\text{U}\in\mathbb{C}^{N_r\times 1}$ denotes the complex circular Gaussian noise vector at the AP with zero mean and variance $\sigma^2_\text{U}$.
\begin{figure}[!t] \centering \scalebox{0.50}{\includegraphics{system_v3-eps-converted-to}} \caption{ IRS-assisted full-duplex system.}\label{fig:structure} \end{figure} The received data vector at the $l$-th DL user $\mathbf{y}_{\text{D},l}\in\mathbb{C}^{M_{\text{D},l}\times 1}$ is given by \begin{equation} \begin{split} \mathbf{y}_{\text{D},l}=&\sum^{L}_{l=1}\mathbf{H}_{\text{D},l}\mathbf{F}_{l}\mathbf{s}_{l}+\sum^{L}_{l=1}\mathbf{G}_{\text{D},l}\mathbf{\Phi}\mathbf{V}_\text{D}\mathbf{F}_{l}\mathbf{s}_{l}\\ &+\underbrace{\sum^{K}_{k=1}\mathbf{J}_{k,l}\mathbf{P}_{k}\mathbf{b}_{k}+\sum^{K}_{k=1}\mathbf{G}_{\text{D},l}\mathbf{\Phi}\mathbf{G}_{\text{U},k}\mathbf{P}_{k}\mathbf{b}_{k}}_{interference\,\, from\,\, the\,\, UL\,\, users}+\mathbf{n}_{\text{D},l}, \end{split} \end{equation} where $\mathbf{H}_{\text{D},l}\in\mathbb{C}^{M_{\text{D},l}\times N_t}$ is the channel matrix between the AP and the $l$-th DL user, $\mathbf{V}_{\text{D}}\in\mathbb{C}^{T\times N_t}$ denotes the channel matrix between the AP and the IRS, $\mathbf{G}_{\text{D},l}\in\mathbb{C}^{M_{\text{D},l}\times T}$ is the channel matrix between the IRS and the $l$-th DL user, $\mathbf{J}_{k,l}\in\mathbb{C}^{M_{\text{D},l}\times M_{\text{U},k}}$ denotes the channel matrix between the $k$-th UL user and the $l$-th DL user, and $\mathbf{n}_{\text{D},l}\in\mathbb{C}^{M_{\text{D},l}\times 1}$ denotes the complex circular Gaussian noise vector at the $l$-th DL user with zero mean and variance $\sigma^2_{\text{D},l}$. 
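The signal model above can be illustrated with a minimal numpy sketch for one UL user (the dimensions, channel draws, and helper names are ours and purely illustrative): the IRS contributes the reflected term $\mathbf{V}_{\text{U}}\mathbf{\Phi}\mathbf{G}_{\text{U},k}$, so the AP effectively sees the combined channel $\mathbf{H}_{\text{U},k}+\mathbf{V}_{\text{U}}\mathbf{\Phi}\mathbf{G}_{\text{U},k}$.

```python
import numpy as np

rng = np.random.default_rng(1)
Nr, T, M_U, D_U = 4, 16, 2, 2   # illustrative sizes only

def cn(*shape):
    """Draw an i.i.d. unit-variance complex Gaussian matrix."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

H_U, G_U, V_U = cn(Nr, M_U), cn(T, M_U), cn(Nr, T)
Phi = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, T)))  # unit-modulus IRS phases
P, b = cn(M_U, D_U), cn(D_U, 1)

H_bar = H_U + V_U @ Phi @ G_U   # effective UL channel: direct plus reflected path
y = H_bar @ P @ b               # noiseless single-user received signal at the AP
assert y.shape == (Nr, 1)
```

The same pattern (direct channel plus `G_D @ Phi @ V_D`-type reflected term) yields the effective DL and cross-link channels used below.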
The transmission rate for user $k$ in the uplink is given by \vspace{-0.4em} \begin{equation} \vspace{-0.5em} \begin{split} &\mathcal{R}_{\text{U},k}\triangleq \log \det \bigg( \mathbf{I}+\mathbf{\bar{H}}_{\text{U},k}\mathbf{P}_{k}\mathbf{P}^H_{k}\mathbf{\bar{H}}^H_{\text{U},k}\times\\ &(\sum^{L}_{l=1}\mathbf{\tilde{H}}\mathbf{F}_{l}\mathbf{F}^H_{l}\mathbf{\tilde{H}}^H+\sum^{K}_{k^{'}\neq k} \bar{\mathbf{H}}_{\text{U},k^{'}}\mathbf{P}_{k^{'}}\mathbf{P}^H_{k^{'}}\bar{\mathbf{H}}^H_{\text{U},k^{'}} +\sigma^2_\text{U}\mathbf{I})^{-1} \bigg), \end{split} \end{equation} where $\mathbf{\bar{H}}_{\text{U}, k}\triangleq \mathbf{H}_{\text{U},k}+\mathbf{V}_\text{U}\mathbf{\Phi}\mathbf{G}_{\text{U},k}$. The transmission rate for user $l$ in the downlink is given by \vspace{-0.4em} \begin{equation} \vspace{-0.4em} \begin{split} &\mathcal{R}_{\text{D},l}\triangleq\log \det \bigg( \mathbf{I}+\mathbf{\bar{H}}_{\text{D},l}\mathbf{F}_{l}\mathbf{F}^H_{l}\mathbf{\bar{H}}^H_{\text{D},l}\times \\ &(\sum^{K}_{k=1}\mathbf{\bar{J}}_{k,l}\mathbf{P}_{k}\mathbf{P}^H_{k}\mathbf{\bar{J}}^H_{k,l}+\sum^{L}_{l^{'}\neq l}\mathbf{\bar{H}}_{\text{D},l}\mathbf{F}_{l^{'}}\mathbf{F}^H_{l^{'}}\mathbf{\bar{H}}^H_{\text{D},l}+\sigma^2_{\text{D},l}\mathbf{I})^{-1} \bigg), \end{split} \end{equation} where $\mathbf{\bar{H}}_{\text{D},l}\triangleq \mathbf{H}_{\text{D},l}+\mathbf{G}_{\text{D},l}\mathbf{\Phi}\mathbf{V}_\text{D}$ and $\mathbf{\bar{J}}_{k,l}\triangleq \mathbf{J}_{k,l}+\mathbf{G}_{\text{D},l}\mathbf{\Phi}\mathbf{G}_{\text{U},k}$. \vspace{-0.2em} \subsection{Mixed-Timescale Protocols} \begin{figure*}[!t] \centering \includegraphics[width=0.76\linewidth,height=0.25\linewidth]{Timescale1-eps-converted-to} \caption{ Proposed mixed-timescale beamforming scheme.}\label{fig:timescale} \vspace{-0.6em} \end{figure*} In practice, the acquisition of the real-time IRS-related high-dimensional CSI matrix is very challenging due to the large number of reflecting elements and the passive architecture of
the IRS, while estimating the low-dimensional effective channels $\mathcal{H}_{ef} \triangleq \{\bar{\mathbf{H}}_{\text{U},k},\bar{\mathbf{H}}_{\text{D},l},\bar{\mathbf{J}}_{k,l},\tilde{\mathbf{H}}\}$ for a given IRS passive beamforming matrix is much easier. Based on this observation, we propose a mixed-timescale transmission protocol. Specifically, we focus on a sufficiently large time block during which the channel statistics are constant, as shown in Fig. \ref{fig:timescale}. In the first stage, the AP estimates a small number of high-dimensional full CSI samples $\{\mathcal{H}(n)\}_{n=\{1,...,N_{s}\}}$ (possibly outdated) using standard IRS-related channel estimation methods~\cite{IRS_CM1,IRS_CM2,IRS_CM3}, where $N_{s}$ denotes the number of collected full CSI samples, and $\mathcal{H} \triangleq \{\mathbf{H}_{\text{U},k},\mathbf{H}_{\text{D},l},\mathbf{G}_{\text{U},k},\mathbf{G}_{\text{D},l},\mathbf{V}_{\text{U}},\mathbf{V}_{\text{D}},\mathbf{J}_{k,l},\tilde{\mathbf{H}}\}$ denotes the set of all CSI matrices. Then, the AP designs the long-term passive beamforming matrix $\mathbf{\Phi}$ based on these collected full CSI samples and sends it to the IRS. In the second stage, the long-term passive beamforming matrix at the IRS is fixed. In each time slot (channel coherence time) $i$, the AP obtains the low-dimensional effective channels $\mathcal{H}_{ef}(i)$ via conventional MIMO channel estimation methods and designs the short-term active beamforming matrices $\mathbf{P}_{k}$ and $\mathbf{F}_{l}$ accordingly. Then, the beamforming matrices $\mathbf{P}_{k}$ are sent to the UL users. As we can see, the proposed mixed-timescale scheme avoids estimating the high-dimensional IRS-related CSI matrix in each time slot. By contrast, the conventional single-timescale algorithm requires a tremendous number of full CSI samples in each coherence time block, which incurs a huge overhead.
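The two stages of the protocol can be sketched as follows. Here `design_phi` and `design_beamformers` are hypothetical placeholders standing in for the SSCA-based long-term design and the BCD-type short-term design developed later, and the scalar-per-element channel model is a toy stand-in for $\mathcal{H}$:

```python
import numpy as np

rng = np.random.default_rng(2)
T, N_s, n_slots = 16, 8, 5   # IRS elements, full-CSI samples, slots per block

def sample_full_csi():
    # stand-in for one (possibly outdated) full CSI sample in H
    return (rng.standard_normal(T) + 1j * rng.standard_normal(T)) / np.sqrt(2)

def design_phi(samples):
    # long-term stage: placeholder for the SSCA-based design from channel samples
    return np.exp(1j * np.angle(samples.mean(axis=0)))

def design_beamformers(h_ef):
    # short-term stage: placeholder for the BCD-type active design
    return h_ef.conj() / np.linalg.norm(h_ef)

# Stage 1: design the long-term phi once per channel-statistics block.
phi = design_phi(np.stack([sample_full_csi() for _ in range(N_s)]))
assert np.allclose(np.abs(phi), 1.0)        # unit-modulus constraint holds

# Stage 2: per slot, only the low-dimensional effective CSI is estimated.
for _ in range(n_slots):
    h_ef = phi * sample_full_csi()          # stand-in for the effective channel
    p = design_beamformers(h_ef)
    assert np.isclose(np.linalg.norm(p), 1.0)
```

The key point the sketch makes is structural: the expensive full-CSI acquisition happens only in stage 1, while the per-slot loop touches only the effective channel.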
\vspace{-0.5em} \subsection{Problem Formulation} In this work, we aim at jointly designing the short-term active beamforming matrices $\mathbf{P}_{k}$ and $\mathbf{F}_{l}$, and the long-term IRS passive beamforming matrix $\mathbf{\Phi}$ in order to maximize the average weighted sum-rate over a coherence time block. Hence, the problem can be formulated as follows \vspace{-1.8em} \begin{subequations} \label{eq:problem1} \begin{align}\vspace{-1.6em} (\mathcal{P}1): \quad \max_{\boldsymbol{\phi}} \mathbb{E}_{\mathcal{H}}\{\max_{\{\mathbf{P}_{k},\mathbf{F}_{l}\}} & \sum^{K}_{k=1}\alpha_{k} \mathcal{R}_{\text{U},k}+ \sum^{L}_{l=1}\beta_{l} \mathcal{R}_{\text{D},l}\} \label{objective}\\ \mbox{s.t.}\quad &\|\mathbf{P}_{k}\|^2\leq P_{\text{U},k}, \, \forall k, \label{transmitpower}\\ &\sum^{L}_{l=1}\|\mathbf{F}_{l}\|^2\leq P_{AP},\label{transmitpower2}\\ & |\boldsymbol{\phi}(n)|=1, \forall n, \label{constantmodulus} \end{align} \end{subequations} where $\boldsymbol{\phi} \triangleq \textrm{diag}(\mathbf{\Phi}) \in\mathbb{C}^{T\times 1} $ denotes the long-term passive beamforming vector. $P_{\text{U},k}$ and $P_{AP}$ denote the limited transmit power budgets of the UL users and the AP, respectively, and \eqref{constantmodulus} denotes the constant modulus constraint imposed on the elements of the IRS passive beamforming vector. \vspace{-1.0em} \section{Mixed-Timescale Beamforming Algorithm} In this section, we propose the mixed-timescale beamforming algorithm for solving $\mathcal{P}1$. Firstly, by fixing the long-term passive beamforming matrix at the IRS, we optimize the short-term active beamforming matrices at the AP and UL users, where a BCD-type algorithm is proposed to tackle this problem. Then, we develop an efficient SSCA-based algorithm for designing the long-term passive beamforming matrix at the IRS. 
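Since the outer expectation in \eqref{objective} is taken over the channel distribution, the objective can only be evaluated approximately by averaging over channel samples, which is precisely what the sample-based SSCA design below exploits. A toy numpy sketch with scalar channels and fixed beamformers (all quantities here are hypothetical, for illustration only):

```python
import numpy as np

rng = np.random.default_rng(3)

def sum_rate(h_u, h_d, p, f, sigma2=1.0):
    """Weighted sum rate for one scalar UL and one scalar DL channel draw."""
    r_u = np.log2(1 + np.abs(h_u * p) ** 2 / sigma2)
    r_d = np.log2(1 + np.abs(h_d * f) ** 2 / sigma2)
    return r_u + r_d   # unit weights alpha = beta = 1 for simplicity

# Monte Carlo estimate of E_H{ sum rate } for fixed beamformers p, f
samples = [(rng.normal() + 1j * rng.normal(), rng.normal() + 1j * rng.normal())
           for _ in range(2000)]
avg = np.mean([sum_rate(h_u, h_d, p=1.0, f=1.0) for h_u, h_d in samples])
assert avg > 0   # sample average of the stochastic objective
```

In the actual algorithm, the inner maximization over $\{\mathbf{P}_k,\mathbf{F}_l\}$ is carried out per sample before averaging, and the average guides the update of $\boldsymbol{\phi}$.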
\vspace{-1.0em} \subsection{Short-Term Active Beamforming Design} With the long-term IRS passive beamforming matrix fixed, the optimization problem of the short-term active beamforming design is given by \begin{subequations} \label{eq:short_problem} \begin{align} (\mathcal{P}2): \quad \max_{\{\mathbf{P}_{k},\mathbf{F}_{l}\} }\quad & \sum^{K}_{k=1}\alpha_{k} \mathcal{R}_{\text{U},k}+ \sum^{L}_{l=1}\beta_{l} \mathcal{R}_{\text{D},l} \label{shortobjective}\\ \mbox{s.t.}\quad &\|\mathbf{P}_{k}\|^2\leq P_{\text{U},k}, \, \forall k, \label{shorttransmitpower}\\ &\sum^{L}_{l=1}\|\mathbf{F}_{l}\|^2\leq P_{AP}.\label{shorttransmitpower2} \end{align} \end{subequations} Problem $\mathcal{P}2$ is difficult to handle due to its highly nonlinear objective function and coupled optimization variables. Hence, we first transform $\mathcal{P}2$ into an equivalent but more tractable form via the celebrated WMMSE method~\cite{WMMSE}. Specifically, we introduce auxiliary variables $\mathbf{W}_{\text{U},k} \in \mathbb{C}^{D_{\text{U},k}\times D_{\text{U},k}}$, $\mathbf{W}_{\text{D},l} \in \mathbb{C}^{D_{\text{D},l}\times D_{\text{D},l}}$, $\mathbf{U}_{\text{U},k} \in \mathbb{C}^{N_r\times D_{\text{U},k}}$, and $\mathbf{U}_{\text{D},l} \in \mathbb{C}^{M_{\text{D},l}\times D_{\text{D},l}}$, and the converted problem can be expressed as \vspace{-0.2em} \begin{subequations} \vspace{-0.2em} \begin{align} (\mathcal{P}3): \min_{\Omega}\quad &\sum_{k=1}^{K} \alpha_k\left(\text{Tr}\left(\mathbf{W}_{\text{U},k}\mathbf{E}_{\text{U},k}\right)-\log \det\left(\mathbf{W}_{\text{U},k}\right)\right) \nonumber\\ +\sum_{l=1}^{L} &\beta_l\left(\text{Tr}\left(\mathbf{W}_{\text{D},l}\mathbf{E}_{\text{D},l}\right)-\log \det\left(\mathbf{W}_{\text{D},l}\right)\right) \\ \mbox{s.t.}\quad& \eqref{transmitpower}, \eqref{transmitpower2}, \end{align} \end{subequations} where $\Omega \triangleq
\{\mathbf{P}_k,\mathbf{F}_l,\mathbf{U}_{\text{U},k},\mathbf{U}_{\text{D},l},\mathbf{W}_{\text{U},k},\mathbf{W}_{\text{D},l}\}$ is the set of optimization variables, and \vspace{-0.3em} \begin{equation} \vspace{-0.3em} \begin{split} &\mathbf{E}_{\text{U},k} \triangleq (\mathbf{U}_{\text{U},k}^{\rm H}\bar{\mathbf{H}}_{\text{U},k}\mathbf{P}_k-\mathbf{I})(\mathbf{U}_{\text{U},k}^{\rm H}\bar{\mathbf{H}}_{\text{U},k}\mathbf{P}_k-\mathbf{I})^{\rm H} + \mathbf{U}_{\text{U},k}^{\rm H}\times\\ &\left(\sum_{k^{'}\neq k}^{K}\bar{\mathbf{H}}_{\text{U},k^{'}}\mathbf{P}_{k^{'}}\mathbf{P}_{k^{'}}^{\rm H}\bar{\mathbf{H}}_{\text{U},k^{'}}^{\rm H}+\sum_{l=1}^{L}\tilde{\mathbf{H}}\mathbf{F}_l\mathbf{F}_l^{\rm H}\tilde{\mathbf{H}}^{\rm H}+\sigma_{\text{U}}^2\mathbf{I}\right)\mathbf{U}_{\text{U},k}, \end{split} \end{equation} \begin{equation} \begin{split} &\mathbf{E}_{\text{D},l} \triangleq (\mathbf{U}_{\text{D},l}^{\rm H}\bar{\mathbf{H}}_{\text{D},l}\mathbf{F}_l-\mathbf{I})(\mathbf{U}_{\text{D},l}^{\rm H}\bar{\mathbf{H}}_{\text{D},l}\mathbf{F}_l-\mathbf{I})^{\rm H} + \mathbf{U}_{\text{D},l}^{\rm H}\times \\ &\left(\sum_{l^{'}\neq l}^{L}\bar{\mathbf{H}}_{\text{D},l}\mathbf{F}_{l^{'}}\mathbf{F}_{l^{'}}^{\rm H}\bar{\mathbf{H}}_{\text{D},l}^{\rm H}+\sum_{k=1}^{K}\bar{\mathbf{J}}_{k,l}\mathbf{P}_k\mathbf{P}_k^{\rm H}\bar{\mathbf{J}}_{k,l}^{\rm H}+\sigma_{\text{D},l}^2\mathbf{I}\right)\mathbf{U}_{\text{D},l}. \end{split} \end{equation} $\mathcal{P}2$ and $\mathcal{P}3$ are equivalent since they share the same global optimal solution \cite{WMMSE}. Then, we develop a BCD-type algorithm for solving $\mathcal{P}3$. Specifically, the set of optimization variables $\Omega$ is divided into four blocks, i.e., $\{\mathbf{U}_{\text{U},k},\mathbf{U}_{\text{D},l}\}$, $\{\mathbf{W}_{\text{U},k},\mathbf{W}_{\text{D},l}\}$, $\{\mathbf{P}_k\}$, and $\{\mathbf{F}_l\}$. Each block is optimized in turn with the other blocks of variables fixed.
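To make the alternating block updates concrete, the following toy sketch (our own illustrative example, not the paper's FD MIMO system) runs the analogous scalar WMMSE block-coordinate updates, i.e., receiver coefficients, inverse-MSE weights, and power-constrained precoders, on a randomly generated $K$-user single-antenna interference channel:

```python
import numpy as np

rng = np.random.default_rng(0)
K, P, sigma2 = 3, 1.0, 0.1
H = np.abs(rng.standard_normal((K, K)))  # H[k, j]: gain from transmitter j to receiver k
p = np.full(K, np.sqrt(P))               # transmit amplitudes, p_k**2 <= P

def sum_rate(p):
    sig = (np.diag(H) * p) ** 2
    intf = (H ** 2) @ (p ** 2) - sig + sigma2
    return float(np.sum(np.log2(1.0 + sig / intf)))

rates = []
for _ in range(50):
    # U-step: MMSE receive coefficients with the other blocks fixed
    u = np.diag(H) * p / ((H ** 2) @ (p ** 2) + sigma2)
    # W-step: weights equal the inverse MSEs; at the MMSE receiver e_k = 1 - u_k h_kk p_k
    w = 1.0 / (1.0 - u * np.diag(H) * p)
    # P-step: closed-form minimizer of the weighted sum-MSE, projected onto the power budget
    p = np.minimum(w * u * np.diag(H) / ((w * u ** 2) @ (H ** 2)), np.sqrt(P))
    rates.append(sum_rate(p))
```

Each full $(u, w, p)$ cycle never decreases the sum-rate, mirroring the monotone convergence behavior of the BCD-type algorithm.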
The proposed BCD-type algorithm for optimizing the short-term active beamforming matrices is summarized in Algorithm \ref{BCDtype}. The details on solving the subproblems w.r.t. each block of variables are given in Appendix A. \begin{algorithm}[t]\caption{Proposed BCD-type short-term active beamforming design algorithm.} \label{BCDtype} \begin{algorithmic}[1] \footnotesize \begin{small} \STATE Initialize the beamforming matrices $\mathbf{P}_k$ and $\mathbf{F}_l$ with feasible values. Set the maximum iteration number $I_{max}$ and the threshold value $\delta$. \REPEAT \STATE Update $\mathbf{U}_{{\rm U},k} = \mathbf{A}_{\text{U},k}^{-1} \bar{\mathbf{H}}_{\text{U},k}\mathbf{P}_k$ and $\mathbf{U}_{{\rm D},l} = \mathbf{A}_{\text{D},l}^{-1} \bar{\mathbf{H}}_{\text{D},l}\mathbf{F}_l$, where $\mathbf{A}_{\text{U},k}$ and $\mathbf{A}_{\text{D},l}$ are defined in~\eqref{AU} and~\eqref{AD}, respectively. \STATE Update $\mathbf{W}_{\text{U},k} = \mathbf{E}_{\text{U},k}^{-1}$ and $\mathbf{W}_{\text{D},l} = \mathbf{E}_{\text{D},l}^{-1}$. \STATE Update $ \mathbf{P}_k = \alpha_k(\mathbf{A}_{\text{P},k}+\lambda_k\mathbf{I})^{-1} \bar{\mathbf{H}}_{\text{U},k}^{\rm H}\mathbf{U}_{\text{U},k}\mathbf{W}_{\text{U},k}$, where $\mathbf{A}_{\text{P},k}$ is defined in~\eqref{APk} and $\lambda_k$ is the Lagrange multiplier. \STATE Update $\mathbf{F}_l = \beta_l(\mathbf{A}_{\text{F}}+\mu \mathbf{I})^{-1}\bar{\mathbf{H}}_{\text{D},l}^{\rm H}\mathbf{U}_{\text{D},l}\mathbf{W}_{\text{D},l}$, where $\mathbf{A}_{\text{F}}$ is defined in~\eqref{AF} and $\mu$ is the Lagrange multiplier. \UNTIL the maximum iteration number is reached or the difference between successive objective function values is less than $\delta$. \end{small} \end{algorithmic} \end{algorithm} \vspace{-0.4em} \subsection{Long-term IRS Passive Beamforming Design} In this subsection, we introduce the proposed long-term IRS passive beamforming design algorithm.
With the optimized short-term variables, the stochastic optimization problem w.r.t. the passive beamforming vector can be formulated as \begin{equation} (\mathcal{P}4)\,\,\min_{\boldsymbol{\theta}} \quad f (\bm{\theta}, \{\mathbf{P}_k^*,\mathbf{F}_{l}^*\} ) = \mathbb{E}_{\mathcal{H}}\{g(\bm{\theta},\{\mathbf{P}_k^*,\mathbf{F}_{l}^*\};\mathcal{H})\}, \label{long-term-object} \vspace{-1.5mm} \end{equation} where $\boldsymbol{\theta} \triangleq \angle{\boldsymbol{\phi}}$, $\{\mathbf{P}_k^*,\mathbf{F}_l^*\}$ denotes the solution obtained by the proposed short-term active beamforming algorithm, and $g(\bm{\theta},\{\mathbf{P}_k^*,\mathbf{F}_{l}^*\};\mathcal{H})$ denotes the negative weighted sum-rate given by \vspace{-0.2em} \begin{equation} \vspace{-0.2em} \begin{split} g(\bm{\theta},\{\mathbf{P}_k^*,\mathbf{F}_{l}^*\};\mathcal{H}) \triangleq &-\sum^{K}_{k=1}\alpha_{k} \mathcal{R}_{\text{U},k}(\bm{\theta},\{\mathbf{P}_k^*,\mathbf{F}_{l}^*\};\mathcal{H})\\ &-\sum^{L}_{l=1}\beta_{l}\mathcal{R}_{\text{D},l}(\bm{\theta},\{\mathbf{P}_k^*,\mathbf{F}_{l}^*\};\mathcal{H}), \end{split} \label{gfunction} \end{equation} so that minimizing $f$ maximizes the average weighted sum-rate. Note that $\mathcal{P}4$ is hard to solve directly since the objective function is highly non-convex and it is difficult to obtain a closed-form expression of the expectation of $g(\bm{\theta},\{\mathbf{P}_k^*,\mathbf{F}_{l}^*\};\mathcal{H})$. Hence, by leveraging the stochastic optimization framework in~\cite{cssca}, we approximate \eqref{long-term-object} by using a quadratic surrogate function.
Specifically, at the $t$-th iteration of the proposed algorithm, a mini-batch of $B$ channel samples, denoted as $\{\mathcal{H}^t(m)\}_{m=\{1,...,B\}}$, is randomly selected from the collection of high-dimensional CSI samples $\{\mathcal{H}(n) \}_{n=\{1,...,N_s\}}$, and the surrogate function is updated as \vspace{-0.2em} \begin{equation} \vspace{-0.2em} \begin{split} \bar{f}^t(\boldsymbol{\theta}) = (\mathbf{f}^t)^T(\boldsymbol{\theta}-\boldsymbol{\theta}^t)+\varpi\|\boldsymbol{\theta}-\boldsymbol{\theta}^t\|^2, \label{surrogate function} \end{split} \end{equation} where $\boldsymbol{\theta}^t$ denotes the current value of $\boldsymbol{\theta}$ and $\varpi>0$ is a constant. Note that $\mathbf{f}^t$ denotes the approximation of the partial derivatives $\frac{\partial f}{\partial \boldsymbol{\theta}}$, which is updated as \vspace{-0.2em} \begin{equation} \vspace{-0.2em} \mathbf{f}^t = (1-\varrho^{t})\mathbf{f}^{t-1}+\varrho^t\sum_{m=1}^{B}\frac{\partial g(\boldsymbol{\theta},\{\mathbf{P}_k^*,\mathbf{F}_{l}^*\};\mathcal{H}^t(m)) }{\partial \boldsymbol{\theta}}\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^{t}}, \label{surrogate_gradient} \end{equation} where $\{\varrho^t\}$ is a sequence to be properly chosen and the expression of $\frac{\partial g(\boldsymbol{\theta},\{\mathbf{P}_k^*,\mathbf{F}_{l}^*\};\mathcal{H}^t(m))}{\partial \boldsymbol{\theta}}$ is omitted here for brevity. Subsequently, we aim to solve the approximated problem at the $t$-th frame, which is given by \vspace{-0.2em} \begin{equation} \vspace{-0.2em} \min_{\boldsymbol{\theta}} \quad \bar{f}^t(\boldsymbol{\theta}). \end{equation} This is a convex quadratic problem whose optimal solution can be readily derived as \vspace{-0.2em} \begin{equation} \vspace{-0.2em} \bar{\boldsymbol{\theta}}^t = \boldsymbol{\theta}^t-\frac{\mathbf{f}^t}{2\varpi}.
\label{solution_to_surrogate} \end{equation} Then, the long-term variable is updated as \vspace{-0.2em} \begin{equation} \vspace{-0.2em} \boldsymbol{\theta}^{t+1}=(1-\gamma^t)\boldsymbol{\theta}^{t} + \gamma^t\bar{\boldsymbol{\theta}}^t, \label{theta_update} \end{equation} where $\{\gamma^t\}$ denotes a sequence of step-size parameters, and convergence is guaranteed if $\varrho^t$ and $\gamma^t$ satisfy the following conditions: $\lim_{t\rightarrow \infty} \varrho^t = 0, \sum_{t} \varrho^t = \infty, \sum_{t} (\varrho^t)^2 < \infty, \lim_{t\rightarrow \infty} \gamma^t = 0, \sum_{t} \gamma^t = \infty, \sum_{t} (\gamma^t)^2 < \infty$, and $\lim_{t\rightarrow \infty} \frac{\gamma^t}{\varrho^t} = 0$~\cite{cssca,CRAN}. The proposed long-term passive beamforming design algorithm is summarized in \textbf{Algorithm 2}. \vspace{-0.2em} \begin{remark} The proposed SSCA-based algorithm is guaranteed to converge to a stationary solution of $\mathcal{P}4$~\cite{cssca}. Moreover, combined with the convergence property of the BCD-type short-term active beamforming algorithm \cite{CRAN}, the proposed overall mixed-timescale joint active and passive beamforming algorithm converges to a stationary point of $\mathcal{P}1$. \end{remark} \vspace{-0.4em} For the single-timescale algorithm, the number of CSI signaling bits required in a coherence time block is given by $Q_s = qT_s(N_\text{U}KM_\text{U}+N_\text{D}LM_\text{D}+KLM_\text{U}M_\text{D}+T(N_\text{U}+N_\text{D}+2KM_\text{U}+2LM_\text{D}-3))$~\cite{IRS_CM2}, where $q$ is the number of quantization bits for each element of the CSI matrices and $T_s$ denotes the number of time slots in a coherence time block. In contrast, that of the mixed-timescale scheme is given by $Q_m = qT_s(N_\text{U}KM_\text{U}+N_\text{D}LM_\text{D}+KLM_\text{U}M_\text{D}) + qA_sT(N_\text{U}+N_\text{D}+2KM_\text{U}+2LM_\text{D}-3)$.
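As a quick numerical check of the two signaling-cost expressions above, the short script below (helper names are ours) evaluates $Q_s$ and $Q_m$ at one operating point:

```python
def csi_bits_single(q, Ts, T, K, L, NU, ND, MU, MD):
    # Q_s: high-dimensional full CSI fed back in every time slot of the coherence block
    effective = NU * K * MU + ND * L * MD + K * L * MU * MD
    irs_related = T * (NU + ND + 2 * K * MU + 2 * L * MD - 3)
    return q * Ts * (effective + irs_related)

def csi_bits_mixed(q, Ts, As, T, K, L, NU, ND, MU, MD):
    # Q_m: low-dimensional effective CSI per slot; IRS-related CSI only A_s times per block
    effective = NU * K * MU + ND * L * MD + K * L * MU * MD
    irs_related = T * (NU + ND + 2 * K * MU + 2 * L * MD - 3)
    return q * Ts * effective + q * As * irs_related

params = dict(q=8, Ts=10_000, K=2, L=2, NU=32, ND=32, MU=4, MD=4)
Qs = csi_bits_single(T=200, **params)
Qm = csi_bits_mixed(T=200, As=30, **params)
# For T = 200 reflecting elements, the mixed-timescale scheme needs roughly 30x fewer bits.
```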
Fig.~\ref{fig:CSI_overhead} compares the single-timescale scheme and the proposed mixed-timescale scheme in terms of CSI overhead, where we set $q = 8, T_s = 10000$~\cite{FDrelay2}, $A_s = 30, K = L = 2, N_\text{U} = N_\text{D} = 32, M_\text{U} = M_\text{D} = 4$. As shown in the figure, our proposed mixed-timescale algorithm significantly reduces the CSI overhead, especially when $T$ is large. \begin{figure}[h] \centering \scalebox{0.60}{\includegraphics{CSI_overhead-eps-converted-to}} \caption{CSI overhead versus the number of reflecting elements.}\label{fig:CSI_overhead} \vspace{-0.4em} \end{figure} \begin{algorithm}[t]\caption{Proposed SSCA-based algorithm for the long-term passive beamforming design.} \begin{algorithmic}[1] \footnotesize \begin{small} \STATE Initialize $\boldsymbol{\theta}^0$ with a feasible point. Select proper sequences $\{\varrho^t\}$ and $\{\gamma^t\}$. Set an appropriate value for $\varpi$ and let $t=0$. \REPEAT \STATE Randomly select $B$ samples $\{\mathcal{H}^t(m)\}_{m=\{1,...,B\}}$ from the collection of full CSI samples. Compute the surrogate function \eqref{surrogate function} based on \eqref{surrogate_gradient}. \STATE Obtain the optimal solution $\bar{\boldsymbol{\theta}}^t$ via \eqref{solution_to_surrogate}. \STATE Update $\boldsymbol{\theta}^{t+1}$ based on \eqref{theta_update}. \STATE Update the iteration number $t=t+1$. \UNTIL the convergence condition is satisfied or the maximum number of iterations is reached.
\end{small} \end{algorithmic} \end{algorithm} \begin{figure*}[!t] \centering \scalebox{0.85}{\includegraphics{Architecture-eps-converted-to}} \caption{Architecture of the proposed deep-unfolding NN consisting of the LPBN and SABN.}\label{fig:joint_structure} \end{figure*} \begin{figure*}[!t] \centering \scalebox{0.93}{\includegraphics{DeepUnfolding1-eps-converted-to}} \caption{Structure of the SABN.}\label{fig:short_unfolding} \vspace{-0.9em} \end{figure*} \section{Deep-Unfolding Beamforming} In this section, we introduce the proposed deep-unfolding NN that unfolds the SSCA-based mixed-timescale beamforming algorithm. \vspace{-0.6em} \subsection{Architecture of Deep-Unfolding NN} The framework of our proposed deep-unfolding NN is shown in Fig. \ref{fig:joint_structure}. It consists of an LPBN and an SABN, which correspond to the long-term passive beamforming algorithm and the short-term active beamforming algorithm, respectively. \subsubsection{Forward Propagation} In the forward propagation stage, the full CSI samples, $\mathcal{H}$, are first input into the LPBN, which then outputs the effective CSI samples, $\mathcal{H}_{ef}$. Note that we set the phase of the IRS passive beamforming vector $\boldsymbol{\theta}$ as the learnable parameter of the LPBN and the operation $e^{j(\cdot)}$ ensures that the unit-modulus constraint is satisfied.
Moreover, the function that computes the effective channels is given by \vspace{-0.1em} \begin{equation} \vspace{-0.1em} \begin{split} \mathcal{H}_{ef} &= \Pi(\mathcal{H},e^{j\boldsymbol{\theta}})\triangleq\{\mathbf{\bar{H}}_{\text{U}, k}= \mathbf{H}_{\text{U},k}+\mathbf{V}_\text{U}\mathbf{\Phi}\mathbf{G}_{\text{U},k},\mathbf{\bar{H}}_{\text{D},l}= \\& \mathbf{H}_{\text{D},l}+\mathbf{G}_{\text{D},l}\mathbf{\Phi}\mathbf{V}_\text{D}, \mathbf{\bar{J}}_{k,l}= \mathbf{J}_{k,l}+\mathbf{G}_{\text{D},l}\mathbf{\Phi}\mathbf{G}_{\text{U},k},\tilde{\mathbf{H}} = \tilde{\mathbf{H}}\}, \end{split} \end{equation} where $\mathbf{\Phi} = \textrm{Diag}(e^{j\boldsymbol{\theta}})$. Then, the effective CSI samples pass through the SABN, which outputs the active beamforming matrices $\mathbf{P}_k$ and $\mathbf{F}_l$. The detailed structure of the SABN will be introduced in Section \ref{ArchiSABN}. Let $\mathcal{F}(\cdot)$ denote the whole forward propagation stage of our proposed deep-unfolding NN, that is, \begin{equation} \{\mathbf{P}_k,\mathbf{F}_l\} = \mathcal{F}(\{\boldsymbol{\theta},\mathbf{\Psi}\};\mathcal{H}), \end{equation} where $\boldsymbol{\theta}$ and $\mathbf{\Psi}$ are the learnable parameters of the LPBN and SABN, respectively, and $\{\mathbf{P}_k,\mathbf{F}_l\}$ are the output active beamforming matrices of the deep-unfolding NN. \subsubsection{Loss Function} Since we aim to maximize the weighted sum-rate of the system, the loss function of the deep-unfolding NN can be expressed as \begin{equation} \mathcal{L}(\{\boldsymbol{\theta},\mathbf{\Psi}\};\mathcal{H}) \triangleq g(\boldsymbol{\theta},\mathcal{F}(\{\boldsymbol{\theta},\mathbf{\Psi}\};\mathcal{H});\mathcal{H}), \label{loss_function} \end{equation} where $g(\cdot)$ is defined in \eqref{gfunction}.
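In code, the effective-channel computation $\Pi(\mathcal{H}, e^{j\boldsymbol{\theta}})$ that forms the LPBN forward pass reduces to a few complex matrix products. A minimal sketch for one effective UL channel $\mathbf{\bar{H}}_{\text{U},k}$, with made-up dimensions and random placeholder channels (all names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
Nr, M, T = 8, 4, 16   # AP receive antennas, user antennas, reflecting elements (made-up sizes)

def cplx(shape):
    # random placeholder for a complex channel matrix
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2.0)

H_U = cplx((Nr, M))    # direct user -> AP channel
V_U = cplx((Nr, T))    # IRS -> AP channel
G_U = cplx((T, M))     # user -> IRS channel

theta = rng.uniform(0.0, 2.0 * np.pi, T)   # learnable phases of the LPBN
Phi = np.diag(np.exp(1j * theta))          # e^{j(.)} keeps every element unit-modulus

H_bar_U = H_U + V_U @ Phi @ G_U            # effective uplink channel
```

Because the phases enter only through $e^{j\theta}$, the unit-modulus constraint is satisfied by construction for any real-valued learnable $\boldsymbol{\theta}$.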
\subsubsection{Back Propagation} In the back propagation stage, the gradients of the learnable parameters, $\frac{\partial \mathcal{L}(\{\boldsymbol{\theta},\mathbf{\Psi}\};\mathcal{H})}{\partial \boldsymbol{\theta}}$ and $\frac{\partial \mathcal{L}(\{\boldsymbol{\theta},\mathbf{\Psi}\};\mathcal{H})}{\partial \boldsymbol{\Psi}}$, are computed based on the chain rule. \subsubsection{Update of Learnable Parameters} We update $\{\boldsymbol{\theta},\mathbf{\Psi}\}$ based on the gradients of the learnable parameters. Specifically, in the $t$-th round of the learning process, the learnable parameters are updated as \begin{equation} \boldsymbol{\theta}^{t+1} = \boldsymbol{\theta}^{t} - \eta \frac{\partial \mathcal{L}(\{\boldsymbol{\theta},\mathbf{\Psi}\};\mathcal{H}^t)}{\partial \boldsymbol{\theta}}, \end{equation} \begin{equation} \mathbf{\Psi}^{t+1} = \mathbf{\Psi}^{t} - \eta \frac{\partial \mathcal{L}(\{\boldsymbol{\theta},\mathbf{\Psi}\};\mathcal{H}^t)}{\partial \boldsymbol{\Psi}}, \end{equation} where $\eta$ denotes the learning rate. Since the LPBN is an approximation of the SSCA-based algorithm, we can also update $\boldsymbol{\theta}$ based on~\eqref{surrogate_gradient},~\eqref{solution_to_surrogate}, and~\eqref{theta_update}, correspondingly. \subsection{Structure of the SABN} \label{ArchiSABN} The LPBN sets the IRS passive beamforming vector $\boldsymbol{\theta}$ as a learnable parameter and its forward propagation computes the effective channels $\mathcal{H}_{ef}$. In this subsection, we introduce the detailed structure of the SABN, which unfolds Algorithm \ref{BCDtype} into a layer-wise structure. We first define a novel element-wise non-linear operation, denoted by $\mathbf{A}^{\dagger}$, which takes the reciprocal of each diagonal element of matrix $\mathbf{A}$ while setting the off-diagonal elements to $0$.
We take a $3\times 3$ matrix as an example, \begin{equation} \mathbf{A} = \left[ \begin{array}{ccc} a_{11} & a_{12} & a_{13}\\ a_{21} & a_{22} & a_{23}\\ a_{31} & a_{32} & a_{33}\\ \end{array} \right], \quad \mathbf{A}^{\dagger} = \left[ \begin{array}{ccc} \frac{1}{a_{11}} & 0 & 0\\ 0 & \frac{1}{a_{22}} & 0\\ 0 & 0 & \frac{1}{a_{33}} \\ \end{array} \right]. \end{equation} Note that $\mathbf{A}^{-1}=\mathbf{A}^{\dagger}$ when $\mathbf{A}$ is a diagonal matrix. We observe that the diagonal elements of the matrices arising in the proposed BCD-type algorithm are much larger than the off-diagonal elements. Hence, $\mathbf{A}^{\dagger}$ is a good approximation of $\mathbf{A}^{-1}$. Since computing the matrix inverse $\mathbf{A}^{-1}$ incurs high computational complexity, we approximate it by employing the combination of the following two architectures with lower complexity: (i) $\mathbf{A}^{\dagger}\mathbf{X}$ with the element-wise non-linear function $\mathbf{A}^{\dagger}$ and learnable parameter $\mathbf{X}$; (ii) by recalling the first-order Taylor expansion of $\mathbf{A}^{-1}$ at $\mathbf{A}_0$, i.e., $\mathbf{A}^{-1}\approx2\mathbf{A}_{0}^{-1}-\mathbf{A}_{0}^{-1}\mathbf{A}\mathbf{A}_{0}^{-1}$, we apply $\mathbf{A}\mathbf{Y} + \mathbf{Z}$ with learnable parameters $\mathbf{Y}$ and $\mathbf{Z}$. Note that the learnable parameters $\mathbf{X}$, $\mathbf{Y}$, and $\mathbf{Z}$ are introduced to improve the performance of the deep-unfolding NN. Thus, we apply $\mathbf{A}^{\dagger}\mathbf{X}+\mathbf{A}\mathbf{Y}+\mathbf{Z}$ to approximate the matrix inverse $\mathbf{A}^{-1}$.
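A small numerical sketch of this approximation (with `dagger` as our stand-in name for $(\cdot)^{\dagger}$) on a diagonally dominant matrix, comparing $\mathbf{A}^{\dagger}$ alone against the first-order expansion around $\mathbf{A}_0 = \textrm{Diag}(\mathbf{A})$:

```python
import numpy as np

def dagger(A):
    # Reciprocal of the diagonal, zeros elsewhere (the element-wise operation above)
    return np.diag(1.0 / np.diag(A))

rng = np.random.default_rng(1)
n = 4
# Diagonally dominant, as observed for the matrices arising in the BCD iterations
A = 5.0 * np.eye(n) + 0.1 * rng.standard_normal((n, n))

err_dagger = np.linalg.norm(dagger(A) - np.linalg.inv(A))

# First-order expansion around A0 = Diag(A): inv(A) ~ 2*inv(A0) - inv(A0) @ A @ inv(A0)
A0_inv = dagger(A)
taylor = 2.0 * A0_inv - A0_inv @ A @ A0_inv
err_taylor = np.linalg.norm(taylor - np.linalg.inv(A))
```

Both errors are small for such matrices, and the Taylor term tightens the approximation further; the learnable $\mathbf{X}$, $\mathbf{Y}$, $\mathbf{Z}$ then give training the freedom to close the remaining gap.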
Note that $\mathbf{\Xi}^m \triangleq \{ \mathbf{X}_{{\rm U},k}^{u,m}, \mathbf{Y}_{{\rm U},k}^{u,m}, \mathbf{Z}_{{\rm U},k}^{u,m} \}\cup\{ \mathbf{X}_{{\rm D},l}^{u,m}, \mathbf{Y}_{{\rm D},l}^{u,m}, \mathbf{Z}_{{\rm D},l}^{u,m} \}\cup\{ \mathbf{X}_{{\rm U},k}^{w,m}, \mathbf{Y}_{{\rm U},k}^{w,m},\mathbf{Z}_{{\rm U},k}^{w,m} \}\cup\{ \mathbf{X}_{{\rm D},l}^{w,m}, \mathbf{Y}_{{\rm D},l}^{w,m}, \mathbf{Z}_{{\rm D},l}^{w,m} \}\cup\{ \mathbf{X}_{k}^{p,m}, \mathbf{Y}_{k}^{p,m}, \mathbf{Z}_{k}^{p,m} \}\cup\{ \mathbf{X}_{l}^{f,m}, \mathbf{Y}_{l}^{f,m}, \mathbf{Z}_{l}^{f,m} \}$\footnote{Note that here the subscripts $\text{U}$ and $\text{D}$ correspond to the uplink and downlink, respectively. Subscripts $k$ and $l$ represent the $k$-th UL user and the $l$-th DL user, respectively. Superscripts $u$, $w$, $p$ and $f$ denote the corresponding unfolding matrices and superscript $m$ denotes the layer index.} denotes the learnable parameter sets introduced to approximate the matrix inversions in the updates of $\mathbf{U}_{{\rm U},k}^{m}$, $\mathbf{U}_{{\rm D},l}^{m}$, $\mathbf{W}_{{\rm U},k}^{m}$, $\mathbf{W}_{{\rm D},l}^{m}$, $\mathbf{P}_{k}^{m}$, and $\mathbf{F}_{l}^{m}$ in the $m$-th layer, respectively, and $\{ \mathbf{O}_{{\rm U},k}^{u,m}, \mathbf{O}_{{\rm D},l}^{u,m}, \mathbf{O}_{k}^{p,m}, \mathbf{O}_{l}^{f,m} \}$ denote the learnable offsets. The architecture of the SABN is designed as: \begin{figure*}[t] \begin{subequations} \label{network} \begin{eqnarray} & & \!\!\!\!\! \mathbf{U}_{{\rm U},k}^{m} = \bigg( (\mathbf{A}_{{\rm U},k}^{m-1})^{\dagger}\mathbf{X}_{{\rm U},k}^{u,m} + \mathbf{A}_{{\rm U},k}^{m-1}\mathbf{Y}_{{\rm U},k}^{u,m} + \mathbf{Z}_{{\rm U},k}^{u,m} \bigg) \bar{\mathbf{H}}_{{\rm U},k}\mathbf{P}_{k}^{m-1} + \mathbf{O}_{{\rm U},k}^{u,m}, \label{UU} \\ & & \!\!\!\!\!
\mathbf{U}_{{\rm D},l}^{m} = \bigg( (\mathbf{A}_{{\rm D},l}^{m-1})^{\dagger}\mathbf{X}_{{\rm D},l}^{u,m} + \mathbf{A}_{{\rm D},l}^{m-1}\mathbf{Y}_{{\rm D},l}^{u,m} + \mathbf{Z}_{{\rm D},l}^{u,m} \bigg) \bar{\mathbf{H}}_{{\rm D},l}\mathbf{F}_{l}^{m-1} + \mathbf{O}_{{\rm D},l}^{u,m}, \label{UD} \\ & & \!\!\!\!\! \mathbf{W}_{{\rm U},k}^{m} = (\mathbf{E}_{{\rm U},k}^{m})^{\dagger}\mathbf{X}_{{\rm U},k}^{w,m} + \mathbf{E}_{{\rm U},k}^{m}\mathbf{Y}_{{\rm U},k}^{w,m} + \mathbf{Z}_{{\rm U},k}^{w,m}, \label{WU} \\ & & \!\!\!\!\! \mathbf{W}_{{\rm D},l}^{m} = (\mathbf{E}_{{\rm D},l}^{m})^{\dagger}\mathbf{X}_{{\rm D},l}^{w,m} + \mathbf{E}_{{\rm D},l}^{m}\mathbf{Y}_{{\rm D},l}^{w,m} + \mathbf{Z}_{{\rm D},l}^{w,m}, \label{WD} \\ & & \!\!\!\!\! \mathbf{P}_{k}^{m} \!\!=\!\! \alpha_k \bigg( (\mathbf{A}_{\text{P},k}^{m} \!+\! \lambda_k^m\mathbf{I})^{\dagger}\mathbf{X}_{k}^{p,m} \!+\! (\mathbf{A}_{\text{P},k}^{m} \!+\! \lambda_k^m\mathbf{I})\mathbf{Y}_{k}^{p,m} \!+\! \mathbf{Z}_{k}^{p,m} \bigg) \bar{\mathbf{H}}_{{\rm U}, k}^{\rm H}\mathbf{U}_{{\rm U},k}^{m}\mathbf{W}_{{\rm U},k}^{m} \!+\! \mathbf{O}_{k}^{p,m}, \label{PK} \\ & & \!\!\!\!\! \mathbf{F}_{l}^{m} \!\!=\!\! \beta_l \bigg( (\mathbf{A}_{\text{F},l}^{m} \!+\! \mu^m \mathbf{I})^{\dagger}\mathbf{X}_{l}^{f,m} \!+\! (\mathbf{A}_{\text{F},l}^{m} \!+\! \mu^m \mathbf{I})\mathbf{Y}_{l}^{f,m} \!+\! \mathbf{Z}_{l}^{f,m} \bigg) \bar{\mathbf{H}}_{{\rm D}, l}^{\rm H}\mathbf{U}_{{\rm D},l}^{m}\mathbf{W}_{{\rm D},l}^{m} \!+\! \mathbf{O}_{l}^{f,m}. \label{FL} \end{eqnarray} \end{subequations} \vspace{-1em} \end{figure*} The architecture of the SABN is presented in Fig.~\ref{fig:short_unfolding}, where $\mathcal{U}^{m}_{\rm U}$, $\mathcal{U}^{m}_{\rm D}$, $\mathcal{W}^{m}_{\rm U}$, $\mathcal{W}^{m}_{\rm D}$, $\mathcal{P}^{m}$, and $\mathcal{F}^{m}$ represent the layers of the deep-unfolding NN, i.e., \eqref{UU}-\eqref{FL}. In addition, the Lagrange multipliers, $\lambda_k^m$ and $\mu^m$, are also set as learnable parameters.
Hence, the learnable parameters of the SABN can be denoted as $\mathbf{\Psi} \triangleq \bigcup_{m=1}^{I_u}\big(\mathbf{\Xi}^m\cup\{\mathbf{O}_{{\rm U},k}^{u,m}, \mathbf{O}_{{\rm D},l}^{u,m}, \mathbf{O}_{k}^{p,m}, \mathbf{O}_{l}^{f,m},\lambda_k^m,\mu^m\}\big)$, where $I_u$ is the number of layers. Moreover, to avoid gradient explosion and ensure that the power constraints \eqref{shorttransmitpower} and \eqref{shorttransmitpower2} are satisfied, we scale each $\mathbf{F}_{l}^{m}$ as $\frac{ \sqrt{P_{AP}} }{ \big( \sum\limits_{l} \textrm{Tr}(\mathbf{F}_{l}^{m}(\mathbf{F}_{l}^{m})^{H}) \big)^{\frac{1}{2}} }\mathbf{F}_{l}^{m}$. Note that $\mathbf{P}_{k}^{m}$ can be scaled in the same way. The output layer is a single-layer BCD iteration as in~\cite{unfolding8}.\vspace{-0.3em} \begin{remark} Let us investigate the connection between the deep-unfolding NN and the SSCA-based algorithm. First, it is obvious that the SABN has a similar structure to the BCD-type short-term active beamforming algorithm. However, the SABN introduces learnable parameters to approximate the matrix inversion. Second, regarding the relation between the LPBN and the SSCA-based long-term passive beamforming algorithm, let us focus on the surrogate functions of these two approaches.
Specifically, the surrogate function of the SSCA-based algorithm is constructed based on the sample gradient $\frac{\partial g(\boldsymbol{\theta},\{\mathbf{P}_k^*,\mathbf{F}_{l}^*\};\mathcal{H})}{\partial \boldsymbol{\theta}}\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^{t}}$, while that of the deep-unfolding NN is constructed based on \begin{equation} \begin{split} \frac{\partial \mathcal{L}(\{\boldsymbol{\theta},\mathbf{\Psi}\};\mathcal{H})}{\partial \boldsymbol{\theta}}\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^{t}} &= \frac{\partial g(\boldsymbol{\theta},\mathcal{F}(\{\boldsymbol{\theta},\mathbf{\Psi}\};\mathcal{H});\mathcal{H})}{\partial \boldsymbol{\theta}}\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^{t}} \\ & = \frac{\partial g(\boldsymbol{\theta},\mathcal{F}(\{\boldsymbol{\theta}^t,\mathbf{\Psi}\};\mathcal{H});\mathcal{H})}{\partial \boldsymbol{\theta}}\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^{t}}\\ &+\big(\frac{\partial g}{\partial \mathcal{F}}\big)^T\frac{\partial \mathcal{F}(\{\boldsymbol{\theta},\mathbf{\Psi}\};\mathcal{H})}{\partial \boldsymbol{\theta}}\big|_{\boldsymbol{\theta}=\boldsymbol{\theta}^{t}}. \end{split} \label{gradient_comparsion} \end{equation} The first term in the last row of \eqref{gradient_comparsion} is the same as the sample gradient of the SSCA-based algorithm except that the active beamforming matrices are obtained by the network $\mathcal{F}(\{\boldsymbol{\theta}^t,\mathbf{\Psi}\};\mathcal{H})$ instead of the BCD-type algorithm $\{\mathbf{P}_k^*,\mathbf{F}_{l}^*\}$. The second term, which represents the gradient of the network, only appears in the deep-unfolding NN and is not included in the SSCA-based algorithm. This is because the jointly designed structure of the proposed deep-unfolding NN couples the long-term IRS passive beamforming matrix and the short-term active beamforming matrices more tightly.
\vspace{-0.2em} \end{remark} \vspace{-0.8em} \subsection{Black-box NN for Beamforming Design} In this subsection, we propose a black-box NN for comparison. As shown in Fig. \ref{fig:blackbox}, the black-box NN also consists of two parts. The long-term passive beamforming part is the same as that of our proposed deep-unfolding NN. The short-term active beamforming part is composed of conventional black-box layers, such as convolutional layers (CLs) and fully connected layers (FCLs). Specifically, the full channel samples $\mathcal{H}$ first pass through the long-term passive beamforming part, which outputs the effective channels $\mathcal{H}_{ef}$. Then, the effective channels enter the CL, the batch normalization (BN) layer, and the non-linear function in series, and this process is repeated several times. Subsequently, the outputs are flattened and pass through several FCLs followed by a non-linear function. In particular, we adopt Leaky ReLU as the non-linear function, i.e., $y=\max\{x,x/a\}$, where $a>1$ is a constant. The final outputs of the black-box NN are the active beamforming matrices $\mathbf{P}_k$ and $\mathbf{F}_l$, and we scale them to satisfy the power constraints. The loss function in \eqref{loss_function} is employed for training. \begin{figure*}[!t] \centering \scalebox{0.5}{\includegraphics{blackbox-eps-converted-to}} \caption{Structure of the black-box NN.}\label{fig:blackbox} \vspace{-0.8em} \end{figure*} \vspace{-0.8em} \subsection{Computational Complexity} \vspace{-0.1em} In this subsection, we analyze the computational complexity of the proposed SSCA-based optimization algorithm, the proposed deep-unfolding NN, the benchmark black-box NN, and the conventional single-timescale algorithm.
The computational complexity of the SSCA-based algorithm is dominated by the short-term BCD-type active beamforming algorithm, and is given by $\mathcal{O}\big( I_m \big( K(N_r^3 + M_U^3) + L(N_t^3 +M_D^3) + K^2N_r^2M_U + L^2N_t^2M_D \big) \big)$, where $I_m$ denotes the number of iterations. The computational complexity of the deep-unfolding NN is given by $\mathcal{O}\big( I_u \big( K(N_r^{2.37} + M_U^{2.37}) + L(N_t^{2.37} +M_D^{2.37}) + K^2N_r^2M_U + L^2N_t^2M_D \big) \big)$, where $I_u \ll I_m$ is the number of layers. Compared to the iterative BCD-type algorithm, the deep-unfolding NN has much lower complexity for the following two reasons: (i) The number of layers in the deep-unfolding NN is much smaller than the number of iterations of the BCD-type algorithm; (ii) The iterative BCD-type algorithm involves matrix inversions with complexity $\mathcal{O}(N^3)$ while the deep-unfolding NN only requires matrix multiplications with complexity $\mathcal{O}(N^{2.37})$. The computational complexity of the black-box NN is $\mathcal{O}\big( \sum_{l=1}^{L_{c}-1}Q_l^2S_l^2C_{l-1}C_l + C_{L_c}Q_{L_{c}}F_1 + \sum_{l=2}^{L_{f}}F_{l-1}F_{l} + F_{L_f}(KM_\text{U}D_\text{U} + LN_tD_\text{D}) \big)$, where $L_c$ is the number of CLs, $L_f$ is the number of FCLs, $S_l$ represents the size of the convolutional kernel, $C_l$ denotes the number of channels in the $l$-th CL, $Q_l$ denotes the output size of the $l$-th CL, which depends on the input size, padding number, and stride, and $F_l$ is the output size of the $l$-th FCL. Moreover, we analyze the computational complexity of the single-timescale algorithm for comparison. Specifically, the single-timescale algorithm collects real-time high-dimensional full CSI samples and optimizes the active beamforming matrices and the IRS passive beamforming matrix employing a BCD-type algorithm in each time slot. The procedure for updating the active beamforming matrices is given in Algorithm \ref{BCDtype}.
Regarding the optimization of the IRS passive beamforming matrix, a one-iteration BCD algorithm is adopted~\cite{oneiterationBCD}, whose computational complexity is given by $\mathcal{O}(T^3)$. Hence, the overall computational complexity of the single-timescale algorithm is given by $\mathcal{O}\big( I_s \big( T^3 + K(N_r^3 + M_U^3) + L(N_t^3 +M_D^3) + K^2N_r^2M_U + L^2N_t^2M_D \big) \big)$. It is readily seen that our proposed mixed-timescale scheme can significantly reduce the computational complexity compared with the single-timescale algorithm since $T$ is generally large. \vspace{-0.3em} \section{Simulation Results} In this section, we present simulation results to evaluate the performance of our proposed algorithms. The simulation setting is shown in Fig.~\ref{fig:setting}. The AP is located at $(0 \,\text{m}, 0\,\text{m}, 0 \,\text{m})$ and the position of the IRS is $(0\, \text{m}, d_1 = 80\, \text{m}, 3\, \text{m})$. We consider $K = 2$ UL users and $L = 2$ DL users and they lie on the corners of a square centered at $(0\,\text{m}, 80\, \text{m}, 0\, \text{m})$ with a side length $20\, \text{m}$. Both AP and users are equipped with uniform linear arrays (ULA) and the IRS is equipped with a uniform planar array (UPA). The distance-dependent path loss is modeled as $L(d) = C_0(\frac{d_{link}}{D_{0}})^{-a}$, where $C_0$ is the path loss at the reference distance $D_0 =1\, \text{m}$, $d_{link}$ represents the individual link distance, and $a$ denotes the path loss exponent. As for the small-scale fading, we assume the Rician fading channel model, which is given by \begin{equation} \mathbf{H} = \sqrt{\frac{b_{}}{1+b_{}}}\mathbf{H}^{\text{Los}} + \sqrt{\frac{1}{1+b_{}}}\mathbf{H}_{}^{\text{NLos}}, \end{equation} where $b_{}$ is the Rician factor, and $\mathbf{H}^{\text{Los}}$ and $\mathbf{H}^{\text{NLos}}$ represent the deterministic line-of-sight (LoS) and random Rayleigh fading components, respectively. 
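For reference, one small-scale fading realization under this Rician model can be generated as follows; an all-ones matrix stands in for the deterministic LoS steering component, and all names and dimensions are our own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)

def rician_channel(n_rx, n_tx, b):
    """One small-scale fading realization with Rician factor b (linear scale)."""
    los = np.ones((n_rx, n_tx), dtype=complex)       # placeholder for the LoS component
    nlos = (rng.standard_normal((n_rx, n_tx))
            + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2.0)
    return np.sqrt(b / (1.0 + b)) * los + np.sqrt(1.0 / (1.0 + b)) * nlos

b_AI = 10 ** (3.0 / 10.0)      # 3 dB Rician factor, as used for the AP-IRS link below
H = rician_channel(4, 8, b_AI)

# Sanity check: entries have unit average power regardless of the Rician factor
avg_power = np.mean(np.abs(np.stack([rician_channel(4, 8, b_AI) for _ in range(500)])) ** 2)
```

The $\sqrt{b/(1+b)}$ and $\sqrt{1/(1+b)}$ weights keep the average channel power normalized, so the Rician factor only trades off the deterministic and scattered components.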
In particular, we let $a_{AI}$, $a_{Au}$, $a_{{Iu}}$, and $a_{uu}$ denote the path loss exponents of the AP-IRS link, AP-user link, IRS-user link, and user-user link, respectively, and let $b_{AI}$, $b_{Au}$, $b_{Iu}$, and $b_{uu}$ represent the Rician factors of these links, respectively. The residual SI channel matrix $\tilde{\mathbf{H}}$ is generated based on the model described in \cite{SIchannel}, and the average power of the SI channel is denoted by $\sigma_{SI}^2$. The system parameters are set as follows unless otherwise stated: $N_t = N_r = N =32$, $M_{\text{U},k} = M_{\text{D},l} = 4, D_{\text{U},k} = D_{\text{D},l} = 4, \forall k,l$, $T = 200$, $\sigma_{\text{U}}^2 = \sigma_{\text{D},l}^2 = -76 $ dBm$, \forall l$, $\sigma_{SI}^2 = -60 $ dB, $P_{\text{U},k} = 24$ dBm$, \forall k$, $P_{AP} = 44$ dBm, $\alpha_{k} = \beta_{l} = 1, \forall k,l$, $a_{AI} = 2.4$, $a_{Au} = 3.8$, $a_{Iu} = 2.2$, $a_{uu} = 3.0$, $b_{AI} = b_{Iu} = 3$ dB, $b_{Au} = -3$ dB, and $b_{uu} = 0$ dB. For the algorithm parameters, we set $I_{max} = 100$, $\delta = 10^{-4}$, $\varrho^t = \frac{10}{(10+t)^{0.6}}$, $\gamma^t = \frac{15}{15+t}$, and $\varpi = 0.5$. The default number of layers of the proposed deep-unfolding NN is $8$ and the learning rate is chosen as $0.001$. The black-box NN consists of $3$ CLs and $5$ FCLs, and we adopt the Adam optimizer with the same learning rate of $0.001$. Moreover, the batch sizes of the SSCA-based algorithm, the deep-unfolding NN, and the black-box NN are all set to $5$. All experiments are conducted on a desktop with an Intel CPU (i5-8400 with 6 cores) and 8GB of RAM. The benchmarks are provided as follows: \begin{figure}[!t] \centering \scalebox{0.66}{\includegraphics{setup80-eps-converted-to}} \caption{Simulation setup.}\label{fig:setting} \vspace{-1em} \end{figure} \begin{itemize} \item SSCA: The proposed SSCA-based mixed-timescale joint active and passive beamforming algorithm. \item Deep-unfolding NN: The proposed deep-unfolding NN introduced in Section IV.
\item Black-box NN: The benchmark black-box NN. \item Full CSI: The single-timescale algorithm that collects high-dimensional full CSI in each time slot and optimizes the active beamforming matrices using Algorithm 1 and optimizes the IRS passive beamforming matrix using the one-iteration BCD algorithm. \item No IRS: The conventional scheme without IRS which directly employs Algorithm \ref{BCDtype} to optimize the active beamforming matrices. \item Random IRS: In this algorithm, the IRS passive beamforming matrix is randomly generated and Algorithm \ref{BCDtype} is adopted to optimize the active beamforming matrices. \item HD: The conventional HD scheme where the WMMSE algorithm is adopted for optimizing the active beamforming matrices and the IRS passive beamforming matrix is randomly generated. \end{itemize} \begin{figure}[t] \centering \subfloat[]{\label{fig:short-term_convergence}{\includegraphics[width=0.25\textwidth]{BCD_convergence_a-eps-converted-to}}} \subfloat[]{\label{fig:long-term_convergence}{\includegraphics[width=0.25\textwidth]{convergence_b-eps-converted-to}}} \caption{(a) Convergence performance of the BCD-type short-term active beamforming algorithm. (b) Convergence performance of the SSCA-based long-term passive beamforming algorithm.} \label{fig:fig3} \vspace{-0.4em} \end{figure} \begin{figure}[!t] \centering \scalebox{0.63}{\includegraphics{convergence_unfolding-eps-converted-to}} \caption{Convergence performance of the proposed deep-unfolding NN.}\label{fig:unfolding_convergence} \vspace{-0.8em} \end{figure} Fig.~\ref{fig:fig3}(\subref*{fig:short-term_convergence}) shows the convergence behavior of the proposed short-term active beamforming algorithm. It is observed that the BCD-type algorithm converges monotonically within 100 iterations. Fig.~\ref{fig:fig3}(\subref*{fig:long-term_convergence}) shows the value of the objective function versus the number of iterations for the SSCA-based long-term passive beamforming algorithm. 
As shown in this figure, the weighted sum-rate converges within around 100 iterations. We also observe some fluctuations of the objective function, which are due to the randomness of the sampled channels. Fig.~\ref{fig:unfolding_convergence} presents the impact of the learning rate on the convergence of the proposed deep-unfolding NN. As we can see, the sum-rate performance improves as the learning rate decreases from $0.1$ to $0.001$. Further decreasing the learning rate to $0.0001$ brings no evident performance improvement but significantly slows down convergence. Hence, we choose $0.001$ as the learning rate in our tests. \begin{table*}[htbp] \centering \caption{Weighted sum-rate performance versus the number of collected/training samples.} \begin{tabular}{ccccccccc} \hline Collected samples&5&10&15&20&25&30&35&40 \\ \hline SSCA&$88.79\%$&$93.73\%$&$97.02\%$&$98.32\%$&$99.57\%$&$99.92\%$&$100\%$&$100\%$ \\ \hline \end{tabular} \vspace{0.4mm} \begin{tabular}{ccccccccc} \hline Training samples &100&200&300&400&500&600&700&800\\ \hline Deep-unfolding NN&$90.02\%$&$94.13\%$&$96.04\%$&$97.53\%$&$97.68\%$&$97.70\%$&$97.71\%$&$97.71\%$ \\ \hline \end{tabular} \vspace{0.4mm} \begin{tabular}{ccccccccc} \hline Training samples&500&1000&1500&2000&2500&3000&3500&4000 \\ \hline Black-box NN&$75.03\%$&$79.59\%$&$83.20\%$&$85.52\%$&$86.27\%$&$86.42\%$&$86.62\%$&$86.64\%$ \\ \hline \end{tabular} \label{table:sample} \vspace{-0.4em} \end{table*} \begin{table*}[htbp] \centering \caption{Weighted sum-rate performance versus the number of layers.} \begin{tabular}{c|ccccccccc} \hline Layers&2&3&4&5&6&7&8&9&10 \\ \hline SSCA&$78.02\%$&$80.01\%$&$83.08\%$&$85.41\%$&$86.55\%$&$87.34\%$&$88.46\%$&$89.35\%$&$90.21\%$ \\ \hline Deep-unfolding NN&$90.76\%$&$92.56\%$&$95.70\%$&$96.31\%$&$97.57\%$&$97.69\%$&$97.72\%$&$97.71\%$&$97.72\%$ \\ \hline Black-box NN&$86.09\%$&$86.23\%$&$86.50\%$&$86.83\%$&$86.18\%$&$86.81\%$&$86.21\%$&$86.64\%$&$86.17\%$ \\
\hline \end{tabular} \vspace{0.1em} \label{table:layer} \end{table*} Table~\ref{table:sample} shows the weighted sum-rate performance versus the number of collected/training samples. Note that the results are all normalized by a reference value, namely the weighted sum-rate of the SSCA-based algorithm when it collects $100$ samples for optimizing $\boldsymbol{\theta}$. As we can see, the SSCA-based algorithm is very efficient: about 30 samples are sufficient for it to learn the channel statistics. We also observe that the black-box NN requires the most channel samples for training, while the proposed deep-unfolding NN needs far fewer training samples since it fully exploits the structure of our proposed SSCA-based mixed-timescale beamforming algorithm\;\!\footnote{Note that the numbers of training samples required for the deep-unfolding NN and the black-box NN refer to the offline training stage. For online deployment, since the channel statistics vary continuously between adjacent time blocks, transfer learning and meta learning can be used to significantly reduce the required training samples and time in each coherence time block.}. Table~\ref{table:layer} presents the weighted sum-rate performance versus the number of layers. Similarly, the results are all normalized by a reference value, namely the weighted sum-rate of the SSCA-based algorithm with 100 layers. Note that the numbers of layers of the SSCA-based algorithm and the black-box NN refer to the maximum iteration number $I_{max}$ of the BCD-type short-term active beamforming algorithm and the number of FCLs (each with $1000$ neurons), respectively. We observe that the performance of the SSCA-based algorithm monotonically increases with $I_{max}$; when $I_{max}=10$, it achieves $90.21\%$ of its converged performance. We can also see that the performance of the deep-unfolding NN increases with the number of layers $I_{u}$ when it is small.
When $I_{u}$ is greater than $7$, the performance fluctuates. Hence, $I_{u}$ can be set to $7$ or $8$, which achieves a good balance between performance and computational complexity. For the black-box NN, increasing the number of layers does not significantly improve the performance. \begin{table*}[t] \centering \caption{Weighted sum-rate performance versus different numbers of AP antennas ($N$).} \begin{tabular}{c|cccccc} \hline $N$&8&16&32&64&128&256 \\ \hline SSCA (bits/s/Hz)&$26.49$&$30.70$&$34.73$&$39.19$&$43.89$&$48.65$ \\ \hline Deep-unfolding NN&$98.94\%$&$98.32\%$&$97.71\%$&$96.97\%$&$96.13\%$&$95.08\%$ \\ \hline Black-box NN&$90.95\%$&$88.65\%$&$86.01\%$&$83.19\%$&$80.42\%$&$77.94\%$ \\ \hline \end{tabular} \label{table:antenna} \vspace{0.1em} \end{table*} \begin{table*}[htbp] \centering \caption{The CPU running time of the analyzed schemes.} \begin{tabular}{c|cc|cccc} \hline \multirow{2}*{$(N,T)$} &\multicolumn{2}{c|}{CPU training time (min)}& \multicolumn{4}{c}{CPU testing time (s)} \\ ~ & deep-unfolding & black-box&SSCA & deep-unfolding & black-box&full CSI \\ \hline (8,100) &10.80&35.77&2.82&0.052&0.016&18.62 \\ \hline (16,100) &11.53&38.85&2.90&0.054&0.017&18.71 \\ \hline (32,100) &15.89&41.87&3.07&0.056&0.019&18.89 \\ \hline (64,100) &35.75&57.00&3.65&0.058&0.021&19.47 \\ \hline (128,100) &52.37&126.54&6.21&0.071&0.027&22.03 \\ \hline (256,100) &102.15&267.31&21.45&0.15&0.071&37.33 \\ \hline (256,200) &119.14&289.63&21.45&0.15&0.071&66.26 \\ \hline (256,300) &137.46&312.34&21.45&0.15&0.071&250.58 \\ \hline (256,400) &159.23&343.77&21.45&0.15&0.071&1085.18 \\ \hline \end{tabular} \label{table:time} \end{table*} Table~\ref{table:antenna} shows the weighted sum-rate performance versus the number of antennas at the AP ($N$). The sum-rate performance of the deep-unfolding NN and the black-box NN is normalized by the corresponding sum-rate of the SSCA-based algorithm.
When $N$ is small, the deep-unfolding NN achieves performance very close to that of the SSCA-based algorithm. It suffers a slight performance degradation as $N$ increases, but it still achieves more than $95\%$ of the performance of the SSCA-based algorithm and is significantly better than the black-box NN. Table~\ref{table:time} compares the CPU training time and the testing time of the different schemes when the number of antennas at the AP ($N$) and the number of reflecting elements at the IRS ($T$) change. We observe that the training time of the deep-unfolding NN is less than that of the black-box NN since it fully exploits the structure of the proposed SSCA-based mixed-timescale beamforming algorithm. In terms of testing time, the proposed deep-unfolding NN and the black-box NN provide a significant advantage over the SSCA-based algorithm and the full CSI scheme, which demonstrates the efficiency of the learning-based approaches. Moreover, the testing time of the full CSI scheme increases dramatically with $T$ while the testing time of the other mixed-timescale algorithms remains the same. This validates that the mixed-timescale beamforming scheme is much more suitable for practical designs. \begin{figure}[!t] \centering \scalebox{0.63}{\includegraphics{performance_csidelay-eps-converted-to}} \caption{The weighted sum-rate performance versus the CSI delay $\tau$. }\label{fig:performance_csidelay} \vspace{-0.3em} \end{figure} Next, we investigate the impact of the CSI delay $\tau$ on the different schemes. We adopt the delay model in \cite{CSI_delay_model} and assume that the CSI delay is proportional to the number of CSI signaling bits~\cite{FDrelay2}. If the CSI delay of the single-timescale algorithm is given by $\tau$, then that of our proposed mixed-timescale algorithm can be computed as $\tau_{m} = \frac{Q_m}{Q_s}\tau$, where $Q_m$ and $Q_s$ denote the numbers of CSI signaling bits of the mixed-timescale and single-timescale schemes, respectively.
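As a minimal numerical illustration of this delay model (the values of $Q_m$, $Q_s$, and $\tau$ below are hypothetical, not taken from the paper):

```python
def mixed_timescale_delay(tau_single_ms, q_mixed_bits, q_single_bits):
    """CSI delay of the mixed-timescale scheme, tau_m = (Q_m / Q_s) * tau,
    under the assumption that delay is proportional to CSI signaling bits."""
    return q_mixed_bits / q_single_bits * tau_single_ms

# Hypothetical example: the mixed-timescale scheme feeds back 1/4 of the bits,
# so its effective CSI delay shrinks by the same factor.
tau_m = mixed_timescale_delay(0.8, 100, 400)   # 0.2 ms
```

The linear scaling makes explicit why the mixed-timescale scheme is less exposed to CSI aging: cutting the feedback payload directly cuts the delay.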
Fig.~\ref{fig:performance_csidelay} shows the weighted sum-rate performance of the different schemes versus the CSI delay $\tau$. As we can see, the proposed mixed-timescale algorithms are insensitive to the CSI delay, while the single-timescale scheme relying on the full CSI suffers from severe performance degradation. When the CSI delay is greater than $0.6$ ms, the proposed SSCA-based mixed-timescale beamforming algorithm and deep-unfolding NN outperform the single-timescale algorithm. Moreover, compared with the conventional optimization-based algorithm, the deep-unfolding NN and black-box NN are even more robust to the CSI delay. This is because the NN-based algorithms can learn the CSI errors from the data and alleviate the performance deterioration. \begin{figure}[!t] \centering \scalebox{0.63}{\includegraphics{performance_element-eps-converted-to}} \caption{The weighted sum-rate performance versus the number of reflecting elements $T$. }\label{fig:performance_element} \vspace{-0.3em} \end{figure} \begin{figure}[!t] \centering \scalebox{0.63}{\includegraphics{performance_power-eps-converted-to}} \caption{The weighted sum-rate performance versus different transmit power $P$.}\label{fig:performance_power} \vspace{-0.3em} \end{figure} Fig.~\ref{fig:performance_element} presents the weighted sum-rate of the different schemes versus the number of reflecting elements of the IRS. As we can see, the proposed deep-unfolding NN approaches the SSCA-based optimization algorithm and significantly outperforms the other schemes. It is also observed that the weighted sum-rate achieved by the SSCA-based algorithm, the deep-unfolding NN, and the black-box NN increases more rapidly with $T$ than that of the random IRS scheme. This is due to the fact that the joint active and passive beamforming design provides a remarkable gain. The full CSI scheme suffers from severe performance degradation due to CSI mismatches and is only comparable with the random IRS scheme.
Moreover, the FD scheme provides a substantial gain over the HD scheme, and the gain increases with $T$. Furthermore, the scheme without IRS yields the worst performance among all the analyzed algorithms, which demonstrates that the IRS can tremendously enhance the spectral efficiency of conventional FD systems. Fig.~\ref{fig:performance_power} illustrates the performance under different transmit power $P$. Note that we set the power budget of the UL users as $P_{\text{U},k} = P,\forall k$ and that of the AP as $P_{AP} = P+20$ dB. As we can see, the performance of the different schemes increases almost linearly with the transmit power (except for the full CSI scheme, whose performance is deteriorated by the CSI error). We also observe that the proposed SSCA-based algorithm and deep-unfolding NN both achieve better performance than the other schemes under different values of transmit power, which validates the effectiveness of our proposed design. \begin{figure}[!t] \centering \scalebox{0.63}{\includegraphics{quantization-eps-converted-to}} \caption{The weighted sum-rate performance versus different numbers of quantization bits.}\label{fig:quantization} \vspace{-0.0em} \end{figure} Fig.~\ref{fig:quantization} shows the sum-rate performance versus the number of quantization bits of the IRS. From the figure, the proposed SSCA-based algorithm, the proposed deep-unfolding NN, and the black-box NN are not sensitive to the number of quantization bits and can achieve near-optimal performance with only a few bits. The quantization of the IRS phase shifters has little effect on the HD scheme and the random IRS scheme because the phase shifters of the IRS are random in these schemes. It is also observed that the full CSI scheme is the most sensitive to the number of quantization bits, and its sum-rate performance is very poor when only a few bits are used.
This is because in the single-timescale scheme, $\boldsymbol{\theta}$, $\mathbf{P}_{k}$, and $\mathbf{F}_{l}$ are optimized alternately based on the full CSI. Thus, the optimality of $\mathbf{P}_{k}$ and $\mathbf{F}_{l}$ relies on $\boldsymbol{\theta}$ having infinite precision; when $\boldsymbol{\theta}$ is quantized, the derived $\mathbf{P}_{k}$ and $\mathbf{F}_{l}$ are no longer optimal. In comparison, in the mixed-timescale scheme, $\mathbf{P}_{k}$ and $\mathbf{F}_{l}$ are optimized based on the effective CSI, which consists of the full CSI and the quantized $\boldsymbol{\theta}$. The optimality of $\mathbf{P}_{k}$ and $\mathbf{F}_{l}$ thus holds for the quantized $\boldsymbol{\theta}$, and the proposed mixed-timescale scheme is more robust to the quantization error of the IRS phase shifters. \begin{figure}[!t] \centering \scalebox{0.63}{\includegraphics{SI_antenna8-eps-converted-to}} \caption{The weighted sum-rate performance versus the self-interference power $\sigma_{SI}^2$ ($N = 8$).}\label{fig:SI} \end{figure} Fig.~\ref{fig:SI} illustrates the sum-rate performance under different levels of self-interference power when $N=8$. From the figure, the overall sum-rate performance of the SSCA-based algorithm does not change much as the self-interference increases. However, when $\sigma_{SI}^2$ is larger than $-40$ dB, the rate gap between the UL and DL users becomes larger. When the self-interference increases, the overall sum-rate performance of the proposed deep-unfolding NN decreases slightly, but the deep-unfolding NN still achieves a relatively balanced sum-rate performance between the UL and DL users. Therefore, the proposed deep-unfolding NN can provide a better quality of service (QoS) for the UL users, especially when the self-interference is strong. As for the black-box NN, the overall sum-rate performance suffers from a severe deterioration when $\sigma_{SI}^2$ becomes larger.
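The phase-shift quantization and effective-CSI construction discussed above can be sketched as follows. This is a minimal illustration, not the paper's exact system model: the array shapes, the uniform quantization grid, and the variable names are all assumptions.

```python
import numpy as np

def quantize_phases(theta, bits):
    """Quantize continuous IRS phase shifts to the nearest of 2**bits
    uniformly spaced levels in [0, 2*pi)."""
    step = 2 * np.pi / (2 ** bits)
    return (np.round(np.mod(theta, 2 * np.pi) / step) * step) % (2 * np.pi)

def effective_channel(h_direct, g_ap_irs, h_irs_user, theta):
    """Effective CSI seen by the short-term beamformer: the direct path
    plus the IRS-reflected path with the (quantized) phase shifts applied."""
    return h_direct + g_ap_irs @ np.diag(np.exp(1j * theta)) @ h_irs_user

# Toy dimensions: 4 receive antennas, 8 IRS elements, 2 transmit antennas.
rng = np.random.default_rng(0)
theta_q = quantize_phases(rng.uniform(0, 2 * np.pi, 8), bits=2)
h_eff = effective_channel(
    rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2)),
    rng.normal(size=(4, 8)) + 1j * rng.normal(size=(4, 8)),
    rng.normal(size=(8, 2)) + 1j * rng.normal(size=(8, 2)),
    theta_q,
)
```

Because the short-term matrices are fitted to `h_eff`, which already embeds the quantized phases, their optimality is unaffected by the quantization step, mirroring the robustness argument above.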
\begin{table*}[htbp] \setlength\tabcolsep{2.8pt} \centering \caption{Weighted sum-rate performance versus random locations of users.} \begin{tabular}{c|cccccc} \hline $R_0\, \text{(m)}$&0&2&4&6&8&10 \\ \hline SSCA&$34.73\,(100\%)$&$34.45\,(99.16\%)$&$34.14\,(98.26\%)$&$33.92\,(97.62\%)$&$33.73\,(97.09\%)$&$33.64\,(96.81\%)$ \\ \hline Deep-unfolding NN&$33.94\,(100\%)$&$33.81\,(99.61\%)$&$33.68\,(99.23\%)$&$33.57\,(98.89\%)$&$33.50\,(98.69\%)$&$33.44\,(98.51\%)$ \\ \hline Black-box NN&$29.87\,(100\%)$&$29.84\,(99.89\%)$&$29.80\,(99.76\%)$&$29.76\,(99.63\%)$&$29.65\,(99.27\%)$&$29.51\,(98.77\%)$ \\ \hline \end{tabular} \label{table:location} \end{table*} Table~\ref{table:location} shows the weighted sum-rate performance when the locations of the users are random. Specifically, each user is randomly located in a circle centered at its original position with radius $R_0$. Note that the percentages in brackets denote the sum-rate values normalized by the first column. From the table, the weighted sum-rate of all schemes decreases with $R_0$. The weighted sum-rate of the SSCA-based algorithm declines the fastest: when $R_0 = 10$ m, it achieves $96.81\%$ of its fixed-location performance. The deep-unfolding NN and the black-box NN have more learnable parameters and can adapt well to the randomness of the channels; when $R_0 = 10$ m, they achieve $98.51\%$ and $98.77\%$ of their fixed-location performance, respectively. \begin{figure}[!t] \centering \scalebox{0.63}{\includegraphics{channel_error-eps-converted-to}} \caption{The weighted sum-rate performance versus the CSI error variance $\sigma_{CE}^2$.}\label{fig:channel_error} \end{figure} Fig.~\ref{fig:channel_error} presents the achievable weighted sum-rate performance versus the channel estimation error.
Specifically, the estimated channel is modeled as $\bar{\mathbf{H}} = \mathbf{H}+\bigtriangleup \mathbf{H}$, where $\mathbf{H}$ is the true channel matrix and $\bigtriangleup \mathbf{H}$ is the channel error matrix. We assume that the elements of $\bigtriangleup \mathbf{H}$ are independent and follow the Gaussian distribution with zero mean and variance $p_{H}\sigma_{CE}^2$, where $p_{H}$ denotes the average power of the elements in $\mathbf{H}$ and $\sigma_{CE}^2$ indicates the strength of the channel estimation error. From Fig.~\ref{fig:channel_error}, the performance of all schemes degrades with the channel estimation error. Moreover, the weighted sum-rate achieved by the conventional optimization-based algorithms deteriorates severely as $\sigma_{CE}^2$ increases, while the learning-based algorithms are much more robust. This is because the learning-based algorithms can learn the channel estimation errors from the training samples and alleviate the performance degradation. As we can see, the proposed deep-unfolding NN starts to outperform the SSCA-based algorithm when $\sigma_{CE}^2$ is larger than $-30$ dB, which further demonstrates the benefits of the proposed deep-unfolding design. \section{Conclusion} In this paper, we have investigated a MIMO IRS-assisted FD system and formulated a mixed-timescale beamforming design problem to reduce the heavy CSI signaling overhead. To tackle this highly non-convex optimization problem, an efficient mixed-timescale SSCA-based optimization algorithm has been developed. Moreover, to further reduce the computational complexity of the proposed SSCA-based algorithm, we have developed a novel deep-unfolding beamforming algorithm. The deep-unfolding NN consists of an LPBN and an SABN; it maintains the structure of the SSCA-based algorithm but introduces a novel non-linear activation function and learnable parameters, induced by a first-order Taylor expansion, to approximate the matrix inversion.
It also ties the long-term passive beamforming matrix and the short-term active beamforming matrices together more tightly than the SSCA-based optimization algorithm does. Simulation results verified that the proposed deep-unfolding NN achieves the performance of the SSCA-based optimization algorithm with significantly reduced complexity. \vspace{-0.0em} \begin{appendices} \section{Solutions to the Subproblems of the BCD-Type Short-Term Active Beamforming Design} \subsubsection{Subproblem w.r.t. $\mathbf{U}_{{\rm U},k},\mathbf{U}_{{\rm D},l}$} The subproblem w.r.t. $\mathbf{U}_{{\rm U},k}$ is given by \begin{equation} \min_{\mathbf{U}_{{\rm U},k}} {\rm Tr} (\mathbf{W}_{\text{U},k}\mathbf{U}_{\text{U},k}^\text{H}\mathbf{A}_{\text{U},k}\mathbf{U}_{\text{U},k}) - 2\Re e \{{\rm Tr}(\mathbf{W}_{\text{U},k}\mathbf{U}_{\text{U},k}^\text{H}\bar{\mathbf{H}}_{\text{U},k}\mathbf{P}_k)\}, \vspace{-0.1em} \end{equation} where \begin{equation} \mathbf{A}_{\text{U},k} \triangleq \left(\sum_{k^{'} = 1}^{K}\bar{\mathbf{H}}_{\text{U},k^{'}}\mathbf{P}_{k^{'}}\mathbf{P}_{k^{'}}^{\rm H}\bar{\mathbf{H}}_{\text{U},k^{'}}^{\rm H}+\sum_{l=1}^{L}\tilde{\mathbf{H}}\mathbf{F}_l\mathbf{F}_l^{\rm H}\tilde{\mathbf{H}}^{\rm H}+\sigma_{\text{U}}^2\mathbf{I}\right). \label{AU} \vspace{-0.1em} \end{equation} By applying the first-order optimality condition, the solution of $\mathbf{U}_{{\rm U},k}$ is given by\vspace{-0.1em} \begin{equation} \mathbf{U}_{{\rm U},k} = \mathbf{A}_{\text{U},k}^{-1} \bar{\mathbf{H}}_{\text{U},k}\mathbf{P}_k.
\label{UU_update} \vspace{-0.1em} \end{equation} Similarly, we obtain the solution of $\mathbf{U}_{{\rm D},l}$ as \vspace{-0.1em} \begin{equation} \mathbf{U}_{{\rm D},l} = \mathbf{A}_{\text{D},l}^{-1} \bar{\mathbf{H}}_{\text{D},l}\mathbf{F}_l, \label{UDupdate} \vspace{-0.1em} \end{equation} where \begin{equation} \mathbf{A}_{{\rm D},l} \triangleq \left(\sum_{l^{'}=1}^{L}\bar{\mathbf{H}}_{\text{D},l}\mathbf{F}_{l^{'}}\mathbf{F}_{l^{'}}^{\rm H}\bar{\mathbf{H}}_{\text{D},l}^{\rm H}+\sum_{k=1}^{K}\bar{\mathbf{J}}_{k,l}\mathbf{P}_k\mathbf{P}_k^{\rm H}\bar{\mathbf{J}}_{k,l}^{\rm H}+\sigma_{\text{D},l}^2\mathbf{I}\right). \label{AD} \vspace{-0.1em} \end{equation} \vspace{-0.1em} \subsubsection{Subproblem w.r.t. $\mathbf{W}_{\text{U},k},\mathbf{W}_{\text{D},l}$} The subproblem w.r.t. $\mathbf{W}_{\text{U},k}$ is given by \vspace{-0.1em} \begin{equation} \min_{\mathbf{W}_{\text{U},k}} \quad \text{Tr}\left(\mathbf{W}_{\text{U},k}\mathbf{E}_{\text{U},k}\right)-\log \det\left(\mathbf{W}_{\text{U},k}\right). \vspace{-0.1em} \end{equation} By checking the first-order optimality condition, we obtain the optimal solution as \vspace{-0.1em} \begin{equation} \mathbf{W}_{\text{U},k} =\mathbf{E}_{\text{U},k}^{-1}. \label{WUupdate} \vspace{-0.1em} \end{equation} Similarly, the optimal solution for $\mathbf{W}_{\text{D},l}$ can be derived as \vspace{-0.1em} \begin{equation} \mathbf{W}_{\text{D},l} =\mathbf{E}_{\text{D},l}^{-1}. \label{WDupdate} \vspace{-0.1em} \end{equation} \subsubsection{Subproblem w.r.t. $\mathbf{P}_k$} After appropriate rearrangement, we can write the subproblem w.r.t.
$\mathbf{P}_k$ as \begin{subequations} \begin{align} \min_{\mathbf{P}_k} \quad &{\rm Tr} (\mathbf{P}_k^{\rm H} \mathbf{A}_{\text{P},k} \mathbf{P}_k) - 2\alpha_k\Re e \{\text{Tr}(\mathbf{P}_k^{\rm H} \bar{\mathbf{H}}_{\text{U},k}^{\rm H}\mathbf{U}_{\text{U},k}\mathbf{W}_{\text{U},k})\}\\ \text{s.t.} \quad & {\rm Tr} (\mathbf{P}_k^{\rm H} \mathbf{P}_k) \leq P_{\text{U},k}, \end{align} \label{subproblemP} \end{subequations} where \begin{equation} \begin{split} \mathbf{A}_{\text{P},k} &\triangleq \sum_{k^{'}=1}^{K}\alpha_{k^{'}}\bar{\mathbf{H}}_{\text{U},k}^{\rm H}\mathbf{U}_{\text{U},k^{'}}\mathbf{W}_{\text{U},k^{'}}\mathbf{U}_{\text{U},k^{'}}^{\rm H}\bar{\mathbf{H}}_{\text{U},k} \\ &\qquad+ \sum_{l=1}^{L}\beta_l \bar{\mathbf{J}}_{k,l}^{\rm H}\mathbf{U}_{\text{D},l}\mathbf{W}_{\text{D},l}\mathbf{U}_{\text{D},l}^{\rm H}\bar{\mathbf{J}}_{k,l}. \end{split}\label{APk} \vspace{-0.2em} \end{equation} It is readily seen that \eqref{subproblemP} is a convex optimization problem. Therefore, by introducing Lagrange multipliers $\lambda_k \geq 0, \forall k$ and applying the Karush–Kuhn–Tucker (KKT) condition, we can express the optimal solution to $\mathbf{P}_k$ as \vspace{-0.1em} \begin{equation} \mathbf{P}_k = \alpha_k(\mathbf{A}_{\text{P},k}+\lambda_k\mathbf{I})^{-1} \bar{\mathbf{H}}_{\text{U},k}^{\rm H}\mathbf{U}_{\text{U},k}\mathbf{W}_{\text{U},k}. \label{Pupdate} \vspace{-0.1em} \end{equation} Denote $Q(\lambda_k) = {\rm Tr} (\mathbf{P}_k^{\rm H} \mathbf{P}_k)- P_{\text{U},k}$. If $Q(0) \leq 0$, then we have $\lambda_k = 0$, otherwise, we have $\lambda_k = \lambda_k^*$, where $\lambda_k^*$ is obtained by solving the equation $Q(\lambda_k^*) = 0$ via the bisection search. \subsubsection{Subproblem w.r.t. $\mathbf{F}_l$} Similar to the problem w.r.t. $\mathbf{P}_{k}$, after appropriate rearrangement, we express the subproblem w.r.t. 
$\mathbf{F}_l$ as \vspace{-0.1em} \begin{subequations} \label{PBS} \begin{align} \min_{\mathbf{F}_{l}} \quad & {\rm Tr} (\mathbf{F}_l^{\rm H} \mathbf{A}_{\text{F}} \mathbf{F}_l) - 2\beta_l\Re e \{\text{Tr}(\mathbf{F}_l^{\rm H} \bar{\mathbf{H}}_{\text{D},l}^{\rm H}\mathbf{U}_{\text{D},l}\mathbf{W}_{\text{D},l})\} \\ \text{s.t.} \quad & {\rm Tr} (\mathbf{F}_l^{\rm H} \mathbf{F}_l) \leq P_{AP}, \vspace{-0.1em} \end{align} \end{subequations} where \vspace{-0.1em} \begin{equation} \begin{split} \mathbf{A}_{\text{F}} &\triangleq \sum_{l^{'}=1}^{L}\beta_{l^{'}}\bar{\mathbf{H}}_{\text{D},l^{'}}^{\rm H}\mathbf{U}_{\text{D},l^{'}}\mathbf{W}_{\text{D},l^{'}}\mathbf{U}_{\text{D},l^{'}}^{\rm H}\bar{\mathbf{H}}_{\text{D},l^{'}} \\ &\qquad +\sum_{k=1}^{K}\alpha_k \tilde{\mathbf{H}}^{\rm H}\mathbf{U}_{\text{U},k}\mathbf{W}_{\text{U},k}\mathbf{U}_{\text{U},k}^{\rm H}\tilde{\mathbf{H}}. \end{split} \label{AF} \vspace{-0.1em} \end{equation} By introducing a Lagrange multiplier $\mu \geq 0$ to problem (\ref{PBS}) and employing the KKT condition, we obtain the optimal $\mathbf{F}_l$ as \vspace{-0.1em} \begin{equation} \mathbf{F}_l = \beta_l(\mathbf{A}_{\text{F}}+\mu \mathbf{I})^{-1}\bar{\mathbf{H}}_{\text{D},l}^{\rm H}\mathbf{U}_{\text{D},l}\mathbf{W}_{\text{D},l}, \label{Fupdate} \vspace{-0.1em} \end{equation} where $\mu$ can be found similarly via the bisection search. \end{appendices}
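The bisection search for the power-constraint multiplier described above can be sketched as follows. This is an illustrative implementation under simplifying assumptions ($\mathbf{A}$ positive definite, unit priority weight); it is not the paper's exact routine.

```python
import numpy as np

def solve_precoder(a_mat, b_mat, p_max, tol=1e-10):
    """Solve min Tr(P^H A P) - 2 Re Tr(P^H B) s.t. Tr(P^H P) <= p_max.
    The KKT solution is P(lam) = (A + lam*I)^{-1} B with lam >= 0 chosen
    so that Q(lam) = Tr(P^H P) - p_max = 0; Q is decreasing in lam, so
    a bisection search suffices."""
    n = a_mat.shape[0]
    precoder = lambda lam: np.linalg.solve(a_mat + lam * np.eye(n), b_mat)
    power = lambda lam: np.real(np.trace(precoder(lam).conj().T @ precoder(lam)))
    if power(0.0) <= p_max:        # constraint inactive, lam = 0
        return precoder(0.0)
    lo, hi = 0.0, 1.0
    while power(hi) > p_max:       # grow the bracket until Q(hi) <= 0
        hi *= 2.0
    while hi - lo > tol:           # standard bisection on [lo, hi]
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if power(mid) > p_max else (lo, mid)
    return precoder(hi)
```

The same routine applies to both $\mathbf{P}_k$ (with $\mathbf{A}_{\text{P},k}$, budget $P_{\text{U},k}$) and $\mathbf{F}_l$ (with $\mathbf{A}_{\text{F}}$, budget $P_{AP}$), since both subproblems share the same quadratic structure.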
\section{Conclusion} \label{sec:conclusion} We presented the first large automotive dataset for detection with event cameras. Thanks to this dataset, we open the way to the training of deep learning models for detection with event-based cameras. We also expect benefits in other applications, such as object tracking, and unsupervised learning of optical flow and monocular depth, among others. We hope that the event-based\ research community will greatly benefit from this dataset and that it will soon become a reference benchmark. We also believe that, thanks to the availability of such a large dataset, the accuracy of event-based vision systems will undergo considerable advances. \section{Analysis and Statistics} \label{sec:statistics} In this section, we extract some statistics from the ATIS Automotive Detection Dataset and compare it to existing event-based\ datasets. We start by analyzing the properties of the raw event stream. In particular, we study the rate of events generated during the recordings. To do so, we split the recordings into $1$ ms intervals and compute the average event rate in each interval, without any filtering or noise removal. We then build a histogram from these measurements. As we can see from Fig.~\ref{fig:event_rate}, the majority of the samples have a very low data rate, below $200$ Kev/s. However, the distribution has a long tail, with peaks reaching up to $3$ Mev/s. These peaks correspond to scenes with very strong lighting changes, such as flickering lights or fast repeated transitions from bright sun to shadow. \input{figs/event_rate.tex} We then study the distribution of the annotated bounding boxes.
Similarly to~\cite{Dollar12}, we compute the heat map of the bounding box locations (Fig.~\ref{fig:heatmaps}). For cars, we observe two principal horizontal axes, corresponding to the two main positions of the camera inside the car. This is less visible in the pedestrian heatmap, probably because pedestrians are mostly seen in city recordings, where the camera position was most stable. We also notice a larger number of boxes in the right part of the image. This is due to the fact that driving is mostly conducted on the right lane of the road, and therefore objects on the left part appear smaller and are more often discarded by the 30 pixel diagonal threshold. \input{figs/heatmaps.tex} In Fig.~\ref{fig:box_stats}\textbf{(a,b)} we show the histogram of the bounding box aspect ratio, computed as width over height. Histograms are computed on the train, validation and test splits independently. For pedestrians, the aspect ratio has a Gaussian distribution with mean around 0.35, while for cars the histogram is closer to a bimodal distribution. This is due to the fact that the aspect ratio varies depending on the point of view of the car: cars seen from the front or from behind have a ratio closer to 1, while cars seen from the side have a larger aspect ratio. In Fig.~\ref{fig:box_stats}\textbf{(c,d)}, we show instead the histogram of the bounding box diagonal. For both cars and pedestrians we observe a long-tailed distribution, starting from the 30 pixel threshold set for manual annotation. Finally, we observe that the train, validation and test splits have similar statistics. \input{figs/box_stats.tex} We conclude by comparing the dataset with other existing event-based\ datasets. As shown by Tab.~\ref{tab:datasets}, the GEN1 Automotive Detection Dataset is three times larger than the DDD17~\cite{Binas17} dataset in terms of hours and has about 22 times more labels than the~\cite{Miao19} pedestrian dataset.
In terms of number of labels, the~\cite{Bi19} dataset is the second largest, with approximately 2.5 times fewer labels than ours. However,~\cite{Bi19} considers a classification task and each sample is only 100 ms long. \begin{table*}[htpb] \caption{Comparison of available event-based\ datasets for different tasks. The GEN1 Automotive Detection (GAD) Dataset is the largest in terms of both number of hours and number of manual annotations. It is also the only automotive dataset with semantic bounding box labels for detection.} \begin{center} \begin{tabular}{@{}llcccc@{}} \toprule \textbf{Dataset} & \textbf{Task} & \textbf{Max Sample Time (s)} & \textbf{Total Time (h)} & \textbf{\# Labels} & \textbf{\# Classes} \\ \hline {AAD Dataset (this work)} & Detection for Automotive & 60 (10,020$^*$) & 39.32 & 255,781 & 2 \\ {Pedestrian Dataset~\cite{Miao19}} & Detection for Surveillance & 30 & 0.10 & 11,667 & 1 \\ \hline {N-Mnist~\cite{Orchard15}} & Object Classification & 0.3 & 5.83 & 70,000 & 10 \\ {N-Caltech101~\cite{Orchard15}} & Object Classification & 0.3 & 0.76 & 9,146 & 101 \\ {N-Cars~\cite{Sironi18}} & Object Classification & 0.1 & 0.68 & 24,029 & 2 \\ {DVS-Gestures~\cite{Amir17}} & Gesture Recognition & 6 & 2.24 & 1,342 & 11 \\ {ASL-DVS~\cite{Bi19}} & Gesture Recognition & 0.1 & 2.80 & 100,800 & 24 \\ \hline {MVSEC~\cite{Zhu18b}} & Stereo, Flow, VO & 1,500 & 1.13 & - & - \\ {DDD17~\cite{Binas17}} & Autonomous Driving & 3,135 & 12 & - & - \\ \bottomrule \multicolumn{6}{l}{\small $^*$ Samples are obtained by splitting continuous recordings into 60s chunks.
The longest of the original recordings is 10,020s long.} \end{tabular} \end{center} \label{tab:datasets} \end{table*} \section{The ATIS Automotive Detection Dataset} \label{sec:dataset} \subsection{Event Cameras} \label{subsec:event_cameras} Event cameras are a relatively recent type of sensor encoding visual information in the form of asynchronous events~\cite{Lichtsteiner08,Posch11,Serrano13}. An event corresponds to a change in the log-luminosity intensity at a given pixel location. In an event camera, the photosensitive part is composed of a 2D array of independent pixels. Whenever a pixel detects a change in illuminance intensity, it emits an event containing its $(x,y)$ position in the pixel array, the microsecond timestamp $t$ of the observed change and its polarity $p$. The polarity encodes whether the illuminance intensity increased ($p=1$) or decreased ($p=0$). Compared to standard frame cameras, event cameras have higher temporal resolution, higher dynamic range and lower power consumption. Thanks to these characteristics, event cameras find many applications in automotive, robotics and IoT, where low latency, robustness to challenging lighting conditions and low power consumption are critical requirements. Many event cameras are currently available on the market~\cite{Psee19,Son17,Guo17,Brandli14}. Some of them also provide gray-level information in the form of synchronous frames~\cite{Brandli14,Guo17} or asynchronous event-based measurements~\cite{Psee19}. In this work, we consider a Gen1 304x240 camera~\cite{Posch11}. The luminous intensity measurements from the camera were used to generate standard gray-level images at a given frequency. The images were then manually annotated by human subjects to generate ground-truth bounding boxes around objects of interest. The labeling procedure is explained in detail in Sec.~\ref{subsec:labeling}.
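Given the event encoding just described, a simple way to inspect a slice of the stream is to accumulate events into a per-polarity count image. The following is a minimal sketch; the `(x, y, t, p)` tuple layout is an assumption for illustration, not the dataset's binary format.

```python
import numpy as np

def event_count_image(events, width=304, height=240):
    """Accumulate (x, y, t_us, p) events into a 2-channel count image,
    one channel per polarity, sized for the Gen1 304x240 sensor."""
    img = np.zeros((2, height, width), dtype=np.int32)
    for x, y, t, p in events:
        img[p, y, x] += 1     # polarity selects the channel
    return img

# Three toy events: two ON events at pixel (0, 0), one OFF event at (5, 3).
img = event_count_image([(0, 0, 10, 1), (0, 0, 12, 1), (5, 3, 20, 0)])
```

Such dense representations are a common entry point for feeding asynchronous event streams to standard deep learning detectors.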
\subsection{Data Collection} \label{subsec:data} A GEN1 camera was mounted behind the windshield of a car and connected to a laptop for data recording. Different human drivers, independent from the authors, were asked to perform several rides in different scenarios, but always driving naturally. There are minor variations in the camera position due to repeated mountings of the camera. The scenarios include city with dense traffic, city with low traffic, highway, countryside, small villages and suburbs. All recordings were done on French roads, mainly in, but not limited to, the Ile-de-France region. Recording durations vary from tens of minutes to a maximum of several consecutive hours. \input{figs/graylabels.tex} The data collection campaign was conducted over an entire year, from March 2017 to March 2018, and at variable times of the day, ensuring a large variety of lighting and weather conditions. A total of 39.32 hours, split among 121 recordings, was collected, resulting in about 750GB of uncompressed raw event data. For comparison, a gray-scale frame-based\ camera working at the same resolution and acquiring at a frequency of 120fps (i.e. 100 times lower temporal resolution than the event camera) would generate more than 1.2TB of data\footnote{Ignoring compression and assuming 1 byte per pixel}. In the next section, we describe how the data were manually annotated. \subsection{Labeling Protocol} \label{subsec:labeling} The GEN1 sensor provides, along with the change detection events, gray-level measurements. These measurements can be used to build a gray-level image at any desired time. The time of the last measurement used to generate an image is associated with that image, providing images with the same temporal resolution as the event stream.
Moreover, since gray-level images and the event stream share the same pixel array, annotations on the images can directly be used as ground truth for the event stream, without the need for any calibration or rectification step. Since our primary goal is object detection, we favor low-frequency annotations in order to maximize the variety of object appearances and scenes. Because of this, we generate images at 1, 2 or 4Hz. These images were then given to human annotators to draw bounding boxes around cars and pedestrians. A detailed set of instructions was provided to the annotators to reduce ambiguity and discrepancies between annotations. Due to the resolution and image quality of GEN1 images, objects smaller than 30 pixels have been discarded. Concerning occlusions, an object is annotated if more than 75\% of it is visible, in which case the bounding box is drawn over the whole extent of the object. Buses, trucks, and large vehicles are not considered as cars and therefore have not been annotated; the same applies to motorbikes and two-wheelers. People moving on skateboards or kick-scooters have been labeled as pedestrians, while people sitting inside cars or in buildings have been ignored. After annotation, we obtained a total of 228,123 car and 27,658 pedestrian bounding boxes. More statistics about the dataset are given in Sec.~\ref{sec:statistics}. Example graylevel images together with manual annotations are shown in Fig.~\ref{fig:graylabels}. \subsection{Dataset Format and Download} \label{subsec:format} We split the recordings into train, validation and test sets. To avoid overlap between train and test splits, all chunks from a single recording session are assigned to the same split. In order to facilitate the training of deep learning methods, we cut the continuous recordings into 60-second chunks. This yields a total of 2,359 samples: 1,460 for training, 470 for testing and 429 for validation.
Each sample is provided in a binary .dat format, where events are encoded using 4 bytes for the timestamp and 4 bytes for the position and the polarity. More precisely, 14 bits are used for the $x$ position, 14 bits for the $y$ position and 1 bit for the polarity. Gray-level measurements are not provided with the dataset. Bounding box annotations are provided in a numpy format. Each numpy array contains the following fields: \begin{itemize} \item \texttt{ts}, timestamp of the box in microseconds \item \texttt{x}, abscissa of the top left corner in pixels \item \texttt{y}, ordinate of the top left corner in pixels \item \texttt{w}, width of the box in pixels \item \texttt{h}, height of the box in pixels \item \texttt{class\_id}, class of the object: 0 for cars and 1 for pedestrians \end{itemize} We make the obtained dataset publicly available at the following link: \href{https://www.prophesee.ai/2020/01/24/prophesee-gen1-automotive-detection-dataset/ }{\texttt{https://www.prophesee.ai/2020/01/24/\\prophesee-gen1-automotive-detection-dataset/}}. We also provide sample code together with the dataset to load and visualize samples from the dataset with the corresponding annotations. For evaluating the accuracy of a detection method, we consider the same metrics used for the COCO dataset~\cite{Lin14}. Together with the released code, we provide a wrapper and an example of how to apply the evaluation metrics to our dataset. \section{Related Work} \label{sec:related_work} In this section, we describe the main existing event-based\ datasets. We start by describing labeled datasets for recognition and classification tasks, and then we describe datasets generated for other tasks, such as visual odometry and optical flow. \paragraph{Event-based\ Datasets for Recognition} Early event-based\ datasets have been generated by converting existing frame-based\ datasets to an event representation~\cite{Orchard15,Serrano15,Hu16}.
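As an illustration of the binary layout described above, the following sketch decodes a single 8-byte event record. The little-endian byte order and the exact bit positions (x in the lowest 14 bits, then y, then the polarity bit) are assumptions made for the example; the sample code released with the dataset should be treated as the reference implementation.

```python
import struct

def decode_event(record: bytes):
    """Decode one 8-byte event: 4 bytes of microsecond timestamp,
    then 4 bytes packing x (14 bits), y (14 bits) and polarity (1 bit).
    Byte order and bit positions are illustrative assumptions."""
    t, word = struct.unpack('<II', record)
    x = word & 0x3FFF          # lowest 14 bits
    y = (word >> 14) & 0x3FFF  # next 14 bits
    p = (word >> 28) & 0x1     # polarity bit
    return t, x, y, p

# Round-trip on a synthetic event (t=1000us, x=120, y=80, p=1)
packed = struct.pack('<II', 1000, 120 | (80 << 14) | (1 << 28))
```

At 8 bytes per event, the roughly 750GB of raw data quoted earlier corresponds to on the order of $10^{11}$ events over the whole dataset.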
For example in~\cite{Orchard15}, the MNIST~\cite{Lecun98} and Caltech-101~\cite{Fei06} datasets have been converted to events by moving an event camera in front of a screen displaying the images. Similarly in~\cite{Hu16}, the frame-based\ datasets of~\cite{Kristan15,Griffin07,Reddy13} have been converted by positioning a static event-based\ camera in front of a monitor playing the datasets. The advantage of these approaches is that it is possible to create large datasets without the need for costly manual labeling. The drawback is that the planar display and the limited frequency of the screen result in unnatural and very constrained event sequences. Because of this, recent works have focused on realizing real-world datasets for recognition. For example, in~\cite{Sironi18} 12,336 car examples were manually labeled and extracted from open road driving recordings, together with 11,693 background samples. In~\cite{Amir17} and~\cite{Bi19} instead, two gesture recognition datasets were built by asking several human subjects to perform the gestures in front of the camera. For example,~\cite{Bi19} contains 100,800 examples and is the largest classification dataset available to date in terms of number of labels. However, each sample contains only 100ms of data, cropped from longer sequences. This reduces the actual variability contained in the training data and amounts to less than 3 hours of data. The authors of~\cite{Miao19} acquired several recordings from an event camera to build 3 datasets for surveillance applications: one for pedestrian detection, one for action recognition and one for fall detection. The labels for the pedestrian dataset were obtained by building image-like representations from 20ms of events and then manually annotating them. This is the first event-based dataset from real data for detection. However, the dataset is composed of only 12 sequences of 30 seconds.
Finally, the authors of~\cite{Calabrese19} collect an event-based\ dataset for 3D pose estimation. Ground-truth was obtained using motion capture and infrared cameras together with reflective markers positioned on the human subjects' joints. \paragraph{Event-based\ Datasets for Visual Odometry, Optical Flow and Stereo} Other datasets focus on applications other than recognition, and they can leverage complementary sensors or techniques for automated labeling. In~\cite{Binas17}, 12 hours of driving sequences are obtained during day and night time. Various vehicle data, such as speed, GPS position and driver steering angle, are associated with the dataset. The dataset has been used for end-to-end steering angle prediction~\cite{Maqueda18} and also to generate pseudo-labels for event data, by running standard frame-based\ detectors on the graylevel images provided with the dataset~\cite{Chen18}. The authors of~\cite{Zhu18b} collected sequences using several complementary sensors coupled to the event camera. In particular, depth ground truth is provided thanks to the use of a lidar. This dataset has been extended to obtain optical flow ground-truth~\cite{Zhu18}. In~\cite{Leung18}, 10 hours of stereo recordings have been acquired together with pose ground-truth at 100Hz. In~\cite{Mitrokhin19} a motion segmentation dataset is realized, while the authors of~\cite{Manderscheid19} focus instead on the problem of corner detection, realizing a dataset in the same spirit as the frame-based\ one of~\cite{Mikolajczyk05}. Finally, it is worth mentioning the first color event-based\ dataset~\cite{Scheerlinck19}. An event-based\ simulator is available in~\cite{Rebecq18} to generate event sequences from standard videos. For example, it has been used in~\cite{Rebecq19} for learning to reconstruct graylevel images from events, and in~\cite{Gehrig19}, together with a slow-motion frame-based\ method, to convert frame-based\ datasets to event-based\ ones.
Using a simulator to convert frame-based\ data into event-based\ data is a valid and complementary approach to real data collection. However, real data remains essential to fully leverage properties of event cameras, such as high dynamic range and high temporal resolution, which are not properly captured by standard frame-based\ cameras. Moreover, accurately replicating the noise, sensor unidealities, read-out effects, etc. of real event-based\ cameras can be challenging using an idealized simulation model. The number of datasets released in the past years confirms the growing interest in event-based\ vision and a very active community. However, the size and the annotations of the available datasets are still very limited compared to frame-based\ datasets such as Imagenet~\cite{Deng09} or COCO~\cite{Lin14}. Yet, accurate annotations and very large datasets are critical for designing and evaluating vision systems that can operate reliably in real-world situations. In the next section, we describe the first event-based\ detection dataset with accurate manual annotation of cars and pedestrians in real driving conditions. The dataset contains more than 39 hours of data, and it is the largest event-based\ dataset ever made available to the public. \section{Introduction} \label{sec:intro} Large datasets are a fundamental ingredient for modern computer vision~\cite{Deng09,Lin14}. On the one hand, the availability of large benchmarked datasets allowed objective and common evaluation of novel algorithms against the state-of-the-art~\cite{Everingham10,Krizhevsky12,Lin14}. The diverse and large amount of samples in these datasets guarantees robustness in real-world applications, compared to small datasets. On the other hand, large labeled datasets opened the possibility to train very deep machine learning models~\cite{Krizhevsky12,Lecun15,He16}, able to generalize well also on samples drawn from distributions different from the training set.
Event-based vision, which is the field of performing visual tasks from the output of an event camera~\cite{Gallego19}, is a much younger research field compared to standard frame-based computer vision. Event cameras~\cite{Lichtsteiner08,Posch11,Serrano13} are a recent sensor representing visual information in the form of an asynchronous stream of $\{(x,y,p,t)\}$ events, representing log-luminosity contrast changes at time $t$ and location $(x,y)$, with $p$ a binary variable indicating the sign of the contrast change (Fig.~\ref{fig:intro}). \input{figs/intro.tex} Event cameras are characterized by very high dynamic range ($>$120dB), extremely high temporal resolution (in the order of microseconds) and adaptive data rate (in fact, events are produced only at the times and positions of contrast changes). As a consequence, event cameras do not suffer from oversampling, undersampling and motion blur. Similarly to frame-based vision, low-level event-based\ vision tasks such as noise filtering~\cite{Khodamoradi18}, edge detection~\cite{Lagorce15a,Lee19}, clustering~\cite{Barranco18}, etc. have been addressed using analytical and geometrical methods. However, as the complexity of the task increases, the number of variables and parameters of a system aiming at solving it also increases. Tuning this large number of parameters without a data-driven approach soon becomes impractical. For this reason, event-based vision is increasingly adopting machine learning techniques~\cite{Zhu18,Maqueda18,Manderscheid19,Rebecq19}. Together with these methods, several datasets have been released~\cite{Orchard15,Sironi18,Zhu18b,Bi19}. However, the size of these datasets is much smaller compared to their frame-based counterparts. To give an example, the largest labeled event-based\ dataset to date for classification~\cite{Bi19} is composed of 100,800 samples, while Imagenet~\cite{Deng09} contains 14 million labeled images!
Due to the scarce availability of real event-based datasets, many researchers turned to simulator-based solutions~\cite{Rebecq18,Gehrig19}. This approach is appealing because it simplifies label generation and it can be complementary to real data collection. However, real sequences remain fundamental in order to capture the unique properties of event-based\ sensors, which cannot be obtained starting from sequences of frames, and to be robust to noise and unidealities, which are hard to simulate with an idealized model. With this work, we release more than 39 hours of automotive recordings taken with a GEN1~\cite{Posch11} event camera in realistic driving conditions. Each recording has a variable duration between tens of minutes and several hours. We also collect and release 228,123 car and 27,658 pedestrian bounding boxes, obtained by manually labeling the gray-level images provided by the GEN1 sensor, at a frequency of 1Hz, 2Hz or 4Hz, depending on the sequence. To the best of our knowledge, this is the largest event-based dataset ever released in terms of both total number of hours and total number of labels. It is also the only automotive one providing accurate bounding box localization for a multi-class detection task. Thanks to this dataset, we reduce the gap between frame-based\ and event-based\ datasets. In this way, we hope that the accuracy gap between frame-based\ and event-based\ vision systems will also sharply decrease. We expect benefits for both supervised tasks, such as detection and classification, and self-supervised ones, such as optical flow, monocular-depth estimation, as well as tracking.
\section{Introduction} Given the social and ethical impact that some affective computing systems may have~\cite{hupont2022landscape}, it becomes of the utmost importance to clearly identify and document their context of use, envisaged operational scenario or intended purpose. Undertaking such use case documentation practices would benefit, among others, system vendors and developers to make key design decisions from early development stages (e.g. target user profile/population, data gathering strategies, human oversight mechanisms to be put in place), authorities and auditors to assess the potential risks and misuses of a system, end users to understand the permitted uses of a commercial system, the people on whom the system is used to know how their data is processed and, in general, the wide public to have a better informed knowledge of the technology. The need for transparency and documentation practices in the field of Artificial Intelligence (AI) has been widely acknowledged in the recent literature~\cite{hupont2022documenting}. Several methodologies have been proposed for AI documentation, but their focus is on data~\cite{gebru2018datasheets} and models~\cite{mitchell2019model} rather than on AI systems as a whole, at most limiting the documentation of use cases to a brief textual description. Nowadays, voluntary AI documentation practices are in the process of becoming legal requirements in some countries. The European Commission presented in April 2021 its pioneering proposal for the Regulation of Artificial Intelligence, the AI Act~\cite{AIact}, which regulates software systems that are developed with AI techniques such as machine or deep learning. Interestingly, the legal text does not mandate any specific technical solutions or approaches to be adopted; instead, it focuses on the \textit{intended purpose} of an AI system, which determines its risk profile and, consequently, a set of legal requirements that must be met.
The AI Act's approach further reinforces the need to properly document AI use cases. The concept of \textit{use case} has been used in classic software development for more than 20 years. Use cases are powerful documentation tools to capture the context of use, scope and functional requirements of a software system. They allow structuring requirements according to user goals~\cite{cockburn2001writing} and provide a means to specify the interaction between a certain software system and its environment~\cite{fantechi2003applications}. This work revisits classic software use case documentation methodologies, more particularly those based on the Unified Modeling Language (UML) specification~\cite{UML251}, and proposes a template-based approach for AI use case documentation considering current information needs identified in the research literature and the European AI Act. Although the documentation methodology we propose is horizontal, i.e. it can be applied to different domains (e.g. AI for medicine, social media, law enforcement), we address the specific information needs of affective computing use cases. The objective is to provide a standardised basis for an AI and affective computing technology-agnostic use case repository, where different aspects such as intended users, opportunities or risk levels can be easily assessed. To the best of our knowledge, this is the first methodology specific to the documentation of AI use cases. The remainder of the paper is organized as follows. Section~\ref{sec:background} provides an overview of the current AI regulatory framework, existing approaches for the documentation of AI and affective computing systems, and a background on UML. Section~\ref{sec:uml_template} identifies use case information needs and proposes a UML-based methodology for their unified documentation. In Section~\ref{sec:examples}, we put the methodology into practice with some concrete exemplar affective computing use cases.
Finally, Section~\ref{sec:conclusions} concludes the paper. \section{Background} \label{sec:background} \subsection{``Intended purpose'' and ``emotion recognition systems'' in the AI Act} \label{subsec:aia_intended} The \textit{intended purpose} of an AI system is central to the European AI Act. It is defined as {\it ``the use for which an AI system is intended by the provider, including the specific context and conditions of use, as specified in the information supplied by the provider in the instructions for use, promotional or sales materials and statements, as well as in the technical documentation''}\footnote{The definitions provided in this manuscript are as of August 2022. The legal text is currently under negotiation and may be subject to change.}. An AI system's intended purpose determines its risk profile which can be, from highest to lowest: (1) \textit{unacceptable risk}, covering harmful uses of AI or uses that contradict ethical values; (2) \textit{high-risk}, covering uses identified through a list of high-risk application areas that may create an adverse impact on people's safety, health or fundamental rights; (3) \textit{transparency risk}, covering uses that are subject to a set of transparency rules (e.g. conversational agents, \textit{deepfakes}); and (4) \textit{minimal risk}, covering all other AI systems. The AI Act explicitly and implicitly refers to affective computing systems in several parts of the legal text\footnote{Please note that this assessment is based on the authors' own interpretation of the legal text as of August 2022.}. A transparency risk generally applies to affective computing systems, but there are some clearly identified prohibited practices and high-risk areas. Prohibited practices include systems used to distort a person's behaviour to cause psychological harm, and systems used by public authorities to perform social scoring based on predicted personality or social behaviour. 
AI systems intended to be used as \textit{``polygraphs and similar tools or to detect the emotional state of a person''} are listed as high-risk in the areas of \textit{``law enforcement''} and \textit{``migration, asylum and border control management''}. There might be situations where emotion recognition is exploited in recruitment contexts or to determine access to educational institutions, which would also be high-risk, as would emotion recognition systems being a safety component of a product (e.g. a system integrated in a car that detects a driver’s drowsiness and undertakes a safety action) or that are part of a machine or medical device (e.g. a companion robot for autistic children). Therefore, the AI Act establishes a clear set of harmonised rules that link use cases --including affective computing ones-- to risk levels, which in turn imply different legal requirements. This opens the door to the creation of a use case documentation methodology allowing for an unambiguous assessment of risk levels, such as the one proposed in this work, which could be a valuable tool for different stakeholders, ranging from system providers to authorities. \subsection{Current approaches for the documentation of AI systems} In the recent years, both key academic and industry players have proposed methodologies aiming at defining documentation approaches that increase transparency and trust in AI. Among the most successful initiatives, we find some that focus on documenting the datasets used for AI, such as \textit{Datasheets for Datasets}~\cite{gebru2018datasheets}, \textit{The Dataset Nutrition Label}~\cite{holland2018dataset,chmielinski2022dataset} and \textit{Data Cards}~\cite{pushkarna2022data}, as well as some that address the documentation of AI models and algorithms from a technical perspective, such as \textit{Model Cards}~\cite{mitchell2019model} and \textit{AI Factsheets}~\cite{arnold2019factsheets}. 
Very recently, the Organisation for Economic Co-operation and Development (OECD) has proposed a policy-oriented \textit{Framework for the classification of AI systems}~\cite{OECD}, to which high-calibre institutions and a large number of AI practitioners have contributed. Being in the form of questionnaires or more visual factsheets, these methodologies are not based on formal documentation standards or specifications. Moreover, even though some of them do explicitly ask about the intended use of the AI system (e.g. \textit{``What is the intended use of the service output?''}\cite{arnold2019factsheets} and the \textit{``Intended Use''} section in~\cite{mitchell2019model}), it is just in very broad terms and the provided examples lack sufficient details to address complex legal concerns. To date, there is no unified and comprehensive AI documentation approach focusing exclusively on use cases. \subsection{Documentation of affective computing use cases} The aforementioned documentation approaches have scarcely been used in the field of affective computing. Only the original \textit{Model Cards} paper comes with a ``smiling detection in images'' and a ``toxicity in text'' detection example. Use cases in the field have rather been presented to the community in plain text form (i.e. without following any documentation template), either in survey papers~\cite{aranha2019adapting,weninger2015emotion,zhao2019affective}, in papers presenting a very concrete application~\cite{xu2018automated,murali2021affectivespotlight,setiono2021enhancing} or in articles discussing ethical issues~\cite{hernandez2021guidelines,ong2021ethical,hupont2022landscape}. Interestingly, the Association for the Advancement of Affective Computing (AAAC) has recently launched the \textit{affective computing commercial products database}~\cite{aaac_productdb}, which presents a table with a list of commercial products, a brief description of each one and associated tags such as modality (e.g.
speech, text, face), format (e.g. software, hardware) and application domain (e.g. general purpose, education, health). It is however limited to a high-level description of real products in the market. \subsection{Unified Modeling Language (UML) for use case reporting} The Unified Modeling Language (UML) specification has been widely used in software engineering in the last two decades~\cite{UML251,kocc2021uml}. It provides a standard way to visualize the design and behaviour of a system by introducing a set of graphical notation elements. In particular, it allows for use case modelling, without entering into implementation details, in the form of intuitive \textit{use case diagrams}, whose main elements are depicted in Figure~\ref{fig:uml_icons}. \begin{figure}[htb!] \centering \includegraphics[width=0.65\linewidth]{figures/UML_icons.png} \caption{Main UML graphical notation elements for use case modeling.} \label{fig:uml_icons} \end{figure} Use cases capture a system's requirements, i.e., what the system is supposed to do. A use case is triggered by an \textit{actor} (it might be a person or group of persons), who is called the \textit{primary actor}. The use case describes the various sets of interactions that can occur between the various actors while the primary actor is in pursuit of a goal. A use case is completed successfully when the goal associated with it is reached. Use case descriptions also include possible extensions to this sequence, e.g., alternative sequences that may also satisfy the goal, as well as sequences that may lead to failure in completing the service. Once use cases have been modelled in a diagrammatic form, the next step is to describe them in an easy-to-understand and structured written manner. Traditional use case modelling always includes this step, and several standards have been suggested for the layout of use case descriptions.
The most widely used is the table format proposed by Cockburn in~\cite{cockburn2001writing}, shown in Figure~\ref{fig:use_case_tables}-left. UML is a powerful tool for use case documentation and communication, even to non-technical audiences. Nevertheless, it has not yet been exploited to document AI use cases. \section{A UML-based documentation methodology for AI and affective computing use cases} \label{sec:uml_template} We propose a novel methodology for the documentation of AI use cases which is grounded in (1) the UML standard specification for use case modeling and (2) the requirements for use case documentation under the European AI Act. Our methodology pays particular attention to information needs related to affective computing use cases. It is intended to be a tool to increase transparency and facilitate the understanding of the intended purpose of an AI system, in order to ease the assessment of its risk level and other relevant contextual considerations. \subsection{Information needs related to the ``intended purpose'' under the AI Act} \label{sec:aia} As discussed in Section~\ref{subsec:aia_intended}, the European AI Act centres around the concept of \textit{intended purpose}. Several key information elements are essential to document the intended use of an AI system according to the legal text. We have compiled them in the list presented in Table~\ref{tab:info_elements_aia}. As can be seen, the intended purpose of the system shall be put into context by providing additional information on: who will be the users and the target persons on which the system is intended to be used; the operational, geographical, behavioural and functional contexts of use that are foreseen, including a description of the hardware on which the system is intended to run (e.g. to highlight whether it is part of a device/machine); which are the system's inputs and outputs; and, if applicable, whether the system is a safety component of a product.
Additionally, it is as important to clearly specify the intended use of the system as its foreseeable potential misuses and unintended purposes. Finally, the \textit{application areas} information element is one of the most important to assess when it comes to identifying a system's risk level. The legal text links some practices, areas and concrete applications within these areas to prohibited practices and high-risk profiles. Table~\ref{tab:areas} compiles the prohibited practices (top) and high-risk application areas (bottom) mentioned in the legal text that are directly related to emotion recognition, or where some kind of affective computing technique could potentially be used (e.g. personality prediction for social scoring, facial expression recognition for student proctoring, pain detection for establishing priority in emergency services). In order to facilitate the identification of the level of risk of an affective computing system, it is therefore essential to indicate whether its intended application area(s) or any foreseeable misuse are among those on the list. It should be noted that Table~\ref{tab:info_elements_aia} is not meant to be a final and exhaustive list of information elements needed for compliance with any future legal requirement. First and foremost, because the AI regulation is still under negotiation, and is therefore subject to modification on its road towards adoption. Second, because the objective of this work is the documentation of use cases, which is just a small part of the technical documentation required to demonstrate conformity with the legal text. \newcolumntype{I}{>{\raggedright\arraybackslash}m{0.08\textwidth}} \newcolumntype{D}{>{\raggedright\arraybackslash}m{0.36\textwidth}} \begin{table}[h] \centering \begin{tabular}{ID} \textbf{Element} & \textbf{Description} \\ \toprule Intended purpose & Use for which an AI system is intended by the provider. If the system is a safety component of a product, it must be clearly stated.
\\ \midrule User & Natural or legal person using an AI system under its authority. \\ \midrule Target persons & Persons or group of persons on which the system is intended to be used. \\ \midrule Context of use & Description of all forms on which the system is deployed (e.g. characteristics of the specific geographical, behavioural or functional setting) and of the hardware on which it is intended to run. \\ \midrule Application areas & List of areas in which the AI system is intended to be applied, including those in Table \ref{tab:areas}.\\ \midrule Reasonably foreseeable misuses & Uses of an AI system in a way that is not in accordance with its intended purpose, which may lead to errors, faults, inconsistencies, or risks to health, safety or fundamental rights. \\ \midrule Inputs & Data provided to or directly acquired by the system, on the basis of which the system produces an output. \\ \midrule Outputs & Outputs of the AI system as provided to the user. \\ \bottomrule \end{tabular} \caption{Key use case information elements that are needed to assess an AI system's risk level according to the AI Act.} \label{tab:info_elements_aia} \end{table} \newcolumntype{A}{>{\raggedright\arraybackslash}m{0.46\textwidth}} \begin{table}[h] \centering \begin{tabular}{A} \textbf{AREA \textcolor{violet}{$>$ POTENTIAL AFFECTIVE COMPUTING USE}} \\ \toprule - Deploy subliminal techniques beyond a person's consciousness \\ \textcolor{violet}{\hspace{0.75cm} $>$ Distort a person's behaviour to cause psychological harm}\\ - Exploit the vulnerabilities of a specific group of persons \\ \textcolor{violet}{\hspace{0.75cm} $>$ Distort a person's behaviour to cause psychological harm}\\ - Social scoring by public authorities or on their behalf \\ \textcolor{violet}{\hspace{0.75cm} $>$ Evaluation of trustworthiness based on predicted personality}\\ \textcolor{violet}{\hspace{0.75cm} $>$ Evaluation of trustworthiness based on social behaviour}\\ \toprule - Education and vocational training 
\\ \textcolor{violet}{\hspace{0.75cm} $>$ Determine access to educational institutions}\\ \textcolor{violet}{\hspace{0.75cm} $>$ Assess students in educational institutions}\\ - Employment, workers management and access to self-employment \\ \textcolor{violet}{\hspace{0.75cm} $>$ Recruitment or selection of natural persons}\\ \textcolor{violet}{\hspace{0.75cm} $>$ Make decisions on promotion/termination of contract}\\ \textcolor{violet}{\hspace{0.75cm} $>$ Monitoring and evaluation of performance and behaviour}\\ - Access to essential private/public services and benefits \\ \textcolor{violet}{\hspace{0.75cm} $>$ Evaluate eligibility of natural persons for public assistance}\\ \textcolor{violet}{\hspace{0.75cm} $>$ Evaluate creditworthiness of natural persons}\\ \textcolor{violet}{\hspace{0.75cm} $>$ Establish priority in the dispatching of emergency services}\\ - Law enforcement \\ \textcolor{violet}{\hspace{0.75cm} $>$ Make individual risk assessments of natural persons}\\ \textcolor{violet}{\hspace{0.75cm} $>$ Detect the emotional state of a natural person}\\ \textcolor{violet}{\hspace{0.75cm} $>$ Crime profiling of natural persons}\\ - Migration, asylum and border control management \\ \textcolor{violet}{\hspace{0.75cm} $>$ Make individual risk assessments of natural persons}\\ \textcolor{violet}{\hspace{0.75cm} $>$ Detect the emotional state of a natural person}\\ \textcolor{violet}{\hspace{0.75cm} $>$ Examine applications for asylum/visa/residence}\\ - Administration of justice and democratic processes \\ \textcolor{violet}{\hspace{0.75cm} $>$ Assist judicial authority in researching and interpreting facts}\\ \bottomrule \end{tabular} \caption{Practices and application areas listed as \textit{prohibited} (top) and \textit{high-risk} (bottom) in the AI Act, that are directly or that could be indirectly related to affective computing. Please note that this table has been generated by the authors based on their own interpretation of the AI Act as of August 2022. 
} \label{tab:areas} \end{table} \subsection{Revisiting UML for AI and affective computing use case documentation} \label{sec:revisit_uml} The idea of \textit{intended use} defined in the AI Act is closely related to the traditional software concept of \textit{use case}, as defined in the UML specification. UML use case diagrams do not enter into technical details (e.g. implementation details, algorithm architectures) but rather focus on the context of use, the main actors using the system, and actor-actor and actor-system interactions, a focus aligned with the AI Act's approach to assessing a system's risk level. The UML language is thus a powerful, standardized and highly visual tool to operationalise the need for a unified documentation of AI use cases. In Figure~\ref{fig:use_case_tables}-right, we propose an adaptation of the classic table template accompanying UML use case diagrams~\cite{cockburn2001writing} to the AI Act's taxonomy. As can be seen, the adapted fields are minimal and there is an almost perfect correspondence with the original template. We have only renamed some key words (in blue in the table), namely \textit{scope} to \textit{intended purpose}, \textit{primary actor} to \textit{user}, \textit{stakeholders and interests} to \textit{target persons}, and \textit{open issues} to \textit{misuses}. We have also included a new field called \textit{application areas} (in green), which makes it possible to clearly identify the area(s) in which the system is intended to be used and, if applicable, specify whether they correspond to those listed in Table~\ref{tab:areas}. \begin{figure*}[htb!] \centering \includegraphics[width=\linewidth]{figures/use_case_tables.png} \caption{Left: classic table template for the documentation of UML use case diagrams, as in~\cite{cockburn2001writing}. Right: proposed adaptation for the documentation of AI use cases, inspired by the European AI Act's definitions.
Green text corresponds to added fields, while blue text is used for fields that have been adapted.} \label{fig:use_case_tables} \end{figure*} \section{Methodology in practice: examples of affective computing use cases} \label{sec:examples} In this section, we apply the proposed methodology to the documentation of three representative affective computing systems. Figures~\ref{fig:uc_smile}-\ref{fig:uc_car} show their corresponding UML use case diagrams and accompanying tables, which are further described below. \\ \noindent \textbf{Smart camera}. In this first use case, the system is a smart camera that shoots a picture only when all the people posing in front of it are smiling. There are several products in the market with this feature~\cite{canon,nikon}, which have inspired this example. The UML diagram of the \textit{smart shooting} use case and its corresponding table are shown in Figure~\ref{fig:uc_smile} left and right, respectively. This application may seem simple and naive a priori, but it has recently caused controversy. Workers at a Beijing office were forced to smile to an AI camera to get through the front doors, change the temperature or print documents, in an attempt to improve the working environment by keeping workers happy~\cite{news_happyface}. However, some workers felt their emotions were manipulated. Our proposed UML table makes it clear that the target application domain is \textit{entertainment and leisure} exclusively, and the \textit{misuses} field explicitly emphasises that the system is not conceived to be used to monitor or manipulate emotions in contexts such as working environments. This important claim excludes the use case from the high-risk area of \textit{workers management $>$ monitoring and evaluation of performance and behaviour} (c.f. Table~\ref{tab:areas}). \\ \noindent \textbf{Affective music recommender}.
Figure~\ref{fig:uc_music} shows the UML diagram and table for the second use case, corresponding to an affective music recommender system proposing songs to the user based on her personality, current mood and playlist history. This use case has been inspired by the work presented in~\cite{amini2019affective}. Several studies have shown that users' music playlists can be used to infer emotions, personality traits and vulnerabilities~\cite{deshmukh2018survey}; conversely, certain music pieces can induce behaviours and manipulate listeners' emotions~\cite{gomez2021music}. The proposed methodology makes it possible to frame the ethical use of the system by documenting step by step its conceived functioning, and how and for what purpose personality and mood predictions are extracted and used (based on profile data voluntarily provided by the platform's users, with the sole purpose of making the most appropriate and enjoyable music recommendations). The \textit{misuses} field further strengthens the system's ethical principles by explicitly signaling the prohibition of proposing music pre-conceived to exploit vulnerabilities, manipulate, distort or induce certain emotions or behaviour in users, which would be a \textit{prohibited practice} according to the AI Act (c.f. Table~\ref{tab:areas}).\\ \noindent \textbf{Driver attention monitoring}. The third example is a use case where a driver's face is recorded with a car in-cabin camera, and monitored in order to recognise drowsiness and distraction. When such situations are detected, the vehicle's attention monitoring system sends alerts in the form of beep tones and light symbols in the car dash (Figure~\ref{fig:uc_car}). Driver monitoring systems have been a popular affective computing application in the last decade and the modelling of this use case is inspired by different papers~\cite{kumar2018driver,govindarajan2018affective} as well as real commercial products~\cite{subaru,tesla}.
The \textit{intended purpose} field in the proposed UML-based table clearly states that the system is part of a safety component of the vehicle, which immediately positions it as a high-risk profile according to the AI Act. Further, the documentation methodology makes it possible to indicate that the system is conceived to alert the driver, but in no case to let the vehicle take full control of the car in an autonomous manner. \begin{figure*}[h!] \centering \includegraphics[width=\linewidth]{figures/use_case_smile.png} \caption{First use case: methodology applied to a smart camera system with embedded smile detection capabilities.} \label{fig:uc_smile} \end{figure*} \begin{figure*}[h!] \centering \includegraphics[width=\linewidth]{figures/use_case_music.png} \caption{Second use case: proposed methodology applied to an affective music recommender system.} \label{fig:uc_music} \end{figure*} \begin{figure*}[h!] \centering \includegraphics[width=\linewidth]{figures/use_case_driving.png} \caption{Third use case: proposed methodology applied to a driver attention monitoring system.} \label{fig:uc_car} \end{figure*} \section{Conclusions and future work} \label{sec:conclusions} In this paper, we propose a methodology for the documentation of AI use cases which covers the particular information elements needed to address affective computing ones. The methodology has a solid grounding, being based on two strong pillars: (1) the UML use case modelling standard, and (2) the recently proposed European AI regulatory framework. Each use case is represented in a highly visual way by means of a UML diagram, accompanied by a structured and concise table that compiles the relevant information to understand the intended use of a system, and to assess its risk level and foreseeable misuses. Our approach is not intended to be an exhaustive methodology for the technical documentation of AI or affective computing systems (e.g. to demonstrate compliance with legal acts).
Rather, it aims to provide a template for compiling related use cases with a simple but effective and unified language, understandable even by non-technical audiences. We have demonstrated the power of this language through practical affective computing exemplar use cases. In the near future, we plan to develop a collaborative repository compiling a catalogue of AI --including affective computing-- use cases following the proposed template. The first step will be to transcribe the 60 facial processing applications presented in~\cite{hupont2022landscape}, which contain 18 emotion recognition use cases, in order to add them to this catalogue. \section*{Ethical Impact Statement} The methodology presented in this paper proposes the first unified documentation approach for AI use cases, with a strong focus on affective computing ones, which makes it possible to differentiate intended uses from potential misuses. In recent years, the need for trustworthy AI has been raised by both private and public key institutions and researchers in the field~\cite{OECD,gebru2018datasheets,arnold2019factsheets,madaio2020co,hupont2022landscape}. In particular, documentation has been identified as a key factor towards the fulfilment of \textit{transparency}~\cite{hupont2022documenting}, one of the seven pillar requirements for trustworthy AI established by the High-Level Expert Group on Artificial Intelligence (AI HLEG)~\cite{HLEG}. Therefore, this work represents a major step towards ethical AI and affective computing, and could even constitute a basis for future standardisation activities in this area. \section*{Acknowledgment} This work is partially supported by the European Commission under the HUMAINT project of the Joint Research Centre. \bibliographystyle{IEEEtran}
\section{Introduction} In this paper, we study the wave turbulence theory for the following KdV type equation \begin{equation} \partial_t\psi(t,x)+\Delta\partial_{x_1}\psi(t,x)-\nu \Delta \psi(t,x)=\lambda \partial_{x_1}(\psi^2(t,x)) \end{equation} as an example of a three-wave system. Before we talk about our main result, let us recall some basic concepts from wave turbulence theory. For a more thorough introduction to this topic, see the book of Nazarenko \cite{Nazarenko}. \subsection{Wave turbulence theory.} It is well-known that in many PDEs coming from physics, energy can be transferred from low frequency Fourier modes to high frequency modes (say from $k$ to $k+1$). This process can happen many times, so that the energy eventually flows to very high frequencies (like $1\rightarrow 2\rightarrow 3\rightarrow\cdots\rightarrow 10^4$). This is called the energy cascade phenomenon. Since it takes many steps to transfer energy to very high modes, the information in the low energy modes is lost and the high energy modes become random and exhibit \textit{universality}. This is very similar to the \textit{Galton board} game, as illustrated in Figure \ref{fig.Galtonboard}. Each step of energy transfer is similar to a collision of the ball with a pin. In order to get to the bottom (the high energy modes), the ball must collide with many pins (there must be many steps of energy transfer). Finally, the distribution of balls (energy) becomes random and universal. \begin{figure}[H] \centering \includegraphics[scale = 0.2]{Galton_board.png} \caption{The Galton board} \label{fig.Galtonboard} \end{figure} In the Galton board case, the final distribution is the Gaussian distribution. This distribution is universal in the sense that it is independent of the details of the input flow of balls. While in many examples of energy cascade the final distribution is random and universal, it is usually not i.i.d.\ Gaussian, and may not even exhibit properties similar to those of i.i.d.\ Gaussians.
The cascade spectrum of strong turbulence in fluids is one example of highly non-Gaussian universal behavior. The structure functions satisfy power laws which are independent of the initial data. The second order structure function satisfies the famous Kolmogorov $5/3$ law, while the higher order structure functions exhibit a significant deviation from Kolmogorov's prediction. This deviation is called \textit{intermittency}. \begin{figure}[H] \centering \includegraphics[scale = 0.6] {five_third_law.png} \caption{The $5/3$ law} \label{fig.5/3law} \end{figure} While the i.i.d.\ Gaussian assumption does not hold for turbulence in fluids, it can hold for turbulence generated by dispersive waves. Unlike fluid equations, a dispersive equation can exhibit turbulent behavior even when the size of the solution is small. In this case, the distribution can be calculated by perturbative expansion and it remains close to i.i.d.\ Gaussian if it is so initially. In other words, there is no intermittency in this case. In wave turbulence, the energy distribution $n(k)$ is expected to evolve according to the following wave kinetic equation \[ \tag{WKE}\label{eq.WKE} \begin{split} \partial_t n(t, k) =&\mathcal K\left(n(t, \cdot)\right), \\ \mathcal K(n)(k):=& |k_x|^2\int_{\substack{(k_1, k_2)\in \mathbb R^{2d}\\k_1+k_2=k}}n(k_1) n(k_2)\delta(|k_1|^2k_{1x}+|k_2|^2k_{2x}-|k|^2k_{x})\, dk_1 dk_2 \\ -& 2n(k)\int_{\mathbb{R}^d}k_x(k_x-k_{1x})n(k_1) \delta(|k_1|^2k_{1x}+|k_2|^2k_{2x}-|k|^2k_{x})\, dk_1 \end{split} \] \eqref{eq.WKE} admits a special solution $|k|^{-d-1}$, and the general solution of \eqref{eq.WKE} is expected to converge to this power law. This power law is universal and independent of the external force and initial data. \eqref{eq.WKE} is a Boltzmann type equation. Compared to the original ZK equation, \eqref{eq.WKE} has a monotonically decreasing entropy (see \cite{GI}) and exhibits irreversibility.
The Boltzmann equation statistically describes the collision of particles, and its collision integral contains the momentum and energy conservation conditions $p_1+p_2=p$ and $p^2_1+p^2_2=p^2$. \eqref{eq.WKE} is a counterpart which describes the interaction of waves. The right hand side of \eqref{eq.WKE} also contains momentum and energy conservation, $k_1+k_2=k$ and $|k_1|^2k_{1x}+|k_2|^2k_{2x}=|k|^2k_{x}$. These conditions come from the oscillatory phase $e^{i s\Omega(k_1,k_2,k)}$ in \eqref{eq.intmain}. The energy conservation surface is also the time resonance surface in the space-time resonance method. For a general dispersive PDE, there may not be exactly $3$ wave numbers $k_1,k_2,k$ in the WKE. In general, a quadratic equation has $3$ wave numbers in its WKE, while a cubic equation has $4$ wave numbers in its WKE. Null conditions can increase the number of waves in the WKE. If the resonant coefficients vanish on the resonant surface, a quadratic (cubic) equation can have $3$ ($4$) wave numbers in its WKE. The ZK equation is an example of the $3$-wave interaction case. The nonlinear Schr\"odinger equation studied in the papers of Deng and Hani \cite{DH}-\cite{DH3} is an example of the $4$-wave interaction case. The energy distribution $n(k)$ looks very different in different ranges. As in Figure \ref{fig.5/3law}, in the energy containing range $|k|\ll 1$, the wave number $k$ is not large and the energy distribution in this range can be influenced by the external force and initial data, so there is no universality in this range. In the inertial range $1\lesssim |k|\ll l_d^{-1}$, the influence of the external force and initial data starts to be lost, while the wave number is not large enough to activate the effect of dissipation. The energy distribution is universal and satisfies \eqref{eq.WKE}. In the dissipation range $|k|\gtrsim l_d^{-1}$, the effect of dissipation is dominant and $n(k)$ decays to $0$ very fast.
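As a quick sanity check of the conservation conditions above, one can evaluate the argument of the delta function in \eqref{eq.WKE} on explicit integer triples. This is a minimal sketch, written for $d=2$ purely for readability (the paper works with $d\ge 3$; the formula is identical):

```python
# Sanity check of the three-wave resonance condition in (WKE):
# Omega = |k1|^2 k1_x + |k2|^2 k2_x - |k|^2 k_x, with k = k1 + k2.
def omega(k1, k2):
    k = tuple(a + b for a, b in zip(k1, k2))
    lam = lambda v: v[0] * sum(c * c for c in v)   # k_x * |k|^2
    return lam(k1) + lam(k2) - lam(k)

# The family k1 = (a, b), k2 = (0, -2b) is exactly resonant: lam(k2) = 0
# and |k1|^2 = |k1 + k2|^2, so omega vanishes.
print(omega((3, 2), (0, -4)))   # 0
# A generic triple is non-resonant:
print(omega((3, 2), (1, 1)))    # -59
```

The exactly resonant family shows that the resonant surface is non-empty even over the integers, which is the situation the lattice-point counting arguments later in the paper must handle.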
In this paper, we are mostly interested in the energy distribution in the inertial range and the dissipation range. We choose the ZK equation as an example of the $3$-wave turbulence theory. There are many other PDEs (mostly from plasma physics and capillary waves) whose resonant surfaces contain $3$ wave numbers. The proof in this paper can be transplanted to these equations with some additional effort. \subsection{The setup of this paper} We now specify rigorously the randomness and energy distribution used in this paper. We consider the Cauchy problem of the following equation, \begin{equation}\tag{MKDV}\label{eq.MKDV} \begin{cases} \partial_t\psi(t,x)+\Delta\partial_{x_1}\psi(t,x)-\nu \Delta \psi(t,x)=\lambda \partial_{x_1}(\psi^2(t,x)),\\[.6em] \psi(0,x) = \psi_{\textrm{in}}(x), \quad x\in \mathbb T^d_{L}. \end{cases} \end{equation} We consider the periodic boundary condition, which implies that the spatial domain is a torus $\mathbb T^d_{L}=[0,L]^d$. The Fourier coefficients of $\psi$ then live on the lattice $\mathbb{Z}_L^d\mathrel{\mathop:}= \{k=\frac{K}{L}:K\in \mathbb{Z}^d\}$. Let $n_{\textrm{in}}$ be a given function; we assume that \begin{equation}\label{eq.wellprepared} \psi_{\textrm{in}}(x)=\frac{1}{L^d}\sum_{k\in\mathbb{Z}^d_L}\sqrt{n_{\textrm{in}}(k)} \eta_k(\omega)\, e^{2\pi i kx} \end{equation} where $\eta_k(\omega)$ are mean-zero and identically distributed complex Gaussian random variables satisfying $\mathbb E |\eta_k|^2=1$. To ensure that $\psi_{\textrm{in}}$ is a real-valued function, we assume that $n_{\textrm{in}}(k)=n_{\textrm{in}}(-k)$ and $\eta_k=\overline{\eta_{-k}}$. Finally, we assume that $\eta_k$ is independent of $\{\eta_{k'}\}_{k'\ne k,-k}$. In a real turbulent wave, the low frequency Fourier modes in the energy containing range are influenced by the external force and initial data, so they are not random. In wave turbulence theory we only care about the high frequency part in the inertial and dissipation ranges.
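The symmetry constraints in \eqref{eq.wellprepared} can be checked with a small numerical sketch. Here $d=1$, the frequency cutoff, and the spectrum $n_{\mathrm{in}}(k)=e^{-k^2}$ are illustrative choices only (the paper requires $d\ge 3$ and a compactly supported $n_{\mathrm{in}}$):

```python
# Numerical sketch of the randomized initial data (eq.wellprepared) in d = 1.
# The constraints n_in(k) = n_in(-k) and eta_k = conj(eta_{-k}) force psi_in
# to be real-valued; the spectrum below is an invented placeholder.
import cmath, math, random

random.seed(0)
L, Kmax = 10.0, 8                       # torus size and cutoff (illustrative)

def n_in(k):
    return math.exp(-k * k)             # even in k, as required

eta = {0.0: complex(random.gauss(0, 1), 0)}   # zero mode must be real
for K in range(1, Kmax + 1):
    k = K / L
    g = complex(random.gauss(0, 1), random.gauss(0, 1)) / math.sqrt(2)  # E|eta_k|^2 = 1
    eta[k], eta[-k] = g, g.conjugate()

def psi_in(x):
    return sum(math.sqrt(n_in(k)) * e * cmath.exp(2j * math.pi * k * x)
               for k, e in eta.items()) / L

imag_size = max(abs(psi_in(x).imag) for x in (0.0, 1.3, 4.7))
print(imag_size < 1e-12)  # True: Hermitian symmetry makes psi_in real
```

Since the terms at $k$ and $-k$ are complex conjugates of each other, their sum is real up to floating-point round-off.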
To simplify the theory, we assume that all Fourier coefficients are random. The randomness of the initial data implies the randomness of the solution; the energy spectrum $n(t,k)$ mentioned in the previous section is defined to be $\mathbb E |\widehat \psi(t, k)|^2$, where $\widehat \psi(t, k)$ are the Fourier coefficients of the solution. Although the initial data is assumed to be given by i.i.d.\ Gaussian random variables, it is possible to develop a theory for other types of random initial data. \subsection{Statement of the results} We define $\Lambda(k)\coloneqq k_{1}(k_1^2+\cdots+k_d^2)$. Under this new notation the ZK equation becomes \[ \partial_t\psi(t,x)=i\Lambda(\nabla)\psi(t,x)+\nu \Delta \psi(t,x)+\lambda \partial_{x_1}(\psi^2(t,x)). \] Now we introduce the main theorem of this paper. \begin{thm}\label{th.main} Let $d\ge 3$ and let $L$ be a large number. Suppose that $n_{\mathrm{in}} \in C^\infty_0(\mathbb{R}^d)$ is compactly supported in a domain whose diameter is bounded by $D$. Assume that $\psi$ is a solution of \eqref{eq.MKDV} with randomized initial data $\psi_{\mathrm{in}}$ given by \eqref{eq.wellprepared}. Fix a small constant $\varepsilon> 0$, and set $\alpha=\lambda L^{-\frac{d}{2}}$ to be the strength of the nonlinearity.
If $\alpha$ satisfies \begin{equation}\label{eq.conditionalpha} \alpha^{-1}\le L^{\frac{1}{2}} \end{equation} and, for some small constant $c$, $\nu$ satisfies \begin{equation}\label{eq.conditionnu} \nu\ge c^2T^{-1}_{\text{max}} \end{equation} then for all $L^{\varepsilon} \leq t \leq T_{\text{max}} = L^{-\varepsilon} \alpha^{-2}$, we have the following conclusions: \begin{enumerate} \item If $\sup_{k}n_{\mathrm{in}}(k)\le C_{\mathrm{in}}$, then $\mathbb E |\widehat \psi(t, k)|^2$ is bounded by $2C_{\mathrm{in}}$ for $t\le T_{\text{max}}$, and for any $M$ we can construct an approximation series \begin{equation}\label{eq.approx1} \mathbb E |\widehat \psi(t, k)|^2=n_{\mathrm{in}}(k)+n^{(1)}(k)+n^{(2)}(k)+\cdots+n^{(N)}(k)+O(L^{-M}) \end{equation} where each term $n^{(i)}(k)$ can be exactly calculated. \item Define $T_{\mathrm{kin}}=\frac{1}{8\pi\alpha^2}$ and $l_{d}=(\nu T_{\mathrm{max}})^{\frac{1}{2}}\ge c$. If $D\le C_2 l_d^{-1}$, then for some constant $C_1=C_1(d)$ and $\theta=C_1^{-1}\varepsilon$, we have \begin{equation}\label{eq.n1} n^{(1)}(k)=\left\{ \begin{aligned} &\frac{t}{T_{\mathrm{kin}}}\mathcal K(n_{\mathrm{in}})(k)+O_{\ell^\infty_k}\left(L^{-\theta}\frac{T_{\text{max}}}{T_{\mathrm {kin}}}\right)+\widetilde{O}_{\ell^\infty_k}\left(\epsilon_1\text{Err}_{D}(k_x)\frac{T_{\text{max}}}{T_{\mathrm {kin}}}\right) && \text{for any } |k|\le \epsilon_1 l_{d}^{-1}, \\ &0, && \text{for any } |k|\ge 2C_{2} l_{d}^{-1} \end{aligned}\right. \end{equation} and \begin{equation}\label{eq.n(j)estimate} n^{(j)}(k)=O_{\ell^\infty_k}\left(L^{-\theta}\frac{t}{T_{\mathrm {kin}}}\right), \qquad j>1 \end{equation} where $\mathcal K$ is defined in \eqref{eq.WKE}, and $O_{\ell^\infty_k}(A)$ (resp. $\widetilde{O}_{\ell^\infty_k}(A)$) denotes a quantity that is bounded in $\ell^\infty_k$ by $A$ times a universal constant (resp. a constant depending only on $d$). The definition of universal constants can be found in Section \ref{sec.notat}.
The definition of $\text{Err}_D$ is \begin{equation} \text{Err}_{D}(k_x)=\left\{\begin{aligned} &D^{d+1}, && \text{if } |k_x|\le D, \\ &D^{d-1}(|k_x|^2+D|k_x|), && \text{if } |k_x|\ge D. \end{aligned} \right. \end{equation} \end{enumerate} \end{thm} \begin{rem} The condition $d\ge 3$ is essential. When $d=1$, the ZK equation becomes the KdV equation, which is an integrable system whose long time behavior is quasi-periodic instead of turbulent \cite{JM}. When $d=2$, as mentioned in Remark \ref{rem.nottrue2d}, the desired number theory result is not true and a major revision of \eqref{eq.WKE} is required in order to obtain a valid wave kinetic theory. \end{rem} \begin{rem} $l_{d}$ is expected to be a small universal constant. Although $n_{\mathrm{in}}(k)$ is compactly supported, the diameter of the support can be much larger than $l_{d}$, so the second case of \eqref{eq.n1} is not trivial. \end{rem} \begin{rem} The restriction on $\alpha^{-1}$ is not optimal. The optimal result is expected to be $\alpha^{-1}\le L$ for a general torus and $\alpha^{-1}\le L^{d/2}$ for a generic torus. Here the torus is general or generic in the sense of \cite{DH}. Except for those in Appendix \ref{sec.numbertheoryA}, all the arguments in this paper work under these stronger assumptions. \end{rem} \begin{rem} Due to the $\partial_x$ in the nonlinearity, the equation is quasilinear. This causes a serious difficulty in controlling the high frequency part. To resolve this difficulty, some regularization of the ZK equation is required. In \cite{ST} and in this paper, grid discretization and viscosity are introduced, respectively. Both of them serve as canonical high frequency truncations. \end{rem} \subsection{Ideas of the proof} The basic idea of this paper is to construct an approximation series and use probability theory and number theory to control the size and error of this approximation.
\subsubsection{The approximate solution}\label{sec.appsol} The equation for the Fourier coefficients is \begin{equation}\label{eq.Fourierintro} \dot{\psi}_{k} = i\Lambda(k) \psi_k -\nu |k|^2 \psi_k +\frac{i\lambda}{L^{d}} \sum\limits_{\substack{(k_1,k_2) \in (\mathbb{Z}^d_L)^2 \\ k_1 + k_2 = k}} k_{x_1}\psi_{k_1} \psi_{k_2}. \end{equation} Define the new dynamical variable $\phi= e^{-it\Lambda(\nabla)} \psi$ and integrate \eqref{eq.Fourierintro} in time. Then (\ref{eq.MKDV}) with initial data (\ref{eq.wellprepared}) becomes \begin{equation}\label{eq.intmainintro} \begin{split} \phi_k =\xi_k+\frac{i\lambda}{L^{d}} \sum\limits_{k_1 + k_2 = k}\int^{t}_0k_{x_1}\phi_{k_1} \phi_{k_2}e^{i s\Omega(k_1,k_2,k)-\nu(t-s)|k|^2} ds. \end{split} \end{equation} Here $\Omega(k_1,k_2,k) =\Lambda(k_1)+\Lambda(k_2)-\Lambda(k)$, and $\xi_k$ are the Fourier coefficients of the initial data of $\psi$, defined by $\xi_k=\sqrt{n_{\textrm{in}}(k)} \, \eta_{k}(\omega)$. Denote the second term on the right hand side by $\mathcal{T}(\phi,\phi)_k$ and the right hand side by $\mathcal{F}(\phi)_k=\xi_k+\mathcal{T}(\phi,\phi)_k$. Then the equation is $\phi_k=\mathcal{F}(\phi)_k$. We can construct the approximation by iteration: $\phi=\mathcal{F}(\phi)=\mathcal{F}(\mathcal{F}(\phi))=\mathcal{F}(\mathcal{F}(\mathcal{F}(\phi)))=\cdots$. Define the approximate solution by $\psi_{app}=\mathcal{F}^{N}(\xi)$. By recursively expanding $\mathcal{F}^{N}$, we know that $\psi_{app}$ is a polynomial in $\xi$.
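The behavior of the fixed-point iteration $\mathcal{F}^{N}(\xi)$ used to build $\psi_{app}$ can be illustrated on a hypothetical scalar toy model, $F(\psi)=\xi+\lambda\psi^2$ (an invented stand-in for the actual operator, chosen so that the exact fixed point is available in closed form): each iterate $F^N(\xi)$ is a polynomial in $\xi$, and for small data the iterates converge rapidly.

```python
# Scalar toy model of the Picard-type iteration: F(psi) = xi + lam * psi^2.
# This is an invented stand-in for the map F in the paper, kept scalar so
# the exact fixed point is available by the quadratic formula.
import math

xi, lam = 0.1, 0.1
exact = (1 - math.sqrt(1 - 4 * lam * xi)) / (2 * lam)  # true fixed point

psi, errors = xi, []
for _ in range(6):
    psi = xi + lam * psi * psi       # one application of F
    errors.append(abs(psi - exact))

# each step multiplies the error by roughly |F'(exact)| = 2*lam*exact << 1
print(errors[0] > errors[-1], errors[-1] < 1e-10)
```

In the toy model the $N$-th iterate agrees with the fixed point up to terms of degree $> N+1$ in $\xi$, mirroring the claim below that the truncation error $Err(\xi)$ contains only high-degree monomials.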
The expansion can be described as follows: \begin{equation*} \begin{split} \psi_{app}=&\mathcal{F}^{N}(\xi)=\xi+\mathcal{T}(\mathcal{F}^{N-1}(\xi),\mathcal{F}^{N-1}(\xi)) \\ =&\xi+\mathcal{T}\Big(\xi+\mathcal{T}(\mathcal{F}^{N-2}(\xi),\mathcal{F}^{N-2}(\xi)), \cdots\Big)=\xi+\mathcal{T}(\xi,\xi)+\cdots \\ =&\xi+\mathcal{T}(\xi,\xi)+\mathcal{T}(\mathcal{T}(\xi,\xi),\xi) +\mathcal{T}(\xi,\mathcal{T}(\xi,\xi))+\cdots \end{split} \end{equation*} In the above iteration, we recursively replace $\mathcal{F}^{l}(\xi)$ by $\xi+\mathcal{T}(\mathcal{F}^{l-1}(\xi),\mathcal{F}^{l-1}(\xi))$. We need a good upper bound for each term of $\psi_{app}$. To this end, we introduce tree diagrams to represent the terms $\xi$, $\mathcal{T}(\xi,\xi)$, $\mathcal{T}(\mathcal{T}(\xi,\xi),\xi)$, $\cdots$. The basic notation of tree diagrams will be introduced in Section \ref{sec.appFey}. \subsubsection{The perturbative analysis}\label{sec.pert intro} The analysis in the previous section suggests that $\psi_{app}$ should be a good approximation of $\psi$. In other words, the error of this approximation $w=\psi-\psi_{app}$ is very small. To prove this fact, we will use the following equation for $w$, which can be derived from (\ref{eq.intmainintro}): \begin{equation}\label{eq.eqwintro} w= Err(\xi)+Lw+B(w,w) \end{equation} Here $Err(\xi)$ is a polynomial in $\xi$ whose monomials of degree $\le N+1$ vanish. $Lw$ and $B(w,w)$ are linear and quadratic in $w$, respectively. We prove the smallness of $w$ using a bootstrap method. Define $||w||_{X^p}=\sup_{k} \langle k\rangle^{p} |w_k|$. Starting from the assumption that $\sup_t||w||_{X^p}\le CL^{-M}$ ($C,M\gg 1$), in order to close the bootstrap we need to prove that $\sup_t||w||_{X^p}\le (1+C/2)L^{-M}<CL^{-M}$.
To prove $||w||_{X^p}\le (1+C/2)L^{-M}$, we use (\ref{eq.eqwintro}), which gives \begin{equation}\label{eq.ineqw} ||w||_{X^p}\le ||Err(\xi)||_{X^p}+||Lw||_{X^p}+||B(w,w)||_{X^p} \end{equation} To close the bootstrap, we just need to show that \begin{equation} ||Err(\xi)||_{X^p}\le L^{-M}, \quad ||B(w,w)||_{X^p}\le C^2L^{d+O(1)-M}. \end{equation} Combined with a special treatment of $Lw$, the above estimates imply that $||w||_{X^p}\le (1+C/2)L^{-M}$, which closes the bootstrap. \subsubsection{Couple diagrams, lattice points counting and $||Err(\xi)||_{X^p}$}\label{sec.latticeintro} In this section we explain the idea behind the upper bound for $||Err(\xi)||_{X^p}$. $(Err(\xi))_{k}$ is a sum of terms of the form \begin{equation} \begin{split} &\mathcal{J}_{k}^0(\xi)= \xi_k, \quad \mathcal{J}_k^1(\xi)=\frac{i\lambda}{L^{d}} \sum_{k_1+k_2-k=0} H^1_{k_1k_2} \xi_{k_1}\xi_{k_2} , \quad\cdots \\ &\mathcal{J}_{T,k}^l(\xi)=\left(\frac{i\lambda}{L^{d}}\right)^l\sum_{k_1+k_2+\cdots+k_{l+1}-k=0} H^l_{k_1\cdots k_{l+1}}(T) \xi_{k_1}\xi_{k_2}\cdots\xi_{k_{l+1}}, \quad\cdots \end{split} \end{equation} According to Section \ref{sec.appFey}, each term corresponds to a tree diagram and its coefficients can be calculated from these diagrams. This calculation is done in Section \ref{sec.refexp}. As a corollary of the tree diagram representation, we know that $H^l$ is large near a surface given by $2l$ equations $S=\{S_{\mathfrak{n}_1}(T)=0,\Omega_{\mathfrak{n}_1}(T)=0,\cdots,S_{\mathfrak{n}_{l}}(T)=0,\Omega_{\mathfrak{n}_l}(T)=0\}$. By large deviation estimates, to obtain an upper bound on the Gaussian polynomials $\mathcal{J}_{T,k}^l(\xi)$, it suffices to calculate their variance. This calculation is done in Section \ref{sec.coupwick} using the Wick theorem, and we introduce the concept of couple diagrams to represent the final result.
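For readers less familiar with the Wick theorem invoked above, its simplest instance states that moments of a centered Gaussian are sums over pairings, e.g. $\mathbb{E}\,\eta^4=3(\mathbb{E}\,\eta^2)^2$ for a real standard Gaussian $\eta$ (three pairings of four factors). A quick Monte Carlo check, purely illustrative (the paper applies the theorem to complex Gaussians indexed by lattice points):

```python
# Monte Carlo sanity check of Wick's theorem for a real standard Gaussian:
# E[eta^2] = 1 and E[eta^4] = 3, the latter being the sum over the 3 pairings.
import random

random.seed(1)
N = 200_000
m2 = m4 = 0.0
for _ in range(N):
    g = random.gauss(0.0, 1.0)
    m2 += g * g
    m4 += g ** 4
m2 /= N
m4 /= N

print(round(m2, 2), round(m4, 1))  # close to 1 and 3
```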
As a corollary of the couple diagram representation, we know that the coefficients of the variance concentrate near a surface given by $n$ equations ($n$ is the number of nodes in the couple) $S=\{S_{\mathfrak{n}_1}(T)=0,\Omega_{\mathfrak{n}_1}(T)=0,\cdots,S_{\mathfrak{n}_{n}}(T)=0,\Omega_{\mathfrak{n}_n}(T)=0\}$. Then in order to estimate the variance it suffices to upper bound the number of lattice points near this surface. This is done in Section \ref{sec.numbertheory} using the edge cutting argument to reduce the size of the couple. The method in \cite{DH} of obtaining number theory estimates based on tree diagrams does not work in our setting. This is because the energy conservation equation $\Lambda(k_1)+\Lambda(k_2)-\Lambda(k)=0$ of the ZK equation degenerates seriously when $k_{x}$ is close to $0$. In \eqref{eq.numbertheory1}, the number of solutions of the Diophantine equation $\Lambda(k_1)+\Lambda(k_2)=\Lambda(k)+\sigma+O(T^{-1})$ can only be bounded by $|k_x|^{-1}$, which goes to infinity as $k_{x}\rightarrow 0$. To overcome this difficulty, our proof is based on couple diagrams. In conclusion, combining the above arguments, we can show that, for any $M$, we can take $N$ large enough so that $||Err(\xi)||_{X^p}\le L^{-M}$. \subsubsection{Upper bounds for $||B(w,w)||_{X^p}$} $B(w,w)_k$ is a sum of terms of the form \begin{equation} \frac{i\lambda}{L^{d}} \int^{t}_0\sum_{k_1+k_2-k=0} B_{k_1k_2}(s)\, w_{k_1} w_{k_2}\, ds \end{equation} The upper bound for $||B(w,w)||_{X^p}$ can be obtained by the straightforward estimate \begin{equation} ||B(w,w)||_{X^p}\le L^{O(1)} ||w||^2_{X^p} \le C^2 L^{O(1)-2M}. \end{equation} Therefore, we get the desired upper bound $||B(w,w)||_{X^p}\ll L^{-M}$ by taking $M> O(1)$. \subsubsection{A random matrix bound and $Lw$}\label{sec.randmatintro} To obtain a good upper bound for $Lw$, we need to estimate the norm of the random matrix $L$, following the idea in \cite{DH}, \cite{DH2}.
In \cite{DH}, the authors consider solutions in the Bourgain space $X^{s,b}$ and use the $TT^*$ method to get an upper bound for the operator norm of $L$, $||L||_{X^{s,b}\rightarrow X^{s,b}}\ll 1$. But we prefer to work in the simpler functional space $X^p$, which is not a Hilbert space. Although the standard $TT^*$ method is not useful in a non-Hilbert space, we can bypass it using a Neumann series argument. Let us first explain how the $TT^*$ method works. Here we pretend that $||\cdot||_{X^p}$ is a Hilbert norm. The key idea of the $TT^*$ method is the inequality $||L||_{X^p\rightarrow X^p}=||(LL^*)^K||_{X^p\rightarrow X^p}^{\frac{1}{2K}}\le (L^d\sup_{k,l} |((LL^*)^K)_{k,l}|)^{1/(2K)}$. To upper bound $||L||_{X^p\rightarrow X^p}$, we just need to estimate $((LL^*)^K)_{k,l}$, which can be calculated by couple diagrams and estimated by the large deviation inequality. By taking $K$ large, the loss $L^{d/(2K)}$ can be made arbitrarily small. Unfortunately, $||\cdot||_{X^p}$ is not a Hilbert norm. However, we can bypass the $TT^*$ method using a Neumann series argument. Note that from \eqref{eq.eqwintro} we have the identity \begin{equation} w-Lw= Err(\xi)+B(w,w). \end{equation} We have good upper bounds for both terms on the right hand side. By the Neumann series argument we have \begin{equation} w= (1-L)^{-1}(\textit{RHS}) =(1-L^K)^{-1}(1+L+\cdots+L^{K-1})(\textit{RHS}). \end{equation} Since we can show that $||L^K||_{X^p\rightarrow X^p}\ll 1$ by calculating $((LL^*)^K)_{k,l}$, we have $||(1-L^K)^{-1}||_{X^p\rightarrow X^p}\lesssim 1$. The good upper bounds for $\textit{RHS}$ give us good upper bounds for $(1+L+\cdots+L^{K-1})(\textit{RHS})$. Combining the above arguments, we obtain the desired estimate for $w$. This is done in Sections \ref{sec.errorw} and \ref{sec.randommatrices}. One additional difficulty is the unboundedness of $L$ due to the derivative $\partial_x$ in the nonlinearity. This is controlled by the high frequency decay coming from the viscosity.
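The algebraic identity behind the Neumann series argument, $(1-L)^{-1}=(1-L^K)^{-1}(1+L+\cdots+L^{K-1})$, which follows from $(1+L+\cdots+L^{K-1})(1-L)=1-L^K$, can be sanity-checked numerically. The $2\times 2$ matrix below is invented purely for illustration (it has small spectral radius) and has nothing to do with the actual random operator $L$:

```python
# Numerical check of (1-L)^{-1} = (1-L^K)^{-1} (1 + L + ... + L^{K-1})
# on a hypothetical 2x2 matrix with spectral radius < 1.
def mmul(A, B):
    return [[sum(A[i][r] * B[r][j] for r in range(2)) for j in range(2)]
            for i in range(2)]

def madd(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def minv(A):  # inverse of a 2x2 matrix
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / det, -A[0][1] / det], [-A[1][0] / det, A[0][0] / det]]

I2 = [[1.0, 0.0], [0.0, 1.0]]
L_op = [[0.3, 0.1], [-0.2, 0.25]]   # invented contraction, stand-in for L
K = 4

LK = I2
for _ in range(K):
    LK = mmul(LK, L_op)             # L^K

S, P = I2, I2                       # S accumulates 1 + L + ... + L^{K-1}
for _ in range(1, K):
    P = mmul(P, L_op)
    S = madd(S, P)

lhs = minv([[I2[i][j] - L_op[i][j] for j in range(2)] for i in range(2)])
rhs = mmul(minv([[I2[i][j] - LK[i][j] for j in range(2)] for i in range(2)]), S)

err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
print(err)  # tiny: floating-point round-off only
```

The point of the factorization in the proof is that only $L^K$ (not $L$ itself) needs an operator-norm bound smaller than $1$.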
\subsubsection{Proof of the main theorem} In summary, the above arguments in Sections \ref{sec.appsol}-\ref{sec.randmatintro} prove that when $t\le \alpha^{-2}$, we have $||w||_{X^p}\le L^{-M}$ with high probability ($P(\textit{false})\lesssim e^{-CL^{\theta}}$). This inequality is equivalent to $\sup_k\, \langle k \rangle^{p} |w_k|\le L^{-M}$. Remembering that $w:=\psi-\psi_{app}$, with high probability we have the estimate $\sup_k\, \langle k \rangle^{p} |\psi_k-\psi_{app,k}|\le L^{-M}$. This implies that $\mathbb E |\widehat \psi(t, k)|^2=\mathbb E |\psi_{app,k}|^2+O(L^{-M})$, which suggests that we may get the approximation of $\mathbb E |\widehat \psi(t, k)|^2$ by calculating $\mathbb E |\psi_{app,k}|^2$. $\mathbb E |\psi_{app,k}|^2$ can be exactly calculated, and the theorem can be proved by extracting the main term in $\mathbb E |\psi_{app,k}|^2$. This is done in Section \ref{sec.proofmain}. \subsection{Notations}\label{sec.notat} \underline{Universal constants:} In this paper, universal constants are constants that depend only on the dimension $d$, the diameter $D$ of the support of $n_{\text{in}}$ and the length of the inertial range $l^{-1}_d$. \underline{$O(\cdot)$, $\ll$, $\lesssim$, $\sim$:} Throughout this paper, we frequently use the notations $O(\cdot)$, $\ll$, $\lesssim$, $\sim$. $A=O(B)$ or $A\lesssim B$ means that there exists a constant $C$ such that $A\le CB$. $A\ll B$ means that there exists a small constant $c$ such that $A\le cB$. $A\sim B$ means that there exist two constants $c$, $C$ such that $cB\le A\le CB$. Here the meaning of constant depends on the context. If they appear in conditions involving $k$, $\Lambda$, $\Omega$, etc., like $|k|\lesssim 1$, $\iota_{\mathfrak{e}_1}k_{\mathfrak{e}_1}+\iota_{\mathfrak{e}_2}k_{\mathfrak{e}_2}+\iota_{\mathfrak{e}}k_{\mathfrak{e}}=0$, then they are universal constants.
If these constants appear in an estimate which gives an upper bound of some quantity, like $||L^K||_{X^p\rightarrow X^p}\ll 1$ or $\sup_t\sup_k |(\mathcal{J}_T)_k|\lesssim L^{O(l(T)\theta)} \rho^{l(T)}$, then in addition to the quantities that universal constants depend on, they may also depend on the quantities $\theta$, $\varepsilon$, $K$, $M$, $N$, $\epsilon_1$. \underline{Order of constants:} Here is the order of all constants which can appear in the exponent or superscript of $L$. These constants are $\theta$, $\varepsilon$, $K$, $M$, $N$, $\epsilon_1$. All of these constants are small compared to $L$, in the sense that they are less than $L^{\theta}$ for arbitrarily small $\theta>0$. $\varepsilon$ can be an arbitrarily small constant less than $0.5$; the reader is encouraged to take it to be $0.01$. The order of the other constants is determined by the relations $\theta\ll \varepsilon$, $K=O(\theta^{-1})$, $M\gg K$, $N\ge M/\theta$; here the constants in $\ll$, $O(\cdot)$ are universal. \underline{$\mathbb{Z}_L^d$:} $\mathbb{Z}_L^d\mathrel{\mathop:}= \{k=\frac{K}{L}:K\in \mathbb{Z}^d\}$. \underline{$k_x$, $k_{\perp}$:} Given any vector $k$, $k_x$ denotes the first component of $k$ and $k_{\perp}$ is the vector formed by the remaining components of $k$. \underline{$\Lambda(k)$, $\Lambda(\nabla)$:} $\Lambda(k)\coloneqq k_{1}(k_1^2+\cdots+ k_d^2)$ and $\Lambda(\nabla) = i|\nabla|^2\partial_{x_1}$. \underline{Fourier series:} The spatial Fourier series of a function $u: \mathbb T_L^d \to \mathbb C$ is defined on $\mathbb Z^d_L:=L^{-1}\mathbb Z^{d}$ by \begin{equation}\label{fourierset} u_k=\int_{\mathbb T^d_L} u(x) e^{-2\pi i k\cdot x}\,dx,\quad \mathrm{\; so \,that \;}\quad u(x)=\frac{1}{L^d}\sum_{k \in \mathbb Z^d_L} u_k \,e^{2\pi i k\cdot x}. \end{equation} Given any function $F$, $F_k$ or $(F)_k$ denotes its Fourier coefficient.
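As a consequence of this normalization, which we will use when bounding Fourier coefficients by the $L^2$ norm, the Plancherel identity takes the form
\begin{equation*}
\int_{\mathbb T^d_L} |u(x)|^2\,dx=\frac{1}{L^d}\sum_{k\in \mathbb Z^d_L} |u_k|^2,
\end{equation*}
and in particular $|u_k|\le ||u||_{L^1(\mathbb T^d_L)}\le L^{d/2}||u||_{L^2(\mathbb T^d_L)}$ by the Cauchy--Schwarz inequality.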
\underline{Order of $L$:} In this paper, $L$ is assumed to be a constant which is much larger than all the universal constants and $\theta$, $\varepsilon$, $K$, $M$, $N$, $\epsilon_1$. \underline{$L$-certainty:} If some statement $S$ involving $\omega$ is true with probability $\geq 1-O_{\theta}(e^{-L^\theta})$, then we say the statement $S$ is $L$-certain. \subsection{A short survey of previous papers} (1) \underline{Results about the ZK equation:} The ZK equation was introduced in the paper of Zakharov and Kuznetsov \cite{ZK} as an asymptotic model describing the propagation of nonlinear ionic-sonic waves in a magnetized plasma. For a good reference on the physical background, see the book of Davidson \cite{Dbook}. For rigorous results on wellposedness and the derivation from the Euler-Poisson system, see \cite{LLS} and the references therein. (2) \underline{Previous papers about wave turbulence theory:} There is a large number of physics papers on the derivation of the wave kinetic equation. For references, see the books of Zakharov, Lvov, Falkovich \cite{ZLFBook} and Nazarenko \cite{Nazarenko}, and the review paper of Newell and Rumpf \cite{NR}. The first rigorous result in wave turbulence theory is the paper of Lukkarinen and Spohn \cite{LukSpohn}, in which they verified the WKE for Gibbs measure initial data. Then, in the paper of Buckmaster, Germain, Hani, Shatah \cite{BGHS2}, the basic concepts of general wave turbulence were first rigorously formulated, together with a non-trivial result verifying the WKE for a short time. In the paper of Deng and Hani \cite{DH} and the papers of Collot and Germain \cite{CG1}, \cite{CG2}, ideas from the study of randomly initialized PDE were applied to wave turbulence, proving the WKE for almost sharp time. In the paper of Faou \cite{Faou}, the linearized wave kinetic equation was derived near the Rayleigh-Jeans spectrum.
In the paper of Deng and Hani \cite{DH2} and the paper of Staffilani and Tran \cite{ST}, the WKE was proved for the sharp time, based on a normal form expansion and an expansion of the Liouville equation respectively. In the paper of Ampatzoglou, Collot, and Germain \cite{ACG}, the WKE was derived in the space-inhomogeneous case. In the papers of Dymov and Kuksin \cite{DK1}-\cite{DK4}, they studied the normal form expansion series and introduced the concept of discrete wave turbulence. In another paper of Deng and Hani \cite{DH3}, they proved a result about higher order correlation functions. The main conclusion of this paper is similar to that of \cite{ST}. Compared to \cite{ST}, the assumption on the dimension $d$ in this paper is better, while the time scale is shorter. (3) \underline{Previous papers about the dynamics of WKE:} There are also many papers on the dynamics of the WKE itself. For references, see \cite{GI}, \cite{GST}, \cite{SoT1}, \cite{SoT2} and the references therein. \section{The Perturbation Expansion} In this section, we calculate the renormalized approximation series and introduce Feynman diagrams to represent the terms in this series. Then we bound the error of this approximation by the bootstrap method, assuming several propositions about upper bounds of higher order terms. We will prove these propositions in the rest of the paper. \subsection{The approximation series and Feynman diagrams}\label{sec.appFey} In this section we derive the equation for the Fourier coefficients and construct the approximate solution. \subsubsection{The Equation of Fourier coefficients} Let $\psi_k$ be the Fourier coefficient of $\psi$.
Then in terms of $\psi_k$, equation (\ref{eq.MKDV}) becomes \begin{equation}\label{eq.mainfourier} \begin{cases} \dot{\psi}_{k} = i\Lambda(k) \psi_k -\nu |k|^2 \psi_k +\frac{i\lambda}{L^{d}} \sum\limits_{\substack{(k_1,k_2) \in (\mathbb{Z}^d_L)^2 \\ k_1 + k_2 = k}} k_{x}\psi_{k_1} \psi_{k_2} \\[2em] \psi_k(0) = \xi_k = \sqrt{n_{\textrm{in}}(k)} \, \eta_{k}(\omega) \end{cases} \end{equation} Define the linear profile by \begin{equation} \phi_k(t):= e^{-i\Lambda(k) t} \psi_k(t). \end{equation} Rewriting \eqref{eq.mainfourier} in terms of $\phi_k$ gives \begin{equation}\label{eq.mainlinearprofile} \begin{split} \dot{\phi}_{k} = -\nu |k|^2 \phi_k + \frac{i\lambda}{L^{d}} \sum\limits_{S(k_1,k_2,k)=0}k_{x}\phi_{k_1} \phi_{k_2}e^{i t\Omega(k_1,k_2,k)} \end{split} \end{equation} where \begin{equation} \begin{split} &S(k_1,k_2,k) = k_1 + k_2 - k, \\ &\Omega(k_1,k_2,k) =\Lambda(k_1)+\Lambda(k_2)-\Lambda(k). \end{split} \end{equation} We will work with \eqref{eq.mainlinearprofile} in the rest of this paper. Integrating (\ref{eq.mainlinearprofile}) gives \begin{equation}\label{eq.intmain} \begin{split} \phi_k =\xi_k+ \underbrace{\frac{i\lambda}{L^{d}} \sum\limits_{S(k_1,k_2,k)=0}\int^{t}_0k_{x}\phi_{k_1} \phi_{k_2}e^{i s\Omega(k_1,k_2,k)- \nu|k|^2(t-s)} ds}_{\mathcal{T}(\phi,\phi)_k}. \end{split} \end{equation} Denote the second term on the right hand side by $\mathcal{T}(\phi,\phi)_k$, and denote the right hand side by $\mathcal{F}(\phi)_k=\xi_k+\mathcal{T}(\phi,\phi)_k$. With these notations, equation \eqref{eq.intmain} becomes $\phi_k=\mathcal{F}(\phi)_k$. We construct the approximation series by iteration: $\phi=\mathcal{F}(\phi)=\mathcal{F}(\mathcal{F}(\phi))=\mathcal{F}(\mathcal{F}(\mathcal{F}(\phi)))=\cdots$. To obtain any estimate of the approximate solution constructed in this way, we need a compact graphical notation to represent the huge number of terms in the approximate solution. This is done by introducing the concept of Feynman diagrams.
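To get a sense of how many terms the iteration produces, one can count the binary trees that will index them in the next section. The following short Python sketch (an illustration, not part of the paper's argument) counts binary trees with exactly $l$ branching nodes via the recursion a tree is a single leaf or a root branch with two subtrees; the counts are the Catalan numbers, so the number of terms grows exponentially in the truncation order.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def num_trees(l):
    """Number of binary trees with exactly l branching nodes.

    A tree is either a single leaf (l = 0), or a root branch whose two
    subtrees carry a and l-1-a branching nodes respectively."""
    if l == 0:
        return 1
    return sum(num_trees(a) * num_trees(l - 1 - a) for a in range(l))

# Number of trees with l(T) = 0, 1, ..., 5 branches: the Catalan numbers.
print([num_trees(l) for l in range(6)])     # [1, 1, 2, 5, 14, 42]
# Total number of tree terms J_T with l(T) <= 5 in the truncated series.
print(sum(num_trees(l) for l in range(6)))  # 65
```

This is only a counting sketch; the analytic content of each term $\mathcal{J}_T$ is developed below.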
\subsubsection{Some basic definitions from graph theory} In this section we introduce the concepts of binary trees, branches, leaves, subtrees, node decorations and branching leaves. \begin{defn} In this paper we need the following concepts from graph theory. \begin{enumerate} \item \textbf{Binary trees:} A \underline{binary tree} $T$ is a tree in which each node has $2$ or $0$ children. An example of a binary tree used in this paper is shown in Figure \ref{fig.decsub}. \item \textbf{Branches:} A \underline{branch} (or \underline{branched node}) in a binary tree is a node which has $2$ children. The number of all branches in a tree $T$ is denoted by $l(T)$. In Figure \ref{fig.decsub}, $l(T)=2$. \item \textbf{Leaves:} A \underline{leaf} of a tree $T$ is a node which has no children. In Figure \ref{fig.decsub}, all $\star$ nodes and $\Box$ nodes are leaves. \item \textbf{Subtrees:} If every child of every node in a subset $T'$ of a tree $T$ is also contained in $T'$, then $T'$ itself forms a tree, and we call $T'$ a \underline{subtree} of $T$. If the root node of $T'$ is $\mathfrak{n}\in T$, we say $T'$ is the \underline{subtree rooted at $\mathfrak{n}$} or the \underline{subtree of $\mathfrak{n}$} and denote it by $T_\mathfrak{n}$. In Figure \ref{fig.decsub}, the tree inside the box is the subtree rooted at the node $\bullet$. \item \textbf{Node decoration:} In Figure \ref{fig.decsub}, each node is associated with a symbol in $\{\bullet,\ \star,\ \Box\}$. If a node $\mathfrak{n}$ has pattern $\bullet$ (similarly $\star,\ \Box$), we say $\mathfrak{n}$ is decorated by $\bullet$ ($\star,\ \Box$) or $\mathfrak{n}$ has decoration $\bullet$ ($\star,\ \Box$). In what follows we adopt the convention that leaves always have decoration $\star$ or $\Box$, and nodes other than leaves always have decoration $\bullet$.
\begin{figure}[H] \centering \scalebox{0.5}{ \begin{tikzpicture}[level distance=80pt, sibling distance=100pt] \draw node[fillcirc](1) {} child {node[fillcirc] (2) {} child {node[draw, minimum size=0.4cm] (5) {}} child {node[draw, minimum size=0.4cm] (6) {}} } child {node[fillstar] (3) {}}; \node[rectangle, draw, minimum width = 5cm, minimum height = 4cm] at (-1.8,-4.2) {}; \end{tikzpicture} } \caption{Subtrees and node decoration.} \label{fig.decsub} \end{figure} \item \textbf{Branching and normal leaves:} Leaves denoted by $\Box$ are called \underline{branching leaves}. The other leaves, denoted by $\star$, are called \underline{normal leaves}. The notion of branching leaves is useful in the construction of trees, in which the presence of $\Box$ means that the construction is not finished: $\Box$ denotes leaves that may be replaced by a branch later. The concept of branching leaves and $\Box$ is only used in section \ref{sec.connection}, so the reader can safely forget it after that section. \end{enumerate} \end{defn} \subsubsection{Connection between iteration and trees}\label{sec.connection} In this section we explain non-rigorously the connection between the perturbation expansion and trees. A rigorous argument can be found in the next section. The iteration process can be described as follows: \begin{equation*} \begin{split} \phi=&\mathcal{F}(\phi)=\xi+\mathcal{T}(\phi,\phi) \\ =&\xi+\mathcal{T}\Big(\xi+\mathcal{T}(\phi,\phi), \cdots\Big)=\xi+\mathcal{T}(\xi,\xi)+\mathcal{T}\Big(\mathcal{T}(\phi,\phi), \xi\Big)\cdots \\ =&\xi+\mathcal{T}(\xi,\xi)+\mathcal{T}(\mathcal{T}(\xi,\xi),\xi) +\mathcal{T}(\xi,\mathcal{T}(\xi,\xi))+\cdots \end{split} \end{equation*} In the above iteration, we recursively choose one $\phi$, replace it by $\xi+\mathcal{T}(\phi,\phi)$ and use the bilinearity of $\mathcal{T}$ to expand it into two terms.
\begin{equation}\label{eq.termgeneration} \begin{split} &\mathcal{T}\Big(\cdots,\mathcal{T}(\mathcal{T}(\xi,\underline{\phi}),\cdots)\Big)\rightarrow \mathcal{T}\Big(\cdots,\mathcal{T}(\mathcal{T}(\xi,\underline{\xi+\mathcal{T}(\phi,\phi)}),\cdots)\Big) \\ =& \underbrace{\mathcal{T}\Big(\cdots,\mathcal{T}(\mathcal{T}(\xi,\underline{\xi}),\cdots)\Big)}_{I} +\underbrace{\mathcal{T}\Big(\cdots,\mathcal{T}(\mathcal{T}(\xi,\underline{\mathcal{T}(\phi,\phi)}),\cdots)\Big)}_{II} \end{split} \end{equation} Here $I$ and $II$ are obtained by replacing $\phi$ by $\xi$ and by $\mathcal{T}(\phi,\phi)$ respectively. In summary, all terms in the expansion can be generated by the following steps. \begin{itemize} \item \textbf{Step $0$.} Add a term $\phi$ to the summation $\mathcal{J}$. \item \textbf{Step $i$ ($i\ge 1$).} Assume that \textbf{Step $i-1$} has been finished, producing a sum of terms $\mathcal{J}$; then choose a term in $\mathcal{J}$ which has the least number of $\xi$'s and $\phi$'s, remove this term from $\mathcal{J}$ and add to $\mathcal{J}$ the two terms constructed in \eqref{eq.termgeneration}. \end{itemize} This process is very similar to the construction of binary trees, in which we recursively replace a chosen branching leaf by a normal leaf or a branch. \begin{itemize} \item \textbf{Step $0$.} Start from a branching root node $\Box$. \item \textbf{Step $i$ ($i\ge 1$).} Assume that we have finished \textbf{Step $i-1$}, which produces a collection of trees $\mathscr{T}$; then choose a tree in $\mathscr{T}$ which has the least number of branching leaves $\Box$ and normal leaves $\star$, remove this tree from $\mathscr{T}$ and add two new trees to $\mathscr{T}$. In these two new trees, we replace a branching leaf $\Box$ by a normal leaf $\star$ or by a branched node $\bullet$ whose two children are branching leaves $\Box$. This construction is illustrated by Figure \ref{fig.construction}.
\begin{figure}[H] \centering \scalebox{0.5}{ \begin{tikzpicture}[level distance=80pt, sibling distance=100pt] \draw node[fillcirc](1) {} child {node[draw, minimum size=0.4cm] (2) {}} child {node[fillstar] (3) {}}; \node[draw, single arrow, minimum height=33mm, minimum width=8mm, single arrow head extend=2mm, anchor=west, rotate=0] at (4,-1.5) {}; \node[scale=3.0] at (16,-2.9) {,}; \node[fillcirc](4) at (12,0) {} child {node[fillstar] (5) {}} child {node[fillstar] (6) {}}; \node[fillcirc](7) at (22,1.5) {} child {node[fillcirc] (8) {} child {node[draw, minimum size=0.4cm] (5) {}} child {node[draw, minimum size=0.4cm] (6) {}} } child {node[fillstar] (9) {}}; \end{tikzpicture} } \caption{One step in the construction of binary trees} \label{fig.construction} \end{figure} \end{itemize} By comparing the above two processes, we can make the connection between terms and trees more explicit. Each node $\bullet$ other than a leaf in the tree $T$ corresponds to a $\mathcal{T}(\cdots,\cdots)$ in a term $\mathcal{J}_{T}$. Each normal leaf $\star$ and each branching leaf $\Box$ corresponds to $\xi$ and $\phi$ respectively. \textbf{Step} $i$, in which $\phi$ is replaced by $\xi$ or $\mathcal{T}(\phi,\phi)$, corresponds to replacing $\Box$ by $\star$ or by a branch with two children $\Box$. We have the following recursive formula for calculating a term $\mathcal{J}_T$ from a binary tree $T$. If $T$ has only one node, then $\mathcal{J}_T=\xi$. Otherwise let $\bullet_1$, $\bullet_2$ be the two children of the root node $\bullet$, and let $T_{\bullet_1}$, $T_{\bullet_2}$ be the subtrees of $T$ rooted at these nodes. If $\mathcal{J}_{T_{\bullet_1}}$, $\mathcal{J}_{T_{\bullet_2}}$ have been recursively calculated, then $\mathcal{J}_T$ is given by \begin{equation}\label{eq.treeterm'} \mathcal{J}_T=\mathcal{T}(\mathcal{J}_{T_{\bullet_1}}, \mathcal{J}_{T_{\bullet_2}}).
\end{equation} The formal power series obtained by iterating $\phi=\mathcal{F}(\phi)$ can be calculated from trees as $\sum_{T\in \mathscr{T}} \mathcal{J}_T$. Let $l(T)$ be the number of branches in $T$; then it can be shown that $\mathcal{J}_T$ is a degree $l(T)+1$ polynomial of $\xi$. We define the approximation series to be a finite degree truncation of the formal power series, namely $\sum_{l(T)\le N} \mathcal{J}_T$. \subsubsection{Feynman diagrams and construction of the approximate solution} In this section we present the rigorous argument equivalent to that of the above section. In the construction of trees, eventually all $\Box$ nodes are replaced by $\bullet$ or $\star$, so in what follows we only consider trees whose nodes are decorated by $\bullet$, $\star$. \begin{defn}\label{def.treeterms} Given a binary tree $T$ whose nodes are decorated by $\bullet$, $\star$, we inductively define the quantity $\mathcal{J}_T$ by: \begin{equation}\label{eq.treeterm} \mathcal{J}_T= \begin{cases} \xi, \qquad\qquad\quad\ \textit{ if $T$ has only one node $\star$.} \\ \mathcal{T}(\mathcal{J}_{T_{\mathfrak{n}_1}}, \mathcal{J}_{T_{\mathfrak{n}_2}}), \textit{ otherwise.} \end{cases} \end{equation} Here $\mathfrak{n}_1$, $\mathfrak{n}_2$ are the two children of the root node $\mathfrak{r}$, and $T_{\mathfrak{n}_1}$, $T_{\mathfrak{n}_2}$ are the subtrees of $T$ rooted at these nodes. \end{defn} \begin{defn} Given a large number $N$, define the approximate solution $\phi_{app}$ by \begin{equation}\label{eq.approxsol} \phi_{app}=\sum_{l(T)\le N} \mathcal{J}_T \end{equation} \end{defn} The above section explains why the approximation series should equal \eqref{eq.approxsol}, a sum of many tree terms; but once we know this fact, we can prove it directly and forget the motivation. The lemma below proves that $\phi_{app}$ defined by the above expression is an approximate solution.
\begin{lem}\label{lem.approxerror} Define \begin{equation} Err=\mathcal{F}(\phi_{app})-\phi_{app}, \end{equation} then we have \begin{equation}\label{eq.approxerror} Err=\sum_{T\in \mathcal{T}_{>N}^*} \mathcal{J}_T, \end{equation} where $\mathcal{T}_{>N}^*$ is defined by \begin{equation} \begin{split} \mathcal{T}_{>N}^*=\{&T:l(T)>N,\ l(T_{\mathfrak{n}_1})\le N, l(T_{\mathfrak{n}_2})\le N, \\ &\textit{$T_{\mathfrak{n}_1}$, $T_{\mathfrak{n}_2}$ are the subtrees defined in Definition \ref{def.treeterms}} \} \end{split} \end{equation} \end{lem} \begin{rem} Notice that all terms in $\sum_{T\in \mathcal{T}_{>N}^*}$ are polynomials of $\xi$ of degree $>N$. Therefore, the approximation error of $\phi_{app}$ is of very high order, which shows that $\phi_{app}$ is an appropriate approximate solution. \end{rem} \begin{proof} By \eqref{eq.approxsol}, we get \begin{equation}\label{eq.lemapproxerror} \begin{split} Err=&\mathcal{F}(\phi_{app})-\phi_{app} \\ =&\xi+\mathcal{T}(\phi_{app},\phi_{app})-\phi_{app} \\ =&\xi+\sum_{l(T_1),l(T_2)\le N} \mathcal{T}(\mathcal{J}_{T_1},\mathcal{J}_{T_2})-\sum_{l(T)\le N} \mathcal{J}_T \end{split} \end{equation} Let $T$ be the tree constructed by connecting the root nodes $\mathfrak{n}_1$, $\mathfrak{n}_2$ of $T_1$, $T_2$ to a new node $\mathfrak{r}$. We define $\mathfrak{r}$ to be the root node of $T$. Then by \eqref{eq.treeterm}, we have \begin{equation} \mathcal{J}_T=\mathcal{T}(\mathcal{J}_{T_1}, \mathcal{J}_{T_2}) \end{equation} and \begin{equation} \sum_{l(T_1),l(T_2)\le N} \mathcal{T}(\mathcal{J}_{T_1},\mathcal{J}_{T_2})=\sum_{\substack{l(T)\ge 1\\ l(T_1),l(T_2)\le N}} \mathcal{J}_{T} \end{equation} By \eqref{eq.lemapproxerror}, we get \begin{equation} \begin{split} Err=&\xi+\sum_{\substack{l(T)\ge 1\\ l(T_1),l(T_2)\le N}} \mathcal{J}_{T}-\sum_{l(T)\le N} \mathcal{J}_T \\ =&\sum_{\substack{T_1,T_2\text{ are subtrees of }\mathfrak{r}\\ l(T_1),l(T_2)\le N}} \mathcal{J}_{T}-\sum_{\substack{l(T)\le N\\ l(T_1),l(T_2)\le N}} \mathcal{J}_T
\\ =&\sum_{T\in \mathcal{T}_{>N}^*} \mathcal{J}_T \end{split} \end{equation} Here, in the second equality, we used the fact that $\sum_{l(T)\le N}=\sum_{\substack{l(T)\le N\\ l(T_1),l(T_2)\le N}}$. This completes the proof of the lemma. \end{proof} \subsection{Estimates of the approximate solution} \subsubsection{Estimates of tree terms} By \eqref{eq.approxerror}, in order to control the approximation error $Err$, it suffices to obtain upper bounds on the tree terms $\mathcal{J}_T$. We state the upper bound in the proposition below and postpone its proof to section \ref{sec.treetermsupperbound}. Let us introduce a definition before stating the proposition. \begin{defn}\label{def.Lcertainly} Given a property $A$, we say $A$ happens $L$-certainly if the probability that $A$ happens satisfies $P(A)\ge 1-Ce^{-L^\theta}$ for some $C, \theta>0$. \end{defn} \begin{prop}\label{prop.treetermsupperbound} We have $L$-certainly that \begin{equation}\label{eq.treetermsupperbound} \sup_t\sup_k |(\mathcal{J}_T)_k|\lesssim L^{O(l(T)\theta)} \rho^{l(T)}, \end{equation} and $(\mathcal{J}_T)_k=0$ if $|k|\gtrsim 1$. Here $(\mathcal{J}_T)_k$ is the Fourier coefficient of $\mathcal{J}_T$ and \begin{equation} \rho=\alpha\, T^{\frac{1}{2}}_{\text{max}}. \end{equation} \end{prop} \subsubsection{Linearization around the approximate solution} Let $w=\phi-\phi_{app}$ be the deviation of the true solution $\phi$ from the approximate solution $\phi_{app}$. In order to estimate $w$, we consider the linearization of the equation \begin{equation} \phi=\mathcal{F}(\phi)=\xi+\mathcal{T}(\phi,\phi). \end{equation} The linearized equation is an equation for $w$ given by \begin{equation}\label{eq.eqw} w= Err(\xi)+Lw+B(w,w), \end{equation} where $Err(\xi)$, $Lw$, $B(w,w)$ are given by \begin{equation} \left\{ \begin{aligned} &Err=\mathcal{F}(\phi_{app})-\phi_{app}, \\ &Lw=2\mathcal{T}(\phi_{app},w), \\ &B(w,w)=\mathcal{T}(w,w). \end{aligned}\right.
\end{equation} As explained in section \ref{sec.randmatintro}, to control $w$ we need the operator norm bound on $L^K$ in $X^p$. Notice that $\phi_{app}=\sum_{l(T)\le N} \mathcal{J}_T$, so $Lw$ is a sum of terms of the form $\mathcal{T}(\mathcal{J}_{T},w)$. In order to upper bound $L^K$, it suffices to obtain the following operator norm bound for $w\rightarrow \mathcal{T}(\mathcal{J}_{T},w)$. \begin{prop}\label{prop.operatorupperbound} Define $\rho=\alpha\, T^{\frac{1}{2}}_{\text{max}}$ as in Proposition \ref{prop.treetermsupperbound} and $\mathcal{P}_{T}$ to be the linear operator \begin{equation} \begin{split} w\rightarrow \mathcal{T}(\mathcal{J}_{T},w). \end{split} \end{equation} Then for any sequence of trees $\{T_1,\cdots,T_K\}$ with $l(T_j)\le N$, we have $L$-certainly the operator bound \begin{equation}\label{eq.operatornorm'} \left|\left|\prod_{j=1}^K\mathcal{P}_{T_j}\right|\right|_{L_t^{\infty}X^p\rightarrow L_t^{\infty}X^p}\le L^{O\left(1+\theta\sum_{j=1}^K l(T_j)\right)} \rho^{\sum_{j=1}^K l(T_j)}. \end{equation} The above inequality implies that \begin{equation}\label{eq.operatornorm} \left|\left|L^K\right|\right|_{L_t^{\infty}X^p\rightarrow L_t^{\infty}X^p}\le L^{O(1+K\theta)} \rho^{K}. \end{equation} \end{prop} The proof of this proposition can be found in section \ref{sec.randommatrices}. \subsection{Bound the error of the approximation}\label{sec.errorw} Define $w=\phi-\phi_{app}$ to be the approximation error. In this section we prove the following theorem, which gives an upper bound on $w$, assuming several propositions from the next two sections. \begin{thm}\label{th.app} Let $w=\phi-\phi_{app}$. Given any $M\gg 1$, there exists $N$ such that if $\phi_{app}$ is the $N$-th order approximate solution, then $\sup_{t\le T_{\text{max}}}||w(t)||_{X^p}\lesssim L^{-M}$ $L$-certainly (with probability $\geq 1-Ce^{-CL^\theta}$).
\end{thm} \begin{proof} If for some sufficiently large $C$ we can show that \begin{equation}\label{eq.claimw} \sup _{t\le T}||w(t)||_{X^p}\le CL^{-M}\textit{ implies that } \sup _{t\le T}||w(t)||_{X^p}< CL^{-M}, \end{equation} for all $T\le T_{\text{max}}$, then the proof of the theorem is finished. Here is the explanation. \eqref{eq.claimw} implies that if we define the set $A=\{T: \sup _{t\le T}||w||_{X^p}\le CL^{-M}\}$, then this set equals $\{T: \sup _{t\le T}||w||_{X^p}< CL^{-M}\}$, which is open. The original definition $A=\{T: \sup _{t\le T}||w||_{X^p}\le CL^{-M}\}$ implies that this set is also closed. It is nonempty because $||w(0)||_{X^p}=0$ implies that $0\in A$. Therefore, $A$ is open, closed and nonempty in $[0,T_{\text{max}}]$, so $A=[0,T_{\text{max}}]$, which implies the theorem. Now we prove (\ref{eq.claimw}). By \eqref{eq.eqw}, \begin{equation} w-Lw= Err(\xi)+B(w,w). \end{equation} By the Neumann series we have \begin{equation}\label{eq.maintheoremeq1} \begin{split} w=& (1-L)^{-1}(Err(\xi)+B(w,w)) \\ =&(1-L^K)^{-1}(1+L+\cdots+L^{K-1})(Err(\xi)+B(w,w)). \end{split} \end{equation} Assume that the constant in $O(1+K\theta)$ in \eqref{eq.operatornorm} is $C_{norm}$. Since $T_{\text{max}}\le L^{-\varepsilon} \alpha^{-2}$, we know that $\rho=\alpha\, T^{\frac{1}{2}}_{\text{max}}\lesssim L^{-\varepsilon}$. Take $K$ so large that $K\varepsilon\gg C_{norm}(1+K\theta)$, which is possible since $\theta\ll\varepsilon$; then $\left|\left|L^K\right|\right|_{L^{\infty}X^p\rightarrow L^{\infty}X^p}\le L^{C_{norm}(1+K\theta)}\rho^K\lesssim L^{C_{norm}(1+K\theta)} L^{-K\varepsilon}\ll 1$, and thus $\left|\left|(1-L^K)^{-1}\right|\right|_{L^{\infty}X^p\rightarrow L^{\infty}X^p}\lesssim 1$.
By \eqref{eq.maintheoremeq1}, we get \begin{equation} ||w(t)||_{X^p}\lesssim \sum_{j=1}^K||L^j(Err(\xi))||_{X^p}+ ||(1+L+\cdots+L^{K-1})(B(w,w))||_{X^p} \end{equation} By \eqref{eq.approxerror}, we know that $Err$ is a sum of tree terms $\sum_{T\in \mathcal{T}_{>N}^*} \mathcal{J}_T$ of order $> N$. Since $L=\sum_{1\le l(T)\le N} \mathcal{P}_{T}$, we know that $L^j(Err(\xi))$ is a sum of terms of the form $\mathcal{P}_{T_1}\circ\cdots\circ\mathcal{P}_{T_{j}}(\mathcal{J}_T)$, which by \eqref{eq.operatoreqsimpleJ_T} equals $\mathcal{J}_{T_1\circ\cdots\circ T_{j}\circ T}$. By Proposition \ref{prop.treetermsupperbound}, we get $||\mathcal{J}_{T_1\circ\cdots\circ T_{j}\circ T}||_{X^p}\lesssim (L^{O(\theta)} \rho)^{l(T_1\circ\cdots\circ T_{j}\circ T)}\lesssim L^{O(l(T)\theta)} \rho^{l(T)}$. Since $\rho=\alpha\, T^{\frac{1}{2}}_{\text{max}}\lesssim L^{-\varepsilon}$ and $l(T)>N$ in the sum defining $Err$, we get \begin{equation}\label{eq.maintheoremeq2} \sum_{j=1}^K||L^j(Err(\xi))||_{X^p}\lesssim L^{O(N\theta)} \rho^{N}\lesssim L^{O(N\theta)} L^{-N\varepsilon}\ll L^{-M} \end{equation} if we take $N\gg M/\varepsilon$ and $\theta\ll \varepsilon$. Taking $K=1$ in \eqref{eq.operatornorm'}, we know that $\left|\left|L\right|\right|_{L^{\infty}X^p\rightarrow L^{\infty}X^p}\le L^{O(1)}$, so $\left|\left|L^j\right|\right|_{L^{\infty}X^p\rightarrow L^{\infty}X^p}\le L^{O(K)}$ for $j\le K$. Taking $M\gg K$ and using the assumption in \eqref{eq.claimw}, we have $\sup _{t\le T}||w(t)||_{X^p}\le CL^{-M}$. Therefore, we have \begin{equation}\label{eq.maintheoremeq3} ||(1+L+\cdots+L^{K-1})(B(w,w))||_{X^p}\lesssim L^{O(K)} L^d ||w||^2_{X^p}\lesssim L^{O(K)} L^{O(1)} L^{-2M}\ll L^{-M}. \end{equation} Combining \eqref{eq.maintheoremeq2} and \eqref{eq.maintheoremeq3}, we deduce $\sup _{t\le T}||w(t)||_{X^p}\ll L^{-M} < CL^{-M}$ from the assumption that $\sup _{t\le T}||w(t)||_{X^p}\le CL^{-M}$. This completes the proof of Theorem \ref{th.app}.
\end{proof} \subsection{Proof of the main theorem}\label{sec.proofmain} In this section, we prove Theorem \ref{th.main}. \begin{proof}[Proof of Theorem \ref{th.main}] \textbf{Step 1.} ($\mathbb E |\widehat \psi(t, k)|^2$ is close to $\mathbb E |\psi_{app,k}|^2$) By Theorem \ref{th.app}, we know that when $t\le T_{\text{max}}= L^{-\varepsilon}\alpha^{-2}$, we have $||w||_{X^p}\le L^{-M}$ $L$-certainly. The above inequality is equivalent to $\sup_k\, |\langle k \rangle^s w_k|\le CL^{-M}$. Recall that $w:=\psi-\psi_{app}$, so $L$-certainly we have the following estimate \begin{equation}\label{eq.psikminusxik} \sup_k\, \langle k \rangle^s |\psi_k-\psi_{app,k}|\le CL^{-M} \end{equation} Denote by $A$ the event that the above estimate holds; then $\mathbb E |\widehat \psi(t, k)|^2=\mathbb E (|\psi_k|^2 1_{A})+\mathbb E (|\psi_k|^2 1_{A^c})$. $L$-certainty implies that $\mathbb P(A^c) \lesssim e^{-CL^{\theta}}$. Since $||\psi||_{L^2}$ is conserved and $|\psi_k|\le L^{d/2} ||\psi||_{L^2}\lesssim L^{d/2}$, we know that $\mathbb E (|\psi_k|^2 1_{A^c})\lesssim L^{d} e^{-CL^{\theta}}= O(L^{-M})$. Therefore, $\mathbb E |\widehat \psi(t, k)|^2=\mathbb E (|\psi_k|^2 1_{A})+O(L^{-M})$. Since we also have $\mathbb E |\psi_{app,k}|^2=\mathbb E (|\psi_{app,k}|^2 1_{A})+O(L^{-M})$, we conclude that \begin{equation} \mathbb E |\widehat \psi(t, k)|^2=\mathbb E |\psi_{app,k}|^2+\mathbb E ((|\psi_k|^2-|\psi_{app,k}|^2)1_{A})+O(L^{-M}) \end{equation} By (\ref{eq.psikminusxik}), $\mathbb E ((|\psi_k|^2-|\psi_{app,k}|^2)1_{A})=O(L^{-M})$. We may conclude that \begin{equation} \mathbb E |\widehat \psi(t, k)|^2=\mathbb E |\psi_{app,k}|^2+O(L^{-M}). \end{equation} This suggests that we may get the approximation of $\mathbb E |\widehat \psi(t, k)|^2$ by calculating $\mathbb E |\psi_{app,k}|^2$.
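For orientation, the lowest order contribution to $\mathbb E |\psi_{app,k}|^2$ can be computed by hand. For the trivial tree $T_0$ consisting of a single leaf, $\mathcal{J}_{T_0,k}=\xi_k=\sqrt{n_{\text{in}}(k)}\,\eta_k(\omega)$, so assuming the normalization $\mathbb E|\eta_k(\omega)|^2=1$ of the random variables $\eta_k$,
\begin{equation*}
\mathbb E\,|\mathcal{J}_{T_0,k}|^2=n_{\text{in}}(k)\,\mathbb E\,|\eta_k(\omega)|^2=n_{\text{in}}(k),
\end{equation*}
which recovers the initial spectrum; the kinetic correction appears at the next order of the expansion.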
\textbf{Step 2.} (Expansion of $\mathbb E |\psi_{app,k}|^2$) By \eqref{eq.approxsol}, we know that \begin{equation} \phi_{app}=\sum_{l(T)\le N} \mathcal{J}_T \end{equation} Define \begin{equation}\label{eq.n(j)} n^{(j)}(k)\mathrel{\mathop:}= \sum_{l(T)+l(T')=j} \mathbb E \mathcal{J}_{T,k}\overline{\mathcal{J}_{T',k}} \end{equation} Then Proposition \ref{prop.treetermsupperbound} or \ref{prop.treetermsvariance} gives upper bounds on $n^{(j)}(k)$, which proves (1) and \eqref{eq.n(j)estimate} of Theorem \ref{th.main}. \textbf{Step 3.} (Asymptotics of $n^{(1)}(k)$) The only claim left in Theorem \ref{th.main} is \eqref{eq.n1}, which is a corollary of Proposition \ref{prop.mainterms}. \end{proof} \section{Lattice points counting and convergence results} In this section, we prove Propositions \ref{prop.treetermsupperbound} and \ref{prop.operatorupperbound}, which give upper bounds for the tree terms $\mathcal{J}_{T,k}$ and the linearization operator $\mathcal{P}_T$. As explained before, these results are crucial in the proof of the main theorem. The proof is divided into several steps. In section \ref{sec.refexp}, we calculate the coefficients of $\mathcal{J}_{T,k}$ as polynomials of Gaussian random variables. In section \ref{sec.uppcoef}, we obtain upper bounds for the coefficients of these Gaussian polynomials. Large deviation theory suggests that an upper bound of a Gaussian polynomial can be derived from upper bounds of its expectation and variance. In section \ref{sec.coupwick}, we introduce the concept of couples, which is a graphical method of calculating the expectation of Gaussian polynomials. In section \ref{sec.numbertheory}, we use couples to establish a lattice point counting result. In sections \ref{sec.treetermsupperbound} and \ref{sec.randommatrices}, we apply the lattice point counting result to derive upper bounds for $\mathcal{J}_{T,k}$ and $\mathcal{P}_T$ respectively.
This finishes the proof of Propositions \ref{prop.treetermsupperbound} and \ref{prop.operatorupperbound}, and therefore the proof of the main theorem. \subsection{Refined expression of coefficients}\label{sec.refexp} From \eqref{eq.treeterm}, it is easy to show that the $\mathcal{J}_{T,k}$ are polynomials of $\xi$. In this section, we compute the coefficients of $\mathcal{J}_{T,k}$ using the definition \eqref{eq.treeterm}. Notice that all non-leaf nodes other than the root in a tree have degree $3$. For convenience, we add a new edge, called the \underline{leg} $\mathfrak{l}$, to the root node of trees, which makes the root also of degree $3$. This process is illustrated by Figure \ref{fig.leg}. \begin{figure}[H] \centering \scalebox{0.5}{ \begin{tikzpicture}[level distance=80pt, sibling distance=100pt] \node[fillcirc] {} child {node[fillcirc] {} child {node[fillstar] {}} child {node[fillstar] {}} } child {node[fillstar] {}}; \node[draw, single arrow, minimum height=33mm, minimum width=8mm, single arrow head extend=2mm, anchor=west, rotate=0] at (6,-3) {}; \node[] at (15,1.5) {} child {node[fillcirc] {} child {node[fillcirc] {} child {node[fillstar] {}} child {node[fillstar] {}} } child {node[fillstar] {}} }; \end{tikzpicture} } \caption{Adding a leg to a tree} \label{fig.leg} \end{figure} We also need the following concepts about trees. \begin{defn}\label{def.treemore} \begin{enumerate} \item \textbf{Nodes and children of an edge:} As in Figure \ref{fig.childrenofedge}, let $\mathfrak{n}_{u}$ and $\mathfrak{n}_{l}$ be the two endpoints of an edge $\mathfrak{e}$ and assume that $\mathfrak{n}_{l}$ is a child of $\mathfrak{n}_{u}$. We define $\mathfrak{n}_{u}$ (resp. $\mathfrak{n}_{l}$) to be the \underline{upper node} (resp. \underline{lower node}) of $\mathfrak{e}$. Let $\mathfrak{n}_1$, $\mathfrak{n}_2$ be the two children of $\mathfrak{n}_{l}$ and let $\mathfrak{e}_1$ (resp. $\mathfrak{e}_2$) be the edge between the two nodes $\mathfrak{n}_{l}$ and $\mathfrak{n}_1$ (resp.
$\mathfrak{n}_2$). $\mathfrak{e}_1$, $\mathfrak{e}_2$ are defined to be the two \underline{children edges} of $\mathfrak{e}$. \begin{figure}[H] \centering \scalebox{0.5}{ \begin{tikzpicture}[level distance=80pt, sibling distance=100pt] \node[scale=2.0] at (13.8,-2.7) {$\mathfrak{e}$}; \node[scale=2.0] at (14.3,-1.5) {$\mathfrak{n}_{u}$}; \node[scale=2.0] at (12.6,-4) {$\mathfrak{n}_{l}$}; \node[scale=2.0] at (12,-5.5) {$\mathfrak{e}_1$}; \node[scale=2.0] at (14.5,-5.5) {$\mathfrak{e}_2$}; \node[scale=2.0] at (11.5,-7.5) {$\mathfrak{n}_1$}; \node[scale=2.0] at (15.2,-7.5) {$\mathfrak{n}_2$}; \node[] at (15,1.5) {} child {node[fillcirc] {} child {node[fillcirc] {} child {node[fillstar] {}} child {node[fillstar] {}} } child {node[fillstar] {}} }; \end{tikzpicture} } \caption{Children $\mathfrak{e}_1$, $\mathfrak{e}_2$ of an edge $\mathfrak{e}$} \label{fig.childrenofedge} \end{figure} \item \textbf{Direction of an edge:} As in Figure \ref{fig.orientation}, each edge $\mathfrak{e}$ is assigned a \underline{direction} (or \underline{orientation}). This concept is mostly used to determine the values of the variables $\iota_{\mathfrak{e}}\in\{\pm\}$ that will be defined later. Although it can be shown that the final result does not depend on the choice of direction of each edge, for definiteness, we assign the downward direction to each edge. The orientation in Figure \ref{fig.orientation} is one example of this choice; under this orientation, each edge points towards the child.
\begin{figure}[H] \centering \scalebox{0.5}{ \begin{tikzpicture}[level distance=80pt, sibling distance=100pt] \node[scale=2.0] at (11.5,-7.5) {$1$}; \node[scale=2.0] at (15.0,-7.5) {$2$}; \node[scale=2.0] at (16.8,-4.7) {$3$}; \node[] at (15,1.5) (1) {} child {node[fillcirc] (2) {} child {node[fillcirc] (3) {} child {node[fillstar] (4) {}} child {node[fillstar] (5) {}} } child {node[fillstar] (6) {}} }; \draw[-{Stealth[length=5mm, width=3mm]}] (1) -- (2); \draw[-{Stealth[length=5mm, width=3mm]}] (2) -- (3); \draw[-{Stealth[length=5mm, width=3mm]}] (2) -- (6); \draw[-{Stealth[length=5mm, width=3mm]}] (3) -- (4); \draw[-{Stealth[length=5mm, width=3mm]}] (3) -- (5); \end{tikzpicture} } \caption{An orientation of a tree with labelled leaves} \label{fig.orientation} \end{figure} \item \textbf{Labelling of leaves:} As in Figure \ref{fig.orientation}, the leaves are labelled by $1$, $2$, $\cdots$, $l(T)+1$ from left to right. An edge pointing to a leaf $\mathfrak{n}$ is also labelled by $j$ if $\mathfrak{n}$ is labelled by $j$. \end{enumerate} \end{defn} Now we calculate the coefficients of $\mathcal{J}_{T,k}$. \begin{lem}\label{lem.treeterms} Given a tree $T$ of depth $l=l(T)$, denote by $T_{\text{in}}$ the tree formed by all non-leaf nodes $\mathfrak{n}$. Associate each $\mathfrak{n}\in T_{\text{in}}$ with a variable $t_{\mathfrak{n}}$ and associate each edge $\mathfrak{e}\in T$ with a variable $k_{\mathfrak{e}}$. Given a labelling of all leaves by $1$, $2$, $\cdots$, $l+1$, we identify $k_{\mathfrak{e}}$ with $k_j$ if $\mathfrak{e}$ connects to a leaf labelled by $j$. Given a node $\mathfrak{n}$, let $\mathfrak{e}_1$, $\mathfrak{e}_2$ and $\mathfrak{e}$ be the three edges incident to $\mathfrak{n}$ ($\mathfrak{e}$ is the parent of $\mathfrak{e}_1$ and $\mathfrak{e}_2$), let $\mathfrak{n}_1$, $\mathfrak{n}_2$ be the children of $\mathfrak{n}$ and let $\hat{\mathfrak{n}}$ be the parent of $\mathfrak{n}$.
Let $\mathcal{J}_T$ be the terms defined in Definition \ref{def.treeterms}; then their Fourier coefficients $\mathcal{J}_{T,k}$ are degree $l$ polynomials of $\xi$ given by the following formula \begin{equation}\label{eq.coefterm} \mathcal{J}_{T,k}=\left(\frac{i\lambda}{L^{d}}\right)^l\sum_{k_1,\, k_2,\, \cdots,\, k_{l+1}} H^T_{k_1\cdots k_{l+1}} \xi_{k_1}\xi_{k_2}\cdots\xi_{k_{l+1}} \end{equation} where $H^T_{k_1\cdots k_{l+1}}$ is given by \begin{equation}\label{eq.coef} H^T_{k_1\cdots k_{l+1}}=\int_{\cup_{\mathfrak{n}\in T_{\text{in}}} A_{\mathfrak{n}}} e^{\sum_{\mathfrak{n}\in T_{\text{in}}} it_{\mathfrak{n}}\Omega_{\mathfrak{n}}-\nu(t_{\widehat{\mathfrak{n}}}-t_{\mathfrak{n}})|k_{\mathfrak{e}}|^2} \prod_{\mathfrak{n}\in T_{\text{in}}} dt_{\mathfrak{n}} \ \delta_{\cap_{\mathfrak{n}\in T_{\text{in}}} \{S_{\mathfrak{n}}=0\}}\ \prod_{\mathfrak{e}\in T_{\text{in}}}\iota_{\mathfrak{e}}k_{\mathfrak{e},x} , \end{equation} and $\iota$, $A_{\mathfrak{n}}$, $S_{\mathfrak{n}}$, $\Omega_{\mathfrak{n}}$ are defined by \begin{equation}\label{eq.iotadef} \iota_{\mathfrak{e}}=\begin{cases} +1 \qquad \textit{if $\mathfrak{e}$ points inwards to $\mathfrak{n}$} \\ -1 \qquad \textit{if $\mathfrak{e}$ points outwards from $\mathfrak{n}$} \end{cases} \end{equation} \begin{equation} A_{\mathfrak{n}}= \begin{cases} \{t_{\mathfrak{n}_1},\, t_{\mathfrak{n}_2}\le t_{\mathfrak{n}}\} \qquad \textit{if $\mathfrak{n}\ne$ the root $\mathfrak{r}$} \\ \{t_{\mathfrak{r}}\le t\} \qquad\qquad\qquad\ \textit{if $\mathfrak{n}= \mathfrak{r}$ } \end{cases} \end{equation} \begin{equation}\label{eq.defnS_n} S_{\mathfrak{n}}=\iota_{\mathfrak{e}_1}k_{\mathfrak{e}_1}+\iota_{\mathfrak{e}_2}k_{\mathfrak{e}_2}+\iota_{\mathfrak{e}}k_{\mathfrak{e}} \end{equation} \begin{equation} \Omega_{\mathfrak{n}}=\iota_{\mathfrak{e}_1}\Lambda_{k_{\mathfrak{e}_1}}+\iota_{\mathfrak{e}_2}\Lambda_{k_{\mathfrak{e}_2}}+\iota_{\mathfrak{e}}\Lambda_{k_{\mathfrak{e}}} \end{equation} For root node
$\mathfrak{r}$, we impose the constraint that $k_{\mathfrak{r}}=k$ and $t_{\widehat{\mathfrak{r}}}=t$ (notice that $\mathfrak{r}$ does not have a parent, so $\widehat{\mathfrak{r}}$ is not otherwise well defined). \end{lem} \begin{proof} We can check by direct substitution that $\mathcal{J}_T$ defined by \eqref{eq.coefterm} and \eqref{eq.coef} satisfies the recursive formula \eqref{eq.treeterm}, so it is the unique solution of that recursive formula. This proves Lemma \ref{lem.treeterms}. \end{proof} \subsection{An upper bound of coefficients in expansion series}\label{sec.uppcoef} In this section, we derive an upper bound for the coefficients $H^T_{k_1\cdots k_{l+1}}$. Notice that in \eqref{eq.coefterm}, the coefficients $H^T_{k_1\cdots k_{l+1}}$ are given by integrals of oscillatory functions (see \eqref{eq.coef}). An upper bound can be derived by a standard integration by parts argument. Associate each $\mathfrak{n}\in T_{\text{in}}$ with two variables $a_{\mathfrak{n}}$, $b_{\mathfrak{n}}$. Then we define \begin{equation}\label{eq.defF_T} F_{T}(t,\{a_{\mathfrak{n}}\}_{\mathfrak{n}\in T_{\text{in}}},\{b_{\mathfrak{n}}\}_{\mathfrak{n}\in T_{\text{in}}})=\int_{\cup_{\mathfrak{n}\in T_{\text{in}}} A_{\mathfrak{n}}} e^{\sum_{\mathfrak{n}\in T_{\text{in}}} it_{\mathfrak{n}} a_{\mathfrak{n}} - \nu(t_{\widehat{\mathfrak{n}}}-t_{\mathfrak{n}})b_{\mathfrak{n}}} \prod_{\mathfrak{n}\in T_{\text{in}}} dt_{\mathfrak{n}} \end{equation} \begin{lem}\label{lem.boundcoef'} We have the following upper bound for $F_{T}(t,\{a_{\mathfrak{n}}\}_{\mathfrak{n}},\{b_{\mathfrak{n}}\}_{\mathfrak{n}})$, \begin{equation}\label{eq.boundcoef'} \sup_{\{b_{\mathfrak{n}}\}_{\mathfrak{n}}\lesssim 1} |F_{T}(t,\{a_{\mathfrak{n}}\}_{\mathfrak{n}},\{b_{\mathfrak{n}}\}_{\mathfrak{n}})|\lesssim \sum_{\{d_{\mathfrak{n}}\}_{\mathfrak{n}\in T_{\text{in}}}\in\{0,1\}^{l(T)}}\prod_{\mathfrak{n}\in T_{\text{in}}}\frac{1}{|q_{\mathfrak{n}}|+T^{-1}_{\text{max}}}.
\end{equation} Fix a sequence $\{d_{\mathfrak{n}}\}_{\mathfrak{n}\in T_{\text{in}}}$ whose elements $d_{\mathfrak{n}}$ take boolean values in $\{0,1\}$. We define the two sequences $\{q_{\mathfrak{n}}\}_{\mathfrak{n}\in T_{\text{in}}}$, $\{r_{\mathfrak{n}}\}_{\mathfrak{n}\in T_{\text{in}}}$ by the following recursive formulas \begin{equation}\label{eq.q_n'} q_{\mathfrak{n}}= \begin{cases} a_{\mathfrak{r}}, \qquad\qquad \textit{ if $\mathfrak{n}=$ the root $\mathfrak{r}$.} \\ a_{\mathfrak{n}}+d_{\mathfrak{n}}q_{\mathfrak{n}'},\ \ \textit{ if $\mathfrak{n}\neq\mathfrak{r}$ and $\mathfrak{n}'$ is the parent of $\mathfrak{n}$.} \end{cases} \end{equation} \begin{equation}\label{eq.r_n'} r_{\mathfrak{n}}= \begin{cases} b_{\mathfrak{r}}, \qquad\qquad \textit{ if $\mathfrak{n}=$ the root $\mathfrak{r}$.} \\ b_{\mathfrak{n}}+d_{\mathfrak{n}}r_{\mathfrak{n}'},\ \ \textit{ if $\mathfrak{n}\neq\mathfrak{r}$ and $\mathfrak{n}'$ is the parent of $\mathfrak{n}$.} \end{cases} \end{equation} \end{lem} \begin{proof} The lemma is proved by induction. For a tree $T$ that contains only one node $\mathfrak{r}$, we have $F_{T}=1$ and \eqref{eq.boundcoef'} is obviously true. Assume that \eqref{eq.boundcoef'} is true for trees with $\le n-1$ nodes. We prove the case of $n$ nodes.
For general $T$, let $T_1$, $T_2$ be the two subtrees and $\mathfrak{n}_1$, $\mathfrak{n}_2$ be the two children of the root $\mathfrak{r}$. By the definition \eqref{eq.defF_T} of $F_T$, we get \begin{equation}\label{eq.lemboundcoef'1} \begin{split} F_{T}(t)=&\int_{\cup_{\mathfrak{n}\in T_{\text{in}}} A_{\mathfrak{n}}} e^{\sum_{\mathfrak{n}\in T_{\text{in}}}it_{\mathfrak{n}} a_{\mathfrak{n}} - \nu(t_{\widehat{\mathfrak{n}}}-t_{\mathfrak{n}})b_{\mathfrak{n}}} \prod_{\mathfrak{n}\in T_{\text{in}}} dt_{\mathfrak{n}} \\ =&\int_{\cup_{\mathfrak{n}\in T_{\text{in}}} A_{\mathfrak{n}}}e^{it_{\mathfrak{r}} a_{\mathfrak{r}} - \nu(t-t_{\mathfrak{r}})b_{\mathfrak{r}}} e^{\sum_{\mathfrak{n}\in T_{\text{in},1}\cup T_{\text{in},2}} it_{\mathfrak{n}} a_{\mathfrak{n}} - \nu(t_{\widehat{\mathfrak{n}}}-t_{\mathfrak{n}})b_{\mathfrak{n}}} \left(dt_{\mathfrak{r}}\prod_{j=1}^2\prod_{\mathfrak{n}\in T_{\text{in},j}}dt_{\mathfrak{n}} \right) \\ =&\int_{\cup_{\mathfrak{n}\in T_{\text{in}}} A_{\mathfrak{n}}}e^{it_{\mathfrak{r}}(a_{\mathfrak{r}}+T^{-1}_{\text{max}}\, \text{sgn}(a_{\mathfrak{r}}))- \nu(t-t_{\mathfrak{r}})b_{\mathfrak{r}}} e^{-iT^{-1}_{\text{max}}t_{\mathfrak{r}} \text{sgn}(a_{\mathfrak{r}})} e^{\sum_{\mathfrak{n}\in T_{\text{in},1}\cup T_{\text{in},2}} it_{\mathfrak{n}} a_{\mathfrak{n}} - \nu(t_{\widehat{\mathfrak{n}}}-t_{\mathfrak{n}})b_{\mathfrak{n}}} \left(dt_{\mathfrak{r}}\prod_{j=1}^2\prod_{\mathfrak{n}\in T_{\text{in},j}}dt_{\mathfrak{n}} \right) \end{split} \end{equation} We integrate by parts in the above integral using Stokes' formula. Notice that for $t_{\mathfrak{r}}$, there are three inequality constraints: $t_{\mathfrak{r}}\le t$ and $t_{\mathfrak{r}}\ge t_{\mathfrak{n}_1},t_{\mathfrak{n}_2}$.
\begin{equation}\label{eq.lemboundcoefexpand} \begin{split} F_{T}(t)=\frac{1}{ia_{\mathfrak{r}}+iT^{-1}_{\text{max}} \text{sgn}(a_{\mathfrak{r}})+\nu b_{\mathfrak{r}} }&\int_{\cup_{\mathfrak{n}\in T_{\text{in}}} A_{\mathfrak{n}}} \frac{d}{dt_{\mathfrak{r}}}e^{it_{\mathfrak{r}}(a_{\mathfrak{r}}+T^{-1}_{\text{max}}\, \text{sgn}(a_{\mathfrak{r}}))- \nu(t-t_{\mathfrak{r}})b_{\mathfrak{r}}} \\ &e^{-iT^{-1}_{\text{max}}t_{\mathfrak{r}} \text{sgn}(a_{\mathfrak{r}})} e^{\sum_{\mathfrak{n}\in T_{\text{in},1}\cup T_{\text{in},2}} it_{\mathfrak{n}} a_{\mathfrak{n}} - \nu(t_{\widehat{\mathfrak{n}}}-t_{\mathfrak{n}})b_{\mathfrak{n}}} \left(dt_{\mathfrak{r}}\prod_{j=1}^2\prod_{\mathfrak{n}\in T_{\text{in},j}}dt_{\mathfrak{n}} \right) \end{split} \end{equation} \begin{flalign*} \hspace{1.3cm} =&\frac{1}{ia_{\mathfrak{r}}+iT^{-1}_{\text{max}} \text{sgn}(a_{\mathfrak{r}})+\nu b_{\mathfrak{r}} }\left(\int_{\cup_{\mathfrak{n}\in T_{\text{in}}} A_{\mathfrak{n}},\ t_{\mathfrak{r}}=t}-\int_{\cup_{\mathfrak{n}\in T_{\text{in}}} A_{\mathfrak{n}},\ t_{\mathfrak{r}}=t_{\mathfrak{n}_1}}-\int_{\cup_{\mathfrak{n}\in T_{\text{in}}} A_{\mathfrak{n}},\ t_{\mathfrak{r}}=t_{\mathfrak{n}_2}}\right) && \\ & e^{it_{\mathfrak{r}}(a_{\mathfrak{r}}+T^{-1}_{\text{max}}\, \text{sgn}(a_{\mathfrak{r}}))- \nu(t-t_{\mathfrak{r}})b_{\mathfrak{r}}} e^{-iT^{-1}_{\text{max}}t_{\mathfrak{r}} \text{sgn}(a_{\mathfrak{r}})} e^{\sum_{\mathfrak{n}\in T_{\text{in},1}\cup T_{\text{in},2}} it_{\mathfrak{n}} a_{\mathfrak{n}} - \nu(t_{\widehat{\mathfrak{n}}}-t_{\mathfrak{n}})b_{\mathfrak{n}}} \left(dt_{\mathfrak{r}}\prod_{j=1}^2\prod_{\mathfrak{n}\in T_{\text{in},j}}dt_{\mathfrak{n}} \right) && \end{flalign*} \begin{flalign*} \hspace{1.3cm} -&\frac{1}{ia_{\mathfrak{r}}+iT^{-1}_{\text{max}} \text{sgn}(a_{\mathfrak{r}})+\nu b_{\mathfrak{r}} }\int_{\cup_{\mathfrak{n}\in T_{\text{in}}} A_{\mathfrak{n}}}e^{it_{\mathfrak{r}}(a_{\mathfrak{r}}+T^{-1}_{\text{max}}\, \text{sgn}(a_{\mathfrak{r}}))- 
\nu(t-t_{\mathfrak{r}})b_{\mathfrak{r}}} && \\ &\qquad\qquad\qquad\qquad\qquad \frac{d}{dt_{\mathfrak{r}}}(e^{-iT^{-1}_{\text{max}}t_{\mathfrak{r}} \text{sgn}(a_{\mathfrak{r}})}) e^{\sum_{\mathfrak{n}\in T_{\text{in},1}\cup T_{\text{in},2}} it_{\mathfrak{n}} a_{\mathfrak{n}} - \nu(t_{\widehat{\mathfrak{n}}}-t_{\mathfrak{n}})b_{\mathfrak{n}}} \left(dt_{\mathfrak{r}}\prod_{j=1}^2\prod_{\mathfrak{n}\in T_{\text{in},j}}dt_{\mathfrak{n}} \right) && \end{flalign*} \begin{flalign*} \hspace{1.3cm} = \frac{1}{ia_{\mathfrak{r}}+iT^{-1}_{\text{max}} \text{sgn}(a_{\mathfrak{r}})+\nu b_{\mathfrak{r}} }(F_{I}-F_{T^{(1)}}-F_{T^{(2)}}-F_{II}) && \end{flalign*} Here $T^{(j)}$, $j=1,2$, are the trees obtained by deleting the root $\mathfrak{r}$, adding an edge connecting $\mathfrak{n}_j$ with another node and defining $\mathfrak{n}_j$ to be the new root. For $T^{(j)}$, we can define the term $F_{T^{(j)}}$ by \eqref{eq.defF_T}. It can be shown that $F_{T^{(j)}}$ defined in this way is the same as the $\int_{\cup_{\mathfrak{n}\in T_{\text{in}}} A_{\mathfrak{n}},\ t_{\mathfrak{r}}=t_{\mathfrak{n}_j}}$ term in the second equality of \eqref{eq.lemboundcoefexpand}, so the last equality of \eqref{eq.lemboundcoefexpand} holds. $F_{I}$ is the $\int_{\cup_{\mathfrak{n}\in T_{\text{in}}} A_{\mathfrak{n}},\ t_{\mathfrak{r}}=t}$ term and $F_{II}$ is the last term, containing $\frac{d}{dt_{\mathfrak{r}}}$. We can apply the induction assumption to $F_{T^{(j)}}$ and show that $\frac{1}{ia_{\mathfrak{r}}+iT^{-1}_{\text{max}} \text{sgn}(a_{\mathfrak{r}})+\nu b_{\mathfrak{r}} } F_{T^{(j)}}$ can be bounded by the right hand side of \eqref{eq.boundcoef'}. A direct calculation gives that \begin{equation} F_{I}(t)=e^{it a_{\mathfrak{r}} } F_{T_1}(t)F_{T_2}(t). \end{equation} Then the induction assumption implies that $\frac{1}{ia_{\mathfrak{r}}+iT^{-1}_{\text{max}} \text{sgn}(a_{\mathfrak{r}})+\nu b_{\mathfrak{r}} } F_{I}$ can be bounded by the right hand side of \eqref{eq.boundcoef'}.
Another direct calculation gives that \begin{equation} F_{II}(t)=\int^t_0 e^{it_{\mathfrak{r}}(a_{\mathfrak{r}}+T^{-1}_{\text{max}}\, \text{sgn}(a_{\mathfrak{r}}))- \nu(t-t_{\mathfrak{r}})b_{\mathfrak{r}}} \frac{d}{dt_{\mathfrak{r}}}(e^{-iT^{-1}_{\text{max}}t_{\mathfrak{r}} \text{sgn}(a_{\mathfrak{r}})}) F_{T_1}(t_{\mathfrak{r}})F_{T_2}(t_{\mathfrak{r}}) dt_{\mathfrak{r}}. \end{equation} Applying the induction assumption, we obtain \begin{equation} \begin{split} &\left| \frac{1}{ia_{\mathfrak{r}}+iT^{-1}_{\text{max}} \text{sgn}(a_{\mathfrak{r}})+\nu b_{\mathfrak{r}} } F_{II}(t)\right| \\ \le& \frac{1}{|q_{\mathfrak{r}}|+T^{-1}_{\text{max}}}\prod_{j=1}^2\left(\sum_{\{d_{\mathfrak{n}}\}_{\mathfrak{n}\in T_{\text{in},j}}\in\{0,1\}^{l(T_j)}}\prod_{\mathfrak{n}\in T_{\text{in},j}}\frac{1}{|q_{\mathfrak{n}}|+T^{-1}_{\text{max}}}\right) \\ \le& \sum_{\{d_{\mathfrak{n}}\}_{\mathfrak{n}\in T_{\text{in}}}\in\{0,1\}^{l(T)}}\prod_{\mathfrak{n}\in T_{\text{in}}}\frac{1}{|q_{\mathfrak{n}}|+T^{-1}_{\text{max}}}. \end{split} \end{equation} Combining the bounds of $F_{I}$, $F_{T^{(1)}}$, $F_{T^{(2)}}$, $F_{II}$, we conclude that $F_T$ can be bounded by the right hand side of \eqref{eq.boundcoef'}, which completes the proof of Lemma \ref{lem.boundcoef'}. \end{proof} A straightforward application of the above lemma gives the following upper bound for the coefficients $H^T_{k_1\cdots k_{l+1}}$. \begin{lem}\label{lem.boundcoef} We have the following upper bound for $H^T_{k_1\cdots k_{l+1}}$, \begin{equation}\label{eq.boundcoef} |H^T_{k_1\cdots k_{l+1}}|\lesssim \sum_{\{d_{\mathfrak{n}}\}_{\mathfrak{n}\in T_{\text{in}}}\in\{0,1\}^{l(T)}}\prod_{\mathfrak{n}\in T_{\text{in}}}\frac{1}{|q_{\mathfrak{n}}|+T^{-1}_{\text{max}}}\ \prod_{\mathfrak{e}\in T_{\text{in}}}|k_{\mathfrak{e},x}|\ \delta_{\cap_{\mathfrak{n}\in T_{\text{in}}} \{S_{\mathfrak{n}}=0\}}. \end{equation} Fix a sequence $\{d_{\mathfrak{n}}\}_{\mathfrak{n}\in T_{\text{in}}}$ whose elements $d_{\mathfrak{n}}$ take boolean values in $\{0,1\}$.
We define the two sequences $\{q_{\mathfrak{n}}\}_{\mathfrak{n}\in T_{\text{in}}}$, $\{r_{\mathfrak{n}}\}_{\mathfrak{n}\in T_{\text{in}}}$ by the following recursive formulas \begin{equation}\label{eq.q_n} q_{\mathfrak{n}}= \begin{cases} \Omega_{\mathfrak{r}}, \qquad\qquad \textit{ if $\mathfrak{n}=$ the root $\mathfrak{r}$.} \\ \Omega_{\mathfrak{n}}+d_{\mathfrak{n}}q_{\mathfrak{n}'},\ \ \textit{ if $\mathfrak{n}\neq\mathfrak{r}$ and $\mathfrak{n}'$ is the parent of $\mathfrak{n}$.} \end{cases} \end{equation} \begin{equation}\label{eq.r_n} r_{\mathfrak{n}}= \begin{cases} |k_{\mathfrak{r}}|^2, \qquad\qquad \textit{ if $\mathfrak{n}=$ the root $\mathfrak{r}$.} \\ |k_{\mathfrak{n}}|^2+d_{\mathfrak{n}}r_{\mathfrak{n}'},\ \ \textit{ if $\mathfrak{n}\neq\mathfrak{r}$ and $\mathfrak{n}'$ is the parent of $\mathfrak{n}$.} \end{cases} \end{equation} \end{lem} \begin{proof} This is a direct corollary of Lemma \ref{lem.boundcoef'} if we take $a_{\mathfrak{n}}=\Omega_{\mathfrak{n}}$, $b_{\mathfrak{n}}=|k_{\mathfrak{e}}|^2$. \end{proof} Lemma \ref{lem.boundcoef} suggests that the coefficients are small when $|q_{\mathfrak{n}}|\gg T^{-1}_{\text{max}}$. Therefore, in order to bound $\mathcal{J}_{T,k}$, we should count the lattice points in the region $|q_{\mathfrak{n}}|\lesssim T^{-1}_{\text{max}}$, \begin{equation}\label{eq.diophantineeq''} \{k_{\mathfrak{e}}\in \mathbb{Z}^d_L,\ |k_{\mathfrak{e}}|\lesssim 1,\ \forall \mathfrak{e}\in T: |q_{\mathfrak{n}}|\lesssim T^{-1}_{\text{max}},\ S_{\mathfrak{n}}=0,\ \forall \mathfrak{n}\in T.\ k_{\mathfrak{l}}=k\} \end{equation} By solving \eqref{eq.q_n}, $\Omega_{\mathfrak{n}}$ is a linear combination of the $q_{\mathfrak{n}}$, so there exist constants $c_{\mathfrak{n},\mathfrak{n}'}$ such that $\Omega_{\mathfrak{n}}=\sum_{\mathfrak{n}'}c_{\mathfrak{n},\mathfrak{n}'}q_{\mathfrak{n}'}$.
Therefore, $|q_{\mathfrak{n}}|\lesssim T^{-1}_{\text{max}}$ implies that $|\Omega_{\mathfrak{n}}|\le\sum_{\mathfrak{n}'}|c_{\mathfrak{n},\mathfrak{n}'}q_{\mathfrak{n}'}|\lesssim T^{-1}_{\text{max}}$, and $|\Omega_{\mathfrak{n}}|\lesssim T^{-1}_{\text{max}}$ in turn implies that \eqref{eq.diophantineeq''} is a subset of \begin{equation}\label{eq.diophantineeq'} \{k_{\mathfrak{e}}\in \mathbb{Z}^d_L,\ |k_{\mathfrak{e}}|\lesssim 1,\ \forall \mathfrak{e}\in T: |\Omega_{\mathfrak{n}}|\lesssim T^{-1}_{\text{max}},\ S_{\mathfrak{n}}=0,\ \forall \mathfrak{n}\in T. \ k_{\mathfrak{l}}=k\}. \end{equation} To bound the number of elements of \eqref{eq.diophantineeq''}, it thus suffices to do the same for \eqref{eq.diophantineeq'}. The equations in \eqref{eq.diophantineeq'} can be read off from the tree diagram $T$. As in Figure \ref{fig.equations}, each edge corresponds to a variable $k_{\mathfrak{e}}$. The leg $\mathfrak{l}$ corresponds to the equation $k_{\mathfrak{l}}=k$. Each node $\mathfrak{n}$ is connected with three edges $\mathfrak{e}_1$, $\mathfrak{e}_2$, $\mathfrak{e}$ whose corresponding variables $k_{\mathfrak{e}_1}$, $k_{\mathfrak{e}_2}$, $k_{\mathfrak{e}}$ satisfy the momentum conservation equation \begin{equation} \iota_{\mathfrak{e}_1}k_{\mathfrak{e}_1}+\iota_{\mathfrak{e}_2}k_{\mathfrak{e}_2}+\iota_{\mathfrak{e}}k_{\mathfrak{e}}=0 \end{equation} and the energy conservation equation (if the node is decorated by $\bullet$) \begin{equation} \iota_{\mathfrak{e}_1}\Lambda_{k_{\mathfrak{e}_1}}+\iota_{\mathfrak{e}_2}\Lambda_{k_{\mathfrak{e}_2}}+\iota_{\mathfrak{e}}\Lambda_{k_{\mathfrak{e}}} = O(T^{-1}_{\text{max}}).
\end{equation} \begin{figure}[H] \centering \scalebox{0.5}{ \begin{tikzpicture}[level distance=80pt, sibling distance=150pt] \node[scale=2.0] at (14.3,0) {$k_{\mathfrak{e}}$}; \node[scale=2.0] at (14.5,-1.3) {$\mathfrak{n}$}; \node[scale=2.0] at (13,-2.4) {$k_{\mathfrak{e}_1}$}; \node[scale=2.0] at (17,-2.4) {$k_{\mathfrak{e}_2}$}; \node[] at (15,1.5) (1) {} child {node[fillcirc] (2) {} child {node[fillstar] (3) {} } child {node[fillstar] (6) {}} }; \draw[-{Stealth[length=5mm, width=3mm]}] (1) -- (2); \draw[-{Stealth[length=5mm, width=3mm]}] (2) -- (3); \draw[-{Stealth[length=5mm, width=3mm]}] (2) -- (6); \node[scale=2.0] at (15,-5.5) {$\iota_{\mathfrak{e}_1}=\iota_{\mathfrak{e}_2}=1, \iota_{\mathfrak{e}}=-1$}; \node[scale=2.0] at (15,-6.5) {$k_{\mathfrak{e}_1}+k_{\mathfrak{e}_2}-k_{\mathfrak{e}}=0$}; \node[scale=2.0] at (15,-7.5) {$\Lambda_{k_{\mathfrak{e}_1}}+\Lambda_{k_{\mathfrak{e}_2}}-\Lambda_{k_{\mathfrak{e}}}=O(T^{-1}_{\text{max}})$}; \end{tikzpicture} } \caption{Equations of a node $\mathfrak{n}$} \label{fig.equations} \end{figure} The goal of the next two sections is to count the number of solutions of a modified version of the above system \eqref{eq.diophantineeq'}. \subsection{Couples and Wick theorem} In this section, we calculate $\mathbb{E}|\mathcal{J}_{T,k}|^2$ using the Wick theorem. We also introduce another type of diagram, the couple diagram, to represent the result. \label{sec.coupwick} By the upper bound in the last section, the coefficients $H^T_{k_1\cdots k_{l+1}}$ concentrate near the surface $q_{\mathfrak{n}}=0$, $\forall \mathfrak{n}$. However, to get an upper bound of $\mathcal{J}_{T,k}$, we need an upper bound of its variance $\mathbb{E}|\mathcal{J}_{T,k}|^2$. The coefficients of $\mathbb{E}|\mathcal{J}_{T,k}|^2$ also concentrate near a surface whose expression is similar to \eqref{eq.diophantineeq''}. Let us derive the expression of the coefficients of $\mathbb{E}|\mathcal{J}_{T,k}|^2$ and its concentration surface.
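Before turning to couples, the single-node system of Figure \ref{fig.equations} can be explored concretely by brute force. The following sketch is purely illustrative and not part of the argument: the dispersion $\Lambda_p=|p|^2$, the function name \texttt{count\_single\_node}, and all parameter values are assumptions made only for this demo. Momenta $k=m/L\in\mathbb{Z}^d_L$ are stored as integer vectors $m$, so that the momentum constraint $S_{\mathfrak{n}}=0$ is exact.

```python
import itertools

def count_single_node(m, L=6, Tmax=50.0, dim=2):
    """Brute-force count of k1, k2 in Z^d_L with |k_i| <= 1 solving the
    single-node system of Figure fig.equations:
        S     = k1 + k2 - k = 0                       (momentum conservation)
        Omega = Lam(k1) + Lam(k2) - Lam(k) = O(1/Tmax) (energy conservation)
    with the assumed, illustrative dispersion Lam(p) = |p|^2.
    Momenta are integer vectors: k = m / L."""
    def norm2(v):                        # |m|^2 in lattice units
        return sum(x * x for x in v)
    rng = range(-L, L + 1)               # coordinate-wise |k1| <= 1
    count = 0
    for m1 in itertools.product(rng, repeat=dim):
        # momentum conservation S = 0 forces k2 = k - k1
        m2 = tuple(mc - m1c for mc, m1c in zip(m, m1))
        if any(abs(c) > L for c in m2):  # keep |k2| <= 1 as well
            continue
        omega = (norm2(m1) + norm2(m2) - norm2(m)) / L**2
        if abs(omega) <= 1.0 / Tmax:     # near-resonance window
            count += 1
    return count
```

With $k=0$ only the trivial solution $k_1=k_2=0$ survives a tight energy window, while a very loose window admits every lattice point in the box; for $k\ne 0$ the count is the number of lattice points near the resonance circle $(a-3)^2+b^2=9$ (in lattice units, for $m=(6,0)$).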
By Lemma \ref{lem.treeterms}, we know that $\mathcal{J}_{T,k}$ is a polynomial of the $\xi$, which are proportional to i.i.d. Gaussians. Therefore, \begin{equation}\label{eq.termexp1} \begin{split} \mathbb{E}|\mathcal{J}_{T,k}|^2=&\mathbb{E}(\mathcal{J}_{T,k}\overline{\mathcal{J}_{T,k}})=\left(\frac{\lambda}{L^{d}}\right)^{2l(T)} \sum_{k_1,\, k_2,\, \cdots,\, k_{l(T)+1}}\sum_{k'_1,\, k'_2,\, \cdots,\, k'_{l(T)+1}} \\[0.5em] & H^T_{k_1\cdots k_{l(T)+1}} \overline{H^{T}_{k'_1\cdots k'_{l(T)+1}}} \mathbb{E}\Big(\xi_{k_1}\xi_{k_2}\cdots\xi_{k_{l(T)+1}}\xi_{k'_1}\xi_{k'_2}\cdots\xi_{k'_{l(T)+1}}\Big) \end{split} \end{equation} We just need to calculate \begin{equation}\label{eq.expectation''} \mathbb{E}\Big(\xi_{k_1}\xi_{k_2}\cdots\xi_{k_{l(T)+1}} \xi_{k'_1}\xi_{k'_2}\cdots\xi_{k'_{l(T)+1}}\Big). \end{equation} Notice that $\xi_k=\sqrt{n_{\textrm{in}}(k)} \, \eta_{k}(\omega)$ and the $\eta_{k}$ are i.i.d. Gaussians. We can apply the Wick theorem to calculate the above expectations. To introduce the Wick theorem, we need the following definition. \begin{defn} \begin{enumerate} \item \textbf{Pairing:} Suppose that we have a balanced set $A=\{a_1,\cdots,a_{2m}\}$. A \underline{pairing} is a partition $A=\{a_{i_1},a_{i_2}\}\cup\cdots\cup \{a_{i_{2m-1}},a_{i_{2m}}\}$ into $m$ subsets, each of which has exactly two elements of different sign. Given a pairing $p$, elements $a_{i_{k}}$, $a_{i_{k'}}$ in the same subset of $p$ are said to be \underline{paired} with each other, which is denoted by $a_{i_{k}}\sim_{p} a_{i_{k'}}$. \item \textbf{$\mathcal{P}(A)$:} Denote by $\mathcal{P}(A)$ the set of all pairings of $A$. \end{enumerate} \end{defn} \begin{lem}[Wick theorem]\label{th.wick} Let $\{\eta_k\}_{k\in\mathbb{Z}^d_L}$ be i.i.d. complex Gaussian random variables with reflection symmetry (i.e. $\eta_{k}=\bar{\eta}_{-k}$).
Let $\mathcal{P}$ be the set of all pairings of $\{k_1,k_2,\cdots,k_{2m}\}$, then \begin{equation} \mathbb{E}(\eta_{k_1}\cdots \eta_{k_{2m}})=\sum_{p\in \mathcal{P}} \delta_{p}(k_1,\cdots,k_{2m}), \end{equation} where \begin{equation}\label{eq.deltapairing} \delta_{p}=\begin{cases} 1\qquad \textit{if $k_{i}=-k_{j}$ for all $k_{i}\sim_{p}k_{j}$,} \\ 0\qquad \textit{otherwise.} \end{cases} \end{equation} \end{lem} \begin{proof} By Isserlis' theorem, for zero-mean jointly Gaussian random variables $X_1$, $X_2$, $\cdots$, $X_n$, we have \begin{equation} \mathbb{E} [X_1 X_2 \cdots X_n] = \sum_{p\in\mathcal{P}} \prod_{i\sim_{p}j} \mathbb{E} [X_i X_j] \end{equation} Here $\mathcal{P}$ is the set of pairings of $\{1,2,\cdots,n\}$. Since $\mathbb{E} [\eta_{k_i}\eta_{k_j}]=\delta_{k_i=-k_j}$, taking $X_1=\eta_{k_1}$, $\cdots$, $X_{2m}=\eta_{k_{2m}}$, we can check that $\prod_{i\sim_{p}j} \mathbb{E} [X_i X_j]=\delta_{p}(k_1,\cdots,k_{2m})$. This finishes the proof of the Wick theorem. \end{proof} Applying the Wick theorem to \eqref{eq.termexp1}, we get \begin{equation}\label{eq.termexp'} \begin{split} &\mathbb{E}|\mathcal{J}_{T,k}|^2=\left(\frac{\lambda}{L^{d}}\right)^{2l(T)} \sum_{p\in \mathcal{P}(\{k_1,\cdots, k_{l(T)+1}, k'_1,\cdots, k'_{l(T)+1}\})} \\[0.5em] & \underbrace{\sum_{k_1,\, k_2,\, \cdots,\, k_{l(T)+1}}\sum_{k'_1,\, k'_2,\, \cdots,\, k'_{l(T)+1}} H^T_{k_1\cdots k_{l(T)+1}} \overline{H^{T}_{k'_1\cdots k'_{l(T)+1}}} \delta_{p}(k_1,\cdots, k_{l(T)+1}, k'_1,\cdots, k'_{l(T)+1})\sqrt{n_{\textrm{in}}(k_1)}\cdots}_{Term(T, p)_k}. \end{split} \end{equation} We see that the correlation of two tree terms is a sum of smaller expressions $Term(T, p)$.
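The pairing sum in Lemma \ref{th.wick} can be checked by direct enumeration. The sketch below is illustrative only (the helper names \texttt{pairings} and \texttt{wick\_moment} are our own, not from the paper): it sums $\prod_{i\sim_p j}\mathbb{E}[X_iX_j]$ over all pairings, and for a covariance of the form $\mathbb{E}[\eta_{k_i}\eta_{k_j}]=\delta_{k_i=-k_j}$ this reduces to counting the admissible pairings $\delta_p$.

```python
from math import prod

def pairings(idx):
    """Enumerate all pairings of an even-length list of indices.
    Pair the first index with each possible partner, then recurse."""
    if not idx:
        yield []
        return
    first, rest = idx[0], idx[1:]
    for j, partner in enumerate(rest):
        remaining = rest[:j] + rest[j + 1:]
        for p in pairings(remaining):
            yield [(first, partner)] + p

def wick_moment(cov):
    """E[X_1 ... X_n] for zero-mean jointly Gaussian X_i with
    cov[i][j] = E[X_i X_j], computed via the Wick/Isserlis pairing sum."""
    n = len(cov)
    if n % 2:                 # odd moments of a zero-mean Gaussian vanish
        return 0.0
    return sum(prod(cov[i][j] for i, j in p)
               for p in pairings(list(range(n))))
```

Taking all covariances equal to $1$ (a single standard Gaussian repeated) recovers the moments $(2m-1)!!$, i.e. $3$ for $m=2$ and $15$ for $m=3$; taking $\mathrm{cov}[i][j]=\mathbf{1}_{k_i=-k_j}$ returns exactly the number of pairings with $\delta_p=1$, as in the lemma.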
By \eqref{eq.diophantineeq'}, the coefficients $H^T_{k_1\cdots k_{l(T)+1}} H^{T}_{k'_1\cdots k'_{l(T)+1}}$ of $Term(T, p)$ concentrate near the subset \begin{equation}\label{eq.diophantineequnpaired} \{k_{\mathfrak{e}}, k_{\mathfrak{e}'}\in \mathbb{Z}^d_L,\ |k_{\mathfrak{e}}|, |k_{\mathfrak{e}'}|\lesssim 1,\ \forall \mathfrak{e},\mathfrak{e}'\in T:|\Omega_{\mathfrak{n}}|,|\Omega_{\mathfrak{n}'}|\lesssim T^{-1}_{\text{max}},\ S_{\mathfrak{n}}=S_{\mathfrak{n}'}=0\ \forall \mathfrak{n},\mathfrak{n}'\in T.\ k_{\mathfrak{l}}=-k_{\mathfrak{l}}'=k\}. \end{equation} The pairing $p$ in the Wick theorem introduces new equations $k_{i}=-k'_{j}$ (defined in \eqref{eq.deltapairing}) and the coefficients $H^T_{k_1\cdots k_{l(T)+1}} H^{T}_{k'_1\cdots k'_{l(T)+1}} \delta_{p}$ concentrate near the subset \begin{equation}\label{eq.diophantineeqpaired} \begin{split} \{k_{\mathfrak{e}}, k_{\mathfrak{e}'}\in \mathbb{Z}^d_L,\ &|k_{\mathfrak{e}}|, |k_{\mathfrak{e}'}|\lesssim 1,\ \forall \mathfrak{e},\mathfrak{e}'\in T: |\Omega_{\mathfrak{n}}|,|\Omega_{\mathfrak{n}'}|\lesssim T^{-1}_{\text{max}},\ S_{\mathfrak{n}}=S_{\mathfrak{n}'}=0\ \forall \mathfrak{n},\mathfrak{n}'\in T. \\ k_{\mathfrak{l}}=-k_{\mathfrak{l}}'=k.\ &\textit{$k_{i}=-k'_{j}$ (and $k_{i}=-k_{j}$, $k'_{i}=-k'_{j}$) for all $k_{i}\sim_{p}k'_{j}$ (and $k_{i}\sim_{p}k_{j}$, $k'_{i}\sim_{p}k'_{j}$)}\}. \end{split} \end{equation} As in the case of \eqref{eq.diophantineeq'}, there is a graphical representation of \eqref{eq.diophantineeqpaired}. To explain this, we need the concept of couples. \begin{defn}[Construction of couples]\label{def.conple} Given two trees $T$ and $T'$, we flip the orientation of all edges in $T'$ (as in the two left trees in Figure \ref{fig.treepairing}). We also label their leaves by $1, 2, \cdots, l(T)+1$ and $1, 2, \cdots, l(T')+1$ so that the corresponding variables of these leaves are $k_1, k_2, \cdots, k_{l(T)+1}$ and $k'_1, k'_2, \cdots, k'_{l(T')+1}$.
Assume that we have a pairing $p$ of the set $\{k_1, k_2, \cdots, k_{l(T)+1}, k'_1, k'_2, \cdots, k'_{l(T')+1}\}$; then this pairing induces a pairing between leaves (if $k_i\sim_p k_j$ then define $\textit{the $i$-th leaf}\sim_p \textit{the $j$-th leaf}$). Given this pairing of leaves, we define the following procedure, which glues the two trees $T$ and $T'$ into a couple $\mathcal{C}(T,T',p)$. Some examples of pairings can be found in Figure \ref{fig.treepairing}. \begin{figure}[H] \centering \scalebox{0.3}{ \begin{tikzpicture}[level distance=80pt, sibling distance=100pt] \node[] at (15,1.5) (1) {} child {node[fillcirc] (2) {} child {node[fillcirc] (3) {} child {node[fillstar] (4) {}} child {node[fillstar] (5) {}} } child {node[fillstar] (6) {}} }; \draw[-{Stealth[length=5mm, width=3mm]}] (1) -- (2); \draw[-{Stealth[length=5mm, width=3mm]}] (2) -- (3); \draw[-{Stealth[length=5mm, width=3mm]}] (2) -- (6); \draw[-{Stealth[length=5mm, width=3mm]}] (3) -- (4); \draw[-{Stealth[length=5mm, width=3mm]}] (3) -- (5); \node[] at (22,1.5) (11) {} child {node[fillcirc] (12) {} child {node[fillcirc] (13) {} child {node[fillstar] (14) {}} child {node[fillstar] (15) {}} } child {node[fillstar] (16) {}} }; \draw[{Stealth[length=5mm, width=3mm]}-] (11) -- (12); \draw[{Stealth[length=5mm, width=3mm]}-] (12) -- (13); \draw[{Stealth[length=5mm, width=3mm]}-] (12) -- (16); \draw[{Stealth[length=5mm, width=3mm]}-] (13) -- (14); \draw[{Stealth[length=5mm, width=3mm]}-] (13) -- (15); \draw[bend right =20, dashed] (4) edge (14); \draw[bend right =20, dashed] (5) edge (15); \draw[bend right =20, dashed] (6) edge (16); \node[] at (33,1.5) (new1) {} child {node[fillcirc] (new2) {} child {node[fillcirc] (new3) {} child {node[fillstar] (new4) {}} child {node[fillstar] (new5) {}} } child {node[fillstar] (new6) {}} }; \draw[-{Stealth[length=5mm, width=3mm]}] (new1) -- (new2); \draw[-{Stealth[length=5mm, width=3mm]}] (new2) -- (new3); \draw[-{Stealth[length=5mm, width=3mm]}] (new2) -- (new6);
\draw[-{Stealth[length=5mm, width=3mm]}] (new3) -- (new4); \draw[-{Stealth[length=5mm, width=3mm]}] (new3) -- (new5); \node[] at (40,1.5) (new11) {} child {node[fillcirc] (new12) {} child {node[fillcirc] (new13) {} child {node[fillstar] (new14) {}} child {node[fillstar] (new15) {}} } child {node[fillstar] (new16) {}} }; \draw[{Stealth[length=5mm, width=3mm]}-] (new11) -- (new12); \draw[{Stealth[length=5mm, width=3mm]}-] (new12) -- (new13); \draw[{Stealth[length=5mm, width=3mm]}-] (new12) -- (new16); \draw[{Stealth[length=5mm, width=3mm]}-] (new13) -- (new14); \draw[{Stealth[length=5mm, width=3mm]}-] (new13) -- (new15); \draw[bend right =90, dashed] (new4) edge (new6); \draw[bend right =20, dashed] (new5) edge (new15); \draw[bend right =90, dashed] (new14) edge (new16); \end{tikzpicture} } \caption{Example of pairings between trees.} \label{fig.treepairing} \end{figure} \begin{enumerate} \item \textbf{Merging edges connected to leaves:} Given two edges with opposite orientation connected to two paired leaves, these two edges can be \underline{merged} into one edge as in Figure \ref{fig.pairingleaves}. 
\begin{figure}[H] \centering \scalebox{0.5}{ \begin{tikzpicture}[level distance=80pt, sibling distance=100pt] \node[draw, circle, minimum size=1cm, scale=2] at (0,0) (1) {$T_1$} [grow =300] child {node[fillstar] (2) {}}; \node[draw, circle, minimum size=1cm, scale=2] at (5,0) (3) {$T_2$} [grow =240] child {node[fillstar] (4) {}}; \draw[-{Stealth[length=5mm, width=3mm]}] (1) -- (2); \draw[{Stealth[length=5mm, width=3mm]}-] (3) -- (4); \draw[bend right =40, dashed] (2) edge (4); \node[draw, single arrow, minimum height=33mm, minimum width=8mm, single arrow head extend=2mm, anchor=west, rotate=0] at (7,-1.5) {}; \node[draw, circle, minimum size=1cm, scale=2] at (13,-1.5) (5) {$T_1$}; \node[draw, circle, minimum size=1cm, scale=2] at (18,-1.5) (6) {$T_2$}; \draw[-{Stealth[length=5mm, width=3mm]}] (5) -- (6); \end{tikzpicture} } \caption{Pairing and merging of two edges} \label{fig.pairingleaves} \end{figure} We know that two edges connected to leaves correspond to two indices $k_i$, $k_j$. Merging two such edges is the graphical interpretation of the equation $k_i=-k_j$. \item \textbf{Pairing of trees and couples:} Given a pairing $p$ of the set of leaves in $T$, $T'$, we merge all edges paired by $p$ as in Figure \ref{fig.couple} and the resulting combinatorial structure is called a \underline{couple}.
\begin{figure}[H] \centering \scalebox{0.3}{ \begin{tikzpicture}[level distance=80pt, sibling distance=100pt] \node[] at (0,0) (1) {} child {node[fillcirc] (2) {} child {node[fillstar] (3) {}} child {node[fillstar] (4) {}} }; \draw[-{Stealth[length=5mm, width=3mm]}] (1) -- (2); \draw[-{Stealth[length=5mm, width=3mm]}] (2) -- (3); \draw[-{Stealth[length=5mm, width=3mm]}] (2) -- (4); \node[] at (0,-14.5) (11) {} [grow =90] child {node[fillcirc] (12) {} child {node[fillstar] (13) {}} child {node[fillstar] (14) {}} }; \draw[{Stealth[length=5mm, width=3mm]}-] (11) -- (12); \draw[{Stealth[length=5mm, width=3mm]}-] (12) -- (13); \draw[{Stealth[length=5mm, width=3mm]}-] (12) -- (14); \draw[dashed] (3) edge (14); \draw[dashed] (4) edge (13); \node[draw, single arrow, minimum height=66mm, minimum width=16mm, single arrow head extend=4mm, anchor=west, rotate=0] at (7,-7.5) {}; \node[] at (20,0) (21) {}; \node[fillcirc] at (20,-4) (22) {}; \node[fillcirc] at (20,-11) (23) {}; \node[] (24) at (20,-15) {}; \draw[-{Stealth[length=5mm, width=3mm]}] (21) edge (22); \draw[-{Stealth[length=5mm, width=3mm]}, bend right =40] (22) edge (23); \draw[-{Stealth[length=5mm, width=3mm]}, bend left =40] (22) edge (23); \draw[-{Stealth[length=5mm, width=3mm]}] (23) edge (24); \end{tikzpicture} } \caption{The construction of a couple} \label{fig.couple} \end{figure} We know that each edge connected to a leaf corresponds to a variable $k_i$. A pairing $p$ of $\{k_1,k_2,\cdots,k_{2m}\}$ in \eqref{eq.diophantineeqpaired} induces a pairing of the edges connected to leaves. Merging paired edges corresponds to the equations $k_{i}=-k'_{j}$ for all $k_{i}\sim_{p}k'_{j}$ in \eqref{eq.diophantineeqpaired}. \end{enumerate} \end{defn} The following proposition introduces the graphical representation of \eqref{eq.diophantineeqpaired}. \begin{prop}\label{prop.couple} \eqref{eq.diophantineeqpaired} can be read from a couple diagram $\mathcal{C}(T,T,p)$. Each edge corresponds to a variable $k_{\mathfrak{e}}$.
The leg $\mathfrak{l}$ corresponds to the equation $k_{\mathfrak{l}}=k$. Each node corresponds to a momentum conservation equation \begin{equation} \iota_{\mathfrak{e}_1}k_{\mathfrak{e}_1}+\iota_{\mathfrak{e}_2}k_{\mathfrak{e}_2}+\iota_{\mathfrak{e}}k_{\mathfrak{e}}=0, \end{equation} and an energy conservation equation \begin{equation} \iota_{\mathfrak{e}_1}\Lambda_{k_{\mathfrak{e}_1}}+\iota_{\mathfrak{e}_2}\Lambda_{k_{\mathfrak{e}_2}}+\iota_{\mathfrak{e}}\Lambda_{k_{\mathfrak{e}}} = O(T^{-1}_{\text{max}}). \end{equation} \end{prop} \begin{rem} In a couple diagram, we only have nodes decorated by $\bullet$. Nodes decorated by $\star$ have been removed in steps (1), (2) of Definition \ref{def.conple}. \end{rem} \begin{rem} Through the process of (1), (2) in Definition \ref{def.conple}, a couple diagram automatically encodes the equations $k_{i}=-k'_{j}$ for all $k_{i}\sim_{p}k'_{j}$. Therefore, they do not appear in Proposition \ref{prop.couple}. \end{rem} \begin{proof} This directly follows from the definition of couples. \end{proof} The calculations of this section are summarized in the following proposition. \begin{prop}\label{prop.termcouple} (1) Define $Term(T,p)$ in the same way as in \eqref{eq.termexp'}, \begin{equation}\label{eq.termTp} \begin{split} Term(T, p)_k=&\sum_{k_1,\, k_2,\, \cdots,\, k_{l(T)+1}}\sum_{k'_1,\, k'_2,\, \cdots,\, k'_{l(T)+1}} \\ &H^T_{k_1\cdots k_{l(T)+1}} \overline{H^{T}_{k'_1\cdots k'_{l(T)+1}}} \delta_{p}(k_1,\cdots, k_{l(T)+1}, k'_1,\cdots, k'_{l(T)+1})\sqrt{n_{\textrm{in}}(k_1)}\cdots\sqrt{n_{\textrm{in}}(k'_1)}\cdots \end{split} \end{equation} Then $\mathbb{E}|\mathcal{J}_{T,k}|^2$ is the sum of $Term(T,p)_k$ over all $p\in \mathcal{P}$ (in \eqref{eq.termexp'} the sum is over the set $\mathcal{P}$ of all possible pairings), \begin{equation}\label{eq.termexp} \begin{split} \mathbb{E}|\mathcal{J}_{T,k}|^2=\left(\frac{\lambda}{L^{d}}\right)^{2l(T)} \sum_{p\in \mathcal{P}(\{k_1,\cdots, k_{l(T)+1}, k'_1,\cdots, k'_{l(T)+1}\})} Term(T, p).
\end{split} \end{equation} (2) $Term(T,p)$ concentrates near the subset \eqref{eq.diophantineeqpaired}, which has a simple graphical representation given by Proposition \ref{prop.couple}. \end{prop} \begin{proof} The proofs of (1), (2) are straightforward and thus omitted. \end{proof} \subsection{Counting lattice points}\label{sec.numbertheory} In this section, we apply the connection between couples and the concentration subset \eqref{eq.diophantineeqpaired} to count the number of solutions of a generalized version of \eqref{eq.diophantineeqpaired}, \begin{equation}\label{eq.diophantineeqpairedsigma} \begin{split} &\{k_{\mathfrak{e}}, k_{\mathfrak{e}'}\in \mathbb{Z}^d_L,\ |k_{\mathfrak{e}}|, |k_{\mathfrak{e}}'|\lesssim 1,\ \forall \mathfrak{e},\mathfrak{e}'\in T:\ |k_{\mathfrak{e}x}|\sim \kappa_{\mathfrak{e}}, |k'_{\mathfrak{e}x}|\sim \kappa_{\mathfrak{e}'},\ \forall \mathfrak{e},\mathfrak{e}'\in T, \\ &|\Omega_{\mathfrak{n}}-\sigma_{\mathfrak{n}}|,|\Omega'_{\mathfrak{n}}-\sigma'_{\mathfrak{n}}|\lesssim T^{-1}_{\text{max}},\ S_{\mathfrak{n}}=S_{\mathfrak{n}'}=0,\ \forall \mathfrak{n}.\ k_{\mathfrak{l}}=-k_{\mathfrak{l}}'=k. \\ &\textit{$k_{i}=-k'_{j}$ (and $k_{i}=-k_{j}$, $k'_{i}=-k'_{j}$) for all $k_{i}\sim_{p}k'_{j}$ (and $k_{i}\sim_{p}k_{j}$, $k'_{i}\sim_{p}k'_{j}$)}\}. \end{split} \end{equation} In \eqref{eq.diophantineeqpairedsigma}, $\kappa_{\mathfrak{e}}\in \{0\}\cup \mathcal{D}(\alpha,1)$, where $\mathcal{D}(\alpha,1)\coloneqq\{2^{-K_{\mathfrak{e}}}:K_{\mathfrak{e}}\in \mathbb{Z}\cap [0,\ln \alpha^{-1}]\}$. The relation $|k_{\mathfrak{e}x}|\sim \kappa_{\mathfrak{e}}$ is defined by \begin{equation}\label{eq.kappa} |k_{\mathfrak{e}x}|\sim \kappa_{\mathfrak{e}}\text{ if and only if }\left\{\begin{aligned} & \frac{1}{2}\kappa_{\mathfrak{e}}\le |k_{\mathfrak{e}x}|\le 2\kappa_{\mathfrak{e}} \qquad && \text{ if $\kappa_{\mathfrak{e}}\ne 0$} \\[1em] & |k_{\mathfrak{e}x}|\lesssim \alpha^2,\ k_{\mathfrak{e}x}\ne 0 \qquad && \text{ if $\kappa_{\mathfrak{e}}= 0$} \end{aligned} \right.
\end{equation} \eqref{eq.diophantineeqpairedsigma} is obtained by replacing $\Omega_{\mathfrak{n}}$, $\Omega'_{\mathfrak{n}}$ by $\Omega_{\mathfrak{n}}-\sigma_{\mathfrak{n}}$, $\Omega'_{\mathfrak{n}}-\sigma'_{\mathfrak{n}}$ in \eqref{eq.diophantineeqpaired} and adding the conditions $|k_{\mathfrak{e}x}|\sim \kappa_{\mathfrak{e}}$. Here $\sigma_{\mathfrak{n}}$, $\sigma'_{\mathfrak{n}}$ and $\kappa_{\mathfrak{e}}$ are some given constants. The counterpart of Proposition \ref{prop.couple} in this case is \begin{prop}\label{prop.couple'} \eqref{eq.diophantineeqpairedsigma} can be read from a couple diagram $\mathcal{C}=\mathcal{C}(T,T,p)$. Each edge corresponds to a variable $k_{\mathfrak{e}}$. The leg $\mathfrak{l}$ corresponds to the equation $k_{\mathfrak{l}}=k$. Each node corresponds to a momentum conservation equation \begin{equation}\label{eq.momentumconservationunit} \iota_{\mathfrak{e}_1}k_{\mathfrak{e}_1}+\iota_{\mathfrak{e}_2}k_{\mathfrak{e}_2}+\iota_{\mathfrak{e}}k_{\mathfrak{e}}=0, \end{equation} and an energy conservation equation \begin{equation}\label{eq.energyconservationunit} \iota_{\mathfrak{e}_1}\Lambda_{k_{\mathfrak{e}_1}}+\iota_{\mathfrak{e}_2}\Lambda_{k_{\mathfrak{e}_2}}+\iota_{\mathfrak{e}}\Lambda_{k_{\mathfrak{e}}} = \sigma_{\mathfrak{n}} + O(T^{-1}_{\text{max}}).
\end{equation} Denote the momentum and energy conservation equations by $MC_{\mathfrak{n}}$ and $EC_{\mathfrak{n}}$ respectively; then \eqref{eq.diophantineeqpairedsigma} can be rewritten as \begin{equation}\label{eq.diophantineeqpairedsigma'} \text{\eqref{eq.diophantineeqpairedsigma}}=\{k_{\mathfrak{e}}\in \mathbb{Z}^d_L,\ |k_{\mathfrak{e}}|\lesssim 1\ \forall \mathfrak{e}\in \mathcal{C}:\ |k_{\mathfrak{e}x}| \sim \kappa_{\mathfrak{e}},\ \forall \mathfrak{e}\in \mathcal{C}_{\text{norm}}.\ MC_{\mathfrak{n}},\ EC_{\mathfrak{n}},\ \forall \mathfrak{n}\in \mathcal{C}.\ k_{\mathfrak{l}} = - k_{\mathfrak{l}}'= k.\} \end{equation} \end{prop} \begin{proof} This directly follows from Proposition \ref{prop.couple}. \end{proof} To explain the counting argument in this paper, we need the following definitions related to couples. \begin{defn}\label{def.morecouple} \begin{enumerate} \item \textbf{Connected couples:} A couple $\mathcal{C}$ is a \underline{connected couple} if it is connected as a graph. \item \textbf{Equations of a couple $Eq(\mathcal{C})$:} Given a couple $\mathcal{C}$ and $k$, $\sigma_{\mathfrak{n}}$, let $Eq(\mathcal{C},\{\sigma_{\mathfrak{n}}\}_{\mathfrak{n}}, k)$ be the system of equations \eqref{eq.diophantineeqpairedsigma'} constructed in Proposition \ref{prop.couple'}. For simplicity, we also use $Eq(\mathcal{C})$ to denote these equations. For any system of equations $Eq$, let $\#(Eq)$ be its number of solutions. \item \textbf{Normal edges and leaf edges:} Recall that any couple $\mathcal{C}$ is constructed from a pairing of two trees $T$, $T'$, and therefore all edges in $\mathcal{C}$ come from $T$, $T'$. We define the edges coming from $T_{\text{in}}$, $T'_{\text{in}}$ to be \underline{normal edges} and those coming from edges connected to leaves in $T$, $T'$ to be \underline{leaf edges}. The set of all normal edges is denoted by $\mathcal{C}_{\text{norm}}$. A leg which is a normal edge is called a \underline{normal leg}.
\end{enumerate} \end{defn} The main goal of this section is to obtain an upper bound on $\#Eq(\mathcal{C})$. The main idea is to decompose a large couple $\mathcal{C}$ into smaller pieces and then prove the bound for the smaller pieces using an induction hypothesis. To explain the idea, let us first focus on an example. Let $\mathcal{C}$ be the left couple in the following picture. (The variable corresponding to each edge is labelled next to it.) \begin{figure}[H] \centering \scalebox{0.4}{ \begin{tikzpicture}[level distance=80pt, sibling distance=100pt] \node[] at (0,0) (1) {}; \node[fillcirc] at (3,0) (2) {}; \node[fillcirc] at (6,-2) (3) {}; \node[fillcirc] at (9,-2) (4) {}; \node[fillcirc] at (12,0) (5) {}; \node[] at (15,0) (6) {}; \draw[-{Stealth[length=5mm, width=3mm]}] (1) edge (2); \draw[-{Stealth[length=5mm, width=3mm]}] (2) edge (3); \draw[-{Stealth[length=5mm, width=3mm]}, bend left =40] (3) edge (4); \draw[-{Stealth[length=5mm, width=3mm]}, bend right =40] (3) edge (4); \draw[-{Stealth[length=5mm, width=3mm]}] (4) edge (5); \draw[-{Stealth[length=5mm, width=3mm]}] (5) edge (6); \draw[-{Stealth[length=5mm, width=3mm]}, bend left =40] (2) edge (5); \node[scale=2.0] at (3,-0.7) {$\mathfrak{n}_{1}$}; \node[scale=2.0] at (6,-2.7) {$\mathfrak{n}_{2}$}; \node[scale=2.0] at (9,-2.7) {$\mathfrak{n}_{3}$}; \node[scale=2.0] at (12,-0.7) {$\mathfrak{n}_{4}$}; \node[scale=2.0] at (1.5,-0.5) {$k$}; \node[scale=2.0] at (4.3,-1.4) {$a$}; \node[scale=2.0] at (7.5,-3.1) {$b$}; \node[scale=2.0] at (7.5,-1) {$c$}; \node[scale=2.0] at (10.7,-1.4) {$d$}; \node[scale=2.0] at (7.5,2.2) {$e$}; \node[scale=2.0] at (13.5,-0.5) {$-k$}; \node[draw, single arrow, minimum height=33mm, minimum width=8mm, single arrow head extend=2mm, anchor=west, rotate=0] at (16,0) {}; \node[] at (20,0) (11) {}; \node[fillcirc] at (23,0) (12) {}; \node[fillstar] at (26,-2) (13) {}; \node[fillstar] at (26,2) (14) {}; \draw[-{Stealth[length=5mm, width=3mm]}] (11) edge (12);
\draw[-{Stealth[length=5mm, width=3mm]}] (12) edge (13); \draw[-{Stealth[length=5mm, width=3mm]}] (12) edge (14); \node[scale=2.0] at (23,-0.7) {$\mathfrak{n}_{1}$}; \node[scale=2.0] at (21.5,-0.5) {$k$}; \node[scale=2.0] at (24.3,-1.4) {$a$}; \node[scale=2.0] at (24.3,1.4) {$e$}; \node[scale=2.0] at (23,-1.8) {$A$}; \node[] at (28,-2) (32) {}; \node[fillcirc] at (31,-2) (33) {}; \node[fillcirc] at (34,-2) (34) {}; \node[fillcirc] at (37,0) (35) {}; \node[] at (40,0) (36) {}; \node[] at (34,2) (37) {}; \draw[-{Stealth[length=5mm, width=3mm]}] (32) edge (33); \draw[-{Stealth[length=5mm, width=3mm]}, bend left =40] (33) edge (34); \draw[-{Stealth[length=5mm, width=3mm]}, bend right =40] (33) edge (34); \draw[-{Stealth[length=5mm, width=3mm]}] (34) edge (35); \draw[-{Stealth[length=5mm, width=3mm]}] (35) edge (36); \draw[-{Stealth[length=5mm, width=3mm]}] (37) edge (35); \node[scale=2.0] at (31,-2.7) {$\mathfrak{n}_{2}$}; \node[scale=2.0] at (34,-2.7) {$\mathfrak{n}_{3}$}; \node[scale=2.0] at (37,-0.7) {$\mathfrak{n}_{4}$}; \node[scale=2.0] at (29.5,-2.5) {$a$}; \node[scale=2.0] at (32.5,-3.1) {$b$}; \node[scale=2.0] at (32.5,-1) {$c$}; \node[scale=2.0] at (35.7,-1.4) {$d$}; \node[scale=2.0] at (35.7,1.4) {$e$}; \node[scale=2.0] at (38.5,-0.5) {$-k$}; \node[scale=2.0] at (37,-2.2) {$B_{a,e}$}; \end{tikzpicture} } \caption{An example of decomposing a couple} \label{fig.exampleofcuttingidea} \end{figure} By \eqref{eq.diophantineeqpairedsigma'}, we know that the couple $\mathcal{C}$ corresponds to the following equations. 
\begin{equation}\label{eq.cuttingexmaple} \begin{split} \{a, b, c, d, e:\ &(|a|\text{ to }|e|)\lesssim 1,\ (|a_x|\text{ to }|e_x|)\sim (\kappa_{a}\text{ to }\kappa_{e}) \\ &a+e=k,\ \Lambda(a) + \Lambda(e) - \Lambda(k) =\sigma_{1} + O(T^{-1}_{\text{max}}) \\ &a+c=b,\ \Lambda(a) + \Lambda(c) - \Lambda(b) =\sigma_{2} + O(T^{-1}_{\text{max}}) \\ &b+c=d,\ \Lambda(b) + \Lambda(c) - \Lambda(d) =\sigma_{3} + O(T^{-1}_{\text{max}}) \\ &d+e+k=0,\ \Lambda(d) + \Lambda(e) + \Lambda(k) =\sigma_{4} + O(T^{-1}_{\text{max}})\} \end{split} \end{equation} We know that \eqref{eq.cuttingexmaple} can be rewritten in the form $\bigcup_{(a,e)\in A} B_{a,e}$, where \begin{equation}\label{eq.couplequationA} A=\{a, e:\ |a|,|e|\lesssim 1,\ |a_x|\sim \kappa_{a},|e_x|\sim \kappa_{e}, a+e=k,\ \Lambda(a) + \Lambda(e) - \Lambda(k) =\sigma_{1} + O(T^{-1}_{\text{max}})\} \end{equation} \begin{equation}\label{eq.couplequationB} \begin{split} B_{a,e}=\{b, c, d:\ &|b|,|c|,|d|\lesssim 1,\ |b_x|\sim \kappa_{b},|c_x|\sim \kappa_{c},|d_x|\sim \kappa_{d} \\ &a+c=b,\ \Lambda(a) + \Lambda(c) - \Lambda(b) =\sigma_{2} + O(T^{-1}_{\text{max}}) \\ &b+c=d,\ \Lambda(b) + \Lambda(c) - \Lambda(d) =\sigma_{3} + O(T^{-1}_{\text{max}}) \\ &d+e+k=0,\ \Lambda(d) + \Lambda(e) + \Lambda(k) =\sigma_{4} + O(T^{-1}_{\text{max}})\} \end{split} \end{equation} If we can get upper bounds for $\# A$ and $\# B_{a,e}$, then we can get an upper bound for $\# Eq(\mathcal{C})$. Therefore, we just need to consider $A$ and $B_{a,e}$, which are systems of equations of smaller size. The above argument allows us to reduce the size of the systems of equations and prove the upper bound by induction. One problem with applying the induction argument is that $A$ and $B_{a,e}$ cannot be represented by couples as defined in Definition \ref{def.conple}. This is because the couple defined there contains at most two legs (an edge connected to just one node).
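As a sanity check of the decomposition above (purely illustrative and not part of the argument), the following Python sketch enumerates the solution sets on a toy one-dimensional model and verifies that $\#Eq(\mathcal{C})=\sum_{(a,e)\in A}\# B_{a,e}\le \# A\cdot \sup_{(a,e)}\# B_{a,e}$. The lattice $\frac{1}{L}\mathbb{Z}$ in dimension one, the dispersion $\Lambda(k)=k^2$, the tolerance `EPS` standing in for the $O(T^{-1}_{\text{max}})$ window, and the choices $\sigma_{\mathfrak{n}}=0$, $k=0$ are all assumptions made for this example.

```python
from itertools import product

# Toy check of the decomposition Eq(C) = union over (a,e) in A of B_{a,e}
# for the example couple above. All concrete choices (1-d lattice, Lam(k)=k^2,
# tolerance EPS, sigma_n = 0, k = 0) are illustrative assumptions.

L = 4
LATTICE = [m / L for m in range(-2 * L, 2 * L + 1)]  # (1/L)Z, |k| <= 2
EPS = 0.2            # plays the role of the O(T_max^{-1}) window
K = 0.0              # the value fixed on the legs
Lam = lambda k: k * k

def near(x):         # |x| <= EPS, i.e. x = O(T_max^{-1})
    return abs(x) <= EPS

# the set A: only the equations of node n1
A = [(a, e) for a, e in product(LATTICE, repeat=2)
     if a + e == K and near(Lam(a) + Lam(e) - Lam(K))]

def B(a, e):         # the set B_{a,e}: nodes n2, n3, n4 with a, e fixed
    return [(b, c, d) for b, c, d in product(LATTICE, repeat=3)
            if a + c == b and near(Lam(a) + Lam(c) - Lam(b))
            and b + c == d and near(Lam(b) + Lam(c) - Lam(d))
            and d + e + K == 0 and near(Lam(d) + Lam(e) + Lam(K))]

# brute force over all five variables at once
brute = sum(1 for a, b, c, d, e in product(LATTICE, repeat=5)
            if a + e == K and near(Lam(a) + Lam(e) - Lam(K))
            and a + c == b and near(Lam(a) + Lam(c) - Lam(b))
            and b + c == d and near(Lam(b) + Lam(c) - Lam(d))
            and d + e + K == 0 and near(Lam(d) + Lam(e) + Lam(K)))

decomposed = sum(len(B(a, e)) for a, e in A)
assert brute == decomposed                       # Eq(C) = union of B_{a,e}
assert decomposed <= len(A) * max(len(B(a, e)) for a, e in A)
```

All lattice points are exact binary fractions, so the floating-point comparisons above are exact; the two asserts confirm the decomposition identity and the product bound used in the induction.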
In Definition \ref{def.conple}, a leg is used to represent a variable which is fixed, as in the condition $k_{\mathfrak{l}} = - k_{\mathfrak{l}}'= k$ in \eqref{eq.diophantineeqpairedsigma'}. The definition of $B_{a,e}$ contains three fixed variables $a$, $e$, $k$, which cannot be represented by just two legs. Therefore, we have to define a new type of couple that can have multiple legs. Besides the number of legs, in the definition of $A$, if we insist on the rule that a node corresponds to an equation and the variables in the equation correspond to the edges connected to this node, then we find that the graph representation of $A$ contains one node and three edges. All these edges are legs, but two of the three edges correspond to the variables $a$, $e$, which are not fixed. Therefore, we have to define a type of leg that can correspond to an unfixed variable. To solve the above problems, we introduce the following definition. \begin{defn}\label{def.couplemultileg} \begin{enumerate} \item \textbf{Couples with multiple legs:} A graph in which all nodes have degree $1$ or $3$ is called a \underline{couple with multiple legs}. The graphs $A$ and $B_{a,e}$ in Figure \ref{fig.exampleofcuttingidea} are examples of this definition. \item \textbf{Legs:} In a couple with multiple legs, an edge connected to a degree one node is called a \underline{leg}. Recall that we have encountered this concept in the second paragraph of Section \ref{sec.refexp}; in what follows we call the leg defined there the \underline{root leg} of a tree. \item \textbf{Free legs and fixed legs:} In a couple with multiple legs, we use two types of node decoration for degree $1$ nodes, as in Figure \ref{fig.decorationdegreeone}. One is $\star$ and the other one is invisible.
\begin{figure}[H] \centering \scalebox{0.5}{ \begin{tikzpicture}[level distance=80pt, sibling distance=100pt] \node[draw, circle, minimum size=1cm, scale=2] at (0,0) (1) {$\mathcal{C}_1$} child {node[fillstar] (2) {}}; \node[draw, circle, minimum size=1cm, scale=2] at (5,0) (3) {$\mathcal{C}_2$} child {node[] (4) {}}; \draw[{Stealth[length=5mm, width=3mm]}-] (1) -- (2); \draw[{Stealth[length=5mm, width=3mm]}-] (3) -- (4); \end{tikzpicture} } \caption{Node decoration of degree one nodes} \label{fig.decorationdegreeone} \end{figure} An edge connected to a $\star$ node or an invisible node is called a free leg or a fixed leg, respectively. \item \textbf{Equations of a couple $Eq(\mathcal{C},\{c_{\mathfrak{l}}\}_{\mathfrak{l}})$:} We define the corresponding equations for a couple with multiple legs. \begin{equation}\label{eq.Eq(C,c)} \begin{split} &Eq(\mathcal{C},\{c_{\mathfrak{l}}\}_{\mathfrak{l}}) \\ =&\{k_{\mathfrak{e}}\in \mathbb{Z}^d_L,\ |k_{\mathfrak{e}}|\lesssim 1\ \forall \mathfrak{e}\in \mathcal{C}:\ |k_{\mathfrak{e}x}| \sim \kappa_{\mathfrak{e}},\ \forall \mathfrak{e}\in \mathcal{C}_{\text{norm}}.\ MC_{\mathfrak{n}},\ EC_{\mathfrak{n}},\ \forall \mathfrak{n}\in \mathcal{C}.\ k_{\mathfrak{l}}=c_{\mathfrak{l}},\ \forall \mathfrak{l}.\} \end{split} \end{equation} In this representation, the corresponding variable of a fixed leg $\mathfrak{l}$ is fixed to be the constant $c_{\mathfrak{l}}$, while the corresponding variable of a free leg $\mathfrak{l}$ is not fixed to be a constant. \end{enumerate} \end{defn} With the above definition, it is easy to show that the couples $A$ and $B_{a,e}$ in Figure \ref{fig.exampleofcuttingidea} correspond to the systems of equations \eqref{eq.couplequationA} and \eqref{eq.couplequationB} respectively. Using the above argument, we can prove the following proposition, which gives an upper bound on the number of solutions of \eqref{eq.diophantineeqpairedsigma} (or \eqref{eq.diophantineeqpairedsigma'}).
\begin{prop}\label{prop.counting} Let $\mathcal{C}=\mathcal{C}(T,T',p)$ be a connected couple with exactly two fixed legs $\mathfrak{l}$, $\mathfrak{l}'$, let $n$ be the total number of nodes in $\mathcal{C}$ and $Q=L^{d}T^{-1}_{\text{max}}$. We fix $k\in \mathbb{R}$ for the legs $\mathfrak{l}$, $\mathfrak{l}'$ and $\sigma_{\mathfrak{n}}\in\mathbb{R}$ for each $\mathfrak{n}\in \mathcal{C}$. Assume that $\alpha$ satisfies \eqref{eq.conditionalpha}. Then the number of solutions $M$ of \eqref{eq.diophantineeqpairedsigma} (or \eqref{eq.diophantineeqpairedsigma'}) is bounded by \begin{equation}\label{eq.countingbd0} M\leq L^{O(n\theta)} Q^{\frac{n}{2}}\ \prod_{\mathfrak{e}\in \mathcal{C}_{\text{norm}}} \kappa^{-1}_{\mathfrak{e}}. \end{equation} \end{prop} \begin{proof} The proof is lengthy and therefore divided into several steps. The main idea of the proof is to use the operation of edge cutting to decompose the couple $\mathcal{C}$ into smaller ones $\mathcal{C}_1$, $\mathcal{C}_2$, and then apply Lemma \ref{lem.Eq(C)cutting}, which relates $\#Eq(\mathcal{C})$ and $\#Eq(\mathcal{C}_i)$. The desired upper bound on $\#Eq(\mathcal{C})$ can then be obtained from those on $\#Eq(\mathcal{C}_i)$ inductively. \textbf{Step 1.} In this step, we explain the edge-cutting argument and prove Lemma \ref{lem.Eq(C)cutting}, which relates $\#Eq(\mathcal{C}_1)$, $\#Eq(\mathcal{C}_2)$ and $\#Eq(\mathcal{C})$. Here is the formal definition of cutting. \begin{defn} \begin{enumerate} \item \textbf{Cutting an edge:} Given an edge $\mathfrak{e}$, we can cut it into two edges (a fixed leg and a free leg) as in Figure \ref{fig.cutedge}.
\begin{figure}[H] \centering \scalebox{0.5}{ \begin{tikzpicture}[level distance=80pt, sibling distance=100pt] \node[draw, circle, minimum size=1cm, scale=2] at (0,0) (1) {$\mathcal{C}_1$}; \node[draw, circle, minimum size=1cm, scale=2] at (5,0) (2) {$\mathcal{C}_2$}; \draw[-{Stealth[length=5mm, width=3mm]}] (1) -- (2); \node[scale =3] at (2.5,0) {$\times$}; \node[draw, single arrow, minimum height=33mm, minimum width=8mm, single arrow head extend=2mm, anchor=west, rotate=0] at (7.5,0) {}; \node[draw, circle, minimum size=1cm, scale=2] at (13,0) (11) {$\mathcal{C}_1$}; \node[fillstar] at (16,0) (12) {}; \draw[-{Stealth[length=5mm, width=3mm]}] (11) -- (12); \node[] at (17,0) (13) {}; \node[draw, circle, minimum size=1cm, scale=2] at (20,0) (14) {$\mathcal{C}_2$}; \draw[-{Stealth[length=5mm, width=3mm]}] (13) -- (14); \end{tikzpicture} } \caption{An example of cutting an edge} \label{fig.cutedge} \end{figure} \item \textbf{Cut:} A \underline{cut} $c$ of a couple $\mathcal{C}$ is a set of edges such that $\mathcal{C}$ is disconnected after cutting all edges in $c$. A \underline{refined cut} is a cut together with a map $\text{rc}:c\rightarrow \{\text{left}, \text{right}\}$. For each $\mathfrak{e}\in c$, if $\text{rc}(\mathfrak{e})=\text{left}$ (resp. right), then as in Figure \ref{fig.cutedge} the node produced by cutting $\mathfrak{e}$ in the left (resp. right) couple is a $\star$ node (resp. invisible node). \item \textbf{$c(\mathfrak{e})$, $c(\mathfrak{n})$ and $c(\mathfrak{l})$:} Given an edge $\mathfrak{e}$, define $c(\mathfrak{e})$ to be the cut that consists of only the one edge $\mathfrak{e}$. Given a node $\mathfrak{n}\in \mathcal{C}$, let $\{\mathfrak{e}_{i}\}$ be the edges that connect $\mathfrak{n}$ with other nodes, and define $c(\mathfrak{n})$ to be the cut that consists of the edges $\{\mathfrak{e}_{i}\}$. Given a leg $\mathfrak{l}$, let $\mathfrak{n}$ be the unique node connected to it, and define $c(\mathfrak{l})$ to be the cut $c(\mathfrak{n})$.
In other words, $\text{rc}$ describes which of the two legs produced by cutting an edge should be the free leg and which the fixed leg. An example of cutting $c(\mathfrak{e})$ is given by Figure \ref{fig.cutedge}. The following picture gives an example of cutting $c(\mathfrak{n})$ or $c(\mathfrak{l})$ (in this picture $\mathfrak{n}=\mathfrak{n}_1$ and $\mathfrak{l}$ is the leg labelled by $k$). \begin{figure}[H] \centering \scalebox{0.4}{ \begin{tikzpicture}[level distance=80pt, sibling distance=100pt] \node[] at (0,0) (1) {}; \node[fillcirc] at (3,0) (2) {}; \node[fillcirc] at (6,-2) (3) {}; \node[fillcirc] at (9,-2) (4) {}; \node[fillcirc] at (12,0) (5) {}; \node[] at (15,0) (6) {}; \draw[-{Stealth[length=5mm, width=3mm]}] (1) edge (2); \draw[-{Stealth[length=5mm, width=3mm]}] (2) edge (3); \draw[-{Stealth[length=5mm, width=3mm]}, bend left =40] (3) edge (4); \draw[-{Stealth[length=5mm, width=3mm]}, bend right =40] (3) edge (4); \draw[-{Stealth[length=5mm, width=3mm]}] (4) edge (5); \draw[-{Stealth[length=5mm, width=3mm]}] (5) edge (6); \draw[-{Stealth[length=5mm, width=3mm]}, bend left =40] (2) edge (5); \node[scale=2.0] at (3,-0.7) {$\mathfrak{n}_{1}$}; \node[scale=2.0] at (6,-2.7) {$\mathfrak{n}_{2}$}; \node[scale=2.0] at (9,-2.7) {$\mathfrak{n}_{3}$}; \node[scale=2.0] at (12,-0.7) {$\mathfrak{n}_{4}$}; \node[scale=2.0] at (1.5,-0.5) {$k$}; \node[scale=2.0] at (4.3,-1.4) {$a$}; \node[scale=2.0] at (7.5,-3.1) {$b$}; \node[scale=2.0] at (7.5,-1) {$c$}; \node[scale=2.0] at (10.7,-1.4) {$d$}; \node[scale=2.0] at (7.5,2.2) {$e$}; \node[scale=2.0] at (13.5,-0.5) {$-k$}; \node[scale=3.0, rotate =45] at (4.3,-0.85) {$\times$}; \node[scale=3.0, rotate = 0] at (7.5,1.75) {$\times$}; \node[draw, single arrow, minimum height=33mm, minimum width=8mm, single arrow head extend=2mm, anchor=west, rotate=0] at (16,0) {}; \node[] at (20,0) (11) {}; \node[fillcirc] at (23,0) (12) {}; \node[fillstar] at (26,-2) (13) {}; \node[fillstar] at (26,2) (14) {}; \draw[-{Stealth[length=5mm, width=3mm]}] (11) edge (12); \draw[-{Stealth[length=5mm, width=3mm]}] (12) edge (13); \draw[-{Stealth[length=5mm, width=3mm]}] (12) edge (14); \node[scale=2.0] at (23,-0.7) {$\mathfrak{n}_{1}$}; \node[scale=2.0] at (21.5,-0.5) {$k$}; \node[scale=2.0] at (24.3,-1.4) {$a$}; \node[scale=2.0] at (24.3,1.4) {$e$}; \node[scale=2.0] at (23,-1.8) {$A$}; \node[] at (28,-2) (32) {}; \node[fillcirc] at (31,-2) (33) {}; \node[fillcirc] at (34,-2) (34) {}; \node[fillcirc] at (37,0) (35) {}; \node[] at (40,0) (36) {}; \node[] at (34,2) (37) {}; \draw[-{Stealth[length=5mm, width=3mm]}] (32) edge (33); \draw[-{Stealth[length=5mm, width=3mm]}, bend left =40] (33) edge (34); \draw[-{Stealth[length=5mm, width=3mm]}, bend right =40] (33) edge (34); \draw[-{Stealth[length=5mm, width=3mm]}] (34) edge (35); \draw[-{Stealth[length=5mm, width=3mm]}] (35) edge (36); \draw[-{Stealth[length=5mm, width=3mm]}] (37) edge (35); \node[scale=2.0] at (31,-2.7) {$\mathfrak{n}_{2}$}; \node[scale=2.0] at (34,-2.7) {$\mathfrak{n}_{3}$}; \node[scale=2.0] at (37,-0.7) {$\mathfrak{n}_{4}$}; \node[scale=2.0] at (29.5,-2.5) {$a$}; \node[scale=2.0] at (32.5,-3.1) {$b$}; \node[scale=2.0] at (32.5,-1) {$c$}; \node[scale=2.0] at (35.7,-1.4) {$d$}; \node[scale=2.0] at (35.7,1.4) {$e$}; \node[scale=2.0] at (38.5,-0.5) {$-k$}; \node[scale=2.0] at (37,-2.2) {$B_{a,e}$}; \end{tikzpicture} } \caption{An example of cuts, $c(\mathfrak{n})$ and $c(\mathfrak{l})$} \label{fig.c(n)c(e)} \end{figure} \item \textbf{Normal edges in couples with multiple legs:} In this paper, all couples with multiple legs are produced by cutting a couple defined in Definition \ref{def.conple}. If a normal edge $\mathfrak{e}$ is cut into $\mathfrak{e}_1$ and $\mathfrak{e}_2$, then $\mathfrak{e}_1$ and $\mathfrak{e}_2$ are defined to be normal in the resulting couples with multiple legs.
\end{enumerate} \end{defn} \begin{rem} Explicitly writing down the full definition of $\text{rc}$ is often complicated, so in what follows, when defining $\text{rc}$, we will just describe which one should be the free or fixed leg among the two legs produced by cutting an edge. \end{rem} The couple in Proposition \ref{prop.counting} contains just two fixed legs, but after cutting, the resulting couples may contain more fixed or free legs. By Definition \ref{def.couplemultileg}, for a couple $\mathcal{C}$ with multiple legs, given constants $c_{\mathfrak{l}}$ for each fixed leg $\mathfrak{l}$, the corresponding system of equations of $\mathcal{C}$ is denoted by $Eq(\mathcal{C},\{c_{\mathfrak{l}}\}_{\mathfrak{l}})$. In $Eq(\mathcal{C},\{c_{\mathfrak{l}}\}_{\mathfrak{l}})$ each edge $\mathfrak{e}$ is associated with a variable $k_{\mathfrak{e}}$ and each node $\mathfrak{n}$ is still associated with the equations $MC_{\mathfrak{n}}$, $EC_{\mathfrak{n}}$. The corresponding variables of free (resp. fixed) legs are free (resp. fixed to be a constant $c_{\mathfrak{l}}$). Let us explain how $Eq(\mathcal{C})$ and $\#Eq(\mathcal{C})$ change after cutting. The result is summarized in the following lemma. \begin{lem}\label{lem.Eq(C)cutting} Let $c$ be a cut of $\mathcal{C}$ that consists of edges $\{\mathfrak{e}_{i}\}$ and let $\mathcal{C}_1$, $\mathcal{C}_2$ be the two components after cutting. Let $\mathfrak{e}_{i}^{(1)}\in \mathcal{C}_1$, $\mathfrak{e}_{i}^{(2)}\in \mathcal{C}_2$ be the two edges obtained by cutting $\mathfrak{e}_{i}$. The $\text{rc}$ map is defined by assigning $\{\mathfrak{e}_{i}^{(1)}\}$ to be free legs and $\{\mathfrak{e}_{i}^{(2)}\}$ to be fixed legs.
Then we have \begin{equation}\label{eq.Eq(C)cutting} Eq(\mathcal{C},\{c_{\mathfrak{l}}\}_{\mathfrak{l}})=\left\{(k_{\mathfrak{e}_1},k_{\mathfrak{e}_{2}}):\ k_{\mathfrak{e}_1}\in Eq(\mathcal{C}_1,\{c_{\mathfrak{l}_1}\}),\ k_{\mathfrak{e}_{2}}\in Eq\left(\mathcal{C}_{2}, \{c_{\mathfrak{l}_2}\}, \left\{k_{\mathfrak{e}_{i}^{(1)}}\right\}_{i}\right)\right\}. \end{equation} and \begin{equation}\label{eq.Eq(C)cuttingcounting} \sup_{\{c_{\mathfrak{l}}\}_{\mathfrak{l}}}\#Eq(\mathcal{C},\{c_{\mathfrak{l}}\}_{\mathfrak{l}})\le \sup_{\{c_{\mathfrak{l}_1}\}_{\mathfrak{l}_1\in \text{leg}(\mathcal{C}_1)} } \# Eq(\mathcal{C}_1,\{c_{\mathfrak{l}_1}\}) \sup_{\{c_{\mathfrak{l}_2}\}_{\mathfrak{l}_2\in \text{leg}(\mathcal{C}_2)} }\# Eq(\mathcal{C}_{2}, \{c_{\mathfrak{l}_2}\}). \end{equation} Here $\text{leg}(\mathcal{C})$ is the set of fixed legs in $\mathcal{C}$ (not the set of all legs!). \end{lem} \begin{proof} By definition \eqref{eq.Eq(C,c)} we have \begin{equation} \begin{split} Eq(\mathcal{C},\{c_{\mathfrak{l}}\}_{\mathfrak{l}})=&\{k_{\mathfrak{e}}\in \mathbb{Z}^d_L,\ |k_{\mathfrak{e}}|\lesssim 1:\ |k_{\mathfrak{e}x}| \sim \kappa_{\mathfrak{e}},\ \forall \mathfrak{e}\in \mathcal{C}_{\text{norm}}.\ MC_{\mathfrak{n}},\ EC_{\mathfrak{n}},\ \forall \mathfrak{n}.\ k_{\mathfrak{l}}=c_{\mathfrak{l}},\ \forall \mathfrak{l}\in \text{leg}(\mathcal{C}).\} \\ =&\{(k_{\mathfrak{e}_1},k_{\mathfrak{e}_2}):\ |k_{\mathfrak{e}_1}| \lesssim 1,\ MC_{\mathfrak{n}_1},\ EC_{\mathfrak{n}_1}.\ \forall \mathfrak{e}_1, \mathfrak{n}_1\in\mathcal{C}_1.\ |k_{\mathfrak{e}x}| \sim \kappa_{\mathfrak{e}},\ \forall \mathfrak{e}\in \mathcal{C}_{\text{norm}}. \\ &k_{\mathfrak{l}_1}=c_{\mathfrak{l}_1},\ \forall \mathfrak{l}_1\in \text{leg}(\mathcal{C})\cap \text{leg}(\mathcal{C}_1) \\ &|k_{\mathfrak{e}_2}| \lesssim 1,\ MC_{\mathfrak{n}_2},\ EC_{\mathfrak{n}_2}.\ \forall \mathfrak{e}_2, \mathfrak{n}_2\in\mathcal{C}_2.\ |k_{\mathfrak{e}x}| \sim \kappa_{\mathfrak{e}},\ \forall \mathfrak{e}\in \mathcal{C}_{\text{norm}}. 
\\ &k_{\mathfrak{l}_2}=c_{\mathfrak{l}_2},\ \forall \mathfrak{l}_2\in \text{leg}(\mathcal{C})\cap \text{leg}(\mathcal{C}_2),\ k_{\mathfrak{e}_{i}^{(2)}}=k_{\mathfrak{e}_{i}^{(1)}},\ \forall\mathfrak{e}_{i}\in c\} \\ =&\left\{(k_{\mathfrak{e}_1},k_{\mathfrak{e}_{2}}):\ k_{\mathfrak{e}_1}\in Eq(\mathcal{C}_1,\{c_{\mathfrak{l}_1}\}),\ k_{\mathfrak{e}_{2}}\in Eq\left(\mathcal{C}_{2}, \{c_{\mathfrak{l}_2}\}, \left\{k_{\mathfrak{e}_{i}^{(1)}}\right\}_{i}\right)\right\} \end{split} \end{equation} Here in $Eq\left(\mathcal{C}_{2}, \{c_{\mathfrak{l}_2}\}, \left\{k_{\mathfrak{e}_{i}^{(1)}}\right\}_{i}\right)$, the $k_{\mathfrak{e}_{i}^{(1)}}$ are viewed as constants and the $k_{\mathfrak{e}_{i}^{(2)}}$ are fixed to these constant values. Therefore, after cutting, $Eq(\mathcal{C},\{c_{\mathfrak{l}}\}_{\mathfrak{l}})$ becomes \begin{equation} Eq(\mathcal{C},\{c_{\mathfrak{l}}\}_{\mathfrak{l}})=\left\{(k_{\mathfrak{e}_1},k_{\mathfrak{e}_{2}}):\ k_{\mathfrak{e}_1}\in Eq(\mathcal{C}_1,\{c_{\mathfrak{l}_1}\}),\ k_{\mathfrak{e}_{2}}\in Eq\left(\mathcal{C}_{2}, \{c_{\mathfrak{l}_2}\}, \left\{k_{\mathfrak{e}_{i}^{(1)}}\right\}_{i}\right)\right\}. \end{equation} This proves \eqref{eq.Eq(C)cutting}. We can also find the relation between $\#Eq(\mathcal{C}_1)$, $\#Eq(\mathcal{C}_2)$ and $\#Eq(\mathcal{C})$.
Applying \eqref{eq.Eq(C)cutting}, \begin{equation} \begin{split} \#Eq(\mathcal{C},\{c_{\mathfrak{l}}\}_{\mathfrak{l}})=&\sum_{(k_{\mathfrak{e}_1},k_{\mathfrak{e}_{2}})\in Eq(\mathcal{C},\{c_{\mathfrak{l}}\}_{\mathfrak{l}})} 1 \\ =&\sum_{\left\{(k_{\mathfrak{e}_1},k_{\mathfrak{e}_{2}}):\ k_{\mathfrak{e}_1}\in Eq(\mathcal{C}_1,\{c_{\mathfrak{l}_1}\}),\ k_{\mathfrak{e}_{2}}\in Eq\left(\mathcal{C}_{2}, \{c_{\mathfrak{l}_2}\}, \left\{k_{\mathfrak{e}_{i}^{(1)}}\right\}_{i}\right)\right\}} 1 \\ =&\sum_{k_{\mathfrak{e}_1}\in Eq(\mathcal{C}_1,\{c_{\mathfrak{l}_1}\})} \sum_{k_{\mathfrak{e}_{2}}\in Eq\left(\mathcal{C}_{2}, \{c_{\mathfrak{l}_2}\}, \left\{k_{\mathfrak{e}_{i}^{(1)}}\right\}_{i}\right)} 1 \\ =&\sum_{k_{\mathfrak{e}_1}\in Eq(\mathcal{C}_1,\{c_{\mathfrak{l}_1}\})} \# Eq\left(\mathcal{C}_{2}, \{c_{\mathfrak{l}_2}\}, \left\{k_{\mathfrak{e}_{i}^{(1)}}\right\}_{i}\right) \end{split} \end{equation} Taking the supremum in the above equation, \begin{equation} \begin{split} &\sup_{\{c_{\mathfrak{l}}\}_{\mathfrak{l}}}\#Eq(\mathcal{C},\{c_{\mathfrak{l}}\}_{\mathfrak{l}}) =\sup_{\{c_{\mathfrak{l}}\}_{\mathfrak{l}}}\sum_{k_{\mathfrak{e}_1}\in Eq(\mathcal{C}_1,\{c_{\mathfrak{l}_1}\})} \# Eq\left(\mathcal{C}_{2}, \{c_{\mathfrak{l}_2}\}, \left\{k_{\mathfrak{e}_{i}^{(1)}}\right\}_{i}\right) \\ \le &\sup_{\{c_{\mathfrak{l}_1}\}_{\mathfrak{l}_1\in \text{leg}(\mathcal{C})\cap \text{leg}(\mathcal{C}_1)} }\sum_{k_{\mathfrak{e}_1}\in Eq(\mathcal{C}_1,\{c_{\mathfrak{l}_1}\})} \sup_{\{c_{\mathfrak{l}_2}\}_{\mathfrak{l}_2\in \text{leg}(\mathcal{C})\cap \text{leg}(\mathcal{C}_2)} }\# Eq\left(\mathcal{C}_{2}, \{c_{\mathfrak{l}_2}\}, \left\{k_{\mathfrak{e}_{i}^{(1)}}\right\}_{i}\right) \\ \le &\sup_{\{c_{\mathfrak{l}_1}\}_{\mathfrak{l}_1\in \text{leg}(\mathcal{C}_1)} }\sum_{k_{\mathfrak{e}_1}\in Eq(\mathcal{C}_1,\{c_{\mathfrak{l}_1}\})} \sup_{\{c_{\mathfrak{l}_2}\}_{\mathfrak{l}_2\in \text{leg}(\mathcal{C}_2)} }\# Eq(\mathcal{C}_{2}, \{c_{\mathfrak{l}_2}\}) \\ = &\sup_{\{c_{\mathfrak{l}_1}\}_{\mathfrak{l}_1\in \text{leg}(\mathcal{C}_1)} } \# Eq(\mathcal{C}_1,\{c_{\mathfrak{l}_1}\}) \sup_{\{c_{\mathfrak{l}_2}\}_{\mathfrak{l}_2\in \text{leg}(\mathcal{C}_2)} }\# Eq(\mathcal{C}_{2}, \{c_{\mathfrak{l}_2}\}) \end{split} \end{equation} This proves \eqref{eq.Eq(C)cuttingcounting}. \end{proof} \textbf{Step 2.} In this step, we specify the cutting procedure. \begin{prop}\label{prop.cuttingalgorithm} There exists a recursive algorithm that repeatedly decomposes $\mathcal{C}$ into smaller pieces and satisfies the following requirements. In the rest of this paper we call this algorithm ``the cutting algorithm''. (1) The input of the $0$-th step of this algorithm is $\mathcal{C}(0)=\mathcal{C}$. The inputs of the other steps are the outputs of previous steps of the algorithm itself. (2) In step $k$, $\mathcal{C}(k)$ is decomposed into $2$ or $3$ connected components by cutting edges. For $\mathcal{C}(1)$, $\# Eq(\mathcal{C}(1))=\# Eq(\mathcal{C})$ and $\mathcal{C}_{\text{norm}}(1)=\mathcal{C}_{\text{norm}}$. (3) One of the connected components after cutting contains exactly one node $\mathfrak{n}$ and one fixed normal leg $\mathfrak{l}$. We call this component $\mathcal{C}(k)_{\mathfrak{l}}$. There are only two possibilities for $\mathcal{C}(k)_{\mathfrak{l}}$, as in Figure \ref{fig.2possibilities}. We label them by $\mathcal{C}_{I}$, $\mathcal{C}_{II}$.
\begin{figure}[H] \centering \scalebox{0.3}{ \begin{tikzpicture}[level distance=80pt, sibling distance=100pt] \node[] at (0,0) (1) {} child {node[fillcirc] (2) {} child {node[fillstar] (3) {}} child {node[fillstar] (4) {}} }; \draw[-{Stealth[length=5mm, width=3mm]}] (1) -- (2); \draw[-{Stealth[length=5mm, width=3mm]}] (2) -- (3); \draw[-{Stealth[length=5mm, width=3mm]}] (2) -- (4); \node[] at (12,0) (1) {} child {node[fillcirc] (2) {} child {node[fillstar] (3) {}} child {node[xshift = 5pt, yshift = -10pt] (4) {}} }; \draw[-{Stealth[length=5mm, width=3mm]}] (1) -- (2); \draw[-{Stealth[length=5mm, width=3mm]}] (2) -- (3); \draw[-{Stealth[length=5mm, width=3mm]}] (2) -- (4); \end{tikzpicture} } \caption{Two possibilities of $\mathcal{C}_\mathfrak{l}$.} \label{fig.2possibilities} \end{figure} (4) We need the following definition. \underline{Property P of a couple $\widetilde{\mathcal{C}}$}: $\widetilde{\mathcal{C}}$ is connected and contains exactly one free leg and at least one fixed normal leg. With the above definition, the cutting algorithm must satisfy the requirement that the output components of each step of the algorithm satisfy property P. \end{prop} \begin{rem} Although the definition of a cut is rather general, we only use three special types of cuts, $c(\mathfrak{e})$, $c(\mathfrak{n})$ and $c(\mathfrak{l})$, in the cutting algorithm. \end{rem} \begin{rem} The couple $\mathcal{C}$ does not satisfy property P because it does not have any free leg. \end{rem} \begin{proof}[Proof of Proposition \ref{prop.cuttingalgorithm}.] Consider the following algorithm. \medskip \begin{mdframed} \centerline{\textbf{The cutting algorithm}} \medskip \ \ \ \textbf{Step 0.} The input $\mathcal{C}(0)$ of this step is $\mathcal{C}$. In this step, we replace one of the two fixed legs of $\mathcal{C}$ by a free leg to obtain a new couple $\widehat{\mathcal{C}}$. By Lemma \ref{lem.freeleg} (3), $\#Eq(\mathcal{C})=\#Eq(\widehat{\mathcal{C}})$.
The output $\mathcal{C}(1)$ of this step is $\widehat{\mathcal{C}}$. An example of step 0 can be found in the following picture. \begin{figure}[H] \centering \scalebox{0.4}{ \begin{tikzpicture}[level distance=80pt, sibling distance=100pt] \node[] at (0,0) (1) {}; \node[fillcirc] at (3,0) (2) {}; \node[fillcirc] at (7,0) (3) {}; \node[] at (10,0) (4) {}; \draw[-{Stealth[length=5mm, width=3mm]}] (1) edge (2); \draw[-{Stealth[length=5mm, width=3mm]}, bend left =40] (2) edge (3); \draw[-{Stealth[length=5mm, width=3mm]}, bend right =40] (2) edge (3); \draw[-{Stealth[length=5mm, width=3mm]}] (3) edge (4); \node[scale=2.0] at (5.1,-2) {$\mathcal{C}(0)=\mathcal{C}$}; \node[draw, single arrow, minimum height=33mm, minimum width=8mm, single arrow head extend=2mm, anchor=west, rotate=0] at (11,0) {}; \node[] at (15,0) (11) {}; \node[fillcirc] at (18,0) (12) {}; \node[fillcirc] at (22,0) (13) {}; \node[fillstar] at (25,0) (14) {}; \draw[-{Stealth[length=5mm, width=3mm]}] (11) edge (12); \draw[-{Stealth[length=5mm, width=3mm]}, bend left =40] (12) edge (13); \draw[-{Stealth[length=5mm, width=3mm]}, bend right =40] (12) edge (13); \draw[-{Stealth[length=5mm, width=3mm]}] (13) edge (14); \node[scale=2.0] at (20.1,-2) {$\mathcal{C}(1)=\widehat{\mathcal{C}}$}; \end{tikzpicture} } \caption{An example of step $0$} \label{fig.step0} \end{figure} \textbf{Step $k$.} Assume that we have finished step $k-1$. The input $\mathcal{C}(k)$ of this step is the output of step $k-1$. (If there are two output couples from step $k-1$, apply step $k$ to these two couples separately.) By property P, there exists a fixed normal leg in $\mathcal{C}(k)$. Choose one such leg $\mathfrak{l}$ and define $\mathcal{C}(k)_{\mathfrak{l}}$ to be the component which contains $\mathfrak{l}$ after cutting $c(\mathfrak{l})$, and define $\mathcal{C}(k)'=\mathcal{C}(k)\backslash \mathcal{C}(k)_{\mathfrak{l}}$. Check how many components $\mathcal{C}(k)'$ has.
Jump to case 1 if the number of components equals 1; otherwise jump to case 2. Examples of case 1 and case 2 can be found in the following picture. \begin{figure}[H] \centering \scalebox{0.4}{ \begin{tikzpicture}[level distance=80pt, sibling distance=100pt] \node[] at (0,0) (1) {}; \node[fillcirc] at (0, -3) (2) {}; \node[draw, circle, minimum size=1cm, scale=2] at (0,-7) (3) {$\mathcal{C}(k)'$}; \draw[-{Stealth[length=5mm, width=3mm]}] (1) edge (2); \draw[-{Stealth[length=5mm, width=3mm]}, bend left =40] (2) edge (3); \draw[-{Stealth[length=5mm, width=3mm]}, bend right =40] (2) edge (3); \node[] at (10,0) (1) {}; \node[fillcirc] at (10, -3) (2) {}; \node[draw, circle, minimum size=1cm, scale=1.5] at (7,-7) (3) {$(\mathcal{C}(k)')_1$}; \node[draw, circle, minimum size=1cm, scale=1.5] at (13,-7) (4) {$(\mathcal{C}(k)')_2$}; \draw[-{Stealth[length=5mm, width=3mm]}] (1) edge (2); \draw[-{Stealth[length=5mm, width=3mm]}] (2) edge (3); \draw[-{Stealth[length=5mm, width=3mm]}] (2) edge (4); \end{tikzpicture} } \caption{Examples of case 1 and case 2 (Left is case 1 and right is case 2. $(\mathcal{C}(k)')_1$ and $(\mathcal{C}(k)')_2$ are the two components of $\mathcal{C}(k)'$)} \label{fig.step1case} \end{figure} \textbf{Case 1 of step $k$.} In this case $\mathcal{C}(k)'$ has one component. We rename this $\mathcal{C}(k)'$ to $\mathcal{C}(k)_1$. By property P, there exists a unique free leg $\mathfrak{l}_{fr}$ in $\mathcal{C}(k)$. Check if $\mathfrak{l}_{fr}$ and $\mathfrak{l}$ are connected to the same node. If they are, jump to case 1.2; otherwise jump to case 1.1. \textbf{Case 1.1.} Cut the edges in $c(\mathfrak{l})$ into $\{\mathfrak{e}_{i}^{(1)}\}_{i=1,2}$ and $\{\mathfrak{e}_{i}^{(2)}\}_{i=1,2}$; then $\mathcal{C}(k)$ is decomposed into $\mathcal{C}(k)_{\mathfrak{l}}$ and $\mathcal{C}(k)_1=\mathcal{C}(k)\backslash \mathcal{C}(k)_{\mathfrak{l}}$.
As in Lemma \ref{lem.Eq(C)cutting}, define $\{\mathfrak{e}_{i}^{(1)}\}\subseteq \mathcal{C}(k)_{\mathfrak{l}}$ to be free legs and $\{\mathfrak{e}_{i}^{(2)}\}\subseteq \mathcal{C}(k)_1$ to be fixed legs. If $\mathcal{C}(k)_1$ satisfies property P, then define $\mathcal{C}(k+1)=\mathcal{C}(k)_1$ to be the output of step $k$ and apply step $k+1$ to $\mathcal{C}(k+1)$. Otherwise, by Lemma \ref{lem.normleg} (2), $\mathcal{C}(k)_1$ contains exactly one free normal leg and at least one fixed leg. By Lemma \ref{lem.freeleg}, we can define a new couple $\widehat{\mathcal{C}}$ such that the free normal leg becomes fixed and the fixed leg becomes free, and we define $\mathcal{C}(k+1)=\widehat{\mathcal{C}}$ in this case. Examples of cutting in case 1.1 can be found in the following picture. \begin{figure}[H] \centering \scalebox{0.4}{ \begin{tikzpicture}[level distance=80pt, sibling distance=100pt] \node[] at (0,0) (1) {}; \node[fillcirc] at (0, -3) (2) {}; \node[draw, circle, minimum size=1cm, scale=2] at (0,-7) (3) {$\mathcal{C}(k)_1$}; \draw[-{Stealth[length=5mm, width=3mm]}] (1) edge (2); \draw[-{Stealth[length=5mm, width=3mm]}, bend left =40] (2) edge (3); \draw[-{Stealth[length=5mm, width=3mm]}, bend right =40] (2) edge (3); \node[scale =3] at (-1.1,-4.8) {$\times$}; \node[scale =3] at (1.1,-4.8) {$\times$}; \node[draw, single arrow, minimum height=33mm, minimum width=8mm, single arrow head extend=2mm, anchor=west, rotate=0] at (4,-4.8) {}; \node[] at (12,-1.5) (11) {} child {node[fillcirc] (12) {} child {node[fillstar] (13) {}} child {node[fillstar] (14) {}} }; \draw[-{Stealth[length=5mm, width=3mm]}] (11) -- (12); \draw[-{Stealth[length=5mm, width=3mm]}] (12) -- (13); \draw[-{Stealth[length=5mm, width=3mm]}] (12) -- (14); \node[scale =2] at (12,-8) {$\mathcal{C}(k)_{\mathfrak{l}}$}; \node[] at (17,-3) (21) {}; \node[] at (21,-3) (22) {}; \node[draw, circle, minimum size=1cm, scale=2] at (19,-6) (23) {$\mathcal{C}(k)_1$}; \draw[-{Stealth[length=5mm, width=3mm]}] (21) edge (23);
\draw[-{Stealth[length=5mm, width=3mm]}] (22) edge (23); \end{tikzpicture} } \caption{Examples of cutting in case 1.1} \label{fig.step1case1.1} \end{figure} \textbf{Case 1.2.} In this case $\mathfrak{l}_{fr}$ and $\mathfrak{l}$ are connected to the same node $\mathfrak{n}$. Let $\mathfrak{e}$ be the last edge that is connected to $\mathfrak{n}$. Cut $\mathfrak{e}$ into $\{\mathfrak{e}^{(1)},\mathfrak{e}^{(2)}\}$; then $\mathcal{C}(k)$ is decomposed into $\mathcal{C}(k)_{\mathfrak{l}}$ and $\mathcal{C}(k)_1=\mathcal{C}(k)\backslash \mathcal{C}(k)_{\mathfrak{l}}$. Define $\mathfrak{e}^{(1)}\in \mathcal{C}(k)_{\mathfrak{l}}$ to be a fixed leg and $\mathfrak{e}^{(2)}\in \mathcal{C}(k)_1$ to be a free leg. If $\mathcal{C}(k)_1$ satisfies property P, then define $\mathcal{C}(k+1)=\mathcal{C}(k)_1$. Otherwise, by Lemma \ref{lem.normleg} (2), $\mathfrak{e}^{(2)}$ is the only normal leg; then, using Lemma \ref{lem.freeleg} (2), we may construct a new couple $\mathcal{C}(k+1)$ by assigning $\mathfrak{e}^{(2)}$ to be fixed and another leg to be free. Finally, define $\mathcal{C}(k+1)$ to be the output of step $k$ and apply step $k+1$ to $\mathcal{C}(k+1)$. Examples of cutting in case 1.2 can be found in the following picture.
\begin{figure}[H] \centering \scalebox{0.4}{ \begin{tikzpicture}[level distance=80pt, sibling distance=100pt] \node[] at (-2,0) (0) {}; \node[fillstar] at (1.8,-0.2) (1) {}; \node[fillcirc] at (0, -3) (2) {}; \node[draw, circle, minimum size=1cm, scale=2] at (0,-7) (3) {$\mathcal{C}(k)_1$}; \draw[-{Stealth[length=5mm, width=3mm]}] (0) edge (2); \draw[-{Stealth[length=5mm, width=3mm]}] (1) edge (2); \draw[-{Stealth[length=5mm, width=3mm]}] (2) edge (3); \node[scale =3] at (0,-4.4) {$\times$}; \node[draw, single arrow, minimum height=33mm, minimum width=8mm, single arrow head extend=2mm, anchor=west, rotate=0] at (4,-4.8) {}; \node at (12,-7) (11) {} [grow =90] child {node[fillcirc] (12) {} child {node[fillstar, xshift = -0.2cm, yshift = -0.2cm] (13) {}} child {node[] (14) {}} }; \draw[{Stealth[length=5mm, width=3mm]}-] (11) -- (12); \draw[{Stealth[length=5mm, width=3mm]}-] (12) -- (13); \draw[{Stealth[length=5mm, width=3mm]}-] (12) -- (14); \node[scale =2] at (12,-8) {$\mathcal{C}(k)_{\mathfrak{l}}$}; \node[fillstar] at (19,-2) (22) {}; \node[draw, circle, minimum size=1cm, scale=2] at (19,-6) (23) {$\mathcal{C}(k)_1$}; \draw[-{Stealth[length=5mm, width=3mm]}] (22) edge (23); \end{tikzpicture} } \caption{Examples of cutting in case 1.2} \label{fig.step1case1.2} \end{figure} \textbf{Case 2 of step $k$.} Let the two connected components of $\mathcal{C}(k)\backslash \mathcal{C}(k)_{\mathfrak{l}}$ be $\mathcal{C}(k)_2$ and $\mathcal{C}(k)_3$. Let $\mathfrak{e}_{2}$, $\mathfrak{e}_{3}$ be the two edges that connect $\mathfrak{l}$ and $\mathcal{C}(k)_2$, $\mathcal{C}(k)_3$ respectively. Cut $\mathfrak{e}_{2}$, $\mathfrak{e}_{3}$ into $\{\mathfrak{e}_{2}^{(1)},\mathfrak{e}_{3}^{(1)}\}\subseteq \mathcal{C}(k)_{\mathfrak{l}}$ and $\mathfrak{e}_{2}^{(2)}\in \mathcal{C}(k)_2$, $\mathfrak{e}_{3}^{(2)}\in \mathcal{C}(k)_3$. By Lemma \ref{lem.normleg}, each of $\mathcal{C}(k)_2$ and $\mathcal{C}(k)_3$ contains at least one normal leg and at least two legs.
By symmetry, we can just consider $\mathcal{C}(k)_2$. If $\mathcal{C}(k)_2$ contains a free leg, then define $\mathfrak{e}_{2}^{(2)}\in \mathcal{C}(k)_2$ to be fixed; otherwise define $\mathfrak{e}_{2}^{(2)}$ to be free. In this case, define $\mathcal{C}(k+1)$ to be $\mathcal{C}(k)_2$ or $\mathcal{C}(k)_3$ and apply step $k+1$ to them separately. If $\mathfrak{e}_{2}^{(2)}$ is the only normal leg in $\mathcal{C}(k)_2$ and it is free, then use Lemma \ref{lem.freeleg} (2) to construct a new couple $\widehat{\mathcal{C}}_2$ by assigning $\mathfrak{e}_{2}^{(2)}$ to be fixed and another leg to be free. Then define $\mathcal{C}(k+1)$ to be $\widehat{\mathcal{C}}_2$ or $\widehat{\mathcal{C}}_3$ and apply step $k+1$ to them separately. Examples of cutting in case 2 can be found in the following picture. \begin{figure}[H] \centering \scalebox{0.4}{ \begin{tikzpicture}[level distance=80pt, sibling distance=100pt] \node[] at (0,0) (1) {}; \node[fillcirc] at (0, -3) (2) {}; \node[draw, circle, minimum size=1cm, scale=2] at (-3,-7) (3) {$\mathcal{C}(k)_2$}; \node[draw, circle, minimum size=1cm, scale=2] at (3,-7) (4) {$\mathcal{C}(k)_3$}; \draw[-{Stealth[length=5mm, width=3mm]}] (1) edge (2); \draw[-{Stealth[length=5mm, width=3mm]}] (2) edge (3); \draw[-{Stealth[length=5mm, width=3mm]}] (2) edge (4); \node[scale =3, rotate = 44] at (-1.18,-4.5) {$\times$}; \node[scale =3, rotate = 44] at (1.18,-4.5) {$\times$}; \node[draw, single arrow, minimum height=33mm, minimum width=8mm, single arrow head extend=2mm, anchor=west, rotate=0] at (5,-4.5) {}; \node[] at (12,-1.5) (11) {} child {node[fillcirc] (12) {} child {node[fillstar] (13) {}} child {node[fillstar] (14) {}} }; \draw[-{Stealth[length=5mm, width=3mm]}] (11) -- (12); \draw[-{Stealth[length=5mm, width=3mm]}] (12) -- (13); \draw[-{Stealth[length=5mm, width=3mm]}] (12) -- (14); \node[scale =2] at (12,-8) {$\mathcal{C}(k)_{\mathfrak{l}}$}; \node[] at (18,-1.5) (11) {}; \node[draw, circle, minimum size=1cm, scale=2] at (18,-6.5) (12)
{$\mathcal{C}(k)_2$}; \node[] at (23,-1.5) (13) {}; \node[draw, circle, minimum size=1cm, scale=2] at (23,-6.5) (14) {$\mathcal{C}(k)_3$}; \draw[-{Stealth[length=5mm, width=3mm]}] (11) -- (12); \draw[-{Stealth[length=5mm, width=3mm]}] (13) -- (14); \end{tikzpicture} } \caption{Examples of cutting in case 2} \label{fig.step1case2} \end{figure} \end{mdframed} (1), (3) are true by definition. (2) is true because $\#Eq(\mathcal{C})=\#Eq(\widehat{\mathcal{C}})$ as explained in step $0$ of the algorithm. Since in step $0$ we just replace a fixed leg by a free leg, $\mathcal{C}_{\text{norm}}$ does not change, so we get $\mathcal{C}_{\text{norm}}(1)=\mathcal{C}_{\text{norm}}$. The only non-trivial part is (4), which is a corollary of Lemma \ref{lem.normleg} below. This completes the proof. \end{proof} \begin{lem}\label{lem.freeleg} Given a connected couple $\mathcal{C}$ with multiple legs, we have the following conclusions. (1) Let $\{k_{\mathfrak{l}_i}\}_{i=1,\cdots,n_{\text{leg}}}$ be the variables corresponding to legs which satisfy the system of equations $Eq(\mathcal{C})$. Let $\iota_{\mathfrak{e}}$ be the same as in \eqref{eq.iotadef}; then we have the momentum conservation equation \begin{equation}\label{eq.momentumconservation} \sum_{i=1}^{n_{\text{leg}}} \iota_{\mathfrak{l}_i}k_{\mathfrak{l}_i}=0. \end{equation} (2) Assume that there is exactly one free leg $\mathfrak{l}_{i_0}$ in $\mathcal{C}$ and all other variables $\{k_{\mathfrak{l}_{i}}\}_{i\ne i_0}$ corresponding to fixed legs are fixed to be constants $\{c_{\mathfrak{l}_{i}}\}_{i\ne i_0}$. Given $i_{1}\in\{1,\cdots,n_{\text{leg}}\}$, we replace the $i_0$-th leg by a fixed leg and the $i_1$-th leg by a free leg and obtain a new couple $\widehat{\mathcal{C}}$. If $i\ne i_0, i_1$, fix $k_{\mathfrak{l}_{i}}$ to be the constant $c_{\mathfrak{l}_{i}}$; if $i=i_0$, fix $k_{\mathfrak{l}_{i_0}}$ to be the constant $-\iota_{\mathfrak{l}_{i_0}}\sum_{i\ne i_0} \iota_{\mathfrak{l}_i}c_{\mathfrak{l}_i}$.
Under the above assumptions, we have \begin{equation} Eq(\mathcal{C}, \{c_{\mathfrak{l}_{i}}\}_{i\ne i_0})=Eq\left(\widehat{\mathcal{C}}, \{c_{\mathfrak{l}_{i}}\}_{i\ne i_0, i_1}\cup \{-\iota_{\mathfrak{l}_{i_0}}\sum_{i\ne i_0} \iota_{\mathfrak{l}_i}c_{\mathfrak{l}_i}\}\right). \end{equation} (3) Assume that there are no free legs in $\mathcal{C}$ and all $\{k_{\mathfrak{l}_{i}}\}_{i}$ are fixed to be constants $\{c_{\mathfrak{l}_{i}}\}_{i}$. Given $i_{0}\in\{1,\cdots,n_{\text{leg}}\}$, we replace the $i_0$-th leg by a free leg and obtain a new couple $\widehat{\mathcal{C}}$. Then we have \begin{equation} Eq(\mathcal{C}, \{c_{\mathfrak{l}_{i}}\}_{i})=Eq(\widehat{\mathcal{C}}, \{c_{\mathfrak{l}_{i}}\}_{i\ne i_0}). \end{equation} (4) If the couple $\mathcal{C}$ contains any leg, then it has at least two legs. \end{lem} \begin{proof} We first prove (1). Given a node $\mathfrak{n}$ and an edge $\mathfrak{e}$ connected to it, we define $\iota_{\mathfrak{e}}(\mathfrak{n})$ by the following rule \begin{equation} \iota_{\mathfrak{e}}(\mathfrak{n})=\begin{cases} +1 \qquad \textit{if $\mathfrak{e}$ points towards $\mathfrak{n}$} \\ -1 \qquad \textit{if $\mathfrak{e}$ points outwards from $\mathfrak{n}$} \end{cases} \end{equation} For a leg $\mathfrak{l}$, since it is connected to just one node, we may omit the $(\mathfrak{n})$ and just write $\iota_{\mathfrak{l}}$ as in the statement of the lemma. For each node $\mathfrak{n}$, let $\mathfrak{e}_1(\mathfrak{n})$, $\mathfrak{e}_2(\mathfrak{n})$, $\mathfrak{e}(\mathfrak{n})$ be the three edges connected to it. For each edge $\mathfrak{e}$, let $\mathfrak{n}_1(\mathfrak{e})$, $\mathfrak{n}_2(\mathfrak{e})$ be the two nodes connected to it; then we know that $\iota_{\mathfrak{e}}(\mathfrak{n}_1(\mathfrak{e}))+\iota_{\mathfrak{e}}(\mathfrak{n}_2(\mathfrak{e}))=0$, since $\mathfrak{e}$ points towards exactly one of $\mathfrak{n}_1(\mathfrak{e})$ and $\mathfrak{n}_2(\mathfrak{e})$.
Since $k_{\mathfrak{e}}$ satisfy $Eq(\mathcal{C})$, by \eqref{eq.momentumconservationunit}, we get $\iota_{\mathfrak{e}_1(\mathfrak{n})}(\mathfrak{n})k_{\mathfrak{e}_1(\mathfrak{n})}+\iota_{\mathfrak{e}_2(\mathfrak{n})}(\mathfrak{n})k_{\mathfrak{e}_2(\mathfrak{n})}+\iota_{\mathfrak{e}(\mathfrak{n})}(\mathfrak{n})k_{\mathfrak{e}(\mathfrak{n})}=0$. Summing over $\mathfrak{n}$ gives \begin{equation} \begin{split} 0=&\sum_{\mathfrak{n}\in \mathcal{C}}\left(\iota_{\mathfrak{e}_1(\mathfrak{n})}(\mathfrak{n})k_{\mathfrak{e}_1(\mathfrak{n})}+\iota_{\mathfrak{e}_2(\mathfrak{n})}(\mathfrak{n})k_{\mathfrak{e}_2(\mathfrak{n})}+\iota_{\mathfrak{e}(\mathfrak{n})}(\mathfrak{n})k_{\mathfrak{e}(\mathfrak{n})}\right) \\ =& \sum_{\mathfrak{e}\text{ is not a leg}} (\iota_{\mathfrak{e}}(\mathfrak{n}_1(\mathfrak{e}))+\iota_{\mathfrak{e}}(\mathfrak{n}_2(\mathfrak{e}))) k_{\mathfrak{e}}+ \sum_{\mathfrak{l}\text{ is a leg}} \iota_{\mathfrak{l}} k_{\mathfrak{l}} \\ =& \sum_{i=1}^{n_{\text{leg}}} \iota_{\mathfrak{l}_i}k_{\mathfrak{l}_i}. \end{split} \end{equation} This proves \eqref{eq.momentumconservation} and thus proves (1). Now we prove (2). Since in $Eq\left(\widehat{\mathcal{C}}, \{c_{\mathfrak{l}_{i}}\}_{i\ne i_0, i_1}\cup \{-\iota_{\mathfrak{l}_{i_0}}\sum_{i\ne i_0} \iota_{\mathfrak{l}_i}c_{\mathfrak{l}_i}\}\right)$, $\{k_{\mathfrak{l}_{i}}\}_{i\ne i_0, i_1}$ are fixed to be constants $\{c_{\mathfrak{l}_{i}}\}_{i\ne i_0, i_1}$ and $k_{\mathfrak{l}_{i_0}}$ is fixed to be the constant $-\iota_{\mathfrak{l}_{i_0}}\sum_{i\ne i_0} \iota_{\mathfrak{l}_i}c_{\mathfrak{l}_i}$, by \eqref{eq.momentumconservation}, we know that \begin{equation} \iota_{\mathfrak{l}_{i_0}}\left(-\iota_{\mathfrak{l}_{i_0}}\sum_{i\ne i_0} \iota_{\mathfrak{l}_i}c_{\mathfrak{l}_i}\right)+ \iota_{\mathfrak{l}_{i_1}}k_{\mathfrak{l}_{i_1}}+\sum_{i\ne i_0, i_1} \iota_{\mathfrak{l}_i}c_{\mathfrak{l}_i}=0. \end{equation} This implies that $k_{\mathfrak{l}_{i_1}}=c_{\mathfrak{l}_{i_1}}$ in $Eq(\widehat{\mathcal{C}})$.
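Spelling out this last implication (a routine check; here $\iota_{\mathfrak{l}_{i_0}}^2=1$ since $\iota_{\mathfrak{l}_{i_0}}=\pm 1$, and the fixed variables are evaluated at their constants $k_{\mathfrak{l}_i}=c_{\mathfrak{l}_i}$ for $i\ne i_0,i_1$):
\begin{equation*}
0=-\sum_{i\ne i_0} \iota_{\mathfrak{l}_i}c_{\mathfrak{l}_i}+ \iota_{\mathfrak{l}_{i_1}}k_{\mathfrak{l}_{i_1}}+\sum_{i\ne i_0, i_1} \iota_{\mathfrak{l}_i}c_{\mathfrak{l}_i}=\iota_{\mathfrak{l}_{i_1}}\left(k_{\mathfrak{l}_{i_1}}-c_{\mathfrak{l}_{i_1}}\right),
\end{equation*}
which forces $k_{\mathfrak{l}_{i_1}}=c_{\mathfrak{l}_{i_1}}$.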
Notice that the only difference between $Eq(\mathcal{C})$ and $Eq(\widehat{\mathcal{C}})$ is that $k_{\mathfrak{l}_{i_1}}$ is not fixed to be the constant $c_{\mathfrak{l}_{i_1}}$ in $Eq(\widehat{\mathcal{C}})$. Since we have shown that the equations in $Eq(\widehat{\mathcal{C}})$ automatically imply $k_{\mathfrak{l}_{i_1}}=c_{\mathfrak{l}_{i_1}}$, we conclude that $Eq(\mathcal{C})=Eq(\widehat{\mathcal{C}})$. We thus complete the proof of (2). The proof of (3) is similar to that of (2). The only difference between $Eq(\mathcal{C})$ and $Eq(\widehat{\mathcal{C}})$ is that $k_{\mathfrak{l}_{i_0}}$ is not fixed to be the constant $c_{\mathfrak{l}_{i_0}}$ in $Eq(\widehat{\mathcal{C}})$. But if $\{k_{\mathfrak{l}_{i}}\}_{i\ne i_0}$ are fixed to be constants $\{c_{\mathfrak{l}_{i}}\}_{i\ne i_0}$, by momentum conservation we know that \begin{equation} k_{\mathfrak{l}_{i_0}}=-\iota_{\mathfrak{l}_{i_0}}\sum_{i\ne i_0} \iota_{\mathfrak{l}_i}c_{\mathfrak{l}_i}. \end{equation} Therefore, $k_{\mathfrak{l}_{i_0}}$ is fixed to be the constant $-\iota_{\mathfrak{l}_{i_0}}\sum_{i\ne i_0} \iota_{\mathfrak{l}_i}c_{\mathfrak{l}_i}$ in $Eq(\widehat{\mathcal{C}})$ and we conclude that $Eq(\mathcal{C})=Eq(\widehat{\mathcal{C}})$. We thus complete the proof of (3). If (4) were wrong, then $\mathcal{C}$ would have just one leg $\mathfrak{l}$. By \eqref{eq.momentumconservation}, $k_{\mathfrak{l}}=0$. This contradicts $k_{\mathfrak{l},x}\ne 0$ in \eqref{eq.kappa}. \end{proof} \begin{lem}\label{lem.normleg} (1) The output $\mathcal{C}(k+1)$ of step $k$ of the cutting algorithm satisfies property P. (2) All the intermediate results $\mathcal{C}(k)_1$, $\mathcal{C}(k)_2$, $\mathcal{C}(k)_3$ satisfy the weak property P: either they satisfy property P or they contain exactly one free normal leg and at least one fixed leg. Notice that the weak property implies that these couples contain at least one normal leg and at least two legs. \end{lem} \begin{proof} We prove the following stronger result by induction.
\medskip \textit{Claim.} Assume that the couple $\mathcal{C}=\mathcal{C}(T,T,p)$ is the input of the cutting algorithm. Then for any $k\ge 1$, there exist finitely many disjoint subtrees $T^{(k)}_1$, $T^{(k)}_2$, $\cdots$, $T^{(k)}_{m^{(k)}}$ of the two copies of $T$ which satisfy the following properties. (1) Let $\text{leaf}(k)$ be the set of all leaves of these subtrees. Assume that $p$ induces a pairing $p|_{\text{leaf}_1(k)}$ of a subset $\text{leaf}_1(k)\subseteq\text{leaf}(k)$. Applying a construction similar to Definition \ref{def.conple}, we can construct a couple from $T^{(k)}_1$, $T^{(k)}_2$, $\cdots$, $T^{(k)}_{m^{(k)}}$ by the pairing $p$, and this couple equals exactly $\mathcal{C}(k)$. (2) The root legs of $T^{(k)}_1$, $T^{(k)}_2$, $\cdots$, $T^{(k)}_{m^{(k)}}$ are exactly the normal legs of $\mathcal{C}(k)$. The edges in $\text{leaf}_2(k)=\text{leaf}(k)\backslash \text{leaf}_1(k)$ are exactly the leaf legs (legs that are leaf edges) of $\mathcal{C}(k)$. (3) $\mathcal{C}(k)$ satisfies property P. (4) All the intermediate results $\mathcal{C}(k)_1$, $\mathcal{C}(k)_2$, $\mathcal{C}(k)_3$ satisfy the weak property P. \medskip We first show that $\mathcal{C}(1)$ satisfies (1) -- (4) of the above claim. Let $\mathcal{C}$ be the input couple of step $0$, which is obtained by pairing two copies of $T$. In step $0$ we replace a fixed leg by a free leg to obtain $\mathcal{C}(1)$. Therefore, if we define $T^{(1)}_{1}=T$ and $T^{(1)}_{2}$ to be the tree obtained by replacing the fixed root leg in $T$ by a free leg, then $\mathcal{C}(1)$ is the couple obtained by pairing $T^{(1)}_{1}$ and $T^{(1)}_{2}$. Here the pairing is $p$ and $\text{leaf}_1(1)=\text{leaf}(1)$ and $\text{leaf}_2(1)=\emptyset$. Therefore, (1) is true for $\mathcal{C}(1)$. The two fixed legs in $\mathcal{C}$ are both normal legs, because they come from the root legs in the two copies of $T$, which cannot be leaf edges. Therefore, the two legs in $\mathcal{C}(1)$ are also normal.
Since the two legs in $\mathcal{C}(1)$ come from the root legs in $T^{(1)}_{1}$ and $T^{(1)}_{2}$, (2) is also true. Since in step $0$ we replace a fixed leg by a free leg, $\mathcal{C}(1)$ contains exactly one free leg and one fixed leg, which are both normal. Therefore, the output $\mathcal{C}(1)$ of step $0$ satisfies property P and (3) is proved. In step $0$, (4) does not need any proof. \medskip Assume that $\mathcal{C}(k)$ satisfies (1) -- (4) of the above claim; we then prove the claim for $\mathcal{C}(k+1)$. Remember that in step $k$ of the algorithm, the input is $\mathcal{C}(k)$. By the induction assumption, $\mathcal{C}(k)$ is obtained by pairing $T^{(k)}_1$, $T^{(k)}_2$, $\cdots$, $T^{(k)}_{m^{(k)}}$. There are several different cases in step $k$ and we treat them separately. \underline{In case 1.1}, we cut $c(\mathfrak{l})$ into $\{\mathfrak{e}_{i}^{(1)}\}_{i=1,2}$ and $\{\mathfrak{e}_{i}^{(2)}\}_{i=1,2}$. By (2), $\mathfrak{l}$ is the root leg of some subtree $T^{(k)}_{j_0}$. Assume that in $T^{(k)}_{j_0}$ the two subtrees of the root are $T^{(k)}_{j_0,1}$, $T^{(k)}_{j_0,2}$. After cutting $c(\mathfrak{l})$, $T^{(k)}_{j_0}$ becomes two trees $T^{(k)}_{j_0,1}$, $T^{(k)}_{j_0,2}$ and the $T^{(k)}_{j}$ $(j\ne j_0)$ do not change. We treat three different cases separately. \underline{Case 1.1 (i).} (Both $T^{(k)}_{j_0,1}$ and $T^{(k)}_{j_0,2}$ are one-node trees.) In this case all edges in $\{\mathfrak{e}_{1}^{(1)}, \mathfrak{e}_{2}^{(1)}, \mathfrak{e}_{1}^{(2)}, \mathfrak{e}_{2}^{(2)}\}$ are leaf edges of some trees in $\{T_{j}^{(k)}\}$. Define $T^{(k+1)}_{j}=T^{(k)}_{j}$ for $j< j_0$ and $T^{(k+1)}_{j}=T^{(k)}_{j+1}$ for $j\ge j_0$; then $\mathcal{C}(k)_1$ can be constructed from these trees with $\text{leaf}_1(k+1)=\text{leaf}_1(k)\backslash\{\mathfrak{e}_{1}^{(1)}, \mathfrak{e}_{2}^{(1)}, \mathfrak{e}_{1}^{(2)}, \mathfrak{e}_{2}^{(2)}\}$.
Remember that in case 1.1 of the algorithm, the final result $\mathcal{C}(k+1)$ is obtained by replacing (or not) some free normal leg by a fixed normal leg in $\mathcal{C}(k)_1$, so if we change the corresponding root leg of $\{T^{(k)}_{j}\}$ accordingly, then $\mathcal{C}(k+1)$ can also be obtained from pairing these trees. Therefore, (1) is true for $\mathcal{C}(k+1)$ with $\text{leaf}_1(k+1)=\text{leaf}_1(k)\backslash\{\mathfrak{e}_{1}^{(1)}, \mathfrak{e}_{2}^{(1)}, \mathfrak{e}_{1}^{(2)}, \mathfrak{e}_{2}^{(2)}\}$. Since both the set of root legs and the set of normal legs lose exactly the one element $\mathfrak{l}$, and in step $k$ they are the same, they continue to be the same in step $k+1$. In case 1.1 (i), $\text{leaf}_2(k+1)=\text{leaf}_2(k)\cup \{\mathfrak{e}_{i}^{(2)}\}_{i=1,2}$ and the set of leaf legs also changes in this way, so they continue to be the same in step $k+1$. Therefore, (2) is true. We prove (4) by contradiction. The weak property P is equivalent to the statement that $\mathcal{C}(k)_i$ contains exactly one free leg and at least one normal leg. In case 1.1 (i), $i=1$; assume that the weak property P is not true, then either $\mathcal{C}(k)_1$ does not contain any normal leg or the number of free legs is not 1. If $\mathcal{C}(k)_1\ne \emptyset$, then the number of trees $\{T^{(k)}_{j}\}$ is not zero. Because the root legs of the $\{T^{(k)}_{j}\}$ are normal legs, there exists at least one normal leg in $\mathcal{C}(k)_1$. Hence, by our hypothesis that the weak property P is wrong, the number of free legs is not 1. However, since $\mathcal{C}(k)_1$ is obtained from $\mathcal{C}(k)$ by cutting, the number of free legs in $\mathcal{C}(k)_1$ equals that of $\mathcal{C}(k)$, which is one. Therefore, we reach a contradiction. Since in case 1.1 of the algorithm we change the free normal leg in $\mathcal{C}(k)_1$ to be fixed if $\mathcal{C}(k)_1$ does not satisfy property P, the result $\mathcal{C}(k+1)$ must satisfy property P, which proves (3).
\underline{Case 1.1 (ii).} (Exactly one of $T^{(k)}_{j_0,1}$ and $T^{(k)}_{j_0,2}$ is a one-node tree.) Without loss of generality assume that $T^{(k)}_{j_0,2}$ is a one-node tree. In this case $\mathfrak{e}_{2}^{(1)}$, $\mathfrak{e}_{2}^{(2)}$ are leaf edges of some trees in $\{T_{j}^{(k)}\}$. Define $T^{(k+1)}_{j}=T^{(k)}_{j}$ for $j\ne j_0$ and $T^{(k+1)}_{j_0}=T^{(k)}_{j_0,1}$; then $\mathcal{C}(k)_1$ can be constructed from these trees with $\text{leaf}_1(k+1)=\text{leaf}_1(k)\backslash\{\mathfrak{e}_{2}^{(1)}, \mathfrak{e}_{2}^{(2)}\}$. By the same reason as in case 1.1 (i), (1) is true for $\mathcal{C}(k+1)$ with $\text{leaf}_1(k+1)=\text{leaf}_1(k)\backslash\{\mathfrak{e}_{2}^{(1)}, \mathfrak{e}_{2}^{(2)}\}$. By the same reason as in case 1.1 (i), the set of root legs and the set of normal legs continue to be the same in step $k+1$. In case 1.1 (ii), $\text{leaf}_2(k+1)=\text{leaf}_2(k)\cup \{ \mathfrak{e}_{2}^{(2)}\}$ and the set of leaf legs also changes in this way, so they continue to be the same in step $k+1$. Therefore, (2) is true. The proof of (3), (4) in case 1.1 (ii) is the same as that in case 1.1 (i). \underline{Case 1.1 (iii).} (Neither $T^{(k)}_{j_0,1}$ nor $T^{(k)}_{j_0,2}$ is a one-node tree.) In this case no edge in $\{\mathfrak{e}_{1}^{(1)}, \mathfrak{e}_{2}^{(1)}, \mathfrak{e}_{1}^{(2)}, \mathfrak{e}_{2}^{(2)}\}$ is a leaf edge of any $T_{j}^{(k)}$. Define $T^{(k+1)}_{j}=T^{(k)}_{j}$ for $j< j_0$, $T^{(k+1)}_{j}=T^{(k)}_{j-1}$ for $j\ge j_0+2$, $T^{(k+1)}_{j_0}=T^{(k)}_{j_0,1}$ and $T^{(k+1)}_{j_0+1}=T^{(k)}_{j_0,2}$; then $\mathcal{C}(k)_1$ can be constructed from these trees with $\text{leaf}_1(k+1)=\text{leaf}_1(k)$. By the same reason as in case 1.1 (i), (1) is true for $\mathcal{C}(k+1)$ with $\text{leaf}_1(k+1)=\text{leaf}_1(k)$. By the same reason as in case 1.1 (i), the set of root legs and the set of normal legs continue to be the same in step $k+1$.
In case 1.1 (iii), $\text{leaf}_2(k+1)=\text{leaf}_2(k)$ and the set of leaf legs also changes in this way, so they continue to be the same in step $k+1$. Therefore, (2) is true. The proof of (3), (4) in case 1.1 (iii) is the same as that in case 1.1 (i). \underline{In case 1.2}, we cut $\mathfrak{e}$ into $\{\mathfrak{e}^{(1)},\mathfrak{e}^{(2)}\}$. By (2), $\mathfrak{l}$ is the root leg and $\mathfrak{l}_{fr}$ is a leaf of some subtree $T^{(k)}_{j_0}$. Assume that in $T^{(k)}_{j_0}$ one subtree of the root is $T^{(k)}_{j_0,1}$ and the other one is a one-node tree with edge $\mathfrak{l}_{fr}$. After cutting $\mathfrak{e}$, $T^{(k)}_{j_0}$ becomes $T^{(k)}_{j_0,1}$ and the $T^{(k)}_{j}$ $(j\ne j_0)$ do not change. We treat two different cases separately. \underline{Case 1.2 (i).} ($T^{(k)}_{j_0,1}$ is a one-node tree.) In this case $\mathfrak{e}^{(1)}$, $\mathfrak{e}^{(2)}$ are leaf edges of some trees in $\{T_{j}^{(k)}\}$. Define $T^{(k+1)}_{j}=T^{(k)}_{j}$ for $j< j_0$ and $T^{(k+1)}_{j}=T^{(k)}_{j+1}$ for $j\ge j_0$; then $\mathcal{C}(k)_1$ can be constructed from these trees with $\text{leaf}_1(k+1)=\text{leaf}_1(k)\backslash\{\mathfrak{e}^{(1)}, \mathfrak{e}^{(2)}\}$. By the same reason as in case 1.1 (i), (1) is true for $\mathcal{C}(k+1)$ with $\text{leaf}_1(k+1)=\text{leaf}_1(k)\backslash\{\mathfrak{e}^{(1)}, \mathfrak{e}^{(2)}\}$. By the same reason as in case 1.1 (i), the set of root legs and the set of normal legs continue to be the same in step $k+1$. In case 1.2 (i), $\text{leaf}_2(k+1)=\text{leaf}_2(k)\cup \{ \mathfrak{e}^{(2)}\}$ and the set of leaf legs also changes in this way, so they continue to be the same in step $k+1$. Therefore, (2) is true. The proof of (3), (4) in case 1.2 (i) is the same as that in case 1.1 (i). \underline{Case 1.2 (ii).} ($T^{(k)}_{j_0,1}$ is not a one-node tree.) In this case $\mathfrak{e}^{(1)}$, $\mathfrak{e}^{(2)}$ are not leaf edges of any trees in $\{T_{j}^{(k)}\}$.
Define $T^{(k+1)}_{j}=T^{(k)}_{j}$ for $j\ne j_0$ and $T^{(k+1)}_{j_0}=T^{(k)}_{j_0,1}$; then $\mathcal{C}(k)_1$ can be constructed from these trees with $\text{leaf}_1(k+1)=\text{leaf}_1(k)$. By the same reason as in case 1.1 (i), (1) is true for $\mathcal{C}(k+1)$ with $\text{leaf}_1(k+1)=\text{leaf}_1(k)$. By the same reason as in case 1.1 (i), the set of root legs and the set of normal legs continue to be the same in step $k+1$. In case 1.2 (ii), $\text{leaf}_2(k+1)=\text{leaf}_2(k)$ and the set of leaf legs also changes in this way, so they continue to be the same in step $k+1$. Therefore, (2) is true. The proof of (3), (4) in case 1.2 (ii) is the same as that in case 1.1 (i). \underline{In case 2}, we cut $c(\mathfrak{l})=\{\mathfrak{e}_2,\mathfrak{e}_3\}$ into $\{\mathfrak{e}_{2}^{(1)}, \mathfrak{e}_{3}^{(1)}\}$ and $\{\mathfrak{e}_{2}^{(2)}, \mathfrak{e}_{3}^{(2)}\}$. By (2), $\mathfrak{l}$ is the root leg of some subtree $T^{(k)}_{j_0}$. Assume that in $T^{(k)}_{j_0}$ the two subtrees of the root are $T^{(k)}_{j_0,1}$, $T^{(k)}_{j_0,2}$. After cutting $c(\mathfrak{l})$, $T^{(k)}_{j_0}$ becomes two trees $T^{(k)}_{j_0,1}$, $T^{(k)}_{j_0,2}$ and the $T^{(k)}_{j}$ $(j\ne j_0)$ do not change. Since $\mathcal{C}(k)_{2}$ and $\mathcal{C}(k)_{3}$ are disjoint, for each $j\ne j_0$, $T^{(k)}_{j}$ should be a subset of one of these couples. Without loss of generality we assume that $j_0=1$ and $T^{(k)}_2, \cdots, T^{(k)}_{m^{(k)'}}\subseteq \mathcal{C}(k)_{2}$ and $T^{(k)}_{m^{(k)'}+1}, \cdots, T^{(k)}_{m^{(k)}}\subseteq \mathcal{C}(k)_{3}$. Since the output $\mathcal{C}(k+1)$ either comes from $\mathcal{C}(k)_{2}$ or $\mathcal{C}(k)_{3}$, without loss of generality we assume that the output comes from $\mathcal{C}(k)_{2}$. We treat two different cases separately. \underline{Case 2 (i).} ($T^{(k)}_{j_0,1}$ is a one-node tree.)
In this case the edges $\mathfrak{e}_{2}^{(1)}$, $\mathfrak{e}_{2}^{(2)}$ are leaf edges of some trees in $\{T_{j}^{(k)}\}$. Define $T^{(k+1)}_{j}=T^{(k)}_{j+1}$; then $\mathcal{C}(k)_2$ can be constructed from $T^{(k)}_2, \cdots, T^{(k)}_{m^{(k)'}}$ with $\text{leaf}_1(k+1)=\{\text{all leaves in }T^{(k)}_2, \cdots, T^{(k)}_{m^{(k)'}}\}$. Remember that in case 2 of the algorithm, the final result $\mathcal{C}(k+1)$ is obtained by replacing (or not) some free normal leg by a fixed normal leg in $\mathcal{C}(k)_2$, so by the same argument as in case 1.1 (i), (1) is true for $\mathcal{C}(k+1)$. By the same reason as in case 1.1 (i), the set of root legs and the set of normal legs continue to be the same in step $k+1$. In case 2 (i), in the output $\mathcal{C}(k+1)$ coming from $\mathcal{C}(k)_2$, $\text{leaf}_2(k+1)=(\text{leaf}_1(k+1)\cap \mathcal{C}(k)_2)\cup \{ \mathfrak{e}_{2}^{(2)}\}$ and the set of leaf legs also changes in this way, so they continue to be the same in step $k+1$. Therefore, (2) is true. The proof of (3), (4) in case 2 (i) is the same as that in case 1.1 (i). \underline{Case 2 (ii).} ($T^{(k)}_{j_0,1}$ is not a one-node tree.) In this case $\mathfrak{e}_{2}^{(1)}$, $\mathfrak{e}_{2}^{(2)}$ are not leaf edges of any trees in $\{T_{j}^{(k)}\}$. Define $T^{(k+1)}_{j}=T^{(k)}_{j}$ for $j>1$ and $T^{(k+1)}_{1}=T^{(k)}_{j_0,1}$; then $\mathcal{C}(k)_2$ can be constructed from $T^{(k+1)}_1, \cdots, T^{(k+1)}_{m^{(k)'}}$ with $\text{leaf}_1(k+1)=\{\text{all leaves in }T^{(k+1)}_1, \cdots, T^{(k+1)}_{m^{(k)'}}\}$. By the same reason as in case 1.1 (i), (1) is true for $\mathcal{C}(k+1)$. By the same reason as in case 1.1 (i), the set of root legs and the set of normal legs continue to be the same in step $k+1$.
In case 2 (ii), in the output $\mathcal{C}(k+1)$ coming from $\mathcal{C}(k)_2$, $\text{leaf}_2(k+1)=\text{leaf}_1(k+1)\cap \mathcal{C}(k)_2$ and the leaf legs change in the same way, so they remain the same as in step $k$. Therefore, (2) is true. The proof of (3), (4) in case 2 (ii) is the same as that in case 1.1 (i). \end{proof} \textbf{Step 3.} In this step, we state Proposition \ref{prop.countingind}, which is a stronger version of Proposition \ref{prop.counting}, and derive Proposition \ref{prop.counting} from it. In the end, we prove parts (1), (2) and the 1-node case of part (3) of Proposition \ref{prop.countingind}. \begin{prop}\label{prop.countingind} Let $\mathcal{C}(k)$ be a couple which is the output of step $k$ of the cutting algorithm of Proposition \ref{prop.cuttingalgorithm}. For any couple $\mathcal{C}$, let $n(\mathcal{C})$ be the total number of nodes in $\mathcal{C}$ and $n_e(\mathcal{C})$ (resp. $n_{fx}(\mathcal{C})$, $n_{\textit{fr}}(\mathcal{C})$) be the total number of non-leg edges (resp. fixed legs, free legs). We fix $\sigma_{\mathfrak{n}}\in\mathbb{R}$ for each $\mathfrak{n}\in \mathcal{C}(k)$ and $c_{\mathfrak{l}}\in \mathbb{R}$ for each fixed leg $\mathfrak{l}$. Assume that $\alpha$ satisfies \eqref{eq.conditionalpha}. Then we have: (1) The numbers $n(\mathcal{C})$, $n_e(\mathcal{C})$, $n_{fx}(\mathcal{C})$ and $n_{\textit{fr}}(\mathcal{C})$ satisfy the relation \begin{equation} 2n_e(\mathcal{C})+n_{fx}(\mathcal{C})+n_{\textit{fr}}(\mathcal{C})=3n(\mathcal{C}). \end{equation} (2) For any couple $\mathcal{C}$, define $\chi(\mathcal{C})=n_e(\mathcal{C})+n_{\textit{fr}}(\mathcal{C})-n(\mathcal{C})$. Let $c$, $\mathcal{C}_1$, $\mathcal{C}_2$ be the same as in Lemma \ref{lem.Eq(C)cutting} and also assume that $\{\mathfrak{e}_{i}^{(1)}\}$ are free legs and $\{\mathfrak{e}_{i}^{(2)}\}$ are fixed legs. Then \begin{equation} \chi(\mathcal{C})=\chi(\mathcal{C}_1)+\chi(\mathcal{C}_2).
\end{equation} (3) If we assume that $\mathcal{C}(k)$ satisfies the property P, then (recall that $Q=L^{d}T^{-1}_{\text{max}}$) \begin{equation}\label{eq.countingbd3} \sup_{\{c_{\mathfrak{l}}\}_{\mathfrak{l}}}\#Eq(\mathcal{C}(k),\{c_{\mathfrak{l}}\}_{\mathfrak{l}})\leq L^{O\left(\chi(\mathcal{C}(k))\theta\right)} Q^{\chi(\mathcal{C}(k))}\prod_{\mathfrak{e}\in \mathcal{C}_{\text{norm}}(k)} \kappa^{-1}_{\mathfrak{e}} . \end{equation} \end{prop} \underline{Derivation of Proposition \ref{prop.counting} from Proposition \ref{prop.countingind}:} By Proposition \ref{prop.cuttingalgorithm} (2), $\mathcal{C}_{\text{norm}}(1)=\mathcal{C}_{\text{norm}}$ and $\# Eq(\mathcal{C})=\# Eq(\mathcal{C}(1))$. Then by Proposition \ref{prop.countingind} (3), \begin{equation}\label{eq.countinglemstep3} \# Eq(\mathcal{C})=\# Eq(\mathcal{C}(1))\leq L^{O(\chi(\mathcal{C}(1))\theta)} Q^{\chi(\mathcal{C}(1))}\prod_{\mathfrak{e}\in \mathcal{C}_{\text{norm}}(1)} \kappa^{-1}_{\mathfrak{e}}. \end{equation} Since in $\mathcal{C}(1)$ we have $n_{fx}(\mathcal{C}(1))=n_{\textit{fr}}(\mathcal{C}(1))=1$, Proposition \ref{prop.countingind} (1) gives $2n_e(\mathcal{C}(1))+2=3n(\mathcal{C}(1))$. By this fact and the definition of $\chi$, we get $\chi(\mathcal{C}(1))=n_e(\mathcal{C}(1))+1-n(\mathcal{C}(1))=n(\mathcal{C}(1))/2$. Substituting this expression for $\chi(\mathcal{C}(1))$ into \eqref{eq.countinglemstep3} proves the conclusion of Proposition \ref{prop.counting}. The proof of Proposition \ref{prop.countingind} is the main goal of the rest of the proof.
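The arithmetic above ($2n_e(\mathcal{C}(1))+2=3n(\mathcal{C}(1))$ forcing $\chi(\mathcal{C}(1))=n(\mathcal{C}(1))/2$) can be verified mechanically; a minimal sketch in Python using exact rational arithmetic ($n$ ranges over even integers only, since the identity forces $n(\mathcal{C}(1))$ to be even):

```python
from fractions import Fraction

# Check: if n_fx = n_fr = 1 and 2*n_e + 2 = 3*n, then
# chi = n_e + n_fr - n equals n/2.
for n in range(2, 51, 2):          # 2*n_e + 2 = 3*n forces n to be even
    n_e = Fraction(3 * n - 2, 2)   # solve 2*n_e + 2 = 3*n for n_e
    chi = n_e + 1 - n              # chi = n_e + n_fr - n with n_fr = 1
    assert chi == Fraction(n, 2)
```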
\underline{Proof of Proposition \ref{prop.countingind} (1):} Consider the set $\mathcal{S}=\{(\mathfrak{n}, \mathfrak{e})\in \mathcal{C}: \mathfrak{n} \textit{ is an end point of }\mathfrak{e}\}$, so that \begin{equation} \#\mathcal{S}=\sum_{\substack{(\mathfrak{n}, \mathfrak{e})\in \mathcal{C}\\ \mathfrak{n} \textit{ is an end point of }\mathfrak{e}}} 1. \end{equation} Summing first over $\mathfrak{e}$ and then over $\mathfrak{n}$, we get \begin{equation} \#\mathcal{S}=\sum_{\mathfrak{n}}\sum_{\substack{\mathfrak{e}\in \mathcal{C}\\ \mathfrak{n} \textit{ is an end point of }\mathfrak{e}}} 1=\sum_{\mathfrak{n}}\ 3=3n(\mathcal{C}). \end{equation} In the second equality, $\sum_{\mathfrak{e}\in \mathcal{C}: \mathfrak{n} \textit{ is an end point of }\mathfrak{e}} 1 =3$ because each node $\mathfrak{n}$ has exactly $3$ edges connected to it. Summing first over $\mathfrak{n}$ and then over $\mathfrak{e}$, we get \begin{equation} \begin{split} \#\mathcal{S}=&\sum_{\mathfrak{e} \textit{ is a non-leg edge}}\sum_{\substack{\mathfrak{n}\in \mathcal{C}\\ \mathfrak{n} \textit{ is an end point of }\mathfrak{e}}} 1+\sum_{\mathfrak{e} \textit{ is a leg}}\sum_{\substack{\mathfrak{n}\in \mathcal{C}\\ \mathfrak{n} \textit{ is an end point of }\mathfrak{e}}} 1 \\ =&\sum_{\mathfrak{e} \textit{ is a non-leg edge}} 2+\sum_{\mathfrak{e} \textit{ is a leg}} 1 \\ =& 2n_e(\mathcal{C})+n_{fx}(\mathcal{C})+n_{\textit{fr}}(\mathcal{C}). \end{split} \end{equation} In the second equality, $\sum_{\substack{\mathfrak{n}\in \mathcal{C}\\ \mathfrak{n} \textit{ is an end point of }\mathfrak{e}}} 1$ equals $2$ or $1$ because each non-leg edge (resp. leg) has $2$ (resp. $1$) nodes connected to it. Because the value of $\#\mathcal{S}$ does not depend on the order of summation, we conclude that $2n_e(\mathcal{C})+n_{fx}(\mathcal{C})+n_{\textit{fr}}(\mathcal{C})=3n(\mathcal{C})$, which proves Proposition \ref{prop.countingind} (1).
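The double-counting argument can be replayed on a toy configuration; a minimal sketch in Python (the 2-node couple below is a hypothetical example, chosen only so that every node has exactly $3$ incident edges):

```python
# Toy couple: 2 nodes; one internal (non-leg) edge joining them; each node
# filled up to degree 3 with legs.  Legs record their single endpoint.
internal_edges = [("A", "B")]            # non-leg edges: 2 endpoints each
fixed_legs = [("A",)]                    # legs: 1 endpoint each
free_legs = [("A",), ("B",), ("B",)]

nodes = {"A", "B"}
n, n_e = len(nodes), len(internal_edges)
n_fx, n_fr = len(fixed_legs), len(free_legs)

# Count incidences (node, edge) two ways, as in the proof of (1).
all_edges = internal_edges + fixed_legs + free_legs
incidences = sum(len(e) for e in all_edges)
assert incidences == 2 * n_e + n_fx + n_fr   # summing over edges first
degree = {v: 0 for v in nodes}
for e in all_edges:
    for v in e:
        degree[v] += 1
assert all(d == 3 for d in degree.values())  # every node has degree 3
assert 2 * n_e + n_fx + n_fr == 3 * n        # the identity in (1)
```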
\underline{Proof of Proposition \ref{prop.countingind} (2):} Let $c$ be the cut that consists of the edges $\{\mathfrak{e}_{i}\}$ and let $n(c)$ be the number of edges in $c$. Then we have $n(\mathcal{C})=n(\mathcal{C}_1)+n(\mathcal{C}_2)$. Compared to $\mathcal{C}$, in $\mathcal{C}_1$ and $\mathcal{C}_2$ the $n(c)$ non-leg edges of the cut are cut into pairs of free and fixed legs, so $n_e(\mathcal{C})=n_e(\mathcal{C}_1)+n_e(\mathcal{C}_2)+n(c)$. Because cutting creates $n(c)$ additional free legs, we have $n_{\textit{fr}}(\mathcal{C}_1)+n_{\textit{fr}}(\mathcal{C}_2)=n_{\textit{fr}}(\mathcal{C})+n(c)$, i.e. $n_{\textit{fr}}(\mathcal{C})=n_{\textit{fr}}(\mathcal{C}_1)+n_{\textit{fr}}(\mathcal{C}_2)-n(c)$. Therefore, we get \begin{equation} \begin{split} \chi(\mathcal{C})=&n_e(\mathcal{C})+n_{\textit{fr}}(\mathcal{C})-n(\mathcal{C}) \\ =&n_e(\mathcal{C}_1)+n_e(\mathcal{C}_2)+n(c)+n_{\textit{fr}}(\mathcal{C}_1)+n_{\textit{fr}}(\mathcal{C}_2)-n(c)-(n(\mathcal{C}_1)+n(\mathcal{C}_2)) \\ =&(n_e(\mathcal{C}_1)+n_{\textit{fr}}(\mathcal{C}_1)-n(\mathcal{C}_1))+(n_e(\mathcal{C}_2)+n_{\textit{fr}}(\mathcal{C}_2)-n(\mathcal{C}_2)) \\ =&\chi(\mathcal{C}_1)+\chi(\mathcal{C}_2). \end{split} \end{equation} This completes the proof of Proposition \ref{prop.countingind} (2). \underline{Proof of the 1-node case of Proposition \ref{prop.countingind} (3):} We now prove Lemma \ref{lem.countingbdunit}, which is the 1-node case of Proposition \ref{prop.countingind} (3). Recall that $\mathcal{C}_{I}$ and $\mathcal{C}_{II}$ are the two possibilities for the 1-node couple $\mathcal{C}_{\mathfrak{l}}$ in Proposition \ref{prop.cuttingalgorithm} (3). \begin{lem}\label{lem.countingbdunit} $\mathcal{C}_{I}$, $\mathcal{C}_{II}$ satisfy the bound \eqref{eq.countingbd3} in Proposition \ref{prop.countingind}. In other words, fixing $c_1$ (resp. $c_1$, $c_2$) for the fixed legs of $\mathcal{C}_{I}$ (resp. $\mathcal{C}_{II}$), we have \begin{equation}\label{eq.countingbdunit} \# Eq(\mathcal{C}_{I})\leq L^\theta Q\kappa^{-1}_{\mathfrak{e}},\qquad \# Eq(\mathcal{C}_{II})\leq Q^0=1.
\end{equation} \end{lem} \begin{proof} Given $c_1$, $c_2$, the equation of $\mathcal{C}_{II}$ is \begin{equation} \begin{cases} k_{\mathfrak{e}_1}+k_{\mathfrak{e}_2}-k_{\mathfrak{e}}=0,\ k_{\mathfrak{e}_1}=c_1,\ k_{\mathfrak{e}}=c_2,\ |k_{\mathfrak{e},x}|\sim \kappa_{\mathfrak{e}} \\ \Lambda_{k_{\mathfrak{e}_1}}+\Lambda_{k_{\mathfrak{e}_2}}-\Lambda_{k_{\mathfrak{e}}}=\sigma_{\mathfrak{n}}+O(T^{-1}_{\text{max}}) \end{cases} \end{equation} It is clear that there is at most one solution to this system of equations. Given $c_1$, the equation of $\mathcal{C}_{I}$ is \begin{equation} \begin{cases} k_{\mathfrak{e}_1}+k_{\mathfrak{e}_2}-k_{\mathfrak{e}}=0,\ k_{\mathfrak{e}}=c_1,\ |k_{\mathfrak{e},x}|\sim \kappa_{\mathfrak{e}} \\ \Lambda_{k_{\mathfrak{e}_1}}+\Lambda_{k_{\mathfrak{e}_2}}-\Lambda_{k_{\mathfrak{e}}}=\sigma_{\mathfrak{n}}+O(T^{-1}_{\text{max}}) \end{cases} \end{equation} By Theorem \ref{th.numbertheory1}, the number of solutions of the above system of equations can be bounded by $L^\theta L^dT^{-1}_{\text{max}}|k_{\mathfrak{e},x}|^{-1}\lesssim L^\theta Q\kappa^{-1}_{\mathfrak{e}}$. This completes the proof of the lemma. \end{proof} \textbf{Step 4.} In this step, we apply the edge cutting algorithm of Proposition \ref{prop.cuttingalgorithm} to prove Proposition \ref{prop.countingind} (3) by induction. If $\mathcal{C}$ has only one node ($n=1$), then $\mathcal{C}$ equals $\mathcal{C}_{I}$ or $\mathcal{C}_{II}$, and Proposition \ref{prop.countingind} (3) in this case follows from Lemma \ref{lem.countingbdunit}. Suppose that Proposition \ref{prop.countingind} (3) holds true for couples with at most $n-1$ nodes. We prove it for couples with $n$ nodes. Given a couple $\mathcal{C}(k)$ obtained from the $(k-1)$-th step of the cutting algorithm, apply the cutting algorithm to $\mathcal{C}(k)$. Then according to Proposition \ref{prop.cuttingalgorithm} (2), $\mathcal{C}(k)$ is decomposed into 2 or 3 components.
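Before running the induction, the additivity $\chi(\mathcal{C})=\chi(\mathcal{C}_1)+\chi(\mathcal{C}_2)$ from Proposition \ref{prop.countingind} (2) can be sanity-checked on a toy cut; a minimal sketch in Python (the couple, its legs, and the cut below are hypothetical choices, using the convention that each cut edge produces one free and one fixed leg):

```python
def chi(n_e, n_fr, n_nodes):
    """chi(C) = n_e(C) + n_fr(C) - n(C)."""
    return n_e + n_fr - n_nodes

# Hypothetical couple C: 2 nodes joined by 1 internal edge; node A also
# carries a fixed leg f1 and a free leg g1; node B carries free legs g2, g3.
chi_C = chi(n_e=1, n_fr=3, n_nodes=2)

# Cut the internal edge (n(c) = 1): the C1-side endpoint becomes a free
# leg, the C2-side endpoint becomes a fixed leg.
chi_C1 = chi(n_e=0, n_fr=1 + 1, n_nodes=1)   # g1 plus the new free leg
chi_C2 = chi(n_e=0, n_fr=2, n_nodes=1)       # g2, g3 (the new leg is fixed)

assert chi_C == chi_C1 + chi_C2              # chi(C) = chi(C1) + chi(C2)
```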
In the first case, denote by $\mathcal{C}(k)_{ \mathfrak{l}}$ and $\mathcal{C}(k)_1$ the two components after cutting. In the second case, denote by $\mathcal{C}(k)_{ \mathfrak{l}}$, $\mathcal{C}(k)_2$ and $\mathcal{C}(k)_3$ the three components after cutting. By Proposition \ref{prop.cuttingalgorithm} (4), $\mathcal{C}(k)_1$, $\mathcal{C}(k)_2$, $\mathcal{C}(k)_3$ are couples with at most $n-1$ nodes that satisfy the property P and thus satisfy the assumption of Proposition \ref{prop.countingind} (3). Therefore, the induction assumption is applicable and Proposition \ref{prop.countingind} (3) is true for these couples. \textbf{Case 1.} Assume that there are two components after cutting. One example of this case is the left couple in Figure \ref{fig.step1case}. Then applying Lemma \ref{lem.Eq(C)cutting} gives \begin{equation} \begin{split} \sup_{\{c_{\mathfrak{l}}\}_{\mathfrak{l}}}\#Eq(\mathcal{C}(k),\{c_{\mathfrak{l}}\}_{\mathfrak{l}})\le& \sup_{\{c_{\mathfrak{l}_1}\}_{\mathfrak{l}_1\in \text{leg}(\mathcal{C}(k)_{ \mathfrak{l}})} } \# Eq(\mathcal{C}(k)_{ \mathfrak{l}},\{c_{\mathfrak{l}_1}\}) \sup_{\{c_{\mathfrak{l}_2}\}_{\mathfrak{l}_2\in \text{leg}(\mathcal{C}(k)_1)} }\# Eq(\mathcal{C}(k)_1, \{c_{\mathfrak{l}_2}\}) \\ \lesssim& L^{\theta} Q^{\chi(\mathcal{C}(k)_{\mathfrak{l}})} L^{O(\chi(\mathcal{C}(k)_1)\theta)} Q^{\chi(\mathcal{C}(k)_1)} = L^{O((\chi(\mathcal{C}(k)_{\mathfrak{l}})+\chi(\mathcal{C}(k)_1))\theta)} Q^{\chi(\mathcal{C}(k)_{\mathfrak{l}})+\chi(\mathcal{C}(k)_1)} \\ =& L^{O(\chi(\mathcal{C}(k))\theta)} Q^{\chi(\mathcal{C}(k))}. \end{split} \end{equation} Here the second inequality follows from the induction assumption and Lemma \ref{lem.countingbdunit} (we omit the products of $\kappa^{-1}_{\mathfrak{e}}$, which recombine into $\prod_{\mathfrak{e}\in \mathcal{C}_{\text{norm}}(k)}\kappa^{-1}_{\mathfrak{e}}$). The last equality follows from Proposition \ref{prop.countingind} (2). \textbf{Case 2.} Assume that there are three components after cutting. One example of this case is the right couple in Figure \ref{fig.step1case}.
Then applying Lemma \ref{lem.Eq(C)cutting} gives \begin{equation}\label{eq.case2expand} \begin{split} \sup_{\{c_{\mathfrak{l}}\}_{\mathfrak{l}}}\#Eq(\mathcal{C}(k),\{c_{\mathfrak{l}}\}_{\mathfrak{l}})\le& \sup_{\{c_{\mathfrak{l}_1}\}_{\mathfrak{l}_1\in \text{leg}(\mathcal{C}(k)\backslash\mathcal{C}(k)_2)} } \# Eq(\mathcal{C}(k)\backslash\mathcal{C}(k)_2,\{c_{\mathfrak{l}_1}\}) \sup_{\{c_{\mathfrak{l}_2}\}_{\mathfrak{l}_2\in \text{leg}(\mathcal{C}(k)_2)} }\# Eq(\mathcal{C}(k)_2, \{c_{\mathfrak{l}_2}\}) \\ \lesssim& L^{O(\chi(\mathcal{C}(k)_2)\theta)} Q^{\chi(\mathcal{C}(k)_2)}\sup_{\{c_{\mathfrak{l}_1}\}_{\mathfrak{l}_1\in \text{leg}(\mathcal{C}(k)\backslash\mathcal{C}(k)_2)} } \# Eq(\mathcal{C}(k)\backslash\mathcal{C}(k)_2,\{c_{\mathfrak{l}_1}\}). \end{split} \end{equation} Here in the second inequality we apply the induction assumption to $\mathcal{C}(k)_2$. Applying Lemma \ref{lem.Eq(C)cutting} again together with \eqref{eq.case2expand} gives \begin{equation}\label{eq.case2expand'} \begin{split} &\sup_{\{c_{\mathfrak{l}}\}_{\mathfrak{l}}}\#Eq(\mathcal{C}(k),\{c_{\mathfrak{l}}\}_{\mathfrak{l}}) \lesssim L^{O(\chi(\mathcal{C}(k)_2)\theta)} Q^{\chi(\mathcal{C}(k)_2)}\sup_{\{c_{\mathfrak{l}_1}\}_{\mathfrak{l}_1\in \text{leg}(\mathcal{C}(k)\backslash\mathcal{C}(k)_2)} } \# Eq(\mathcal{C}(k)\backslash\mathcal{C}(k)_2,\{c_{\mathfrak{l}_1}\}) \\ \lesssim& L^{O(\chi(\mathcal{C}(k)_2)\theta)} Q^{\chi(\mathcal{C}(k)_2)}\sup_{\{c_{\mathfrak{l}_1}\}_{\mathfrak{l}_1\in \text{leg}(\mathcal{C}(k)_{\mathfrak{l}})} } \# Eq(\mathcal{C}(k)_{\mathfrak{l}},\{c_{\mathfrak{l}_1}\}) \sup_{\{c_{\mathfrak{l}_3}\}_{\mathfrak{l}_3\in \text{leg}(\mathcal{C}(k)_{3})} }\# Eq(\mathcal{C}(k)_{3}, \{c_{\mathfrak{l}_3}\}) \\ \lesssim& L^{O((\chi(\mathcal{C}(k)_2)+\chi(\mathcal{C}(k)_3))\theta)} Q^{\chi(\mathcal{C}(k)_2)+\chi(\mathcal{C}(k)_3)}\sup_{\{c_{\mathfrak{l}_1}\}_{\mathfrak{l}_1\in \text{leg}(\mathcal{C}(k)_{\mathfrak{l}})} } \# Eq(\mathcal{C}(k)_{\mathfrak{l}},\{c_{\mathfrak{l}_1}\}) \\ \lesssim& (L^{O(\theta)} Q)^{\chi(\mathcal{C}(k)_2)+\chi(\mathcal{C}(k)_3)+\chi(\mathcal{C}(k)_{\mathfrak{l}})}=L^{O(\chi(\mathcal{C}(k))\theta)} Q^{\chi(\mathcal{C}(k))}. \end{split} \end{equation} Here in the third inequality we apply the induction assumption to $\mathcal{C}(k)_3$, and in the last inequality we apply Lemma \ref{lem.countingbdunit} to $\mathcal{C}(k)_{\mathfrak{l}}$. The last equality follows from Proposition \ref{prop.countingind} (2). This completes the proof of Proposition \ref{prop.countingind} and thus of Proposition \ref{prop.counting}. \end{proof} \subsection{An upper bound of tree terms}\label{sec.treetermsupperbound} In this section, we first prove the following proposition, which gives an upper bound on the variance of $\mathcal{J}_{T,k}$, and then prove Proposition \ref{prop.treetermsupperbound} as its corollary. \begin{prop}\label{prop.treetermsvariance} Assume that $\alpha$ satisfies \eqref{eq.conditionalpha} and $\rho=\alpha\, T^{\frac{1}{2}}_{\text{max}}$. For any $\theta>0$, we have \begin{equation} \sup_k\, \mathbb{E}|(\mathcal{J}_T)_k|^2\lesssim L^{O(l(T)\theta)} \rho^{2l(T)}, \end{equation} and $\mathbb{E}|(\mathcal{J}_T)_k|^2=0$ if $|k|\gtrsim 1$. \end{prop} \begin{proof} By \eqref{eq.termTp} and \eqref{eq.termexp}, we know that $\mathcal{J}_{T,k}$ is a linear combination of the $Term(T,p)_k$, so that \begin{equation} \begin{split} \mathbb{E}|\mathcal{J}_{T,k}|^2=\left(\frac{\lambda}{L^{d}}\right)^{2l(T)} \sum_{p\in \mathcal{P}(\{k_1,\cdots, k_{l(T)+1}, k'_1,\cdots, k'_{l(T)+1}\})} Term(T, p)_k. \end{split} \end{equation} Since $\alpha=\frac{\lambda}{L^{\frac{d}{2}}}$, we have $\frac{\lambda}{L^{d}}=\alpha L^{-\frac{d}{2}}$. Since the number of elements in $\mathcal{P}$ can be bounded by a constant, by Lemma \ref{lem.Tpvariance} proved below, we get \begin{equation}\label{eq.proptreetermsvariance1} \begin{split} \mathbb{E}|\mathcal{J}_{T,k}|^2\lesssim (\alpha L^{-\frac{d}{2}})^{2l(T)} L^{O(n\theta)} Q^{\frac{n}{2}} T^{n}_{\text{max}}.
\end{split} \end{equation} By definition, $n$ is the total number of $\bullet$ nodes in the couple constructed from the tree $T$ and the pairing $p$, so $n$ equals $2l(T)$. Replacing $n$ by $2l(T)$ and $Q$ by $L^dT^{-1}_{\text{max}}$ in \eqref{eq.proptreetermsvariance1}, we get \begin{equation} \begin{split} \mathbb{E}|\mathcal{J}_{T,k}|^2\lesssim& (\alpha L^{-\frac{d}{2}})^{2l(T)} L^{O(l(T)\theta)} (L^dT^{-1}_{\text{max}})^{l(T)} T_{\text{max}}^{2l(T)} \\ =& L^{O(l(T)\theta)} (\alpha^2 T_{\text{max}})^{l(T)} \\ =& L^{O(l(T)\theta)} \rho^{2l(T)}. \end{split} \end{equation} By Lemma \ref{lem.Tpvariance} below, $Term(T,p)_k=0$ if $|k|\gtrsim 1$, so the same is true for $\mathbb{E}|(\mathcal{J}_T)_k|^2$. This completes the proof of the proposition. \end{proof} \begin{lem}\label{lem.Tpvariance} Let $n$ and $Q=L^dT^{-1}_{\text{max}}$ be the same as in Proposition \ref{prop.counting}. Assume that $\alpha$ satisfies \eqref{eq.conditionalpha} and that $n_{\mathrm{in}} \in C^\infty_0(\mathbb{R}^d)$ is compactly supported. Let $\mathcal{C}$ be the couple constructed from the tree $T$ and the pairing $p$. Then for any $\theta>0$, we have \begin{equation} \sup_k\, |Term(T,p)_k|\le L^{O(n\theta)} Q^{\frac{n}{2}} T_{\text{max}}^{n}, \end{equation} and $Term(T,p)_k=0$ if $|k|\gtrsim 1$. \end{lem} \begin{proof} By \eqref{eq.termTp}, we get \begin{equation} \begin{split} Term(T, p)_k=&\sum_{k_1,\, k_2,\, \cdots,\, k_{l(T)+1}}\sum_{k'_1,\, k'_2,\, \cdots,\, k'_{l(T)+1}} H^T_{k_1\cdots k_{l(T)+1}} H^{T}_{k'_1\cdots k'_{l(T)+1}} \\ & \delta_{p}(k_1,\cdots, k_{l(T)+1}, k'_1,\cdots, k'_{l(T)+1})\sqrt{n_{\textrm{in}}(k_1)}\cdots\sqrt{n_{\textrm{in}}(k'_1)}\cdots \end{split} \end{equation} Since $n_{\mathrm{in}}$ is compactly supported, there are boundedly many such factors in $Term(T, p)_k$, and since $k_1 + k_2 + \cdots + k_{l(T)+1}=k$, we know that $Term(T, p)_k=0$ if $|k|\gtrsim 1$.
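The power bookkeeping in the display above, $(\alpha L^{-d/2})^{2l(T)}(L^dT^{-1}_{\text{max}})^{l(T)}T_{\text{max}}^{2l(T)}=(\alpha^2T_{\text{max}})^{l(T)}$, is a pure cancellation of exponents; a quick numerical spot-check (random positive values standing in for $\alpha$, $L$, $T_{\text{max}}$):

```python
import math
import random

# Check (alpha * L^{-d/2})^{2l} * (L^d / T)^l * T^{2l} == (alpha^2 * T)^l
# for many random positive parameter values.
for _ in range(100):
    alpha = random.uniform(0.1, 2.0)
    L = random.uniform(2.0, 10.0)
    T = random.uniform(1.0, 50.0)
    d = random.randint(1, 3)
    l = random.randint(1, 4)
    lhs = (alpha * L ** (-d / 2)) ** (2 * l) * (L ** d / T) ** l * T ** (2 * l)
    rhs = (alpha ** 2 * T) ** l
    assert math.isclose(lhs, rhs, rel_tol=1e-9)
```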
By \eqref{eq.boundcoef}, we get \begin{equation}\label{eq.termlemmaeq1} |H^T_{k_1\cdots k_{l+1}}|\lesssim \sum_{\{d_{\mathfrak{n}}\}_{\mathfrak{n}\in T_{\text{in}}}\in\{0,1\}^{l(T)}}\prod_{\mathfrak{n}\in T_{\text{in}}}\frac{1}{|q_{\mathfrak{n}}|+T^{-1}_{\text{max}}}\ \prod_{\mathfrak{e}\in T_{\text{in}}}|k_{\mathfrak{e},x}|\ \delta_{\cap_{\mathfrak{n}\in T_{\text{in}}} \{S_{\mathfrak{n}}=0\}}. \end{equation} By \eqref{eq.q_n}, $q_{\mathfrak{n}}$ is a linear combination of the $\Omega_{\mathfrak{n}}$, so there exist constants $c_{\mathfrak{n},\widetilde{\mathfrak{n}}}$ such that $q_{\mathfrak{n}}=\sum_{\widetilde{\mathfrak{n}}}c_{\mathfrak{n},\widetilde{\mathfrak{n}}}\Omega_{\widetilde{\mathfrak{n}}}$. Let $c$ be the matrix $[c_{\mathfrak{n},\widetilde{\mathfrak{n}}}]$ and let $\mathscr{M}(T)$ be the set of all possible such matrices; the number of elements in $\mathscr{M}(T)$ can be bounded by a constant. Let $c(\Omega)$ be the vector $\{\sum_{\widetilde{\mathfrak{n}}}c_{\mathfrak{n},\widetilde{\mathfrak{n}}}\Omega_{\widetilde{\mathfrak{n}}}\}_{\mathfrak{n}}$ with components $c(\Omega)_{\mathfrak{n}}=\sum_{\widetilde{\mathfrak{n}}}c_{\mathfrak{n},\widetilde{\mathfrak{n}}}\Omega_{\widetilde{\mathfrak{n}}}$. With this notation, $q_{\mathfrak{n}}=c(\Omega)_{\mathfrak{n}}$, and the sum over $\{d_{\mathfrak{n}}\}$ on the right hand side of \eqref{eq.termlemmaeq1} becomes $\sum_{c\in \mathscr{M}(T) }\prod_{\mathfrak{n}\in T_{\text{in}}} \frac{1}{|c(\Omega)_{\mathfrak{n}}|+T^{-1}_{\text{max}}}$.
Therefore, using the fact that $n_{\textrm{in}}$ is compactly supported, we have \begin{equation}\label{eq.termlemmaeq3} \begin{split} &|Term(T, p)_k|\lesssim \sum_{\substack{k_1,\, \cdots,\, k_{l(T)+1},\, k'_1,\, \cdots,\, k'_{l(T)+1}\\ |k_{j}|, |k'_j|\lesssim 1, \forall j}} \sum_{c\in \mathscr{M}(T) }\prod_{\mathfrak{n}\in T_{\text{in}}}\frac{1}{|c(\Omega)_{\mathfrak{n}}|+T^{-1}_{\text{max}}} \prod_{\mathfrak{e}\in T_{\text{in}}} |k_{\mathfrak{e},x}|\ \delta_{\cap_{\mathfrak{n}\in T_{\text{in}}} \{S_{\mathfrak{n}}=0\}} \\ &\sum_{c'\in \mathscr{M}(T)}\prod_{\mathfrak{n}'\in T_{\text{in}}}\frac{1}{|c'(\Omega)_{\mathfrak{n}'}|+T^{-1}_{\text{max}}}\prod_{\mathfrak{e}'\in T_{\text{in}}}|k_{\mathfrak{e}',x}|\ \delta_{\cap_{\mathfrak{n}'\in T_{\text{in}}} \{S_{\mathfrak{n}'}=0\}} \delta_{p}(k_1,\cdots, k_{l(T)+1}, k'_1,\cdots, k'_{l(T)+1}) \end{split} \end{equation} Switching the order of the summations and products in \eqref{eq.termlemmaeq3}, we get \begin{equation}\label{eq.termlemmaeq2} \begin{split} &|Term(T, p)_k|\lesssim \sum_{\substack{k_1,\, \cdots,\, k_{l(T)+1},\, k'_1,\, \cdots,\, k'_{l(T)+1}\\ |k_{j}|, |k'_j|\lesssim 1, \forall j}} \sum_{c, c'\in \mathscr{M}(T) }\prod_{\mathfrak{n}, \mathfrak{n}'\in T_{\text{in}}}\frac{1}{|c(\Omega)_{\mathfrak{n}}|+T^{-1}_{\text{max}}} \\ &\frac{1}{|c'(\Omega)_{\mathfrak{n}'}|+T^{-1}_{\text{max}}}\prod_{\mathfrak{e},\mathfrak{e}'\in T_{\text{in}}}(|k_{\mathfrak{e},x}||k_{\mathfrak{e}',x}|)\ \delta_{\cap_{\mathfrak{n},\mathfrak{n}'\in T_{\text{in}}} \{S_{\mathfrak{n}}=0, S_{\mathfrak{n}'}=0\}} \delta_{p}(k_1,\cdots, k_{l(T)+1}, k'_1,\cdots, k'_{l(T)+1}). \end{split} \end{equation} Given a tree $T$ and a pairing $p$, we can construct a couple $\mathcal{C}$.
We show that \begin{equation}\label{eq.termlemmaeq4} \sum_{c, c'\in \mathscr{M}(T) }=\sum_{c\in \mathscr{M}(\mathcal{C}) },\qquad \prod_{\mathfrak{n}, \mathfrak{n}'\in T_{\text{in}}}=\prod_{\mathfrak{n}\in \mathcal{C}}, \qquad \prod_{\mathfrak{e},\mathfrak{e}'\in T_{\text{in}}}=\prod_{\mathfrak{e}\in \mathcal{C}_{\text{norm}}},\qquad \cap_{\mathfrak{n},\mathfrak{n}'\in T_{\text{in}}}=\cap_{\mathfrak{n}\in \mathcal{C}}. \end{equation} Remember that $\mathcal{C}$ is constructed by gluing two copies of $T$ along $p$. In \eqref{eq.termlemmaeq2}, $\mathfrak{n}$, $\mathfrak{e}$ and $\mathfrak{n}'$, $\mathfrak{e}'$ denote nodes and edges in the first and second copy respectively. Since all nodes in $\mathcal{C}$ come from the two copies of $T$, we get $\prod_{\mathfrak{n}, \mathfrak{n}'\in T_{\text{in}}}=\prod_{\mathfrak{n}\in \mathcal{C}}$ and $\cap_{\mathfrak{n},\mathfrak{n}'\in T_{\text{in}}}=\cap_{\mathfrak{n}\in \mathcal{C}}$. Remember that $T_{\text{in}}$ is the tree formed by all non-leaf nodes, so the edges of $\mathcal{C}_{\text{norm}}$ all come from the two copies of $T_{\text{in}}$. Therefore $\prod_{\mathfrak{e},\mathfrak{e}'\in T_{\text{in}}}=\prod_{\mathfrak{e}\in \mathcal{C}_{\text{norm}}}$. Given two matrices $c$, $c'$, we construct a new matrix $c\oplus c'$ as follows. Consider two vectors $\Omega=\{\Omega_{\mathfrak{n}}\}_{\mathfrak{n}\in T}$, $\Omega'=\{\Omega_{\mathfrak{n}'}\}_{\mathfrak{n}'\in T}$, and define $\Omega\oplus\Omega'\coloneqq \{\Omega_{\mathfrak{n}},\Omega_{\mathfrak{n}'}\}_{\mathfrak{n},\mathfrak{n}'\in T}$. We know that $c(\Omega)_{\mathfrak{n}}=\sum_{\widetilde{\mathfrak{n}}}c_{\mathfrak{n},\widetilde{\mathfrak{n}}}\Omega_{\widetilde{\mathfrak{n}}}$ and $c'(\Omega')_{\mathfrak{n}'}=\sum_{\widetilde{\mathfrak{n}}'}c'_{\mathfrak{n}',\widetilde{\mathfrak{n}}'}\Omega_{\widetilde{\mathfrak{n}}'}$.
Define $c\oplus c'$ to be the linear map whose domain is the set of all vectors of the form $\Omega\oplus\Omega'$ and whose action is $(c\oplus c')(\Omega\oplus\Omega')=\{\sum_{\widetilde{\mathfrak{n}}}c_{\mathfrak{n},\widetilde{\mathfrak{n}}}\Omega_{\widetilde{\mathfrak{n}}},\sum_{\widetilde{\mathfrak{n}}'}c'_{\mathfrak{n}',\widetilde{\mathfrak{n}}'}\Omega_{\widetilde{\mathfrak{n}}'}\}_{\mathfrak{n},\mathfrak{n}'\in T}$. Define $\mathscr{M}(\mathcal{C})=\{c\oplus c':c, c'\in \mathscr{M}(T)\}$; then we get $\sum_{c, c'\in \mathscr{M}(T) }=\sum_{c\in \mathscr{M}(\mathcal{C}) }$. By \eqref{eq.termlemmaeq4} and the fact that the leaves corresponding to $k_j, k_j'$ are merged in $\mathcal{C}$, \eqref{eq.termlemmaeq2} is equivalent to \begin{equation}\label{eq.termlemmaeq5} |Term(T, p)_k|\lesssim \sum_{\substack{k_1,\, \cdots,\, k_{l(T)+1},\, k'_1,\, \cdots,\, k'_{l(T)+1}\\ |k_{j}|, |k'_j|\lesssim 1, \forall j}} \sum_{c\in \mathscr{M}(\mathcal{C}) }\prod_{\mathfrak{n}\in \mathcal{C}}\frac{1}{|c(\Omega)_{\mathfrak{n}}|+T^{-1}_{\text{max}}} \prod_{\mathfrak{e}\in \mathcal{C}_{\text{norm}}}|k_{\mathfrak{e},x}|\ \delta_{\cap_{\mathfrak{n}\in \mathcal{C}} \{S_{\mathfrak{n}}=0\}} \end{equation} Assigning a number $\sigma_{\mathfrak{n}}\in \mathbb{Z}_{T_{\text{max}}}$ to each node $\mathfrak{n}\in \mathcal{C}$, a number $\kappa_{\mathfrak{e}}\in \mathcal{D}(\alpha,1)\coloneqq\{2^{-K_{\mathfrak{e}}}:K_{\mathfrak{e}}\in \mathbb{Z}\cap [0,\ln \alpha^{-1}]\}$ to each edge $\mathfrak{e}$ and a number $k\in \mathbb{Z}^d_{L}$ to the fixed legs, we can define the associated equation $Eq(\mathcal{C}, \{\sigma_{\mathfrak{n}}\}_{\mathfrak{n}}, \{\kappa_{\mathfrak{e}}\}_{\mathfrak{e}},k)=Eq(\mathcal{C})$ as in \eqref{eq.diophantineeqpairedsigma'}.
Then we have \begin{equation}\label{eq.termlemmaeq8} \sum_{\substack{k_1,\, \cdots,\, k_{l(T)+1},\, k'_1,\, \cdots,\, k'_{l(T)+1}\\\cap_{\mathfrak{n}\in \mathcal{C}} \{S_{\mathfrak{n}}=0\}}}=\sum_{\kappa_{\mathfrak{e}}\in \mathcal{D}(\alpha,1)}\sum_{\sigma_{\mathfrak{n}}\in \mathbb{Z}_{T_{\text{max}}}}\sum_{Eq(\mathcal{C}, \{\sigma_{\mathfrak{n}}\}_{\mathfrak{n}}, \{\kappa_{\mathfrak{e}}\}_{\mathfrak{e}},k)}, \end{equation} which implies that \begin{equation}\label{eq.termlemmaeq6} \begin{split} |Term(T, p)_k|\lesssim \sum_{\kappa_{\mathfrak{e}}\in \mathcal{D}(\alpha,1)}\sum_{\sigma_{\mathfrak{n}}\in \mathbb{Z}_{T_{\text{max}}}}\sum_{Eq(\mathcal{C}, \{\sigma_{\mathfrak{n}}\}_{\mathfrak{n}}, \{\kappa_{\mathfrak{e}}\}_{\mathfrak{e}},k)} \sum_{c\in \mathscr{M}(\mathcal{C}) }\prod_{\mathfrak{n}\in \mathcal{C}}\frac{1}{|c(\Omega)_{\mathfrak{n}}|+T^{-1}_{\text{max}}} \prod_{\mathfrak{e}\in \mathcal{C}_{\text{norm}}}|k_{\mathfrak{e},x}| \end{split} \end{equation} Remember that in the definition of $Eq(\mathcal{C})$ we have the condition $|k_{\mathfrak{e}}|\lesssim 1$. This condition comes from the nontrivial fact that on the support of $\delta_{\cap_{\mathfrak{n}\in T_{\text{in}}} \{S_{\mathfrak{n}}=0\}}$, if $k_1,\cdots,k_{l(T)+1}$ are bounded, then $|k_{\mathfrak{e}}|$ is bounded. Recall that by \eqref{eq.defnS_n}, $S_{\mathfrak{n}}=\iota_{\mathfrak{e}_1}k_{\mathfrak{e}_1}+\iota_{\mathfrak{e}_2}k_{\mathfrak{e}_2}+\iota_{\mathfrak{e}}k_{\mathfrak{e}}$. The condition $S_{\mathfrak{n}}=0$ implies that the variable $k_{\mathfrak{e}}$ of the parent edge $\mathfrak{e}$ is a linear combination of the variables $k_{\mathfrak{e}_1}$, $k_{\mathfrak{e}_2}$ of the children edges $\mathfrak{e}_1$, $\mathfrak{e}_2$. The variables of the children edges $\mathfrak{e}_j$ are again linear combinations of the variables of their children. Iterating this argument shows that all variables $k_{\mathfrak{e}}$ are linear combinations of the leaf variables $k_1,\cdots,k_{l(T)+1}$.
Since $k_1,\cdots,k_{l(T)+1}$ are bounded, their linear combinations $k_{\mathfrak{e}}$ are also bounded. In the definition of $Eq(\mathcal{C}, \{\sigma_{\mathfrak{n}}\}_{\mathfrak{n}}, \{\kappa_{\mathfrak{e}}\}_{\mathfrak{e}},k)$, $|\Omega_{\mathfrak{n}}-\sigma_{\mathfrak{n}}|=O(T^{-1}_{\text{max}})$ and $|k_{\mathfrak{e},x}| \sim \kappa_{\mathfrak{e}}$. Denote the constant in $O(T^{-1}_{\text{max}})$ by $\delta$, so that $|\Omega_{\mathfrak{n}}-\sigma_{\mathfrak{n}}|\le \delta T^{-1}_{\text{max}}$. Therefore, we have $|c(\Omega)_{\mathfrak{n}}-c(\{\sigma_{\mathfrak{n}}\})_{\mathfrak{n}}|\lesssim \delta T^{-1}_{\text{max}}$. We have the freedom of choosing $\delta$ in the definition, and we take it sufficiently small so that $|c(\Omega)_{\mathfrak{n}}-c(\{\sigma_{\mathfrak{n}}\})_{\mathfrak{n}}|\le \frac{1}{2}T^{-1}_{\text{max}}$. This implies that $|c(\Omega)_{\mathfrak{n}}|+T^{-1}_{\text{max}}\gtrsim |c(\{\sigma_{\mathfrak{n}}\})_{\mathfrak{n}}|+T^{-1}_{\text{max}}$. Since we also have $|k_{\mathfrak{e},x}| \sim \kappa_{\mathfrak{e}}$, by \eqref{eq.termlemmaeq6} we have \begin{equation}\label{eq.lemboundtermTp} \begin{split} &|Term(T, p)_k| \\ \lesssim& \sum_{\kappa_{\mathfrak{e}}\in \mathcal{D}(\alpha,1)}\sum_{\sigma_{\mathfrak{n}}\in \mathbb{Z}_{T_{\text{max}}}}\sum_{Eq(\mathcal{C}, \{\sigma_{\mathfrak{n}}\}_{\mathfrak{n}}, \{\kappa_{\mathfrak{e}}\}_{\mathfrak{e}},k)} \sum_{c\in \mathscr{M}(\mathcal{C}) }\prod_{\mathfrak{n}\in \mathcal{C}}\frac{1}{|c(\Omega)_{\mathfrak{n}}|+T^{-1}_{\text{max}}} \prod_{\mathfrak{e}\in \mathcal{C}_{\text{norm}}}|k_{\mathfrak{e},x}| \\ \lesssim &\sum_{c\in \mathscr{M}(\mathcal{C}) }\sum_{\kappa_{\mathfrak{e}}\in \mathcal{D}(\alpha,1)}\sum_{\substack{\sigma_{\mathfrak{n}}\in \mathbb{Z}_{T_{\text{max}}}\\ |\sigma_{\mathfrak{n}}|\lesssim 1}}\prod_{\mathfrak{n}\in \mathcal{C}}\frac{1}{|c(\{\sigma_{\mathfrak{n}}\})_{\mathfrak{n}}|+T^{-1}_{\text{max}}} \sum_{Eq(\mathcal{C}, \{\sigma_{\mathfrak{n}}\}_{\mathfrak{n}}, \{\kappa_{\mathfrak{e}}\}_{\mathfrak{e}},k)} \prod_{\mathfrak{e}\in \mathcal{C}_{\text{norm}}}|k_{\mathfrak{e},x}| \\ \lesssim &\sum_{c\in \mathscr{M}(\mathcal{C}) }\sum_{\kappa_{\mathfrak{e}}\in \mathcal{D}(\alpha,1)}\sum_{\substack{\sigma_{\mathfrak{n}}\in \mathbb{Z}_{T_{\text{max}}}\\ |\sigma_{\mathfrak{n}}|\lesssim 1}}\prod_{\mathfrak{n}\in \mathcal{C}}\frac{1}{|c(\{\sigma_{\mathfrak{n}}\})_{\mathfrak{n}}|+T^{-1}_{\text{max}}} \left(\prod_{\mathfrak{e}\in \mathcal{C}_{\text{norm}}}\kappa_{\mathfrak{e}}\right)\#Eq(\mathcal{C}, \{\sigma_{\mathfrak{n}}\}_{\mathfrak{n}}, \{\kappa_{\mathfrak{e}}\}_{\mathfrak{e}},k) \\ \lesssim &\sum_{c\in \mathscr{M}(\mathcal{C}) }\sum_{\kappa_{\mathfrak{e}}\in \mathcal{D}(\alpha,1)}\sum_{\substack{\sigma_{\mathfrak{n}}\in \mathbb{Z}_{T_{\text{max}}}\\ |\sigma_{\mathfrak{n}}|\lesssim 1}}\prod_{\mathfrak{n}\in \mathcal{C}}\frac{1}{|c(\{\sigma_{\mathfrak{n}}\})_{\mathfrak{n}}|+T^{-1}_{\text{max}}} \left(\prod_{\mathfrak{e}\in \mathcal{C}_{\text{norm}}}\kappa_{\mathfrak{e}}\right)L^{O(n\theta)} Q^{\frac{n}{2}}\ \prod_{\mathfrak{e}\in \mathcal{C}_{\text{norm}}} \kappa^{-1}_{\mathfrak{e}} \end{split} \end{equation} Here in the last inequality we applied \eqref{eq.countingbd0} in Proposition \ref{prop.counting}.
After simplification, \eqref{eq.lemboundtermTp} gives us \begin{equation}\label{eq.lemboundtermTpsimplify} \begin{split} |Term(T, p)_k|\lesssim &L^{O(n\theta)} Q^{\frac{n}{2}}\sum_{c\in \mathscr{M}(\mathcal{C}) }\sum_{\kappa_{\mathfrak{e}}\in \mathcal{D}(\alpha,1)}\sum_{\substack{\sigma_{\mathfrak{n}}\in \mathbb{Z}_{T_{\text{max}}}\\ |\sigma_{\mathfrak{n}}|\lesssim 1}} \prod_{\mathfrak{n}\in \mathcal{C}}\frac{1}{|c(\{\sigma_{\mathfrak{n}}\})_{\mathfrak{n}}|+T^{-1}_{\text{max}}} \\ \lesssim &L^{O(n\theta)} Q^{\frac{n}{2}} \left(\sum_{\kappa_{\mathfrak{e}}\in \mathcal{D}(\alpha,1)} 1\right) \sum_{c\in \mathscr{M}(\mathcal{C})}\sum_{\substack{\sigma_{\mathfrak{n}}\in \mathbb{Z}_{T_{\text{max}}}\\ |\sigma_{\mathfrak{n}}|\lesssim 1}} \prod_{\mathfrak{n}\in \mathcal{C}}\frac{1}{|c(\{\sigma_{\mathfrak{n}}\})_{\mathfrak{n}}|+T^{-1}_{\text{max}}} \\ \lesssim & L^{O(n\theta)} Q^{\frac{n}{2}} \sum_{c\in \mathscr{M}(\mathcal{C})}\sum_{\substack{\sigma_{\mathfrak{n}}\in \mathbb{Z}_{T_{\text{max}}}\\ |\sigma_{\mathfrak{n}}|\lesssim 1}} \prod_{\mathfrak{n}\in \mathcal{C}}\frac{1}{|c(\{\sigma_{\mathfrak{n}}\})_{\mathfrak{n}}|+T^{-1}_{\text{max}}} \end{split} \end{equation} Here in the last step we use the fact that $\#\mathcal{D}(\alpha,1)\lesssim \ln(\alpha^{-1})\lesssim L^{O(\theta)}$. The constraint $|\sigma_{\mathfrak{n}}|\lesssim 1$ in the sums of the first two inequalities comes from the fact that $\#Eq(\mathcal{C}, \{\sigma_{\mathfrak{n}}\}_{\mathfrak{n}},k)=0$ if $|\sigma_{\mathfrak{n}}|\gtrsim 1$ for some $\mathfrak{n}$. This fact is true because in $Eq(\mathcal{C}, \{\sigma_{\mathfrak{n}}\}_{\mathfrak{n}},k)$ all $|k_{\mathfrak{e}}|\lesssim 1$, which implies that $|\Omega_{\mathfrak{n}}|\lesssim 1$ and therefore $|\sigma_{\mathfrak{n}}|\lesssim 1$ (notice that $|\Omega_{\mathfrak{n}}-\sigma_{\mathfrak{n}}|=O(T^{-1}_{\text{max}})$).
We claim that \begin{equation}\label{eq.lemTpvarianceclaim} \sup_{c}\sum_{\substack{\sigma_{\mathfrak{n}}\in \mathbb{Z}_{T_{\text{max}}}\\ |\sigma_{\mathfrak{n}}|\lesssim 1}} \prod_{\mathfrak{n}\in \mathcal{C}}\frac{1}{|c(\{\sigma_{\mathfrak{n}}\})_{\mathfrak{n}}|+T^{-1}_{\text{max}}}\lesssim L^{O(n\theta)} T^{n}_{\text{max}}. \end{equation} Since there are only boundedly many matrices in $\mathscr{M}(\mathcal{C})$, given the above claim we know that \begin{equation} |Term(T, p)_k|\lesssim L^{O(n\theta)} Q^{\frac{n}{2}} T^{n}_{\text{max}}, \end{equation} which proves the lemma. We now prove the claim. Notice that there are $n$ nodes in $\mathcal{C}$. We label these nodes by $h=1,\cdots,n$ and write $\sigma_{\mathfrak{n}}=\sigma_{h}$ if $\mathfrak{n}$ is labelled by $h$. Since $\sigma_{h}\in \mathbb{Z}_{T_{\text{max}}}$, there exists $m_{h}\in \mathbb{Z}$ such that $\sigma_{h}=\frac{m_{h}}{T_{\text{max}}}$. \eqref{eq.lemTpvarianceclaim} is thus equivalent to \begin{equation}\label{eq.lemTpvarianceclaim1} \sum_{\substack{m_{h}\in \mathbb{Z}\\ |m_{h}|\lesssim T_{\text{max}}}} \prod_{h=1}^{n}\frac{T_{\text{max}}}{|c(\{m_{h}\})_{h}|+1}\lesssim L^{O(n\theta)}T^{n}_{\text{max}}. \end{equation} Before proving \eqref{eq.lemTpvarianceclaim1}, let us first look at a special case. If $c=Id$, then $c(\{m_{h}\})_{h}=m_h$ and the left hand side of \eqref{eq.lemTpvarianceclaim1} becomes \begin{equation} \begin{split} T^{n}_{\text{max}}\sum_{\substack{m_{h}\in \mathbb{Z}\\ |m_{h}|\lesssim T_{\text{max}}}} \prod_{h=1}^{n}\frac{1}{|m_{h}|+1} = T^{n}_{\text{max}}\prod_{h=1}^{n}\left(\sum_{\substack{m_{h}\in \mathbb{Z}\\ |m_{h}|\lesssim T_{\text{max}}}} \frac{1}{|m_{h}|+1}\right) \lesssim T^{n}_{\text{max}} (\ln(\alpha^{-1}))^{n}\lesssim L^{O(n\theta)}T^{n}_{\text{max}}. \end{split} \end{equation} Here we use the fact that $\sum_{\substack{j\in \mathbb{Z}\\ |j|\lesssim T_{\text{max}}}} \frac{1}{|j|+1}=O(\ln(\alpha^{-1}))$. Now we prove \eqref{eq.lemTpvarianceclaim1}.
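As an aside, the bound $\sum_{|j|\lesssim T_{\text{max}}}\frac{1}{|j|+1}$ used in the special case above grows only logarithmically in the cutoff, which can be spot-checked numerically; a minimal sketch (the cutoff $T$ stands in for $T_{\text{max}}$, and the margin constant $3$ is an arbitrary safe choice):

```python
import math

def harmonic_like(T):
    """Compute the sum over |m| <= T of 1/(|m|+1)."""
    return sum(1.0 / (abs(m) + 1) for m in range(-T, T + 1))

# The sum behaves like 2*ln(T) + O(1): it stays within a constant
# multiple of ln(T) across several decades of the cutoff.
for T in (10**3, 10**4, 10**5):
    s = harmonic_like(T)
    assert math.log(T) < s < 3 * math.log(T)
```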
We just need to show that \begin{equation}\label{eq.lemTpvarianceEulerMac} \sum_{\substack{m_{h}\in \mathbb{Z}\\ |m_{h}|\lesssim T_{\text{max}}}} \prod_{h=1}^{n}\frac{1}{|c(\{m_{h}\})_{h}|+1}\lesssim L^{O(n\theta)}. \end{equation} By the Euler--Maclaurin formula \eqref{eq.EulerMaclaurin} and the change of variables formula, we get \begin{equation} \begin{split} \sum_{\substack{m_{h}\in \mathbb{Z}\\ |m_{h}|\lesssim T_{\text{max}}}} \prod_{h=1}^{n}\frac{1}{|c(\{m_{h}\})_{h}|+1}\lesssim& \int_{|m_{h}|\lesssim T_{\text{max}}} \prod_{h=1}^{n}\frac{1}{|c(\{m_{h}\})_{h}|+1}\prod_{h=1}^{n} dm_{h} \\ =& \int_{c(\{|m_{h}|\lesssim T_{\text{max}}\})} \prod_{h=1}^{n}\frac{1}{|m_{h}|+1}|\text{det}\ c|^{-1}\prod_{h=1}^{n} dm_{h} \\ \lesssim &\prod_{h=1}^{n}\int_{|m_{h}|\lesssim T_{\text{max}}} \frac{1}{|m_{h}|+1} dm_{h} \\ \lesssim & (\ln(\alpha^{-1}))^{n}\lesssim L^{O(n\theta)}. \end{split} \end{equation} This completes the proof of the claim and thus the proof of the lemma. \end{proof} Now we prove Proposition \ref{prop.treetermsupperbound}. To start with, recall the large deviation estimate for Gaussian polynomials. \begin{lem}[Large deviation for Gaussian polynomials]\label{lem.largedev} Let $\{\eta_k(\omega)\}$ be i.i.d. complex Gaussian variables with mean $0$ and variance $1$. Let $F=F(\omega)$ be a degree $n$ polynomial of $\{\eta_k(\omega)\}$ defined by \begin{equation}\label{indp} F(\omega)=\sum_{k_1,\cdots,k_n}a_{k_1\cdots k_n}\prod_{j=1}^n\eta_{k_j}^{\iota_j}, \end{equation} where $a_{k_1\cdots k_n}$ are constants. Then we have \begin{equation}\label{largedevest}\mathbb{P}\left(|F(\omega)|\geq A\cdot \left(\mathbb{E}|F(\omega)|^2\right)^{\frac{1}{2}}\right)\leq Ce^{-cA^{\frac{2}{n}}}. \end{equation} \end{lem} \begin{proof} This is a corollary of the hypercontractivity of the Ornstein--Uhlenbeck semigroup; a good reference for this topic is \cite{Oh}, and \eqref{largedevest} is equivalent to (B.9) there.
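For orientation, note that in the linear case $n=1$ (say with $\iota_1=1$) no hypercontractivity is needed: $F(\omega)=\sum_{k}a_{k}\eta_{k}$ is itself a complex Gaussian with $\mathbb{E}|F(\omega)|^2=\sum_{k}|a_{k}|^2$, so
\begin{equation*}
\mathbb{P}\left(|F(\omega)|\geq A\left(\sum_{k}|a_{k}|^2\right)^{\frac{1}{2}}\right)\leq Ce^{-cA^{2}},
\end{equation*}
which is \eqref{largedevest} with $n=1$; for general $n$ the exponent weakens to $A^{\frac{2}{n}}$.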
\end{proof} Proposition \ref{prop.treetermsupperbound} is a corollary of the above large deviation estimate and Proposition \ref{prop.treetermsvariance}. \begin{proof}[Proof of Proposition \ref{prop.treetermsupperbound}] Because $(\mathcal{J}_{T})_{k}$ are Gaussian polynomials, we can take $F(\omega)=(\mathcal{J}_{T})_{k}$ in Lemma \ref{lem.largedev} with $A=L^{\frac{n}{2}\theta}$. Then the bound \begin{equation} |(\mathcal{J}_{T})_{k}(t)|\lesssim L^{\frac{n}{2}\theta} \sqrt{\mathbb{E}|(\mathcal{J}_T)_k|^2} \end{equation} fails with probability at most $Ce^{-c(L^{\frac{n}{2}\theta})^{\frac{2}{n}}}=Ce^{-cL^{\theta}}$; by Definition \ref{def.Lcertainly}, the above inequality therefore holds $L$-certainly. Since by Proposition \ref{prop.treetermsvariance}, $\mathbb{E}|(\mathcal{J}_T)_k|^2\lesssim L^{O(l(T)\theta)} \rho^{2l(T)}$, we get \begin{equation}\label{eq.proptreetermsupperbound1} |(\mathcal{J}_{T})_{k}(t)|\lesssim L^{O(l(T)\theta)} \rho^{l(T)},\qquad \textit{L-certainly}. \end{equation} \eqref{eq.proptreetermsupperbound1} is very similar to the final goal \eqref{eq.treetermsupperbound}, except for the $\sup_t$ and $\sup_k$ in front. In what follows we apply the standard epsilon-net and union-bound method to remove these two suprema. If $|t-t'|\lesssim \rho^{l(T)}L^{-M}$, it is not hard to show that $|(\mathcal{J}_{T})_{k}(t)-(\mathcal{J}_{T})_{k}(t')|\lesssim \rho^{l(T)}$. Therefore, if $\sup_{i} \sup_{k} |(\mathcal{J}_{T})_{k}(i\rho^{l(T)}L^{-M})|\lesssim L^{O(l(T)\theta)} \rho^{l(T)}$, then $\sup_{t} \sup_{k} |(\mathcal{J}_{T})_{k}(t)|\lesssim L^{O(l(T)\theta)} \rho^{l(T)}$ and the proof is completed.
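Before running the union bound, note that the number of points in the net is at most polynomial in $L$. Assuming, as is implicit in the exponents used below, that $T_{\text{max}}\rho^{-l(T)}\lesssim L^{M-1}$, we have
\begin{equation*}
\#\left(\mathbb{Z}\cap [0, T_{\text{max}}\rho^{-l(T)}L^{M}]\right)\cdot \#\left(\mathbb{Z}_{L}\cap [0,1]\right)\lesssim T_{\text{max}}\rho^{-l(T)}L^{M}\cdot L\lesssim L^{2M},
\end{equation*}
so the union over the net costs at most a factor $L^{2M}$, which is negligible against the single-point failure probability $e^{-cL^{\theta}}$.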
Notice that \begin{equation}\label{eq.unionbound} \begin{split} &\mathbb{P}\left(\sup_{i\in \mathbb{Z}} \sup_{k} |(\mathcal{J}_{T})_{k}(i\rho^{l(T)}L^{-M})|\gtrsim L^{O(l(T)\theta)} \rho^{l(T)}\right) \\ =&\mathbb{P}\left(\bigcup_{i\in \mathbb{Z}\cap [0, T_{\text{max}}\rho^{-l(T)}L^{M}]}\bigcup_{k\in \mathbb{Z}_{L}\cap [0,1]}\{ |(\mathcal{J}_{T})_{k}(i\rho^{l(T)}L^{-M})|\gtrsim L^{O(l(T)\theta)} \rho^{l(T)}\}\right) \\ \lesssim & \sum_{i\in \mathbb{Z}\cap [0, T_{\text{max}}\rho^{-l(T)}L^{M}]}\sum_{k\in \mathbb{Z}_{L}\cap [0,1]}\mathbb{P}\left( |(\mathcal{J}_{T})_{k}(i\rho^{l(T)}L^{-M})|\gtrsim L^{O(l(T)\theta)} \rho^{l(T)}\right) \\ \lesssim & \sum_{i\in \mathbb{Z}\cap [0, T_{\text{max}}\rho^{-l(T)}L^{M}]}\sum_{k\in \mathbb{Z}_{L}\cap [0,1]} e^{-O(L^{\theta})}\lesssim L^{2M}e^{-O(L^{\theta})}=e^{-O(L^{\theta})} \end{split} \end{equation} Here in the second line the two ranges $[0, T_{\text{max}}\rho^{-l(T)}L^{M}]$ and $[0,1]$ of $i$ and $k$ come from the fact that $t=i\rho^{l(T)}L^{-M}\lesssim T_{\text{max}}$ and $(\mathcal{J}_{T})_{k}=0$ for $|k|\gtrsim 1$. In the last line we can replace each probability by $e^{-O(L^{\theta})}$ because the estimate $|(\mathcal{J}_{T})_{k}(t)|\lesssim L^{O(l(T)\theta)} \rho^{l(T)}$ holds true $L$-certainly. This completes the proof of Proposition \ref{prop.treetermsupperbound}. \end{proof} \subsection{Norm estimate of random matrices} \label{sec.randommatrices} In this section, we prove Proposition \ref{prop.operatorupperbound}. Remember that by definition of $\mathcal{P}_{T}$ and $\mathcal{T}$, \begin{equation}\label{eq.formulaP_T} \mathcal{P}_{T}(w)=\mathcal{T}(\mathcal{J}_{T},w)=\frac{i\lambda}{L^{d}} \sum\limits_{S(k_1,k_2,k)=0}\int^{t}_0k_{x}\mathcal{J}_{T,k_1} w_{k_2}e^{i s\Omega(k_1,k_2,k)- \nu|k|^2(t-s)} ds. \end{equation} \subsubsection{Dyadic decomposition of $\mathcal{P}_{T}$} Remember that we have the dyadic decomposition $(0,1]= \bigcup_{\tau=0}^{\infty}[2^{-\tau-1},2^{-\tau}]$.
We can then construct a dyadic decomposition $\mathcal{P}_{T}=\sum_{\tau=0}^{\infty} \mathcal{P}^{\tau}_{T}$. Here $\mathcal{P}_T^{\tau}$ is defined by the following formula. \begin{equation} \mathcal{P}_T^{\tau}(w)=\frac{i\lambda}{L^{d}} \sum\limits_{S(k_1,k_2,k)=0}\int_{(t-s)/T_{\text{max}}\in [2^{-\tau-1},2^{-\tau}]}k_{x}\mathcal{J}_{T,k_1} w_{k_2}e^{i s\Omega(k_1,k_2,k)- \nu|k|^2(t-s)} ds \end{equation} We also introduce the bilinear operator $\mathcal{T}^{\tau}(\phi,\phi)_k$ \begin{equation} \mathcal{T}^{\tau}(\phi,\phi)_k=\frac{i\lambda}{L^{d}} \sum\limits_{S(k_1,k_2,k)=0}\int_{(t-s)/T_{\text{max}}\in [2^{-\tau-1},2^{-\tau}]}k_{x}\phi_{k_1} \phi_{k_2}e^{i s\Omega(k_1,k_2,k)- \nu|k|^2(t-s)} ds \end{equation} Proposition \ref{prop.operatorupperbound} is a corollary of the following proposition. \begin{prop}\label{prop.operatorupperbound'} Let $\rho=\alpha\, T^{\frac{1}{2}}_{\text{max}}$ and let $\mathcal{P}^{\tau}_{T}$ be the operator defined above. Define the space $X^{p}_{L^{2M}}=\{w\in X^p: w_k=0\text{ if }|k|\gtrsim L^{2M}\}$ and the norm $||w||_{X^{p}_{L^{2M}}}=\sup_{|k|\lesssim L^{2M}} \langle k\rangle^{p} |w_k|$. Then for any sequence of trees $\{T_1,\cdots,T_K\}$ with $l(T_j)\le N$ for all $j$, we have $L$-certainly the operator bound \begin{equation}\label{eq.operatornormtau} \left|\left|\sum_{\tau_1,\cdots,\tau_K}\prod_{j=1}^K\mathcal{P}^{\tau_j}_{T_j}\right|\right|_{L_t^{\infty}X^{p}_{L^{2M}}\rightarrow L_t^{\infty}X^{p}}\le L^{O\left(1+\theta \sum_{j=1}^K l(T_j)\right)} \rho^{\sum_{j=1}^K l(T_j)}.
\end{equation}
\end{prop} \begin{proof}[Proof of Proposition \ref{prop.operatorupperbound}] By Proposition \ref{prop.operatorupperbound'}, we have \begin{equation} \left|\left|\prod_{j=1}^K\mathcal{P}_{T_j}\right|\right|_{L_t^{\infty}X^p_{L^{2M}}\rightarrow L_t^{\infty}X^p}=\left|\left|\sum_{\tau_1,\cdots,\tau_K}\prod_{j=1}^K\mathcal{P}^{\tau_j}_{T_j}\right|\right|_{L_t^{\infty}X^{p}_{L^{2M}}\rightarrow L_t^{\infty}X^{p}}\le L^{O\left(1+\theta \sum_{j=1}^K l(T_j)\right)} \rho^{\sum_{j=1}^K l(T_j)}. \end{equation} Define $\left(X^{p}_{L^{2M}}\right)^{\perp}=\{w\in X^p: w_k=0\text{ if }|k|\lesssim L^{2M}\}$. To prove Proposition \ref{prop.operatorupperbound}, it suffices to show that for all $w\in \left(X^{p}_{L^{2M}}\right)^{\perp}$, \begin{equation}\label{eq.prop2.8eq1} \left|\left|\prod_{j=1}^K\mathcal{P}_{T_j}w\right|\right|_{L_t^{\infty}X^p}\lesssim L^{O\left(1+\theta \sum_{j=1}^K l(T_j)\right)} \rho^{\sum_{j=1}^K l(T_j)} \left|\left|w\right|\right|_{L_t^{\infty}X^p}. \end{equation} By \eqref{eq.formulaP_T}, if $w\in \left(X^{p}_{L^{2M}}\right)^{\perp}$, then $\mathcal{P}_{T_j} w\in \left(X^{p}_{L^{2M}}\right)^{\perp}$ if we enlarge the constant in $|k|\lesssim L^{2M}$ in the definition of $\left(X^{p}_{L^{2M}}\right)^{\perp}$. Therefore, if we want to prove \eqref{eq.prop2.8eq1}, it suffices to prove \begin{equation}\label{eq.prop2.8eq2} \left|\left|\mathcal{P}_{T_j}w\right|\right|_{L_t^{\infty}X^p}\lesssim L^{O(1+l(T_j)\theta)} \rho^{l(T_j)} \left|\left|w\right|\right|_{L_t^{\infty}X^p}, \end{equation} for all $w\in \left(X^{p}_{L^{2M}}\right)^{\perp}$.
By \eqref{eq.formulaP_T}, \begin{equation}\label{eq.prop2.8eq3} \begin{split} |\mathcal{P}_{T_j}(w)_k|\le &\frac{|\lambda|}{L^{d}} \sum\limits_{S(k_1,k_2,k)=0}\int^{t}_0|k_{x}||\mathcal{J}_{T_j,k_1}| |w_{k_2}|e^{- \nu|k|^2(t-s)} ds \\ \lesssim& L^{O(l(T_j)\theta)} \rho^{l(T_j)}\frac{|\lambda|}{L^{d}}|k_{x}| \int^{t}_0e^{- \nu|k|^2(t-s)} ds \sum_{k_2:|k_2-k|\lesssim 1} \langle k_2\rangle^{-p} \\ \lesssim& L^{O(l(T_j)\theta)} \rho^{l(T_j)}\frac{|\lambda|}{L^{d}} |k_{x}| \nu^{-1} \langle k\rangle^{-2} \sum_{k_2:|k_2-k|\lesssim 1} \langle k_2\rangle^{-p} \\ \lesssim& L^{O(l(T_j)\theta)} \rho^{l(T_j)}\frac{|\lambda|}{L^{d}} \nu^{-1} \langle k\rangle^{-1} \langle k\rangle^{-p} \lesssim L^{O(l(T_j)\theta)} \rho^{l(T_j)}\frac{|\lambda|}{L^{d}} \nu^{-1} L^{-2M} \langle k\rangle^{-p} \\ \lesssim& L^{-M} \rho^{l(T_j)} \langle k\rangle^{-p} \end{split} \end{equation} Here in the second inequality we apply Proposition \ref{prop.treetermsupperbound}. In the third line we use the fact that $\int^{t}_0e^{- \nu|k|^2(t-s)} ds\lesssim \nu^{-1} \langle k\rangle^{-2}$. In the fourth line we use the fact that $\sum_{k_2:|k_2-k|\lesssim 1} \langle k_2\rangle^{-p}\lesssim \langle k\rangle^{-p}$ and $|k|\gtrsim L^{2M}$ (since if $|k|\lesssim L^{2M}$ then $|\mathcal{P}_{T_j}(w)_k|$ vanishes and there is nothing to prove). \eqref{eq.prop2.8eq3} implies that $ \left|\left|\mathcal{P}_{T_j}w\right|\right|_{L_t^{\infty}X^p}\lesssim L^{-M} \rho^{l(T_j)} \left|\left|w\right|\right|_{L_t^{\infty}X^p}\lesssim L^{O(1+l(T_j)\theta)} \rho^{l(T_j)} \left|\left|w\right|\right|_{L_t^{\infty}X^p}$, which proves \eqref{eq.prop2.8eq2} and thus \eqref{eq.operatornorm'}. We now prove \eqref{eq.operatornorm}.
Because $\mathcal{L}=\sum_{1\le l(T)\le N} \mathcal{P}_{T}$ and $\mathcal{L}^K=\sum_{1\le l(T_1),\cdots, l(T_K)\le N} \mathcal{P}_{T_1}\cdots\mathcal{P}_{T_K}$, by \eqref{eq.operatornorm'} \begin{equation} \left|\left|\mathcal{L}^K\right|\right|_{L_t^{\infty}X^p\rightarrow L_t^{\infty}X^p}\lesssim L^{O(1)} \sum_{1\le l(T_1),\cdots, l(T_K)\le N} (L^{O(\theta)}\rho)^{\sum_{j=1}^K l(T_j)}\le L^{O\left(1+\theta \sum_{j=1}^K l(T_j)\right)} \rho^{K} \end{equation} Here in the last step we use the fact that $l(T_j)\ge 1$ for all $j$ and that there are only boundedly many trees satisfying $1\le l(T_1),\cdots, l(T_K)\le N$. This completes the proof of Proposition \ref{prop.operatorupperbound}. \end{proof} \subsubsection{Formulas for products of random matrices $\mathcal{P}^{\tau_j}_{T_j}$} In order to prove Proposition \ref{prop.operatorupperbound'}, it is very helpful to find a good formula for $\prod_{j=1}^K\mathcal{P}^{\tau_j}_{T_j}$, and this is the main goal of this section. $\mathcal{P}^{\tau}_{T}$ looks almost the same as $\mathcal{P}_{T}$ except for the limits in the time integral, so in the rest of this section we do not stress their difference. Let's first find a tree representation for $\mathcal{P}^{\tau}_{T}$ or $\mathcal{P}_{T}$ as in section \ref{sec.connection}. By \eqref{eq.treeterm}, we know that $\mathcal{J}_{T}=\mathcal{T}(\mathcal{J}_{T_{\mathfrak{n}_1}}, \mathcal{J}_{T_{\mathfrak{n}_2}})$ corresponds to the tree $T$ in which the subtrees rooted at the two children of the root node are $T_{\mathfrak{n}_1}$ and $T_{\mathfrak{n}_2}$. Taking $T_{\mathfrak{n}_2}$ to be a one-node tree, as in the left tree in Figure \ref{fig.T(J,xi)andT(J,w)}, this graph represents the term $\mathcal{T}(\mathcal{J}_{T_{\mathfrak{n}_1}}, \xi)$. The right tree in Figure \ref{fig.T(J,xi)andT(J,w)} represents the term $\mathcal{T}(\mathcal{J}_{T_{\mathfrak{n}_1}}, w)$, because as in section \ref{sec.connection}, a $\Box$ node can represent a function $w$ (in section \ref{sec.connection} the function is $\phi$).
\begin{figure}[H] \centering \scalebox{0.5}{ \begin{tikzpicture}[level distance=80pt, sibling distance=100pt] \node[] at (0,0) (1) {} child {node[fillcirc] (2) {} child {node[draw, circle, minimum size=1cm, scale=2] (3) {$T_{\mathfrak{n}_1}$}} child {node[fillstar] (4) {}} }; \draw[-{Stealth[length=5mm, width=3mm]}] (1) -- (2); \draw[-{Stealth[length=5mm, width=3mm]}] (2) -- (3); \draw[-{Stealth[length=5mm, width=3mm]}] (2) -- (4); \node[] at (12,0) (1) {} child {node[fillcirc] (2) {} child {node[draw, circle, minimum size=1cm, scale=2] (3) {$T_{\mathfrak{n}_1}$}} child {node[draw, minimum size=0.4cm] (4) {}} }; \draw[-{Stealth[length=5mm, width=3mm]}] (1) -- (2); \draw[-{Stealth[length=5mm, width=3mm]}] (2) -- (3); \draw[-{Stealth[length=5mm, width=3mm]}] (2) -- (4); \end{tikzpicture} } \caption{Graphical representations of $\mathcal{T}(\mathcal{J}_{T_{\mathfrak{n}_1}}, \xi)$ and $\mathcal{T}(\mathcal{J}_{T_{\mathfrak{n}_1}}, w)$.} \label{fig.T(J,xi)andT(J,w)} \end{figure} Let's then find a tree representation for $\prod_{j=1}^K\mathcal{P}^{\tau_j}_{T_j}$ or $\prod_{j=1}^K\mathcal{P}_{T_j}$. Recall the expansion process in section \ref{sec.connection}: the replacement of $\Box$ by a branching indicates the substitution of $\phi$ by $\mathcal{T}(\xi, \xi)$. The action of composition $\mathcal{P}_{T_1}\circ \mathcal{P}_{T_2}(w)=\mathcal{T}(\mathcal{J}_{T_{1}}, \mathcal{T}(\mathcal{J}_{T_{2}}, w))$ is the substitution of $\cdot$ by $\mathcal{T}(\mathcal{J}_{T_{2}}, w)$ in $\mathcal{T}(\mathcal{J}_{T_{1}},\cdot)$. 
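Concretely, unrolling \eqref{eq.formulaP_T} twice (the inner operator is evaluated at the intermediate time $s$ and mode $k_2$; here $k_3$, $k_4$, $s'$ denote the inner summation and integration variables), the case $K=2$ reads
\begin{equation*}
\begin{split}
\mathcal{P}_{T_1}\circ \mathcal{P}_{T_2}(w)_k=\left(\frac{i\lambda}{L^{d}}\right)^2\sum_{S(k_1,k_2,k)=0}\ \sum_{S(k_3,k_4,k_2)=0}\int^{t}_0\!\!\int^{s}_0 &k_{x}\, k_{2,x}\, \mathcal{J}_{T_1,k_1}(s)\, \mathcal{J}_{T_2,k_3}(s')\, w_{k_4}(s') \\
&e^{i s\Omega(k_1,k_2,k)-\nu|k|^2(t-s)}\, e^{i s'\Omega(k_3,k_4,k_2)-\nu|k_2|^2(s-s')}\, ds'\, ds,
\end{split}
\end{equation*}
with one time integral per composition, nested according to the ordering of the trees.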
As in Figure \ref{fig.substitution}, if $\mathcal{P}_{T_1}=\mathcal{T}(\mathcal{J}_{T_{1}},\cdot)$ is represented by the left tree, then as an analog that a $\Box$ node is replaced by a branching, the substitution of $\cdot$ by $\mathcal{T}(\mathcal{J}_{T_{2}}, w)$ should correspond to the operation that the $\Box$ node in the left tree is replaced by the middle tree corresponding to $\mathcal{T}(\mathcal{J}_{T_{2}}, w)$, and the finally resulting right tree should correspond to $\mathcal{P}_{T_1}\circ \mathcal{P}_{T_2}(w)$. \begin{figure}[H] \centering \scalebox{0.5}{ \begin{tikzpicture}[level distance=80pt, sibling distance=100pt] \node[] at (0,0) (1) {} child {node[fillcirc] (2) {} child {node[draw, circle, minimum size=1cm, scale=2] (3) {$T_{1}$}} child {node[draw, minimum size=0.4cm] (4) {}} }; \draw[-{Stealth[length=5mm, width=3mm]}] (1) -- (2); \draw[-{Stealth[length=5mm, width=3mm]}] (2) -- (3); \draw[-{Stealth[length=5mm, width=3mm]}] (2) -- (4); \node[] at (8,0) (1) {} child {node[fillcirc] (2) {} child {node[draw, circle, minimum size=1cm, scale=2] (3) {$T_{2}$}} child {node[draw, minimum size=0.4cm] (4) {}} }; \draw[-{Stealth[length=5mm, width=3mm]}] (1) -- (2); \draw[-{Stealth[length=5mm, width=3mm]}] (2) -- (3); \draw[-{Stealth[length=5mm, width=3mm]}] (2) -- (4); \node[] at (16,1.5) (1) {} child {node[fillcirc] (2) {} child {node[draw, circle, minimum size=1cm, scale=2] (3) {$T_{1}$}} child {node[fillcirc] (4) {} child {node[draw, circle, minimum size=1cm, scale=2] (5) {$T_{2}$}} child {node[draw, minimum size=0.4cm] (6) {}} } }; \draw[-{Stealth[length=5mm, width=3mm]}] (1) -- (2); \draw[-{Stealth[length=5mm, width=3mm]}] (2) -- (3); \draw[-{Stealth[length=5mm, width=3mm]}] (2) -- (4); \draw[-{Stealth[length=5mm, width=3mm]}] (4) -- (5); \draw[-{Stealth[length=5mm, width=3mm]}] (4) -- (6); \end{tikzpicture} } \caption{The substitution process.} \label{fig.substitution} \end{figure} In conclusion, $\mathcal{P}_{T_1}\circ \mathcal{P}_{T_2}(w)$ 
corresponds to the right tree in Figure \ref{fig.substitution} and more generally, $\prod_{j=1}^K\mathcal{P}_{T_j}(w)$ (or $\prod_{j=1}^K\mathcal{P}^{\tau_j}_{T_j}(w)$) should correspond to the tree in Figure \ref{fig.productformula}. \begin{figure}[H] \centering \scalebox{0.4}{ \begin{tikzpicture}[level distance=80pt, sibling distance=100pt] \node[] at (0,0) (1) {} child {node[fillcirc] (2) {} child {node[draw, circle, minimum size=1cm, scale=2] (3) {$T_{1}$}} child {node[fillcirc] (4) {} child {node[draw, circle, minimum size=1cm, scale=2] (5) {$T_{2}$}} child {node[] (6) {}} } }; \node[scale =3] at (3,-10.5) {$\cdots$}; \draw[-{Stealth[length=5mm, width=3mm]}] (1) -- (2); \draw[-{Stealth[length=5mm, width=3mm]}] (2) -- (3); \draw[-{Stealth[length=5mm, width=3mm]}] (2) -- (4); \draw[-{Stealth[length=5mm, width=3mm]}] (4) -- (5); \draw[-{Stealth[length=5mm, width=3mm]}] (4) -- (6); \node[fillcirc] at (5,-12) (11) {} child {node[draw, circle, minimum size=1cm, scale=2] (12) {$T_{K}$}} child {node[draw, minimum size=0.4cm] (13) {} }; \node[scale =2] at (0.7,-2.8) {$s_1$}; \node[scale =2] at (2.5,-5.6) {$s_2$}; \node[scale =2] at (5.8,-12) {$s_K$}; \end{tikzpicture} } \caption{Picture of $T_1\circ \cdots \circ T_{K}$} \label{fig.productformula} \end{figure} We introduce the following definition. \begin{defn} \begin{enumerate} \item \textbf{Definition of $T_1\circ \cdots \circ T_{K}$:} We define $T_1\circ \cdots \circ T_{K}$ to be the tree in Figure \ref{fig.productformula}. \item \textbf{Substitution nodes:} A node in a tree $T$ with $\Box$ nodes is defined to be a \underline{substitution node} if it is an ancestor of some $\Box$ node. To each substitution node we assign a number $\tau$, called its index. In Figure \ref{fig.productformula}, $s_1,\cdots,s_{K}$ are all the substitution nodes in $T_1\circ \cdots \circ T_{K}$ and we assign indices $\tau_1,\cdots,\tau_{K}$ to them. Notice that $s_1$ is the root $\mathfrak{r}$.
\end{enumerate} \end{defn} Because the trees in this section contain $\Box$ nodes and substitution nodes, we propose the following generalization of Definition \ref{def.treeterms} of tree terms. \begin{defn}\label{def.treetermsoperator} Given a binary tree $T$ with $\Box$ nodes and substitution nodes, we inductively define the quantity $\mathcal{J}_T$ by: \begin{equation}\label{eq.treetermoperator} \mathcal{J}_T= \begin{cases} \xi, \qquad\qquad\quad\ \ \textit{ if $T$ has only one node $\star$.} \\ w, \qquad\qquad\quad\ \textit{ if $T$ has only one node $\Box$.} \\ \mathcal{T}^{\tau}(\mathcal{J}_{T_{\mathfrak{n}_1}}, \mathcal{J}_{T_{\mathfrak{n}_2}}), \textit{ if the root of $T$ is a substitution node with index $\tau$.} \\ \mathcal{T}(\mathcal{J}_{T_{\mathfrak{n}_1}}, \mathcal{J}_{T_{\mathfrak{n}_2}}),\ \textit{ otherwise.} \end{cases} \end{equation} Here $\mathfrak{n}_1$, $\mathfrak{n}_2$ are the two children of the root node $\mathfrak{r}$ and $T_{\mathfrak{n}_1}$, $T_{\mathfrak{n}_2}$ are the subtrees of $T$ rooted at these nodes. \end{defn} Then we have the following formula for $\prod_{j=1}^K\mathcal{P}^{\tau_j}_{T_j}(w)$. \begin{lem} With the above definitions of $T_1\circ \cdots \circ T_{K}$ and $\mathcal{J}_T$, we have \begin{equation}\label{eq.operatoreqsimple} \prod_{j=1}^K\mathcal{P}^{\tau_j}_{T_j}(w)=\mathcal{J}_{T_1\circ \cdots \circ T_{K}} \end{equation} \end{lem} \begin{proof} This lemma follows from the above discussion. \end{proof} To get a good upper bound, we need a better formula for $\prod_{j=1}^K\mathcal{P}^{\tau_j}_{T_j}(w)$, which is an analog of Lemma \ref{lem.treeterms}. \begin{lem}\label{lem.treetermsoperator} (1) We use the same notation as in Lemma \ref{lem.treeterms}. Given a tree $T$ with $\Box$ nodes and substitution nodes, assume that the root $\mathfrak{r}$ is a substitution node of index $\tau_{1}$.
Let $\mathcal{J}_T$ be the terms defined in Definition \ref{def.treetermsoperator}. Then their Fourier coefficients $\mathcal{J}_{T,k}$ are degree-$l$ polynomials of $\xi$ and $w$, given by the following formula \begin{equation}\label{eq.coeftermoperator} \mathcal{J}_{T,k}=\left(\frac{i\lambda}{L^{d}}\right)^l\sum_{k_1,\, k_2,\, \cdots,\, k_{l+1}} \int_{\cup_{\mathfrak{n}\in T_{\text{in}}} A_{\mathfrak{n}}} e^{\sum_{\mathfrak{n}\in T_{\text{in}}} it_{\mathfrak{n}}\Omega_{\mathfrak{n}}-\nu(t_{\widehat{\mathfrak{n}}}-t_{\mathfrak{n}})|k_{\mathfrak{e}}|^2} \prod_{j=1}^{l+1} (\xi|w)_{k_j} \prod_{\mathfrak{n}\in T_{\text{in}}} dt_{\mathfrak{n}} \ \delta_{\cap_{\mathfrak{n}\in T_{\text{in}}} \{S_{\mathfrak{n}}=0\}}\ \prod_{\mathfrak{e}\in T_{\text{in}}}\iota_{\mathfrak{e}}k_{\mathfrak{e},x} \end{equation} Here $\iota$, $(\xi|w)_{k_j}$, $A_{\mathfrak{n}}$, $S_{\mathfrak{n}}$, $\Omega_{\mathfrak{n}}$ are defined by \begin{equation} \iota_{\mathfrak{e}}=\begin{cases} +1 \qquad \textit{if $\mathfrak{e}$ points inwards to $\mathfrak{n}$} \\ -1 \qquad \textit{if $\mathfrak{e}$ points outwards from $\mathfrak{n}$} \end{cases} \end{equation} \begin{equation} (\xi|w)_{k_j}=\begin{cases} \xi_{k_j} \qquad\ \textit{if the $j$-th leaf node is a $\star$ node} \\ w_{k_j} \qquad \textit{if the $j$-th leaf node is a $\Box$ node} \end{cases} \end{equation} \begin{equation} A_{\mathfrak{n}}=\left\{ \begin{aligned} &\{t_{\mathfrak{r}}\le t,\ (t-t_{\mathfrak{r}})/T_{\text{max}}\in [2^{-\tau_{1}-1},2^{-\tau_{1}}]\} && \textit{if $\mathfrak{n}$ is the root $\mathfrak{r}$ } \\ &\{t_{\mathfrak{n}_1},\, t_{\mathfrak{n}_2}\le t_{\mathfrak{n}}\} && \textit{if $\mathfrak{n}\ne \mathfrak{r}$ and is not a substitution node} \\ &\{t_{\mathfrak{n}_1},\, t_{\mathfrak{n}_2}\le t_{\mathfrak{n}},\ (t_{\widehat{\mathfrak{n}}}-t_{\mathfrak{n}})/T_{\text{max}}\in [2^{-\tau-1},2^{-\tau}]\} &&\textit{if $\mathfrak{n}\ne \mathfrak{r}$ is an index $\tau$ substitution node} \end{aligned}\right.
\end{equation} \begin{equation}\label{eq.defnS_noperator} S_{\mathfrak{n}}=\iota_{\mathfrak{e}_1}k_{\mathfrak{e}_1}+\iota_{\mathfrak{e}_2}k_{\mathfrak{e}_2}+\iota_{\mathfrak{e}}k_{\mathfrak{e}} \end{equation} \begin{equation} \Omega_{\mathfrak{n}}=\iota_{\mathfrak{e}_1}\Lambda_{k_{\mathfrak{e}_1}}+\iota_{\mathfrak{e}_2}\Lambda_{k_{\mathfrak{e}_2}}+\iota_{\mathfrak{e}}\Lambda_{k_{\mathfrak{e}}} \end{equation} For the root node $\mathfrak{r}$, we impose the constraint that $k_{\mathfrak{r}}=k$ and $t_{\widehat{\mathfrak{r}}}=t$ (notice that $\mathfrak{r}$ does not have a parent, so $\widehat{\mathfrak{r}}$ is not otherwise defined). (2) Define $T=T_1\circ \cdots \circ T_{K}$; by definition it has exactly one $\Box$ leaf. Assume that there are $l+1$ leaves in $T$ and label all $\star$ leaves by $1$, $\cdots$, $l$; then $l=\sum_{j=1}^K l(T_j)$. As a corollary of (1), we have the following formula for the Fourier coefficients of $\prod_{j=1}^K\mathcal{P}^{\tau_j}_{T_j}$. \begin{equation} \left(\prod_{j=1}^K\mathcal{P}^{\tau_j}_{T_j}(w)\right)_{k}(t)=\sum_{k'}\int_0^t H^{\tau_1\cdots \tau_{K}}_{Tkk'}(t,s) w_{k'}(s) ds \end{equation} The kernel $H^{\tau_1\cdots \tau_{K}}_{Tkk'}$ is given by \begin{equation} \begin{split} H^{\tau_1\cdots \tau_{K}}_{Tkk'}(t,s)=\left(\frac{i\lambda}{L^{d}}\right)^l\sum_{k_1,\, k_2,\, \cdots,\, k_{l}} H^{\tau_1\cdots \tau_{K}}_{Tk_1\cdots k_{l}kk'} \xi_{k_1}\cdots \xi_{k_{l}} \end{split} \end{equation} and the coefficients $H^{\tau_1\cdots \tau_{K}}_{Tk_1\cdots k_{l}kk'}$ of the kernel are given by \begin{equation} H^{\tau_1\cdots \tau_{K}}_{Tk_1\cdots k_{l}kk'}(t,s)=\int_{\cup_{\mathfrak{n}\in T_{\text{in}}} B_{\mathfrak{n}}} e^{\sum_{\mathfrak{n}\in T_{\text{in}}} it_{\mathfrak{n}}\Omega_{\mathfrak{n}}-\nu(t_{\widehat{\mathfrak{n}}}-t_{\mathfrak{n}})|k_{\mathfrak{e}}|^2} \prod_{\mathfrak{n}\in T_{\text{in}}} dt_{\mathfrak{n}} \ \delta_{\cap_{\mathfrak{n}\in T_{\text{in}}} \{S_{\mathfrak{n}}=0\}}\ \prod_{\mathfrak{e}\in
T_{\text{in}}}\iota_{\mathfrak{e}}k_{\mathfrak{e},x} \end{equation} Here $k_1,\cdots,k_{l}$ are the variables corresponding to the $\star$ leaves, $k'$ is the variable corresponding to the $\Box$ node, and $B_{\mathfrak{n}}$ is defined by \begin{equation} B_{\mathfrak{n}}=\left\{ \begin{aligned} &\{t_{\mathfrak{r}}\le t,\ (t-t_{\mathfrak{r}})/T_{\text{max}}\in [2^{-\tau_{1}-1},2^{-\tau_{1}}]\} && \textit{if $\mathfrak{n}$ is the root $\mathfrak{r}$ } \\ &\{t_{\mathfrak{n}}\ge s\} && \textit{if $\mathfrak{n}$ is the parent of the $\Box$ node} \\ &\ &&\textit{and is not a substitution node} \\ &\{t_{\mathfrak{n}}\ge s,\ (t_{\widehat{\mathfrak{n}}}-t_{\mathfrak{n}})/T_{\text{max}}\in [2^{-\tau-1},2^{-\tau}]\} && \textit{if $\mathfrak{n}$ is the parent of the $\Box$ node} \\ &\ &&\textit{and is a substitution node of index $\tau$} \\ &\{t_{\mathfrak{n}_1},\, t_{\mathfrak{n}_2}\le t_{\mathfrak{n}},\ (t_{\widehat{\mathfrak{n}}}-t_{\mathfrak{n}})/T_{\text{max}}\in [2^{-\tau-1},2^{-\tau}]\} &&\textit{if not in the first three cases} \\ &\ &&\textit{and $\mathfrak{n}$ is a substitution node of index $\tau$} \\ &\{t_{\mathfrak{n}_1},\, t_{\mathfrak{n}_2}\le t_{\mathfrak{n}}\} && \textit{otherwise} \end{aligned}\right. \end{equation} \end{lem} \begin{proof} Lemma \ref{lem.treetermsoperator} (1) can be proved by the same method as Lemma \ref{lem.treeterms}: we can check by a direct substitution that $\mathcal{J}_T$ given by \eqref{eq.coeftermoperator} satisfies the recursive formula \eqref{eq.treetermoperator}, so it is the unique solution of that recursive formula, and this proves (1). (2) is a corollary of (1). \end{proof} We can also calculate $\prod_{j=1}^K\mathcal{P}^{\tau_j}_{T_j}(\mathcal{J}_{T})$: replacing $w$ by $\mathcal{J}_{T}$ corresponds to replacing the $\Box$ node by the tree $T$, so $\prod_{j=1}^K\mathcal{P}^{\tau_j}_{T_j}(\mathcal{J}_{T})=\mathcal{J}_{T_1\circ T_2\circ \cdots\circ T_K\circ T}$.
Since $T$ does not contain a $\Box$ node, neither does $T_1\circ T_2\circ \cdots\circ T_K\circ T$, and $\mathcal{J}_{T_1\circ T_2\circ \cdots\circ T_K\circ T}$ is a polynomial of the Gaussian variables $\xi_k$. Then we have the following lemma. \begin{lem} We have \begin{equation}\label{eq.operatoreqsimpleJ_T} \prod_{j=1}^K\mathcal{P}^{\tau_j}_{T_j}(\mathcal{J}_{T})=\mathcal{J}_{T_1\circ T_2\circ \cdots\circ T_K\circ T} \end{equation} \end{lem} \begin{proof} This lemma follows from the above discussion. \end{proof} \subsubsection{The upper bound for coefficients} In this section, we prove Lemma \ref{lem.boundcoefoperator}, which gives an upper bound for $H^{\tau_1\cdots \tau_{K}}_{Tk_1\cdots k_{l}kk'}$. This lemma is an analog of Lemma \ref{lem.boundcoef}. \begin{lem}\label{lem.boundcoefoperator} Assume that $|k_1|, \cdots, |k_{l}|\lesssim 1$. Then for $t\le T_{\text{max}}$, we have the following upper bound for $H^{\tau_1\cdots \tau_{K}}_{Tk_1\cdots k_{l}kk'}$: \begin{equation}\label{eq.boundcoefoperator} |H^{\tau_1\cdots \tau_{K}}_{Tk_1\cdots k_{l}kk'}|\lesssim \sum_{\{d_{\mathfrak{n}}\}_{\mathfrak{n}\in T_{\text{in}}}\in\{0,1\}^{l(T)}}\prod_{\mathfrak{n}\in T_{\text{in}}}\frac{2^{-\frac{\tau_{\mathfrak{n}}}{2}}}{|q_{\mathfrak{n}}|+T^{-1}_{\text{max}}}\ \delta_{\cap_{\mathfrak{n}\in T_{\text{in}}} \{S_{\mathfrak{n}}=0\}} \prod_{\mathfrak{e}\in T_{\text{in}}} p_{\mathfrak{e}}. \end{equation} Here $\tau_{\mathfrak{n}}$ is defined by \begin{equation} \tau_{\mathfrak{n}}=\left\{ \begin{aligned} &0 && \textit{if $\mathfrak{n}$ is not a substitution node} \\ &\tau_j &&\textit{if $\mathfrak{n}$ is the $j$-th substitution node} \end{aligned}\right. \end{equation} Fix a sequence $\{d_{\mathfrak{n}}\}_{\mathfrak{n}\in T_{\text{in}}}$ whose elements $d_{\mathfrak{n}}$ take boolean values in $\{0,1\}$.
We define the sequences $\{p_{\mathfrak{e}}\}_{\mathfrak{e}\in T_{\text{in}}}$, $\{q_{\mathfrak{n}}\}_{\mathfrak{n}\in T_{\text{in}}}$, $\{r_{\mathfrak{n}}\}_{\mathfrak{n}\in T_{\text{in}}}$ by the following formulas: \begin{equation}\label{eq.p_noperator} p_{\mathfrak{e}}=\frac{|k_{\mathfrak{e},x}|}{|k_{\mathfrak{e},x}|+1} \end{equation} \begin{equation}\label{eq.q_noperator} q_{\mathfrak{n}}= \begin{cases} \Omega_{\mathfrak{r}}, \qquad\qquad \textit{ if $\mathfrak{n}=$ the root $\mathfrak{r}$.} \\ \Omega_{\mathfrak{n}}+d_{\mathfrak{n}}q_{\mathfrak{n}'},\ \ \textit{ if $\mathfrak{n}\neq\mathfrak{r}$ and $\mathfrak{n}'$ is the parent of $\mathfrak{n}$.} \end{cases} \end{equation} \begin{equation}\label{eq.r_noperator} r_{\mathfrak{n}}= \begin{cases} |k_{\mathfrak{r}}|^2, \qquad\qquad \textit{ if $\mathfrak{n}=$ the root $\mathfrak{r}$.} \\ |k_{\mathfrak{n}}|^2+d_{\mathfrak{n}}r_{\mathfrak{n}'},\ \ \textit{ if $\mathfrak{n}\neq\mathfrak{r}$ and $\mathfrak{n}'$ is the parent of $\mathfrak{n}$.} \end{cases} \end{equation} \end{lem} \begin{proof} By definition, \begin{equation} H^{\tau_1\cdots \tau_{K}}_{Tk_1\cdots k_{l}kk'}(t,s)=\int_{\cup_{\mathfrak{n}\in T_{\text{in}}} B_{\mathfrak{n}}} e^{\sum_{\mathfrak{n}\in T_{\text{in}}} it_{\mathfrak{n}}\Omega_{\mathfrak{n}}-\nu(t_{\widehat{\mathfrak{n}}}-t_{\mathfrak{n}})|k_{\mathfrak{e}}|^2} \prod_{\mathfrak{n}\in T_{\text{in}}} dt_{\mathfrak{n}} \ \delta_{\cap_{\mathfrak{n}\in T_{\text{in}}} \{S_{\mathfrak{n}}=0\}}\ \prod_{\mathfrak{e}\in T_{\text{in}}}\iota_{\mathfrak{e}}k_{\mathfrak{e},x} \end{equation} For any edge $\mathfrak{e}$, assume that the two endpoints of $\mathfrak{e}$ are $\mathfrak{n}_1$ and $\mathfrak{n}_2$. If neither $\mathfrak{n}_1$ nor $\mathfrak{n}_2$ is a substitution node, then we claim that $|k_{\mathfrak{e}}|\lesssim 1$.
This is because if no endpoint of $\mathfrak{e}$ is a substitution node, then the subtree $T_{\mathfrak{e}}$ rooted at the upper endpoint of $\mathfrak{e}$ does not contain the $\Box$ node as a leaf. Applying momentum conservation (Lemma \ref{lem.freeleg}), we know that $k_{\mathfrak{e}}$ is a linear combination of $k_1,\cdots,k_{l}$. Then by $|k_1|, \cdots, |k_{l}|\lesssim 1$, we get $|k_{\mathfrak{e}}|\lesssim 1$. Since $|k_{\mathfrak{e}}|\lesssim 1$ for all such edges, we get $\left|\prod_{\mathfrak{e}\in T_{\text{in}}}\iota_{\mathfrak{e}}k_{\mathfrak{e},x}\right|\lesssim \prod^K_{j=1}|k_{s_j,x}|$. Here $s_1, \cdots, s_K$ are all the substitution nodes. As in section \ref{sec.uppcoef}, we define \begin{equation}\label{eq.defF_Toperator} F_{T}(t,\{a_{\mathfrak{n}}\}_{\mathfrak{n}\in T_{\text{in}}},\{b_{\mathfrak{n}}\}_{\mathfrak{n}\in T_{\text{in}}})=\int_{\cup_{\mathfrak{n}\in T_{\text{in}}} B_{\mathfrak{n}}} e^{\sum_{\mathfrak{n}\in T_{\text{in}}} it_{\mathfrak{n}} a_{\mathfrak{n}} - \nu(t_{\widehat{\mathfrak{n}}}-t_{\mathfrak{n}})b_{\mathfrak{n}}} \prod_{\mathfrak{n}\in T_{\text{in}}} dt_{\mathfrak{n}} \prod^K_{j=1}|k_{s_j,x}| \end{equation} Keeping the same notation as in Lemma \ref{lem.boundcoef'}, if we can show that \begin{equation}\label{eq.lemboundop1} |F_{T}(t,\{a_{\mathfrak{n}}\}_{\mathfrak{n}\in T_{\text{in}}},\{b_{\mathfrak{n}}\}_{\mathfrak{n}\in T_{\text{in}}})|\lesssim\sum_{\{d_{\mathfrak{n}}\}_{\mathfrak{n}\in T_{\text{in}}}\in\{0,1\}^{l(T)}}\prod_{\mathfrak{n}\in T_{\text{in}}}\frac{2^{-\frac{\tau_{\mathfrak{n}}}{2}}}{|q_{\mathfrak{n}}|+T^{-1}_{\text{max}}}\prod_{\mathfrak{e}\in T_{\text{in}}} p_{\mathfrak{e}} , \end{equation} then the lemma follows by taking $a_{\mathfrak{n}}=\Omega_{\mathfrak{n}}$, $b_{\mathfrak{n}}=|k_{\mathfrak{e}}|^2$ in \eqref{eq.lemboundop1}. Therefore, it suffices to prove \eqref{eq.lemboundop1}. We run an inductive integration by parts argument similar to that of Lemma \ref{lem.boundcoef'}.
If the root $\mathfrak{r}$ is not a substitution node, then exactly the same argument as in Lemma \ref{lem.boundcoef'} works. If the root $\mathfrak{r}$ is a substitution node, the upper and lower limits of the integration need to be changed. Using the same calculation as in \eqref{eq.lemboundcoef'1}, we get \begin{equation} \begin{split} F_{T}(t)=\int_{\cup_{\mathfrak{n}\in T_{\text{in}}} B_{\mathfrak{n}}}&e^{it_{\mathfrak{r}}(a_{\mathfrak{r}}+T^{-1}_{\text{max}}\, \text{sgn}(a_{\mathfrak{r}}))- \nu(t-t_{\mathfrak{r}})b_{\mathfrak{r}}} e^{-iT^{-1}_{\text{max}}t_{\mathfrak{r}} \text{sgn}(a_{\mathfrak{r}})} \\ &e^{\sum_{\mathfrak{n}\in T_{\text{in},1}\cup T_{\text{in},2}} it_{\mathfrak{n}} a_{\mathfrak{n}} - \nu(t_{\widehat{\mathfrak{n}}}-t_{\mathfrak{n}})b_{\mathfrak{n}}} \left(dt_{\mathfrak{r}}\prod_{j=1}^2\prod_{\mathfrak{n}\in T_{\text{in},j}}dt_{\mathfrak{n}} \right)|k_{s_1,x}|\prod^K_{j=2}|k_{s_j,x}| \end{split} \end{equation} We do integration by parts in the above integral using the Stokes formula. Notice that for $t_{\mathfrak{r}}$ there are four inequality constraints: $t_{\mathfrak{r}}\le t-2^{-\tau_{1}-1}T_{\text{max}}$, $t_{\mathfrak{r}}\ge t-2^{-\tau_{1}}T_{\text{max}}$ and $t_{\mathfrak{r}}\ge t_{\mathfrak{n}_1},t_{\mathfrak{n}_2}$; the first two come from $(t-t_{\mathfrak{r}})/T_{\text{max}}\in [2^{-\tau_{1}-1},2^{-\tau_{1}}]$.
\begin{equation}\label{eq.lemboundcoefexpandop} \begin{split} F_{T}(t)=&\frac{|k_{s_1,x}|}{ia_{\mathfrak{r}}+iT^{-1}_{\text{max}} \text{sgn}(a_{\mathfrak{r}})+\nu b_{\mathfrak{r}} }\int_{\cup_{\mathfrak{n}\in T_{\text{in}}} B_{\mathfrak{n}}} \frac{d}{dt_{\mathfrak{r}}}e^{it_{\mathfrak{r}}(a_{\mathfrak{r}}+T^{-1}_{\text{max}}\, \text{sgn}(a_{\mathfrak{r}}))- \nu(t-t_{\mathfrak{r}})b_{\mathfrak{r}}} \\ &\qquad\qquad\qquad\ \ e^{-iT^{-1}_{\text{max}}t_{\mathfrak{r}} \text{sgn}(a_{\mathfrak{r}})} e^{\sum_{\mathfrak{n}\in T_{\text{in},1}\cup T_{\text{in},2}} it_{\mathfrak{n}} a_{\mathfrak{n}} - \nu(t_{\widehat{\mathfrak{n}}}-t_{\mathfrak{n}})b_{\mathfrak{n}}} \left(dt_{\mathfrak{r}}\prod_{j=1}^2\prod_{\mathfrak{n}\in T_{\text{in},j}}dt_{\mathfrak{n}} \right)\prod^K_{j=2}|k_{s_j,x}| \end{split} \end{equation} \begin{flalign*} \hspace{1.3cm} =&\frac{|k_{s_1,x}|}{ia_{\mathfrak{r}}+iT^{-1}_{\text{max}} \text{sgn}(a_{\mathfrak{r}})+\nu b_{\mathfrak{r}} }&& \\ &\left(\int_{\cup_{\mathfrak{n}\in T_{\text{in}}} B_{\mathfrak{n}},\ t_{\mathfrak{r}}=t-2^{-\tau_{1}}T_{\text{max}}} -\int_{\cup_{\mathfrak{n}\in T_{\text{in}}} B_{\mathfrak{n}},\ t_{\mathfrak{r}}=t-2^{-\tau_{1}-1}T_{\text{max}}} -\int_{\cup_{\mathfrak{n}\in T_{\text{in}}} B_{\mathfrak{n}},\ t_{\mathfrak{r}}=t_{\mathfrak{n}_1}} -\int_{\cup_{\mathfrak{n}\in T_{\text{in}}} B_{\mathfrak{n}},\ t_{\mathfrak{r}}=t_{\mathfrak{n}_2}}\right) && \\ & e^{it_{\mathfrak{r}}(a_{\mathfrak{r}}+T^{-1}_{\text{max}}\, \text{sgn}(a_{\mathfrak{r}}))- \nu(t-t_{\mathfrak{r}})b_{\mathfrak{r}}} e^{-iT^{-1}_{\text{max}}t_{\mathfrak{r}} \text{sgn}(a_{\mathfrak{r}})} e^{\sum_{\mathfrak{n}\in T_{\text{in},1}\cup T_{\text{in},2}} it_{\mathfrak{n}} a_{\mathfrak{n}} - \nu(t_{\widehat{\mathfrak{n}}}-t_{\mathfrak{n}})b_{\mathfrak{n}}} \left(dt_{\mathfrak{r}}\prod_{j=1}^2\prod_{\mathfrak{n}\in T_{\text{in},j}}dt_{\mathfrak{n}} \right)\prod^K_{j=2}|k_{s_j,x}| && \end{flalign*} \begin{flalign*} \hspace{1.3cm} 
-&\frac{|k_{s_1,x}|}{ia_{\mathfrak{r}}+iT^{-1}_{\text{max}} \text{sgn}(a_{\mathfrak{r}})+\nu b_{\mathfrak{r}} }\int_{\cup_{\mathfrak{n}\in T_{\text{in}}} B_{\mathfrak{n}}}e^{it_{\mathfrak{r}}(a_{\mathfrak{r}}+T^{-1}_{\text{max}}\, \text{sgn}(a_{\mathfrak{r}}))- \nu(t-t_{\mathfrak{r}})b_{\mathfrak{r}}} && \\ &\qquad\qquad \frac{d}{dt_{\mathfrak{r}}}(e^{-iT^{-1}_{\text{max}}t_{\mathfrak{r}} \text{sgn}(a_{\mathfrak{r}})}) e^{\sum_{\mathfrak{n}\in T_{\text{in},1}\cup T_{\text{in},2}} it_{\mathfrak{n}} a_{\mathfrak{n}} - \nu(t_{\widehat{\mathfrak{n}}}-t_{\mathfrak{n}})b_{\mathfrak{n}}} \left(dt_{\mathfrak{r}}\prod_{j=1}^2\prod_{\mathfrak{n}\in T_{\text{in},j}}dt_{\mathfrak{n}} \right)\prod^K_{j=2}|k_{s_j,x}| && \end{flalign*} \begin{flalign*} \hspace{1.3cm} = \frac{|k_{s_1,x}|}{ia_{\mathfrak{r}}+iT^{-1}_{\text{max}} \text{sgn}(a_{\mathfrak{r}})+\nu b_{\mathfrak{r}} }(F_{I}-F_{I'}-\widetilde{F}_{T^{(1)}}-\widetilde{F}_{T^{(2)}}-F_{II}) && \end{flalign*} We now derive upper bounds for $F_{I}$, $F_{I'}$, $\widetilde{F}_{T^{(1)}}$, $\widetilde{F}_{T^{(2)}}$ and $F_{II}$. The arguments for $F_{I}$ and $F_{I'}$ are very similar, so we only consider $F_{I}$.
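Schematically, the decomposition above is an instance of the elementary one-dimensional integration by parts identity (written here with a generic nonzero phase $\omega$ and amplitude $f$; this is an illustration, not the exact quantities of the proof):

```latex
\int_a^b e^{i\omega s} f(s)\, ds
= \left[\frac{e^{i\omega s}}{i\omega} f(s)\right]_{s=a}^{s=b}
- \frac{1}{i\omega}\int_a^b e^{i\omega s} f'(s)\, ds.
```

The boundary terms play the role of $F_{I}$, $F_{I'}$, $\widetilde{F}_{T^{(1)}}$, $\widetilde{F}_{T^{(2)}}$, the term in which the derivative falls on the remaining amplitude plays the role of $F_{II}$, and in each case the prefactor produces the gain $(|q_{\mathfrak{r}}|+T^{-1}_{\text{max}})^{-1}$.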
By a direct calculation, we know that $\frac{|k_{s_1,x}|}{ia_{\mathfrak{r}}+iT^{-1}_{\text{max}} \text{sgn}(a_{\mathfrak{r}})+\nu b_{\mathfrak{r}} }F_{I}(t)$ equals \begin{equation} \begin{split} &\frac{|k_{s_1,x}|}{ia_{\mathfrak{r}}+iT^{-1}_{\text{max}} \text{sgn}(a_{\mathfrak{r}})+\nu b_{\mathfrak{r}} } \int_{\cup_{\mathfrak{n}\in T_{\text{in}}} B_{\mathfrak{n}},\ t_{\mathfrak{r}}=t-2^{-\tau_{1}}T_{\text{max}}} e^{it_{\mathfrak{r}}(a_{\mathfrak{r}}+T^{-1}_{\text{max}}\, \text{sgn}(a_{\mathfrak{r}}))- \nu(t-t_{\mathfrak{r}})b_{\mathfrak{r}}} e^{-iT^{-1}_{\text{max}}t_{\mathfrak{r}} \text{sgn}(a_{\mathfrak{r}})} \\ &e^{\sum_{\mathfrak{n}\in T_{\text{in},1}\cup T_{\text{in},2}} it_{\mathfrak{n}} a_{\mathfrak{n}} - \nu(t_{\widehat{\mathfrak{n}}}-t_{\mathfrak{n}})b_{\mathfrak{n}}} \left(dt_{\mathfrak{r}}\prod_{j=1}^2\prod_{\mathfrak{n}\in T_{\text{in},j}}dt_{\mathfrak{n}} \right)\prod^K_{j=2}|k_{s_j,x}| \\ =&\frac{|k_{s_1,x}|}{ia_{\mathfrak{r}}+iT^{-1}_{\text{max}} \text{sgn}(a_{\mathfrak{r}})+\nu b_{\mathfrak{r}} } e^{i(t-2^{-\tau_{1}}T_{\text{max}})a_{\mathfrak{r}}- \nu T_{\text{max}} 2^{-\tau_{1}}b_{\mathfrak{r}}}\int_{\cup_{\mathfrak{n}\in T_{\text{in},1}\cup T_{\text{in},2}} B_{\mathfrak{n}},\ t_{\mathfrak{r}_1}, t_{\mathfrak{r}_2}\lesssim t-2^{-\tau_{1}}T_{\text{max}}} \\ & e^{\sum_{\mathfrak{n}\in T_{\text{in},1}\cup T_{\text{in},2}} it_{\mathfrak{n}} a_{\mathfrak{n}} - \nu(t_{\widehat{\mathfrak{n}}}-t_{\mathfrak{n}})b_{\mathfrak{n}}}\left(dt_{\mathfrak{r}}\prod_{j=1}^2\prod_{\mathfrak{n}\in T_{\text{in},j}}dt_{\mathfrak{n}} \right)\prod^K_{j=2}|k_{s_j,x}| \\ =&O\left(\frac{(|k_{s_1,x}|+1)e^{- 2^{-\tau_{1}}|k_{s_1}|^2} }{|q_{\mathfrak{r}}|+T^{-1}_{\text{max}}} |F_{T_1}(t)| |F_{T_2}(t)|\frac{|k_{s_1,x}|}{|k_{s_1,x}|+1}\right)=O\left(\frac{2^{-\frac{\tau_{1}}{2}}|F_{T_1}(t)| |F_{T_2}(t)|}{|q_{\mathfrak{r}}|+T^{-1}_{\text{max}}}p_{\mathfrak{e}}\right) \end{split} \end{equation} Here in the last line we use the facts that $b_{\mathfrak{r}}=|k_{s_1}|^2$, that $\nu T_{\text{max}}\gtrsim 1$ by \eqref{eq.conditionnu}, and that $(|k_{s_1,x}|+1)e^{- 2^{-\tau_{1}}|k_{s_1}|^2} \lesssim 2^{-\frac{\tau_{1}}{2}}$. By the above equation and the induction assumption, we know that $\frac{|k_{s_1,x}|}{ia_{\mathfrak{r}}+iT^{-1}_{\text{max}} \text{sgn}(a_{\mathfrak{r}})+\nu b_{\mathfrak{r}} }F_{I}(t)$ can be bounded by the right hand side of \eqref{eq.boundcoefoperator}. Now we find upper bounds for $\widetilde{F}_{T^{(1)}}$ and $\widetilde{F}_{T^{(2)}}$. For $j=1,2$, let $T^{(j)}$ be the tree obtained by deleting the root $\mathfrak{r}$, adding an edge connecting $\mathfrak{n}_j$ with another node, and defining $\mathfrak{n}_j$ to be the new root. For $T^{(j)}$, we can define the term $F_{T^{(j)}}$ by \eqref{eq.defF_Toperator}. By a direct calculation, we know that $\frac{|k_{s_1,x}|}{ia_{\mathfrak{r}}+iT^{-1}_{\text{max}} \text{sgn}(a_{\mathfrak{r}})+\nu b_{\mathfrak{r}} }\widetilde{F}_{T^{(1)}}(t)$ equals \begin{equation} \begin{split} &\frac{|k_{s_1,x}|}{ia_{\mathfrak{r}}+iT^{-1}_{\text{max}} \text{sgn}(a_{\mathfrak{r}})+\nu b_{\mathfrak{r}} } \int_{\cup_{\mathfrak{n}\in T_{\text{in}}} B_{\mathfrak{n}},\ t_{\mathfrak{r}}=t_{\mathfrak{n}_j}} e^{it_{\mathfrak{r}}(a_{\mathfrak{r}}+T^{-1}_{\text{max}}\, \text{sgn}(a_{\mathfrak{r}}))- \nu(t-t_{\mathfrak{r}})b_{\mathfrak{r}}} e^{-iT^{-1}_{\text{max}}t_{\mathfrak{r}} \text{sgn}(a_{\mathfrak{r}})} \\ &e^{\sum_{\mathfrak{n}\in T_{\text{in},1}\cup T_{\text{in},2}} it_{\mathfrak{n}} a_{\mathfrak{n}} - \nu(t_{\widehat{\mathfrak{n}}}-t_{\mathfrak{n}})b_{\mathfrak{n}}} \left(dt_{\mathfrak{r}}\prod_{j=1}^2\prod_{\mathfrak{n}\in T_{\text{in},j}}dt_{\mathfrak{n}} \right)\prod^K_{j=2}|k_{s_j,x}| \\ =&\frac{|k_{s_1,x}|}{ia_{\mathfrak{r}}+iT^{-1}_{\text{max}} \text{sgn}(a_{\mathfrak{r}})+\nu b_{\mathfrak{r}} } e^{- \nu T_{\text{max}} 2^{-\tau_{1}}b_{\mathfrak{r}}}\int_{\cup_{\mathfrak{n}\in T_{\text{in}}} B_{\mathfrak{n}}} e^{it_{\mathfrak{n}_j}(a_{\mathfrak{r}}+T^{-1}_{\text{max}}\, \text{sgn}(a_{\mathfrak{r}}))- \nu((t-T_{\text{max}} 2^{-\tau_{1}})-t_{\mathfrak{n}_j})b_{\mathfrak{r}}} \\ & e^{\sum_{\mathfrak{n}\in T_{\text{in},1}\cup T_{\text{in},2}} it_{\mathfrak{n}} a_{\mathfrak{n}} - \nu(t_{\widehat{\mathfrak{n}}}-t_{\mathfrak{n}})b_{\mathfrak{n}}}\left(dt_{\mathfrak{r}}\prod_{j=1}^2\prod_{\mathfrak{n}\in T_{\text{in},j}}dt_{\mathfrak{n}} \right)\prod^K_{j=2}|k_{s_j,x}| \\ =&O\left(\frac{|k_{s_1,x}|e^{- 2^{-\tau_{1}}|k_{s_1}|^2} }{|q_{\mathfrak{r}}|+T^{-1}_{\text{max}}} |F_{T^{(j)}}(t)|\right)=O\left(\frac{2^{-\frac{\tau_{1}}{2}}|F_{T^{(j)}}(t)| }{|q_{\mathfrak{r}}|+T^{-1}_{\text{max}}}p_{\mathfrak{e}}\right) \end{split} \end{equation} We can apply the induction assumption to $F_{T^{(j)}}$ and show that $\frac{|k_{s_1,x}|}{ia_{\mathfrak{r}}+iT^{-1}_{\text{max}} \text{sgn}(a_{\mathfrak{r}})+\nu b_{\mathfrak{r}} } \widetilde{F}_{T^{(j)}}$ can be bounded by the right hand side of \eqref{eq.boundcoefoperator}. Another direct calculation gives \begin{equation} F_{II}(t)=\int_{(t-t_{\mathfrak{r}})/T_{\text{max}}\in [2^{-\tau_{1}},2^{-\tau_{1}-1}]} e^{it_{\mathfrak{r}}(a_{\mathfrak{r}}+T^{-1}_{\text{max}}\, \text{sgn}(a_{\mathfrak{r}}))- \nu(t-t_{\mathfrak{r}})b_{\mathfrak{r}}} \frac{d}{dt_{\mathfrak{r}}}(e^{-iT^{-1}_{\text{max}}t_{\mathfrak{r}} \text{sgn}(a_{\mathfrak{r}})}) F_{T_1}(t_{\mathfrak{r}})F_{T_2}(t_{\mathfrak{r}}) dt_{\mathfrak{r}}.
\end{equation} Applying the induction assumption, we get \begin{equation} \begin{split} &\left| \frac{|k_{s_1,x}|}{ia_{\mathfrak{r}}+iT^{-1}_{\text{max}} \text{sgn}(a_{\mathfrak{r}})+\nu b_{\mathfrak{r}} } F_{II}(t)\right| \\ \le &\frac{T^{-1}_{\text{max}}|k_{s_1,x}|}{|q_{\mathfrak{r}}|+T^{-1}_{\text{max}}}\int_{(t-t_{\mathfrak{r}})/T_{\text{max}}\in [2^{-\tau_{1}},2^{-\tau_{1}-1}]} e^{- \nu(t-t_{\mathfrak{r}})b_{\mathfrak{r}}} |F_{T_1}(t_{\mathfrak{r}})| |F_{T_2}(t_{\mathfrak{r}})| dt_{\mathfrak{r}} \\ \le& \frac{T^{-1}_{\text{max}} |k_{s_1,x}| e^{- \nu T_{\text{max}} 2^{-\tau_{1}}|k_{s_1}|^2}}{|q_{\mathfrak{r}}|+T^{-1}_{\text{max}}}\prod_{j=1}^2\left(\sum_{\{d_{\mathfrak{n}}\}_{\mathfrak{n}\in T_{\text{in},j}}\in\{0,1\}^{l(T_j)}}\prod_{\mathfrak{n}\in T_{\text{in},j}}\frac{2^{-\frac{\tau_{\mathfrak{n}}}{2}}}{|q_{\mathfrak{n}}|+T^{-1}_{\text{max}}}\prod_{\mathfrak{e}\in T_{\text{in},j}} p_{\mathfrak{e}}\right) \\ \le& \sum_{\{d_{\mathfrak{n}}\}_{\mathfrak{n}\in T_{\text{in}}}\in\{0,1\}^{l(T)}}\prod_{\mathfrak{n}\in T_{\text{in}}}\frac{2^{-\frac{\tau_{\mathfrak{n}}}{2}}}{|q_{\mathfrak{n}}|+T^{-1}_{\text{max}}}\prod_{\mathfrak{e}\in T_{\text{in}}} p_{\mathfrak{e}}. \end{split} \end{equation} Therefore, we get an upper bound for $F_{II}$. Combining the bounds of $F_{I}$, $F_{I'}$, $\widetilde{F}_{T^{(1)}}$, $\widetilde{F}_{T^{(2)}}$ and $F_{II}$, we conclude that $F_T$ can be bounded by the right hand side of \eqref{eq.boundcoefoperator}, which completes the proof of Lemma \ref{lem.boundcoef'}. \end{proof} \subsubsection{The upper bound for expectation of entries} In this section, we prove Proposition \ref{prop.treetermsvarianceoperator}, which gives an upper bound for $\mathbb{E}|H^{\tau_1\cdots \tau_{K}}_{Tkk'}|^2$. This proposition is an analog of Proposition \ref{prop.treetermsvariance}. \begin{prop}\label{prop.treetermsvarianceoperator} Assume that $\alpha$ satisfies \eqref{eq.conditionalpha}.
For any $\theta>0$, we have \begin{equation} \sup_k\, \mathbb{E}|H^{\tau_1\cdots \tau_{K}}_{Tkk'}|^2\lesssim L^{O(l(T)\theta)} 2^{-\frac{1}{2}\sum_{j=1}^K \tau_{j}} \rho^{2l(T)}, \end{equation} and $\mathbb{E}|H^{\tau_1\cdots \tau_{K}}_{Tkk'}|^2=0$ if $|k-k'|\gtrsim 1$. \end{prop} \begin{proof} We first find a formula for $\mathbb{E}|H^{\tau_1\cdots \tau_{K}}_{Tkk'}|^2$ which is similar to \eqref{eq.termTp} and \eqref{eq.termexp}. A direct calculation gives \begin{equation}\label{eq.termexp1op} \begin{split} \mathbb{E}|H^{\tau_1\cdots \tau_{K}}_{Tkk'}|^2=&\mathbb{E}(H^{\tau_1\cdots \tau_{K}}_{Tkk'}\overline{H^{\tau_1\cdots \tau_{K}}_{Tkk'}})=\left(\frac{\lambda}{L^{d}}\right)^{2l(T)} \sum_{k_1,\, k_2,\, \cdots,\, k_{l(T)}}\sum_{k'_1,\, k'_2,\, \cdots,\, k'_{l(T)}} \\[0.5em] & H^{\tau_1\cdots \tau_{K}}_{Tk_1\cdots k_{l(T)}kk'} \overline{H^{\tau_1\cdots \tau_{K}}_{Tk_1'\cdots k_{l(T)}'kk'}} \mathbb{E}\Big(\xi_{k_1}\xi_{k_2}\cdots\xi_{k_{l(T)}}\xi_{k'_1}\xi_{k'_2}\cdots\xi_{k'_{l(T)}}\Big) \end{split} \end{equation} Applying the Wick theorem (Lemma \ref{th.wick}) to \eqref{eq.termexp1op}, we get \begin{equation}\label{eq.termexpop} \mathbb{E}|H^{\tau_1\cdots \tau_{K}}_{Tkk'}|^2=\left(\frac{\lambda}{L^{d}}\right)^{2l(T)} \sum_{p\in \mathcal{P}(\{k_1,\cdots, k_{l(T)}, k'_1,\cdots, k'_{l(T)}\})} Term(T, p)_{op,k,k'}, \end{equation} where \begin{equation}\label{eq.termTpop} \begin{split} &Term(T, p)_{op,k,k'} \\ =&\sum_{k_1,\, k_2,\, \cdots,\, k_{l(T)}}\sum_{k'_1,\, k'_2,\, \cdots,\, k'_{l(T)}} H^{\tau_1\cdots \tau_{K}}_{Tk_1\cdots k_{l(T)}kk'} \overline{H^{\tau_1\cdots \tau_{K}}_{Tk_1'\cdots k_{l(T)}'kk'}} \delta_{p}(k_1,\cdots, k_{l(T)}, k'_1,\cdots, k'_{l(T)})\sqrt{n_{\textrm{in}}(k_1)}\cdots. \end{split} \end{equation} Since $\alpha=\frac{\lambda}{L^{\frac{d}{2}}}$, we have $\frac{\lambda}{L^{d}}=\alpha L^{-\frac{d}{2}}$.
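For concreteness, in the simplest case of four jointly Gaussian centered real variables the Wick theorem reduces to the familiar identity

```latex
\mathbb{E}[g_1 g_2 g_3 g_4]
= \mathbb{E}[g_1 g_2]\,\mathbb{E}[g_3 g_4]
+ \mathbb{E}[g_1 g_3]\,\mathbb{E}[g_2 g_4]
+ \mathbb{E}[g_1 g_4]\,\mathbb{E}[g_2 g_3],
```

a sum over the $3=(4-1)!!$ complete pairings. In \eqref{eq.termexp1op} there are $2l(T)$ Gaussian factors, so the number of pairings in \eqref{eq.termexpop} is at most $(2l(T)-1)!!$, which depends only on $l(T)$; this is why the sum over $p\in\mathcal{P}$ contributes only a constant factor.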
Since the number of elements in $\mathcal{P}$ can be bounded by a constant, by Lemma \ref{lem.Tpvarianceop} proved below, we get \begin{equation} \begin{split} \mathbb{E}|H^{\tau_1\cdots \tau_{K}}_{Tkk'}|^2\lesssim& (\alpha L^{-\frac{d}{2}})^{2l(T)} L^{O(l(T)\theta)}2^{-\frac{1}{2}\sum_{j=1}^K \tau_{j}} (L^dT^{-1}_{\text{max}})^{l(T)} T_{\text{max}}^{2l(T)} \\ =& L^{O(l(T)\theta)} 2^{-\frac{1}{2}\sum_{j=1}^K \tau_{j}} \rho^{2l(T)}. \end{split} \end{equation} Since, by Lemma \ref{lem.Tpvarianceop} below, $Term(T, p)_{op,k,k'}=0$ if $|k-k'|\gtrsim 1$, we know that the same is true for $\mathbb{E}|H^{\tau_1\cdots \tau_{K}}_{Tkk'}|^2$. Therefore, we complete the proof of this proposition. \end{proof} \begin{lem}\label{lem.Tpvarianceop} Let $Q=L^dT^{-1}_{\text{max}}$ be the same as in Proposition \ref{prop.counting}. Assume that $\alpha$ satisfies \eqref{eq.conditionalpha} and $n_{\mathrm{in}} \in C^\infty_0(\mathbb{R}^d)$ is compactly supported. Then for any $\theta>0$, we have \begin{equation} \sup_k\, |Term(T, p)_{op,k,k'}|\le L^{O(l(T)\theta)}2^{-\frac{1}{2}\sum_{j=1}^K \tau_{j}} Q^{l(T)} T_{\text{max}}^{2l(T)}, \end{equation} and $Term(T, p)_{op,k,k'}=0$ if $|k-k'|\gtrsim 1$. \end{lem} \begin{proof} By \eqref{eq.termTpop}, we get \begin{equation} \begin{split} &Term(T, p)_{op,k,k'} \\ =&\sum_{k_1,\, k_2,\, \cdots,\, k_{l(T)}}\sum_{k'_1,\, k'_2,\, \cdots,\, k'_{l(T)}} H^{\tau_1\cdots \tau_{K}}_{Tk_1\cdots k_{l(T)}kk'} \overline{H^{\tau_1\cdots \tau_{K}}_{Tk_1'\cdots k_{l(T)}'kk'}} \delta_{p}(k_1,\cdots, k_{l(T)}, k'_1,\cdots, k'_{l(T)})\sqrt{n_{\textrm{in}}(k_1)}\cdots. \end{split} \end{equation} Since $n_{\mathrm{in}}$ is compactly supported and there are boundedly many factors of it in $Term(T, p)_{op,k,k'}$, by $k_1 + k_2 + \cdots + k_{l(T)}=k-k'$ we know that $Term(T, p)_{op,k,k'}=0$ if $|k-k'|\gtrsim 1$.
By \eqref{eq.boundcoefoperator}, we get \begin{equation}\label{eq.termlemmaeq1op} |H^{\tau_1\cdots \tau_{K}}_{Tk_1\cdots k_{l}kk'}|\lesssim \sum_{\{d_{\mathfrak{n}}\}_{\mathfrak{n}\in T_{\text{in}}}\in\{0,1\}^{l(T)}}\prod_{\mathfrak{n}\in T_{\text{in}}}\frac{2^{-\frac{\tau_{\mathfrak{n}}}{2}}}{|q_{\mathfrak{n}}|+T^{-1}_{\text{max}}}\ \delta_{\cap_{\mathfrak{n}\in T_{\text{in}}} \{S_{\mathfrak{n}}=0\}}\prod_{\mathfrak{e}\in T_{\text{in}}} p_{\mathfrak{e}}. \end{equation} Define $[c_{\mathfrak{n},\widetilde{\mathfrak{n}}}]$, $\mathscr{M}(T)$, $c(\Omega)$ in the same way as in the proof of Lemma \ref{lem.Tpvariance}. We can apply the same derivation as for \eqref{eq.termlemmaeq3} to obtain \begin{equation}\label{eq.termlemmaeq3op} \begin{split} &|Term(T, p)_{op,k,k'}|\lesssim \sum_{\substack{k_1,\, \cdots,\, k_{l(T)},\, k'_1,\, \cdots,\, k'_{l(T)}\\ |k_{j}|, |k'_j|\lesssim 1, \forall j}} \sum_{c\in \mathscr{M}(T) }\prod_{\mathfrak{n}\in T_{\text{in}}}\frac{2^{-\frac{\tau_{\mathfrak{n}}}{2}}}{|c(\Omega)_{\mathfrak{n}}|+T^{-1}_{\text{max}}} \ \delta_{\cap_{\mathfrak{n}\in T_{\text{in}}} \{S_{\mathfrak{n}}=0\}}\prod_{\mathfrak{e}\in T_{\text{in}}} p_{\mathfrak{e}} \\ &\sum_{c'\in \mathscr{M}(T)}\prod_{\mathfrak{n}'\in T_{\text{in}}}\frac{2^{-\frac{\tau_{\mathfrak{n}}}{2}}}{|c'(\Omega)_{\mathfrak{n}'}|+T^{-1}_{\text{max}}} \prod_{\mathfrak{e}'\in T_{\text{in}}} p_{\mathfrak{e}'} \ \delta_{\cap_{\mathfrak{n}'\in T_{\text{in}}} \{S_{\mathfrak{n}'}=0\}} \delta_{p}(k_1,\cdots, k_{l(T)}, k'_1,\cdots, k'_{l(T)}) \end{split} \end{equation} We obviously have the following inequality \begin{equation}\label{eq.termlemmaeq4op} \begin{split} &|Term(T, p)_{op,k,k'}|\lesssim \sum_{k'}|Term(T, p)_{op,k,k'}| \\ \lesssim& \sum_{\substack{k_1,\, \cdots,\, k_{l(T)},\, k'_1,\, \cdots,\, k'_{l(T)}, k'\\ |k_{j}|, |k'_j|\lesssim 1, \forall j,\ |k'-k|\lesssim 1}} \sum_{c\in \mathscr{M}(T) }\prod_{\mathfrak{n}\in T_{\text{in}}}\frac{2^{-\frac{\tau_{\mathfrak{n}}}{2}}}{|c(\Omega)_{\mathfrak{n}}|+T^{-1}_{\text{max}}} \ \delta_{\cap_{\mathfrak{n}\in T_{\text{in}}} \{S_{\mathfrak{n}}=0\}} \prod_{\mathfrak{e}\in T_{\text{in}}} p_{\mathfrak{e}} \\ &\sum_{c'\in \mathscr{M}(T)}\prod_{\mathfrak{n}'\in T_{\text{in}}}\frac{2^{-\frac{\tau_{\mathfrak{n}}}{2}}}{|c'(\Omega)_{\mathfrak{n}'}|+T^{-1}_{\text{max}}} \prod_{\mathfrak{e}'\in T_{\text{in}}} p_{\mathfrak{e}'} \ \delta_{\cap_{\mathfrak{n}'\in T_{\text{in}}} \{S_{\mathfrak{n}'}=0\}} \delta_{p}(k_1,\cdots, k_{l(T)}, k'_1,\cdots, k'_{l(T)}) \end{split} \end{equation} Switching the order of summations and products in \eqref{eq.termlemmaeq4op}, we get \begin{equation}\label{eq.termlemmaeq2op} \begin{split} |Term(T, p)_{op,k,k'}|\lesssim& \sum_{\substack{k_1,\, \cdots,\, k_{l(T)},\, k'_1,\, \cdots,\, k'_{l(T)}, k'\\ |k_{j}|, |k'_j|\lesssim 1, \forall j,\ |k'-k|\lesssim 1}} \sum_{c, c'\in \mathscr{M}(T) }\prod_{\mathfrak{n}, \mathfrak{n}'\in T_{\text{in}}}\frac{2^{-\frac{\tau_{\mathfrak{n}}}{2}}}{|c(\Omega)_{\mathfrak{n}}|+T^{-1}_{\text{max}}}\frac{2^{-\frac{\tau_{\mathfrak{n}}}{2}}}{|c'(\Omega)_{\mathfrak{n}'}|+T^{-1}_{\text{max}}} \\ & \prod_{\mathfrak{e},\mathfrak{e}'\in T_{\text{in}}} (p_{\mathfrak{e}}p_{\mathfrak{e}'})\ \delta_{\cap_{\mathfrak{n},\mathfrak{n}'\in T_{\text{in}}} \{S_{\mathfrak{n}}=0, S_{\mathfrak{n}'}=0\}} \delta_{p}(k_1,\cdots, k_{l(T)}, k'_1,\cdots, k'_{l(T)}). \end{split} \end{equation} Consider a tree $T$ with a $\Box$ node and a pairing $p\in \mathcal{P}(\{k_1,\cdots, k_{l(T)}, k'_1,\cdots, k'_{l(T)}\})$. The pairing $p$ can be viewed as a pairing of all star nodes of two copies of $T$. If we also pair the two $\Box$ nodes of the two copies of $T$, then we can construct a couple $\mathcal{C}$ from $p$ and the two copies.
As in \eqref{eq.termlemmaeq4}, we can show that \begin{equation}\label{eq.termlemmaeq4op'} \sum_{c, c'\in \mathscr{M}(T) }=\sum_{c\in \mathscr{M}(\mathcal{C}) },\qquad \prod_{\mathfrak{n}, \mathfrak{n}'\in T_{\text{in}}}=\prod_{\mathfrak{n}\in \mathcal{C}}, \qquad \prod_{\mathfrak{e},\mathfrak{e}'\in T_{\text{in}}}=\prod_{\mathfrak{e}\in \mathcal{C}_{\text{norm}}},\qquad \cap_{\mathfrak{n},\mathfrak{n}'\in T_{\text{in}}}=\cap_{\mathfrak{n}\in \mathcal{C}}. \end{equation} The analog of \eqref{eq.termlemmaeq5} is \begin{equation}\label{eq.termlemmaeq5op} |Term(T, p)_{op,k,k'}|\lesssim \sum_{\substack{k_1,\, \cdots,\, k_{l(T)},\, k'_1,\, \cdots,\, k'_{l(T)}, k'\\ |k_{j}|, |k'_j|\lesssim 1, \forall j,\ |k'-k|\lesssim 1}} \sum_{c\in \mathscr{M}(\mathcal{C}) }\prod_{\mathfrak{n}\in \mathcal{C}}\frac{2^{-\frac{\tau_{\mathfrak{n}}}{2}}}{|c(\Omega)_{\mathfrak{n}}|+T^{-1}_{\text{max}}} \prod_{\mathfrak{e}\in \mathcal{C}_{\text{norm}}} p_{\mathfrak{e}}\ \delta_{\cap_{\mathfrak{n}\in \mathcal{C}} \{S_{\mathfrak{n}}=0\}} \end{equation} The rest of the proof is exactly the same as that of Lemma \ref{lem.Tpvariance} after \eqref{eq.termlemmaeq5}. For completeness, we include a sketch.
As in \eqref{eq.termlemmaeq8}, we have \begin{equation} \sum_{\substack{k_1,\, \cdots,\, k_{l(T)+1},\, k'_1,\, \cdots,\, k'_{l(T)+1}\\\cap_{\mathfrak{n}\in \mathcal{C}} \{S_{\mathfrak{n}}=0\}}}=\sum_{\kappa_{\mathfrak{e}}\in \mathcal{D}(\alpha,1)}\sum_{\sigma_{\mathfrak{n}}\in \mathbb{Z}_{T_{\text{max}}}}\sum_{Eq(\mathcal{C}, \{\sigma_{\mathfrak{n}}\}_{\mathfrak{n}}, \{\kappa_{\mathfrak{e}}\}_{\mathfrak{e}},k)}, \end{equation} which implies that \begin{equation}\label{eq.termlemmaeq6op} \begin{split} |Term(T, p)_{op,k,k'}|\lesssim \sum_{\kappa_{\mathfrak{e}}\in \mathcal{D}(\alpha,1)}\sum_{\sigma_{\mathfrak{n}}\in \mathbb{Z}_{T_{\text{max}}}}\sum_{Eq(\mathcal{C}, \{\sigma_{\mathfrak{n}}\}_{\mathfrak{n}}, \{\kappa_{\mathfrak{e}}\}_{\mathfrak{e}},k)} \sum_{c\in \mathscr{M}(\mathcal{C}) }\prod_{\mathfrak{n}\in \mathcal{C}}\frac{2^{-\frac{\tau_{\mathfrak{n}}}{2}}}{|c(\Omega)_{\mathfrak{n}}|+T^{-1}_{\text{max}}} \prod_{\mathfrak{e}\in \mathcal{C}_{\text{norm}}} p_{\mathfrak{e}} \end{split} \end{equation} As in \eqref{eq.lemboundtermTp}, we get \begin{equation}\label{eq.lemboundtermTpop} \begin{split} &|Term(T, p)_{op,k,k'}| \\ \lesssim& \sum_{\kappa_{\mathfrak{e}}\in \mathcal{D}(\alpha,1)}\sum_{\sigma_{\mathfrak{n}}\in \mathbb{Z}_{T_{\text{max}}}}\sum_{Eq(\mathcal{C}, \{\sigma_{\mathfrak{n}}\}_{\mathfrak{n}}, \{\kappa_{\mathfrak{e}}\}_{\mathfrak{e}},k)} \sum_{c\in \mathscr{M}(\mathcal{C}) }\prod_{\mathfrak{n}\in \mathcal{C}}\frac{2^{-\frac{\tau_{\mathfrak{n}}}{2}}}{|c(\Omega)_{\mathfrak{n}}|+T^{-1}_{\text{max}}} \prod_{\mathfrak{e}\in \mathcal{C}_{\text{norm}}} p_{\mathfrak{e}} \\ \lesssim &\sum_{c\in \mathscr{M}(\mathcal{C}) }\sum_{\kappa_{\mathfrak{e}}\in \mathcal{D}(\alpha,1)}\sum_{\substack{\sigma_{\mathfrak{n}}\in \mathbb{Z}_{T_{\text{max}}}\\ |\sigma_{\mathfrak{n}}|\lesssim 1}}\frac{2^{-\frac{\tau_{\mathfrak{n}}}{2}}}{|c(\{\sigma_{\mathfrak{n}}\})_{\mathfrak{n}}|+T^{-1}_{\text{max}}} \sum_{Eq(\mathcal{C}, \{\sigma_{\mathfrak{n}}\}_{\mathfrak{n}},
\{\kappa_{\mathfrak{e}}\}_{\mathfrak{e}},k)} 1 \prod_{\mathfrak{e}\in \mathcal{C}_{\text{norm}}} \frac{\kappa_{\mathfrak{e}}}{\kappa_{\mathfrak{e}}+1} \\ \lesssim &\sum_{c\in \mathscr{M}(\mathcal{C}) }\sum_{\kappa_{\mathfrak{e}}\in \mathcal{D}(\alpha,1)}\sum_{\substack{\sigma_{\mathfrak{n}}\in \mathbb{Z}_{T_{\text{max}}}\\ |\sigma_{\mathfrak{n}}|\lesssim 1}}\frac{2^{-\frac{\tau_{\mathfrak{n}}}{2}}}{|c(\{\sigma_{\mathfrak{n}}\})_{\mathfrak{n}}|+T^{-1}_{\text{max}}} \#Eq(\mathcal{C}, \{\sigma_{\mathfrak{n}}\}_{\mathfrak{n}}, \{\kappa_{\mathfrak{e}}\}_{\mathfrak{e}},k) \prod_{\mathfrak{e}\in \mathcal{C}_{\text{norm}}} \frac{\kappa_{\mathfrak{e}}}{\kappa_{\mathfrak{e}}+1} \\ \lesssim &\sum_{c\in \mathscr{M}(\mathcal{C}) }\sum_{\kappa_{\mathfrak{e}}\in \mathcal{D}(\alpha,1)}\sum_{\substack{\sigma_{\mathfrak{n}}\in \mathbb{Z}_{T_{\text{max}}}\\ |\sigma_{\mathfrak{n}}|\lesssim 1}}\frac{2^{-\frac{\tau_{\mathfrak{n}}}{2}}}{|c(\{\sigma_{\mathfrak{n}}\})_{\mathfrak{n}}|+T^{-1}_{\text{max}}} L^{O(n\theta)} Q^{\frac{n}{2}} \prod_{\mathfrak{e}\in \mathcal{C}_{\text{norm}}} \frac{\kappa_{\mathfrak{e}}}{\kappa_{\mathfrak{e}}+1} \prod_{\mathfrak{e}\in \mathcal{C}_{\text{norm}}} \kappa_{\mathfrak{e}}^{-1} \end{split} \end{equation} Here in the last inequality we applied \eqref{eq.countingbd0} in Proposition \ref{prop.counting}. 
After simplification, \eqref{eq.lemboundtermTpop} gives us \begin{equation} \begin{split} |Term(T, p)_{op,k,k'}|\lesssim &L^{O(n\theta)} Q^{\frac{n}{2}}\sum_{c\in \mathscr{M}(\mathcal{C}) }\sum_{\kappa_{\mathfrak{e}}\in \mathcal{D}(\alpha,1)}\sum_{\substack{\sigma_{\mathfrak{n}}\in \mathbb{Z}_{T_{\text{max}}}\\ |\sigma_{\mathfrak{n}}|\lesssim 1}} \prod_{\mathfrak{n}\in \mathcal{C}}\frac{2^{-\frac{\tau_{\mathfrak{n}}}{2}}}{|c(\{\sigma_{\mathfrak{n}}\})_{\mathfrak{n}}|+T^{-1}_{\text{max}}} \\ \lesssim &L^{O(n\theta)} Q^{\frac{n}{2}} \left(\sum_{\kappa_{\mathfrak{e}}\in \mathcal{D}(\alpha,1)} 1\right) \sum_{c\in \mathscr{M}(\mathcal{C})}\sum_{\substack{\sigma_{\mathfrak{n}}\in \mathbb{Z}_{T_{\text{max}}}\\ |\sigma_{\mathfrak{n}}|\lesssim 1}} \prod_{\mathfrak{n}\in \mathcal{C}}\frac{2^{-\frac{\tau_{\mathfrak{n}}}{2}}}{|c(\{\sigma_{\mathfrak{n}}\})_{\mathfrak{n}}|+T^{-1}_{\text{max}}} \\ \lesssim & L^{O(n\theta)} Q^{\frac{n}{2}} \sum_{c\in \mathscr{M}(\mathcal{C})}\sum_{\substack{\sigma_{\mathfrak{n}}\in \mathbb{Z}_{T_{\text{max}}}\\ |\sigma_{\mathfrak{n}}|\lesssim 1}} \prod_{\mathfrak{n}\in \mathcal{C}}\frac{2^{-\frac{\tau_{\mathfrak{n}}}{2}}}{|c(\{\sigma_{\mathfrak{n}}\})_{\mathfrak{n}}|+T^{-1}_{\text{max}}} \end{split} \end{equation} Here in the first step we use the fact that $\prod_{\mathfrak{e}\in \mathcal{C}_{\text{norm}}} \frac{\kappa_{\mathfrak{e}}}{\kappa_{\mathfrak{e}}+1} \prod_{\mathfrak{e}\in \mathcal{C}_{\text{norm}}} \kappa_{\mathfrak{e}}^{-1}=\prod_{\mathfrak{e}\in \mathcal{C}_{\text{norm}}} \frac{1}{\kappa_{\mathfrak{e}}+1}\le 1$. The reasoning for the other steps can be found in the derivation of \eqref{eq.lemboundtermTpsimplify}.
We claim that \begin{equation}\label{eq.lemTpvarianceclaimop} \sup_{c}\sum_{\substack{\sigma_{\mathfrak{n}}\in \mathbb{Z}_{T_{\text{max}}}\\ |\sigma_{\mathfrak{n}}|\lesssim 1}} \prod_{\mathfrak{n}\in \mathcal{C}}\frac{2^{-\frac{\tau_{\mathfrak{n}}}{2}}}{|c(\{\sigma_{\mathfrak{n}}\})_{\mathfrak{n}}|+T^{-1}_{\text{max}}}\lesssim L^{O(l(T)\theta)}2^{-\frac{1}{2}\sum_{j=1}^K \tau_{j}} T^{2l(T)}_{\text{max}} \end{equation} Since there are only boundedly many matrices in $\mathscr{M}(\mathcal{C})$, the above claim implies that \begin{equation} |Term(T, p)_{op,k,k'}|\lesssim L^{O(l(T)\theta)} 2^{-\frac{1}{2}\sum_{j=1}^K \tau_{j}} Q^{\frac{n}{2}} T^{2l(T)}_{\text{max}}, \end{equation} which proves the lemma since $n=2l(T)$. It remains to prove the claim. In a tree $T$ there are $l(T)$ branching nodes, so there are $l(T)$ nodes in $T_{\text{in}}$. Since all nodes of $\mathcal{C}$ come from the two copies of $T_{\text{in}}$, there are $2l(T)$ nodes in $\mathcal{C}$. Label these nodes by $h=1,\cdots,2l(T)$ and denote $\sigma_{\mathfrak{n}}$ by $\sigma_{h}$ if $\mathfrak{n}$ is labelled by $h$. Since $\sigma_{h}\in \mathbb{Z}_{T_{\text{max}}}$, there exists $m_{h}\in \mathbb{Z}$ such that $\sigma_{h}=T^{-1}_{\text{max}} m_{h}$. Thus \eqref{eq.lemTpvarianceclaimop} is equivalent to \begin{equation}\label{eq.lemTpvarianceclaim1op} T^{2l(T)}_{\text{max}}\sum_{\substack{m_{h}\in \mathbb{Z}\\ |m_{h}|\lesssim T_{\text{max}}}} \prod_{h=1}^{2l(T)}\frac{2^{-\frac{\tau_{\mathfrak{n}}}{2}}}{|c(\{m_{h}\})_{h}|+1}\lesssim L^{O(l(T)\theta)}2^{-\frac{1}{2}\sum_{j=1}^K \tau_{j}} T^{2l(T)}_{\text{max}} \end{equation} To prove \eqref{eq.lemTpvarianceclaim1op}, we just need to show that \begin{equation} \sum_{\substack{m_{h}\in \mathbb{Z}\\ |m_{h}|\lesssim T_{\text{max}}}} \prod_{h=1}^{2l(T)}\frac{1}{|c(\{m_{h}\})_{h}|+1}\lesssim L^{O(l(T)\theta)} \end{equation} This can be proved by the Euler--Maclaurin formula \eqref{eq.EulerMaclaurin} as in \eqref{eq.lemTpvarianceEulerMac}.
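As a one-variable model of this last sum (a sketch, assuming as in the rest of the argument that $T_{\text{max}}$ grows at most polynomially in $L$, so that $\log T_{\text{max}}\lesssim L^{\theta}$ for any fixed $\theta>0$ and $L$ large; here $C$ is a generic constant and $c(\{m\})=m$):

```latex
\sum_{\substack{m\in\mathbb{Z}\\ |m|\lesssim T_{\text{max}}}} \frac{1}{|m|+1}
\lesssim 1+\int_{0}^{C T_{\text{max}}}\frac{dx}{x+1}
\lesssim \log T_{\text{max}}\lesssim L^{\theta}.
```

The general case iterates this bound in each of the $2l(T)$ variables, which produces the power $L^{O(l(T)\theta)}$.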
Now we complete the proof of the claim and thus the proof of the lemma. \end{proof} \subsubsection{Proof of the operator norm bound} In this subsection, we finish the proof of Proposition \ref{prop.operatorupperbound'}. \begin{proof}[Proof of Proposition \ref{prop.operatorupperbound'}] By Lemma \ref{lem.treetermsoperator}, we have \begin{equation} \left(\prod_{j=1}^K\mathcal{P}^{\tau_j}_{T_j}(w)\right)_{k}(t)=\sum_{k'}\int_0^t H^{\tau_1\cdots \tau_{K}}_{Tkk'}(t,s) w_{k'}(s) ds \end{equation} and the kernel $H^{\tau_1\cdots \tau_{K}}_{Tkk'}$ is a polynomial of Gaussian variables given by \begin{equation} \begin{split} H^{\tau_1\cdots \tau_{K}}_{Tkk'}(t,s)=\left(\frac{i\lambda}{L^{d}}\right)^l\sum_{k_1,\, k_2,\, \cdots,\, k_{l}} H^{\tau_1\cdots \tau_{K}}_{Tk_1\cdots k_{l}kk'} \xi_{k_1}\cdots \xi_{k_{l}}. \end{split} \end{equation} By Proposition \ref{prop.treetermsvarianceoperator}, we have \begin{equation} \sup_k\, \mathbb{E}|H^{\tau_1\cdots \tau_{K}}_{Tkk'}|^2\lesssim L^{O(l(T)\theta)}2^{-\frac{1}{2}\sum_{j=1}^K \tau_{j}} \rho^{2l(T)}. \end{equation} Then the large deviation estimate Lemma \ref{lem.largedev} gives \begin{equation} |H^{\tau_1\cdots \tau_{K}}_{Tkk'}(t,s)|\lesssim L^{\frac{n}{2}\theta} \sqrt{\mathbb{E}|H^{\tau_1\cdots \tau_{K}}_{Tkk'}|^2}\lesssim L^{O(l(T)\theta)} 2^{-\frac{1}{4}\sum_{j=1}^K \tau_{j}} \rho^{l(T)},\qquad \textit{L-certainly}. \end{equation} Summing over all $\tau_1,\cdots, \tau_{K}$ gives \begin{equation} \sum_{\tau_1,\cdots, \tau_{K}}|H^{\tau_1\cdots \tau_{K}}_{Tkk'}(t,s)|\lesssim L^{O(l(T)\theta)} \rho^{l(T)},\qquad \textit{L-certainly}. \end{equation} Applying the epsilon net and union bound method as in \eqref{eq.unionbound}, we obtain \begin{equation} \sup_{t,s}\sup_{|k|,|k'|\lesssim L^{2M}}\sum_{\tau_1,\cdots, \tau_{K}}|H^{\tau_1\cdots \tau_{K}}_{Tkk'}(t,s)|\lesssim L^{O(l(T)\theta)} \rho^{l(T)},\qquad \textit{L-certainly}. 
\end{equation} For $w\in X^{p}_{L^{2M}}$, we have $|w_k(t)|\le \sup_{t}||w(t)||_{X^{p}_{L^{2M}}} \langle k\rangle^{-p}$. Since $w_{k'}=0$ if $|k'|\gtrsim L^{2M}$ and $H^{\tau_1\cdots \tau_{K}}_{Tkk'}=0$ if $|k-k'|\gtrsim 1$, we know that \begin{equation} \begin{split} \left|\left(\sum_{\tau_1,\cdots, \tau_{K}}\prod_{j=1}^K\mathcal{P}^{\tau_j}_{T_j}(w)\right)_{k}(t)\right|\le&\sum_{|k'|\lesssim L^{2M}}\int_0^t \sum_{\tau_1,\cdots, \tau_{K}}|H^{\tau_1\cdots \tau_{K}}_{Tkk'}(t,s)| |w_{k'}(s)| ds \\ \lesssim& L^{O(l(T)\theta)} \rho^{l(T)}t \sup_{t}||w(t)||_{X^{p}_{L^{2M}}} \sum_{\substack{|k'|\lesssim L^{2M}\\ |k'-k|\lesssim 1}} \langle k'\rangle^{-p} \\ \lesssim& L^{O(1+l(T)\theta)} \rho^{l(T)} \sup_{t}||w(t)||_{X^{p}_{L^{2M}}} \langle k\rangle^{-p}. \end{split} \end{equation} Since $l(T)=\sum_{j=1}^K l(T_j)$, $L$-certainly we have \begin{equation} \left|\left|\sum_{\tau_1,\cdots,\tau_K}\prod_{j=1}^K\mathcal{P}^{\tau_j}_{T_j}\right|\right|_{L_t^{\infty}X^{p}}\le L^{O\left(1+\theta\sum_{j=1}^K l(T_j)\right)} \rho^{\sum_{j=1}^K l(T_j)}||w||_{L_t^{\infty}X^{p}_{L^{2M}}}. \end{equation} Therefore, we complete the proof of Proposition \ref{prop.operatorupperbound'}. \end{proof} \subsection{Asymptotics of the main terms} In this section, we prove \eqref{eq.n1} in Theorem \ref{th.main}, which characterizes the asymptotic behavior of $n^{(1)}(k)$. \begin{prop}\label{prop.mainterms} Using the same notation as in Theorem \ref{th.main} (2), we have \begin{equation} n^{(1)}(k)=\left\{ \begin{aligned} &\frac{t}{T_{\mathrm{kin}}}\mathcal K(n_{\mathrm{in}})(k)+O_{\ell^\infty_k}\left(L^{-\theta}\frac{T_{\text{max}}}{T_{\mathrm {kin}}}\right)+\widetilde{O}_{\ell^\infty_k}\left(\epsilon_1\text{Err}_{D}(k_x)\frac{T_{\text{max}}}{T_{\mathrm {kin}}}\right) && \text{for any } |k|\le \epsilon_1 l_{d}^{-1}, \\ &0, && \text{for any } |k|\ge 2C_{2} l_{d}^{-1} \end{aligned}\right.
\end{equation} Here \begin{equation} \text{Err}_{D}(k_x)=\left\{\begin{aligned} &D^{d+1}, && \text{if } |k_x|\le D, \\ &D^{d-1}(|k_x|^2+D|k_x|), && \text{if } |k_x|\ge D. \end{aligned} \right. \end{equation} \end{prop} \begin{proof} The second case of \eqref{eq.n1} is obvious. We divide the proof of the first case into several steps. \textbf{Step 1.} (Calculation of $\psi_{app,k}$ and $n^{(1)}(k)$) The first three terms in the tree expansion \eqref{eq.approxsol} can be calculated explicitly. \begin{equation} \begin{split} \psi_{app,k}=\psi^{(0)}_{app,k}+\psi^{(1)}_{app,k}+\psi^{(2)}_{app,k}+\cdots \end{split} \end{equation} where $\psi^{(0)}_{app,k}$, $\psi^{(1)}_{app,k}$, $\psi^{(2)}_{app,k}$ are given by \begin{equation} \psi^{(0)}_{app,k}=\xi_k \end{equation} \begin{equation} \psi^{(1)}_{app,k}=\frac{i\lambda}{L^{d}} \sum\limits_{k_1+k_2=k} k_{x}\xi_{k_1} \xi_{k_2} \int^{t}_0e^{i s\Omega(k_1,k_2,k)- \nu|k|^2(t-s)} ds \end{equation} \begin{equation} \begin{split} \psi^{(2)}_{app,k}=&-2\left(\frac{\lambda}{L^{d}}\right)^2 \sum\limits_{k_1+k_2+k_3=k} k_{x}(k_{2x}+k_{3x})\xi_{k_1} \xi_{k_2}\xi_{k_3}\times \\ &\int_{0\le r<s\le t}e^{i s\Omega(k_1,k_2+k_3,k)- \nu|k|^2(t-s)} e^{i r\Omega(k_2,k_3,k_2+k_3)- \nu|k_2+k_3|^2(s-r)} dsdr \end{split} \end{equation} By \eqref{eq.n(j)}, we know that \begin{equation}\label{eq.n(1)} n^{(1)}(k)=\mathbb E \left|\psi^{(1)}_{app,k}\right|^2+ 2\text{Re}\ \mathbb E \left(\psi^{(2)}_{app,k}\overline{\xi_k}\right). \end{equation} \textbf{Step 2.} (Decomposition of $\psi_{app,k}$) The factor $e^{- \nu|k|^2(t-s)}$ is expected to be close to $1$, so we have the decomposition $e^{- \nu|k|^2(t-s)}=1+(e^{- \nu|k|^2(t-s)}-1)=1-\left(\int_{0}^1 e^{-a\nu|k|^2(t-s)} da\right) \nu|k|^2(t-s)$. We decompose $\psi^{(1)}_{app,k}=\psi^{(11)}_{app,k}+\psi^{(12)}_{app,k}$ and $\psi^{(2)}_{app,k}=\psi^{(21)}_{app,k}+\psi^{(22)}_{app,k}$ accordingly.
\begin{equation} \psi^{(11)}_{app,k}=\frac{i\lambda}{L^{d}} \sum\limits_{k_1+k_2=k} k_{x}\xi_{k_1} \xi_{k_2} \int^{t}_0e^{i s\Omega(k_1,k_2,k)} ds \end{equation} \begin{equation} \psi^{(12)}_{app,k}=-\nu|k|^2\frac{i\lambda}{L^{d}} \int_{0}^1\Bigg(\sum\limits_{k_1+k_2=k} k_{x}\xi_{k_1} \xi_{k_2} \int^{t}_0e^{i s\Omega(k_1,k_2,k)- a\nu|k|^2(t-s)} (t-s)ds\Bigg) da \end{equation} \begin{equation} \psi^{(21)}_{app,k}=-2\left(\frac{\lambda}{L^{d}}\right)^2 \sum\limits_{k_1+k_2+k_3=k} k_{x}(k_{2x}+k_{3x})\xi_{k_1} \xi_{k_2}\xi_{k_3}\int_{0\le r<s\le t}e^{i s\Omega(k_1,k_2+k_3,k)} e^{i r\Omega(k_2,k_3,k_2+k_3)} dsdr \end{equation} \begin{equation} \begin{split} \psi^{(22)}_{app,k}=&2\left(\frac{\lambda}{L^{d}}\right)^2 \int_{0}^1\Bigg(\sum\limits_{k_1+k_2+k_3=k} k_{x}(k_{2x}+k_{3x})\int_{0\le r<s\le t}(\nu|k|^2(t-s)+\nu|k_2+k_3|^2(s-r)) \\ & e^{i s\Omega(k_1,k_2+k_3,k)- a\nu|k|^2(t-s)} e^{i r\Omega(k_2,k_3,k_2+k_3)- a\nu|k_2+k_3|^2(s-r)} dsdr\xi_{k_1}\xi_{k_2}\xi_{k_3}\Bigg)da \end{split} \end{equation} $n^{(1)}(k)$ is also decomposed into $n^{(1)}(k)=n^{(11)}(k)+n^{(12)}(k)$ \begin{equation}\label{eq.n(11)} n^{(11)}(k)=\mathbb E \left|\psi^{(11)}_{app,k}\right|^2+ 2\text{Re}\ \mathbb E \left(\psi^{(21)}_{app,k}\overline{\xi_k}\right). \end{equation} \begin{equation} n^{(12)}(k)=\mathbb E \left|\psi^{(12)}_{app,k}\right|^2+2\text{Re}\ \mathbb E \left(\psi^{(11)}_{app,k}\overline{\psi^{(12)}_{app,k}}\right)+ 2\text{Re}\ \mathbb E \left(\psi^{(22)}_{app,k}\overline{\xi_k}\right). \end{equation} \textbf{Step 3.} (Estimate of $n^{(12)}(k)$ for $|k|\le \epsilon_1 l_{d}^{-1}$) In this step, all constants in $\lesssim$ depend only on the dimension $d$. 
Define \begin{equation}\label{eq.H(11)} H^{(11)}_{k_1k_2k}=k_{x} \int^{t}_0e^{i s\Omega(k_1,k_2,k)} ds \end{equation} \begin{equation}\label{eq.H(12)} H^{(12)}_{k_1k_2k}=-\nu|k|^2k_{x} \int^{t}_0e^{i s\Omega(k_1,k_2,k)- a\nu|k|^2(t-s)} (t-s)ds \end{equation} \begin{equation}\label{eq.H(21)} H^{(21)}_{k_1k_2k_3k}=2k_{x}(k_{2x}+k_{3x})\int_{0\le r<s\le t}e^{i s\Omega(k_1,k_2+k_3,k)} e^{i r\Omega(k_2,k_3,k_2+k_3)} dsdr \end{equation} \begin{equation}\label{eq.H(22)} \begin{split} H^{(22)}_{k_1k_2k_3k}=&-2k_{x}(k_{2x}+k_{3x})\int_{0\le r<s\le t}(\nu|k|^2(t-s)+\nu|k_2+k_3|^2(s-r)) \\ & e^{i s\Omega(k_1,k_2+k_3,k)- a\nu|k|^2(t-s)} e^{i r\Omega(k_2,k_3,k_2+k_3)- a\nu|k_2+k_3|^2(s-r)} dsdr \end{split} \end{equation} Then we have \begin{equation}\label{eq.psi(11)app} \psi^{(11)}_{app,k}=\frac{i\lambda}{L^{d}} \sum\limits_{k_1+k_2=k} H^{(11)}_{k_1k_2k}\xi_{k_1} \xi_{k_2} \end{equation} \begin{equation}\label{eq.psi(12)app} \psi^{(12)}_{app,k}=\int^1_{0}\frac{i\lambda}{L^{d}} \sum\limits_{k_1+k_2=k} H^{(12)}_{k_1k_2k}\xi_{k_1} \xi_{k_2} da \end{equation} \begin{equation}\label{eq.psi(21)app} \psi^{(21)}_{app,k}=\left(\frac{i\lambda}{L^{d}}\right)^2 \sum\limits_{k_1+k_2+k_3=k} H^{(21)}_{k_1k_2k_3k}\xi_{k_1} \xi_{k_2}\xi_{k_3} \end{equation} \begin{equation}\label{eq.psi(22)app} \psi^{(22)}_{app,k}=\int^1_{0}\left(\frac{i\lambda}{L^{d}}\right)^2 \sum\limits_{k_1+k_2+k_3=k} H^{(22)}_{k_1k_2k_3k}\xi_{k_1} \xi_{k_2}\xi_{k_3} da \end{equation} To derive an upper bound for $n^{(12)}(k)$, it suffices to consider $\mathbb E \left|\psi^{(12)}_{app,k}\right|^2$, $\text{Re}\ \mathbb E \left(\psi^{(11)}_{app,k}\overline{\psi^{(12)}_{app,k}}\right)$ and $\text{Re}\ \mathbb E \left(\psi^{(22)}_{app,k}\overline{\xi_k}\right)$ separately. \textbf{Step 3.1.} (Upper bounds of $\mathbb E \left|\psi^{(12)}_{app,k}\right|^2$ and $\text{Re}\ \mathbb E \left(\psi^{(11)}_{app,k}\overline{\psi^{(12)}_{app,k}}\right)$) We first derive upper bounds for $H^{(11)}$ and $H^{(12)}$.
For $H^{(11)}$, we have \begin{equation} \begin{split} |H^{(11)}_{k_1k_2k}|\lesssim& \left|\int^{t}_0\frac{k_x}{i(\Omega+T^{-1}_{\text{max}}\text{sgn}(\Omega))}e^{-isT^{-1}_{\text{max}}\text{sgn}(\Omega)}\frac{d}{ds}e^{i s\Omega+is\, \text{sgn}(\Omega)/T_{\text{max}}} ds\right| \end{split} \end{equation} Integrating by parts, we get \begin{equation}\label{eq.H(11)bound} \begin{split} |H^{(11)}_{k_1k_2k}|\lesssim&\left|\int^{t}_0\frac{T_{\text{max}}^{-1}k_x}{\Omega+T^{-1}_{\text{max}}\text{sgn}(\Omega)}e^{i s\Omega} ds\right|+\left|\left[\frac{k_x}{i(\Omega+T^{-1}_{\text{max}}\text{sgn}(\Omega))}e^{i s\Omega}\right]_{0}^t\right| \\ \lesssim& \frac{|k_x|}{|\Omega|+T^{-1}_{\text{max}}} \end{split} \end{equation} By a similar integration by parts argument, we get \begin{equation}\label{eq.H(12)bound} |H^{(12)}_{k_1k_2k}|\lesssim \frac{|k|^2|k_x|\nu T_{\text{max}}}{|\Omega|+T^{-1}_{\text{max}}}. \end{equation} By \eqref{eq.psi(12)app}, we know that \begin{equation}\label{eq.psi(12)bound} \begin{split} \mathbb E \left|\psi^{(12)}_{app,k}\right|^2\le& \int^1_{0}\frac{\lambda^2}{L^{2d}} \mathbb E\left|\sum\limits_{k_1+k_2=k} H^{(12)}_{k_1k_2k}\xi_{k_1} \xi_{k_2}\right|^2 da \\ \le& \frac{2\lambda^2}{L^{2d}} \sup_{a\in[0,1]}\sum\limits_{k_1+k_2=k} \left|H^{(12)}_{k_1k_2k}\right|^2n(k_1) n(k_2) \\ \le& \frac{2\lambda^2}{L^{2d}} (|k|^2|k_x|\nu T_{\text{max}})^2 \sum\limits_{k_1} \frac{n(k_1) n(k-k_1)}{(|\Omega(k_1,k-k_1,k)|+T^{-1}_{\text{max}})^2} \\ \le& \frac{2\epsilon_1^2\lambda^2}{L^{2d}} \max(|k_x|^2,D^2) T_{\text{max}} L^dD^{d-1}=2\epsilon_1^2 \max(|k_x|^2,D^2) T_{\text{max}} T^{-1}_{\text{kin}}D^{d-1} \end{split} \end{equation} Here in the second inequality we apply the Wick theorem to calculate the expectation. In the third inequality we apply \eqref{eq.H(12)bound}. In the last line we apply \eqref{eq.asymptoticsbound} in Theorem \ref{th.numbertheory} by taking $t=T_{\text{max}}$, $g(x)=\frac{1}{(1+|x|)^2}$ and $F(k_1)=n(k_1) n(k-k_1)$.
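The mechanism behind \eqref{eq.H(11)bound} is the elementary bound $\left|\int_0^t e^{is\Omega}\,ds\right|=\left|\frac{2\sin(t\Omega/2)}{\Omega}\right|\le\min\left(t,\frac{2}{|\Omega|}\right)$, which is comparable to $(|\Omega|+t^{-1})^{-1}$ up to an absolute constant. A quick numerical sanity check of this bound (illustrative only; not part of the proof):

```python
import numpy as np

def osc_integral(omega, t):
    """Exact value of |int_0^t e^{i s Omega} ds| = |(e^{i t Omega} - 1)/Omega|."""
    return abs((np.exp(1j * omega * t) - 1) / omega)

rng = np.random.default_rng(0)
for _ in range(1000):
    omega = rng.uniform(-50.0, 50.0)
    t = rng.uniform(0.01, 10.0)
    if abs(omega) < 1e-8:
        continue
    val = osc_integral(omega, t)
    # |int_0^t e^{is Omega} ds| <= min(t, 2/|Omega|), which is in turn
    # comparable to 1/(|Omega| + 1/t) up to an absolute constant (3 suffices)
    assert val <= min(t, 2.0 / abs(omega)) + 1e-12
    assert val <= 3.0 / (abs(omega) + 1.0 / t) + 1e-12
print("oscillatory integral bound verified")
```

With $t=T_{\text{max}}$ this is exactly the $\frac{1}{|\Omega|+T^{-1}_{\text{max}}}$ decay exploited above.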
In the last line we also use the facts that $|k|\le \epsilon_1 l_{d}^{-1}=\epsilon_1 (\nu T_{\text{max}})^{-\frac{1}{2}}$, $T_{\text{kin}}=\frac{1}{8\pi\alpha^2}=\frac{L^{d}}{8\pi\lambda^2}$ and $|k_x|\le |k_{1x}|+|k_{2x}|\lesssim D$. By \eqref{eq.H(11)bound} and \eqref{eq.H(12)bound}, we know that \begin{equation}\label{eq.psi(11)(12)bound} \begin{split} \left|\text{Re}\ \mathbb E \left(\psi^{(11)}_{app,k}\overline{\psi^{(12)}_{app,k}}\right)\right| \le& \int^1_{0}\frac{\lambda^2}{L^{2d}} \left|\mathbb E\left(\sum\limits_{k_1+k_2=k} H^{(11)}_{k_1k_2k}\xi_{k_1} \xi_{k_2} \sum\limits_{k_1'+k_2'=k} \overline{H^{(12)}_{k_1'k_2'k}\xi_{k_1'} \xi_{k_2'}}\right)\right| da \\ \le& \frac{2\lambda^2}{L^{2d}} \sup_{a\in[0,1]}\sum\limits_{k_1+k_2=k} \left|H^{(11)}_{k_1k_2k}\right|\left|H^{(12)}_{k_1k_2k}\right|n(k_1) n(k_2) \\ \le& \frac{2\lambda^2}{L^{2d}} |k|^2|k_x|^2\nu T_{\text{max}} \sum\limits_{k_1} \frac{n(k_1) n(k-k_1)}{(|\Omega(k_1,k-k_1,k)|+T^{-1}_{\text{max}})^2} \\ \le& \frac{2\epsilon_1\lambda^2}{L^{2d}} \max(|k_x|^2,D^2) T_{\text{max}} L^dD^{d-1}=\epsilon_1 \max(|k_x|^2,D^2) T_{\text{max}} T^{-1}_{\text{kin}} D^{d-1} \end{split} \end{equation} Here in the second inequality we apply the Wick theorem to calculate the expectation. In the third inequality we apply \eqref{eq.H(11)bound} and \eqref{eq.H(12)bound}. In the last line we apply \eqref{eq.asymptoticsbound} in Theorem \ref{th.numbertheory} by taking $t=T_{\text{max}}$, $g(x)=\frac{1}{(1+|x|)^2}$ and $F(k_1)=n(k_1) n(k-k_1)$. In the last line we also use the facts that $|k|\le \epsilon_1 l_{d}^{-1}=\epsilon_1 (\nu T_{\text{max}})^{-\frac{1}{2}}$, $T_{\text{kin}}=\frac{1}{8\pi\alpha^2}=\frac{L^{d}}{8\pi\lambda^2}$ and $|k_x|\le |k_{1x}|+|k_{2x}|\lesssim D$. \eqref{eq.psi(12)bound} and \eqref{eq.psi(11)(12)bound} give the desired upper bounds of $\mathbb E \left|\psi^{(12)}_{app,k}\right|^2$ and $\text{Re}\ \mathbb E \left(\psi^{(11)}_{app,k}\overline{\psi^{(12)}_{app,k}}\right)$.
\textbf{Step 3.2.} (Upper bound of $\text{Re}\ \mathbb E \left(\psi^{(22)}_{app,k}\overline{\xi_k}\right)$) By \eqref{eq.psi(22)app}, we have \begin{equation} \text{Re}\ \mathbb E \left(\psi^{(22)}_{app,k}\overline{\xi_k}\right)=\int^1_{0}\left(\frac{i\lambda}{L^{d}}\right)^2 \text{Re}\,\sum\limits_{k_1+k_2+k_3=k} H^{(22)}_{k_1k_2k_3k}\mathbb E\left(\xi_{k_1} \xi_{k_2}\xi_{k_3}\overline{\xi_k}\right) da \end{equation} By the Wick theorem, $\mathbb E\left(\xi_{k_1} \xi_{k_2}\xi_{k_3}\overline{\xi_k}\right)=n(k_1)n(k)\left(\delta_{k_1=-k_2}\delta_{k_3=k}+\delta_{k_1=-k_3}\delta_{k_2=k}\right)+n(k)n(k_2)\delta_{k_1=k}\delta_{k_2=-k_3}$. Therefore we get \begin{equation}\label{eq.H(22)wick} \begin{split} \text{Re}\ \mathbb E \left(\psi^{(22)}_{app,k}\overline{\xi_k}\right)=&\int^1_{0}\left(\frac{i\lambda}{L^{d}}\right)^2 \text{Re}\,\sum\limits_{k_1+k_2+k_3=k} H^{(22)}_{k_1k_2k_3k}\left(n(k_1)n(k)\delta_{k_1=-k_2}\delta_{k_3=k}+n(k_1)n(k)\delta_{k_1=-k_3}\delta_{k_2=k}+0\right) da \\ =&\int^1_{0}2\left(\frac{i\lambda}{L^{d}}\right)^2 \sum\limits_{k_1} n(k_1)n(k)\,\text{Re}\,\left(H^{(22)}_{k_1,-k_1,k,k}\right) da \end{split} \end{equation} Here in the first equality the term corresponding to $\delta_{k_1=k}\delta_{k_2=-k_3}$ vanishes because $H^{(22)}_{k,k_2,-k_2,k}=0$ and the two terms corresponding to $\delta_{k_1=-k_2}\delta_{k_3=k}$, $\delta_{k_1=-k_3}\delta_{k_2=k}$ are equal. By \eqref{eq.H(22)}, we get \begin{equation} \begin{split} H^{(22)}_{k_1,-k_1,k,k}=-2k_{x}(k_{x}-k_{1x})\int_{0\le r<s\le t}&(\nu|k|^2(t-s)+\nu|k-k_1|^2(s-r)) \\ & e^{i (s-r)\Omega(k_1,k-k_1,k)- a\nu|k|^2(t-s)-a\nu|k-k_1|^2(s-r)} dsdr. \end{split} \end{equation} We find an upper bound for $\text{Re}\,\left(H^{(22)}_{k_1,-k_1,k,k}\right)$ using integration by parts.
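Writing out the pairing structure may be helpful here. Under the Gaussian conventions suggested by these pairings, namely $\mathbb E\left(\xi_{k}\overline{\xi_{k'}}\right)=n(k)\,\delta_{k=k'}$ and $\overline{\xi_k}=\xi_{-k}$, the fourth moment factorizes as:

```latex
\mathbb E\left(\xi_{k_1}\xi_{k_2}\xi_{k_3}\overline{\xi_{k}}\right)
=\mathbb E\left(\xi_{k_1}\xi_{k_2}\right)\mathbb E\left(\xi_{k_3}\overline{\xi_{k}}\right)
+\mathbb E\left(\xi_{k_1}\xi_{k_3}\right)\mathbb E\left(\xi_{k_2}\overline{\xi_{k}}\right)
+\mathbb E\left(\xi_{k_1}\overline{\xi_{k}}\right)\mathbb E\left(\xi_{k_2}\xi_{k_3}\right),
\qquad
\mathbb E\left(\xi_{k_i}\xi_{k_j}\right)=n(k_i)\,\delta_{k_i=-k_j},
```

which is the three-pairing sum used above; all higher cumulants vanish for Gaussian fields.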
\begin{equation}\label{eq.ReH(22)bound} \begin{split} &\text{Re}\,\left(H^{(22)}_{k_1,-k_1,k,k}\right)=2k_{x}(k_{x}-k_{1x})\int_{0\le r<s\le t}(\nu|k|^2(t-s)+\nu|k-k_1|^2(s-r)) \\ & \frac{e^{irT^{-1}_{\text{max}}\text{sgn}\,\Omega}}{i(\Omega+T^{-1}_{\text{max}}\text{sgn}\,\Omega)}\frac{d}{dr}e^{i (s-r)\Omega-irT^{-1}_{\text{max}}\text{sgn}\,\Omega}e^{- a\nu|k|^2(t-s)-a\nu|k-k_1|^2(s-r)} dsdr \\ =&\text{Re}\,\frac{2k_{x}(k_{x}-k_{1x})}{i(\Omega+T^{-1}_{\text{max}}\text{sgn}\,\Omega)}\int_{0\le s\le t}\nu|k|^2(t-s) e^{- a\nu|k|^2(t-s)} ds \\ -&\text{Re}\,\frac{2k_{x}(k_{x}-k_{1x})}{i(\Omega+T^{-1}_{\text{max}}\text{sgn}\,\Omega)}\int_{0\le s\le t}(\nu|k|^2(t-s)+\nu|k-k_1|^2s) e^{- a\nu|k|^2(t-s)-a\nu|k-k_1|^2s} e^{is\Omega} ds \\ -&\text{Re}\,\frac{2k_{x}(k_{x}-k_{1x})}{i(\Omega+T^{-1}_{\text{max}}\text{sgn}\,\Omega)}\int_{0\le r<s\le t}\big[\nu|k-k_1|^2(\nu|k|^2(t-s)+\nu|k-k_1|^2(s-r)-1) \\ & -iT^{-1}_{\text{max}}\text{sgn}\,\Omega\big]e^{i (s-r)\Omega- a\nu|k|^2(t-s)-a\nu|k-k_1|^2(s-r)} dsdr. \end{split} \end{equation} The first term on the right-hand side vanishes after taking the real part, since the integral is real and the prefactor is purely imaginary. Using the same integration by parts argument as in \eqref{eq.H(11)bound}, the second term can be bounded by \begin{equation}\label{eq.H(22)secondterm} \frac{|k|^2|k_x|(|k_{1x}|+|k_x|)\nu T_{\text{max}}}{(|\Omega(k_1,k-k_1,k)|+T^{-1}_{\text{max}})^2}.
\end{equation} The last term can be bounded by integrating by parts in the following integral \begin{equation}\label{eq.asymstep3.2} \begin{split} \int_{0\le r<s\le t}\big[&\nu|k-k_1|^2(\nu|k|^2(t-s)+\nu|k-k_1|^2(s-r)-1) \\ & -iT^{-1}_{\text{max}}\text{sgn}\,\Omega\big]\frac{e^{irT^{-1}_{\text{max}}\text{sgn}\,\Omega}}{i(\Omega+T^{-1}_{\text{max}}\text{sgn}\,\Omega)}\frac{d}{dr}e^{i (s-r)\Omega}e^{- a\nu|k|^2(t-s)-a\nu|k-k_1|^2(s-r)} dsdr \end{split} \end{equation} Integrating by parts and bounding the three resulting integrals by the absolute values of their integrands, we get \begin{equation} |\eqref{eq.asymstep3.2}|\le \frac{|k|^2}{|\Omega|+T^{-1}_{\text{max}}} \end{equation} Therefore, the last term in \eqref{eq.ReH(22)bound} can also be bounded by \eqref{eq.H(22)secondterm}. Then we get \begin{equation} \text{Re}\,\left(H^{(22)}_{k_1,-k_1,k,k}\right)\lesssim \nu T_{\text{max}}|k|^2|k_x|\frac{|k_{1x}|+|k_x|}{(|\Omega(k_1,k-k_1,k)|+T^{-1}_{\text{max}})^2}. \end{equation} Substituting into \eqref{eq.H(22)wick}, we have \begin{equation}\label{eq.psi(22)bound} \begin{split} \text{Re}\ \mathbb E \left(\psi^{(22)}_{app,k}\overline{\xi_k}\right)&\lesssim\frac{\lambda^2}{L^{2d}} |k|^2|k_x|\nu T_{\text{max}}n(k)\sum\limits_{k_1} \frac{|k_{1x}|+|k_x|}{(|\Omega(k_1,k-k_1,k)|+T^{-1}_{\text{max}})^2}n(k_1) \\ &\lesssim \frac{\epsilon_1\lambda^2}{L^{2d}}|k_x|\sum\limits_{k_1} \frac{D}{(|\Omega(k_1,k-k_1,k)|+T^{-1}_{\text{max}})^2}n(k_1) \\ &+\frac{\epsilon_1\lambda^2}{L^{2d}}|k_x|^2\sum\limits_{k_1} \frac{1}{(|\Omega(k_1,k-k_1,k)|+T^{-1}_{\text{max}})^2}n(k_1) \\ &\lesssim \epsilon_1\frac{T_{\text{max}}}{T_{\text{kin}}}\max(D^{d+1},|k_x|^2D^{d-1}+|k_x|D^{d}) \end{split} \end{equation} In the second inequality, we also use the facts that $|k|\le \epsilon_1 l_{d}^{-1}=\epsilon_1 (\nu T_{\text{max}})^{-\frac{1}{2}}$ and $T_{\text{kin}}=\frac{1}{8\pi\alpha^2}=\frac{L^{d}}{8\pi\lambda^2}$.
In the last inequality we apply \eqref{eq.asymptoticsbound} in Theorem \ref{th.numbertheory} by taking $t=T_{\text{max}}$, $g(x)=\frac{1}{(1+|x|)^2}$, $F(k_1)=n(k_1)$, and the fact that $|k_x|\le |k_{1x}|+|k_{2x}|\lesssim D$. Combining \eqref{eq.psi(12)bound}, \eqref{eq.psi(11)(12)bound} and \eqref{eq.psi(22)bound}, we get the following upper bound \begin{equation}\label{eq.n(12)final} n^{(12)}(k)\lesssim \epsilon_1\frac{T_{\text{max}}}{T_{\text{kin}}}\underbrace{\max(D^{d+1},|k_x|^2D^{d-1}+|k_x|D^{d})}_{\text{Err}_{D}(k_x)}. \end{equation} \textbf{Step 4.} (Asymptotics of $n^{(11)}(k)$) By \eqref{eq.n(11)} and the Wick theorem, we get \begin{equation}\label{eq.n(11)asym} \begin{split} n^{(11)}(k)=&\frac{2\lambda^2}{L^{2d}} |k_x|^2\sum\limits_{k_1+k_2=k}n(k_1) n(k_2) \left|\int^{t}_0e^{i s\Omega(k_1,k_2,k)} ds\right|^2 \\ -&\frac{8\lambda^2}{L^{2d}}\sum_{k_1}k_x(k_x-k_{1x})n(k_1) n(k)\text{Re}\left(\int_{0\le r<s\le t} e^{i (s-r)\Omega(k_1,k-k_1,k)} dsdr\right) \\ =&\frac{2\lambda^2}{L^{2d}} |k_x|^2\sum\limits_{k_1+k_2=k}n(k_1) n(k_2) \frac{4\sin^2 \left(\frac{t}{2}\Omega(k_1,k_2,k)\right)}{\Omega^2(k_1,k_2,k)} \\ -&\frac{8\lambda^2}{L^{2d}}\sum_{k_1}k_x(k_x-k_{1x})n(k_1) n(k) \frac{2\sin^2 \left(\frac{t}{2}\Omega(k_1,k-k_1,k)\right)}{\Omega^2(k_1,k-k_1,k)} \\ =&8\pi \alpha^2t|k_x|^2\int_{\substack{(k_1, k_2)\in \mathbb R^{2d}\\k_1+k_2=k}}n(k_1) n(k_2)\delta(|k_1|^2k_{1x}+|k_2|^2k_{2x}-|k|^2k_{x})\, dk_1 dk_2 \\ -& 16\pi \alpha^2t\, n(k)\int_{\mathbb{R}^d}k_x(k_x-k_{1x})n(k_1) \delta(|k_1|^2k_{1x}+|k-k_1|^2(k_{x}-k_{1x})-|k|^2k_{x})\, dk_1+O\left(L^{-\theta}\frac{T_{\text{max}}}{T_{\mathrm {kin}}}\right) \\ =&\frac{t}{T_{\text{kin}}}\mathcal{K}(n)(k)+O\left(L^{-\theta}\frac{T_{\text{max}}}{T_{\mathrm {kin}}}\right) \end{split} \end{equation} Here in the third equality we apply \eqref{eq.numbertheory1} in Theorem \ref{th.numbertheory} by taking $t\rightarrow\frac{t}{2}$, $g(x)=\frac{\sin^2(x)}{x^2}$, and $F(k_1)=n(k_1) n(k-k_1)$ or $F(k_1)=n(k_1)$.
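The passage from the $\sin^2$ kernels to the delta functions rests on the standard fact that $\frac{4\sin^2(t\Omega/2)}{\Omega^2}$ has total mass $2\pi t$ and concentrates at $\Omega=0$ as $t\to\infty$. A quick numerical check of the normalization (illustrative only; the rigorous statement is \eqref{eq.numbertheory1} in Theorem \ref{th.numbertheory}):

```python
import numpy as np

def kernel_mass(t, cutoff=1000.0, n=2_000_001):
    """Numerically integrate 4*sin(t*x/2)**2 / x**2 over [-cutoff, cutoff].

    The identity 4*sin(t*x/2)**2 / x**2 = t**2 * sinc(t*x/(2*pi))**2 avoids
    the removable singularity at x = 0 (np.sinc(y) = sin(pi*y)/(pi*y)).
    """
    x = np.linspace(-cutoff, cutoff, n)
    f = t ** 2 * np.sinc(t * x / (2 * np.pi)) ** 2
    dx = x[1] - x[0]
    return dx * (f.sum() - 0.5 * (f[0] + f[-1]))  # trapezoid rule

# total mass is 2*pi*t, so the kernel acts like 2*pi*t*delta(x) for large t
for t in (1.0, 2.0, 3.0):
    assert abs(kernel_mass(t) - 2 * np.pi * t) < 0.05
print("sin^2 kernel has mass 2*pi*t")
```

The truncation error is of order $4/\text{cutoff}$, well inside the asserted tolerance.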
In the third equality we also use the fact that $T_{\text{kin}}=\frac{1}{8\pi\alpha^2}=\frac{L^{d}}{8\pi\lambda^2}$. Combining \eqref{eq.n(12)final} and \eqref{eq.n(11)asym}, we complete the proof of the first case in \eqref{eq.n1}. \end{proof} \medskip
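The error term appears in two guises above: the piecewise $\text{Err}_{D}(k_x)$ in the statement of the proposition and the $\max(D^{d+1},|k_x|^2D^{d-1}+|k_x|D^{d})$ form in \eqref{eq.n(12)final}. The two agree up to a factor of $2$, as the following quick check illustrates (integer parameters used purely for illustration):

```python
def err_D(kx, D, d):
    """Piecewise error term Err_D(k_x) from the proposition (kx, D >= 0)."""
    return D ** (d + 1) if kx <= D else D ** (d - 1) * (kx ** 2 + D * kx)

def max_form(kx, D, d):
    """The max(...) form appearing in the bound for n^{(12)}(k)."""
    return max(D ** (d + 1), kx ** 2 * D ** (d - 1) + kx * D ** d)

for d in (1, 2, 3):
    for D in (1, 2, 5, 10):
        for kx in range(0, 4 * D + 1):
            e, m = err_D(kx, D, d), max_form(kx, D, d)
            assert e <= m <= 2 * e  # the two forms agree up to a factor of 2
```

For $|k_x|\ge D$ the two expressions are in fact equal, since $|k_x|^2D^{d-1}\ge D^{d+1}$ there.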
\section{Introduction} \label{INTRODUCTION} In \cite{Man84} and \cite{Man87}, Manin suggested that physical theories carry arithmetic structures and that physics as we know it in the real world in fact exists in an adelic form, with $p$-adic realizations alongside its real form. These $p$-adic manifestations of physical theories can be used, by virtue of consistency constraints between the archimedean and the nonarchimedean places of adelic objects, to determine the real part through knowledge of the $p$-adic side. Since some of the objects that exist on the $p$-adic side, such as the Bruhat--Tits trees \cite{BT72}, have a discrete, combinatorial nature, one can take advantage of this structure to carry out computations in a more convenient discretized setting. There has been a considerable amount of work over the years concerned with developing various aspects of physics in a $p$-adic setting, see for instance the reviews \cite{Vlad,Khren,Dragovich:2009hd,Dragovich:2017kge}. In particular, the Bruhat--Tits tree of ${\mathbb Q}_p$ (the $p$-adic numbers) and its boundary ${\mathbb P}^1({\mathbb Q}_p)$ were used in the setting of $p$-adic string theory~\cite{Freund:1987ck,Brekke:1988dg,Chekhov:1989bg,Zab,Brekke:1993gf}. More recently, a different perspective was taken on $p$-adic numbers in physics based on the observation that there exist natural pairs of ``bulk and boundary'' spaces associated to $p$-adic algebraic curves, with many similarities to the AdS/CFT correspondence. In AdS$_3$/CFT$_2$, one often realizes the boundary conformal field theory as living on the genus zero Riemann surface $\mathbb{P}^1(\mathbb{C})$ with the hyperbolic bulk AdS space as a coset. Higher genus bulk and boundary spaces can be seen as generalizations of the (Euclidean) BTZ black hole~\cite{Krasnov:2000zq}. The original BTZ black hole~\cite{Banados:1992wn} corresponds to the genus one case. 
As curves and cosets, the construction of these spaces is entirely algebraic, and one may find other spaces by changing the underlying field. Based on this analogy, in \cite{Manin:2002hn} it was suggested that certain $p$-adic algebraic curves coming from the replacement $\mathbb{C} \rightarrow \mathbb{Q}_p$ such as $\mathbb{P}^1(\mathbb{Q}_p)$ and the higher genus Mumford curves of \cite{Mum}, are suitable boundary spaces for holography. An attractive feature of this proposal is that the bulk space is still a coset, but now becomes the Bruhat--Tits tree at genus zero. This opens up the possibility of studying certain features of AdS/CFT by passing to a $p$-adic setting where the bulk and boundary geometry are relatively simple. A detailed theory of the $p$-adic AdS/CFT correspondence was only established much more recently. Appropriate boundary and bulk field theories for $p$-adic holography were developed recently and independently in \cite{Gubser:2016guj, Heydeman:2016ldy}, where certain essential features such as the holographic computation of correlation functions in $p$-adic conformal field theories were established. Many lines of inquiry parallel the situation in real AdS/CFT, but the discrete $p$-adic geometry often makes these models much more solvable. These models have been explored further in a series of subsequent papers, such as \cite{Gubser:2016htz,Gubser:2017vgc,Gubser:2017tsi,Bhattacharyya:2017aly,Gubser:2017qed,Bhowmick:2018bmn,Qu:2018ned,Jepsen:2018dqp,Gubser:2018cha}. Much of this work has focused on exploring analogies between $p$-adic models and ordinary AdS/CFT, and searching for structures familiar from the traditional holographic correspondence in the discretized or $p$-adic world. 
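Since the Bruhat--Tits tree of $\mathbb{Q}_p$ is the infinite $(p+1)$-regular tree, its geometry is easy to explore computationally. The following sketch (our own illustration, not code from any of the cited works) grows a finite ball by breadth-first expansion and checks the sphere and ball cardinalities $(p+1)p^{n-1}$ and $1+(p+1)\frac{p^n-1}{p-1}$:

```python
def bruhat_tits_ball(p, radius):
    """Grow the ball of a given radius in the (p+1)-regular tree.

    Vertices are integers with 0 the root; returns (adjacency dict,
    list of sphere sizes indexed by distance from the root).
    """
    adj = {0: []}
    spheres = [[0]]
    next_id = 1
    for r in range(1, radius + 1):
        layer = []
        for v in spheres[r - 1]:
            # every vertex has degree p+1; non-root vertices already
            # have one neighbor (their parent), so they get p children
            while len(adj[v]) < p + 1:
                w = next_id
                next_id += 1
                adj[w] = [v]
                adj[v].append(w)
                layer.append(w)
        spheres.append(layer)
    return adj, [len(s) for s in spheres]

p, n = 3, 4
adj, sizes = bruhat_tits_ball(p, n)
assert sizes[1] == p + 1                       # the root has p+1 neighbors
assert all(sizes[r] == (p + 1) * p ** (r - 1) for r in range(1, n + 1))
assert len(adj) == 1 + (p + 1) * (p ** n - 1) // (p - 1)  # ball cardinality
```

The exponential growth of the boundary spheres is the discrete analog of the hyperbolic volume growth of AdS.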
Beyond holographic correlators, one may look for structures associated to the bulk geometry directly, including the Ryu--Takayanagi (RT) formula~\cite{Ryu:2006bv,Ryu:2006ef} and its quantum and covariant generalizations~\cite{Faulkner:2013ana,Hubeny:2007xt,Wall:2012uf}. One can also ask about other basic properties of the boundary entropy, such as the strong subadditivity property~\cite{Headrick:2007km} or other entropy inequalities. One may expect these aspects of holographic entropy to have a $p$-adic analog as well. The purpose of the present paper is to focus on the tensor network approach to holography, and in particular on holographic states generated by networks of perfect tensors as in~\cite{Pastawski:2015qua}. Like the Bruhat--Tits tree, tensor networks are generally discrete and provide simplified models to study bulk and boundary entanglement properties. In fact one is tempted to view the Bruhat--Tits tree itself as a tensor network~\cite{Bhattacharyya:2017aly,Marcolli:2018ohd}, or closely related to a tensor network~\cite{Heydeman:2016ldy}; however these models were unsuccessful in reproducing the RT formula or various expected entropy inequalities in the $p$-adic setting, so the question of whether entanglement entropy results familiar from the usual real holography hold over the $p$-adics has remained open. In this work, we present a simple model of holographic quantum error correction in the $p$-adic setting based on the existence of an infinite class of perfect tensors which can be used to build networks associated with the bulk geometry; either the Bruhat--Tits tree or its black hole variant. The tensor networks we use are closely related to those of ~\cite{Pastawski:2015qua} based on hyperbolic tessellations; in the simplest case of vacuum AdS this is described by the Schl\"{a}fli symbol $\{m,n\}$ and the associated tensor network $\{n,m\}$. 
Heuristically, in our model we study the limit in which the Schl\"{a}fli symbols of the associated bulk geometry and the dual tensor network tend to $\{\infty,p+1\}$ and $\{p+1,\infty\}$, respectively. The subtleties associated with the $p$-adic interpretation of such tensor networks built from perfect tensors of rank tending to infinity are discussed at length in this paper. We emphasize, however, that the genus 1 black hole tensor networks we consider are fundamentally different from those proposed in~\cite{Pastawski:2015qua}, and are instead obtained via a physically well-motivated quotient procedure. This model addresses a number of shortcomings of previous approaches to tensor network holography and allows for the explicit analytic computation of holographic states, density matrices, and entropies. While this model is discrete and preserves a (finite) group of conformal symmetries at finite cutoff, we see a full restoration of conformal symmetry as the cutoff is taken to zero, in the form of the $p$-adic fractional linear transformations $\PGL(2,\mathbb{Q}_p)$, which we interpret as the conformal group acting on a spatial region of the boundary. This group acts by isometries on the Bruhat--Tits tree, thought of as the analog of a time slice of AdS$_3$. The tensor network inherits the symmetries, and is essentially related to minimal geodesics of the Bruhat--Tits tree. Additionally, while the network is defined and manipulated in the bulk space, all quantities we compute can ultimately be defined or described in terms of purely boundary data such as configurations of points and sets on $\mathbb{P}^1(\mathbb{Q}_p)$ or the genus one curve. Given a choice of prime number $p$ and choice of bulk IR cutoff, there is essentially a single network defined for each bulk space which generates a highly entangled state of boundary qudits, interpreted as the analog of a vacuum state of a boundary conformal field theory.
Equipped with the $p$-adic analogs of various quantities and the knowledge of how to manipulate perfect tensors, we are able to explicitly compute nonarchimedean entanglement entropies for connected and disconnected intervals; the results are dual to minimal surfaces in the bulk, as expected from the Ryu--Takayanagi formula. We also compute the black hole entropy and find that it is proportional to the perimeter of the $p$-adic BTZ black hole, as expected according to the Bekenstein--Hawking formula, and verify the RT formula, which equates the von Neumann entropy of reduced density matrices obtained from mixed states on the boundary to the lengths of minimal geodesics in the $p$-adic black hole background homologous to boundary regions. We also give a holographic derivation of subadditivity, strong subadditivity, and the monogamy of mutual information in the $p$-adic setting. In the limit of an infinite network, all of these results can be phrased in terms of conformally invariant information on the boundary, where the ultrametric geometry plays an essential simplifying role. Another interesting feature is our use of graphical tools to perform bulk computations; essentially all entropy quantities can be obtained by geometric operations such as cutting, gluing, and tracing the discrete vertices and bonds of the network. Among other things, this leads to the interpretation of a thermal density matrix as being dual to a two-sided AdS black hole obtained by gluing two bulk regions together. We construct the network of perfect tensors associated to the Bruhat--Tits tree as follows. Rather than placing the tensors at the nodes of the Bruhat--Tits tree as previously suggested, we identify two explicit conditions for the construction of an appropriate ``dual graph'' on which the tensor network lives.
While this at first appears to require an embedding of a nonarchimedean bulk space into an ordinary archimedean plane, we show that this embedding and the dual graph can also be constructed by remaining entirely inside the $p$-adic world, using the Drinfeld $p$-adic plane \cite{Dri} together with the choice of a section of its projection to the Bruhat--Tits tree. For practical purposes, all the computations can be carried out using an embedding in the ordinary plane. The two main conditions required for the construction of the tensor network are that the sets of edges of the Bruhat--Tits tree and its dual graph are in one-to-one correspondence, with an edge of the dual graph cutting exactly one edge of the Bruhat--Tits tree and that the arrangements of dual graph edges around each vertex of the Bruhat--Tits tree form ``plaquettes," i.e., admit a cyclic ordering. Any construction of a dual graph that satisfies these properties can be used for the purpose of von Neumann entropy computations. In trying to capture holographic states, perfect tensors have appeared as a convenient way of generating maximally entangled states. We offer a refined point of view on perfect tensors, which was already partially outlined in~\cite{Heydeman:2016ldy} and~\cite{Marcolli:2018ohd} by some of the present authors. Starting with classical error-correcting codes in the form of Reed--Solomon codes built over projective lines over finite fields~\cite{Tsfa}, one may upgrade these to quantum codes by applying the CRSS algorithm~\cite{CRSS}, which we show can be generalized to directly obtain perfect tensors from certain self-orthogonal codes. 
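As a concrete illustration of the perfect-tensor condition (a standard example chosen for its small size, not one of the paper's CRSS-constructed codes): the four-qutrit state $\frac{1}{3}\sum_{i,j\in\mathbb F_3}|i,\,j,\,i+j,\,i+2j\rangle$ is absolutely maximally entangled, i.e.\ every reduced density matrix on two of the four legs is maximally mixed. A short numerical check:

```python
import itertools
import numpy as np

# Four-qutrit perfect tensor: amplitudes 1/3 supported on the plane
# {(i, j, i+j, i+2j) mod 3 : i, j in F_3}.
psi = np.zeros((3, 3, 3, 3), dtype=complex)
for i, j in itertools.product(range(3), repeat=2):
    psi[i, j, (i + j) % 3, (i + 2 * j) % 3] = 1 / 3

assert np.isclose(np.vdot(psi, psi), 1.0)  # normalized

# Perfectness: for every bipartition of the four legs into 2 + 2, the
# reduced density matrix is maximally mixed on 3^2 = 9 dimensions.
for pair in itertools.combinations(range(4), 2):
    rest = tuple(a for a in range(4) if a not in pair)
    m = np.transpose(psi, pair + rest).reshape(9, 9)
    rho = m @ m.conj().T  # rho_pair = Tr_rest |psi><psi|
    assert np.allclose(rho, np.eye(9) / 9)
print("all two-leg reduced density matrices are maximally mixed")
```

The perfectness hinges on the invertibility of the $2\times 2$ matrices over $\mathbb F_3$ relating any pair of legs to the remaining pair.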
These self-orthogonal codes are Lagrangian subspaces of symplectic vector spaces over finite fields; they can thus be thought of as analogous to semiclassical states, and the theory of the Heisenberg group over finite fields can be used to quantize them, replacing the equations defining the Lagrangian by operator equations (or eigenvalue problems) and producing the corresponding quantum codes. Our construction both generalizes the families of perfect tensors used in the construction of holographic codes in~\cite{Pastawski:2015qua}, and gives a physical interpretation of the perfect-tensor condition. In fact, we also prove more generally that the perfect tensor condition is, in a suitable sense, ``generic" within the CRSS construction of quantum codes, where the generic condition is described geometrically in terms of the position of the corresponding Lagrangian subspaces or semiclassical states. We work in the setting of the Gurevich--Hadani functorial quantization of symplectic vector spaces over finite fields~\cite{GH}. We now give a concise summary of the main results obtained in this paper, followed by a more detailed organization of the paper. \subsection{Summary of the main results} \begin{itemize} \item We show that for static holographic states built through a network of perfect tensors dual to the $p$-adic Bruhat--Tits tree (both when the boundary is the ``infinite line'' $\mathbb{Q}_p$ and when it is the projective line $\mathbb{P}^1(\mathbb{Q}_p)$), the bipartite entanglement entropy of a single connected interval as well as disconnected intervals obeys a Ryu--Takayanagi like formula. The perfect tensors may be viewed as built via the CRSS algorithm from algebro-geometric codes on projective lines over finite fields but this is not crucial to our setup. 
The entanglement is computed by constructing the holographic state, tracing out regions of the tensor network, and explicitly computing the reduced density matrices and the von Neumann entropy, which is expressed in terms of (regularized) lengths of minimal geodesics in the bulk Bruhat--Tits tree. We also prove subadditivity, strong subadditivity and monogamy of mutual information in this setup. \item We construct $p$-adic BTZ black holes as quotients of the Bruhat--Tits tree by a rank-one Schottky group with boundary a Mumford--Tate elliptic curve, and demonstrate that the construction of the tensor network adapts naturally to this case. Essentially the tensor network is obtained as a quotient of the genus 0 tensor network, paralleling the quotient construction of the geometry. Instead of a pure state at the boundary one has in this case a vertex behind the horizon that needs to be traced out, which results in a thermal density matrix with a Bekenstein--Hawking entropy measured in terms of the length of the horizon (the polygon in the quotient of the Bruhat--Tits tree). This density matrix can be seen to be dual to a bulk geometry with two asymptotic regions connected by the analog of a two-sided black hole, with the entropy given by the number of tensor bonds suspended between the two sides. We also prove that the entanglement entropy satisfies an analog of the Ryu--Takayanagi formula in this geometry in terms of the minimal length of homologous geodesics in the black hole background. \item We prove that perfect tensors can be constructed through a general procedure of geometric quantization from general-position Lagrangians in a symplectic vector space over a finite field. This shows that perfect tensors are ``generic" for the CRSS algorithm producing quantum codes from classical codes. The construction also provides natural Hamiltonians for which the vacuum state is the perfect tensor state. 
\end{itemize} \subsection{Organization of the paper} In section~\ref{REVIEW} we review some basic background material that we will be using throughout the paper. Section~\ref{ssec:padic} gives the minimal background on the geometry of the $p$-adic Bruhat--Tits trees, which serve as our bulk spaces in the rest of the paper. A discussion of quotients of Bruhat--Tits trees by $p$-adic Schottky groups is given later, in section~\ref{GENUSONE}. In section~\ref{ssec:tensors} we briefly review networks of perfect tensors and maximally entangled states. Section~\ref{CLASSQUANTCODES} recalls several facts about classical and quantum codes that we will be using in the rest of the paper, with particular focus on the CRSS algorithm that promotes classical to quantum error correcting codes, which we describe in terms of Heisenberg group representations. We review in particular the classical algebro-geometric codes associated to the projective line over a finite field (the Reed--Solomon codes), and we show that they can be used to construct, through the CRSS algorithm, quantum codes given by perfect tensors. An explicit example is illustrated in appendix \ref{THREEQ}. Section~\ref{CRSS} focuses on the construction and physical interpretation of perfect tensors, proving some new general results about the CRSS algorithm and identifying conditions under which it can be used to produce perfect tensors in terms of the geometry of semiclassical states. Section~\ref{ssec:HeisGrp} provides more details on irreducible representations of the Heisenberg group than what was discussed in section~\ref{CLASSQUANTCODES}, in particular discussing their construction from the regular representation as invariant subspaces of the commuting action of an abelian group, corresponding to a choice of Lagrangian in the symplectic vector space; this provides a choice of polarization data analogous to the choice of the Hilbert space of wave functions in quantum mechanics. 
Section~\ref{ssec:quant} reviews the Gurevich--Hadani functorial quantization of \cite{GH} from a category of symplectic vector spaces and isomorphisms over a finite field to complex vector spaces and isomorphisms, which assigns canonical models of Weil representations, in a way that is monoidal and compatible with symplectic reduction. Section~\ref{ssec:PT} presents our general construction of perfect tensors, from the data of a symplectic vector space over a finite field with a Darboux basis and a Lagrangian subspace in general position with respect to the splitting determined by the Darboux basis. This gives a simple physical interpretation of the CRSS algorithm as canonical quantization, in the stronger sense of~\cite{GH}, and translates properties of perfect tensors naturally into properties of the corresponding semiclassical states (Lagrangian subspaces). We show how our construction works in a simple explicit example in section~\ref{ssec:example}. In section~\ref{ssec:Ham} we show how to write Hamiltonians (in a form similar to random walk/discretized Laplacian operators) that have the perfect tensor state as ground state. In section~\ref{GENUSZERO} we present the main results on our construction of a quantum error-correcting tensor network, built using perfect tensors, associated to the $p$-adic Bruhat--Tits trees via a ``dual graph'' construction, and we establish the $p$-adic analog of the Ryu--Takayanagi formula. Section~\ref{DUALGRAPH} describes the construction of the ``dual graph'' tensor network associated to the $p$-adic Bruhat--Tits trees by identifying two axiomatic properties that characterize the network in relation to the tree. As discussed in more detail in section~\ref{TNPROPS}, different choices satisfying these properties are possible, which can be characterized in terms of different choices of embeddings. For the purpose of the entanglement entropy computation any such choice of a ``dual graph'' will achieve the desired result.
We also describe the perfect tensors associated to the nodes of the dual graph for a finite cutoff of the infinite Bruhat--Tits tree, the number of dangling (uncontracted) legs at the vertices, and the resulting boundary wavefunction. The rank of the perfect tensors is related to the cutoff on the tree and goes to infinity in the limit of the infinite tree. In section~\ref{GENUSZERORESULTS} we summarize the main technical results in the genus 0 background, namely that the dual graph tensor network satisfies a Ryu--Takayanagi-like formula, where instead of ``intervals'' we specify the boundary datum in terms of configurations of points (though we still use the terminology ``connected interval'' or ``disconnected interval''). We show that the von~Neumann entropy computation matches what is expected for CFT$_2$ and that it is naturally expressed in terms of the $p$-adic norm, which leads to the expected bulk interpretation (consistent with the minimal cut rule obeyed by perfect tensor networks~\cite{Pastawski:2015qua}) as the length of the minimal geodesic joining the entangling surfaces determined by the chosen configuration of boundary points. We also comment on the disconnected interval (four points) entropy case, the dependence of the mutual information on the cross-ratio, and entropy inequalities such as subadditivity, strong subadditivity and monogamy of mutual information, where the ultrametric property plays a direct, simplifying role; the details are found in sections \ref{GENUS0DOUBLE}--\ref{ssec:Inequalities}. Section~\ref{GENUSONE} deals with the $p$-adic BTZ black hole, described in terms of Mumford--Tate elliptic curves as boundary and with bulk space a quotient of the $p$-adic Bruhat--Tits tree by a rank one Schottky group. In section~\ref{ssec:Schottky} we review the $p$-adic geometry of Mumford curves of genus one and the associated bulk spaces, comparing it with the case of complex elliptic curves with Tate uniformization by the multiplicative group.
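Since both the entropy formulas and the simplifications above turn on the ultrametric $p$-adic norm, we record a minimal computational sketch of it (our own illustration, for readers less familiar with nonarchimedean absolute values): $|x|_p=p^{-v_p(x)}$, where $v_p(x)$ is the power of $p$ dividing $x$, and the norm satisfies the ultrametric inequality $|x+y|_p\le\max(|x|_p,|y|_p)$, with equality whenever $|x|_p\neq|y|_p$.

```python
from fractions import Fraction

def v_p(x: Fraction, p: int) -> int:
    """p-adic valuation of a nonzero rational: v_p(p^k * a/b) = k with p dividing neither a nor b."""
    num, den, k = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p; k += 1
    while den % p == 0:
        den //= p; k -= 1
    return k

def norm_p(x: Fraction, p: int) -> Fraction:
    """p-adic absolute value |x|_p = p^(-v_p(x)); |0|_p = 0."""
    if x == 0:
        return Fraction(0)
    return Fraction(1, p) ** v_p(x, p)

p = 3
assert norm_p(Fraction(9), p) == Fraction(1, 9)   # |9|_3 = 1/9
assert norm_p(Fraction(1, 3), p) == 3             # |1/3|_3 = 3
xs = [Fraction(a, b) for a in range(-6, 7) for b in range(1, 7) if a]
for x in xs:
    for y in xs:
        if x + y == 0:
            continue
        nx, ny, ns = norm_p(x, p), norm_p(y, p), norm_p(x + y, p)
        assert ns <= max(nx, ny)        # ultrametric inequality
        if nx != ny:
            assert ns == max(nx, ny)    # "all triangles are isosceles"
```

This ultrametric structure is what makes configurations of boundary points on $\mathbb{P}^1(\mathbb{Q}_p)$ tree-like and the entropy inequalities directly checkable.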
We also explain how to adapt our construction of the tensor network as dual graph of the Bruhat--Tits tree to a network similarly dual to the homologically non-trivial quotient of the Bruhat--Tits tree in the genus one case. In particular, the tensor network obtained in this way has a vertex beyond the black hole horizon that does not correspond to boundary degrees of freedom. In section~\ref{BTZResults} we come to our main results in the BTZ black hole case. The thermal entropy of the boundary density matrix obtained by tracing out this special vertex, computed on the tensor network, equals the black hole horizon perimeter; this can be seen as a Bekenstein--Hawking formula for the $p$-adic BTZ black hole. In section~\ref{RTBTZresults} we discuss the Ryu--Takayanagi formula in genus one backgrounds, with the boundary entanglement entropy of a single interval corresponding to the length of a minimal geodesic in the bulk black hole geometry. Unlike in the genus zero case, the entropies of a boundary region and of its complement are not necessarily the same, corresponding to the fact that the boundary state is no longer pure, and in the bulk geometry a geodesic may wrap around the loop of the quotient graph (the black hole horizon). Section~\ref{COMPUTATION} contains all the detailed explicit computations used in sections \ref{GENUSZERO} and~\ref{GENUSONE} for obtaining the von Neumann entropy via the density matrices determined by the tensor network. Section~\ref{SIMPLESTATES} illustrates the rules for the computation of states and reduced density matrices from perfect tensors, the graphical calculus used to keep track of contractions, and a convenient representation of the resulting density matrices in block-diagonal form. We describe how to obtain the reduced density matrices corresponding to tracing out regions determined by sets of vertices of the tensor network, and we compute the associated von Neumann entropy.
In section~\ref{SPLITSCYCLES} we discuss the computation of the inner product of the holographic state with itself. The computation method is described in terms of certain graphical contraction rules (``splits''), decomposing the network into disjoint simple curves; each resulting closed cycle then determines an overall multiplicative factor. Section~\ref{NORM} then contains the computation of the norm of the holographic state obtained from our tensor network dual to the Bruhat--Tits tree, with an assigned cutoff on the infinite tree. This depends on the different types of vertices (classified by their number of dangling legs), their multiplicities, and the application of the ``splits and cycles'' method. In section~\ref{GENUS0SINGLE} we then show how a Ryu--Takayanagi-like formula is obeyed exactly in the single connected interval (``two point'') case. The entanglement of a boundary region with its complement is obtained by computing the density matrix of the full holographic state produced by the tensor network and then taking a partial trace, using the computational techniques developed in the previous subsections. The result is then compared with the (regulated) geodesic length in the bulk Bruhat--Tits tree. The disconnected interval case (in particular the ``four point'' case) is discussed in section~\ref{GENUS0DOUBLE} in terms of overlapping or non-overlapping geodesics in the bulk, depending on the sign of the logarithm of the cross-ratio, and the corresponding properties of the mutual information. We show subadditivity (both the Araki--Lieb inequality and non-negativity of mutual information) and give the exact dependence of mutual information on the cross-ratio constructed from the boundary points. We show that a Ryu--Takayanagi-like formula is satisfied exactly for disconnected intervals, and in section~\ref{ssec:Inequalities} we proceed to prove strong subadditivity and monogamy of mutual information.
In fact, we show that mutual information is exactly extensive in this tensor network. Section~\ref{GENUS1BTZ} contains the computation of the black hole entropy as well as the Ryu--Takayanagi formula for the minimal geodesics in the black hole background. Using tools from the previous sections, the norm of the black hole boundary state is computed in terms of types of vertices and multiplicities, and the density matrix and corresponding von Neumann entropy are determined. This entropy is seen to be proportional to the length of the horizon or the length of the minimal geodesic homologous to the boundary interval, where the homology condition, an important feature of the RT formula, is obeyed automatically by the tensor network. Section~\ref{GEOMETRIC} further discusses some of the geometric aspects of our tensor network construction. In section~\ref{TNPROPS} we discuss in more detail the symmetry properties of the tensor networks with respect to the global symmetries of the Bruhat--Tits tree, showing how the properties needed for the construction of a suitable ``dual graph'' reduce the symmetries, and obtaining in this way a characterization of all the possible choices of dual graph. In section~\ref{ssec:Drinfeld} we show that the construction of the dual graph and the tensor network can be done entirely within the $p$-adic world, by embedding it in the Drinfeld $p$-adic plane, using a choice of lifts of the projection from the Drinfeld plane to the Bruhat--Tits tree. This is illustrated in section~\ref{ssec:toymodel} in a toy model given by the tubular neighborhood of an infinite tree, and adapted in sections \ref{ssec:pplane} and~\ref{ssec:dualpplane} to the $p$-adic plane. Similarly, in section~\ref{ssec:geometryTATE}, the tensor network for the genus one $p$-adic BTZ black hole is embedded in the quotient of the Drinfeld $p$-adic plane by the uniformizing $p$-adic Schottky group.
In the same section we also discuss the construction of measures on the Mumford--Tate curve induced from the Patterson--Sullivan measure on the $p$-adic projective line. In section~\ref{AFALGEBRA} we show how to interpret the density matrices, in the limit of the infinite Bruhat--Tits tree, as states on an approximately finite-dimensional (AF) $C^*$-algebra. Finally, a list of possible further directions of investigation and open questions is given in section~\ref{DISCUSSION}. Since several sections of the paper are quite independent, it may be useful for readers to know the shortest path required to reach a given result. In the leitfaden below, arrows indicate logical dependence between sections. For example, in order to read section~6, one should read all of its predecessors in the diagram: sections 1, 2, and~4. \[ \begin{tikzcd}[row sep = 1 em, column sep = 0.4 em] & \parbox[c]{2cm}{\centering \S\ref{INTRODUCTION}\\ Introduction} \ar[d] & & & \\ & \parbox[c]{2cm}{\centering \S\ref{REVIEW}\\ Background}\ar[dr] \ar[dl] & & & \\ \parbox[c]{3cm}{\centering \S\ref{CRSS}\\ Perfect~tensors} & & \parbox[c]{2cm}{\centering \S\ref{GENUSZERO}\\ Empty~AdS} \ar[dr] \ar[dl] & & \\ & \parbox[c]{2.5cm}{\centering \S\ref{COMPUTATION}\\ Details} & & \parbox[c]{2cm}{\centering \S\ref{GENUSONE} \\ Black~hole}\ar[dl] \ar[dr] & \\ & & \parbox[c]{2cm}{\centering \S\ref{DISCUSSION}\\ Discussion} & & \parbox[c]{4cm}{\centering \S\ref{GEOMETRIC}\\ Geometric~properties} \end{tikzcd} \] However, readers wishing to view the entanglement entropy results in the $p$-adic setting right away may consider skipping ahead to sections \ref{GENUSZERO} and \ref{GENUSONE}, and referring back to section \ref{REVIEW} when needed. \section{Background} \label{REVIEW} \subsection{The Bruhat--Tits tree as a $p$-adic bulk space} \label{ssec:padic} Let us briefly recall the setup of $p$-adic AdS/CFT~\cite{Gubser:2016guj,Heydeman:2016ldy}.
In the simplest formulation, the bulk geometry is described by an infinite $(p+1)$-regular graph (without cycles), called the Bruhat--Tits tree, and its asymptotic boundary is given by the projective line over the $p$-adic numbers, $\mathbb{P}^1(\mathbb{Q}_p)$. (For an introduction to the theory of $p$-adic numbers, see~\cite{Koblitz,Gouvea}; a shorter discussion in the physics literature can be found in e.g.~\cite{Brekke:1993gf}.) The Bruhat--Tits tree ${\cal T}_p$ is a discrete, maximally symmetric space of constant negative curvature, which plays the role of (Euclidean) AdS space. One can study perturbative bulk dynamics on the tree by considering lattice actions defined on its nodes (or bonds)~\cite{Gubser:2016guj,Heydeman:2016ldy,Gubser:2016htz}. A central result of these works is that semiclassical dynamics of these lattice models in the bulk ${\cal T}_p$ compute correlation functions in a dual conformal field theory defined on $\mathbb{P}^1(\mathbb{Q}_p)$. In the following, we will view the Bruhat--Tits tree as a constant-time spatial slice of a higher-dimensional $p$-adic analog of Lorentzian AdS space. 
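Since the $p$-adic norm reappears throughout the entanglement entropy formulas below, it may help to have a concrete handle on it. The following minimal Python sketch (our own illustration; the function names are not from any library) computes the valuation $v_p$ and the norm $|x|_p = p^{-v_p(x)}$ of a nonzero rational number, and checks the ultrametric inequality $|x+y|_p \leq \max(|x|_p, |y|_p)$ on an example.

```python
from fractions import Fraction

def v_p(x: Fraction, p: int) -> int:
    """p-adic valuation of a nonzero rational: net power of p dividing x."""
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num, v = num // p, v + 1
    while den % p == 0:
        den, v = den // p, v - 1
    return v

def norm_p(x: Fraction, p: int) -> Fraction:
    """p-adic norm |x|_p = p^{-v_p(x)}: highly divisible numbers are p-adically small."""
    v = v_p(x, p)
    return Fraction(1, p**v) if v >= 0 else Fraction(p**(-v), 1)

print(norm_p(Fraction(12), 2))    # 12 = 2^2 * 3, so |12|_2 = 1/4
print(norm_p(Fraction(1, 9), 3))  # |1/9|_3 = 9

# Ultrametric inequality: |x + y|_p <= max(|x|_p, |y|_p).
x, y = Fraction(12), Fraction(20)
assert norm_p(x + y, 2) <= max(norm_p(x, 2), norm_p(y, 2))
```

The ultrametric inequality is the algebraic shadow of the tree structure: two boundary points are $p$-adically close exactly when the geodesics reaching them travel together deep into the tree.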
To make the analogy with the real setup more concrete, we view the Bruhat--Tits tree as the $p$-adic analog of the Poincar\'{e} disk (or equivalently the hyperbolic plane $\mathbb{H}^2$), arising as a constant-time slice of an appropriate higher-dimensional building describing ($p$-adic) Lorentzian AdS$_3$.\footnote{In this paper we remain agnostic about the appropriate higher dimensional origins of the Bruhat--Tits tree (such as hyperbolic buildings), and will only be interested in studying entanglement entropy in the static (time symmetric) case.} This analogy is motivated by the fact that the real hyperbolic plane $\mathbb{H}^2$ is a symmetric, homogeneous space of constant negative curvature, and arises algebraically as the quotient space $\mathbb{H}^2= \SL(2,\mathbb{R})/SO(2,\mathbb{R})$, where $\SL(2,\mathbb{R})$ is the isometry group of $\mathbb{H}^2$ and $SO(2,\mathbb{R})$ is its maximal compact subgroup. Similarly, the Bruhat--Tits tree is a symmetric, homogeneous space which can be viewed as the quotient ${\cal T}_p = \PGL(2,\mathbb{Q}_p)/\PGL(2,\mathbb{Z}_p)$. Here, the group $\PGL(2,\mathbb{Q}_p)$ acts on ${\cal T}_p$ by isometries, and $\PGL(2,\mathbb{Z}_p)$ is its maximal compact subgroup. However, while the hyperbolic plane is a two-dimensional manifold, the Bruhat--Tits tree is described by a discrete (but infinite) collection of points (as seen later in figure~\ref{fig:clopen}). In section~\ref{GENUSZERO}, we use the ``dual'' of this discrete tree to define a tensor network, and the entanglement properties of the boundary state are described geometrically in terms of the Bruhat--Tits tree. We now describe the action of~$G = \PGL(2,\mathbb{Q}_p)$ on the Bruhat--Tits tree in more detail. Let $H=\PGL(2,\mathbb{Z}_p) < G$ denote a maximal compact subgroup. Choose representatives $g_i \in G$ of the left cosets of~$H$. In other words, $G = \bigcup_{i=0}^\infty g_i H$, where the cosets $g_i H, g_j H$ are pairwise disjoint for $i\neq j$. 
The cosets $g_i H$ are in bijective correspondence with the equivalence classes of $\mathbb{Z}_p$-lattices in $\mathbb{Q}_p \times \mathbb{Q}_p$, as well as with the nodes on the Bruhat--Tits tree (see e.g.\ \cite{Brekke:1993gf}). The group $G$ has a natural action on equivalence classes of lattices: the lattice $(r,s)$ spanned by $r$ and~$s$ transforms by matrix multiplication, \begin{equation} g \cdot (r,s) \equiv g \cdot \left\{ \begin{pmatrix} a r_1 + b s_1 \\ a r_2 + bs_2 \end{pmatrix}: a,b \in \mathbb{Z}_p \right\} = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \left\{ \begin{pmatrix} a r_1 + b s_1 \\ a r_2 + bs_2 \end{pmatrix}: a,b \in \mathbb{Z}_p \right\}, \end{equation} for $r = (r_1,r_2)^T, s=(s_1,s_2)^T \in \mathbb{Q}_p\times \mathbb{Q}_p$, $g \in G$. Equivalently, $G$ acts on the space of cosets $G/H$, by the rule $g_i H \mapsto g g_i H$. Each equivalence class (or equivalently, coset) is stabilized by a conjugate of $H$; the coset $g_i H$ is stabilized by $g_i H g_i^{-1} < G$. Either of these descriptions gives the action of~$G$ on the nodes of~${\cal T}_p$. Two nodes on the tree are defined to be adjacent when the relation \begin{equation} p\Lambda \subset \Lambda^\prime \subset \Lambda \end{equation} holds between the corresponding $\mathbb{Z}_p$-lattices $\Lambda$ and~$\Lambda'$. This adjacency relation is symmetric at the level of equivalence classes: replacing $\Lambda$ by the equivalent lattice $p\Lambda$, the previous inclusion becomes $p\Lambda^\prime \subset p\Lambda \subset \Lambda^\prime$. The action of the group $G$ on the nodes of the Bruhat--Tits tree preserves these incidence relations. In other words, the group $G$ acts by isometries of the Bruhat--Tits tree, preserving the graph distance between any pair of nodes. Intuitively, $G$ acts by translations and rotations on the (infinitely many) nodes of the tree, in analogy with the ordinary isometries of AdS; additionally, any given vertex has a stabilizer subgroup, always a conjugate of $H = \PGL(2,\mathbb{Z}_p)$, which rotates the entire tree around that point.
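The $(p+1)$-regularity of the tree can be seen directly from this description: the neighbors $\Lambda'$ of a fixed class $\Lambda$, i.e., the lattices with $p\Lambda \subset \Lambda' \subset \Lambda$, correspond to the one-dimensional subspaces (lines) of $\Lambda/p\Lambda \cong \mathbb{F}_p^2$, and there are exactly $p+1$ of these. A short Python sketch (our own illustration, not part of the construction) enumerates them:

```python
def lines_in_Fp2(p: int):
    """Representatives of the 1-dimensional subspaces of F_p^2.
    Each line indexes an index-p sublattice p*L < L' < L, i.e. a neighbor of the vertex L."""
    return [(1, b) for b in range(p)] + [(0, 1)]

# Each vertex of the Bruhat--Tits tree T_p has exactly p + 1 neighbors.
for p in (2, 3, 5, 7):
    assert len(lines_in_Fp2(p)) == p + 1
print([len(lines_in_Fp2(p)) for p in (2, 3, 5)])  # [3, 4, 6]
```

The $p$ lines spanned by $(1,b)$ together with the line spanned by $(0,1)$ exhaust the one-dimensional subspaces, since any nonzero vector is proportional to exactly one of these representatives.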
As is well known in AdS/CFT, the isometry group of the bulk acts as conformal transformations on the boundary. In $p$-adic AdS/CFT, we have $\partial {\cal T}_p = \mathbb{P}^1(\mathbb{Q}_p)$, with $G$ acting as fractional linear transformations: \eqn{}{ \mathbb{P}^1(\mathbb{Q}_p) \ni z \mapsto g \cdot z = {A z + B \over C z + D} \qquad g = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \in G=\PGL(2,\mathbb{Q}_p)\,. } These are interpreted as the global ($p$-adic) conformal transformations acting on the dual theory defined on the boundary $\partial {\cal T}_p$. In analogy with static AdS$_3$, it is possible to obtain black-hole-like bulk geometries algebraically. One may quotient the Bruhat--Tits tree by a discrete abelian subgroup $\Gamma < \PGL(2,\mathbb{Q}_p)$ to obtain an analog of the BTZ black hole. The bulk and boundary properties of this construction are explored in section~\ref{GENUSONE}, where we will describe why this is a good model of a $p$-adic black hole geometry and compute the entropy via the tensor network proposed in this work. \subsection{Tensor networks} \label{ssec:tensors} In the recent literature, there has been much interest in so-called \emph{tensor network} models, which describe a state or family of states in a Hilbert space that is the tensor product of many qubits or local Hilbert spaces of fixed dimension. Such states are built by considering concatenations of many tensors, each operating on a finite number of qubits in the manner of a quantum circuit. Early proposals~\cite{Vidal:2007hda,EvenblyVidal12} showed that such setups could be used to construct states whose entanglement structure mimics that of the vacuum state of a conformal field theory~\cite{Evenbly:2007hxg,EvenblyVidal08,EvenblyVidal10}.
Subsequently, it was proposed~\cite{Swingle:2009bg,Swingle:2012wq} that the geometry of the tensor network could be thought of as a discrete analogue of an AdS bulk space, and various models have been developed to try to exhibit this correspondence more precisely~\cite{Pastawski:2015qua,Hayden:2016cfa,Almheiri:2014lwa,Harlow:2016vwg,Harlow:2018fse,Osborne:2017woa}. In particular, in the proposal of~\cite{Pastawski:2015qua}, tensor networks were associated to uniform tilings of hyperbolic two-dimensional space by $k$-gons, by placing a tensor with $m$ indices on each polygon and contracting indices across the shared edges of adjacent polygons. The residual $m-k$ indices represent ``logical'' inputs in the bulk. (Of course, $m\geq k$; equality is not necessary, but $2m-k$ should not be too large. Furthermore, $m$ is taken to be even. See~\cite{Pastawski:2015qua} for a discussion of the precise conditions.) Using this construction, analogues of the Ryu--Takayanagi formula for the entanglement entropy were proved. This formula follows from a key property of the $m$-index tensors that are used on each plaquette: \begin{definition} \label{def:PT} Let $T\in V^{\otimes m}$ be an $m$-index tensor, where each index labels an identical tensor factor or ``qubit'' $V\cong \mathbb{C}^q$. $V$ is equipped with a Hilbert space structure, so that we can raise and lower indices using the metric. Let $I\subseteq M = \{1,\ldots, m\}$ be any subset of the index set, and let $J$ be its complement; without loss of generality, we can take $\# I \leq \# J$. $T$ is said to be \emph{perfect} if, for every such bipartition of the indices, \deq{ T_I^J: V^I \rightarrow V^J } is an isometric map of Hilbert spaces. Here we are using the notation that $T_I^J$ means $T$ with the indices in the set $I$ lowered, and those in the set $J$ raised. (In particular, $T_I^J$ is injective, so that one can think of the condition as asking that $T$ have the largest possible rank for any such tensor decomposition.)
\end{definition} The parity of~$m$ is not important to the above definition, but the applications in~\cite{Pastawski:2015qua} make use of perfect tensors for which $m$ is even. It is then shown that requiring $T_I^J$ to define a unitary map for every bipartition with $\# I = \# J$ is sufficient to imply perfection in the sense of definition~\ref{def:PT}. The connection to maximally entangled states should now be apparent: Recall that a state is said to be maximally entangled between two subsystems if the reduced density matrix, obtained by tracing out one subsystem, is ``as mixed as possible.'' At the level of density matrices, this means ``proportional to the identity matrix'' (recall that pure states correspond one-to-one to density matrices of rank one). So, for a state defined by a perfect tensor, we can write \deq{ \rho = |\alpha|^2 \, T_M T^M, } where $\alpha$ is a normalization constant required so that $\rho$ has unit trace (equivalently, so that the state $T_M \in V^{\otimes m}$ is normalized). Note that no Einstein summation convention applies; $M$ denotes a set of indices, rather than an index. To reduce the density matrix, though, we \emph{do} contract along the indices in the set $J$: \deq{ \qty(\rho_\text{red})_I^I = |\alpha|^2 \, T_I^J \circ T_J^I. } In the case that $m$ is even and $\# I = \# J$, this shows that $\rho_\text{red}$ is the composition of two unitary maps; as a consequence, it is full-rank. Indeed, by unitarity, the two maps $T_I^J$ and~$T_J^I$ are inverses of one another, so that the reduced density matrix is proportional to the identity. From the condition of unit trace, it follows that the normalization constant must be taken to be \deq{ |\alpha|^2 = q^{-m/2}, } since each of the $m/2$ traced tensor factors contributes a factor of~$q$ to the trace. For more details on computations with perfect tensors, see section~\ref{COMPUTATION}.
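As a concrete check of definition~\ref{def:PT} and of this normalization, the following Python sketch (our own illustration, not part of the construction in the text) builds the four-index qutrit tensor whose nonzero entries, all equal to~$1$, sit on the codewords of the classical ternary code $\{(a,\,b,\,a+b,\,a+2b) \bmod 3\}$, and verifies that every bipartition gives a map proportional to an isometry, and that the reduced density matrix on two legs is maximally mixed.

```python
from itertools import combinations

p, m = 3, 4  # qutrits, four legs
# Support of the tensor: the classical [4,2,3]_3 code {(a, b, a+b, a+2b) mod 3}.
codewords = {(a, b, (a + b) % p, (a + 2 * b) % p) for a in range(p) for b in range(p)}

def proportional_isometry(in_pos):
    """T_I^J is proportional to an isometry iff distinct assignments of the input
    legs have disjoint, equal-sized sets of extensions to codewords (then the
    composition of T_I^J with its adjoint is a multiple of the identity)."""
    out_pos = [k for k in range(m) if k not in in_pos]
    ext = {}
    for c in codewords:
        ext.setdefault(tuple(c[k] for k in in_pos), set()).add(tuple(c[k] for k in out_pos))
    sets = list(ext.values())
    return (len(ext) == p ** len(in_pos)
            and len({len(s) for s in sets}) == 1
            and all(s.isdisjoint(t) for s, t in combinations(sets, 2)))

for r in (1, 2):  # every bipartition with #I <= #J
    assert all(proportional_isometry(I) for I in combinations(range(m), r))

# Unnormalized reduced density matrix on legs {0,1}, tracing out legs {2,3}:
rho = {}
for c in codewords:
    for c2 in codewords:
        if c[2:] == c2[2:]:
            rho[(c[:2], c2[:2])] = rho.get((c[:2], c2[:2]), 0) + 1
# rho is the 9x9 identity; with |alpha|^2 = q^{-m/2} = 1/9 it is maximally mixed.
assert rho == {((a, b), (a, b)): 1 for a in range(p) for b in range(p)}
print("perfect tensor verified")
```

The check works because the code is maximum distance separable: any two coordinates of a codeword determine the other two, which is exactly the unitarity of every two-versus-two bipartition.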
\subsection{Classical and quantum codes} \label{CLASSQUANTCODES} There are many close relations between perfect tensors, maximally entangled states, and quantum error-correcting codes. We have outlined some of the connections between the first two ideas above; in this subsection, we will discuss the third, which will give us a way to produce examples of perfect tensors. The key construction will be the CRSS algorithm~\cite{CRSS}, which produces quantum error-correcting codes from a particular class of classical codes. In turn, the CRSS algorithm makes use of a particular complete set of matrices acting on qubits, which come from the theory of Heisenberg groups; these groups generalize the familiar theory of the canonical commutation relations to variables which are discrete ($\FF_q$-valued) rather than continuous. As such, the CRSS procedure can be seen as closely analogous to familiar canonical quantization problems. We review the CRSS algorithm and the necessary theory of Heisenberg groups here; in section~\ref{CRSS}, we will develop this analogy further, and show that it provides a natural way to write down Hamiltonians whose vacuum states arise from perfect tensors. \subsubsection{Heisenberg groups} The simplest example of a finite Heisenberg group can be presented as follows: \deq[eq:simplestHeis]{ G = \langle X,Z,c : ZX = c XZ, ~ X^p = Z^p = 1, ~ c \text{ central}\rangle. } (It follows from these relations that $c^p = 1$ as well.) The center of the group is a copy of $\mathbb{F}_p$, generated by~$c$, and the quotient by the center is also abelian, so that the group fits into a short exact sequence \deq[eq:cseq]{ 0 \rightarrow \mathbb{F}_p \rightarrow G \rightarrow \mathbb{F}_p^2 \rightarrow 0 } exhibiting it as a central extension of one abelian group by another. Despite this, $G$ itself is nonabelian.
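The relations~\eqref{eq:simplestHeis} can be realized concretely by ``shift'' and ``clock'' matrices on $\mathbb{C}^p$ (these are the operators called $T$ and $R$ further below). The following Python sketch (our own check, taking the central element to act by $\xi = e^{2\pi i/p}$) verifies the relations for $p = 5$:

```python
import cmath

p = 5
xi = cmath.exp(2j * cmath.pi / p)  # value of the central character on c

# Shift X|a> = |a+1 mod p> and clock Z|a> = xi^a |a>.
X = [[1.0 if (i - j) % p == 1 else 0.0 for j in range(p)] for i in range(p)]
Z = [[xi ** i if i == j else 0.0 for j in range(p)] for i in range(p)]
I = [[1.0 if i == j else 0.0 for j in range(p)] for i in range(p)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(p)] for i in range(p)]

def close(A, B, tol=1e-9):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(p) for j in range(p))

# ZX = c XZ, with c acting as multiplication by xi:
assert close(matmul(Z, X), [[xi * v for v in row] for row in matmul(X, Z)])

# X^p = Z^p = 1 (and hence c^p = xi^p = 1 as well):
Xp = Zp = I
for _ in range(p):
    Xp, Zp = matmul(Xp, X), matmul(Zp, Z)
assert close(Xp, I) and close(Zp, I) and abs(xi ** p - 1) < 1e-9
print("Heisenberg relations verified for p =", p)
```

In this realization the central extension is visible as the phase $\xi$ obstructing the shift and clock matrices from commuting.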
Representations of this group are also easy to understand; each representation can be restricted to $Z(G) \cong \mathbb{F}_p$, and defines a character $\chi$ of that group, called the central character. In the case at hand, a central character is just a choice of $p$-th root of unity, corresponding to~$\chi(c)$. Given a choice of central character $\chi$, a corresponding representation can be constructed on the vector space $\mathscr{H}=\mathbb{C}^p$. In a particular basis, the generators of~$G$ act according to the rule \deq{ X\ket{a} = \ket{a+1}, \quad Z\ket{a} = \chi(c)^a \ket{a}. } For a nontrivial central character, this representation is irreducible. An analogue of the Stone--von~Neumann theorem shows that this is in fact the unique irreducible representation with central character $\chi$. Furthermore, the representation matrices form an additive basis (over~$\mathbb{C}$) for the matrix algebra $M_{p\times p}(\mathbb{C})$. On the other hand, when the central character is trivial, any representation factors through the quotient map to~$\mathbb{F}_p^2$; since that (additive) group is abelian, there are $p^2$ different one-dimensional representations. As such, we have understood the complete representation theory of $G$. A quick check reveals that we have found the whole character table: there are $(p-1)$ nontrivial central characters, each with a representation of dimension~$p$, together with $p^2$ abelian representations. This makes a total of $p^2 + p - 1$ irreps, which corresponds to the number of conjugacy classes: these are the powers of~$c$, together with the classes of the elements $X^i Z^j$ with $(i,j) \neq (0,0)$, each such class consisting of $X^i Z^j$ times all powers of~$c$. One can also double-check that \deq{ \sum_\rho (\dim \rho)^2 = (p-1)p^2 + p^2 = p^3 = \# G. } This simple example already contains most of the structural features, and motivates the following definition. In what follows, $k$ will be an arbitrary field, though the cases that will be relevant will be when $k$ is locally compact (i.e., $k$ is a local field or a finite field).
In fact, we will only really consider the cases where $k=\mathbb{R}$ or~$\mathbb{F}_q$, although some amount of the discussion even continues to make sense over an arbitrary commutative ring, for instance~$\mathbb{Z}$. \begin{definition} Let $V$ be a symplectic vector space over~$k$, with symplectic form~$\omega$. The \emph{Heisenberg group} associated to this data, denoted $\Heis(V)$, is the central extension of the additive abelian group $V$ by the cocycle \deq{ \omega: V^2 \rightarrow k. } \end{definition} Note that, since $\Heis(V)$ is a central extension, there is a natural short exact sequence of groups \deq[eq:seq]{ 0 \rightarrow k \rightarrow \Heis(V) \rightarrow V \rightarrow 0, } generalizing the sequence~\eqref{eq:cseq}. Furthermore, the image of~$k$ is the center of the group. Our previous example arises in the case $k= \mathbb{F}_p$, $V = \mathbb{F}_p^2$, and $\omega$ the standard Darboux symplectic form on a two-dimensional vector space. In speaking of $\omega$ as a group cocycle, we are thinking of the inhomogeneous group cochains of~$(V,+)$. The cocycle condition is obeyed because \begin{align} d\omega(u,v,w) &\doteq \omega(v,w) - \omega(u+v,w) + \omega(u,v+w) - \omega(u,v) \nonumber \\ &= \omega(v,w) - \omega(u,w) - \omega(v,w) + \omega(u,v) + \omega(u,w) - \omega(u,v) \nonumber \\ &= 0, \end{align} where the middle line uses the bilinearity of~$\omega$. An analogue of the Stone--von~Neumann theorem also holds in this more general case, so that $\Heis(V)$ admits a unique irreducible representation for any choice of central character. We will discuss this theorem further below. Furthermore, just as in our example above, it is true for more general Heisenberg groups that the representation matrices form an additive basis for~$M_{q^n\times q^n}(\mathbb{C})$, where $n = \dim(V)/2$ and $q$ is the cardinality of~$k$. An explicit construction of that unique irreducible representation can be given as follows. Let ${\mathscr{H}} = \Fun(\mathbb{F}_q,\mathbb{C}) \cong \mathbb{C}^q $ be the Hilbert space of a single $q$-ary qubit. An orthonormal basis of ${\mathscr{H}}$ is labeled by states $|a\rangle$ where $a \in \mathbb{F}_q$.
Quantum error-correcting spaces are subspaces of ${\mathscr{H}}^{\otimes n}$ which can correct errors acting on a certain number of qubits. All errors can be constructed from the error operators $E = E_1 \otimes \cdots \otimes E_n$ which are the representation matrices of the Heisenberg group. Each of the $E_i$ can be thought of as a particular combination of bit-flip and phase-flip operators, which we now describe. Define the $p \times p$ matrices \eqn{TRDefs}{ T = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ & \vdots & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\1 & 0 & 0 & \cdots & 0\end{pmatrix} \qquad R = \begin{pmatrix} 1 & & & & \\ & \xi & & & \\ & & \xi^2 & & \\ & & & \ddots & \\ & & & & \xi^{p-1} \end{pmatrix}\,, } where $\xi = e^{2\pi i/ p}$ is a (nontrivial) $p$-th root of unity. If $q=p$, $\xi$ is precisely a choice of central character, and it is easy to check that these matrices define the representation of the simplest Heisenberg group~\eqref{eq:simplestHeis} with that central character. However, in the case $q=p^r$, we must do slightly more work. Let $\{\gamma_j: 1 \leq j \leq r\}$ be a basis of $\mathbb{F}_q$ as an $\mathbb{F}_p$-vector space; see e.g.~\eno{abComponents}. Then we can write $a = \sum_{j=1}^r a_j \gamma_j$ for any $a \in \mathbb{F}_q$, where $a_j \in \mathbb{F}_p$. This also defines a tensor product basis in~$\mathscr{H}=\Fun(\mathbb{F}_q,\mathbb{C})$, such that $|a\rangle = |a_1 \rangle \otimes \cdots \otimes |a_r \rangle$ when $a$ is decomposed into components as above. Then the error operators act on individual copies of $\mathbb{C}^p$ as follows: \eqn{TRCp}{ T^{b_j} |a_j\rangle = |a_j + b_j \rangle \qquad R^{b_j} |a_j \rangle = \xi^{\Tr ( a_j b_j )} |a_j \rangle \qquad a_j,b_j \in \mathbb{F}_p\,. } Here, the trace function $\Tr_{q:p}: \mathbb{F}_q \to \mathbb{F}_p$ (with $q=p^r$) is defined as \eqn{TrDef}{ \Tr_{q:p}(a) = \sum_{i=0}^{r-1} a^{p^i} \qquad a \in \mathbb{F}_q \,.
} It is easy to see that this is precisely the trace of the endomorphism of~$\mathbb{F}_q$ that is multiplication by the element $a$, regarded as an $r\times r$ matrix over~$\mathbb{F}_p$. It is now simple to define the bit- and phase-flip operators acting on single $q$-ary qubits; they are the $q \times q$ matrices \eqn{TbRbDefs}{ T_b = T^{b_1} \otimes \cdots \otimes T^{b_r} \qquad R_b = R^{b_1} \otimes \cdots \otimes R^{b_r}\,. } These operators act on a single $q$-ary qubit via \eqn{TRsingle}{ T_{b} |a \rangle = |a + b \rangle \qquad R_{b} |a \rangle = \xi^{\Tr (\langle a, b \rangle)} |a \rangle\,, } where $a = \sum_{j=1}^r a_j \gamma_j\in \mathbb{F}_q$, and \eqn{aSingleState}{ |a \rangle = \otimes_{j=1}^r |a_j\rangle\,. } As emphasized above, the operators $T_a R_b$ form an orthonormal basis for $M_{q \times q}(\mathbb{C})$ under the inner product $\langle A,B \rangle = q^{-1} \Tr(A^\dagger B)$, and thus span all possible errors on $\mathscr{H}$. We can further construct error operators which act on~$\mathscr{H}^{\otimes n}$ as follows. Given $a = (a_1, \ldots, a_n), b=(b_1, \ldots , b_n) \in \mathbb{F}_q^n$, define \eqn{EabDef}{ E_{a,b} = T_a R_b = (T_{a_1} \otimes \cdots \otimes T_{a_n}) (R_{b_1} \otimes \cdots \otimes R_{b_n})\,. } It is straightforward to check that $E_{a,b}^p = 1$, and that they obey the following commutation and composition laws: \eqn{EabComLaws}{ E_{a,b}E_{a^\prime,b^\prime} = \xi^{\langle a,b^\prime \rangle - \langle a^\prime,b \rangle} E_{a^\prime,b^\prime} E_{a,b} \qquad E_{a,b}E_{a^\prime,b^\prime} = \xi^{-\langle b,a^\prime \rangle} E_{a+a^\prime, b+b^\prime}\,.
} Here, we have made use of an $\mathbb{F}_p$-valued pairing, \eqn{CompInProd}{ \langle a, b \rangle = \sum_{i=1}^n \langle a_i, b_i \rangle = \sum_{i=1}^n \sum_{j=1}^r a_{i,j} b_{i,j} \qquad a,b \in \mathbb{F}_q^n, } where the elements $a_i, b_i \in \mathbb{F}_q$ are expanded in terms of an $\mathbb{F}_p$-basis as \eqn{abComponents}{ a_i = \sum_{j=1}^r \gamma_j a_{i,j} \qquad b_i = \sum_{j=1}^r \gamma_j b_{i,j}\qquad a_{i,j}, b_{i,j} \in \mathbb{F}_p\,. } Equation~\eqref{EabDef} therefore produces the explicit representation matrices of the Heisenberg group of~$\mathbb{F}_q^{2n}$, corresponding to a particular (nontrivial) choice of central character. \subsubsection{Classical algebro-geometric codes} \label{ALGEBCODES} In this section, we give a few general remarks about classical codes over finite fields. The next section will review the CRSS algorithm, which associates a quantum error-correcting code to each such self-orthogonal classical code. Putting the two together, one can demonstrate that perfect tensors with arbitrarily many indices can be constructed, which we will require for the models of section~\ref{DUALGRAPH}; an additional ingredient is an appropriate family of classical codes, an example of which is given in appendix~\ref{THREEQ}. Section~\ref{CRSS} gives a new perspective on the CRSS algorithm, showing that perfect tensors arise naturally in the context of quantization of symplectic vector spaces over finite fields. A (classical) \emph{linear code} is nothing more than a linear subspace of a vector space over a finite field. In a basis, it is defined by an injective map \deq{ i : \mathbb{F}_q^k \hookrightarrow \mathbb{F}_q^n, } which can be thought of as encoding $k$ symbols of information (each drawn from an alphabet of size~$q$) into $n$ symbols. The \emph{Hamming weight} is defined to be the function \deq{ \wt: \mathbb{F}_q^n \rightarrow \mathbb{N}, \quad c \mapsto \#\{ i : c_i \neq 0 \}. } Note that this is a basis-dependent definition!
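As a small worked example of these definitions (our own illustration; this particular code is of the same $[4,2,3]_3$ type as the Reed--Solomon codes of appendix~\ref{THREEQ}), the ternary code spanned by $(1,0,1,1)$ and $(0,1,1,2)$ has smallest nonzero Hamming weight $3$, and every pair of its codewords is orthogonal under the standard dot product over $\mathbb{F}_3$, a property that will be needed shortly. A Python sketch verifying both claims:

```python
q, n, k = 3, 4, 2
# Codewords (a, b, a+b, a+2b) mod 3: the span of (1,0,1,1) and (0,1,1,2) over F_3.
C = [(a, b, (a + b) % q, (a + 2 * b) % q) for a in range(q) for b in range(q)]

def wt(c):
    """Hamming weight: the number of nonzero entries (a basis-dependent notion)."""
    return sum(1 for x in c if x != 0)

# Smallest weight of a nonzero codeword: d = 3 = n - k + 1, so this is a [4,2,3]_3 code.
assert min(wt(c) for c in C if any(c)) == 3

# Every pair of codewords is orthogonal under the Euclidean dot product over F_3,
# so the code is contained in its own orthogonal complement.
dot = lambda u, v: sum(x * y for x, y in zip(u, v)) % q
assert all(dot(u, v) == 0 for u in C for v in C)
print("[4,2,3]_3 code: minimum weight 3, orthogonal to itself")
```

Since the code has dimension $k = n/2$, containment in its orthogonal complement makes it as large as an isotropic subspace can be.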
The \emph{minimum weight} of a code is simply the minimum Hamming weight of all nonzero elements of the code subspace; one often uses the notation ``$[n,k,d]_q$ code'' to speak of a code with the given parameters. We may sometimes omit the weight parameter $d$ from this list; no confusion should arise. Equipping $\mathbb{F}_q^n$ with an inner product or more generally a bilinear form, one can classify codes according to the properties of the code subspace. In particular, a code is said to be \emph{self-orthogonal} when the code subspace is isotropic with respect to the bilinear form, i.e., contained in its orthogonal complement: $\im(i) \subseteq \im(i)^\perp$.\footnote{\label{fn:DualCode} The superscript $^\perp$ denotes the dual (orthogonal) code. The dual code is defined as follows: If $C$ is a classical code over $\mathbb{F}_q$ of length $n$, then $C^\perp = \{ v \in \mathbb{F}_q^n : a * v = 0 \ \forall a \in C\}$. } The CRSS algorithm produces a quantum error-correcting code from classical self-orthogonal codes associated to symplectic vector spaces over finite fields. Such codes are generally of the form $[2n,\ell]_q$, where $\ell \leq n$ is the dimension of the isotropic subspace, and the symplectic form on~$\mathbb{F}_q^{2n}$ may, without loss of generality, be taken to have the standard Darboux form. We review the construction in the following subsection. For certain choices of the code parameters, CRSS quantum codes may then be used in turn to produce perfect tensors; in fact, as explained in section~\ref{CRSS}, a generalization of the CRSS algorithm relates perfect tensors to Lagrangian subspaces in general position in~$\mathbb{F}_q^{2n}$. Let us also remark that isotropic subspaces in symplectic vector spaces may be constructed from other types of classical codes.
For example, let $D$ be a classical self-orthogonal $[n,k,d]_{q^2}$ code over $\mathbb{F}_{q^2}$, where the self-orthogonality is established with respect to the Hermitian inner product \eqn{HermInProd}{ v * w = \sum_{i=1}^n v_i w_i^q \qquad v, w \in \mathbb{F}_{q^2}^n\,. } By Theorem~4 of~\cite{AshKni:2001}, there exists a classical code $C$ of length $2n$ and dimension $2k$ over $\mathbb{F}_q$ which is self-orthogonal with respect to the inner product, \eqn{TrInProd}{ (a,b) * (a^\prime,b^\prime) = \Tr \left(\langle a,b^\prime\rangle_* - \langle a^\prime,b\rangle_*\right) \qquad (a,b), (a^\prime, b^\prime) \in \mathbb{F}_q^{2n}\,, } where the Euclidean inner product $\langle \cdot, \cdot \rangle_*$ is defined to be \eqn{EucInProd}{ \langle a, b \rangle_* = \sum_{i=1}^n a_i b_i \qquad a,b \in \mathbb{F}_q^n \quad a_i, b_i \in \mathbb{F}_q\,. } This is of course precisely the standard Darboux symplectic form on~$\mathbb{F}_q^{2n}$. The inner product given in \eno{TrInProd} has an equivalent description in terms of the inner product of \eno{CompInProd}, as follows (see \cite{AshKni:2001}): \eqn{TrInProdEqui}{ (a,b) * (a^\prime,b^\prime) = \langle a, \varphi(b^\prime) \rangle - \langle a^\prime, \varphi(b) \rangle\,, } where\footnote{More generally, $\varphi$ is an automorphism of the vector space $\mathbb{F}_p^r$, but for convenience we will restrict our focus to the particular choice of $\varphi$ described here.} \eqn{phiAction}{ \varphi(a) = \left(\varphi(a_1),\ldots,\varphi(a_n)\right) \qquad a \in \mathbb{F}_q^n \quad a_i \in \mathbb{F}_q, } and the action of $\varphi$ on elements of $\mathbb{F}_q$ is given by matrix multiplication, where $\varphi$ acts as an $r \times r$ matrix $M$ on the elements of $\mathbb{F}_q$, with \eqn{phiMatrix}{ M_{ij} = \Tr (\gamma_i \gamma_j) \qquad i,j = 1,\ldots,r\,.
} \subsubsection{CRSS algorithm} We briefly review the CRSS algorithm~\cite{CRSS}, which produces a quantum error-correcting code from an appropriately chosen classical code. We emphasize the perspective that CRSS is intimately related to the formalism of canonical quantization, albeit for Heisenberg groups over~$\mathbb{F}_p$ rather than~$\mathbb{R}$. For further discussion, the reader is referred to~\cite{Marcolli:2018ohd,CRSS,GBR}. As mentioned above, the CRSS algorithm starts with a symplectic vector space~$V$ of dimension~$2n$ over a finite field. We let $\mathscr{H}(V)$ denote the ``quantization'' of this symplectic space, i.e., the unique irreducible representation of~$\Heis(V)$ with central character $\chi$. In fact, by results of~\cite{GH}, there is a canonical model for~$\mathscr{H}(V)$; we review these results in section~\ref{CRSS}. Now, $\mathscr{H}(V)$ is isomorphic to the tensor product of $n$ $q$-dimensional Hilbert spaces, one for each ``qubit'' or discrete degree of freedom. Such a tensor product decomposition corresponds to a choice of Darboux basis for~$V$, which splits it as the direct sum of standard two-dimensional symplectic spaces. As noted above, the representation matrices of~$\Heis(V)$ additively span the space $\End(\mathscr{H}(V))$ of all operators over~$\mathbb{C}$. Now, consider any maximal isotropic subspace $L$ of~$V$; every such subspace defines a maximal abelian subgroup of $\Heis(V)$. Mutually diagonalizing the action of the operators representing~$L$ splits~$\mathscr{H}$ as a direct sum of one-dimensional eigenspaces. Then, consider a (necessarily isotropic) subspace $C \subset L$, whose dimension is $i < n$. $C$ is to be thought of as the classical code subspace. The mutual eigenspaces of the abelian group associated to~$C$ define a decomposition of~$\mathscr{H}$ into $\#C = q^i$ eigenspaces, each of dimension $q^{n-i}$. Each of these is further split as a sum of one-dimensional eigenspaces of~$L$.
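The eigenspace counting at the end of the previous paragraph can be seen concretely in the smallest nontrivial case. The snippet below (an illustration with $q = p = 3$ and $n = 2$, not part of the construction) takes a one-dimensional isotropic subspace $C$, represented by the single commuting unitary $U = X \otimes Z$, and verifies that its eigenspaces give $q^i = 3$ blocks, each of dimension $q^{n-i} = 3$.

```python
import numpy as np

p = 3
w = np.exp(2j * np.pi / p)  # primitive p-th root of unity

# Generalized Pauli (shift and clock) matrices on C^p
X = np.roll(np.eye(p), 1, axis=0)         # shift: X|a> = |a+1>
Z = np.diag([w**a for a in range(p)])     # clock: Z|a> = w^a |a>

# One generator of the abelian group associated to a one-dimensional
# isotropic subspace C (i = 1, n = 2): the operator U = X (x) Z.
U = np.kron(X, Z)
assert np.allclose(np.linalg.matrix_power(U, p), np.eye(p**2))

# Projector onto the eigenvalue-w^k eigenspace: P_k = (1/p) sum_j w^{-jk} U^j.
# Its trace gives the dimension of that eigenspace.
dims = []
for k in range(p):
    P = sum(w**(-j * k) * np.linalg.matrix_power(U, j) for j in range(p)) / p
    dims.append(round(np.trace(P).real))

# q^i = 3 eigenspaces, each of dimension q^{n-i} = 3^{2-1} = 3
assert dims == [p] * p
```

Diagonalizing further along a maximal isotropic $L \supset C$ would split each of these blocks into one-dimensional eigenspaces, as described in the text.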
Now, one can define the quantum code space to be the invariant subspace of $C$, $\mathscr{H}(V)^C$, which is a Hilbert space of $(n-i)$ qubits, isomorphic to $\qty(\mathbb{C}^q)^{\otimes(n-i)}$. (One could equivalently have chosen any of the joint eigenspaces of~$C$.) Choosing an identification of this space with a standard set of $n-i$ qubits gives an encoding of $n-i$ qubits to $n$ qubits; the ``code words'' can be thought of as the natural basis in the code space consisting of eigenspaces of~$L$. To think of the code as a perfect tensor, we'd like to view the isometric injection of the $(n-i)$-qubit code space into the $n$-qubit encoding space as arising from a partitioning of the indices of a $(2n-i)$-index tensor. In other words, we should consider the larger space consisting of $(n-i)$ degrees of freedom to be encoded, together with $n$ degrees of freedom for the encoding space. Note that the number of indices of the perfect tensor will be even precisely when $i = \dim C$ is even, as is the case for the codes of the previous section. The error-correction properties of such a code are discussed in~\cite{CRSS}; we note that quantization of a self-orthogonal $[2n,2k]_q$ code, such as those discussed above, produces a quantum code with parameters $[\![n, n-2k, d_Q]\!]_q$. That is, one encodes $n-2k$ qubits in $n$ qubits with code distance $d_Q$, which allows the correction of up to $\lfloor (d_Q-1)/2 \rfloor$ errors, where $d_Q = \min \{ \wt(a,b): (a,b) \in C^\perp \smallsetminus C\}$. In order to produce a perfect tensor, we will need $d_Q = n - k$. In appendix \ref{THREEQ} we consider an explicit example of a particular classical code, one of the Reed--Solomon codes, and construct the associated quantum Reed--Solomon code. The classical Reed--Solomon codes have parameters $[n,k,n-k+1]_q$, and are constructed using a set of points $X \subseteq \mathbb{P}^1(\mathbb{F}_q)$ with $|X| = n \leq q+1$, and homogeneous polynomials $f \in \mathbb{F}_q[u,v]$ where $x = [u:v] \in X$.
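The parameter count $[\![n, n-2k, d_Q]\!]_q$ and the formula for $d_Q$ can be illustrated on a standard example not discussed in the text: the five-qubit code. Its stabilizer generators span a self-orthogonal (isotropic) subspace $C \subset \mathbb{F}_2^{10}$ of dimension $2k = 4$, and a brute-force computation of $\min\{\wt(a,b) : (a,b) \in C^\perp \smallsetminus C\}$ recovers the familiar $[\![5,1,3]\!]_2$ parameters. (This is an illustrative sketch; the weights and vectors below are the standard symplectic representation of the code.)

```python
import itertools

n = 5  # five qubits; V = F_2^{2n} with vectors written (a | b)

def symp(u, v):
    """Standard symplectic form on F_2^{2n}: a.b' + a'.b (mod 2)."""
    a, b = u[:n], u[n:]
    ap, bp = v[:n], v[n:]
    return (sum(x * y for x, y in zip(a, bp))
            + sum(x * y for x, y in zip(ap, b))) % 2

def wt(v):
    """Symplectic weight: number of positions i with (a_i, b_i) != (0, 0)."""
    return sum(1 for i in range(n) if v[i] or v[n + i])

# Stabilizer generators of the five-qubit code, written as (a|b) rows:
# XZZXI and its cyclic shifts.  They span C, a dimension-4 isotropic subspace.
gens = []
base_x, base_z = [1, 0, 0, 1, 0], [0, 1, 1, 0, 0]
for s in range(4):
    a = base_x[-s:] + base_x[:-s] if s else base_x
    b = base_z[-s:] + base_z[:-s] if s else base_z
    gens.append(tuple(a + b))

def span(gens):
    out = set()
    for cs in itertools.product((0, 1), repeat=len(gens)):
        v = tuple(sum(c * g[i] for c, g in zip(cs, gens)) % 2
                  for i in range(2 * n))
        out.add(v)
    return out

C = span(gens)
assert all(symp(u, v) == 0 for u in gens for v in gens)  # C is isotropic

# C^perp with respect to the symplectic form (brute force over F_2^10)
C_perp = {v for v in itertools.product((0, 1), repeat=2 * n)
          if all(symp(v, g) == 0 for g in gens)}

# dim C = 2k = 4, so the quantum code is [[n, n-2k, d_Q]] = [[5, 1, d_Q]]
d_Q = min(wt(v) for v in C_perp - C)
assert (len(C), len(C_perp), d_Q) == (16, 64, 3)
```

Note that for this code $d_Q = 3 \neq n - k = 3$ happens to saturate the perfectness condition $d_Q = n-k$ stated above.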
For an input $k$-tuple of $q$-ary symbols, $a = (a_0,\ldots,a_{k-1}) \in \mathbb{F}_q^k$, the homogeneous polynomial is chosen to be \eqn{homPol}{ f_a(u,v) = \sum_{i=0}^{k-1} a_i u^i v^{k-1-i}\,, } and the resulting code takes the form \eqn{RSCode}{ C = \{\left( f_a(u_1,v_1), \ldots, f_a(u_n,v_n)\right): a \in \mathbb{F}_q^k, [u_i:v_i] \in X\}\,. } This family of Reed--Solomon codes can be used to construct quantum error-correcting Reed--Solomon codes $[\![n,n-2k,k+1]\!]_q$~\cite{Marcolli:2018ohd}. The case of perfect tensors is obtained by setting $n=q$ and $k+1=n-k$, which leads to a $[\![q,1,(q+1)/2]\!]_q$ code describing perfect tensors with $q+1$ indices and bond dimension $q$, where we can take the prime $q$ to be large as required for our later applications. In this paper we will consider precisely this code to construct holographic tensor networks; however our results are applicable more generally to tensor networks built out of any error-correcting code with the ``perfectness'' property in definition \ref{def:PT}. \smallskip {\bf Notational remark:} In this and the following section, as well as in appendix~\ref{THREEQ}, we set $q=p^r$ where $p$ is a prime and $r$ is a positive integer. From section \ref{GENUSZERO} onward, we will reserve the letter $p$ to parametrize the bulk geometry of the Bruhat--Tits tree of valence $p+1$, and will set up on this geometry the quantum Reed--Solomon code $[\![r,1,(r+1)/2]\!]_r$ where $r$ will be an independent prime number. It's also worth emphasizing that, in addition to being a prime power, $q$ will be used for the parameter of a multiplicative normalization of an elliptic curve (i.e. a representation as $\mathbb{C}^\times/q^\mathbb{Z}$, or $\mathbb{Q}_p^\times/q^\mathbb{Z}$ in the $p$-adic case). Both notations are standard, but the context should always be sufficient to determine which usage is intended.
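The classical ingredient just described can be checked directly in a few lines. The following snippet (illustrative only, with $q = 5$ and $k = 2$, and with $X$ taken to be the affine points $[u:1]$) builds the Reed--Solomon code of \eno{homPol}--\eno{RSCode} and verifies the $[n,k,n-k+1]_q$ parameters by enumerating all codewords.

```python
import itertools

q, k = 5, 2                     # F_5, polynomials of degree <= k-1
X = list(range(q))              # evaluation points: the affine points [u : 1]

def f(a, u):
    """f_a(u, 1) = sum_i a_i u^i, the homogeneous polynomial at v = 1."""
    return sum(a_i * u**i for i, a_i in enumerate(a)) % q

# The Reed-Solomon code: evaluations of each f_a at the n = q points of X
C = {tuple(f(a, u) for u in X) for a in itertools.product(range(q), repeat=k)}

n = len(X)
assert len(C) == q**k           # q^k distinct codewords (dimension k)

# Minimum distance: a nonzero polynomial of degree <= k-1 has <= k-1 roots,
# so every nonzero codeword has weight >= n-k+1; RS codes meet this bound.
d = min(sum(1 for c in v if c) for v in C if any(v))
assert (n, k, d) == (5, 2, 4)   # an [n, k, n-k+1]_q = [5, 2, 4]_5 code
```

For the perfect-tensor case one would instead take $n = q$ and $k = (q-1)/2$, as in the text; the enumeration above works the same way for any small odd prime $q$.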
\section{Perfect Tensors Associated to Semiclassical States} \label{CRSS} The reader will have noticed that we have chosen to emphasize the perspective of canonical quantization in our exposition of the CRSS algorithm. We have done this, in part, to prepare for the discussion in this section, in which we will demonstrate that there is a natural generalization of the CRSS algorithm that produces perfect tensors \emph{directly}, without any intermediate reference to the theory of quantum codes. This perspective on perfect tensors has several advantages: First off, it shows that they are naturally associated by quantization to a particular class of semiclassical states, i.e., Lagrangian subspaces of a symplectic vector space $V$ over~$\mathbb{F}_q$. The condition of maximal rank on the perfect tensor, with respect to a decomposition of the Hilbert space into groups of qubits, translates naturally into a general-position requirement on the Lagrangian, asking that the dimensions of its intersections with a symplectic splitting of~$V$ be generic (as small as possible). As such, one is led to the conclusion that perfect tensors---rather than just being an \emph{ad~hoc} choice, adopted for calculational convenience in tensor network models---arise naturally in a way which bears a precise relationship to standard physical constructions. As a consequence of this perspective, we are able to write down a natural class of Hamiltonians, which are closely related to standard Hamiltonians for discrete degrees of freedom, for which the vacuum state is precisely the perfect-tensor state. With a bit of additional work, related to understanding gluing of perfect tensors, it should be possible to use these ideas to write down concrete spin systems whose vacuum states are computed by networks of perfect tensors. We look forward to returning to this question in future work. 
Our constructions make use of results of Gurevich and Hadani~\cite{GH}, who demonstrated the existence of a ``canonical quantization functor'' for symplectic vector spaces over finite fields. Given a choice of central character, this functor associates a finite-dimensional Hilbert space to each such symplectic vector space. Of course, this Hilbert space is just isomorphic to the unique irreducible representation of the corresponding Heisenberg group with given central character; however, constructing that representation normally requires a choice of auxiliary data, taking the form of a Lagrangian subspace of~$V$ and playing the role of a choice of polarization in geometric quantization. (The reader should imagine, for example, the choice between the position and momentum representations in constructing the quantum-mechanical Hilbert space of a particle.) Rather than a canonical Hilbert space, one therefore normally gets a family of Hilbert spaces over the oriented Lagrangian Grassmannian of~$V$. Gurevich and Hadani demonstrate the existence of a collection of intertwining morphisms that naturally identify all of the fibers of this family; the reader should think of equipping this bundle with a natural flat connection (with trivial monodromy). The canonical model of the Hilbert space is then given by horizontal sections of the family. Using these intertwining morphisms, one is therefore free to work in any model one chooses to study the irrep of the Heisenberg group. For the perfect tensor associated to a Lagrangian $L\subset V$, it is natural to choose the polarization to be either $L$ or~$L^\vee$; by making this choice, one obtains a state that looks, roughly speaking, either like a delta function or like a constant function. The reader can imagine that translation-invariant states in $L^2(\mathbb{R})$ are constant functions in the position representation, or delta functions in the momentum representation.
Of course, since~$L$ is in general position, the basis for~$\mathscr{H}$ arising from this choice is as far as possible from being a tensor-product basis. To change to such a basis, one must instead take the polarization data to be a Lagrangian $\Lambda\subset V$ which is a direct sum of one-dimensional Lagrangians, one in each of the symplectic direct summands of~$V$. ($\Lambda$ is thus maximally decomposable, i.e., as \emph{far} as possible from being in general position.) Applying the intertwining morphism of~\cite{GH} then gives an explicit formula for the perfect-tensor state, in the tensor product basis. The organization of this section is as follows: We will begin by giving a few more details on the construction of models for the irreducible representation of~$\Heis(V)$, and then continue by reviewing some of the results of~\cite{GH}. From there, we will go on to give the relation between perfect tensors and Lagrangians in general position. \subsection{Irreducible representations of Heisenberg groups} \label{ssec:HeisGrp} We reviewed some basic facts about Heisenberg groups above; here, we give some more details about the construction of irreducible representations. Any symplectic vector space $(V,\omega)$ can be written in some basis in the form \deq[eq:darboux]{ V = \Lambda \oplus \Lambda^\vee,\quad \omega: (x_1,z_1;x_2,z_2) \mapsto z_1(x_2) - z_2(x_1), } i.e., in a standard set of Darboux-type coordinates as the direct sum of two Lagrangian subspaces, which are placed in duality by~$\omega$. So we can simply write $\Heis(n,q)$ for the Heisenberg group where $\Lambda \cong \mathbb{F}_q^n$. Here, a basis of $\Lambda$ also determines a splitting of~$V$ as a direct sum of two-dimensional symplectic spaces; with respect to this splitting, $\Lambda$ is of course maximally decomposable, in the sense mentioned above. 
If we choose a section of the projection map in the exact sequence~\eqref{eq:seq}, which is equivalent to choosing a normal-ordering prescription, we can begin to write familiar-looking explicit formulas. For example, given a Darboux basis of the form~\eqref{eq:darboux}, we can pick the section of the projection map defined by the condition that all operators from~$\Lambda^\vee$ appear to the right of those coming from~$\Lambda$, and thereby identify~$\Heis(V)$ with $\mathbb{F}_q \times V$. In other words, by an obvious Poincar\'e--Birkhoff--Witt-type property, we can write each element of the group uniquely in the form \deq{ c^\ell X^I Z^J, } where the multi-index $I$ is an element of~$\Lambda$, $J$ of~$\Lambda^\vee$, and~$\ell$ of~$\mathbb{F}_q$. (Clearly, $X^I$ means $X_1^{i_1}\cdots X_n^{i_n}$, corresponding to a chosen basis of~$\Lambda$, and so on. Since the operators coming from~$\Lambda$ commute among themselves, there is no further ordering ambiguity.) In what follows, we use the notation $[v]$ to mean the element of~$\Heis(V)$ corresponding to~$v\in V$ under the above prescription. The commutator of two such elements is then determined by the standard symplectic form: \deq{ [v][w] = c^{\omega(v,w)} [w][v]. } In a basis, we could write \deq{ X^I Z^J X^{I'} Z^{J'} = c^{(I',J) - (I,J')} X^{I'} Z^{J'} X^I Z^J, } where the pairing $(I,J) = i_1j_1 + \cdots + i_nj_n$, and the computation is carried out in~$\mathbb{F}_q$. This is the dual pairing between~$\Lambda$ and~$\Lambda^\vee$, and as such we have just rewritten the standard symplectic form~\eqref{eq:darboux}. The reader should compare this with~\eqref{EabComLaws}; note, however, that a specific choice of central character has been made there, whereas here we have not done this yet. Now, consider the regular representation of~$\Heis(V)$, which is simply its action on the space of complex-valued functions on the group itself. This space admits an action of~$\Heis(V)$ by both left and right translations, and furthermore has the natural $L^2$ Hermitian inner product, which is translation invariant.
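The exponentiated commutation relations just written can be realized concretely by the standard clock and shift matrices; the following snippet (an illustration, with $q = p = 5$ and $n = 2$, under one common choice of representation) checks the single-mode relation $ZX = cXZ$ and the multi-index relation with the phase $c^{(I',J)-(I,J')}$.

```python
import numpy as np

p = 5
c = np.exp(2j * np.pi / p)   # the central element acts by a p-th root of unity

X = np.roll(np.eye(p), 1, axis=0)         # X|a> = |a+1>
Z = np.diag([c**a for a in range(p)])     # Z|a> = c^a |a>

# Single degree of freedom: ZX = c XZ
assert np.allclose(Z @ X, c * X @ Z)

# Two degrees of freedom: normal-ordered elements X^I Z^J with I, J in F_p^2
def XZ(I, J):
    op1 = np.linalg.matrix_power(X, I[0]) @ np.linalg.matrix_power(Z, J[0])
    op2 = np.linalg.matrix_power(X, I[1]) @ np.linalg.matrix_power(Z, J[1])
    return np.kron(op1, op2)

I, J, Ip, Jp = (1, 2), (3, 0), (0, 4), (2, 2)
pair = lambda A, B: sum(a * b for a, b in zip(A, B))  # (I,J) = sum_i i_a j_a

# X^I Z^J X^I' Z^J' = c^{(I',J) - (I,J')} X^I' Z^J' X^I Z^J
lhs = XZ(I, J) @ XZ(Ip, Jp)
rhs = c**(pair(Ip, J) - pair(I, Jp)) * (XZ(Ip, Jp) @ XZ(I, J))
assert np.allclose(lhs, rhs)
```

Since the phase only depends on the symplectic pairing of the multi-indices, any other choice of $I, J, I', J'$ works equally well here.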
Furthermore, it breaks up into a direct sum over the set of central characters; we will denote by $F_\chi$ the subspace of functions where $Z(\Heis(V))\cong \mathbb{F}_q$ acts through the character~$\chi$. After choosing a normal-ordering prescription as above, $F_\chi$ can be thought of as identified with~$\Fun(V)\cong (\mathbb{C}^{q})^{\otimes 2n}$. In the presence of this additional data, it is precisely the space \begin{equation} \begin{tikzcd} \Fun(V) \otimes \chi \arrow[hookrightarrow]{d}{} \arrow{r}{\sim} & F_\chi \arrow[hookrightarrow]{d}{} \\ \Fun(V) \otimes \Fun(\mathbb{F}_q) \arrow{r}{\sim} &\Fun(\Heis(V)). \end{tikzcd} \end{equation} Note, however, that $F_\chi$ is defined independently of a normal ordering, even though the above identification is not. Thus, the horizontal isomorphisms in the above diagram are not canonical. Moreover, it is apparent that, with respect to the $L^2$ inner product, \deq{ (F_\chi)^\vee = F_{\bar\chi}, } where $\bar\chi = \chi^{-1}$ is the inverse (or complex conjugate) character. Now, by the Stone--von~Neumann theorem, $F_\chi$ must decompose as the direct sum of $q^n$ irreducibles, each isomorphic to the unique (Heisenberg) representation with central character $\chi$. This decomposition can be seen as follows: Given a choice of Lagrangian $\Lambda$ and the corresponding decomposition $V = \Lambda \oplus \Lambda^\vee$ as above, one can take the \emph{right} $\Lambda^\vee$-invariants inside of~$F_\chi$. Specifically, we mean functions $f \in F_\chi$ such that \deq{ f( g [z]) = f(g), \quad \forall g \in \Heis(V),\ z \in \Lambda^\vee. } This is then an invariant subspace (in fact, an irreducible representation) with respect to the action of~$\Heis(V)$ by \emph{left} translations; indeed, the entire right action of~$\Lambda^\vee$ commutes with the left action of~$\Heis(V)$, so that the eigenspaces of that action are $\Heis(V)$-invariant.
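For a single degree of freedom over $\mathbb{F}_p$ (so $q = p$, $n = 1$, and $q^n = p$ factors), this decomposition can be exhibited by hand. In the snippet below (an illustrative sketch, with one choice of conventions for the left and right translation operators), $F_\chi$ is modeled as functions $f(x,z)$ on $V \cong \mathbb{F}_p^2$ via the normal ordering; the right $\Lambda^\vee$-translation $R$ shifts $z$, the left translations shift $x$ or shift $z$ with a central phase, and the eigenspaces of $R$ give $p$ blocks of dimension $p$ each.

```python
import numpy as np

p = 5
w = np.exp(2j * np.pi / p)

# One degree of freedom (n = 1, q = p).  After a normal ordering,
# F_chi is identified with functions f(x, z) on V = F_p^2, i.e. C^(p^2).
I = np.eye(p)
S = np.roll(np.eye(p), 1, axis=0)   # cyclic shift on C^p

# Right translation by the generator of Lambda^vee: f(x, z) -> f(x, z+1)
R = np.kron(I, S)

# Left translation by the generator of Lambda: a shift in x, independent of z
L_x = np.kron(S, I)
# Left translation by the generator of Lambda^vee picks up the central phase
# chi(c)^x and shifts z (using [z0][x] = c^{z0 x} [x][z0]):
L_z = np.kron(np.diag([w**x for x in range(p)]), S)

# The left Heis(V)-action commutes with the right Lambda^vee-action ...
assert np.allclose(R @ L_x, L_x @ R) and np.allclose(R @ L_z, L_z @ R)

# ... so each eigenspace of R is Heis(V)-invariant.  R has the p-th roots
# of unity as eigenvalues, each with multiplicity p: F_chi splits into
# q^n = p irreducibles, each of dimension p.
eigs = np.linalg.eigvals(R)
for k in range(p):
    assert int(sum(np.isclose(eigs, w**k))) == p
```

The eigenvalue-$1$ block of $R$ is exactly the space of right $\Lambda^\vee$-invariant functions described in the text.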
They are precisely the $q^n$ irreducible factors of the isotypic component $F_\chi$, and can be labeled by the characters of~$\Lambda^\vee$. With respect to a choice of normal ordering as above, we can identify $(F_\chi)^{\Lambda^\vee}$ in the obvious way with~$\Fun(\Lambda)$, and may write $f(x)$ for an element using this identification. In formulas, one has \deq{ [x'] \cdot f(x) = f(x + x'), } as well as \deq{ [z] \cdot f(x) = f([z][x]) = f(c^{z(x)}[x+z]) = \chi(c^{z(x)}) f(x), } after using right $\Lambda^\vee$-invariance. This is the model of the Heisenberg representation corresponding to the polarization data $\Lambda^\vee$, and we will sometimes denote it $\mathscr{H}_{\Lambda^\vee}$. It should be apparent that this construction is precisely analogous to the construction of the Hilbert space in quantum mechanics, or (in more sophisticated terms) to the role of a ``choice of polarization'' in geometric quantization. For example, functions on phase space would be ``wavefunctions'' $\psi(x,p)$, but the Hilbert space in fact consists of wavefunctions $\psi(x)$---i.e., that subset of functions on phase space that are invariant under translation in the momentum directions. The choice of central character corresponds to a choice of numerical value for~$\hbar$. \subsection{Functorial quantization} \label{ssec:quant} In this section, we give a brief review of the main results of~\cite{GH}, which constructed a canonical model of the Heisenberg representation. As mentioned above, this was done by constructing a family of intertwining morphisms \deq{ \phi_{\Lambda}^{\Lambda'}: \mathscr{H}_\Lambda \rightarrow \mathscr{H}_{\Lambda'}, } trivializing the family over the space of oriented Lagrangians in~$V$. (The morphisms $\phi$---although not the representations $\mathscr{H}_\Lambda$---depend on the additional data of orientations on~$\Lambda$ and~$\Lambda'$; an orientation is just a choice of nonzero vector in the top exterior power of~$\Lambda$.
We suppress this from the notation for the sake of simplicity; in fact, the orientation only enters $\phi$ in the form of a normalization constant, which will play no role in our considerations.) Let us begin by stating the theorem: \begin{thm}[\cite{GH}] To each nontrivial central character $\chi$, there is associated a canonical ``quantization functor'' \deq{ \mathscr{H}: \cat{symp}^\text{\rm iso}(\mathbb{F}_q) \rightarrow \cat{vect}^\text{\rm iso}(\mathbb{C}), } where the first category is that of finite-dimensional symplectic vector spaces over~$\mathbb{F}_q$ (with arrows being isomorphisms of such) and the second is that of finite-dimensional complex vector spaces together with their isomorphisms. (We may occasionally write $\mathscr{H}^\chi$ for clarity.) By the action on arrows, one obtains a natural group homomorphism \deq[eq:Wrep]{ \Sp(V) \rightarrow GL\qty(\mathscr{H}(V)), } for every symplectic vector space~$V$; this gives a canonical model of the Weil representation of~$\Sp(V)$. Moreover, $\mathscr{H}(V)$ carries a natural action of~$\Heis(V)$, isomorphic to the Heisenberg representation with the corresponding central character. The functor $\mathscr{H}$ is monoidal, carrying the Cartesian product in~$\cat{symp}$ to the tensor product in~$\cat{vect}$. It is compatible with symplectic duality, meaning that \deq{ \mathscr{H}(\overline{V}) = \mathscr{H}(V)^\vee. } Here $\overline{V} = (V,-\omega)$ is the ``symplectic dual'' of~$(V,\omega)$. Moreover, $\mathscr{H}$ is compatible with symplectic reduction, in the following sense: Let $I \subset V$ be an isotropic subspace. Then there is a natural isomorphism \deq{ \mathscr{H}(V)^I \cong \mathscr{H}(V{/\!\!/} I). } The left-hand side is the $I$-invariants in the quantization of~$V$, whereas the right-hand side is the quantization of the linear symplectic reduction \deq[eq:sreddef]{ V{/\!\!/} I \doteq I^\perp / I } of~$V$ along~$I$. 
\end{thm} A few remarks on this result: First off, as mentioned above, one should think of the choice of central character as the value of~$\hbar$. As such, this is not extraneous data, but conceptually essential to the idea of a quantization functor. Second, the compatibility with symplectic reduction is an example of a result of the form ``quantization commutes with reduction'' (i.e., the Guillemin--Sternberg conjecture) in the context of finite fields. As mentioned above, the proof takes the form of giving a natural family $\phi$, trivializing the dependence of~$\mathscr{H}_\Lambda$ on additional data. This means that, to check a given property, it is enough to establish it for each such model, and further check that it is compatible with the trivialization maps. For example, the action of the functor on morphisms is quite simple to see: for a symplectomorphism $f\colon V\rightarrow V'$, it is just the pullback of $\mathscr{H}_{\Lambda'} \subset F_\chi'$ to~$V$, which lands in~$\mathscr{H}_{f^{-1}(\Lambda')}$. That is, one identifies polarization data in the two cases using the natural map induced from the symplectomorphism between the respective spaces of Lagrangians. Furthermore, the family of maps $\phi$ is relatively simple to describe: For two Lagrangians $L$ and~$L'$, in general position with respect to one another (in the sense that $L \cap L' = \{0\}$), the map is simply given by averaging: \deq{ \phi(f)(h) \propto \sum_{m \in L'} f(h\cdot[m]). } The result is obviously right $L'$-invariant; as mentioned above, the proportionality constant depends on a choice of orientation on each Lagrangian. When $L\cap L' = I$ is nontrivial, one averages over a set of representatives for the cosets $L'/I$. For more detailed discussion, the reader is referred to~\cite{GH}. \subsection{Perfect tensors from symplectic vector spaces} \label{ssec:PT} We're now at the point where we can formulate our central result.
Take~$V$ to be a symplectic vector space of dimension $4n$ (as always, over a finite field~$\FF_q$), and choose a Darboux basis as above, defining a splitting of~$V$ into classical degrees of freedom (two-dimensional symplectic subspaces). We will show that a choice of Lagrangian subspace $L$ in~$V$, which is in generic position with respect to the splitting, defines a perfect tensor upon application of the quantization functor~$\mathscr{H}$. To start, let's make the notion of general position a bit more precise. The splitting of~$V$, coming from a Darboux basis, writes it as a symplectic direct sum \deq[eq:decomp]{ V = \bigoplus_{i=1}^{2n} V_i } of copies of~$\FF_q^2$. Since the functor is monoidal, this is correspondingly a tensor product decomposition \deq{ \mathscr{H}(V) = \bigotimes_{i=1}^{2n} \mathscr{H}(V_i) \cong (\mathbb{C}^q)^{\otimes 2n} } of the corresponding Hilbert space into qubits. Partition the index set $\{1,\ldots,2n\}$ into disjoint sets $K$ and~$K'$, with $k = \# K \leq n$, and denote the corresponding splitting~$V=W\oplus W'$. It is then simple to check that \deq[eq:dims]{ \dim(L \cap W) \geq 2(k - n), \quad \dim(L \cap W') \geq 2(n - k). } The first condition, though, is obviously vacuous, since $n - k \geq 0$. In particular, if $k=n$, $L$ will generically have zero-dimensional intersection with both $W$ and~$W'$. We will say that $L$ is in \emph{strongly general position} with regard to the decomposition~\eqref{eq:decomp} if it has zero-dimensional intersection with all such $W$ and $W'$, i.e., for any partition of the indices into two equal sets. Now, it is straightforward to check the following simple result: Given such a splitting, a Lagrangian $L$ in general position is precisely equivalent to a choice of symplectomorphism \deq{ \psi: \overline{W} \rightarrow {W}', } where $\overline{W}$ is the symplectic dual of~$W$. 
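The equivalence between Lagrangians in general position and symplectomorphisms can be made completely concrete in the smallest case. The snippet below (illustrative only, with $p = 5$ and a single two-dimensional symplectic summand on each side) checks that $L = \mathrm{span}\{x_1 + z_2,\ z_1 + x_2\}$ is Lagrangian, is in general position with respect to the splitting $V = V_1 \oplus V_2$, and is the graph of a map $\psi$ with $\psi^* \omega = -\omega$, i.e.\ a symplectomorphism onto the symplectic dual.

```python
p = 5

# Coordinates (x1, z1, x2, z2); V = V_1 (+) V_2, each V_i a standard F_p^2.
def omega(u, v):
    return ((u[0] * v[1] - v[0] * u[1]) + (u[2] * v[3] - v[2] * u[3])) % p

# L = span{x1 + z2, z1 + x2}, the graph of psi: x1 -> z2, z1 -> x2
g1, g2 = (1, 0, 0, 1), (0, 1, 1, 0)
L = {tuple((a * g1[i] + b * g2[i]) % p for i in range(4))
     for a in range(p) for b in range(p)}

# L is Lagrangian: two-dimensional (p^2 points) and isotropic
assert len(L) == p**2
assert all(omega(u, v) == 0 for u in L for v in L)

# General position: L meets W = V_1 and W' = V_2 only in the origin
assert {v for v in L if v[2] == v[3] == 0} == {(0, 0, 0, 0)}
assert {v for v in L if v[0] == v[1] == 0} == {(0, 0, 0, 0)}

# psi sends the V_1-component of each element of L to its V_2-component;
# in the bases (x1, z1) and (x2, z2) it is the matrix [[0, 1], [1, 0]].
# A symplectomorphism onto the dual satisfies psi^T J psi = -J (mod p).
psi = [[0, 1], [1, 0]]
J = [[0, 1], [p - 1, 0]]          # the standard 2x2 symplectic form mod p

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % p
             for j in range(2)] for i in range(2)]

psiT = [list(r) for r in zip(*psi)]
negJ = [[(-J[i][j]) % p for j in range(2)] for i in range(2)]
assert mul(mul(psiT, J), psi) == negJ
```

Replacing the generators by any other graph of an invertible map reproduces the converse direction of the argument below.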
Indeed, an element $v\in L$ is a pair of elements $(v_1,v_2)$ of~$W \times W'$, and the symplectic form is of direct product type, so that \deq[eq:symplecto]{ 0 = \omega(u,v) = \omega(u_1,v_1) + \omega(u_2,v_2). } Define the map $\psi$ by the rule $\psi(v_1) = v_2$; this is clearly linear. Moreover, it is unambiguous, since the existence of more than one $v_2$ such that $(v_1,v_2) \in L$ would contradict the assumption that $L \cap W' = 0$. There must exist at least one such $v_2$, though, since $L$ has maximal dimension. By~\eqref{eq:symplecto}, $\psi^*\omega = - \omega$, so that $\psi$ is a symplectomorphism between $W'$ and the symplectic dual of~$W$. The converse construction is obvious; just take $L$ to be the graph of the map $\psi$. It is now easy to see that the $L$-invariant element defines a linear isomorphism. We can interpret $L$ as a symplectomorphism, which will be carried by the quantization to a linear map \deq{ \mathscr{H}(\psi): \mathscr{H}(W)^\vee \rightarrow \mathscr{H}(W'). } This map is equivalently an element in~$\mathscr{H}(W) \otimes \mathscr{H}(W') = \mathscr{H}(V)$, which is the perfect tensor. Of course, it remains to show that the element associated to the symplectomorphism $\psi$ is, in fact, identical to the invariant element under the abelian subgroup associated to the graph of~$\psi$. But this is not difficult to see; for a given basis vector in~$L$, which can be written as a sum over the Darboux basis, the vectors invariant under the corresponding group element will be those tensor products of eigenvectors of the Darboux generators whose eigenvalues multiply to unity. Upon splitting the generators into two subsets, this means that the product of eigenvalues from the first set is equal to (the inverse of) the product of eigenvalues from the second. We can furthermore make the following observation: The dimensions of the isotropic spaces $J = L\cap W$ and~$J' = L\cap W'$ must, in fact, be equal. To see that this is true, suppose that $J$ has dimension $j$.
Then consider the symplectic reduction of~$V$ along~$J$; because $W' \subset J^\perp$, this is naturally isomorphic to the direct sum of~$W'$ with the symplectic reduction of~$W$ along~$J$, and $L/J$ is a Lagrangian subspace there (as one can see by checking dimensions). But then it follows from dimension formulas analogous to~\eqref{eq:dims} that \deq{ j' = \dim J' \geq j. } By reversing $W$ and~$W'$, the equality $j=j'$ follows. It further follows that the rank of the $L$-invariant element $T$, with respect to the tensor-product decomposition $\mathscr{H}(W) \otimes \mathscr{H}(W')$, is in fact always given by the formula \deq{ \rank T = q^{n-j}, } where, as above, $j = \dim(L\cap W) = \dim(L \cap W')$. This is a simple consequence of compatibility with symplectic reduction, as defined in~\eqref{eq:sreddef}: after reducing along $J \oplus J'$, one obtains a Lagrangian in general position, of the form \deq{ L/(J \oplus J') \subset (W{/\!\!/} J) \oplus (W'{/\!\!/} J'). } The formula then follows by recalling the above considerations in the case of general position. In fact, nothing in our arguments requires that $V$ consist of an even number of degrees of freedom. One is free to consider the more general case of a decomposition, where \deq{ L^m \subset V^{2m} = W^{2k} \oplus (W')^{2(m-k)}. } Here, superscripts denote dimension, and $2k\leq m$ without loss of generality; with regard to our previous notation, $m = 2n$, although $m$ may of course now be odd. It is trivial to check, as above, that \deq{ j = \dim(L\cap W) \geq 0, \quad j' = \dim(L \cap W') \geq m - 2k. } But it is further true that \deq{ j' = m - 2k + j, } as may be seen by considering symplectic reduction along~$J$ and~$J'$ in turn, and counting dimensions. Moreover, the rank of the quantization of~$L$ will be precisely $q^{k-j}$.
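The rank formula can be checked numerically in the two-qubit case (here with $q = p = 3$, so $n = 1$ in the notation above, and with one common choice of conventions for the operators representing elements of $V$). A Lagrangian in general position ($j = 0$) yields a full-rank invariant element, while the decomposable Lagrangian $\mathrm{span}\{x_1, x_2\}$ has $j = \dim(L \cap W) = 1$ and yields rank $q^{n-j} = 1$.

```python
import numpy as np

p = 3
w = np.exp(2j * np.pi / p)
X = np.roll(np.eye(p), 1, axis=0)
Z = np.diag([w**a for a in range(p)])
Zinv = Z.conj().T

def invariant_tensor(ops):
    """Common eigenvalue-1 subspace of commuting order-p unitaries,
    returned as a p x p matrix (one leg per degree of freedom)."""
    P = np.eye(p**2)
    for U in ops:
        P = P @ sum(np.linalg.matrix_power(U, j) for j in range(p)) / p
    # the joint invariant subspace is one-dimensional in both cases below;
    # take the top eigenvector of the (Hermitian) projector as the state T
    vals, vecs = np.linalg.eigh(P)
    return vecs[:, np.argmax(vals)].reshape(p, p)

# General position: L = span{x1 + x2, z1 - z2}  ->  j = 0, rank q^(n-j) = p
T_gen = invariant_tensor([np.kron(X, X), np.kron(Z, Zinv)])
assert np.linalg.matrix_rank(T_gen) == p

# Decomposable: L = span{x1, x2}  ->  j = 1, rank q^(1-1) = 1
T_dec = invariant_tensor([np.kron(X, np.eye(p)), np.kron(np.eye(p), X)])
assert np.linalg.matrix_rank(T_dec) == 1
```

The general-position Lagrangian used here is the second example discussed in the following subsection, whose invariant element reduces to the identity matrix.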
It is thus true that a Lagrangian in general position with respect to a Darboux basis gives rise to a perfect tensor, in the sense of our above definition, independent of the number of indices under consideration. A further remark on this result: It is simple to see that, for each splitting of~$V$ as a symplectic direct sum, generic position is an open condition in the Lagrangian Grassmannian. Since there are only finitely many splittings to check, we impose only finitely many open conditions by insisting that $L$ is in strongly general position. As such, this condition is ``generic'' among all Lagrangians in~$V$. Of course, this is subject to the caveat that (over $\mathbb{F}_p$) the Lagrangian Grassmannian itself consists of only finitely many points. As such, for a given choice of $q$ and~$\dim V$, there may not be \emph{any} such Lagrangian---even though the condition is open! However, for large enough $q$, one expects perfect tensor states to be generic in this precise sense among all semiclassical states. \subsection{A simple example} \label{ssec:example} Just for concreteness, let's consider the simplest example of the above considerations: a two-qubit space $V\cong \FF_p^4$, with Darboux basis $\{x_1,x_2;z_1,z_2\}$ over $\mathbb{F}_p$. An example of a Lagrangian in general position is \deq{L = \Span(x_1 + z_2, x_2 + z_1); } this is the graph of the symplectomorphism \deq{ \psi: V_1 \rightarrow \overline{V}_2, \qquad x_1 \mapsto z_2,\quad z_1 \mapsto x_2. } It is now trivial to see that, in the representation where the explicit matrices are \deq{ x_i \ket{a} = \ket{a+1}, \quad z_i \ket{a} = e^{2 \pi i a / p} \ket{a}, } a set of invariant eigenvectors for the first basis element of~$L$ consists of \deq{ \psi(a) = \sum_{b \in \mathbb{F}_p} e^{-2 \pi i a b / p} \ket{a,b}, } since this element is just the basis vector $\ket{a}$ of~$x_1$ tensored with the eigenvector of $z_2$ with eigenvalue $a^{-1}$. 
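The counting behind this family of invariant eigenvectors, together with the full-rank property of the Fourier kernel that emerges from it, can be verified numerically. The snippet below is an illustration only (with $p = 5$, and with the operator for the basis element $x_1 + z_2$ taken to be $X \otimes Z$ up to the ordering and phase conventions discussed above): each $p$-th root of unity occurs as an eigenvalue of $X \otimes Z$ with multiplicity $p$, and the kernel $T_{ab} = e^{-2\pi i a b / p}$ is unitary up to $1/\sqrt{p}$.

```python
import numpy as np

p = 5
w = np.exp(2j * np.pi / p)
X = np.roll(np.eye(p), 1, axis=0)        # x_i |a> = |a+1>
Z = np.diag([w**a for a in range(p)])    # z_i |a> = w^a |a>

# The operator representing the basis element x1 + z2 of L (up to
# convention-dependent phases) is X (x) Z.  Each p-th root of unity is an
# eigenvalue of multiplicity p, so fixing an eigenvalue leaves a
# p-dimensional family of eigenvectors.
U = np.kron(X, Z)
eigs = np.linalg.eigvals(U)
assert all(int(sum(np.isclose(eigs, w**k))) == p for k in range(p))

# The Fourier kernel T_ab = w^{-ab} is full rank: T T^dagger = p * Id,
# i.e. T / sqrt(p) is unitary -- the defining property of a two-leg
# perfect tensor.
T = np.array([[w**(-a * b) for b in range(p)] for a in range(p)])
assert np.allclose(T @ T.conj().T, p * np.eye(p))
assert np.linalg.matrix_rank(T) == p
```

Intersecting the eigenvector families of the two basis elements of $L$ cuts the $p$-dimensional multiplicity spaces down to the single state described next.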
Similarly, for the second basis element, \deq{ \psi'(b) = \sum_{a \in \mathbb{F}_p} e^{-2 \pi i a b / p} \ket{a,b}, } from which it is trivial to see that the perfect tensor associated to~$L$ (the intersection of the above two subspaces) is just \deq{ T = \sum_{a,b \in \mathbb{F}_p} e^{-2 \pi i a b / p} \ket{a,b}. } It is, of course, obvious that this defines a full-rank matrix after lowering an index, since the matrix elements are just \deq{ T_{ab} = e^{-2 \pi i a b / p}. } Furthermore, it is clearly unitary, up to a normalization by~$1/\sqrt{p}$ that we should have included all along (to make use of the \emph{normalized} eigenvectors of~$z_i$). And we should expect it to be the isomorphism that identifies the eigensystem of~$x_1$ in the first copy of~$\mathbb{C}^p$ with the eigensystem of~$z_2$ in the second copy; a quick look at the $a$-th column of~$T_{ab}$ demonstrates that this is indeed the case. (This is, of course, closely connected to the kernel representation of the Fourier transform over~$\mathbb{F}_p$.) An equally simple example can be constructed by thinking of the Lagrangian \deq{ L = \Span(x_1+x_2,z_1-z_2); } the reader will find it pleasant to check that the matrix $T_{ab}$, in this case, reduces exactly to the identity matrix. \subsection{Hamiltonians with perfect tensor vacua} \label{ssec:Ham} We've worked hard to develop the idea that it is profitable to think about the systems of qubits arising in CRSS-type algorithms as quantizations of the canonical commutation relations on a discrete and periodic degree of freedom (i.e., representations of Heisenberg groups over finite fields). This makes a lot of analogies with usual quantum mechanics---which, after all, is about representations of Heisenberg groups over~$\mathbb{R}$---apparent. As a simple application of these ideas, we'd like to show that it's possible to write a physically natural Hamiltonian, starting with the semiclassical state (i.e. 
Lagrangian subspace) whose quantization is the perfect tensor. This Hamiltonian has the property that its spectrum is positive semidefinite, and the unique vacuum state is precisely the perfect tensor state. Of course, the elements of~$\Heis(n,q)$ are represented as \emph{unitary} operators, rather than Hermitian ones---the typical commutation relation, \deq{ZX = cXZ,} is after all an analogue of the \emph{exponentiated} canonical commutation relations. This seems like an obstruction to writing down interesting Hamiltonians, especially since $\Heis(n,q)$ is a discrete group and one cannot simply ask about its Lie algebra! However, it is straightforward to see that the situation is analogous to that of writing discrete derivative operators in quantum mechanics. Here, the only natural operators are the shift operators, which of course are unitary: the adjoint of a shift is the inverse shift. But it's therefore straightforward to see that combinations like \deq{ x + x^{-1}, ~ z + z^{-1} } are Hermitian, with spectra that look like (twice) the real parts of the roots of unity. And these Hermitian operators are obviously diagonal in the same basis that the unitaries themselves are. In fact, there's also the option to simply write \deq{ h(x) = 2 - x - x^{-1}, } which is still obviously Hermitian. Recalling that $x$ is an elementary shift operator, this is precisely analogous to a standard quadratic kinetic term in a lattice model: a discrete analogue of the operator $-\nabla^2$. Moreover, since $X$ has eigenvalues $e^{2 \pi i a / p}$ for $a \in \mathbb{F}_p$, the spectrum of $h(X)$ consists of the values \deq{ 2 - 2 \cos\qty(\frac{2\pi a}{p}) = 4 \sin^2 \qty(\frac{\pi a}{p}). } The construction of a suitable Hamiltonian is now clear: Choose any basis $b_i$ for the Lagrangian~$L$ whose quantization is the perfect tensor state, and then write \deq{ h = \sum_i h(b_i) = \sum_i \qty( 2 - b_i - b_i^{-1} ).
} Since $L$ is Lagrangian, the $b_i$ (and therefore also the $h(b_i)$) are mutually commuting, and $h$ can be diagonalized by a set of mutual eigenvectors of the $b_i$---which is precisely an eigenbasis of~$L$. Note also that each term is positive semidefinite, since $h(b) = (1-b)(1-b)^\dagger$ for unitary~$b$; the kernel of $h(b)$ consists of the vectors fixed by~$b$, so the vacuum of~$h$ is the common fixed vector of the $b_i$---the perfect tensor state. These operators provide a natural set of candidates for physical Hamiltonians, analogous to spin systems, whose sets of vacua are precisely the states obtained by networks of perfect tensors. What remains to be developed is an understanding of how the operation of gluing perfect tensors together lifts to the construction of a glued Hamiltonian, whose vacuum is the glued state. It seems plausible that this could be done in a way that has a natural semiclassical interpretation; whether the resulting model would have a Hamiltonian of commuting-projector type is not obvious. We look forward to returning to this question in future work. \section{Entanglement in $p$-adic AdS/CFT} \label{GENUSZERO} In this section we build on section \ref{REVIEW} to initiate the study of holographic entanglement entropy in $p$-adic AdS/CFT, via a quantum error-correcting tensor network construction built using perfect tensors. We begin by discussing the framework for the vacuum ($p$-adic) AdS geometry, culminating in the verification of a Ryu--Takayanagi like formula in this purely $p$-adic setting, and in the next section proceed to discuss entanglement in a genus 1 ($p$-adic) black hole geometry. \subsection{The dual graph tensor network} \label{DUALGRAPH} The states we focus on in this paper form a subset of the states which can be constructed by contracting perfect tensors; any such contraction pattern is referred to as a ``tensor network''. The basic idea of this construction involves the contraction of many tensors in a ``bulk'' space to produce a complicated entangled state at the boundary of the network.
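Before proceeding, the claims of section \ref{ssec:Ham}---that $T_{ab}$ becomes unitary after normalizing by $1/\sqrt{p}$, and that $h(x) = 2 - x - x^{-1}$ is positive semidefinite with spectrum $4\sin^2(\pi a/p)$---are easy to check numerically. The following is a minimal sketch of our own (in Python with numpy; the choice $p=5$ is arbitrary), for a single shift operator:

```python
import numpy as np

p = 5  # any prime; p = 5 is an arbitrary illustrative choice

# The two-index perfect tensor T_{ab} = exp(-2*pi*i*a*b/p);
# after normalizing by 1/sqrt(p) it is a unitary (conjugate DFT) matrix.
a, b = np.meshgrid(np.arange(p), np.arange(p), indexing="ij")
U = np.exp(-2j * np.pi * a * b / p) / np.sqrt(p)
assert np.allclose(U @ U.conj().T, np.eye(p))

# Elementary shift operator x on C^p: x|a> = |a+1 mod p>
X = np.roll(np.eye(p), 1, axis=0)

# h(x) = 2 - x - x^{-1} is Hermitian with spectrum 4 sin^2(pi*a/p)
h = 2 * np.eye(p) - X - X.T
eigs = np.sort(np.linalg.eigvalsh(h))
assert np.allclose(eigs, np.sort(4 * np.sin(np.pi * np.arange(p) / p) ** 2))

# positive semidefinite, with a unique zero mode (the a = 0 eigenvector)
assert abs(eigs[0]) < 1e-12 and eigs[1] > 0
```

The same check applies to each $h(b_i)$ separately, since the $b_i$ commute. With these checks in hand, we return to the boundary state produced by contracting many such tensors into a network.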
One may interpret this boundary state as an analog of the ground state of a boundary conformal field theory, and there are many proposals in the literature on how this may be realized. The details of the particular tensor network proposed here are along the lines of~\cite{Pastawski:2015qua} and are important to the overall conclusions and generalizations. We will see that the tensor network is closely associated with $p$-adic AdS/CFT. To construct a holographic state $|\psi \rangle$ in the boundary Hilbert space, we consider a tensor network given by what we call the ``dual graph'' of the holographically dual bulk geometry. For instance, if the boundary is $\mathbb{P}^1(\mathbb{Q}_p)$, then we consider the ``dual graph'' of the Bruhat--Tits tree in the bulk. If we are interested in states dual to the $p$-adic analog of the BTZ black hole, we must consider the corresponding ``dual graph'' of the genus $1$ Schottky uniformization of the Bruhat--Tits tree. In this section we focus on the tensor network associated with the Bruhat--Tits tree (which as mentioned in section \ref{ssec:padic} is to be thought of as the $p$-adic analog of a time slice of vacuum AdS$_3$). In the following, the introduction of this dual graph to the Bruhat--Tits tree may at first sight appear to be an additional structure beyond what is needed to study bulk dynamics in $p$-adic AdS/CFT, but it will turn out to be crucial to our investigation of the relationship between bulk geometries and boundary entanglement. We recall from section~\ref{ssec:padic} that every edge on the Bruhat--Tits tree can be uniquely specified by its two end-points (either as a pair of adjacent lattice equivalence classes or as a pair of cosets), and that for every node on the Bruhat--Tits tree, there are $p+1$ edges incident on it.
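To fix conventions, the following small sketch (our own illustration; the helper \texttt{truncated\_tree} is hypothetical and not part of the construction) builds the radially truncated Bruhat--Tits tree as a graph and confirms two counting facts used below: interior vertices have valence $p+1$, and there are $(p+1)p^{\Lambda-1}$ boundary points at depth $\Lambda$:

```python
from collections import Counter

def truncated_tree(p, depth):
    """Adjacency data for the (p+1)-regular tree, radially truncated.

    Nodes are tuples encoding the path of branch choices from the
    central vertex; returns (edges, boundary vertices)."""
    edges, frontier = [], [()]
    for _ in range(depth):
        nxt = []
        for node in frontier:
            # the central vertex has p+1 neighbors; every later vertex
            # has one parent and p children
            for k in range(p + 1 if node == () else p):
                child = node + (k,)
                edges.append((node, child))
                nxt.append(child)
        frontier = nxt
    return edges, frontier

p, depth = 3, 2
edges, boundary = truncated_tree(p, depth)

# boundary points: (p+1) * p^(depth-1) of them
assert len(boundary) == (p + 1) * p ** (depth - 1)  # 12 for p = 3, depth = 2

# every interior vertex has valence p+1
valence = Counter()
for u, v in edges:
    valence[u] += 1
    valence[v] += 1
assert all(valence[v] == p + 1 for v in set(valence) - set(boundary))
```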
We define the dual graph as follows: \begin{definition} \label{def:dualgraph} A {\it dual graph} of the Bruhat--Tits tree is a graph which satisfies the following two properties:\footnote{These properties are motivated by the ``minimum cut rule'' which provides a discrete analog of the Ryu--Takayanagi formula for connected regions in networks of perfect tensors~\cite{Pastawski:2015qua}; however we do not assume this in the following. In fact, in our setup the ``minimum cut rule'' applies more generally, for instance in the evaluation of the bipartite entanglement of a disconnected region as discussed in section \ref{GENUS0DOUBLE}.} \begin{itemize} \item There exists a bijective correspondence between bonds on the dual graph and edges on the Bruhat--Tits tree. (Both ``edge'' and ``bond'' refer to the same object in graph theory -- links between nodes on the graph -- but for clarity we reserve the term ``edge'' for the Bruhat--Tits tree and ``bond'' for the dual graph.) Consequently, each bond on the dual graph is identified by specifying the corresponding edge on the Bruhat--Tits tree. \item The bonds in bijective correspondence with the edges incident at a particular node of the Bruhat--Tits tree are connected so as to form a cycle graph. We refer to such cycle graphs as ``plaquettes''. Thus there is a bijective correspondence between nodes on the Bruhat--Tits tree and plaquettes on the dual graph. \end{itemize} \end{definition} In fact, the dual graph in the $p$-adic black hole geometry also satisfies the same properties. {\it Any} valid dual graph must satisfy the definition above; however, the definition does not uniquely specify {\it a particular} dual graph. By construction, a $\PGL(2,\mathbb{Q}_p)$ transformation acts simultaneously on both the Bruhat--Tits tree and its dual graph as an isometry.
Picking a particular valid dual graph (that is, making a particular choice of the connectivity of the plaquettes at each node) amounts to a choice of ``planar embedding'' of the Bruhat--Tits tree, as we explain in section \ref{TNPROPS}. See figure \ref{fig:planar} for an example. The construction of the dual graph may appear sensitive to the existence and choice of a planar embedding of the Bruhat--Tits tree. However, we show that physical quantities do not depend on this choice, and in section \ref{GEOMETRIC} we explain this construction entirely in the context of the $p$-adic Drinfeld upper half plane without assuming an embedding in the ordinary (real) upper half plane. \begin{figure}[!th] \centering \begin{subfigure}[]{0.49\textwidth} \centering \begin{tikzpicture}[scale=1.6] \tikzstyle{vertex}=[draw,scale=0.4,fill=black,circle] \tikzstyle{ver2}=[draw,scale=0.6,circle] \tikzstyle{ver3}=[draw,scale=0.4,circle] \coordinate (C) at (0,0); \foreach \x in {0,1,2,3} { \coordinate (a\x) at (\x*360/4:1); \draw (a\x) node[color=red,fill=red,circle,scale=0.4] {}; \draw[color=red] (a\x) -- (C); }; \foreach \x in {0,1,...,11} { \coordinate (b\x) at (\x*360/12 - 30 :2); \pgfmathparse{floor(\x/3)} \draw (b\x) node[color=red,fill=red,circle,scale=0.4] {}; \draw[color=red] (b\x) -- (a\pgfmathresult); \draw[color=red, dotted, thick] (b\x) -- (\x*360/12 - 30 :2.3); }; \foreach \x in {0,1,...,11} { \coordinate (c\x) at (\x*360/12 - 45 :2); }; \foreach \x/\y in {0/1,1/2,2/3,3/4,4/5,5/6,6/7,7/8,8/9,9/10,10/11,11/0} { \draw[thick] (c\x) to [out = \x*360/12 + 115, in = \y*360/12 + 155, looseness = 1.5] (c\y); }; \foreach \x/\y in {0/3,3/6,6/9,9/0} { \draw[thick] (c\x) to [out = \x*360/12 + 135, in = \y*360/12 + 135, looseness = 1.3] (c\y); }; \draw[dotted] (0,0) circle[radius=1]; \draw[dotted] (0,0) circle[radius=2]; \draw (0,0) circle[radius=2.4]; \draw (1.697,1.697) node[right] {$\mathbb{P}^1(\mathbb{Q}_p)$}; \draw[color=red] (C) node[below left]
{$v_0$}; \draw (C) node[color=red,fill=red,circle,scale=0.4] {}; \draw[color=red] (a0) node[below left] {$v_2$}; \draw[color=red] (a1) node[below right] {$v_1$}; \draw[color=red] (a2) node[above right] {$v_4$}; \draw[color=red] (a3) node[above left] {$v_3$}; \draw[color=red] (b0) node[below] {$v_{10}$}; \draw[color=red] (b1) node[below right] {$v_9$}; \draw[color=red] (b2) node[below right] {$v_8$}; \draw[color=red] (b3) node[right] {$v_7$}; \draw[color=red] (b4) node[above right] {$v_6$}; \draw[color=red] (b5) node[left] {$v_5$}; \draw[color=red] (b6) node[below left] {$v_{16}$}; \draw[color=red] (b7) node[below left] {$v_{15}$}; \draw[color=red] (b8) node[below] {$v_{14}$}; \draw[color=red] (b9) node[left] {$v_{13}$}; \draw[color=red] (b10) node[below left] {$v_{12}$}; \draw[color=red] (b11) node[right] {$v_{11}$}; \end{tikzpicture} \caption{A choice of a planar embedding for the dual graph} \label{fig:planar1} \end{subfigure} \begin{subfigure}[]{0.49\textwidth} \centering \begin{tikzpicture}[scale=1.6] \tikzstyle{vertex}=[draw,scale=0.4,fill=black,circle] \tikzstyle{ver2}=[draw,scale=0.6,circle] \tikzstyle{ver3}=[draw,scale=0.4,circle] \coordinate (C) at (0,0); \foreach \x in {0,1,2,3} { \coordinate (a\x) at (\x*360/4:1); \draw (a\x) node[color=red,fill=red,circle,scale=0.4] {}; \draw[color=red] (a\x) -- (C); }; \foreach \x in {0,1,...,11} { \coordinate (b\x) at (\x*360/12 - 30 :2); \pgfmathparse{floor(\x/3)} \draw (b\x) node[color=red,fill=red,circle,scale=0.4] {}; \draw[color=red] (b\x) -- (a\pgfmathresult); \draw[color=red, dotted, thick] (b\x) -- (\x*360/12 - 30 :2.3); }; \foreach \x in {0,1,...,11} { \coordinate (c\x) at (\x*360/12 - 45 :2); }; \foreach \x/\y in {0/1,1/2,2/3,3/4,4/5,5/6,6/7,7/8,8/9,9/10,10/11,11/0} { \draw[thick] (c\x) to [out = \x*360/12 + 115, in = \y*360/12 + 155, looseness = 1.5] (c\y); }; \foreach \x/\y in {0/3,3/6,6/9,9/0} { \draw[thick] (c\x) to [out = \x*360/12 + 135, in = \y*360/12 + 135, looseness = 1.3] (c\y); }; \draw[dotted] 
(0,0) circle[radius=1]; \draw[dotted] (0,0) circle[radius=2]; \draw (0,0) circle[radius=2.4]; \draw (1.697,1.697) node[right] {$\mathbb{P}^1(\mathbb{Q}_p)$}; \draw[color=red] (C) node[below left] {$v_0$}; \draw (C) node[color=red,fill=red,circle,scale=0.4] {}; \draw[color=red] (a0) node[below left] {$v_1$}; \draw[color=red] (a1) node[below right] {$v_3$}; \draw[color=red] (a2) node[above right] {$v_2$}; \draw[color=red] (a3) node[above left] {$v_4$}; \draw[color=red] (b0) node[below] {$v_{6}$}; \draw[color=red] (b1) node[below right] {$v_5$}; \draw[color=red] (b2) node[below right] {$v_7$}; \draw[color=red] (b3) node[right] {$v_{13}$}; \draw[color=red] (b4) node[above right] {$v_{11}$}; \draw[color=red] (b5) node[left] {$v_{12}$}; \draw[color=red] (b6) node[below left] {$v_{8}$}; \draw[color=red] (b7) node[below left] {$v_{10}$}; \draw[color=red] (b8) node[below] {$v_{9}$}; \draw[color=red] (b9) node[left] {$v_{15}$}; \draw[color=red] (b10) node[below left] {$v_{16}$}; \draw[color=red] (b11) node[right] {$v_{14}$}; \end{tikzpicture} \caption{A different planar embedding for the dual graph} \label{fig:planar2} \end{subfigure} \caption{A finite part of the infinite Bruhat--Tits tree is shown in red, with the nodes labelled $v_i$. The graphical representation of the dual graph for the finite red subgraph is shown in black. The Bruhat--Tits tree in (b) is obtained by acting on the Bruhat--Tits tree in (a) with a $\PGL(2,\mathbb{Q}_p)$ transformation fixing the vertex $v_0$. Equivalently, the dual graphs in (a) and (b) correspond to two different choices of incidence relations for bonds on the dual graph, subject to the two requirements mentioned in definition \ref{def:dualgraph}. In the infinite graph limit, the geometry of the Bruhat--Tits tree is represented by the Schl\"{a}fli symbol $\{\infty, p+1\}$, while the dual graph is given by the Schl\"{a}fli symbol $\{p+1,\infty\}$.} \label{fig:planar} \end{figure} The dual graph will describe a tensor network. 
Each node on the dual graph will represent a rank-$(r+1)$ perfect tensor for some chosen $r$, with the bonds on the dual graph specifying how tensor indices are contracted among themselves. In this paper we restrict ourselves to the study of the so-called holographic states rather than holographic codes~\cite{Pastawski:2015qua}. Thus there are no ``bulk logical inputs'' in our setup. Interestingly, the Bruhat--Tits tree and (any choice of) its dual graph may be obtained as the asymptotic limit of the simplest holographic states considered in \cite{Pastawski:2015qua} -- where the geometry is described by a regular hyperbolic tiling using $q$-gons with $p+1$ $q$-gons incident at each vertex (which is represented by the Schl\"{a}fli symbol $\{q, p+1\}$) and the corresponding perfect tensor network with Schl\"{a}fli symbol $\{p+1, q\}$ -- in the limit $q \to \infty$. Viewed as such a limit, all nodes of the dual graph tensor network may be interpreted as having been ``pushed to the boundary'', leaving no ``bulk nodes'' on the tensor network (see figure \ref{fig:SimplePsi} for an example with a finite tree).\footnote{Since we restrict our attention in this paper to only holographic states~\cite{Pastawski:2015qua}, the results do not depend on this curious feature of the $p$-adic tensor network. Thus we do not comment further on the physical interpretation of this observation.} As mentioned in section \ref{REVIEW}, the perfect tensors themselves originate from quantum Reed--Solomon codes, particularly the $[\![r,1,(r+1)/2]\!]_r$-code, where $r$ is prime, although in the following the only things we will explicitly use are the perfectness property of the quantum code and the fact that the tensors have rank $r+1$ and bond dimension $r$. Depending on the chosen rank of the perfect tensor, there will be a varying number of free (uncontracted or ``dangling'') legs at each node on the dual graph.
(We have suppressed such ``dangling'' legs in figure \ref{fig:planar}.) As with other tensor network models, we interpret the ``boundary wavefunction'' as a complicated entangled state in the tensor product Hilbert space of these dangling legs. In practice, we always work with a boundary UV cutoff, so that we only consider finite graphs in the bulk. Thus in constructing a dual graph tensor network which describes a holographic state, in addition to the prime $p$ (which parametrizes the bulk geometry ${\cal T}_p$), we need to specify two other integral parameters: the UV ``cut-off parameter'' $\Lambda$ and the rank of the perfect tensors $(r+1)$. \begin{definition} \label{def:cutoff} The {\it cut-off parameter} $\Lambda$ is defined to be one-half the length of the longest geodesic on the radially truncated (i.e.\ cut-off) Bruhat--Tits tree. \end{definition} See figure \ref{fig:TNexample}. Eventually, we will take the number of tensors and $r$ to be large; the resulting boundary holographic states will have entanglement properties which are geometrized by this bulk network. 
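For concreteness, the leg counting in the example of figure \ref{fig:TNexample} ($p=3$, $\Lambda=2$, $r=7$) can be tallied as follows. This is our own bookkeeping sketch; the corner/side counts below are read off the figure, not derived here:

```python
# Parameters of the example in figure TNexample: p = 3, Lambda = 2, r = 7,
# i.e. rank-8 perfect tensors on the dual graph of the depth-2 tree.
p, lam, r = 3, 2, 7

# edges of the Bruhat-Tits tree truncated at radial depth lam:
# sum over depths of (p+1) * p^(d-1)
tree_edges = (p + 1) * (p**lam - 1) // (p - 1)
assert tree_edges == 16

# bonds of the dual graph are in bijection with tree edges:
# 12 "ring" bonds (outer edges) + 4 "chord" bonds (radial edges)
assert 12 + 4 == tree_edges

# leg counting at each tensor: contracted + dangling = r + 1;
# 4 corner tensors have 4 contracted / 4 dangling legs,
# 8 side tensors have 2 contracted / 6 dangling legs (as labelled in the figure)
for n_tensors, v_c, v_d in [(4, 4, 4), (8, 2, 6)]:
    assert v_c + v_d == r + 1

# each bond joins two tensors, so contracted legs sum to twice the bond count
assert 4 * 4 + 8 * 2 == 2 * tree_edges
```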
\begin{figure} \centering \begin{subfigure}[] {0.48\textwidth} \centering \begin{tikzpicture}[scale=1.6] \tikzstyle{vertex}=[draw,scale=0.4,fill=black,circle] \tikzstyle{ver2}=[draw,scale=0.6,circle] \tikzstyle{ver3}=[draw,scale=0.4,circle] \coordinate (C) at (0,0); \foreach \x in {0,1,2,3} { \coordinate (a\x) at (\x*360/4:1); \draw[color=red] (a\x) -- (C); }; \foreach \x in {0,1,...,11} { \coordinate (b\x) at (\x*360/12 - 30 :2); \pgfmathparse{floor(\x/3)} \draw[color=red] (b\x) -- (a\pgfmathresult); }; \foreach \x in {0,1,...,11} { \coordinate (c\x) at (\x*360/12 - 45 :2); }; \foreach \x/\y in {0/1,1/2,2/3,3/4,4/5,5/6,6/7,7/8,8/9,9/10,10/11,11/0} { \draw[thick] (c\x) to [out = \x*360/12 + 115, in = \y*360/12 + 155, looseness = 1.5] (c\y); }; \foreach \x/\y in {0/3,3/6,6/9,9/0} { \draw[thick] (c\x) to [out = \x*360/12 + 135, in = \y*360/12 + 135, looseness = 1.3] (c\y); }; \draw[dotted] (0,0) circle[radius=1]; \draw[dotted] (0,0) circle[radius=2]; \draw[color=red] (C) node[below left] {$C$}; \draw (C) node[color=red,fill=red,circle,scale=0.4] {}; \end{tikzpicture} \caption{Dual graph to a finite tree} \label{fig:SimpleDual} \end{subfigure} \begin{subfigure}[] {0.48\textwidth} \centering \begin{tikzpicture}[scale=1.6] \tikzstyle{vertex}=[draw,scale=0.4,fill=black,circle] \tikzstyle{ver2}=[draw,scale=0.6,circle] \tikzstyle{ver3}=[draw,scale=0.4,circle] \coordinate (C) at (0,0); \foreach \x in {0,1,2,3} { \coordinate (a\x) at (\x*360/4:1); \draw[color=red] (a\x) -- (C); }; \foreach \x in {0,1,...,11} { \coordinate (b\x) at (\x*360/12 - 30 :2); \pgfmathparse{floor(\x/3)} \draw[color=red] (b\x) -- (a\pgfmathresult); }; \foreach \x in {0,1,...,11} { \coordinate (c\x) at (\x*360/12 - 45 :2); }; \foreach \x/\y in {0/1,1/2,2/3,3/4,4/5,5/6,6/7,7/8,8/9,9/10,10/11,11/0} { \draw[thick] (c\x) to [out = \x*360/12 + 115, in = \y*360/12 + 155, looseness = 1.5] (c\y); }; \foreach \x/\y in {0/3,3/6,6/9,9/0} { \draw[thick] (c\x) to [out = \x*360/12 + 135, in = \y*360/12 + 135, 
looseness = 1.3] (c\y); }; \foreach \x in {0,3,6,9} { \foreach \y in {-3,-1,1,3} { \coordinate (d\x_\y) at (\x*360/12 + \y - 45 : 2.2); \draw[thick] (c\x) -- (d\x_\y); }; \draw (\x*360/12 - 45 : 2.35) node {$4$}; }; \foreach \x in {1,2,4,5,7,8,10,11} { \foreach \y in {-5,-3,-1,1,3,5} { \coordinate (d\x_\y) at (\x*360/12 + \y - 45 : 2.2); \draw[thick] (c\x) -- (d\x_\y); }; \draw (\x*360/12 - 45 : 2.35) node {$6$}; }; \draw[dotted] (0,0) circle[radius=1]; \draw[dotted] (0,0) circle[radius=2]; \draw[color=red] (C) node[below left] {$C$}; \draw (C) node[color=red,fill=red,circle,scale=0.4] {}; \end{tikzpicture} \caption{Holographic state with free dangling legs} \label{fig:SimplePsi} \end{subfigure} \caption{The construction of the holographic tensor network for the cut-off Bruhat--Tits tree. In this example, we have used perfect tensors of rank eight. The construction shown here corresponds to the parameters $p=3$, $\Lambda = 2$, and~$r=7$. Hereafter we will suppress showing the free uncontracted/dangling legs displayed in (b). } \label{fig:TNexample} \end{figure} Cutting off ordinary (real) or $p$-adic AdS at a finite radius provides an IR regulator from the bulk point of view, as the length of boundary-anchored geodesics formally diverges for an infinite tree. In the usual picture, this IR regulator of minimal surfaces is dual to the UV cut-off of the conformal field theory; in this model it is realized by the finiteness of the number of boundary tensors. For each choice of $p$ and $\Lambda$, the endpoints of the cut-off (truncated) Bruhat--Tits tree form the space $\mathbb{P}^1(\mathbb{Z}/p^{\Lambda} \mathbb{Z}),$\footnote{As usual, points in $\mathbb{P}^1(\mathbb{Z}/p^{\Lambda} \mathbb{Z})$ are obtained by considering pairs in the ring $\mathbb{Z}/p^{\Lambda} \mathbb{Z}$ modulo scaling. For $\Lambda > 1$, this is not a field and there are zero divisors without multiplicative inverses.
When forming the projective line, one finds there are multiple ``points at infinity'' beyond the usual inverse of $0$. Perhaps surprisingly, the set of base points and points at infinity is in one-to-one correspondence with the boundary of the tree cut off at finite distance.} and the tensor network is the dual graph to the tree of this space. As we remove the cutoff, the boundary approaches $\mathbb{P}^1(\mathbb{Q}_p)$, and we will show that various quantities such as the length of geodesics and the entanglement entropy have logarithmic UV divergences as expected in a two-dimensional quantum field theory. Further, we impose a constraint on the rank $(r+1)$. We require $(r+1)/2 \geq 2\Lambda$. Thus in the limit $\Lambda \to \infty$, the rank of the perfect tensor also goes to infinity. We will return to a detailed conceptual and technical analysis of such a limit in section \ref{AFALGEBRA}. If, for a chosen vertex $v$ on the dual graph, the number of contracted legs of the tensor is denoted $v_c$ and the number of uncontracted legs is denoted $v_d$, then for cut-off $\Lambda$ all vertices on the dual graph tensor network satisfy $v_c \leq 2\Lambda$. Since $v_c + v_d = r+1$, the requirement above implies $v_c \leq v_d$ at all vertices of the tensor network. Thus this condition ensures that the number of free dangling legs at any vertex on the dual graph is greater than or equal to the number of contractions at the vertex. This requirement may seem arbitrary, but it plays an important role in our setup. Without this constraint, the minimal cut rule obeyed by perfect tensors may lead to cuts being made at the uncontracted dangling legs of the tensor network rather than along the contractions in the bulk of the network; it is cuts along the bulk contractions that are necessary for recovering the appropriate RT surface. The holographic state so constructed is a pure state which is a ground state of the dual toy model CFT.
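The role of the rank constraint can be summarized in one inequality chain: since $v_c \leq 2\Lambda$ and $v_c + v_d = r+1$, the requirement $(r+1)/2 \geq 2\Lambda$ gives $v_d = (r+1) - v_c \geq 4\Lambda - v_c \geq v_c$. A one-line check of this chain (our own sketch, with hypothetical helper names):

```python
def dangling_at_least_contracted(r, lam, v_c):
    """Check v_d >= v_c for a vertex with v_c contracted legs (v_c <= 2*lam)."""
    assert v_c <= 2 * lam
    v_d = (r + 1) - v_c        # total legs minus contracted legs
    return v_d >= v_c

# with (r+1)/2 >= 2*lam (here 4 >= 4) the inequality holds for every allowed v_c
assert all(dangling_at_least_contracted(7, 2, v_c) for v_c in range(1, 5))
# with the rank too small, (r+1)/2 = 3 < 2*lam = 4, it can fail
assert not dangling_at_least_contracted(5, 2, 4)
```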
We comment on the construction of the Hilbert space to which such states belong in section~\ref{AFALGEBRA}. We show that the dual graph tensor network satisfies a Ryu--Takayanagi like formula and is independent of the choice of the planar embedding. We will prove this in general for the bipartite entanglement of ``connected'' and ``disconnected'' regions (which we will define more precisely shortly). In this section, we restrict ourselves to discussing the results; all detailed computations can be found in section \ref{COMPUTATION}. \subsection{Entanglement in genus zero $p$-adic background} \label{GENUSZERORESULTS} The $p$-adic numbers have a totally disconnected topology, so that balls (all of which are in fact clopen, i.e.\ closed and open) are either fully disjoint or contained one inside the other. Clopen balls are defined as ${\cal B}_v(x) \equiv \{y \in \mathbb{Q}_p: |x-y|_p \leq p^v\}$ for any integer $v$. The set of non-zero $p$-adic numbers itself can be written as the disjoint union of the clopen sets $\mathbb{Q}_p \smallsetminus{\{0\}}= \bigcup_{m=-\infty}^\infty p^m \mathbb{U}_p$, where $\mathbb{U}_p = \mathbb{Z}_p \smallsetminus p\mathbb{Z}_p = \{x \in \mathbb{Q}_p : |x|_p =1\}$. Thus although $p$-adic numbers are not ordered, they admit a partial sense of ordering with respect to the $p$-adic norm. This partial ordering is captured by the Bruhat--Tits tree. Using a conformal transformation, set any two points on the projective line $\mathbb{P}^1(\mathbb{Q}_p)$, the boundary of the Bruhat--Tits tree, to $0$ and $\infty$. Then the particular clopen sets of $\mathbb{Q}_p^\times = \mathbb{Q}_p \smallsetminus{\{0\}}$ of the form $p^m \mathbb{U}_p$ arrange themselves as shown in figure \ref{fig:clopenQp}.
In the Poincar\'{e} disk picture, any clopen ball of $\mathbb{P}^1(\mathbb{Q}_p)$ can be obtained by cutting the Bruhat--Tits tree along one of its edges -- the termini of the disconnected branches of the tree represent the (mutually complementary) clopen sets whose union is the whole of $\mathbb{P}^1(\mathbb{Q}_p)$ (see figure \ref{fig:clopenP1Qp}). More general clopen sets are obtained as finite unions of balls. \begin{figure}[t] \begin{subfigure}[]{0.49\textwidth} \[ \begin{tikzpicture}[scale=1.5] \tikzstyle{vertex}=[draw,scale=0.4,fill=black,circle] \foreach \x in {0,...,4} { \coordinate (a\x) at (0,\x); \coordinate (b\x) at (-1,\x); \draw (b\x) node[vertex]{} -- (a\x) node[vertex]{}; \foreach \y in {-1,1} { \coordinate (c\x\y) at (-2,\x+0.3*\y); \draw (c\x\y) -- (b\x); \foreach \z in {-1,1} { \coordinate (d\x\y\z) at (-2.25,\x+0.3*\y + 0.075*\z); \draw[very thick, dotted] (d\x\y\z) -- (c\x\y); }; }; }; \draw (-2.3,2) node[anchor=east] {$\mathbb{U}_p$}; \draw (-2.3,1) node[anchor=east] {$p \mathbb{U}_p$}; \draw (-2.3,3) node[anchor=east] {$p^{-1} \mathbb{U}_p$}; \draw (0,-0.3) node[anchor=north]{$0$ }-- (0,4.3) node[anchor=south]{$\infty$}; \end{tikzpicture} \] \caption{The multiplicative group $\mathbb{Q}_p^\times$.} \label{fig:clopenQp} \end{subfigure} \begin{subfigure}[]{0.49\textwidth} \[ \begin{tikzpicture}[scale=0.9] \tikzstyle{vertex}=[draw,scale=0.4,fill=black,circle] \tikzstyle{ver2}=[draw,scale=0.6,circle] \tikzstyle{ver3}=[draw,scale=0.4,circle] \coordinate (C) at (0,0); \foreach \x in {0,1,2} { \coordinate (a\x) at (\x*360/3 + 30:1); \draw (a\x) -- (C); }; \foreach \x in {0,1,...,5} { \coordinate (b\x) at (\x*360/6:2); \pgfmathparse{floor(\x/2)} \draw (b\x) -- (a\pgfmathresult); }; \foreach \x in {0,1,...,11} { \coordinate (c\x) at (\x*360/12 - 15:3); \pgfmathparse{floor(\x/2)} \draw (c\x) -- (b\pgfmathresult); }; \foreach \x in {0,1,...,23} { \coordinate (d\x) at (\x*360/24 - 22.5:4); \pgfmathparse{floor(\x/2)} \draw (d\x) -- (c\pgfmathresult); };
\draw[dotted] (0,0) circle[radius=1]; \draw[dotted] (0,0) circle[radius=2]; \draw[dotted] (0,0) circle[radius=3]; \draw[dotted] (0,0) circle[radius=4]; \draw (C) node[below left] {$C$}; \draw (C) node[vertex] {}; \draw (c1) node[vertex] {}; \draw[very thick] (d2) -- (c1); \draw[very thick] (d3) -- (c1); \draw (b4) node[vertex] {}; \draw[very thick] (b4) -- (c8); \draw[very thick] (b4) -- (c9); \draw[very thick] (c8) -- (d16); \draw[very thick] (c8) -- (d17); \draw[very thick] (c9) -- (d18); \draw[very thick] (c9) -- (d19); \draw (d2) node[right] {$y_2$}; \draw (d3) node[right] {$y_1$}; \end{tikzpicture} \] \caption{Clopen balls in~$\mathbb{P}^1(\mathbb{Q}_p)$.} \label{fig:clopenP1Qp} \end{subfigure} \caption{Two pictures of the Bruhat--Tits tree for~$\mathbb{P}^1(\mathbb{Q}_2)$, emphasizing either the action of the multiplicative group $\mathbb{Q}_2^\times$ or the Patterson--Sullivan measure.} \label{fig:clopen} \end{figure} Now recall from standard results in holography that in a CFT$_2$, the entanglement entropy of a connected region $A$ with its complement on a one-dimensional spatial slice admits an interpretation as the length of the minimal geodesic(s) in AdS$_3$ homologous to the region $A$, i.e.\ one minimizes over the length of the geodesic(s) such that there exists a bulk region $r$ whose boundary is the union of the minimal geodesic(s) and the boundary region $A$. In this case the boundary of $A$ is simply a pair of points (which together comprise the ``entangling surface''). This presents an obvious obstruction for the $p$-adic formulation: since $\mathbb{Q}_p$ is not ordered, one cannot define regions by specifying end-points in $\mathbb{Q}_p$. How, then, is entanglement to be interpreted in a $p$-adic theory over one (spatial) dimension? There are at least two physically motivated points of view: \begin{itemize} \item One possibility is to study entanglement of clopen sets on the projective line with their complementary clopen sets.
However, due to ultrametricity, every point in a clopen ball is a center of the ball. Thus the notion of the ``boundary'' of a ball is ill-defined (more generally, the ``boundary'' of a clopen set is ill-defined), and there is no analog of an entangling surface. One can however specify the {\it smallest} clopen ball containing a given pair of points. The size of such a clopen ball is given by the Patterson--Sullivan measure~\cite{Sullivan,CornKool}, and is directly related to the (regulated) length of the boundary-anchored bulk geodesic joining the given pair of points. \item A second, related possibility, closer in spirit to the real formalism, is to motivate entanglement directly in terms of the ``entangling surface'' on the spatial $p$-adic boundary, namely, in terms of pairs of points on the boundary $\mathbb{P}^1(\mathbb{Q}_p)$. We note that the (regulated) geodesic distance between any two chosen points on the boundary is invariant under any automorphism of the Bruhat--Tits tree, and thus is independent of the planar embedding, which is an essential feature of the setup. The freedom in the choice of planar embedding reflects the fact that $p$-adic numbers (living on the boundary of the Bruhat--Tits tree) do not form an ordered field and thus admit all possible planar embeddings equally. \end{itemize} We adopt the latter point of view here (although some aspects of the former point of view will also inevitably feature in our discussion of the $p$-adic results due to the inherent ultrametric nature of $p$-adic numbers), as it represents a generalization which is applicable to both the real and the $p$-adic formulations. We comment further on this point in section~\ref{DISCUSSION}.
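Both points of view lean on ultrametricity, which is easy to experiment with directly. The sketch below is our own illustration (the helper \texttt{p\_norm} is hypothetical, not notation from the text); it spot-checks the strong triangle inequality and the fact that every point of a clopen ball ${\cal B}_v(x)$ is equally a center of it:

```python
import random
from fractions import Fraction

def p_norm(x, p):
    """p-adic absolute value of a rational number x."""
    x = Fraction(x)
    if x == 0:
        return 0.0
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return float(p) ** (-v)

p = 3

# strong (ultrametric) triangle inequality: |a+b|_p <= max(|a|_p, |b|_p)
for _ in range(200):
    a = Fraction(random.randint(-50, 50), random.randint(1, 50))
    b = Fraction(random.randint(-50, 50), random.randint(1, 50))
    assert p_norm(a + b, p) <= max(p_norm(a, p), p_norm(b, p)) + 1e-12

# every point of the ball B_v(x) = {y : |x-y|_p <= p^v} is a center of it
def in_ball(y, x, v):
    return p_norm(Fraction(x) - Fraction(y), p) <= float(p) ** v

x, v = Fraction(1), -1   # the ball 1 + 3*Z_3 inside Q_3
samples = [Fraction(n, d) for n in range(-20, 21) for d in (1, 2, 5)]
for y in samples:
    if in_ball(y, x, v):
        # y defines exactly the same ball as x does
        assert all(in_ball(s, y, v) == in_ball(s, x, v) for s in samples)
```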
We emphasize that the geometry, which is given by the Bruhat--Tits tree, has a strong nonarchimedean flavor owing to the direct connection with $p$-adic numbers.\footnote{The nonarchimedean property of $p$-adic numbers is as follows: given two $p$-adic numbers $a,b \in \mathbb{Q}_p$ with $|a|_p < |b|_p$, it follows that $|na|_p < |b|_p$ for all $n \in \mathbb{Z}$. We will also use the term ``ultrametricity'' in the context of the $p$-adic norm. Ultrametricity refers to the stronger form of the triangle inequality obeyed by the $p$-adic norm: given $a,b \in \mathbb{Q}_p$, $|a+b|_p \leq \sup\{|a|_p,|b|_p\}$. The nonarchimedean property follows from ultrametricity.} The dual graph tensor network is however closer in spirit to the usual tensor network framework over the reals. The network still encodes a maximally entangled ground state of the CFT, and the perfect tensors from which it is made provide its quantum error-correction properties. In this setup, we will compute entanglement in the usual way: by tracing out ``boundary regions'' of the tensor network, specified by sets of nodes on the dual graph (more precisely, the collection of uncontracted tensor legs at those nodes), explicitly computing the reduced density matrix, and from it the von Neumann entropy. However, we will argue that in the $p$-adic setting the specification of intervals is not as fundamental as the specification of the entangling surfaces. \subsubsection{Regions in the bulk and boundary of tensor networks} \label{sssec:intervals} In a static slice of the boundary theory, the standard way of specifying a connected region at the terminus of a holographic tensor network is by picking a pair of points on the spatial slice of the two-dimensional CFT.
This naturally defines a pair of complementary intervals at the boundary of the tensor network and provides a factorization of the Hilbert space, ${\cal H} = {\cal H}_1 \otimes {\cal H}_2$, where ${\cal H}_i$ are the Hilbert spaces associated with the individual regions. Starting with a pure state in ${\cal H}$ given by the density matrix $\rho$, the bipartite von Neumann entropy of region $1$ is computed by tracing out the states associated with ${\cal H}_2$, producing the reduced density matrix $\rho_1 = \Tr_{{\cal H}_2} \rho$. The von Neumann entropy, in this case called the entanglement entropy, is then given by $S_1 = -\Tr (\rho_1 \log \rho_1)$. However, the situation is different in the $p$-adic setting in an important way, namely the specification of intervals on the boundary. As mentioned above, the notion of an interval with end-points is ill-defined when the CFT lives on a (spatial) $p$-adic slice $\mathbb{Q}_p$ (or the projective line over $\mathbb{Q}_p$); however, one can {\it still} define a corresponding pair of complementary connected regions at the boundary of the {\it tensor network}, separated by two boundary points, as we now explain. In our setup, the ``connected region'' of interest on the tensor network will be specified by a set of nodes on the tensor network which lie ``in between'' the given boundary points $x$ and $y$, which themselves lie at the terminus of the cutoff Bruhat--Tits tree. We will explain the terminology ``in between'' shortly, but essentially it corresponds to selecting a set of vertices at the boundary of the tensor network in between the chosen end points, in a given planar embedding. The exact region to be traced out will depend on the choice of planar embedding (i.e.\ the choice of the dual graph). See figure \ref{fig:interval} for an example. The ambiguity in picking a region or its complement is fixed by assigning an orientation, such as an anti-clockwise orientation.
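As a toy illustration of this tracing-out procedure (our own minimal example, not the network computation itself), one can take the smallest perfect tensor encountered earlier, $T_{ab} = e^{-2\pi i a b/p}$, view it as a normalized two-leg state, and compute the von Neumann entropy of one leg; the answer is the maximal value $\log p$:

```python
import numpy as np

p = 5
# normalized two-leg state |psi> = (1/p) * sum_{a,b} exp(-2*pi*i*a*b/p) |a,b>
a, b = np.meshgrid(np.arange(p), np.arange(p), indexing="ij")
psi = np.exp(-2j * np.pi * a * b / p) / p
assert np.isclose(np.linalg.norm(psi), 1.0)

# reduced density matrix of the first leg: rho_1 = Tr_2 |psi><psi|
rho1 = psi @ psi.conj().T
evals = np.linalg.eigvalsh(rho1)
S = float(-np.sum(evals * np.log(np.clip(evals, 1e-30, None))))

# the cut across the two legs is maximally entangled: S = log p
assert np.isclose(S, np.log(p))
```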
We stress that we are not assuming any ordering of the $p$-adic numbers. Once a planar embedding is chosen, the region ``in between'' $x$ and $y$ is $\PGL(2,\mathbb{Q}_p)$ ``covariant'', which follows from the transformation properties of the dual graph explained earlier.\footnote{By covariance, we mean that the region is always given by the set of nodes on the tensor network ``in between'' the given boundary points. The boundary points will in general transform under $\PGL(2,\mathbb{Q}_p)$ to a new set of points, and accordingly, the region will transform to one between the transformed pair of points.} The final result for the von Neumann entropy for the connected region is independent of this choice of the planar embedding. \begin{figure}[!th] \centering \begin{subfigure}[]{0.49\textwidth} \centering \begin{tikzpicture}[scale=1.6] \tikzstyle{vertex}=[draw,scale=0.4,fill=black,circle] \tikzstyle{ver2}=[draw,scale=0.6,circle] \tikzstyle{ver3}=[draw,scale=0.4,circle] \coordinate (C) at (0,0); \foreach \x in {0,1,2,3} { \coordinate (a\x) at (\x*360/4:1); \draw (a\x) node[color=red,fill=red,circle,scale=0.4] {}; \draw[color=red] (a\x) -- (C); }; \foreach \x in {0,1,...,11} { \coordinate (b\x) at (\x*360/12 - 30 :2); \pgfmathparse{floor(\x/3)} \draw (b\x) node[color=red,fill=red,circle,scale=0.4] {}; \draw[color=red] (b\x) -- (a\pgfmathresult); }; \foreach \x in {0,1,...,11} { \coordinate (c\x) at (\x*360/12 - 45 :2); }; \foreach \x/\y in {0/1,1/2,2/3,3/4,4/5,5/6,6/7,7/8,8/9,9/10,10/11,11/0} { \draw[thick] (c\x) to [out = \x*360/12 + 115, in = \y*360/12 + 155, looseness = 1.5] (c\y); }; \foreach \x/\y in {0/3,3/6,6/9,9/0} { \draw[thick] (c\x) to [out = \x*360/12 + 135, in = \y*360/12 + 135, looseness = 1.3] (c\y); }; \draw[dotted] (0,0) circle[radius=1]; \draw[dotted] (0,0) circle[radius=2]; \draw (0,0) circle[radius=2.4]; \draw[color=red] (C) node[below left] {$v_0$}; \draw (C) node[color=red,fill=red,circle,scale=0.4] {}; \draw[color=red] (a0) node[below left] 
{$v_2$}; \draw[color=red] (a1) node[below right] {$v_1$}; \draw[color=red] (a2) node[above right] {$v_4$}; \draw[color=red] (a3) node[above left] {$v_3$}; \draw[color=red] (b0) node[below] {$v_{10}$}; \draw[color=red] (b1) node[below right] {$v_9$}; \draw[color=red] (b2) node[below right] {$v_8$}; \draw[color=red] (b3) node[right] {$v_7$}; \draw[color=red] (b4) node[above right] {$v_6$}; \draw[color=red] (b5) node[left] {$v_5$}; \draw[color=red] (b6) node[below left] {$v_{16}$}; \draw[color=red] (b7) node[below left] {$v_{15}$}; \draw[color=red] (b8) node[below] {$v_{14}$}; \draw[color=red] (b9) node[left] {$v_{13}$}; \draw[color=red] (b10) node[below left] {$v_{12}$}; \draw[color=red] (b11) node[right] {$v_{11}$}; \draw (c3) node[color=blue,fill=blue,circle,scale=0.7] {}; \end{tikzpicture} \caption{A choice of a planar embedding for the dual graph} \label{fig:interval1} \end{subfigure} \begin{subfigure}[]{0.49\textwidth} \centering \begin{tikzpicture}[scale=1.6] \tikzstyle{vertex}=[draw,scale=0.4,fill=black,circle] \tikzstyle{ver2}=[draw,scale=0.6,circle] \tikzstyle{ver3}=[draw,scale=0.4,circle] \coordinate (C) at (0,0); \foreach \x in {0,1,2,3} { \coordinate (a\x) at (\x*360/4:1); \draw (a\x) node[color=red,fill=red,circle,scale=0.4] {}; \draw[color=red] (a\x) -- (C); }; \foreach \x in {0,1,...,11} { \coordinate (b\x) at (\x*360/12 - 30 :2); \pgfmathparse{floor(\x/3)} \draw (b\x) node[color=red,fill=red,circle,scale=0.4] {}; \draw[color=red] (b\x) -- (a\pgfmathresult); }; \foreach \x in {0,1,...,11} { \coordinate (c\x) at (\x*360/12 - 45 :2); }; \foreach \x/\y in {0/1,1/2,2/3,3/4,4/5,5/6,6/7,7/8,8/9,9/10,10/11,11/0} { \draw[thick] (c\x) to [out = \x*360/12 + 115, in = \y*360/12 + 155, looseness = 1.5] (c\y); }; \foreach \x/\y in {0/3,3/6,6/9,9/0} { \draw[thick] (c\x) to [out = \x*360/12 + 135, in = \y*360/12 + 135, looseness = 1.3] (c\y); }; \draw[dotted] (0,0) circle[radius=1]; \draw[dotted] (0,0) circle[radius=2]; \draw (0,0) circle[radius=2.4]; 
\draw[color=red] (C) node[below left] {$v_0$}; \draw (C) node[color=red,fill=red,circle,scale=0.4] {}; \draw[color=red] (a0) node[below left] {$v_1$}; \draw[color=red] (a1) node[below right] {$v_3$}; \draw[color=red] (a2) node[above right] {$v_2$}; \draw[color=red] (a3) node[above left] {$v_4$}; \draw[color=red] (b0) node[below] {$v_{6}$}; \draw[color=red] (b1) node[below right] {$v_5$}; \draw[color=red] (b2) node[below right] {$v_7$}; \draw[color=red] (b3) node[right] {$v_{13}$}; \draw[color=red] (b4) node[above right] {$v_{11}$}; \draw[color=red] (b5) node[left] {$v_{12}$}; \draw[color=red] (b6) node[below left] {$v_{8}$}; \draw[color=red] (b7) node[below left] {$v_{10}$}; \draw[color=red] (b8) node[below] {$v_{9}$}; \draw[color=red] (b9) node[left] {$v_{15}$}; \draw[color=red] (b10) node[below left] {$v_{16}$}; \draw[color=red] (b11) node[right] {$v_{14}$}; \draw (c2) node[color=blue,fill=blue,circle,scale=0.7] {}; \draw (c1) node[color=blue,fill=blue,circle,scale=0.7] {}; \draw (c7) node[color=blue,fill=blue,circle,scale=0.7] {}; \draw (c8) node[color=blue,fill=blue,circle,scale=0.7] {}; \draw (c9) node[color=blue,fill=blue,circle,scale=0.7] {}; \draw (c10) node[color=blue,fill=blue,circle,scale=0.7] {}; \draw (c11) node[color=blue,fill=blue,circle,scale=0.7] {}; \draw (c0) node[color=blue,fill=blue,circle,scale=0.7] {}; \end{tikzpicture} \caption{A different planar embedding for the dual graph} \label{fig:interval2} \end{subfigure} \caption{The region at the boundary of the dual graph ``in between'' boundary points $v_8$ and $v_7$ for two choices of planar embedding for the dual graph, depicted in blue.} \label{fig:interval} \end{figure} Let us make this more precise. \begin{definition} \label{def:shortestbond} The {\it shortest bonds} on the tensor network comprise the subset of bonds (contractions) on the tensor network which are in bijective correspondence with the set of edges on the Bruhat--Tits tree situated at the cutoff boundary. 
\end{definition} \begin{definition} \label{def:connectedregion} A {\it connected region} on the tensor network is defined to be a set of vertices on the tensor network which are ``path-connected'' to each other (by which we mean one can jump, solely via the shortest bonds on the tensor network, from any vertex in the set to any other without landing on a vertex which is not in the set). We define a {\it disconnected region} to be a region which is not connected. We will also use the term (connected or disconnected) region interchangeably for the uncontracted tensor legs situated at the vertices in the specified region. \end{definition} As noted earlier, we specify the connected region (on the dual tensor network) by specifying two boundary points on the Bruhat--Tits tree and considering all vertices on the dual graph which lie ``in between'' the boundary points (after making a choice of orientation), which we now define. \begin{figure}[thb] \begin{subfigure}[]{0.49\textwidth} \centering \[\musepic{\oneintpic} \] \caption{}\label{fig:connectedA} \end{subfigure} \begin{subfigure}[]{0.49\textwidth} \centering \[ \musepic{\multiintpic}\] \caption{}\label{fig:connectedB} \end{subfigure} \caption{(a) The set of vertices on the dual graph marked in blue form a ``connected region''. (b) The set of blue vertices form a ``disconnected region'' which can be specified by a set of four boundary points.
Note that for the given set of boundary points and a chosen planar embedding, another choice of disconnected regions, shown in black circles, is possible.} \label{fig:connected} \end{figure} \begin{definition} \label{def:inbetween} Given two points $x$ and $y$ in $\partial T_{p}$ and a choice of a planar embedding, we define the connected region on the tensor network {\it in between} $x$ and $y$ (up to a choice of orientation) as the set of nodes on the tensor network path-connected to each other via the ``shortest bonds'' starting at the ``start bond'' and ending at the ``end bond'', without backtracking. The ``start'' and ``end'' bonds correspond to bonds on the tensor network dual to the cutoff edges on the Bruhat--Tits tree ending at $x$ and $y$. \end{definition} For instance, in figure \ref{fig:connectedA}, the chosen connected region lies in between the boundary points $x$ and $y$. By contrast, in figure \ref{fig:connectedB} we show an example of a ``disconnected region'', associated to a given set of {\it four} boundary points. The von Neumann entropy of the chosen region in such a case will depend on the von Neumann entropy of its disjoint connected parts and the mutual information shared between them. We will discuss the case of disconnected regions in more detail in sections \ref{GENUS0DOUBLE}-\ref{ssec:Inequalities}. Looking ahead, we make two more definitions. \begin{definition} \label{def:bulkregion} Given a planar embedding (i.e.\ a choice of a dual graph tensor network) and a geodesic $\gamma$ that separates the tensor network into two connected components, called {\it bulk regions} $P$ and $Q$, the {\it boundary} of each bulk region is a collection of uncontracted tensor legs, which includes the tensor legs which were originally contracted across $\gamma$. 
\end{definition} \begin{definition} \label{def:homologous} Given a boundary interval $A$ specified by a set of nodes on the tensor network (where more precisely, by $A$ we mean the collection of uncontracted tensor legs at the specified set of nodes), we say a geodesic $\gamma$ is {\it homologous} to $A$ if there exists a bulk region on the tensor network, part of whose boundary is given by (the full set of uncontracted tensor legs at) $A$ while the remaining uncontracted legs forming the boundary of the bulk region are in bijective correspondence with the edges of the geodesic $\gamma$. \end{definition} This definition applies to both connected and disconnected regions $A$, and has an obvious extension to the setting with multiple geodesics. This notion of homologous geodesics is a natural adaptation of the notion in the continuum case to tensor networks, and makes a natural appearance in our tensor network setup. \subsubsection{Results} \label{sssec:results} We now summarize the entanglement entropy results for the tensor network described above; the detailed calculations can be found in sections \ref{SIMPLESTATES}-\ref{ssec:Inequalities}. In the case of a connected region $A$, parametrized by the points $x, y \in \mathbb{Q}_p$ as described previously, we prove in section~\ref{GENUS0SINGLE} that the vacuum von Neumann entropy is given by \eqn{RTgenus0}{ S(x,y) = (2 \log r)\log_p \left| {x-y \over \epsilon} \right|_p, } where $\epsilon = p^{\Lambda} \in \mathbb{Q}_p$ is the UV cutoff, and the tensor network is built out of perfect tensors of rank $r+1$. The overall factor of $\log r$ can be absorbed by taking a logarithm in base $r$ when computing von Neumann entropy via $S = -\Tr \rho \log \rho$. We use the notation $S(x,y)$ to emphasize that the entropy is independent of the choice of planar embedding and only a function of $p$-adic coordinates $x$ and $y$.
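The entropy computation above can be illustrated numerically. The following sketch builds a rank-4 perfect tensor on qutrits (the standard AME(4,3) state, matching $r = p = 3$; this particular tensor is an illustrative choice, not necessarily the one used here) and checks that every bipartition into pairs of legs is maximally entangled:

```python
import numpy as np

# Rank-4 perfect tensor on qutrits: the AME(4,3) state, matching r = p = 3,
# where the Bruhat--Tits tree has degree p + 1 = 4.  T[a,b,c,d] is nonzero
# iff c = a + b and d = a + 2b (mod 3).
r = 3
T = np.zeros((r, r, r, r))
for a in range(r):
    for b in range(r):
        T[a, b, (a + b) % r, (a + 2 * b) % r] = 1.0
psi = T.reshape(-1)
psi /= np.linalg.norm(psi)  # normalized 4-qutrit pure state

def entropy(state, legs):
    """Von Neumann entropy of the given legs, in units of log r."""
    rest = [i for i in range(4) if i not in legs]
    m = state.reshape([r] * 4).transpose(legs + rest).reshape(r ** len(legs), -1)
    lam = np.linalg.svd(m, compute_uv=False) ** 2
    lam = lam[lam > 1e-12]
    return float(-np.sum(lam * np.log(lam)) / np.log(r))

# Perfection: every 2|2 bipartition is maximally entangled, so each cut bond
# contributes exactly log r to the entropy.
for pair in ([0, 1], [0, 2], [0, 3]):
    print(pair, round(entropy(psi, pair), 6))   # each pair gives 2.0
print([0], round(entropy(psi, [0]), 6))         # a single leg gives 1.0
```

The singular-value decomposition here plays the role of diagonalizing the reduced density matrix: the squared singular values are its eigenvalues.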
The $p$-adic norm $|\cdot|_p$ appears naturally in this tensor network setup, and the expression \eno{RTgenus0} makes sense in the limit $|\epsilon|_p \to 0$ when the finite cutoff tree approaches the infinite Bruhat--Tits tree.\footnote{For a finite tree bulk geometry, the expression \eno{RTgenus0} continues to make sense if one views the boundary points $x,y$, which are now elements of the ring $\mathbb{Z}/p^\Lambda \mathbb{Z}$, as $p$-adic numbers with a truncated power series expansion.} The quantity $|x-y|_p$ has a natural interpretation as the measure of the smallest clopen set in $\mathbb{Q}_p$ containing both $x$ and $y$ and is the analog of the ``length of an interval'' over the reals. The result \eno{RTgenus0} is obtained by explicitly computing the reduced density matrix and diagonalizing it.\footnote{Some issues associated with infinite dimensional density matrices in the limit when the cutoff $\Lambda \to \infty$ are discussed in section \ref{AFALGEBRA}.} The entanglement entropy obtained is in direct analogy with the corresponding classic result for the entanglement of a connected interval (of size $|x-y|$ over the real line) with its complement, on a static spatial slice of a massless CFT$_2$ with UV cutoff $\epsilon$~\cite{Holzhey,Calabrese:2004eu}. The tensor network also affords a bulk interpretation for \eno{RTgenus0}. We show in section \ref{GENUS0SINGLE} that the von Neumann entropy $S(x,y)$ is precisely equal to the length of the minimal geodesic in the bulk homologous to the connected region $A$, consistent with the Ryu--Takayanagi formula. On a static slice of CFT$_2$, the minimal geodesic joins the end-points of $A$, given by the ``entangling surfaces'' $x, y \in \mathbb{Q}_p$.
Indeed, we show that \eqn{RTgenus0bulk}{ S(x,y) = {{\rm length}(\gamma_{xy}) \over \ell} \log r\,, } where $\gamma_{xy}$ is the minimal geodesic on the cutoff Bruhat--Tits tree joining boundary points $x$ and $y$, and $\ell$, the length of each edge on the tree, is proportional to the AdS radius. As remarked earlier, this result is consistent with the minimum-cut rule obeyed by networks of perfect tensors in the case of bipartite entropy of connected regions~\cite{Pastawski:2015qua}, although we do not assume it in our setup. Essentially, the dual graph tensor network proposed in this paper has the property that the minimal number of tensor contractions on the tensor network which must be ``cut'' across to completely separate the connected region from the rest of the network, precisely equals the length of the minimal geodesic on the Bruhat--Tits tree. Indeed, we show in section \ref{COMPUTATION} that these cuts trace out precisely the path of the minimal geodesic joining the entangling surfaces.\footnote{In fact the geodesic is homologous to the specified region. This condition is especially important in black hole geometries where there can exist shorter geodesics not homologous to the given region. The tensor network always picks the one which is homologous. This will be discussed in more detail in the next section.} Moreover, we have the bulk formula \eqn{lengthxy}{ {\rm length} (\gamma_{xy})/\ell = \delta(C\to x,C\to x)+\delta(C\to y,C\to y) - 2\delta(C\to x,C\to y)\,, } where $C$ is an arbitrary node on the Bruhat--Tits tree (or its boundary), and $\delta(\cdot,\cdot)$ is the signed-overlap between the two directed paths in its argument~\cite{Zab}.\footnote{For example, the signed-overlap of a path with itself is simply the length of the path, while the signed-overlap with the same path but with opposite orientation is {\it negative} the total length of the path.
The signed overlap vanishes for paths which do not share any edges.} Equations \eno{RTgenus0bulk}-\eno{lengthxy} are applicable for connected regions in the genus 1 geometry as well, which is discussed in the next section. We also show in section \ref{COMPUTATION} that the result \eno{RTgenus0bulk}-\eno{lengthxy} continues to apply to a (massless) CFT defined over a circle, more precisely the projective line $\mathbb{P}^1(\mathbb{Q}_p)$. Let us first recall the result in the real case. Over the reals, the entropy formula \eqn{Sline}{ S(x,y) \propto \log {L \over \epsilon}\,, } where $L=|x-y|$ is the size of the connected interval $A=[x,y]$ and $\epsilon$ the UV cutoff in the CFT, gets replaced by~\cite{Calabrese:2004eu} \eqn{Scircle}{ S(x,y) \propto \log \left( {2 R \over \epsilon} \sin{ L \over 2 R}\right), } where $R$, the IR cutoff parametrizing the total size of the spatial boundary, is the radius of the Poincar\'{e} disk, and $L=R |\arg x-\arg y|$ is the arc length of the interval with $x,y \in \mathbb{P}^1(\mathbb{R})=S^1$. In the limit $L \ll R$, \eno{Scircle} reduces to \eno{Sline}. When the spatial slice at the boundary is the projective line over $\mathbb{Q}_p$, the results in the $p$-adic tensor network setup continue to be analogous to the real case. The measure of the smallest clopen set in $\mathbb{Q}_p$ containing $x, y \in \mathbb{Q}_p$, given by $|x-y|_p$ in \eno{RTgenus0} gets replaced by the Patterson-Sullivan measure of the smallest clopen set in $\mathbb{P}^1(\mathbb{Q}_p)$ containing $x,y \in \mathbb{P}^1(\mathbb{Q}_p)$~\cite{Sullivan}, so we have \eqn{RTgenus0circle}{ S(x,y) = (2 \log r)\log_p {|{\mathfrak B}(x,y)|_{\rm PS} \over |\epsilon|_p} \,.
} Explicitly, choosing $C$ to be the radial center of the cutoff Bruhat--Tits tree in the Poincar\'{e} disk picture (recall figure \ref{fig:clopenP1Qp}), the Patterson-Sullivan measure is given by $|{\mathfrak B}(x,y)|_{\rm PS} \equiv p^{-d(C,{\rm anc}(x,y))}$, where $d(\cdot,\cdot)$ is the graph distance between the nodes in its argument, and ${\rm anc}(x,y)$ is the unique vertex on the Bruhat--Tits tree at which the geodesics from $x,y$ and $C$ simultaneously intersect. When $d(C,{\rm anc}(x,y)) \ll d(C,x) = d(C,y)$, the Patterson-Sullivan measure is approximated by $|x-y|_p$, thus recovering the formula \eno{RTgenus0} from \eno{RTgenus0circle}.\footnote{Here we are being loose about the distinction between $x \in \mathbb{P}^1(\mathbb{Q}_p)$ and $x \in \mathbb{Q}_p$. The precise statement is that when the radial center $C$ is sent to a boundary point, say $\infty \in \mathbb{P}^1(\mathbb{Q}_p)$, the remaining boundary of the Bruhat--Tits tree is described precisely by the $p$-adic numbers $\mathbb{Q}_p$ and in this case, the Patterson-Sullivan measure on $\mathbb{P}^1(\mathbb{Q}_p)$, $|{\mathfrak B}(x,y)|_{\rm PS} = p^{-d(C,{\rm anc}(x,y))}$ reduces exactly to the Haar measure on $\mathbb{Q}_p$, given by the $p$-adic norm $|x-y|_p$.} Moreover, the Patterson-Sullivan measure rises, attains a maximum, and then falls as the boundary points $x$ and $y$ are moved away from each other, similar to the sine function in \eno{Scircle} which rises, reaches a maximum, and then falls as $L$ is increased.
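As an illustrative numerical sketch (with boundary points represented by ordinary integers truncated to $\Lambda$ $p$-adic digits, an assumption made here for simplicity), one can check that \eno{RTgenus0} and \eno{RTgenus0bulk} agree, using the fact that the radial geodesics from $x$ and $y$ merge at depth $v_p(x-y)$:

```python
p = 3       # prime: the tree has degree p + 1
Lam = 5     # UV cutoff: epsilon = p**Lam, so |epsilon|_p = p**(-Lam)

def vp(n):
    """p-adic valuation of a nonzero integer n, so |n|_p = p**(-vp(n))."""
    n, v = abs(n), 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def entropy_over_log_r(x, y):
    # S(x,y)/log r = 2 log_p |(x - y)/epsilon|_p = 2 (Lam - vp(x - y))
    return 2 * (Lam - vp(x - y))

def geodesic_length(x, y):
    # x and y share their first vp(x - y) digits, so their radial geodesics
    # merge at depth vp(x - y); each then descends Lam - vp(x - y) edges.
    return 2 * (Lam - vp(x - y))

x, y = 7, 25    # x - y = -18 = -2 * 3**2, so vp(x - y) = 2
print(entropy_over_log_r(x, y), geodesic_length(x, y))   # 6 6
```

The two functions are term-by-term identical here, which is the content of the Ryu--Takayanagi agreement for a single connected region.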
This can be seen from the explicit form of the Patterson-Sullivan measure quoted above, by fixing one of the boundary points while moving the other ``away'' from it.\footnote{The Patterson-Sullivan measure rises, attains a maximum, and then eventually falls in discrete steps in contrast to the smoothly varying sine function in \eno{Scircle}, as one fixes one of the nodes but moves the other ``away'' in the sense of increasing the path length along the ``shortest bonds'' between the two nodes for a chosen orientation.} We remark that as is clear from \eno{RTgenus0} and \eno{RTgenus0circle}, the measure of the clopen set is more fundamental than the number of boundary vertices falling within a connected region in a chosen planar embedding. Fixing a planar embedding, a given pair of end-points $x,y$ may simply contain ``in between'' themselves a single vertex on the tensor network but may still correspond to a clopen set with a larger measure than that of another pair of points which carry ``in between'' themselves a larger number of vertices. This is essentially due to the inherent ultrametric nature of the setup. In section \ref{GENUS0DOUBLE} we extend the results for a single connected interval to the case of a disconnected interval. Instead of two boundary points specifying a single connected interval, we now have four boundary points parametrizing the disconnected case, with the full interval written as a union of its two connected components. We show that the entropy is independent of the planar embedding and obeys an RT-like formula exactly (see, for instance, the discussion around \eno{Sdisconnect}). We also verify the non-negativity of mutual information as well as the Araki--Lieb inequality, and in fact in \eno{MIbdyGen} write down an explicit expression for mutual information in terms of the conformal cross-ratio constructed from the boundary points.
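The ultrametric nature of the setup invoked above amounts to the strong triangle inequality for $|\cdot|_p$; a quick numerical check (illustrative only):

```python
import random
random.seed(0)

p = 5

def norm_p(n):
    """p-adic norm of an integer (0 for n = 0)."""
    if n == 0:
        return 0.0
    n, v = abs(n), 0
    while n % p == 0:
        n //= p
        v += 1
    return p ** (-v)

# Strong (ultrametric) triangle inequality:
#   |x - z|_p <= max(|x - y|_p, |y - z|_p),
# with equality whenever the two norms on the right differ.
for _ in range(1000):
    x, y, z = (random.randrange(-10**6, 10**6) for _ in range(3))
    a, b, c = norm_p(x - z), norm_p(x - y), norm_p(y - z)
    assert a <= max(b, c)
    if b != c:
        assert a == max(b, c)
print(norm_p(50))   # |50|_5 = 1/25
```

The "equality unless the two sides tie" behavior is the reason any point of a $p$-adic ball is its center, which underlies the nesting of clopen sets used throughout.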
We then provide a dual bulk interpretation of mutual information in terms of the overlap of the minimal geodesics of the individual components. Finally, in section \ref{ssec:Inequalities} we give a simple holographic proof of strong subadditivity in the $p$-adic setting, demonstrating its relation to ultrametricity, and a proof for monogamy of mutual information. In fact we find that mutual information is extensive, that is, the tripartite entropy is identically zero. We refer to these sections for more details. \section{$p$-adic BTZ Black Hole} \label{GENUSONE} In this section, we continue the study of the proposed tensor network in bulk geometries which can be considered the $p$-adic analog of black hole or thermal states. We will first summarize the construction of these $p$-adic geometries $(\mathbb{Q}_p)$ in analogy with the complex case of Euclidean AdS$_3$ $(\mathbb{C})$ or a time slice of the Lorentzian counterpart. This \emph{uniformization} procedure is an algebraic way to obtain black hole geometries from empty AdS, and there is a natural way to apply this construction to the perfect tensor network of the previous section. Instead of a pure state on the boundary in this toy model, one finds degrees of freedom behind the `horizon' which must be traced out. The result is a thermal density matrix, and following~\cite{Maldacena:1998bw} we interpret this as the thermal state at the conformal boundary with entropy analogous to the entropy first observed by Bekenstein and Hawking~\cite{Bekenstein:1973ur, Hawking:1974sw}. In this discrete $p$-adic model, using computational tools described in the next section we find precise agreement between the perimeter of the black hole horizon and the thermal entropy of the boundary density matrix. We postpone the details of this calculation until section~\ref{COMPUTATION}, and here we will focus on the setup and results. 
One can further study entanglement entropy in these genus $1$ backgrounds by tracing out regions of qubits at the boundary. The resulting entanglement entropy has a dual interpretation in the bulk as the lengths of minimal geodesics homologous to the boundary intervals in the black hole background, the analog of the Ryu--Takayanagi formula in this geometry~\cite{Ryu:2006bv, Ryu:2006ef}. In our model, the boundary anchored geodesics wrap non-trivially around the horizon to minimize the total length, and one might have expected this minimization property of tensor networks from the minimal cut rule of~\cite{Pastawski:2015qua}; however we emphasize that the genus 1 tensor network is fundamentally different from the one considered in~\cite{Pastawski:2015qua} and is obtained instead as a quotient of the genus 0 construction. We have verified this agreement between the boundary entropy and the bulk geodesic length by direct computation, and conjecture that this gives a holographic derivation of entanglement entropy in $p$-adic AdS/CFT in thermal backgrounds. \subsection{Genus $1$ curves and Schottky uniformization}\label{ssec:Schottky} In analogy with the complex case of $\text{AdS}_3/\text{CFT}_2$ where the Ba\~nados, Teitelboim, and Zanelli~\cite{Banados:1992wn} (BTZ) black hole boundary in Euclidean signature is a $T^2$, the boundary picture of these $p$-adic BTZ black holes can be understood as genus $g=1$ curves over nonarchimedean fields. These curves were originally described in the classical work of Tate for $g=1$ and Mumford for $g>1$~\cite{Mum}, and while we focus on the genus $1$ case we will often refer to the boundary curve as a Tate-Mumford curve. The applications of the boundary curve/bulk graph to $p$-adic AdS/CFT are described in~\cite{Heydeman:2016ldy,Manin:2002hn}. The bulk spaces are then given by quotients of the $p$-adic Bruhat--Tits tree (the analog of empty AdS) by the action by isometries of a discrete group. 
For a general introduction to Mumford curves and their associated bulk spaces see \cite{Man,GevdP}. In the complex case of a torus boundary, a familiar realization is a uniformization of the elliptic curve $E(\mathbb{C})$ by the complex plane $\mathbb{C}$. If the modular parameter is $\tau$ with $\text{Im}(\tau) >0$, one may construct a $\mathbb{C}$ lattice $\Lambda = \mathbb{Z} \oplus \tau \mathbb{Z}$ and describe the curve as the quotient \begin{equation} T^2 \simeq E(\mathbb{C}) \simeq \mathbb{C}/\Lambda \,. \end{equation} This is the familiar procedure of identifying opposite sides of a parallelogram. However, a direct $p$-adic analog using a lattice turns out not to be possible. An alternative approach due to Tate is essentially to consider the exponentiated map. Defining the standard Fourier parameter $q = e^{2 \pi i \tau}$ which satisfies $|q| < 1$ since $\text{Im}(\tau) >0$, we may instead consider a quotient of the multiplicative group $\mathbb{C}^{\times}$: \begin{equation} \label{complexuniformization} T^2 \simeq E(\mathbb{C}) \simeq \mathbb{C}^{\times}/q^{\mathbb{Z}} \, , \end{equation} where $q^{\mathbb{Z}}$ for the integers $\mathbb{Z}$ forms a discrete abelian group. This construction of the elliptic curve is an example of complex \emph{Schottky uniformization} of genus $1$ curves, which can be generalized to $g>1$ curves and has a natural $p$-adic analog. Schottky uniformization is the uniformization of an elliptic curve by quotienting the projective line by a chosen discrete subgroup of its M\"{o}bius transformations. More precisely, we must first remove a certain limit set of the projective line where the Schottky group acts poorly; in the present $g=1$ case these can be chosen to be the two points $\{0, \infty \}$, which explains the $\mathbb{C}^{\times}$ used above. At higher genus the limit set is much more complicated, see~\cite{Heydeman:2016ldy} for details. At genus $1$ we can be even more explicit.
Recall in the complex case that M\"{o}bius transformations form the group $G = \PSL(2,\mathbb{C})$ which acts on $\mathrm{P}^1(\mathbb{C})$ with complex coordinate $z$ by fractional linear transformations, \begin{equation} g \in G =\begin{pmatrix} a & b \\ c & d \end{pmatrix}: \, \, \, z \mapsto \frac{az+b}{cz+d} \, . \end{equation} Removing the aforementioned limit points, we may now pick a discrete subgroup $\Gamma \subset \PSL(2,\mathbb{C})$ with which to perform the quotient. For genus $g=1$ with the abelian group $\Gamma=q^\mathbb{Z}$, a generator $\gamma$ acts on the domain by \begin{equation} \label{complexschottky} \gamma \in \Gamma = \begin{pmatrix} q^{1/2} & 0 \\ 0 & q^{-1/2} \end{pmatrix}: \, \, \, z \mapsto qz \, , \end{equation} and we may obtain a torus by dividing by this action; explicitly, points in the plane are identified under this scaling. In Euclidean signature, this action uplifts to the 3-dimensional hyperbolic upper half space which has the complex projective line as its boundary. Here we view $\PSL(2,\mathbb{C})$ as the group of isometries of Euclidean AdS$_3$ with the scaling extending into the bulk direction. In the bulk, this Schottky generator $q$ acts by scaling of geodesic surfaces, and in particular it acts on the unique geodesic connecting $\{0,\infty \}$ as translations by $\log q$. Taking a quotient of the bulk by this action gives the Euclidean BTZ black hole, presented as a solid torus with the desired elliptic curve $E(\mathbb{C})$ at the conformal boundary. This is illustrated in figure \ref{EuclideanBTZFig}. (In fact, one may obtain a family of black hole and thermal AdS solutions by acting with modular transformations~\cite{Maldacena:1998bw, Maloney:2007ud}.) \begin{figure}[th] \includegraphics[width=11cm]{EuclideanBTZ.jpg} \centering \caption{Left: Fundamental domain and quotient for the Euclidean BTZ black hole~\cite{Manin:2002hn}.
The two circles on the plane are identified under $z \rightarrow qz$, while the two domes in the bulk are identified. Right: The resulting quotient gives a torus boundary and a solid torus with AdS metric in the bulk, the BTZ black hole in Euclidean signature. \label{EuclideanBTZFig}} \end{figure} It is possible to repeat the above uniformization in Lorentzian AdS$_3$. In this case, the isometry group is $SO(2,2)$, the connected part of which is isomorphic to $(\SL(2,\mathbb{R}) \times \SL(2,\mathbb{R}))/\mathbb{Z}_2$ (in fact, there is a subtlety in the choice of covering group \cite{Witten:2007kt}, which will not concern us here). As before, quotienting with discrete abelian subgroups can be used to find BTZ black hole spacetimes in Lorentzian signature, possibly with angular momentum. In our case in analogy with genus $0$, we would like to interpret the $p$-adic tensor network as describing a time slice of a static black hole. To this end, one may pick a discrete subgroup of the diagonal $\PSL(2,\mathbb{R})$ acting on the $t=0$ slice, which is now a copy of the upper half plane $\mathbb{H}^2 \sim \mathbb{R} \times \mathbb{R}^+$. For $q\in \mathbb{R}$, the Schottky group is again $\Gamma=q^\mathbb{Z}$ with matrix form (\ref{complexschottky}), acting by fractional linear transformation, but now on $\mathbb{H}^2$ (rather than on the complex boundary coordinate as in the Euclidean case). The result of this quotient is the $t=0$ slice of the non-rotating BTZ black hole in Lorentzian signature. In the bulk, this is two-sided and has one non-contractible cycle: the black hole horizon. In the conventions of \cite{Carlip:1995qv} with unit cosmological constant, this black hole with mass $M = r_+$ and angular momentum $J=0$ is generated by the Schottky element \begin{equation} \gamma = \begin{pmatrix} e^{\pi r_+} & 0 \\ 0 & e^{-\pi r_+} \end{pmatrix}. \end{equation} From this one may find the horizon perimeter and compute the Bekenstein--Hawking entropy.
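Since the Schottky element above acts as $z \mapsto e^{2\pi r_+} z$, the scaling parameter is $q = e^{2\pi r_+}$; a one-line numerical check (with an arbitrary illustrative value of $r_+$) that the horizon entropy $4\pi r_+$ can be written as $2 \log q$:

```python
from math import exp, log, pi, isclose

# gamma = diag(e^{pi r_+}, e^{-pi r_+}) acts by z -> e^{2 pi r_+} z,
# so the Schottky parameter is q = exp(2 pi r_+).
r_plus = 0.7                       # arbitrary illustrative horizon radius
q = exp(2 * pi * r_plus)
assert isclose(2 * log(q), 4 * pi * r_plus)   # S_BH = 4 pi r_+ = 2 log q
print(2 * log(q))
```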
In these units it is simply $S_{BH} = 4 \pi r_+ = 2 \log q$ in terms of the $q$ parameter. When moving from $\mathbb{R}$ or $\mathbb{C}$ to $\mathbb{Q}_p$, the topology and geometry of both the bulk and boundary change dramatically. The goal of the present work is not to define or classify the possible choices in this $p$-adic setting (which might involve even more exotic structures such as buildings), but rather to provide a discrete toy model of holography in which aspects of the bulk can be computed exactly. For this reason, in discussing genus $1$ black holes we will assume the simplest interpretation of a static black hole where we work on a spatial slice; this is the situation where our formulas have qualitative similarity to real AdS/CFT in $3$-dimensions. There may be other interpretations of our results, and we will remain agnostic about more general signatures and situations such as rotating black holes. We now proceed to describe the uniformization of the Tate-Mumford curve by the $p$-adic multiplicative group $\mathbb{Q}_p^\times$. This describes the boundary geometry for the black hole, and there is a natural extension to the bulk Bruhat-Tits tree. As explained above, we will not rely on a lattice, but rather identification of points under a Schottky generator; now a discrete subgroup of the $p$-adic conformal group. Mathematically, we will mimic the above construction with the substitutions $\PSL(2, \mathbb{R}) \rightarrow \PGL(2, \mathbb{Q}_p)$ along with the discrete Schottky generator $\Gamma \subset \PGL(2,\mathbb{Q}_p)$. Physically as we have explained, we interpret the result in the bulk as a time slice of a static black hole. Asymptotically, this geometry looks like the Bruhat-Tits tree, but the center contains a non-contractible cycle of integer length. 
In a different context, this uniformization was used in open $p$-adic string theory to compute multi-loop amplitudes in~\cite{Chekhov:1989bg}, which viewed the Bruhat-Tits tree and its higher genus generalizations as worldsheets. We seek a $p$-adic version of equation~(\ref{complexuniformization}), which is the desired Tate-Mumford curve at the boundary. Beginning with the usual boundary $z \in \mathrm{P}^1(\mathbb{Q}_p)$, the M\"{o}bius transformations now form the group $\PSL(2, \mathbb{Q}_p)$ acting on $z$ by fractional linear transformations, though below we will use $G = \PGL(2, \mathbb{Q}_p)$ which is the isometry group of the tree.\footnote{We have passed from the special linear group to the general linear group because this more properly accounts for the isometries of the Bruhat-Tits tree. One may see that the $\SL(2)$ matrix in (\ref{complexschottky}) requires us to take a square root in $\mathbb{C}$; this is in general not possible for $q \in \mathbb{Q}_p$ without extensions. Among other things, restricting to $\SL(2)$ thus excludes translations on the tree by non-square elements, while a $\GL(2)$ matrix allows one to act with isometries of this type.} We choose the abelian Schottky group $\Gamma = q^{\mathbb{Z}} \subset \PGL(2, \mathbb{Q}_p)$, with $q \in \mathbb{Q}_p^{\times}$, $|q|_p <1$. The $\PGL$ matrix which generates the Schottky group can be chosen to be \begin{equation} \label{padicschottky} \gamma \in \Gamma = \begin{pmatrix} q & 0 \\ 0 & 1 \end{pmatrix}: \, \, \, z \mapsto qz \, . \end{equation} Performing the quotient, which identifies $p$-adic numbers related by this scaling, we obtain a curve of genus $1$ at the conformal boundary, which is the elliptic curve uniformized by the $p$-adic multiplicative group: \begin{equation} \label{padicuniformization} E(\mathbb{Q}_p) \simeq \mathbb{Q}_p^{\times}/q^{\mathbb{Z}} \, .
\end{equation} This in principle completes the description of the boundary curve for the black hole geometry which might be interpreted as a thermal state of a conformal field theory at a fixed time slice. An important technical caveat is that not all elliptic curves over $\mathbb{Q}_p$ can be uniformized in this way, only those with split multiplicative reduction. However, it is precisely these Tate-Mumford curves which have a natural extension to the Bruhat-Tits tree ${\cal T}_p$, so we will only consider these in this work. The situation so far over the $p$-adics may be somewhat abstract, but a very intuitive picture resembling a black hole emerges when we consider the quotient of the Bruhat-Tits tree itself by the above Schottky generator. Algebraically, one again removes the boundary points $\{0, \infty \}$ of ${\cal T}_p$ and identifies vertices and edges of the tree related by the action of the Schottky generator; we can express this as $({\cal T}_p \smallsetminus \{0,\infty\})/q^\mathbb{Z}$. In analogy with the real case, an explicit form of this generator is a $\PGL(2, \mathbb{Q}_p)$ element which translates along the $\{ 0, \infty \}$ geodesic by $\log_p|q|_p^{-1}$. Pictorially (and more formally), the geometry after the quotient is obtained by taking the entire tree and identifying points which are related by translation by $\ord_p(q)=\log_p |q|_p^{-1}$ steps along this main geodesic. The condition on the norm of $q$ means this translation is always an integer number of steps, and the result is a central ring of length $\ord_p(q)$ with branches which asymptotically look like ${\cal T}_p$. It is a motivating result that the boundary of this ring geometry can be identified with the Tate-Mumford curve, and mathematically it is guaranteed by our uniformization procedure. This is illustrated in figure \ref{fig:padicBTZ}, where by analogy with the real case we interpret this as a time slice of a static BTZ black hole.
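The horizon length can be read off directly from the valuation of $q$; a small sketch (the particular value of $q$ is an illustrative choice):

```python
p = 3

def ord_p(num, den):
    """p-adic valuation of a nonzero rational q = num/den; |q|_p = p**(-ord_p(q))."""
    def v(n):
        n, c = abs(n), 0
        while n % p == 0:
            n //= p
            c += 1
        return c
    return v(num) - v(den)

# q must satisfy |q|_p < 1, i.e. ord_p(q) >= 1.  The quotient identifies
# vertices on the {0, infinity} geodesic translated by ord_p(q) steps,
# leaving a central ring (the horizon) of exactly ord_p(q) edges.
q_num, q_den = 27, 2            # q = 27/2, with ord_3(q) = 3 and |q|_3 = 1/27
horizon = ord_p(q_num, q_den)
# Integer positions n along the main geodesic fall into Z / (horizon) Z:
ring = sorted({n % horizon for n in range(-10, 10)})
print(horizon, ring)            # 3 [0, 1, 2]
```

The integrality of $\ord_p(q)$ is exactly the statement in the text that the translation is always an integer number of steps along the main geodesic.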
\begin{figure}[t] \centering \begin{tikzpicture}[scale=1.3] \tikzstyle{vertex}=[draw,scale=0.4,color=red,fill=red,circle] \tikzstyle{ver2}=[draw,scale=0.4,color=red,fill=red,circle] \tikzstyle{ver3}=[]; \draw (0*60:1) node[vertex] (a0) {}; \draw (1*60:1) node[vertex] (a1) {}; \draw (2*60:1) node[vertex] (a2) {}; \draw (3*60:1) node[vertex] (a3) {}; \draw (4*60:1) node[vertex] (a4) {}; \draw (5*60:1) node[vertex] (a5) {}; \draw[thick,color=red] (a0) -- (a1) -- (a2) -- (a3) -- (a4) -- (a5) -- (a0); \draw (a0) + (0*60 + 30:1) node[ver2] (b0) {}; \draw (a0) + (0*60 - 30:1) node[ver2] (c0) {}; \draw (a1) + (1*60 + 30:1) node[ver2] (b1) {}; \draw (a1) + (1*60 - 30:1) node[ver2] (c1) {}; \draw (a2) + (2*60 + 30:1) node[ver2] (b2) {}; \draw (a2) + (2*60 - 30:1) node[ver2] (c2) {}; \draw (a3) + (3*60 + 30:1) node[ver2] (b3) {}; \draw (a3) + (3*60 - 30:1) node[ver2] (c3) {}; \draw (a4) + (4*60 + 30:1) node[ver2] (b4) {}; \draw (a4) + (4*60 - 30:1) node[ver2] (c4) {}; \draw (a5) + (5*60 + 30:1) node[ver2] (b5) {}; \draw (a5) + (5*60 - 30:1) node[ver2] (c5) {}; \foreach \x in {-1,0,1} { \foreach \n in {0, 1, 2, 3, 4, 5} { \draw (b\n) + (\n*60 + 20 + \x*20:1) node[ver3] (d\n\x) {}; \draw (c\n) + (\n*60 - 20 + \x*20:1) node[ver3] (e\n\x) {}; \draw (b\n) + (\n*60 + 20 + \x*20:1.5) node[ver3] (D\n\x) {}; \draw (c\n) + (\n*60 - 20 + \x*20:1.5) node[ver3] (E\n\x) {}; \draw[thick,color=red] (d\n\x) -- (b\n); \draw[thick,color=red] (e\n\x) -- (c\n); \draw[very thick, color=red, loosely dotted] (d\n\x) -- (D\n\x) ; \draw[very thick, color=red, loosely dotted] (e\n\x) -- (E\n\x) ; }; }; \draw[thick,color=red] (b0) -- (a0) -- (c0); \draw[thick,color=red] (b1) -- (a1) -- (c1); \draw[thick,color=red] (b2) -- (a2) -- (c2); \draw[thick,color=red] (b3) -- (a3) -- (c3); \draw[thick,color=red] (b4) -- (a4) -- (c4); \draw[thick,color=red] (b5) -- (a5) -- (c5); \draw[very thick, loosely dotted] (0,0) circle (3.5); \end{tikzpicture} \caption{The $p$-adic BTZ black hole, obtained here for $p =3$ by 
a quotient of the Bruhat-Tits tree by a Schottky generator with $\log_p |q|_p^{-1} = 6$. The geometry is locally indistinguishable from the Bruhat-Tits tree, but the presence of the horizon signifies the boundary interpretation will be very different. In the tensor network analysis we will find the boundary state of this geometry will have a thermal entropy proportional to $6 \log p$. \label{fig:padicBTZ}} \end{figure} Our final task of this section is to explain how to extend this $p$-adic uniformization to the tensor network living on the dual graph of the Bruhat-Tits tree. The most natural way to do this is to simply perform the identifications under the Schottky generator on both the tree and the dual graph simultaneously, and one can see graphically that this will introduce a special vertex behind the `horizon'. This vertex is most naturally traced out (as in a two-sided black hole geometry), and we will later show by explicit computation that this choice produces a mixed density matrix for the boundary state. The thermal entropy of this density matrix is proportional to the perimeter of the $p$-adic BTZ black hole, giving agreement with the Bekenstein-Hawking formula up to an overall constant. In this toy model, these microstates are interpreted as those legs (namely, contracted bonds) of the tensor network which stretch across the horizon. As usual, the identification and resulting BTZ black hole tensor network are best explained with the aid of a figure. We first redraw the tree in a form that is `flattened out' along the preferred $\{ 0\to \infty \}$ geodesic, as shown in the top sub-figure of figure \ref{fig:BTZtensornetwork}, where we have explicitly chosen $p=3$ and $\log_p|q|_p^{-1}= 5$ as an example. While we can only display a small portion of the tree, one should think of this geodesic as stretching infinitely, with branches coming off and continuing to the conformal boundary.
This is nothing but a relabeling of figure \ref{fig:SimplePsi}, but we have now labeled a special vertex $O$ on the tensor network which sits above the central geodesic as well as a special vertex $a$ on the tree. After the quotient, $O$ will be in the black hole interior and $a$ will be identified with its image under $a \rightarrow a - \log_p|q|_p$. Recall also that all the vertices on this dual graph have dangling legs and represent degrees of freedom on the boundary, but we have not yet specified the rank of these tensors due to a subtlety explained below. \begin{figure}[t] \[ \musepic{\quotientline} \] \bigskip \[ \musepic{\quotientloop} \] \caption{The quotient construction of the dual graph tensor network (in black). As pictured, $p=3$ and $\ord_p(q)=5$.} \label{fig:BTZtensornetwork} \end{figure} After taking the quotient, we can redraw the tree and its dual graph in a form that has rotational symmetry, as seen in the second sub-figure of figure \ref{fig:BTZtensornetwork}. The infinite geodesic has now become the central cycle or horizon of the black hole, and the genus $1$ boundary is now the boundary of the infinite branches coming off of this cycle. The center point $O$, which before the quotient was just another vertex on the boundary, has now moved behind the horizon. The number of internal bonds connected to $O$ is determined by the $p$-adic norm of our Schottky parameter $q$, which can be any integer greater than $1$. This is an intuitive picture of how one might make a black hole tensor network state which agrees with our uniformization procedure for the tree. A formal recipe for constructing different choices of dual graph is discussed in section~\ref{ssec:geometryTATE}, and here, as before, we make one convenient choice. Even so, it is now necessary to explain both the subtleties of how the dual graph is defined as well as the cut-off prescription.
Recall in the genus $0$ picture, the bulk IR regulator $\Lambda$ was defined as half the length of the longest geodesic; this meant truncating both the tree and the dual graph $\Lambda$ steps from the central vertex in all directions. For the black hole case, we would like the analogous statement to be that we truncate the tree and network $\Lambda$ steps from the horizon. However, in order to achieve this, one must use a different cutoff before identifying points related by the Schottky group. This should be clear from examining the `flat' picture of the tree, which for finite cut-off would not correspond to something radially symmetric. Conceptually, we could treat the genus $0$ and genus $1$ cases on an equal footing if we worked with the infinite tree and network and only applied the cut-off prescription after taking the quotient. One can note that the two criteria expressed in section~\ref{DUALGRAPH} continue to hold in the black hole background; every edge of the tree has exactly one bond of the dual graph which ``cuts'' it, and every vertex of the tree is surrounded by a plaquette with $p+1$ sides. These facts do not guarantee that the dual graph after taking the quotient is uniquely defined, but later we will discuss the boundary measure associated to the Mumford curve and the dual graph and find a canonical choice. Recall that in the genus $0$ case, the non-uniqueness could be interpreted as the lack of ordering of $p$-adic numbers at the boundary. A further technical point concerns black holes that are large compared to the cutoff scale. When working with the genus $0$ network with a finite cutoff, we observed that our uniform use of a single kind of perfect tensor of rank $r+1$ required certain conditions on the rank and the cutoff in order for the Ryu--Takayanagi formula to hold. Roughly speaking, the number of bulk contracted legs at any tensor could not exceed the number of boundary uncontracted legs.
Increasing the cutoff thus meant increasing the rank, corresponding to a large number of UV degrees of freedom at the boundary. Similarly, the perimeter of the black hole horizon at genus $1$ also constrains the minimum rank of the tensor, as now the center point may have a larger number of bulk bonds than other points in the network. This becomes an issue for black holes that are large compared to the cutoff, but using a perfect tensor of sufficiently large rank will always produce the correct answers for the black hole entropy. There is one final point to address, which is the curious case of a horizon with length $\log_p |q|_p^{-1} = 1$. This is the minimal Tate-Mumford curve allowed by the uniformization; the corresponding quotient of the Bruhat--Tits tree contains a self-looping edge, which leads to a degenerate configuration of the network. One of the plaquettes of the dual graph network collapses. Nonetheless, the entropy computed from this degenerate network still leads to the expected result. \subsection{Black hole entropy} \label{BTZResults} In the previous subsection, we described the construction of BTZ black holes in $p$-adic AdS/CFT via the algebraic process of Schottky uniformization. We also explained how this naturally extended to the dual graph tensor network, essentially by identifying all nodes and bonds related by a translation by the horizon length. The result is a graph with a cycle and a dual graph tensor network with many desirable properties; crucially, there is one special vertex behind the horizon which cannot be identified with any boundary degrees of freedom. In this section, we present the results of a computation on the tensor network for the thermal entropy of the boundary density matrix obtained by tracing out this vertex, explained in more detail in section \ref{GENUS1BTZ}.
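As a standalone preview of the mechanism (not the network computation itself), one can check numerically that tracing out one side of $\tau$ maximally entangled bonds of dimension $r$ yields a maximally mixed state with entropy $\tau \log r$. The values of $r$ and $\tau$ below are illustrative, and the sketch assumes numpy:

```python
import numpy as np

r, tau = 3, 4    # bond dimension and horizon length (illustrative values)
dim = r ** tau   # total dimension carried by the tau horizon-crossing bonds

# Maximally entangled state across the horizon bonds: sum_i |i>|i> / sqrt(dim),
# written as a (kept side) x (traced-out side) matrix.
psi = np.eye(dim) / np.sqrt(dim)

# Tracing out the far side gives rho = psi psi^dagger = I / dim: maximally mixed.
rho = psi @ psi.conj().T
eigs = np.linalg.eigvalsh(rho)
S = -np.sum(eigs * np.log(eigs))  # von Neumann entropy

# S reproduces the horizon-perimeter law S = tau * log(r).
```

This toy model isolates why the entropy counts horizon-crossing bonds; the honest tensor network computation, including the perfect-tensor structure, appears in the following sections.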
We find perfect agreement between the thermal entropy and the black hole horizon perimeter, as predicted by an analog of the Bekenstein-Hawking formula. \begin{figure}[t] \[ \begin{tikzpicture}[scale=1.7] \tikzstyle{vertex}=[draw,scale=0.3,fill=black,circle] \tikzstyle{ver2}=[draw,scale=0.6,circle] \tikzstyle{ver3}=[draw,scale=0.4,circle] \draw[color=red] (0,0) -- (0,1) -- (1,1) -- (1,0) -- cycle; \draw[color=red] (3,0) -- (3,1) -- (4,1) -- (4,0) -- cycle; \draw[color=red] ($ (0,0) - (20:0.7) $) -- (0,0) -- ($ (0,0) - (70:0.7) $); \draw[color=red] ($ (0,1) - (20-90:0.7) $) -- (0,1) -- ($ (0,1) - (70-90:0.7) $); \draw[color=red] ($ (1,1) - (20-180:0.7) $) -- (1,1) -- ($ (1,1) - (70-180:0.7) $); \draw[color=red] ($ (1,0) - (20-270:0.7) $) -- (1,0) -- ($ (1,0) - (70-270:0.7) $); \draw[color=red] ($ (3,0) - (20:0.7) $) -- (3,0) -- ($ (3,0) - (70:0.7) $); \draw[color=red] ($ (3,1) - (20-90:0.7) $) -- (3,1) -- ($ (3,1) - (70-90:0.7) $); \draw[color=red] ($ (4,1) - (20-180:0.7) $) -- (4,1) -- ($ (4,1) - (70-180:0.7) $); \draw[color=red] ($ (4,0) - (20-270:0.7) $) -- (4,0) -- ($ (4,0) - (70-270:0.7) $); \draw (0.5,-0.2) node[vertex]{}; \draw (0.5,1.2) node[vertex]{}; \draw (-0.2,0.5) node[vertex]{}; \draw (1.2,0.5) node[vertex]{}; \draw (3.5,-0.2) node[vertex]{}; \draw (3.5,1.2) node[vertex]{}; \draw (2.8,0.5) node[vertex]{}; \draw (4.2,0.5) node[vertex]{}; \draw (0.5,-0.2) -- (0.5,0.2) to[out=90,in=135,looseness=0.5] (0.8,0.25) to[in=225,out=-45,looseness=0.5] (3.2,0.25) to[out=45,in=90,looseness=0.5] (3.5,0.2) -- (3.5,-0.2); \draw (-0.2,0.5) to[out=0,in=135,looseness=0.5] (0.8,0.3) to[in=225,out=-45,looseness=0.5] (3.2,0.3) to[out=45,in=180,looseness=0.5] (4.2,0.5); \draw (0.5,1.2) to[out=-90,in=135,looseness=0.5] (0.8,0.35) to[in=225,out=-45,looseness=0.5] (3.2,0.35) to[out=45,in=90,looseness=0.5] (3.5,1.2); \draw (1.2,0.5) to[out=180,in=135,looseness=0.5] (0.8,0.4) to[in=225,out=-45,looseness=0.5] (3.2,0.4) to[out=45,in=180,looseness=0.5] (2.8,0.5); \end{tikzpicture} 
\] \caption{The genus $1$ tensor network after tracing out the vertex behind the horizon. One should imagine the two sides corresponding to $| \psi \rangle$ and $\langle \psi |$, and the total state being the mixed density matrix which has von Neumann entropy proportional to the number of shared bonds. We postpone the explanation of the graphical rules and the computation of the entropy until section~\ref{COMPUTATION}, but the reader might be reminded of the 2-sided BTZ black hole.} \label{fig:BTZ2sided} \end{figure} Tracing out the center vertex in our tensor network amounts to constructing a mixed density matrix reminiscent of a two-sided BTZ black hole. This is depicted in figure~\ref{fig:BTZ2sided} and follows from our graphical rules for computing density matrices from tensor networks, explained in section~\ref{COMPUTATION}. Defining the perimeter to be $\tau = \log_p|q|_p^{-1}$, we find by detailed computation the von Neumann entropy of the boundary state to be proportional to the perimeter, which is the same as the number of bonds stretched across the two sides. The result is surprisingly simple and analogous to the BTZ black hole entropy discussed in the previous section: \eqn{BTZEntropysummary}{ S_{\rm BH} = \tau \log r \,. } \subsection{Ryu--Takayanagi formula in the black hole background} \label{RTBTZresults} Here we will briefly summarize our results which combine the main ideas of sections~\ref{GENUSZERORESULTS} and \ref{BTZResults}. This involves computing the boundary von Neumann entropy of a single connected interval in the thermal background, holographically found to be dual to the length of a minimal geodesic in the black hole geometry homologous to the interval. This presents further computational challenges which are discussed in section~\ref{GENUS1BTZ}. As is often the case with quantities that can be computed in the dual picture, the bulk result is easier to state than derive, but we find agreement in all cases considered. 
A schematic depiction of the behavior of the minimal surface is shown later in figure~\ref{fig:realBHInt}. It is a surprising and nontrivial fact that the tensor network proposed here automatically captures the three topologically distinct cases for the surface. We take the success of this tensor network proposal as a conjecture for the entanglement entropy of a connected interval in $p$-adic field theory at finite temperature. A key conceptual difference from the genus $0$ Ryu--Takayanagi formula is that the entropy of a boundary region and its complement are not equal, since the holographic state generated by the network is no longer a pure state. The bulk interpretation of this is the presence of the black hole horizon which minimal surfaces may wrap around. Varying the ($p$-adic) size of the boundary region, a minimal geodesic might jump from crossing one side of the horizon to the other, and one observes this behavior in the boundary von Neumann entropy as well. This is a feature that is desirable in principle and in practice, though care must be taken in the precise definition of the boundary measure and dual graphs. We chose to parametrize the size of the boundary `intervals' using the measure for the covering space (before taking the quotient), and this is explained in greater mathematical detail in section~\ref{ssec:geometryTATE}. However, after making a choice of a planar embedding for the tensor network, the intuitive picture of the genus $1$ minimal geodesic behavior is easy to see in figure~\ref{fig:BTZMinSurface}. \begin{figure}[t] \[ \musepic{\geodesicwrap} \] \caption{Geodesics in the BTZ black hole geometry. Moving $y$ to~$y'$ leads to a ``jump'' of the minimal geodesic to a path wrapping the other side of the horizon.} \label{fig:BTZMinSurface} \end{figure} The structure of entanglement in this $p$-adic black hole setting has a particular novel feature not present in the usual picture of AdS$_3$. 
In that case, small interval sizes or low temperatures will have entanglement entropy very nearly equal to the flat space result \cite{Calabrese:2004eu}, which can be seen by Taylor expansion or noting the minimal surfaces do not approach the BTZ horizon. In contrast, for $p$-adic AdS/CFT the transition would seem to be much sharper. If one considers boundary regions that are small enough in the measure described above, one may see that the minimal geodesic never reaches the horizon, and the length and thus the entropy will be precisely equal to the genus $0$ case. This is an interesting prediction for entanglement in thermal $p$-adic field theories, where the indication is that low enough temperatures have exactly zero effect on the short distance physics (rather than a parametrically small effect). This is ultimately due to the nonarchimedean or ultrametric nature of $p$-adic numbers. The various features of the minimal geodesics in the black hole background can be described using a distance measure in the bulk. Given a planar embedding, the entropy of the connected region $(x,y)$ is given by a minimal geodesic homologous to the given region (recall definition \ref{def:homologous} of the homologous condition). In the $p$-adic black hole background, given two boundary points, there are two possible boundary anchored geodesics to choose from, only one of which will be homologous to the given region. The geodesic homologous to the complementary region will be given by the other path. This can be unified into the formula \eqn{lengthBTZ0}{ S(x,y) = {{\rm length}(\gamma_{xy}) \over \ell } \log r\,, } where \eqn{lengthxybtzCombine}{ {\rm length} (\gamma_{xy})/\ell = \delta(C\to x,C\to x)+\delta(C\to y,C\to y) - 2\delta(C\to x,C\to y)\,. } Here $\ell$ is the constant length of each edge of the tree, and $C$ is an arbitrary reference point on the genus $1$ graph.
Recall that $\delta(\cdot,\cdot)$ is an integer which counts the signed overlap between the two paths in its arguments. Equation \eno{lengthxybtzCombine} does not depend on the choice of $C$, but the two choices of path for $C \to x$ (as well as $C\to y$) in \eno{lengthxybtzCombine} together yield the two possible values of $S(x,y)$, associated with the interval $(x,y)$ and its complement on the boundary (such that the geodesics are appropriately homologous). Define $\epsilon \in \mathbb{Q}_p$ to be the cutoff \eqn{}{ \epsilon \equiv p^{\Lambda}\,, } which goes to zero $p$-adically, i.e.\ $|\epsilon|_p \to 0$ as $\Lambda \to \infty$. If $x,y \in E(\mathbb{Q}_p) \simeq \mathbb{Q}_p^{\times}/q^{\mathbb{Z}}$, then \eqn{lengthBTZ}{ {\rm length} (\gamma_{xy})/\ell = 2\log_p { |\mathfrak{B}(\{x,y\})|_{g=1} \over |\epsilon|_p}\,, } where $|\mathfrak{B}(\{x,y\})|_{g=1}$ is the measure of the set containing $x,y \in E(\mathbb{Q}_p)$. On the covering space geometry, there are infinitely many sets which contain $x$ and $y$ because there is an infinite set of image points which correspond to these boundary points. From the point of view of the fundamental domain, there are two minimal sets which correspond to the two ways to wrap around the horizon. The measure above corresponds to choosing one of the two depending on which choice of minimal surface(s) is homologous to the boundary region. This measure is further explained in section~\ref{ssec:geometryTATE}, and the explanation from tensor network contractions is outlined in section~\ref{GENUS1BTZ}. We note that, up to an overall constant factor, the entanglement entropy of these mixed states equals the corresponding geodesic length, as encapsulated by this measure. One can see this as a kind of prediction for the single-interval entanglement entropies of thermal states in $p$-adic AdS/CFT.
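The role of the overlaps $\delta(\cdot,\cdot)$ can be made concrete in the simpler tree (genus $0$) case, where no sign is needed: the combination in \eno{lengthxybtzCombine} then reproduces the ordinary graph distance between $x$ and $y$, independently of the reference point $C$. The sketch below checks this on a small finite piece of the $p=2$ tree (the vertex labels and function names are ours); on the genus $1$ graph the overlaps must instead be signed, and the two path choices give the two candidate values:

```python
from collections import deque
from itertools import combinations

def path(adj, src, dst):
    """The unique path from src to dst in a tree, found by BFS."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        v = queue.popleft()
        for w in adj[v]:
            if w not in prev:
                prev[w] = v
                queue.append(w)
    out = []
    while dst is not None:
        out.append(dst)
        dst = prev[dst]
    return out[::-1]

def delta(p1, p2):
    """Overlap, in edges, of the shared initial segments of two paths."""
    k = 0
    while k + 1 < min(len(p1), len(p2)) and p1[k + 1] == p2[k + 1]:
        k += 1
    return k

# A finite piece of the p = 2 Bruhat-Tits tree (3-regular; labels illustrative).
adj = {0: [1, 2, 3], 1: [0, 4, 5], 2: [0, 6, 7],
       3: [0], 4: [1], 5: [1], 6: [2], 7: [2]}

def length(C, x, y):
    """delta(C->x, C->x) + delta(C->y, C->y) - 2 delta(C->x, C->y)."""
    cx, cy = path(adj, C, x), path(adj, C, y)
    return delta(cx, cx) + delta(cy, cy) - 2 * delta(cx, cy)

# On a tree, the combination equals the graph distance for every choice of C.
for x, y in combinations(adj, 2):
    dist = len(path(adj, x, y)) - 1
    assert all(length(C, x, y) == dist for C in adj)
```

The $C$-independence visible here is the tree analog of the statement below \eno{lengthxybtzCombine} that the formula does not depend on the reference point.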
\section{von Neumann Entropy and Inequalities} \label{COMPUTATION} In this section we present the detailed computations leading to the results summarized in sections \ref{GENUSZERO} and \ref{GENUSONE}, as well as proofs of various entropy inequalities in sections \ref{GENUS0DOUBLE}--\ref{ssec:Inequalities}. \subsection{Perfect tensors and density matrices} \label{SIMPLESTATES} Before getting to calculations in the holographic setup, we use simple toy examples to point out some of the basic ingredients and properties which will be useful later. We focus on simple states (not necessarily holographic) constructed using rank-$(r+1)$ perfect tensors; the indices of such tensors will label bases of fixed finite-dimensional Hilbert spaces, which we interchangeably call ``spins,'' ``qubits,'' or ``qudits.'' For example, for $r=3$, consider \eqn{psiSimplest}{ |\psi \rangle = T_{abcd}\, |abcd \rangle \qquad a,b,c,d \in \{0,1,2\}\,, } where repeated indices are summed over, and $|abcd\rangle = |a \rangle \otimes |b\rangle \otimes |c \rangle \otimes |d\rangle$ is a product state of four qutrits. $T_{abcd}$ is the rank-4 perfect tensor, and we normalize it so that all its non-zero components are unity. Graphically, we may represent \eno{psiSimplest} as \eqn{psiSimplestGraph}{ |\psi \rangle = \musepic{\bigtensor}. } Since $T$ is a perfect tensor, specifying half of its indices uniquely fixes the remaining half. For instance, we may choose \eqn{Trank4}{ T_{0000} = T_{0111} = T_{0222} = T_{1012} = T_{1120} = T_{1201} = T_{2021} = T_{2210} = T_{2102} = 1\,, } with all other components vanishing. One may check that for any choice of two indices, there is a unique non-vanishing component of $T$, so the other two indices are also determined. The ``perfectness property'' is useful in writing out the full state $|\psi\rangle$ starting from \eno{psiSimplestGraph}. This $|\psi \rangle$ as constructed is a very special entangled superposition of $9$ of the $3^4 = 81$ possible basis states.
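The perfectness claim can be verified mechanically: among the nine nonzero components of \eno{Trank4}, every assignment of values to every pair of index positions occurs exactly once, so fixing any two indices determines the remaining two. A short pure-Python check (the list \texttt{support} transcribes \eno{Trank4}):

```python
from itertools import combinations

# The nonzero components of the rank-4 perfect tensor, transcribed from
# T_0000 = T_0111 = T_0222 = T_1012 = T_1120 = T_1201
#        = T_2021 = T_2210 = T_2102 = 1.
support = [(0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
           (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
           (2, 0, 2, 1), (2, 2, 1, 0), (2, 1, 0, 2)]

def is_perfect(support, rank=4, r=3):
    """Any half of the indices must uniquely determine the other half."""
    target = sorted((a, b) for a in range(r) for b in range(r))
    for pos in combinations(range(rank), rank // 2):
        values = sorted(tuple(t[i] for i in pos) for t in support)
        if values != target:  # every pair of values must occur exactly once
            return False
    return True
```

Here \texttt{is\_perfect(support)} returns \texttt{True}, while dropping any one component breaks the property, since some pair of values would then fail to appear.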
In this state, any choice of $2$ spins is maximally entangled with the remaining two. This is a general feature of perfect states. One can construct more complicated states by contracting multiple copies of the perfect tensor $T$ in different ways. For example, the following graph represents a new state, \eqn{psiComplic}{ |\psi^\prime \rangle = \musepic{\bigtree} \,\, , } where we have suppressed the index labels. In \eno{psiComplic} and future graphical representations, a shared edge between two vertices will denote that the corresponding index is to be summed over. Thus, explicitly, \eno{psiComplic} represents the state \eqn{psiComplicAgain}{ |\psi^\prime \rangle = T_{a_0 b_0 c_0 d_0} T_{a_0 b_1 c_1 d_1} T_{b_0 b_2 c_2 d_2} T_{c_0 b_3 c_3 d_3} T_{d_0 b_4 c_4 d_4} |b_1 c_1 d_1 b_2 c_2 d_2 b_3 c_3 d_3 b_4 c_4 d_4 \rangle\,. } In this example, the internal lines (denoted by indices with a $0$ subscript) appear traced over in the tensors but do not label the basis of boundary states. The (normalized) density matrix corresponding to the state $|\psi\rangle$ is given by \eqn{DenMat}{ \rho ={1 \over \langle \psi | \psi \rangle} |\psi \rangle \langle \psi |\,. } For example, for the state in \eno{psiSimplest}, \eqn{DenMatSimplest}{ \rho = {1 \over 9} T_{abcd} T_{a^\prime b^\prime c^\prime d^\prime} |abcd \rangle \langle a^\prime b^\prime c^\prime d^\prime |\,, } where we used \eqn{psiSimplestNorm}{ \langle \psi | \psi \rangle = T_{abcd} T_{abcd} = 9\,, } which follows from the perfect tensor property of $T$ and the fact that $a,b,c,d \in \{0,1,2\}$. Just as states built from perfect tensor contractions had a convenient graphical representation, we will sometimes also write the density matrix in the same way.
Because the density matrix is a product of the perfect state vector and its dual, graphically we can write \eno{DenMatSimplest} as \eqn{DenMatSimplestGraph}{ \rho = {1\over 9} \musepic{\bigtensor} \musepic{\bigtensorprime} \,, } with the understanding that \eno{DenMatSimplestGraph} represents a density matrix, with the matrix elements given by specifying the external indices and performing the tensor contractions. The normalization in \eno{psiSimplestNorm} is a contraction on all indices, and we can represent this contraction by connecting the lines of \eno{DenMatSimplestGraph}, producing: \eqn{psiSimplestNormGraph}{ \langle \psi | \psi \rangle = \musepic{\contractfour} \,. } This has no external legs, so it is a pure number. It evaluates to $9$ as this is the number of non-vanishing components of $T$, equivalently the number of allowed assignments of the internal legs. Taking partial traces leads to reduced density matrices. For example, for the $\rho$ given in \eno{DenMatSimplest}, if $\Tr_2$ denotes tracing out the second factor of the direct product state $|\psi \rangle = |abcd \rangle$, then \eqn{rho1}{ \rho_1 \equiv \Tr_2\, \rho = \sum_{b^{\prime\prime}} \langle b^{\prime\prime}| \rho | b^{\prime\prime} \rangle = {1 \over 9}\, T_{ab^{\prime\prime}cd} T_{a^\prime b^{\prime\prime} c^\prime d^\prime} |acd \rangle \langle a^\prime c^\prime d^\prime |\,. } Graphically, we represent this as \eqn{rho1Graph}{ \rho_1 = \frac{1}{9} \, \, \musepic{\contractone} \,. } We have suppressed index labels on the graph. There is a slight abuse of notation in representing both states and reduced density matrices using the same kinds of pictures, though the corresponding equations are unambiguous. We will be careful to distinguish between states and matrices in more complicated examples later; a rule of thumb is that the reduced density matrix is mirrored across the contracted lines.
The diagram in \eno{rho1Graph} is identical to that of a reduced density matrix where instead of the second factor, we traced out any of the other single qutrits in $|\psi \rangle = |abcd \rangle$. This follows from the permutation symmetry of the legs. If instead we trace out two sites, say the first two, we obtain \eqn{rho2}{ \rho_2 \equiv \Tr_{12} \rho = \sum_{a^{\prime \prime} b^{\prime\prime}} \langle a^{\prime \prime} b^{\prime\prime}| \rho | a^{\prime \prime} b^{\prime\prime} \rangle = {1\over 9}\, T_{a^{\prime \prime} b^{\prime\prime}cd} T_{a^{\prime \prime} b^{\prime\prime} c^\prime d^\prime} |cd \rangle \langle c^\prime d^\prime |\,. } Graphically, we write \eqn{rho2Graph}{ \rho_2 = {1 \over 9} \,\, \musepic{\contracttwo} \,. } Similarly, tracing out three sites leads to the following representation, \eqn{rho3Graph}{ \rho_3 = {1 \over 9} \,\, \musepic{\contractthree} \,. } Explicitly evaluating expressions such as \eno{rho1} and \eno{rho2}, and indeed the norm of any state constructed out of perfect tensor contractions, can become cumbersome for more complicated states. It would be useful to have a set of graphical rules which can be used to simplify and evaluate reduced density matrices without resorting to tedious (though straightforward) algebra. In the end, our goal is to evaluate the von Neumann entropy of the reduced density matrix $\rho_A = \Tr_{A^c} \rho$, \eqn{vNent}{ S = - \Tr \rho_A \log \rho_A\,, } which for pure $\rho$ corresponds to a measure of quantum entanglement between the traced out region $A^c$ and its complement $(A^c)^c = A$. With this in mind, we present some useful (diagrammatic) rules and techniques.
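(For such small examples, the entropies can also be cross-checked by brute force: build $|\psi\rangle$ from the explicit components \eno{Trank4}, trace out a subset of qutrits, and diagonalize. The following sketch assumes numpy; the helper name \texttt{entropy} is ours.)

```python
import numpy as np

r = 3
support = [(0, 0, 0, 0), (0, 1, 1, 1), (0, 2, 2, 2),
           (1, 0, 1, 2), (1, 1, 2, 0), (1, 2, 0, 1),
           (2, 0, 2, 1), (2, 2, 1, 0), (2, 1, 0, 2)]
T = np.zeros((r, r, r, r))
for idx in support:
    T[idx] = 1.0  # all nonzero components of the rank-4 perfect tensor are 1

def entropy(T, traced):
    """Von Neumann entropy after tracing out the qutrits listed in `traced`."""
    kept = [i for i in range(4) if i not in traced]
    # Reshape |psi> into a (kept sites) x (traced sites) matrix.
    M = np.transpose(T, kept + list(traced)).reshape(r ** len(kept), -1)
    M = M / np.linalg.norm(M)       # normalize <psi|psi> = 1
    rho = M @ M.conj().T            # reduced density matrix on the kept sites
    eigs = np.linalg.eigvalsh(rho)
    eigs = eigs[eigs > 1e-12]       # drop numerically zero eigenvalues
    return -np.sum(eigs * np.log(eigs))

S1 = entropy(T, [1])        # one qutrit traced out
S2 = entropy(T, [0, 1])     # two qutrits traced out
S3 = entropy(T, [0, 1, 2])  # three qutrits traced out
```

One finds $S_1 = \log 3$, $S_2 = 2\log 3$, and $S_3 = \log 3$, in agreement with the diagrammatic evaluation of \eno{rho1}--\eno{rho3Graph} carried out below.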
For two rank-$(r+1)$ perfect tensors $T$ which have $n_c$ indices contracted between them, with $n_d$ free indices each, so that $n_c + n_d = r+1$, we have \eqn{ContractRuleEqn}{ T_{a_1 \ldots a_{n_d} b_1 \ldots b_{n_c}} T_{a^\prime_1 \ldots a^\prime_{n_d} b_1 \ldots b_{n_c}} = \delta_{a_1 a^\prime_1} \cdots \delta_{a_{n_d} a^\prime_{n_d}} \times r^{(n_c-n_d)/2} \qquad n_c \geq n_d\,. } This easily follows from the ``perfectness property'' of perfect tensors and crucially assumes $n_c \geq n_d$. In this situation, we are tracing out at least half of the available indices on each tensor; once we have specified half the indices (for each term in the sum), the rest are uniquely determined. Tracing more than half the indices means we are performing a free sum on the remaining indices; this introduces the multiplicity $r^{(n_c-n_d)/2}$. In fact in \eno{ContractRuleEqn}, we need not have contracted precisely the final $n_c$ indices, but some other subset of $n_c$ indices to obtain the same form by symmetry. The Kronecker delta functions between ``dangling'' (uncontracted) indices in that case would still be between indices at matching positions. Graphically, we write this contraction identity as \eqn{ContractRule}{ \musepic{\numerouscontractions} = \musepic{\numerouslines} \times \qty(\musepic{\smcircle})^{(n_c-n_d)/2}, } which is valid whenever~$n_c \geq n_d$. The horizontal line-segments correspond to the delta function factors in~\eno{ContractRuleEqn}, with the understanding that precisely the free (dangling) indices at matching positions in the two tensors share a delta function. After doing the contraction, we see that we have ``split'' open the tensor to obtain the delta functions, and we refer to this operation as a ``split''. Each disconnected loop represents a factor of $r$ coming from the free sum.
To see this, note that contracting a delta function $\delta_{ab}$ with another $\delta_{ab}$ results in \eqn{loopEqn}{ \delta_{ab} \delta_{ab} = \delta_{aa} = r\,, } which diagrammatically is represented as joining together the end-points of a line-segment, turning it into a (disconnected) loop. One checks that using rule \eno{ContractRule}, the inner product in \eno{psiSimplestNormGraph} evaluates to $\left(\musepic{\tinycircle}\right)^2 = r^2=3^2$, which was the number of allowed terms in the free sum. We may also interpret the l.h.s.\ of \eno{ContractRule} as representing the partial trace of a pure (unnormalized) density matrix, leading to a diagonal reduced density matrix, as long as we remember to include appropriate ket and bra state factors in \eno{ContractRuleEqn} when translating \eno{ContractRule} back to equations. More precisely, starting with the pure (unnormalized) density matrix \eqn{TwoTrho}{ \rho = T_{a_1 \ldots a_{n_d} b_1 \ldots b_{n_c}} T_{a^\prime_1 \ldots a^\prime_{n_d} b_1^\prime \ldots b_{n_c}^\prime} |a_1 \ldots a_{n_d} b_1 \ldots b_{n_c} \rangle \langle a^\prime_1 \ldots a^\prime_{n_d} b_1^\prime \ldots b_{n_c}^\prime|\,, } where $n_c + n_d = r+1$, and tracing out, say, the final $n_c$ sites (indices), one obtains a reduced density matrix $\rho_r$ which is diagonal in the basis ${\cal B} = \{ |a_1 \ldots a_{n_d} \rangle : a_i=0,1,\ldots r-1\}$: \eqn{}{ \langle a_1 \ldots a_{n_d} | \rho_r |a_1^\prime \ldots a_{n_d}^\prime \rangle = r^{(n_c-n_d)/2}\, \delta_{a_1 a^\prime_1} \cdots \delta_{a_{n_d} a^\prime_{n_d}} \,, } {\it as long as} $n_c \geq n_d$ (this is the case where the sum over internal contracted lines fixes the values of the external legs). Thus $\rho_r$ is an $r^{n_d} \times r^{n_d}$ diagonal density matrix. To start with a normalized $\rho$ such that $\Tr \rho=1$, we normalize $\rho$ in \eno{TwoTrho} by multiplying with an overall factor of $1/\langle \psi|\psi \rangle = r^{-(r+1)/2} = r^{-(n_c+n_d)/2}$.
Then we see that the $\Tr \rho_r = 1$ condition is satisfied automatically, and in fact $\rho_r$ has $r^{n_d}$ eigenvalues each equaling $r^{-n_d}$. The von Neumann entropy associated with $\rho_r$ is then \eqn{SingleTensorEnt1}{ S = - \Tr \rho_r \log \rho_r = n_d \log r \hspace{5mm} \textrm{(perfect state with $n_c \geq n_d$ spins traced out)\,.} } If $n_c < n_d$, then the reduced density matrix will no longer be diagonal in the previously chosen basis. In fact, since in this case $n_d > (r+1)/2$, not all possible combinations for the string ``$a_1 \ldots a_{n_d}$'' are permissible any more, as some combinations will lead to a vanishing tensor component $T_{a_1\ldots a_{n_d} b_1\ldots b_{n_c}}$. Thus the basis of states in which we may represent the reduced density matrix is no longer $r^{n_d}$-dimensional, but in fact an $r^{(r+1)/2}$-dimensional subset ${\cal V} \subset {\cal B}$ (the exponent is $(r+1)/2$ because that is precisely the maximum number of $a_i$ indices one needs to specify before fully determining the tensor component $T_{a_1\ldots a_{n_d} b_1\ldots b_{n_c}}$ uniquely). The reason that the reduced density matrix is not diagonal in ${\cal V}$ is that in the l.h.s.\ of \eno{ContractRuleEqn} (or \eno{ContractRule}) knowledge about all the contracted indices no longer uniquely fixes the free dangling indices, since $n_c < (r+1)/2$. In such cases, with an eye on computing the von Neumann entropy \eno{vNent} -- which comes down to finding the eigenvalues of the reduced density matrix -- we use a convenient parametrization of ${\cal V}$ in which the reduced density matrix assumes a Jordan block-diagonal form, with each diagonal block a matrix with all elements equal to 1.
To arrive at this convenient block diagonal form, enumerate the basis states $|a_1 \ldots a_{n_d} \rangle$ such that the first $r^{(r+1)/2-n_c}$ states correspond to the set of $\{ a_1,\ldots, a_{n_d}\}$ such that given a particular numerical combination of the string ``$b_1 \ldots b_{n_c}$'', the tensor $T_{a_1 \ldots a_{n_d} b_1 \ldots b_{n_c}}$ is non-zero. There are $r^{n_c}$ distinct combinations possible for ``$b_1 \ldots b_{n_c}$'', and for each single combination, the set of allowed $\{ a_1,\ldots, a_{n_d}\}$ such that $T_{a_1 \ldots a_{n_d} b_1 \ldots b_{n_c}}$ is non-zero has cardinality $r^{(r+1)/2-n_c}$. The next $r^{(r+1)/2-n_c}$ states correspond to a different choice of ``$b_1 \ldots b_{n_c}$'', and so on. In this way of enumerating the basis, the reduced density matrix assumes a block-diagonal form, with $r^{n_c}$ blocks along the diagonal, each of size $r^{(r+1)/2-n_c} \times r^{(r+1)/2-n_c}$. Thus the total size of $\rho_r$ is $r^{(r+1)/2} \times r^{(r+1)/2}$, as expected (since $\dim {\cal V} = r^{(r+1)/2}$). So for $n_c < n_d$, by abuse of notation, we write graphically \eqn{}{ \musepic{\numerouscontractions} = \begin{pmatrix} J_{r^{(r+1)/2-n_c}} \\ & \ddots \\ && J_{r^{(r+1)/2-n_c}} \end{pmatrix}_{r^{(r+1)/2} \times r^{(r+1)/2}} \qquad n_c < n_d\,, } where $J_{n}$ is the $n \times n$ matrix of all ones, and all entries outside the diagonal blocks are $0$. We note that this argument holds regardless of precisely which $n_c$ indices in \eno{TwoTrho} were traced out. If we normalize $\rho$ with an overall factor of $1/\langle \psi|\psi\rangle = r^{-(r+1)/2}$ so that $\Tr \rho=1$, the reduced density matrix will automatically have $\Tr \rho_r =1$. Further, it will have $r^{n_c}$ non-zero eigenvalues, each equaling $\lambda_i = r^{-n_c}$ (this follows from a standard result on the diagonalization of such block matrices).
From the eigenvalues, it follows that the von Neumann entropy of the (normalized) reduced density matrix will be \eqn{SingleTensorEnt2}{ S= -\sum_i \lambda_i \log \lambda_i = n_c \log r \hspace{5mm} \textrm{(perfect state with $n_c < n_d$ spins traced out)\,.} } Since $n_c$ counts the number of contractions between the two copies of the state $\psi$ in the reduced density matrix (and consequently the number of diagonal blocks in the matrix representation of $\rho_r$, which is $r^{n_c}$), we conclude in this case that for $n_c < n_d$, the von Neumann entropy is proportional to precisely the number of such contractions (more precisely, equal to the logarithm of the number of blocks in the Jordan block-diagonal matrix representation of $\rho_r$). It is instructive to apply what we have learned so far to the previous examples of \eno{rho1}-\eno{rho3Graph}. We conclude that the latter two examples satisfy $n_c \geq n_d$ and the reduced density matrices have the explicit diagonal form $\rho_2 = {1\over 3^2} I_{3^2}$, $\rho_3 = {1\over 3} I_3$. The first example has $n_c < n_d$, and in an appropriate basis, $\rho_1 = {1\over 3^2} \diag (J_3, J_3, J_3)$. Correspondingly, the von Neumann entropies are $S_2 = 2\log 3$, $S_3= \log 3$ and $S_1 = \log 3$, which is consistent with the expectation that the von Neumann entropy in tensor networks built out of perfect tensors is proportional to the minimal number of cuts needed to separate out the traced out part of the tensor network from the rest. We have so far focused on the simplest explicit examples, but the reasoning we have used for the state given by \eno{TwoTrho} works for {\it any} state constructed out of any number of copies of the perfect tensor $T$ of fixed rank-$(r+1)$, with the proviso that we have already applied the rule \eno{ContractRule} wherever possible to reduce the reduced density matrix to its ``simplest'' form. 
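These eigenvalue counts can be confirmed numerically. Assuming (for illustration only) that the running $r=3$ example is realized by the standard rank-$4$ qutrit perfect tensor built from the classical code $c = a+b$, $d = a+2b \pmod 3$, the following sketch constructs the normalized four-site state and checks $\rho_1$, $\rho_2$, $\rho_3$ and their entropies directly:

```python
import numpy as np

r = 3
# Standard rank-4 qutrit perfect tensor (assumed realization):
# T[a,b,c,d] = 1 iff c = a+b and d = a+2b (mod 3).
T = np.zeros((r,) * 4)
for a in range(r):
    for b in range(r):
        T[a, b, (a + b) % r, (a + 2 * b) % r] = 1.0

psi = T / np.sqrt(r ** 2)   # normalize: <psi|psi> = r^{(r+1)/2} = 9

def reduced_rho(t, keep):
    """Trace out all of the 4 qutrit sites except those in `keep`."""
    rest = [i for i in range(4) if i not in keep]
    m = np.transpose(t, keep + rest).reshape(r ** len(keep), -1)
    return m @ m.T

def entropy(rho):
    """Von Neumann entropy from eigenvalues, discarding numerical zeros."""
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return -np.sum(lam * np.log(lam))

rho1 = reduced_rho(psi, [0, 1, 2])   # one site traced out: n_c = 1 < n_d = 3
rho2 = reduced_rho(psi, [0, 1])      # two sites traced out: n_c = n_d = 2
rho3 = reduced_rho(psi, [0])         # three sites traced out: n_c = 3 > n_d = 1

assert np.allclose(rho2, np.eye(r ** 2) / r ** 2)   # (1/3^2) I_{3^2}
assert np.allclose(rho3, np.eye(r) / r)             # (1/3) I_3
assert np.isclose(entropy(rho1), np.log(3))         # S_1 = log 3
assert np.isclose(entropy(rho2), 2 * np.log(3))     # S_2 = 2 log 3
assert np.isclose(entropy(rho3), np.log(3))         # S_3 = log 3
```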
Even with only a few tensors of modest rank, one can quickly construct enormous density matrices, owing to the doubly exponential scaling of the various quantities involved. However, armed with \eno{ContractRule} and the properties of perfect tensors, it becomes possible to determine these density matrices analytically. The dimension of the basis ${\cal V}$ in more complicated examples will differ from the case of a single tensor determined above, but one can still parametrize the basis such that the matrix representation of the reduced density matrix $\rho_r$ (for $n_c < n_d$) is in Jordan block-diagonal form. The size of each diagonal block will also depend on the details of the original state $\rho$ and the choice of $n_c, n_d$ (even with $n_c + n_d$ no longer equaling $r+1$), but the number of blocks will still equal $r^{n_c}$, where by definition, $n_c$ is the number of contractions between the two copies of the state $\psi$ in the ``simplified'' reduced density matrix as described in the previous paragraph. Thus the von Neumann entropy (for normalized states) will evaluate to $S=n_c \log r$ as long as $n_c < n_d$. As an example, if the reduced density matrix was obtained by tracing out the bottom three legs of the state in \eno{psiComplic}, we first apply rule \eno{ContractRule} to replace the contraction of the form $\cdots \musepic{\contractthreeinline} \cdots$ by a single horizontal line (times a constant factor) to obtain the simplified reduced density matrix, which now has simply one contraction ($n_c = 1$) between the two copies of the state $\psi$. Thus $S = \log r$. This was also expected from the ``minimal number of cuts'' intuition, as we only needed one cut to separate the three sites to be traced out from the rest of the tensor network. We will return to similar calculations for holographic states next.
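The same logic can be checked end-to-end on a toy two-tensor network. The sketch below (again assuming the standard rank-$4$ qutrit perfect tensor, $c = a+b$, $d = a+2b \pmod 3$, as a stand-in for the rank-$(r+1)$ tensors of the text) contracts two tensors along one internal bond, traces out the three legs of one tensor, and finds $S = \log r$, matching the single cut needed to sever the traced-out tensor from the rest of the network:

```python
import numpy as np

r = 3
# Assumed rank-4 qutrit perfect tensor: T[a,b,c,d] = 1 iff
# c = a+b and d = a+2b (mod 3).
T = np.zeros((r,) * 4)
for a in range(r):
    for b in range(r):
        T[a, b, (a + b) % r, (a + 2 * b) % r] = 1.0

# Two-tensor state with 6 dangling legs and one internal bond:
# psi[a,b,c,e,f,g] = sum_d T[a,b,c,d] T[d,e,f,g],
# reshaped into a (abc) x (efg) matrix.
psi = np.einsum('abcd,defg->abcefg', T, T).reshape(r ** 3, r ** 3)

rho = psi @ psi.T        # trace out the legs (e, f, g)
rho /= np.trace(rho)     # normalize the reduced density matrix

lam = np.linalg.eigvalsh(rho)
lam = lam[lam > 1e-12]
S = -np.sum(lam * np.log(lam))

# One bond separates the traced-out tensor from the rest: S = log r.
assert np.isclose(S, np.log(r))
```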
\subsection{Efficient techniques: splits and cycles} \label{SPLITSCYCLES} Before discussing the partial trace of density matrices associated with the dual graph tensor network, let's warm up with a simpler computation: the inner product of the holographic state with itself. As described in section \ref{DUALGRAPH}, a holographic state in the vacuum AdS cut-off tree geometry is specified by a prime $p$ which labels the Bruhat--Tits tree ${\cal T}_p$, a prime $r$ which relates to the rank of the perfect tensors forming the tensor network, and the bulk IR cut-off parameter $\Lambda$. In fact, before discussing this in full generality for any $p, r$ and $\Lambda$, let's work it out in the case of the state shown in figure \ref{fig:SimplePsi} and in the process introduce some more useful tools and techniques (the state in figure \ref{fig:SimplePsi} corresponds to the choice $p=3, r=7$, and $\Lambda = 2$). Diagrammatically, computing $\langle \psi | \psi \rangle$, which is the same as contracting all free dangling legs of $\psi$ with another copy of $\psi$, is represented as \eqn{psiNormStep1}{ \langle \psi|\psi \rangle = \musepic{\bigcontraction} \ . } We have chosen to omit the lines representing contraction of dangling legs at the outermost vertices, but no confusion should arise. To evaluate the inner product, one must contract the dangling legs at {\it all} vertices. The contractions between the two copies of $\psi$, shown in blue in~\eno{psiNormStep1} to guide the eye, take precisely the form of the contraction rule \eno{ContractRule}. Note that the number of dangling legs at the vertices of $|\psi \rangle$, which we denoted $v_d$ in section \ref{DUALGRAPH}, is now reinterpreted as the number of contractions $n_c$ between pairs of vertices from the two copies of $\psi$. 
On the other hand, the number of contractions at a vertex within each $| \psi \rangle$, denoted $v_c$ in section \ref{DUALGRAPH}, can be reinterpreted as the number of ``dangling legs'' $n_d$ in each of the contractions between the two copies of $\psi$. Since $v_c \leq v_d$ at any vertex, which we recall follows from the requirement $(r+1)/2 \geq 2\Lambda$, we now have $n_c \geq n_d$ for the contracted pair of vertices. Thus we can apply rule \eno{ContractRule}. For example, \eqn{psi4Rule}{ \musepic{\fourtofour} \ = \ \musepic{\fourlines} \ , } at each of the four pairs of vertex contractions between the dangling legs of vertices of $\psi$ which originally had four dangling legs each. The second kind of contraction between the two copies of $\psi$ depicted in \eno{psiNormStep1}, which is between vertices carrying six dangling legs each, also admits a simplification. Since $n_c \geq n_d$ here as well, we may simplify this contraction using rule \eno{ContractRule}, \eqn{psi6Rule}{ \musepic{\contractsix} = \qty(\musepic{\smcircle} )^2 \times \ \musepic{\twolines} = r^2 \times\ \musepic{\twolines}\ . } The result of applying the contraction rule at all vertex contractions is, \eqn{psiNormStep4}{ \langle \psi|\psi \rangle = \musepic{\bigspread} \,. } We note the creation of new ``cycles'' (or disconnected loops) in \eno{psiNormStep4}. Each such cycle contributes a factor of $r$, as explained in \eno{loopEqn}. The factors of $r^2$ explicitly shown in \eno{psiNormStep4} originate from the application of \eno{psi6Rule}. One counts $16$ new cycles and eight factors of $r^2$, thus $\langle \psi | \psi \rangle = r^{16} \times (r^2)^8 = r^{32}$, where $r=7$.\footnote{The normalization, as well as several other quantities as computed by these kinds of diagrams, are reminiscent of similar calculations in topological quantum field theory. 
In that setting, for example, the norm of the state determined by a nullbordism of a particular manifold is computed by gluing two copies of the nullbordism along their common boundary, corresponding to the inner product in Hilbert space.} Clearly this method is cumbersome to implement. We now propose a shorthand procedure which makes the computation of the norm much more efficient (see figure \ref{fig:proc} for reference): \begin{figure} \begin{equation*} \musepic{\computeone} \longrightarrow \musepic{\computetwo} \longrightarrow \musepic{\computethree} \end{equation*} \caption{Computation of $\langle \psi | \psi \rangle$.} \label{fig:proc} \end{figure} \begin{enumerate} \item We start with the holographic state $|\psi \rangle$ depicted in figure \ref{fig:SimplePsi} constructed from rank-$8$ tensors, but suppress drawing the dangling legs. The number of dangling legs at each vertex can be reconstructed from the knowledge of the rank of the tensor. \item {\bf (Splits.)} We wish to contract all vertices of $\psi$ with itself. Since the number of dangling legs at each vertex of $\psi$ is greater than or equal to half the rank of the perfect tensor, we can apply the contraction rule \eno{ContractRule} at each of the vertices. However, we draw just one-half, say the left half, of the entire diagram, and label the vertices with an appropriate power of $r$ wherever the contraction rule \eno{ContractRule} prescribes factors of $r$. In practice, as explained below \eno{ContractRule}, this comes down to ``splitting'' open each vertex of the tensor network as shown in the second step in figure \ref{fig:proc}, and assigning any prescribed powers of $r$ at the corresponding vertex, coming from the application of the contraction rule \eno{ContractRule}. Such powers of $r$ will be referred to as ``splits''. \item {\bf (Cycles.)} Finally, we would like to count the number of new cycles created upon performing all the ``splits'' in step 2. 
Focusing on the left-half of the diagram in \eno{psiNormStep4}, it is clear that each disconnected bond (line-segment) in step 2 above will end up in a ``cycle'' (disconnected loop). So we simply join together the end-points of each line-segment, creating as many cycles as disconnected line-segments. \end{enumerate} Following this procedure to compute $\langle \psi|\psi \rangle$, as depicted in figure \ref{fig:proc}, we verify that we have created $16$ new cycles and introduced $8$ factors of $r^2$ as before, yielding $\langle \psi|\psi \rangle = r^{16} \times (r^2)^8 = r^{32}$. \subsection{Norm of a holographic state} \label{NORM} In this subsection we work out the norm of a general holographic state dual to the $(p+1)$-regular Bruhat--Tits tree geometry, with cutoff $\Lambda$, constructed as a dual graph tensor network made from perfect tensors of rank-$(r+1)$, where we assume $(r+1)/2 \geq 2\Lambda$. This ground state normalization is necessary for any computation involving the vacuum state or density matrix at the $p$-adic boundary. The black hole and more general backgrounds require an analogous normalization constant which will be determined later. Let us begin by tabulating the ``type'' of vertices which make up the tensor network corresponding to a general holographic state. Two vertices are of the same ``type'' if the tensors at the respective vertices have the same number of legs (indices) contracted with other tensors. We refer to this number as $v_c$. Since the total number of legs (contracted and uncontracted) is constant (and equals $r+1$), vertices of the same type also have an identical number of dangling (uncontracted) legs (which we refer to as $v_d$). Thus each type of vertex may appear with a non-trivial multiplicity in the holographic state.
Some reflection immediately leads to the conclusion that for a fixed cutoff $\Lambda$, all vertices with the number of contracted legs $v_c$ in the set $\{2, 4, \ldots, 2\Lambda\}$ will appear in the tensor network. The multiplicity of each type of vertex is straightforward to work out thanks to the highly symmetric nature of the dual graph. We tabulate the results in table \ref{tb:vertices}. \begin{table}[h!] \centering \begin{tabular}{|c|c|c|} \hline $v_c$ & $v_d$ & multiplicity $M$\\ \hline \hline $2\Lambda$ & $(r+1) - 2\Lambda$ & $p+1$ \\ $2\Lambda - 2$ & $(r+1) - (2\Lambda -2)$ & $(p+1)(p^1- p^0)$\\ $2\Lambda - 4$ & $(r+1) - (2\Lambda -4)$ & $(p+1)(p^2- p^1)$\\ \vdots & \vdots & \vdots \\ 4 & $(r+1) - 4$ & $(p+1)(p^{\Lambda-2}- p^{\Lambda-3})$\\ 2 & $(r+1) - 2$ & $(p+1)(p^{\Lambda-1}- p^{\Lambda-2})$ \\ \hline \end{tabular} \caption{Types of vertices in a general holographic state $|\psi \rangle$.} \label{tb:vertices} \end{table} The multiplicities in the table add up to give $\sum_i M^{(i)} = (p+1) p^{\Lambda-1}$, where $M^{(i)}$ is the multiplicity of the vertex type $i$ and we sum over all vertex types. This precisely equals the total number of vertices in the tensor network, and consequently the total number of vertices at the boundary of the (cutoff) Bruhat--Tits tree, which is $\mathbb{P}^1(\mathbb{Z}/p^{\Lambda} \mathbb{Z})$. While computing the norm of $|\psi\rangle$, we argued that we may apply the contraction rule \eno{ContractRule} at each vertex, where the role of $v_d$ gets mapped to $n_c$, the number of legs contracted within each vertex pair coming from the two copies of $\psi$, while that of $v_c$ is mapped to $n_d$, the number of ``dangling'' legs. We put dangling in quotes because in fact these legs are still contracted with other vertices within the {\it same} copy of $\psi$, but for the purposes of applying the contraction rule \eno{ContractRule}, we may treat them as dangling. 
We are able to apply \eno{ContractRule} because $v_d \geq v_c \Rightarrow n_c \geq n_d$. As outlined in the previous subsection, we begin by performing ``splits''. At each vertex, we pick up a factor of $r^{(n_c-n_d)/2} = r^{(v_d-v_c)/2}$ as prescribed by \eno{ContractRule}. Since each vertex type comes with a certain multiplicity, we really pick up a factor of $r^{M(v_d-v_c)/2}$ for each vertex {\it type}. Multiplying together factors from each vertex type, we obtain that ``splitting'' leads to an overall factor of $r^{N_{\rm splits}}$, where \eqn{psiNormSplits}{ N_{\rm splits} \equiv \sum_{{\rm type\ }i} M^{(i)} \left({v_d^{(i)} - v_c^{(i)}\over 2}\right) = {p+1 \over 2(p-1)}\left( p^\Lambda(r-3) - p^{\Lambda-1}(r+1) +4 \right). } In the previous example, we specialized to $p=3, \Lambda=2, r=7$, in which case $N_{\rm splits} = {4 \over 2 \cdot 2} \left( 3^2 \cdot 4 - 3^1 \cdot 8 + 4\right) = 16$ as we found earlier. After the ``splitting'', we proceed to count the number of new cycles created. Each new cycle contributes a factor of $r$. Now the number of cycles is equal to the number of disconnected bonds obtained after the splitting. This number can be obtained by summing up the number of contracted legs $v_c$ in a single copy of $\psi$ at each vertex, and dividing by two to compensate for the over-counting. This gives \eqn{psiNormLoops}{ N_{\rm cycles} \equiv {1\over 2} \sum_{{\rm type\ }i} M^{(i)} v_c^{(i)} = {p+1 \over p-1} \left(p^\Lambda -1 \right). } In our previous example, we had $N_{\rm cycles} = {4\over 2}(3^2-1) = 16$, as expected. Combining the results, we obtain \eqn{psiNorm}{ \log \langle \psi | \psi \rangle = \left(N_{\rm splits} + N_{\rm cycles}\right)\log r = {p+1 \over 2(p-1)} \left( p^{\Lambda}(r-1) - p^{\Lambda-1}(r+1) +2 \right) \log r\,. } For $\Lambda \gg 1 \Rightarrow r \gg 1$ and fixed $p$, the norm takes the asymptotic form \eqn{psiNormAsymp}{ \log \langle \psi | \psi \rangle \to {p+1 \over 2} p^{\Lambda} r \log r\,.
} One may see from this and other considerations that the dimension of the boundary Hilbert space grows very rapidly. Still, it is possible to make sense of quantum information-theoretic quantities such as density matrices in this limit; see section~\ref{AFALGEBRA}. \subsection{Bipartite entanglement and the Ryu--Takayanagi formula} \label{GENUS0SINGLE} As explained in section \ref{GENUSZERORESULTS}, fixing a planar embedding, two given boundary points $x$ and $y$ define a unique (up to the choice of the complementary set, which can be eliminated by specifying the orientation) connected interval on the tensor network, which we denote by $A$. In particular, $A$ is given as a set of nodes on the tensor network, each of which has a number of contracted and uncontracted legs attached to it. Our goal is to compute the entanglement of $A$ with its complement $B=A^c$. We proceed by writing down the pure density matrix for the full holographic state $|\psi\rangle$, \eqn{}{ \rho = {1 \over \langle \psi | \psi \rangle } |\psi \rangle\langle \psi |\,, } and computing the reduced density matrix obtained by tracing out the region $B$, \eqn{}{ \rho_A = \Tr_B \rho = {1 \over \langle \psi | \psi \rangle } \Tr_B |\psi \rangle\langle \psi |\,. } The trace over region $B$ is performed exactly in the manner we described previously. Graphically, one may represent this trace by taking two copies of $|\psi \rangle$, then ``gluing'' the vertices along $B$. Just as in the computation of the normalization, this set of contractions implements the trace of the density matrix, now only over the qudits in $B$. The first step is the application of the contraction rule \eno{ContractRule} wherever possible to reduce the density matrix to its simplest form. At this point, like in the previous section, we parametrize the basis of states ${\cal V}$ such that the reduced density matrix in this basis assumes a Jordan block-diagonal form with all diagonal blocks simply matrices of ones.
Then the calculation of the von Neumann entropy reduces to the computation of the number of blocks, since, as discussed earlier, $S = \log N_{\rm blocks}$. We begin by simplifying the reduced density matrix using the contraction rule \eno{ContractRule} at all vertices in $B$. Like in the previous subsection, we would like to keep track of the number of splits and the number of new cycles generated in the process, as these factors not only affect the overall normalization of $\rho_A$ but also dictate the form of the simplified reduced density matrix. We have \eqn{}{ N_{\rm splits} &= \sum_{v \in B} {v_d - v_c \over 2} \cr N_{\rm cycles} &= {1 \over 2} \left(\sum_{v \in B} v_c - C_{AB} \right), } where $C_{AB}$ is the number of tensor legs which extend between $A$ and $B$ (equivalently, the number of tensor leg contractions between vertices in $A$ and $B$); this is precisely the number of contractions between the two copies of $\psi$ in the diagrammatic representation of the reduced density matrix. For each of the $C_{AB}$ tensor legs, we sum over the possible values in $0, 1, \ldots, r-1$, giving $r^{C_{AB}}$ terms. In fact, from our discussion in the previous section, it follows that the number of blocks in the block-diagonal representation of $\rho_A$ will be precisely $r^{C_{AB}}$, where each term of the internal sum implies a certain block of non-vanishing matrix elements. With the choice of basis explained in the previous subsection, the density matrix becomes block diagonal with identical blocks of all $1$'s. The size of each block can be explicitly determined using the properties of perfect tensors, essentially by counting the number of allowed configurations of external legs for a given assignment of indices on the $C_{AB}$ legs. This is always a power of $r$ which can be determined in terms of other quantities by demanding the usual condition that the trace of the reduced density matrix is $1$.
If the size of each block is $r^\sigma \times r^\sigma$, then the total size of the density matrix is $r^{\sigma +C_{AB}} \times r^{\sigma +C_{AB}}$ (or equivalently $\dim {\cal V} = r^{\sigma +C_{AB}}$). Thus the explicit form of the reduced density matrix for any single interval can always be written as: \eqn{RhoAInterval}{ \rho_A = { r^{N_{\rm splits} + N_{\rm cycles}} \over \langle \psi|\psi \rangle} \begin{pmatrix} \begin{bmatrix} 1 & \cdots & 1 \\ \vdots & \ddots & \vdots \\ 1 & \cdots & 1 \end{bmatrix} r^{\sigma} & \vspace{-1 mm} \\ \hspace{-3.5 mm} r^{\sigma} \\ & \ddots & \\ & & \vspace{-1mm} \hspace{3.5 mm} r^{\sigma} \\ & & r^{\sigma} \begin{bmatrix} 1 & \cdots & 1 \\ \vdots & \ddots & \vdots \\ 1 & \cdots & 1 \end{bmatrix} \end{pmatrix}_{r^{\sigma +C_{AB}} \times r^{\sigma +C_{AB}}} } where the number of blocks is $r^{C_{AB}}$. This is a generalization of the earlier examples, where the various parameters depend on $\Lambda$ and the specific interval chosen. Now we compute the trace \eqn{}{ \Tr \rho_A = { r^{N_{\rm splits} + N_{\rm cycles}} \over \langle \psi|\psi \rangle} r^{\sigma +C_{AB}}\,, } and we recall that we computed the norm of the holographic state $| \psi \rangle$ in the previous subsection. However, for the purposes of simplifying the trace, we note that we can write the norm of $\psi$ as an independent sum over vertices in $A$ and $B$: \eqn{}{ \log_r \langle \psi|\psi \rangle = \left( \sum_{v \in A} {v_d-v_c \over 2} + \sum_{v \in B} {v_d-v_c \over 2} \right) + \left( {1\over 2} \sum_{v \in A} v_c + {1\over 2} \sum_{v \in B} v_c \right). } where the first parenthesis corresponds to the total power of $r$ originating from the contraction rule \eno{ContractRule} upon ``splitting'' all vertices of $\psi$, while the second parenthesis counts the total number of new loops created after ``splitting'' all vertices of $\psi$. 
Combining all the results, we obtain an expression which only depends on quantities in region $A$ and the number of bonds which connect $A$ to $B$: \eqn{TraceCond}{ \log_r \Tr \rho_A = -\sum_{v \in A} {v_d \over 2} + \sigma + {C_{AB} \over 2}\,. } For any (normalized) reduced density matrix, the trace is always unity, so the logarithm on the left-hand side vanishes. This determines the size of the blocks in terms of other quantities which depend only on the chosen region to be traced out. Returning to the calculation of the von Neumann entropy, we have \eqn{SACAB}{ S_A = C_{AB} \log r\,. } This follows by direct diagonalization of the block-diagonal reduced density matrix, and gives an explicit connection between the von Neumann entropy and the number of tensor bonds extending between $A$ and $B$. This kind of behavior for single intervals in perfect tensor networks was an attractive feature of~\cite{Pastawski:2015qua}, where a general argument was given for this behavior based on the properties of perfect tensors. We will explain in this section that the network proposed here for $p$-adic AdS/CFT gives analogous results for a broader class of physical situations such as black hole backgrounds and multiple intervals. A key motivation for this specific (i.e.\ ``dual graph'') tensor network is the relationship between $S_A$, thought of as the boundary entanglement entropy between $A$ and $B$, and the bulk geometry of geodesics on the Bruhat--Tits tree. Recall that by construction, $C_{AB}$ is the number of edges in the dual graph tensor network which originate from a vertex in $A$ and end on a vertex outside $A$. For example, in the single interval example in figure \ref{fig:connected}, $C_{AB} = 4$, where $A$ is the connected region on the tensor network in between boundary points $x$ and $y$.
In other words, $C_{AB}$ is the number of edges which start on a tensor network vertex belonging to the region $A$ ``in between'' $x$ and $y$, but end outside this region. In the case when the boundary is $\mathbb{P}^1(\mathbb{Q}_p)$ (although this argument also works when the boundary is the ``infinite line'' $\mathbb{Q}_p$) the outside of $A$ is precisely the region ``between'' $x$ and $y$ which is {\it complementary} to $A$. Thus by construction, the edges on the tensor network constituting $C_{AB}$ cut across those edges (geodesics) on the Bruhat--Tits tree which separate the boundary points $x$ and $y$ into two disconnected parts of the tree, if we imagine cutting the tree at precisely those edges (geodesics). In fact, on a tree geometry, there are precisely as many such edges as the length of the boundary anchored geodesic joining $x$ and $y$ (if we normalize the length of each edge to unity). Thus we conclude that \eqn{CABlength}{ C_{AB} = {\rm length} (\gamma_{xy})/\ell\,, } where $\gamma_{xy}$ is the (regulated) geodesic on the Bruhat--Tits tree joining $x$ to $y$ and $\ell$ is the length of each edge on the tree. If the boundary is $\mathbb{Q}_p$ so that $x,y \in \mathbb{Q}_p$, then \eqn{lengthQp}{ {\rm length} (\gamma_{xy})/\ell &= d(C,x)+ d(C,y) - 2d(C, {\rm anc}(x,y)) \cr &= 2\Lambda + 2\log_p |x-y|_p \cr &= 2\log_p \left| {x-y \over \epsilon} \right|_p \cr } where $C$ is any point on the Bruhat--Tits tree, ${\rm anc}(x,y)$ is the unique vertex on the Bruhat--Tits tree where geodesics from $C, x$ and $y$ meet, and $d(\cdot,\cdot)$ measures the graph distance between two points. We have defined $\epsilon \in \mathbb{Q}_p$ to be the cutoff \eqn{}{ \epsilon \equiv p^{\Lambda}\,, } which goes to zero $p$-adically, i.e.\ $|\epsilon|_p \to 0$ as $\Lambda \to \infty$.
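For rational boundary points, the regulated length is a one-line computation once the $p$-adic valuation is in hand; the helper functions below are an illustrative sketch (the names are ours), using $\log_p |x-y|_p = -v_p(x-y)$:

```python
from fractions import Fraction

def v_p(x, p):
    """p-adic valuation of a nonzero rational x, so |x|_p = p**(-v_p(x))."""
    x = Fraction(x)
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def geodesic_length(x, y, p, Lam):
    """Regulated length/ell of the boundary-anchored geodesic:
    2*Lam + 2*log_p|x - y|_p = 2*Lam - 2*v_p(x - y)."""
    return 2 * Lam - 2 * v_p(Fraction(x) - Fraction(y), p)

# Hypothetical sample points with p = 3 and cutoff Lambda = 2:
# |9 - 0|_3 = 3^{-2}, so the points are not yet resolved at this cutoff.
assert geodesic_length(9, 0, 3, 2) == 0
assert geodesic_length(1, 0, 3, 2) == 4                 # |1|_3 = 1
assert geodesic_length(Fraction(1, 3), 0, 3, 2) == 6    # |1/3|_3 = 3
```

By \eno{SACAB} and \eno{CABlength}, multiplying the returned length by $\log r$ gives the corresponding single-interval entropy.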
If $x,y \in \mathbb{P}^1(\mathbb{Q}_p)$, then \eqn{lengthP1Qp}{ {\rm length} (\gamma_{xy})/\ell &= d(C,x)+ d(C,y) - 2d(C, {\rm anc}(x,y)) \cr &= 2\log_p { |{\mathfrak B}(x,y)|_{\rm PS} \over |\epsilon|_p}, } where we take $C$ to be the ``radial center'' of the Bruhat--Tits tree (so that $d(C,x) = d(C,y) = \Lambda$), and as explained around \eno{RTgenus0circle}, $|{\mathfrak B}(x,y)|_{\rm PS}$ is the Patterson-Sullivan measure of the smallest clopen ball in $\mathbb{P}^1(\mathbb{Q}_p)$ containing both $x$ and $y$.\footnote{In fact, we can interpret the ``interval size'' $|x-y|_p = |{\mathfrak B}(x,y)|_{\rm Haar}$ in \eno{lengthQp} as the Haar measure of the smallest clopen ball in $\mathbb{Q}_p$ containing both $x$ and $y$.} As discussed in section \ref{sssec:results}, equations \eno{SACAB}-\eno{lengthP1Qp} correspond to the $p$-adic analog of the RT formula. We end this discussion by remarking that it is straightforward to see that the lengths in \eno{lengthP1Qp} and \eno{lengthQp} can be re-expressed using the signed-overlap function $\delta(\cdot,\cdot)$ described in section \ref{GENUSZERORESULTS} (see \eno{lengthxy}). This is more convenient because $\delta(\cdot,\cdot)$ admits a choice of alternate paths, which is important in the genus $1$ case (see, for example, sections \ref{RTBTZresults} and \ref{GENUS1BTZ}) where there are always two choices. \subsection{Bipartite entanglement for a disconnected region and subadditivity} \label{GENUS0DOUBLE} So far we have discussed bipartite entanglement entropy only in the case of a connected region, where we are given a pair of points on $\mathbb{Q}_p$ (or $\mathbb{P}^1(\mathbb{Q}_p)$). We now extend the discussion to include the case of a ``disconnected region'' (recall definition \ref{def:connectedregion}). 
To specify a disconnected region built from two connected subregions, we must specify a set of four distinct points on the projective line on which the spatial slice of the CFT resides, and which constitute the entangling surface. Given a set of four boundary points and a choice of planar embedding for the tensor network, there are two different choices for constructing a (complementary pair of) disconnected region on the tensor network, as previously illustrated in figure \ref{fig:connectedB}. Each ``path-disjoint'' piece of a disconnected region is specified by the set of vertices at the boundary of the tensor network ``in between'' a chosen pair of boundary points, just like for connected regions in the previous subsection (recall definition \ref{def:inbetween} for the notion of ``in between''). We define ``path-disjoint'' as follows: \begin{definition} Two regions (i.e.\ sets of vertices) on the dual graph tensor network are {\it path-disjoint} if they do not share any common vertices, {\it and} the tensors located on the vertices in one set are not contracted with the tensors in the other set via any of the ``shortest bonds''.\footnote{Recall that the ``shortest bonds'' on the tensor network are the bonds in bijective correspondence with the UV edges (equivalently the boundary edges) of the cutoff Bruhat--Tits tree (see definition \ref{def:shortestbond}). For a choice of planar embedding, the notion of ``shortest bonds'' is $\PGL(2,\mathbb{Q}_p)$ invariant.} \end{definition} In particular, we use the notation $A = (x,y)$ to specify the connected region $A$ in the tensor network which corresponds to vertices on the network between boundary points $x,y$ on the tree. The ordering inside the parenthesis is used to tell $A$ apart from its complement. We will often use the convention that given a planar embedding, the region $A = (x,y)$ is given by the set of vertices ``in between'' $x$ and $y$ going counter-clockwise from $x$ to $y$.
Then the two choices for the complimentary pairs of path-disjoint subregions in figure \ref{fig:connectedB} correspond to $A_1 = (x_1,x_2), A_2=(x_3,x_4)$ and $A_3=(x_4,x_1), A_4=(x_2,x_3)$. The disconnected region $A = A_1 \cup A_2$ is the complement of $A^c=A_3 \cup A_4$. In vacuum, we expect $S(A) = S(A^c)$, and indeed this is borne out in our setup. A different choice of planar embedding will lead to a different pair of path-disjoint intervals $A_1=(x_i,x_j), A_2=(x_k,x_\ell)$ for distinct $i,j,k,\ell \in \{1,2,3,4\}$; however the von Neumann entropy $S(A_1A_2)$ will be independent of the choice of embedding, for the same reasons as in the case of the single-interval setup -- in this case it will depend solely on the specified boundary points $x_1, x_2, x_3, x_4$ via a conformally invariant cross-ratio constructed from them. For this reason, we make the following definition: \begin{definition} We define the {\it bipartite entanglement entropy of a disconnected region} constructed using boundary points $x_1,x_2,x_3$ and $x_4$, and denoted $S^{\rm disc.}(x_1,x_2,x_3,x_4)$, as the von Neumann entropy of the union $A= A_1 \cup A_2$ of connected regions $A_1 = (x_i, x_j)$ and $A_2=(x_k,x_\ell)$ in any chosen planar embedding, where the distinct indices $i,j,k,\ell \in \{1,2,3,4\}$ are selected such that $A_1$ and $A_2$ are path-disjoint. \end{definition} The calculation of bipartite entanglement in the case where $A = A_1 \cup A_2$ with $A_1, A_2$ appropriately chosen, proceeds almost identically to the single interval calculation described above. We begin by applying the contraction rule \eno{ContractRule} on all vertices in $B=A^c$. In this case, after applying the diagrammatic methods from the previous subsection, two things may happen. 
Simplifying using the contraction rule, one either ends up with two disjoint pieces for the simplified form of the reduced density matrix (here by disjoint, we mean the diagrammatic representation of the reduced density matrix splits into two pieces which do not share {\it any} edges), or a single connected diagram. In either case, the bipartite entanglement for the disconnected region follows the Ryu--Takayanagi formula. The case of the disjoint reduced density matrix is interpreted as a direct product reduced state, $\rho_A = \rho_{A_1} \otimes \rho_{A_2}$. Each of the disjoint reduced density matrix pieces can be evaluated using the method described in the previous subsection. This case is precisely when $S(A)=S(A_1 A_2) = S(A_1) + S(A_2)$, that is, $A_1$ and $A_2$ share no mutual information, defined to be \eqn{MI}{ I(A_1:A_2) \equiv S(A_1) + S(A_2) - S(A_1 A_2)\,. } This general result follows from elementary results on diagonalization of tensor products of density matrices. We also write this in the alternate, more suitable notation \eqn{MIx}{ I(x_i,x_j,x_k,x_\ell) \equiv S(x_i,x_j) + S(x_k,x_\ell) - S^{\rm disc.}(x_1,x_2,x_3,x_4)\,, } where $i,j,k,\ell \in \{1,2,3,4\}$ are distinct labels chosen so that $(x_i,x_j)$ and $(x_k,x_\ell)$ are {\it any} connected subregions path-disjoint from each other, and $S^{\rm disc.}(x_1,\ldots,x_4)$ is the bipartite entanglement of the union $(x_i,x_j) \cup (x_k,x_\ell)$.\footnote{In fact as we stressed previously, $S^{\rm disc.}(x_1,\ldots,x_4)$ equals the bipartite entanglement of {\it any} disconnected region constructed from boundary points $x_1,\ldots,x_4$.} This situation is depicted schematically in figure \ref{fig:realTwoIntA}. 
\begin{figure}[thb] \begin{subfigure}[]{0.49\textwidth} \centering \[ \musepic{\twointIzero} \] \caption{} \label{fig:realTwoIntA} \end{subfigure} \begin{subfigure}[]{0.49\textwidth} \centering \[ \musepic{\twointIpos} \] \caption{} \label{fig:realTwoIntB} \end{subfigure} \caption{A schematic representation of the bipartite entanglement entropy calculation for the cases shown in figures \ref{fig:A1A4} and \ref{fig:A1A2}, here (a) and (b) respectively, to emphasize the parallel with the usual story over the reals. The disconnected region $A= A_1 \cup A_2$ is shown in black, while the minimal geodesics homologous to $A$ are shown in blue.} \label{fig:realTwoInt} \end{figure} In the other case, where we obtain a reduced density matrix given by a single diagram (as opposed to two disjoint diagrammatic pieces), we can again employ the method of the previous subsection to calculate the reduced density matrix as well as the entanglement entropy. In this case $S(A_1 A_2) \leq S(A_1) + S(A_2)$, i.e. the mutual information $I(A_1:A_2)$ is non-negative. This situation is depicted schematically in figure \ref{fig:realTwoIntB}. In this case the entanglement entropy $S(A)$ of the disconnected region $A = A_1 \cup A_2$ is still given by the logarithm of the number of blocks in the Jordan block-diagonal representation of the reduced density matrix. Just like in the case of the single interval, the number of blocks is $r^{C_{AB}}$, where $C_{AB}$ is the number of edges on the tensor network which originate on a vertex in $A = A_1 \cup A_2$ but end outside $A$ (i.e.\ in $B = A^c$). Let us now describe these cases in more detail. The two possible scenarios discussed above can be classified in terms of the entangling surface consisting of the boundary points specifying the disconnected region $A$.
If $A_1 = (x_1, x_2)$, and $A_2 = (x_3,x_4)$ are two path-disjoint intervals on the tensor network (with $x_i \neq x_j$ for all $i \neq j$) with $A=A_1 \cup A_2$, then the sign of the logarithm of the cross-ratio \eqn{uCrossRatio}{ u(x_1,x_2,x_3,x_4) \equiv \left|{x_{12} x_{34} \over x_{14}x_{23}}\right|_p, } dictates which of the scenarios depicted in figure \ref{fig:realTwoInt} will occur. If the logarithm is non-positive, then the pairwise boundary anchored geodesics between $x_1$ \& $x_2$ and $x_3$ \& $x_4$ do not overlap (i.e. they intersect at most at a single vertex) on the Bruhat--Tits tree, and in fact constitute the minimal surfaces homologous to $A_1$ and $A_2$ respectively.\footnote{Recall definition \ref{def:homologous} for the homologous condition.} Thus $S^{\rm disc.}(x_1,x_2,x_3,x_4)=S(A_1A_2) = S(A_1) + S(A_2)$. \begin{figure}[t] \centering \[ \musepic{\twointplain} \qquad \musepic{\twointcolored} \] \caption{The setup for the bipartite entanglement calculation for a disconnected region, in the case when the cross-ratio $u(x_1,x_2,x_3,x_4)$ defined in \eno{uCrossRatio} is strictly less than one. Left: The holographic state with the region marked in blue. Right: Computation of the reduced density matrix, and the depiction of the minimal surface homologous to the boundary region. See the main text for the detailed explanation of the figure.} \label{fig:A1A4} \end{figure} Figure~\ref{fig:A1A4} shows an example of a disconnected region setup for $A_1=(x_1,x_2)$, $A_2=(x_3,x_4)$ with $u(x_1,x_2,x_3,x_4)<1$.\footnote{In figure \ref{fig:A1A4}, $u(x_1,x_2,x_3,x_4) = p^{-1}$. See the discussion around \eno{ugraph} for the explanation.} As usual, the tensor network is shown in red with regions marked by blue vertices, while the Bruhat--Tits tree geometry is shown in black.
Applying the contraction rule \eno{ContractRule} to trace out the complement of $A = A_1 \cup A_2$ in the state shown in the left subfigure ``splits'' open vertices on the tensor network marked in black (in the manner described in the previous subsections), while the dashed bonds on the tensor network turn into ``cycles''. The bonds marked in green connect vertices in $A$ to vertices in the complement of $A$. The simplified reduced density matrix is given by the subdiagram containing blue vertices, along with black and green colored bonds, where we remember to ``split open'' all the black vertices so that all bonds originally coincident on such vertices no longer meet. Thus the reduced density matrix manifestly decomposes into two disjoint pieces, and can be written as the direct product of the density matrices for the individual sub-intervals. The number of green bonds in each individual piece corresponds to the (base-$r$ logarithm of the) number of blocks in the Jordan block form of the individual reduced density matrices. It is clear from counting the green bonds that $S^{\rm disc.}(x_1,\ldots,x_4)=S(A_1 A_2) = S(A_1)+S(A_2)$.\footnote{Particularly, applying the methods from the previous subsection, we have $S(A_1)=4, S(A_2)=2$ and $S(A)=6$, which can be confirmed visually in figure \ref{fig:A1A4} by counting the length of the corresponding minimal geodesics homologous to various regions.} The edges on the Bruhat--Tits tree corresponding to the green bonds are highlighted in blue. Together they correspond to the minimal length boundary-anchored geodesics homologous to $A$. Thus the RT formula is satisfied. Schematically, this case is the $p$-adic analog of the configuration depicted in figure \ref{fig:realTwoIntA}, where the minimal surface homologous to a disconnected region is the union of the minimal surfaces homologous to each disjoint piece of the region separately.
Figure~\ref{fig:A1A3} shows the disconnected region setup for $A_1 = (x_1,x_2)$, $A_2 = (x_3,x_4)$ with $u(x_1,x_2,x_3,x_4) = 1$. The analysis in this case proceeds identically to the one above, so we do not repeat it here. In figure \ref{fig:A1A4}, we could have considered the alternate choice of path-disjoint intervals, $A_1^\prime=(x_4,x_1)$ and $A_2^\prime=(x_2,x_3)$ with the full disconnected region given by $A^\prime = A_1^\prime \cup A_2^\prime$. Then the cross-ratio of interest \eno{uCrossRatio} would become \eqn{}{ u(x_4,x_1,x_2,x_3) = \left|{x_{41}x_{23} \over x_{43}x_{12}}\right|_p = {1 \over u(x_1,x_2,x_3,x_4)} > 1\,. } We discuss this case in more detail next. Before proceeding, we note that in this case although the individual von Neumann entropies $S(A_1^\prime)$ and $S(A_2^\prime)$ will in general differ from $S(A_1)$ and $S(A_2)$, the von Neumann entropy of the union $S(A_1^\prime A_2^\prime) = S(A_1 A_2) = S^{\rm disc.}(x_1,\ldots,x_4)$, and in fact the minimal surface homologous to $A_1^\prime \cup A_2^\prime$ is the same as the minimal surface homologous to $A_1 \cup A_2$. Thus the bipartite von Neumann entropy is independent of the choice of disconnected region given a set of four boundary points.\footnote{The two choices are illustrated in figure \ref{fig:connectedB}.} Moreover, it is independent of the choice of the planar embedding. Let the logarithm of the cross-ratio $u(x_1,x_2,x_3,x_4)$ be positive; then the boundary anchored geodesics between $x_1$ \& $x_2$ and $x_3$ \& $x_4$ overlap (i.e.\ share a non-zero number of edges on the Bruhat--Tits tree). The bulk interpretation of mutual information $I(x_1,\ldots,x_4)=I(A_1:A_2)$ is that it is given precisely by (twice) the number of edges of overlap between the minimal geodesics homologous to $A_1$ and $A_2$ individually.
Each such edge corresponds to a bond on the tensor network which extends from a node in $A_1$ to a node of $A_2$, explaining why the combination $I(A_1:A_2)=S(A_1)+S(A_2)-S(A_1A_2)$ is given precisely by twice the number of such edges (up to an overall factor of $\log r$). From the boundary perspective, the mutual information between the path-disjoint intervals $A_1, A_2$ is given by \eqn{MIbdy}{ I(x_i,x_j,x_k,x_\ell) = I(A_1:A_2) = (2\log r)\: \log_p u(x_i,x_j,x_k,x_\ell)\,, } provided $u(x_1,\ldots,x_4)>1$ (which is the same as $u(x_1,\ldots,x_4) \geq p$ since the $p$-adic norm in \eno{uCrossRatio} takes values in $p^\mathbb{Z}$), with $A_1=(x_i,x_j)$, $A_2=(x_k,x_\ell)$ where $i,j,k,\ell \in \{1,\ldots,4\}$ are distinct and chosen such that $A_1, A_2$ are path-disjoint. This result follows from a basic entry in the $p$-adic holographic dictionary between cross-ratios and graph distances on the Bruhat--Tits tree, \eqn{ugraph}{ \left|{(x-y)(z-w) \over (x-z)(y-w)}\right|_p = p^{-d(a,b)}\,, } where $x,y,z,w \in \mathbb{P}^1(\mathbb{Q}_p)$ such that the bulk geodesics joining $x$ to $z$, and $y$ to $w$ intersect precisely along the path between the bulk points $a$ and $b$ on the Bruhat--Tits tree, and $d(a,b)$ is the graph distance between $a$ and $b$.\footnote{If the bulk geodesics do not intersect along a path on the tree, \eno{ugraph} can still be used after a simple relabelling of the boundary points.} \begin{figure}[t] \begin{subfigure}[]{0.49\textwidth} \centering \[ \musepic{\intCRone} \] \caption{$u(x_1,x_2,x_3,x_4)=1$} \label{fig:A1A3} \end{subfigure} \begin{subfigure}[]{0.49\textwidth} \centering \[ \musepic{\intCRlarge} \] \caption{$u(x_1,x_2,x_3,x_4)>1$} \label{fig:A1A2} \end{subfigure} \caption{The case of a disconnected region with cross-ratio $u(x_1,x_2,x_3,x_4)$ defined in \eno{uCrossRatio} greater than or equal to 1.} \end{figure} This possibility is depicted in figure \ref{fig:A1A2}.
We consider the disjoint intervals $A_1 = (x_1,x_2)$ and $A_2= (x_3,x_4)$ with $A = A_1 \cup A_2$ and $u(x_1,x_2,x_3,x_4) > 1$. Applying the contraction rule \eno{ContractRule} to trace out the complement of $A$ in the state on the left ``splits'' open vertices marked in black, while the dashed bonds on the tensor network turn into cycles. Like for figure \ref{fig:A1A4}, the bonds marked in green connect vertices in $A$ to vertices in the complement of $A$, and the simplified reduced density matrix is given by the subdiagram containing blue vertices, along with black and green bonds. Importantly, the reduced density matrix in this case does not split into individual disjoint pieces. The number of green bonds once again corresponds to the (base-$r$ logarithm of the) number of blocks in the Jordan block form of the reduced density matrix, and thus contributes to the von Neumann entropy as explained in section \ref{GENUS0SINGLE}. It is clear from counting the green bonds that $S^{\rm disc.}(x_1,\ldots,x_4) = S(A_1 A_2) < S(A_1)+S(A_2)$.\footnote{Particularly in figure \ref{fig:A1A2}, $S(A_1)=4, S(A_2)=4$ and $S(A_1A_2)=6$.} The excess on the r.h.s.\ (or equivalently the non-zero mutual information $I(A_1:A_2)$) can be precisely accounted for by the existence of a black bond on the tensor network joining a vertex of $A_1$ with a vertex of $A_2$. {\it This establishes the non-negativity of mutual information.} The edges on the Bruhat--Tits tree corresponding to the green bonds are highlighted in blue, and together they specify the minimal boundary-anchored geodesics homologous to the boundary region $A$. Thus once again the RT formula holds. This case is the $p$-adic analog of the situation in figure \ref{fig:realTwoIntB}, where the two regions share mutual information.
We note that while the sample computations presented above for the disconnected interval case considered specific examples (for a specific choice of cutoff and specific value of $p$), the lessons and results obtained here hold in full generality for arbitrarily chosen disconnected regions and arbitrary cutoffs for any prime $p$. In summary, from the boundary perspective, given regions $A_1=(x_i,x_j)$ and $A_2=(x_k,x_\ell)$ with $i,j,k,\ell \in \{1,2,3,4\}$ distinct and chosen such that $A_1, A_2$ are path-disjoint, the sign of $\log u(x_i,x_j,x_k,x_\ell)$ fixes whether the mutual information $I(x_i,x_j,x_k,x_\ell)=I(A_1:A_2)$ is positive or vanishing. Combining the cases described above, we write \eqn{MIbdyGen}{ I(x_i,x_j,x_k,x_\ell) &= (2\log r)\: \max\{ 0, \log_p u(x_i,x_j,x_k,x_\ell) \} \cr &= (2 \log r)\: \gamma_p\!\left({x_{i\ell}x_{jk} \over x_{ij}x_{k\ell}} \right) \log_p \left|{x_{ij}x_{k\ell} \over x_{i\ell}x_{jk}} \right|_p \cr &\geq 0, } where $\gamma_p(x)$ is the characteristic function on $\mathbb{Z}_p$, that is $\gamma_p(x) = 1$ if $x \in \mathbb{Z}_p$ (equivalently $|x|_p \leq 1$), and zero otherwise, and we emphasize the non-negativity of mutual information in the final inequality. From the bulk perspective, mutual information equals twice the number of shared edges between the minimal surfaces homologous to the individual regions $A_1$ and $A_2$ (or equivalently twice the number of bonds in the tensor network which start in $A_1$ and end in $A_2$) up to an overall factor of $\log r$. The entropy $S(A)$ of the disconnected region equals the entropy of any disconnected region built out of boundary points $x_1,\ldots,x_4$; thus we alternately denote the bipartite entropy as $S(x_1,\ldots,x_4)$.
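For concreteness, the boundary formula \eno{MIbdyGen} can be evaluated directly for rational boundary points. The following Python sketch (helper functions of our own devising, not from any standard library) computes the $p$-adic valuation, the cross-ratio $u$ of \eno{uCrossRatio}, and the resulting mutual information:

```python
# Illustrative evaluation of I = (2 log r) max{0, log_p u(x_i,x_j,x_k,x_l)}
# for rational boundary points, using the p-adic norm |x|_p = p^(-v_p(x)).
from fractions import Fraction
from math import log

def padic_valuation(x, p):
    """v_p(x) for a nonzero rational x."""
    x = Fraction(x)
    if x == 0:
        raise ValueError("valuation of 0 is undefined")
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def log_p_cross_ratio(x1, x2, x3, x4, p):
    """log_p u(x1,x2,x3,x4) with u = |x12 x34 / (x14 x23)|_p."""
    u = Fraction(x1 - x2) * (x3 - x4) / ((x1 - x4) * (x2 - x3))
    return -padic_valuation(u, p)

def mutual_information(x1, x2, x3, x4, p, r):
    """I(A1:A2) for path-disjoint intervals A1=(x1,x2), A2=(x3,x4)."""
    return 2 * log(r) * max(0, log_p_cross_ratio(x1, x2, x3, x4, p))

p, r = 3, 2
# intervals (0,1) and (4,3): u = |(-1)(1)/((-3)(-3))|_3 = 9 > 1, so I > 0
I_pos = mutual_information(0, 1, 4, 3, p, r)
# intervals (0,3) and (1,4): u = |(-3)(-3)/((-4)(2))|_3 = 1/9 <= 1, so I = 0
I_zero = mutual_information(0, 3, 1, 4, p, r)
```

The first example has overlapping bulk geodesics (two shared edges, $I = 4\log r$), while the second realizes the direct-product scenario of figure \ref{fig:realTwoIntA}.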
Using the results from this and the previous subsections, we write \eqn{Sdisconnect}{ & S^{\rm disc.}(x_1,x_2,x_3,x_4) \cr &= S(x_i,x_j) + S(x_k,x_\ell) - I(x_i,x_j,x_k,x_\ell) \cr &= 2\left( \log_p { |\mathfrak{B}(x_i,x_j)|_{\rm PS} \over |\epsilon|_p} + \log_p { |\mathfrak{B}(x_k,x_\ell)|_{\rm PS} \over |\epsilon|_p} - \gamma_p\!\left({x_{i\ell}x_{jk} \over x_{ij}x_{k\ell}} \right) \log_p \left|{x_{ij}x_{k\ell} \over x_{i\ell}x_{jk}} \right|_p \right)\log r \cr &= \big( \delta(x_i \to x_j,x_i \to x_j) + \delta(x_k \to x_\ell,x_k \to x_\ell) - 2| \delta(x_i \to x_j,x_k \to x_\ell)| \big) \log r } where we used \eno{SACAB}, \eno{CABlength}, \eno{lengthP1Qp} and \eno{lengthxy} for the first two terms in the last equality above,\footnote{Moreover, we made convenient choices for the arbitrary node $C$ in \eno{lengthxy} to obtain the simplified forms in \eno{Sdisconnect}.} while the third term in the last equality explicitly counts the number of shared edges between the minimal surfaces $x_i \to x_j$ and $x_k \to x_\ell$ (for regions $A_1$ and $A_2$ respectively) which appear in the bulk interpretation of mutual information. Recall that the indices $i,j,k,\ell \in \{1,2,3,4\}$ are chosen such that the subregions $(x_i,x_j)$ and $(x_k,x_\ell)$ are path-disjoint. Thus the three terms on the r.h.s.\ in the first equality of \eno{Sdisconnect} depend on the initial choice of the dual graph tensor network (i.e.\ the choice of planar embedding). The second and third equalities show explicitly the functional form of the three terms, entirely in terms of the boundary coordinates (as well as the UV cutoff). We now show that $S^{\rm disc.}(x_1,x_2,x_3,x_4)$ as given by the particular combination in \eno{Sdisconnect} is {\it independent} of the choice of a planar embedding. Assume, without loss of generality, $u(x_1,x_2,x_3,x_4) \leq 1$ (the associated configuration on the Bruhat--Tits tree is shown in figure \ref{fig:config}).
(If not, we relabel the boundary points to ensure the inequality holds.) The choice of the path-disjoint intervals $A_1=(x_i,x_j)$ and $A_2=(x_k,x_\ell)$ depends on the choice of the planar embedding. Depending on the chosen planar embedding, there are in all three inequivalent possibilities:\footnote{Technically, one needs to be careful about the orientation of the interval to ensure there is no overlap, but we will assume proper orientations have already been chosen to ensure the subregions are path-disjoint. This simply amounts to being careful about the order of the boundary points in specifying the intervals $A_1, A_2$; however the argument in the following is insensitive to this ordering as long as we assume the intervals are path-disjoint.} \begin{itemize} \item $x_i=x_1, x_j=x_2, x_k=x_3, x_\ell = x_4$, or \item $x_i=x_1, x_j=x_3, x_k=x_2, x_\ell = x_4$, or \item $x_i=x_1, x_j=x_4, x_k=x_2, x_\ell = x_3$. \end{itemize} In each of the three cases, the graph-theoretic quantity in the third line of \eno{Sdisconnect} is easily computed to explicitly verify that $S^{\rm disc.}(x_1,x_2,x_3,x_4)$ is identical in all cases, and in fact equals the {\it minimal} length of the geodesics homologous to the intervals $A_1$ and $A_2$.\footnote{This observation is also illustrated in figure \ref{fig:realTwoInt} where the two cases shown have the same minimal surface homologous to the disconnected regions.} Thus the bipartite entanglement for a disconnected region is given exactly by the RT formula. The bipartite entanglement $S^{\rm disc.}(x_1,\ldots,x_4)$ as defined above is not only $\PGL(2,\mathbb{Q}_p)$ invariant but also independent of the choice of planar embedding (equivalently the choice of a valid tensor network associated with the bulk geometry). 
\begin{figure}[!t] \centering \[ \musepic{\uconfig} \] \caption{Boundary anchored bulk geodesics on the Bruhat--Tits tree and the boundary point configuration such that $u(x_1,x_2,x_3,x_4) \leq 1$.} \label{fig:config} \end{figure} We are now in a position to show that the Araki--Lieb inequality~\cite{Araki1970} is satisfied as well. In our setup, this corresponds to showing that the inequality \eqn{AL}{ S(x_1,x_2,x_3,x_4) \geq |S(x_i,x_j) - S(x_k,x_\ell)| } holds, where the various terms are defined below \eno{MIx}. Once again, without loss of generality, we assume the boundary point configuration of figure \ref{fig:config}. As already discussed, in this case, $S(x_1,\ldots,x_4)$ is proportional to the sum of the lengths of the unique geodesics joining $x_1$ to $x_2$ and $x_3$ to $x_4$. On the other hand, the entropies of the path-disjoint intervals, $S(x_i,x_j)$ and $S(x_k,x_\ell)$ are proportional to lengths of the unique geodesics joining $x_i$ to $x_j$ and $x_k$ to $x_\ell$, respectively. Comparing lengths of geodesics in figure \ref{fig:config} it is immediately clear that for all possible choices of $i,j,k,\ell$ (subject to the requirements specified below \eno{MIx}), the inequality \eno{AL} reduces to checking whether $a+b \geq |a-b|$ for positive real numbers $a,b$. This is clearly true, and thus establishes the Araki--Lieb inequality for path-disjoint intervals. The simplicity in comparison of lengths of geodesics such as the ones in figure \ref{fig:config} is a direct consequence of ultrametricity of the $p$-adic numbers (or equivalently, the simplifying tree structure of the bulk geometry). Together, the Araki--Lieb inequality and the non-negativity of mutual information, which we have now established for path-disjoint intervals, are referred to as the subadditivity property of entropy. 
In the next subsection, we define path-adjoining intervals, and the proofs presented here extend easily to this case (we leave them as trivial exercises for the reader), thus establishing subadditivity in full generality in our setup. \subsection{More entropy inequalities: SSA and MMI} \label{ssec:Inequalities} So far we have shown that the $p$-adic bipartite entropy satisfies an RT-like formula, as well as subadditivity of entropy. One should also expect strong subadditivity (SSA) and monogamy of mutual information (MMI)~\cite{Hayden:2011ag} to hold. Indeed in this section we establish these inequalities holographically. Given three regions $A_1, A_2$ and $A_3$, SSA is the statement that~\cite{Lieb:1973zz,Lieb:1973cp} \eqn{SSA123}{ S(A_1 A_2) + S(A_2 A_3) \geq S(A_1A_2A_3) + S(A_2)\,, } or equivalently\footnote{The inequality \eno{SSA123alt} can be obtained from \eno{SSA123} (and vice versa) by first purifying the system $\rho_{A_1A_2A_3}$ by formally adding a fourth region $A_4$.} \eqn{SSA123alt}{ S(A_1 A_2) + S(A_2 A_3) \geq S(A_1) + S(A_3)\,. } In the previous subsection, we discussed the bipartite entropy of unions of path-disjoint regions. However, here we will focus on regions $A_i$ such that in a given planar embedding, they are disjoint (i.e.\ they do not share any nodes on the tensor network) but are ``adjoining'', that is they share end-points on the Bruhat--Tits tree. We refer to them as ``path-adjoining'' regions. \begin{definition} Given a planar embedding, two regions $A_1$ and $A_2$ are {\it path-adjoining} if they are disjoint as sets of nodes on the tensor network, but there exists exactly one ``shortest bond'' on the network which contracts a vertex in $A_1$ with a vertex in $A_2$. \end{definition} A consequence of this definition is that if two regions are path-adjoining, then written as a set of nodes ``in between'' two boundary points, the two regions share a common boundary point. (This notion is also $\PGL(2,\mathbb{Q}_p)$ invariant.) 
The converse of this statement is not always true. Given four boundary points $x_1,\ldots,x_4$ and any choice of a planar embedding, we will assume that regions $A_1$ and $A_2$ are path-adjoining as well as $A_2$ and $A_3$ are path-adjoining.\footnote{\label{fn:pathdisjoint}In this paper, we will not discuss the case where the regions $A_1, A_2$ and $A_3$ are path-disjoint from each other, although we expect SSA to hold here as well. This case requires input data consisting of six distinct boundary points. The notion of bipartite entropy presented in section \ref{GENUS0DOUBLE} given a set of four boundary points should generalize in a systematic way to the case of six (and higher) boundary points, but we leave this for future work.} Without loss of generality, we take $A_1=(x_i,x_j), A_2=(x_j,x_k)$ and $A_3=(x_k,x_\ell)$, where $i,j,k,\ell$ are distinct indices from the set $\{1,2,3,4\}$ chosen such that $A_1$ and $A_2$ are path-adjoining, and similarly for $A_2$ and $A_3$. This setup might be familiar to the reader from the holographic proof of strong subadditivity over the reals~\cite{Headrick:2007km}. The proof presented here is similar in spirit but has a distinct $p$-adic flavor as will be apparent shortly. Under the hypotheses of the previous paragraph, we can write \eqn{Sindiv}{ S(A_1) &= S(x_i,x_j) \qquad\quad S(A_2) = S(x_j,x_k) \qquad\qquad S(A_3) = S(x_k,x_\ell) \cr S(A_1 A_2) &= S(x_i,x_k) \qquad S(A_2 A_3) = S(x_j,x_\ell) \qquad S(A_1A_2A_3) = S(x_i,x_\ell)\,, } where the two-point bipartite entropy $S(x,y)$ is the entropy of a connected region $(x,y)$, discussed previously in section \ref{GENUSZERORESULTS}. Consequently all terms in \eno{SSA123} (and \eno{SSA123alt}) turn into bipartite entropies of connected regions. Thus to check SSA, we need to show \eqn{ToShowSSA}{ S(x_i,x_k) + S(x_j,x_\ell) \stackrel{!}{\geq} S(x_i,x_\ell) + S(x_j,x_k) \qquad S(x_i,x_k) + S(x_j,x_\ell) \stackrel{!}{\geq} S(x_i,x_j) + S(x_k,x_\ell)\,.
} To be concrete, (without loss of generality) we label the given boundary points $x_1,\ldots,x_4$ such that the pairwise boundary anchored bulk geodesics intersect as shown in figure \ref{fig:config}. Now recall from section \ref{GENUSZERO} that in a genus zero background, $S(x,y)$ is given simply by the unique minimal geodesic joining boundary points $x$ and $y$. Thus the inequalities in \eno{ToShowSSA} turn into (trivial) statements about lengths of various boundary anchored geodesics in figure \ref{fig:config}. Further, they can be related directly to conformal cross-ratios, as we now explain. For example, suppose in a planar embedding we can choose $i=1, j=2, k=3, \ell=4$. Then it is clear from comparing lengths of minimal geodesics in figure \ref{fig:config} that the first inequality in \eno{ToShowSSA} is saturated, while the second one is obeyed in the strict sense. In fact, the equality $S(x_1,x_3)+S(x_2,x_4) = S(x_1,x_4) + S(x_2,x_3)$ of the lengths of geodesics has the same content as the triviality of the cross-ratio, $u(x_1,x_3,x_2,x_4) = |(x_{13} x_{24})/ (x_{14}x_{23})|_p = 1$. Similarly, the inequality $S(x_1,x_3)+S(x_2,x_4) > S(x_1,x_2)+S(x_3,x_4)$ has the same content as the inequality $u(x_1,x_2,x_4,x_3) = |(x_{12} x_{34})/ (x_{13}x_{24})|_p < 1$.\footnote{Refer to the discussion around \eno{ugraph}.} The 23 other permutations of assignments for $i,j,k,\ell$ which could possibly be made over all possible planar embeddings admit identical analysis so we do not repeat it here. This confirms SSA.
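The cross-ratio statement underlying SSA can be traced to the Pl\"ucker-type identity $x_{12}x_{34} - x_{13}x_{24} + x_{14}x_{23} = 0$ together with ultrametricity: in a vanishing sum of three terms, the two largest $p$-adic norms (equivalently, the two smallest valuations) must coincide. The following Python sketch (our own check; the helper names are ours) verifies this for random quadruples of distinct integers:

```python
# Ultrametric check behind SSA: among the three pairing products
# x12*x34, x13*x24, x14*x23, the two smallest p-adic valuations coincide.
import random

def padic_valuation(n, p):
    """v_p(n) for a nonzero integer n."""
    v = 0
    while n % p == 0:
        n //= p
        v += 1
    return v

def pairing_valuations(x1, x2, x3, x4, p):
    """Valuations of the three pairing products x_ij * x_kl."""
    return [padic_valuation((x1 - x2) * (x3 - x4), p),
            padic_valuation((x1 - x3) * (x2 - x4), p),
            padic_valuation((x1 - x4) * (x2 - x3), p)]

random.seed(7)
p = 3
for _ in range(500):
    x1, x2, x3, x4 = random.sample(range(-200, 200), 4)
    # Plucker identity: the three pairing products sum to zero (with signs)
    assert (x1-x2)*(x3-x4) - (x1-x3)*(x2-x4) + (x1-x4)*(x2-x3) == 0
    v = sorted(pairing_valuations(x1, x2, x3, x4, p))
    # two smallest valuations (largest norms) always agree
    assert v[0] == v[1]
```

Since the regularized geodesic length between boundary points decreases as $v_p(x-y)$ grows, equality of the two smallest pairing valuations is precisely the saturation of one of the inequalities in \eno{ToShowSSA} after relabelling.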
To summarize, in each case one of the two inequalities in \eno{ToShowSSA} is saturated,\footnote{In the case of path-disjoint intervals $A_1, A_2$ and $A_3$ (see the comment in footnote \ref{fn:pathdisjoint}), we do not expect such a saturation of one of the inequalities to hold in general.} while the other remains an inequality.\footnote{The inequality is obeyed strictly unless the boundary points $x_1,\ldots,x_4$ are such that the geodesics connecting them in the bulk meet at a single bulk point. This corresponds in figure \ref{fig:config} to the collapse of the internal bulk geodesic to a single point.} SSA is interpreted as an inequality between lengths of geodesics and admits a dual description in terms of boundary cross-ratios. From the boundary perspective, SSA (for path-adjoining regions) has the same content as the following statement about cross-ratios: Given four boundary points $x_1,\ldots,x_4$, up to a relabelling of coordinates one always has $|(x_{12}x_{34})/(x_{13}x_{24})|_p \leq 1$ with $|(x_{14}x_{23})/(x_{13}x_{24})|_p = 1$, which follows from the ultrametric nature of $p$-adic numbers. Next let's turn to MMI (also referred to as the negativity of tripartite information)~\cite{Hayden:2011ag}. Given three disjoint intervals $A, B$ and $C$ (in our terminology in a given planar embedding, they can be either path-disjoint or path-adjoining or a mix of both such that no two intervals overlap), MMI is the following inequality obeyed by mutual information, \eqn{MMI}{ I(A:BC) \geq I(A:B) + I(A:C)\, } or equivalently \eqn{MMIalt}{ I(A:B:C) \equiv I(A:B) + I(A:C) - I(A:BC) \leq 0\,. } Such an inequality does not hold in general for arbitrary quantum mechanical states, but is special to quantum states which admit a holographic dual. The inequality makes sense even for adjoining intervals since the divergences in the individual mutual information pieces cancel out among the various terms. 
We will now prove \eno{MMI} holds in the $p$-adic tensor network setting for (connected) intervals $A,B$ and $C$ chosen in an arbitrary planar embedding such that they are either path-adjoining or path-disjoint but never overlapping. In fact we show the inequality is always saturated. We will first prove this in the case where these intervals are specified in terms of a given set of {\it five} boundary points (see figure \ref{fig:5ptconfig}), and then extend this to full generality. \begin{figure}[!t] \centering \[ \musepic{\uvconfig} \] \caption{Boundary anchored bulk geodesics on the Bruhat--Tits tree and the boundary point configuration such that $u(x_1,x_2,x_4,x_5) \leq 1$ and $u(x_3,x_4,x_1,x_5) \leq 1$. Up to relabelling, this is the most general configuration of five boundary points at the terminus of the Bruhat--Tits tree.} \label{fig:5ptconfig} \end{figure} Fix a planar embedding. Let's first consider the case where $B$ and $C$ are chosen such that they are path-adjoining but $B \cup C$ is path-disjoint from $A$. There are then three inequivalent choices of intervals in figure \ref{fig:5ptconfig}:\footnote{We will suppress keeping track of orientation of intervals and simply assume the intervals are chosen with the correct orientation such that they are path-adjoining or path-disjoint as desired. Keeping track of orientations simply adds an extra layer of detail without changing the basic analysis.} \begin{itemize} \item $A=(x_1,x_2), B=(x_3,x_4), C=(x_4,x_5),$ or \item $A=(x_5,x_1), B=(x_2,x_3), C=(x_3,x_4),$ or \item $A=(x_2,x_3), B=(x_4,x_5), C=(x_5,x_1).$ \end{itemize} In each of these cases, the mutual information measures $I(A:BC), I(A:B)$ and $I(A:C)$ take the form of $I(x_i,x_j,x_k,x_\ell)$ for appropriately chosen $i,j,k,\ell$ and can be determined simply by considering the overlap of minimal geodesics for given intervals $A, B, C$ and $B\cup C$, as discussed in detail in section \ref{GENUS0DOUBLE}.
We immediately see that the inequality \eno{MMI} is saturated in all three cases -- in the first case each of the individual mutual information measures is identically zero, while in the second and third cases there is non-trivial overlap of minimal geodesics so not all $I$'s vanish. The remaining cases involve choosing $A, B$ and $C$ such that $A$ is path-adjoining to either $B$ or $C$, where at the same time $B$ and $C$ may be path-adjoining or path-disjoint to each other. In all such cases, some of the mutual information measures will diverge, but the divergences still cancel out on both sides of \eno{MMI}. We leave it as a simple exercise for the reader to work through the finitely many inequivalent cases and verify that in each case, \eno{MMI} is satisfied, and in fact saturated. Thus we conclude that in the case of five points, \eno{MMI} is saturated in the $p$-adic setting. The restriction to five boundary points allowed us to prove \eno{MMI} in the $p$-adic setting in almost full generality. The only case remaining is when the intervals $A,B$ and $C$ are pairwise path-disjoint, in which case we need six boundary points to specify the intervals. Once again there are a small number of cases to individually consider, with the analysis identical to the previously studied cases. We find \eno{MMI} is saturated here as well. In all, we conclude that in the $p$-adic tensor network considered in this paper, $I(A:B:C) = 0$, that is \eqn{MMIpadic}{ I(A:BC) = I(A:B) + I(A:C)\,. } It is interesting to compare this observation with the result over real CFTs where it was shown that the inequality is saturated for a massless fermion in two dimensions (i.e.\ mutual information is exactly extensive), but not for instance for the massless scalar~\cite{Hayden:2011ag,Casini:2004bw,Casini:2005rm}.
\subsection{Black hole backgrounds} \label{GENUS1BTZ} We now change gears and present some of the computational methods and results for black hole entropy and a Ryu--Takayanagi like formula for minimal geodesics in the black hole background. This discussion is essentially an extension of sections~\ref{BTZResults} and \ref{RTBTZresults}; we refer to these for the basic setup of the tensor network geometry. We will define the integer length of the horizon to be $\tau = \log_p|q|_p^{-1}$ with $q$ our uniformizing parameter; we also assume a nondegenerate geometry so $\tau > 1$. As before, the cutoff is $\Lambda$ (defined now from the horizon) and the rank is $r+1$, where we assume $(r+1)/2 \geq 2\Lambda + 1$ and $(r+1)/2 \geq \tau$. The first condition ensures there are enough boundary dangling legs for minimal surfaces to extend into the bulk and the second condition ensures a similar property for the center vertex. As in the genus 0 case, our first task is to calculate the norm of the black hole boundary state. This quantity is obtained by contracting all legs, including those at the center vertex behind the horizon. Denoting the boundary state of a black hole of size $\tau$ as $|\psi_{\tau} \rangle$, we may compute $\langle \psi_{\tau} | \psi_{\tau} \rangle$ graphically using techniques of the previous sections. This involves counting the number of each type of tensor, where again the type is the number of legs contracted with other tensors (in the network), $v_c$. The number of dangling legs is $v_d$, and $v_c + v_d = r+1$ for every tensor. In the black hole geometry, one can see all vertices with $v_c$ in the set $\{2, 4, \ldots, 2\Lambda, 2\Lambda +1\}$ will appear, as well as the central vertex which always has $v_c = \tau$. The counting is similar to the genus $0$ case, though all numbers explicitly depend on $\tau$. 
The multiplicity of each type of vertex is found in table \ref{tb:BTZvertices}, where we have singled out the center vertex multiplicity, even though it may coincide with one of the other types of tensors. \begin{table}[h!] \centering \begin{tabular}{|c|c|c|} \hline $v_c$ & $v_d$ & multiplicity $M$\\ \hline \hline $\tau$ & $(r+1) - \tau$ & $1$ \\ $2\Lambda + 1$ & $(r+1) - (2\Lambda+1)$ & $\tau$ \\ $2\Lambda$ & $(r+1) - 2\Lambda$ & $(p-2)\tau$ \\ $2\Lambda - 2$ & $(r+1) - (2\Lambda -2)$ & $(p-1)(p^1- p^0) \tau$\\ $2\Lambda - 4$ & $(r+1) - (2\Lambda -4)$ & $(p-1)(p^2- p^1) \tau$\\ \vdots & \vdots & \vdots \\ 4 & $(r+1) - 4$ & $(p-1)(p^{\Lambda-2}- p^{\Lambda-3}) \tau$\\ 2 & $(r+1) - 2$ & $(p-1)(p^{\Lambda-1}- p^{\Lambda-2}) \tau$ \\ \hline \end{tabular} \caption{Types of vertices in a black hole holographic state $|\psi_{\tau} \rangle$.} \label{tb:BTZvertices} \end{table} As in previous sections, we contract each vertex and obtain splits which contribute an overall factor of $r^{(v_d-v_c)/2}$ for each vertex as prescribed by \eno{ContractRule}. After splitting, we must now count the number of new cycles created, where each new cycle contributes a factor of $r$. The number of cycles is the number of internal lines in the tensor network. These considerations mean that the norm of the state will go as a power of $r$, and we find: \eqn{BTZpsiNorm}{ \log \langle \psi_{\tau} | \psi_{\tau} \rangle = \frac{1}{2}\sum_{{\rm type\ }i} M^{(i)} v_d^{(i)} \log r = \left( \frac{r+1}{2} + \frac{\tau}{2}p^{\Lambda - 1}((p-1)r-(p+1))\right )\log r \,. } \begin{figure}[t!] \[ \musepic{\gluehole} \] \caption{The density matrix associated to the thermal state on the boundary of a BTZ black hole tensor network.} \label{fig:BTZglued} \end{figure} Having found the norm of the state, we may now compute the density matrix and entropy which comes from tracing out the central vertex behind the horizon, as shown in figure \ref{fig:BTZglued}.
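As a cross-check on the counting, the exponent of $r$ in \eno{BTZpsiNorm} can be reproduced by summing $M^{(i)} v_d^{(i)}/2$ over the vertex types of table \ref{tb:BTZvertices}. The short Python script below (our own verification; the parameter choices are arbitrary subject to the constraints $(r+1)/2 \geq 2\Lambda+1$ and $(r+1)/2 \geq \tau$) confirms the closed form:

```python
# Verify: sum over table of M * v_d / 2 equals the closed-form exponent
# (r+1)/2 + (tau/2) p^(Lam-1) ((p-1) r - (p+1)) in eq. (BTZpsiNorm).
from fractions import Fraction

def norm_exponent_from_table(p, Lam, tau, r):
    """Sum of M * v_d / 2 over the vertex types of the table."""
    types = [(tau, 1),                 # center vertex: v_c = tau, M = 1
             (2 * Lam + 1, tau),       # v_c = 2*Lam + 1, M = tau
             (2 * Lam, (p - 2) * tau)] # v_c = 2*Lam, M = (p-2) tau
    for k in range(1, Lam):            # v_c = 2*Lam - 2k, k = 1, ..., Lam - 1
        types.append((2 * Lam - 2 * k, (p - 1) * (p**k - p**(k - 1)) * tau))
    total = Fraction(0)
    for v_c, M in types:
        v_d = (r + 1) - v_c
        total += Fraction(M * v_d, 2)
    return total

def norm_exponent_closed_form(p, Lam, tau, r):
    """Exponent quoted in eq. (BTZpsiNorm)."""
    return (Fraction(r + 1, 2)
            + Fraction(tau, 2) * p**(Lam - 1) * ((p - 1) * r - (p + 1)))

for (p, Lam, tau, r) in [(2, 1, 2, 11), (2, 2, 2, 11), (3, 2, 3, 17)]:
    assert norm_exponent_from_table(p, Lam, tau, r) \
        == norm_exponent_closed_form(p, Lam, tau, r)
```

Exact rational arithmetic (`Fraction`) avoids any floating-point ambiguity in the comparison.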
The intuition is that these degrees of freedom cannot be associated to any boundary state, so we should trace them out of the Hilbert space. The result is a mixed density matrix describing only the boundary qudits. As we are only tracing out one vertex, the result is surprisingly simple and parallels the computation of the entanglement entropy at genus $0$. Applying our rule for tensor contractions to the center vertex as in figure \ref{fig:BTZglued}, where the two sides denote $| \psi_{\tau} \rangle$ and $\langle \psi_{\tau} |$, we see that in general we have a mixed density matrix with $\tau$ bonds stretched across the two sides. This is somewhat reminiscent of a two-sided BTZ black hole, as depicted in figure \ref{fig:BTZ2sided}. Based on the computation for the entanglement entropy, one would expect the thermodynamic entropy of this state to be proportional to $\tau$, and this can be supported by explicit analytic computation. Performing the split of the center vertex gives a factor of $r^{(v_d-v_c)/2} = r^{(r+1)/2 - \tau}$, and the general density matrix has a form similar to \eno{RhoAInterval}, with $r^{\tau}$ blocks of all $1$'s of size $r^{\sigma}$: \eqn{RhoBTZ}{ \rho_{BH} = { r^{\frac{(r+1)}{2} - \tau} \over \langle \psi_{\tau}|\psi_{\tau} \rangle} \begin{pmatrix} \begin{bmatrix} 1 & \cdots & 1 \\ \vdots & \ddots & \vdots \\ 1 & \cdots & 1 \end{bmatrix} r^{\sigma} & \vspace{-1 mm} \\ \hspace{-3.5 mm} r^{\sigma} \\ & \ddots & \\ & & \vspace{-1mm} \hspace{3.5 mm} r^{\sigma} \\ & & r^{\sigma} \begin{bmatrix} 1 & \cdots & 1 \\ \vdots & \ddots & \vdots \\ 1 & \cdots & 1 \end{bmatrix} \end{pmatrix}_{r^{\sigma +\tau} \times r^{\sigma +\tau}} } We may fix the value of $\sigma$ by diagrammatic computation, but it is easier to simply impose the unit trace condition: each of the $r^{\tau}$ blocks contributes $r^{\sigma}$ to the trace, so $\Tr \rho_{BH} = r^{(r+1)/2 - \tau}\, r^{\tau + \sigma} / \langle \psi_{\tau} | \psi_{\tau} \rangle = 1$. Using the value of \eno{BTZpsiNorm}, we find $\sigma$ is given by the second piece, \eqn{BTZsigmasize}{ \sigma = \frac{\tau}{2}p^{\Lambda - 1}((p-1)r-(p+1)) \,.
} We now compute the von Neumann entropy of this state, which corresponds to the black hole entropy. \eqn{BTZvNent}{ S_{BH} = - \Tr \rho_{BH} \log \rho_{BH}\,, } where each block of $\rho_{BH}$ may be diagonalized before taking the trace. This gives a sum of identical terms with the $\sigma$ dependence cancelling, \begin{equation} S_{BH} = - \sum_{i=1}^{r^{\tau}} r^{-\tau} \log r^{-\tau}\,, \end{equation} or the main result of this section, \eqn{BTZEntropy}{ S_{BH} = \tau \log r = (\log r) \log_p|q|_p^{-1} \,. } We see that the von Neumann entropy of the boundary state is large and directly proportional to the perimeter of the event horizon. We now briefly discuss the entanglement entropy between an interval and its complement in the thermal background, dual to minimal geodesics in the black hole geometry. We have already explained the results of these computations in section~\ref{RTBTZresults}, which match the expectations of real AdS/CFT and the cut rule for perfect tensors. The computation of specific examples is straightforward using the rules we have used throughout this section, but the general formula is cumbersome to present in detail, so we elect to explain the basic geometry and the result of the contractions. The reduced density matrix for a boundary region in the black hole background is obtained by combining the two graphical rules introduced so far: the state and its dual are glued along the boundary interval to be traced out (implementing the partial trace), as well as along the center black hole vertex. Physically, these two gluings are two separate effects, but the resulting mixed state has an entropy which is sensitive both to the black hole horizon size and to the interval size. As before, we apply the rules of splits and cycles to the perfect tensors which are contracted, and from this point of view we treat the various contractions on equal footing to ultimately obtain the correct entropy.
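The spectrum behind \eno{BTZEntropy} is simple enough to check numerically: each all-ones block contributes a single nonzero eigenvalue, so the normalized density matrix has $r^{\tau}$ eigenvalues equal to $r^{-\tau}$, and the entropy is $\tau \log r$ independently of $\sigma$. A minimal sketch (our own, with a small stand-in block size, since the physical $r^{\sigma}$ is far too large to instantiate):

```python
import numpy as np

def block_ones_density_matrix(num_blocks, block_size):
    """Block-diagonal matrix of all-ones blocks, normalized to unit trace.
    num_blocks stands in for r^tau, block_size for r^sigma."""
    block = np.ones((block_size, block_size))
    rho = np.kron(np.eye(num_blocks), block)
    return rho / np.trace(rho)

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]      # drop numerically-zero eigenvalues
    return float(-np.sum(evals * np.log(evals)))

r, tau = 3, 2
# the entropy is tau * log(r), independent of the sigma stand-in:
for block_size in (1, 2, 4):
    S = von_neumann_entropy(block_ones_density_matrix(r**tau, block_size))
    assert abs(S - tau * np.log(r)) < 1e-8
```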
Several cases are possible, as the entanglement entropy of a region is no longer equal to that of the complement due to the presence of the black hole. There are essentially three possible cases which are schematically depicted in figure \ref{fig:realBHInt}: \begin{itemize} \item Given a cutoff, if the region to be traced out is sufficiently small that the entanglement wedge does not approach the horizon of the black hole as in the first picture, the resulting entropy will be completely insensitive to the presence of the horizon. As in the genus 0 case, this entropy is proportional to the log of the interval size and can be graphically computed by counting the number of bonds shared across the traced out region after performing all contractions. (These are exactly the bonds which cut the minimal surface on the genus $1$ tree geometry.) \item For a larger region, the graphical rules imply that bonds crossing the horizon interfere with those for the traced out region. The suspended bonds between the state and its dual now use up some of the bonds which were originally part of the horizon contraction. This is interpreted as the minimal surface wrapping around the horizon, and a computation reveals the entropy is given by exactly this length. This schematically looks like the middle figure. \item For a sufficiently large region, the available bonds inside the black hole become exhausted. The entanglement is now given by the number of remaining bonds across the state and the dual, which corresponds to a minimal surface which wraps the other side and includes the black hole; this is shown in the final figure. The entropy is given by the sum of the horizon area and the length of the minimal surface. \end{itemize} In each case the minimal geodesic is homologous to the boundary region as desired. From these basic geometric rules, the results of section~\ref{RTBTZresults} follow, as one can see by direct though non-trivial calculation.
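The three cases above amount to a homology-constrained minimization: the surface computing the entropy of $A$ is either a geodesic subtending $A$ (which, for larger regions, wraps the horizon) or the geodesic subtending the complement together with the full horizon. A schematic sketch of this rule, in which all lengths are hypothetical bond counts supplied by hand rather than computed from the network:

```python
def rt_entropy_bh(len_geodesic_A, len_geodesic_Ac, horizon, log_r=1.0):
    """Schematic homology-constrained minimization for a boundary region A
    in the black hole background.  Candidates: the geodesic subtending A
    itself (the middle case of figure fig:realBHInt corresponds to this
    length saturating at the horizon-wrapping value), or the geodesic
    subtending the complement plus the full horizon length tau.
    All inputs are illustrative stand-ins, not network data."""
    return min(len_geodesic_A, len_geodesic_Ac + horizon) * log_r

# small region: the horizon is invisible  -> entropy from the geodesic of A
assert rt_entropy_bh(3, 12, 2) == 3.0
# very large region: complement + horizon -> entropy includes the horizon area
assert rt_entropy_bh(20, 4, 2) == 6.0
```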
The minimal surfaces we find closely resemble their archimedean counterparts, but are distinctly discrete and ultrametric. \begin{figure}[t] \[ \begin{tikzpicture}[scale=0.9] \draw[color=white,use as bounding box] (-2.2,-2.2) rectangle (2.2,2.2); \tikzstyle{vred}=[draw,scale=0.4,color=red,fill=red,circle] \foreach \x in {0,1,...,11} { \coordinate (b\x) at (\x*360/12 - 30 :2); }; \draw[thick,blue] (b3) arc (-45:-135:1.4); \draw[thick] (b3) arc (60:120:2); \draw[dotted] (0,0) circle[radius=2]; \draw[fill=gray!50!white,pattern=north east lines] (0,0) circle[radius=0.5]; \draw (b3) node[anchor=south west] {$x_1$}; \draw (b5) node[anchor=south east] {$x_2$}; \draw (b4) node[anchor=south] {$A$}; \end{tikzpicture} \quad \begin{tikzpicture}[scale=0.9] \draw[color=white,use as bounding box] (-2.2,-2.2) rectangle (2.2,2.2); \tikzstyle{vred}=[draw,scale=0.4,color=red,fill=red,circle] \foreach \x in {0,1,...,11} { \coordinate (b\x) at (\x*360/12 - 30 :2); }; \draw[thick] (b8) arc (210:-30:2); \draw[dotted] (0,0) circle[radius=2]; \draw[fill=gray!50!white,pattern=north east lines] (0,0) circle[radius=0.5]; \coordinate (a1) at (180:0.65); \coordinate (a2) at (0:0.65); \draw[thick,blue] (a1) arc (180:0:0.65); \draw[thick,blue] (b8) to[out=45,in=-90,looseness=1] (a1); \draw[thick,blue] (b0) to[out=135,in=-90,looseness=1] (a2); \draw (b8) node[anchor=north east] {$x_1$}; \draw (b0) node[anchor=north west] {$x_2$}; \draw (b4) node[anchor=south] {$A$}; \end{tikzpicture} \quad \begin{tikzpicture}[scale=0.9] \draw[color=white,use as bounding box] (-2.2,-2.2) rectangle (2.2,2.2); \tikzstyle{vred}=[draw,scale=0.4,color=red,fill=red,circle] \foreach \x in {0,1,...,11} { \coordinate (b\x) at (\x*360/12 - 30 :2); }; \draw[thick,blue] (b11) arc (45:135:1.4); \draw[thick] (b9) arc (240:-60:2); \draw[dotted] (0,0) circle[radius=2]; \draw[thick,blue] (0,0) circle[radius=0.65]; \draw[fill=gray!50!white,pattern=north east lines] (0,0) circle[radius=0.5]; \draw (b9) node[anchor=north east] 
{$x_1$}; \draw (b11) node[anchor=north west] {$x_2$}; \draw (b4) node[anchor=south] {$A$}; \end{tikzpicture} \] \caption{A schematic depiction of the three topologically distinct cases of minimal geodesics for a connected region $A$.} \label{fig:realBHInt} \end{figure} We take the success of this network as a prediction for entanglement in thermal $p$-adic AdS/CFT. We also suspect the methods we have described for single intervals in black hole backgrounds generalize, possibly to higher genus black holes and more intervals. \section{Geometric Properties of the Tensor Networks} \label{GEOMETRIC} In this section we discuss in more detail some aspects of the geometry of the tensor networks introduced above. In particular, we discuss the symmetries of the construction and its dependence on the choice of embedding. We show that the construction can be carried out in a purely $p$-adic setting, where the tensor network lives on the Drinfeld $p$-adic plane and is determined by a choice of sections of the projection from the Drinfeld plane to the Bruhat--Tits tree. We also show a similar construction of the tensor network for the genus one case on a fundamental domain for the action of the Schottky group on the Drinfeld plane. We also discuss measures on the $p$-adic Tate--Mumford curve induced by different restrictions of the Patterson--Sullivan measure on the projective line. Finally we discuss the limit of the density matrices when the entire infinite tree is considered, interpreted as states on an approximately finite-dimensional von Neumann algebra. \subsection{Tensor networks: symmetries and embeddings} \label{TNPROPS} Recall from section \ref{ssec:padic} that we can label the nodes on the Bruhat--Tits tree by cosets, $G= \PGL(2,\mathbb{Q}_p) =\bigcup_{i=1}^\infty g_i H$, where $H = \PGL(2,\mathbb{Z}_p)$ and $g_i \in G$.
Suppose we pick a particular planar embedding, or in other words, make a choice of all incidence relations among bonds on the dual graph consistent with definition \ref{def:dualgraph}. Further, focus on the particular bond in the dual graph specified by the corresponding edge between the nodes $g_1 H$ and $g_2 H$ on the Bruhat--Tits tree. Let it be incident with a bond in the dual graph corresponding to the edge between the nodes $g_3 H$ and $g_4 H$. After an isometric $G$ transformation, suppose the nodes on the Bruhat--Tits tree go to the cosets $g_1^\prime H, g_2^\prime H, g_3^\prime H$ and $g_4^\prime H$ respectively. Then the $G$ transformation sends the edge on the Bruhat--Tits tree between $g_1 H$ and $g_2 H$ to the edge between $g_1^\prime H$ and $g_2^\prime H$ (and similarly for the other edge). Correspondingly, the bonds on the dual graph transform as well, in such a way that all incidence relations are preserved on the dual graph. In other words, $G$ acts as an isometry on the dual graph. The point of intersection of the two bonds on the dual graph before the $G$ transformation was a node on the dual graph where the two bonds corresponding to cosets $g_1 H$ \& $g_2H$ and $g_3H$ \& $g_4H$ met. After the transformation, the intersection node on the dual graph is mapped to the node which is the point of intersection of the bonds specified by the cosets $g_1^\prime H$ \& $g_2^\prime H$ and $g_3^\prime H$ \& $g_4^\prime H$ respectively. Other choices for the dual graph can be obtained as follows. Starting with the Bruhat--Tits tree where the nodes are specified via the cosets $g_i H$ for $i=1,2,\ldots$, we specialize to a particular planar embedding. Now we perform an isometric $G$ transformation, which transforms $g_i H \to g_i^\prime H$ for all $i$. The original dual graph incidence relations, given in terms of the cosets $g_iH$, transform to the ones in terms of the transformed cosets $g_i^\prime H$ as explained in the previous paragraph.
However, we can construct a {\it different} dual graph, whose bond incidence relations are the original incidence relations (in terms of the original labelling of the cosets $g_iH$) but whose Bruhat--Tits tree nodes are given in terms of the transformed cosets $g_i^\prime H$. (Here we are using the fact that $G=\bigcup_{i=1}^\infty g_i H = \bigcup_{i=1}^\infty g_i^\prime H$.) From this we conclude there are at least as many possible dual graphs as elements of the isometry group $G$. In fact there exist more choices for the dual graph. One can act with any element of the automorphism group of the Bruhat--Tits tree (which is still an isometry) and obtain a different planar embedding. All such planar embeddings are allowed but there is no preferred choice among them. For the purposes of computation, we usually picked a particular choice of a dual graph (i.e.\ a particular planar embedding); however, the final physical result of the computations was always independent of this choice, as discussed in previous sections. \subsection{Drinfeld plane and the dual graph}\label{ssec:Drinfeld} In our previous discussion of the tensor networks on the dual graph, we constructed such a dual graph by realizing the tree (or a finite portion of the tree) embedded inside an ordinary plane. At first this can look rather artificial: the Bruhat--Tits tree is a nonarchimedean $p$-adic object, so a natural construction of associated tensor networks should exist entirely inside the $p$-adic world and should not depend on the choice of an embedding in an archimedean space like the plane. Indeed, we show in the following that it is in fact possible to realize the tensor networks described in the previous sections entirely in the $p$-adic setting, on the $p$-adic Drinfeld plane. Thus, while we continue to draw them in the ordinary plane for graphical convenience and simplicity, one should really think of these tensor networks as living on the Drinfeld plane.
More precisely, we describe here a notion of ``dual graph'' to an embedding of the Bruhat--Tits tree as a $1$-skeleton in the Drinfeld $p$-adic upper half plane, given by a choice of a lift of the natural projection $\Upsilon: \Omega \to T$ of the $p$-adic plane to the tree. We first describe a toy model based on a tubular neighbourhood of a tree in ordinary $3$-space, and then we explain how this model adapts to the case of the Drinfeld $p$-adic upper half plane. \subsubsection{An archimedean toy model} \label{ssec:toymodel} We discuss first a toy model in a simpler archimedean setting, where we consider a homogeneous tree $T$ of valence $q+1$ embedded in a $3$-dimensional Euclidean space and a $2$-dimensional surface ${\mathcal S}$ given by the boundary of a small tubular neighbourhood ${\mathcal N}(T)$ of the tree, ${\mathcal S}=\partial {\mathcal N}(T)$. In this section we will take $q$ to be a positive integral power of $p$. We identify ${\mathcal N}(T)$ with the disk bundle of the normal bundle and we denote by $\Pi: {\mathcal S} \to T$ the projection restricted to ${\mathcal S}$. Then, for almost all points $x$ in the tree $T$ the preimage $\Pi^{-1}(x)$ is a circle, while the preimage of the star $S(v)$ of half-edges around a vertex $v$ is a ``pair of pants'' figure with $q+1$ holes. Choose two lifts of the projection map $\Pi: {\mathcal S} \to T$, so that their images give two disjoint embeddings of the tree $T$ in ${\mathcal S}$. For example, take the sections so that the two trees cut each circle in the fiber of $\Pi$ in antipodal points. We call the two images $T$ and $T'$. Also fix a sufficiently small $\epsilon>0$ and a tubular neighbourhood ${\mathcal N}'_\epsilon(T')$ of size $\epsilon$ of the tree $T'$ inside ${\mathcal S}$. The chosen $\epsilon$ should be small enough that the distance on ${\mathcal S}$ between $\partial {\mathcal N}'_\epsilon(T')$ and $T$ is bounded below by a nonzero quantity, say greater than $3\epsilon$.
Let ${\mathcal C}$ denote the countable collection of curves on ${\mathcal S}$ given by ${\mathcal C} = \partial {\mathcal N}'_\epsilon(T')$. Then choose a sequence of $\epsilon_i>0$ whose sum converges with $\sum_i \epsilon_i <\epsilon$. We consider the tubular neighbourhoods ${\mathcal N}'_{\eta}(T')$ for each $\eta=\epsilon+\epsilon_i$. Let ${\mathcal C}_{\eta} =\partial {\mathcal N}'_{\eta}(T')$. All these curves are at distance at least $\epsilon$ from $T$. Also fix a real number $t$ with $0<t<1$, and for each edge $e$ of $T$, parameterized by the points $x_t(e)=(1-t)v + t v'$ with $v,v'$ the endpoint vertices, consider the circle $\Pi^{-1}(x_t(e))$ in ${\mathcal S}$. Note that it is possible to find such a $t$ so that all $\Pi^{-1}(x_t(e))$ are indeed circles. We construct a ``dual graph'' to the copy $T$ of the tree embedded in ${\mathcal S}$ as follows. Fix a base vertex $v_0$ in the tree $T$. Let $e_0, \ldots, e_q$ be the edges of $T$ adjacent to $v_0$. For each of these edges $e_i$ let $x_i$ be the point of intersection with the circle $S^1_i=\Pi^{-1}(x_t(e_i))$ constructed as above. Consider the path $\ell_i$ given by the two arcs $\gamma_{i,1}, \gamma_{i,2}$ of $S^1_i$ between the point $x_i$ and the two points $y_{i,1}, y_{i,2}$ of intersection between $S^1_i$ and ${\mathcal C}=\partial{\mathcal N}'_\epsilon(T')$, together with the two infinite curves $C_{i,1}, C_{i,2}$ in ${\mathcal C}$ that start at the points $y_{i,1}, y_{i,2}$ pointing in the direction of $e_i$; that is, $\ell_i =\gamma_{i,1}\cup \gamma_{i,2}\cup C_{i,1} \cup C_{i,2}$. We proceed in a similar way for an arbitrary edge $e$ in $T$. Let $e$ be an edge that is at distance $N$ from the root vertex $v_0$.
Then consider the curve $\ell_e =\gamma_{e,1}\cup \gamma_{e,2}\cup C_{e,1} \cup C_{e,2}$, where $\gamma_{e,j}$ are the two arcs along $S^1_e =\Pi^{-1}(x_t(e))$ connecting the point $x_e$ of intersection between $e$ and $S^1_e$ and the two points $y_{e,j}$ of intersection between $S^1_e$ and the ${\mathcal C}_\eta$ for $\eta=\epsilon+\epsilon_N$, and $C_{e,j}$ the infinite paths along ${\mathcal C}_\eta$ that start from $y_{e,j}$ in the direction of $e$. \begin{figure}[t] \centering \[ \musepic{\dualDrinfeld} \] \caption{The dual graph locus in the Drinfeld plane.} \label{fig:drinfelddual} \end{figure} Our ``dual graph'' ${\mathcal D}(T)$ of $T$ in ${\mathcal S}$ consists of the collection of edges $\ell_e$ with vertices given by their endpoints at infinity, modulo an equivalence relation: if two subarcs $C_{e,j}$ and $C_{e',j'}$ of two curves $\ell_e$ and $\ell_{e'}$ are always at a distance less than $\epsilon$ outside of a compact region in ${\mathcal S}$, then their endpoints at infinity are identified. This is shown in figure \ref{fig:drinfelddual}. \subsubsection{The $p$-adic plane} \label{ssec:pplane} The toy model considered above explains the heuristics of what we would like to construct on the Drinfeld plane. Indeed the Drinfeld plane $\Omega$, introduced in \cite{Dri}, can be thought of as a $p$-adic analog of the tubular neighbourhood ${\mathcal S}=\partial {\mathcal N}(T)$ of the Bruhat--Tits tree $T$. For ${\mathbb K}$ a finite extension of ${\mathbb Q}_p$ with residue field ${\mathbb F}_q$, for some $q=p^r$, the Drinfeld plane $\Omega$ can be identified with ${\mathbb P}^1({\mathbb C}_p) \smallsetminus {\mathbb P}^1({\mathbb K})$, or equivalently with the set of homothety classes of invertible ${\mathbb K}$-linear maps $\varphi: {\mathbb K}^2 \to {\mathbb C}_p$, $\varphi: (x,y) \mapsto x \zeta_0 + y \zeta_1$ for $(\zeta_0:\zeta_1)\in {\mathbb P}^1({\mathbb C}_p) \smallsetminus {\mathbb P}^1({\mathbb K})$.
It is endowed with a projection map $\Upsilon: \Omega \to T$ to the Bruhat--Tits tree of ${\mathbb K}$. Given two adjacent vertices $v,v'$ connected by an edge $e$ in $T$, parameterized by $e_t=(1-t)v +t v'$, for $0\leq t \leq 1$, the projection map satisfies (see section~2 of \cite{BoutCar}) \eqn{}{ \Upsilon^{-1}(v) &= \{ \zeta \in {\mathbb C}_p \,:\, |\zeta|\leq 1\} \smallsetminus \bigcup_{a\in {\mathcal O}_{\mathbb K}/\pi {\mathcal O}_{\mathbb K}} \{ \zeta \in {\mathbb C}_p \,:\, |\zeta - a| < 1 \} \cr \Upsilon^{-1}(v') &=\{ \zeta \in {\mathbb C}_p \,:\, |\zeta|\leq q^{-1} \} \smallsetminus \bigcup_{b \in \pi {\mathcal O}_{\mathbb K}/\pi^2 {\mathcal O}_{\mathbb K}} \{ \zeta \in {\mathbb C}_p \,:\, |\zeta - b| < q^{-1} \} } where $v=[M]$, $v'=[M']$ with $\pi M \subset M' \subset M$, and for $e_t = (1-t) v + t v'$, for $0<t<1$, along the edge $e$ \eqn{}{ \Upsilon^{-1}(e_t) =\{ \zeta \in {\mathbb C}_p \,:\, |\zeta|\leq q^{-t} \}\,. } One can therefore view $\Omega$ as an analog of the surface ${\mathcal S}$ with its decomposition into a collection of ``pairs of pants'', as discussed previously. More extensive discussions of the geometry of the Drinfeld plane can be found in \cite{BoutCar} and \cite{DasTeit}. Given the star $S(v)$ of a vertex $v$ (given by the vertex together with all its adjacent edges) in the Bruhat--Tits tree, consider the regions $\Sigma(v)=\Upsilon^{-1}(S(v))$ in the Drinfeld plane $\Omega$. The sets $\Sigma(v)$ are a covering of $\Omega$ with nerve $T$. A cell $\tau$ in the Bruhat--Tits tree is given by an edge together with its two adjacent vertices.
Given a cell $\tau$ corresponding to an edge $e$ we write $\Sigma(\tau):=\Sigma(v) \cap \Sigma(v') =\Upsilon^{-1}(\tau)$, with $\partial(e)=\{ v,v' \}$, given by \eqn{}{ \Sigma(\tau) = \{ \zeta \in {\mathbb C}_p \,:\, |\zeta|\leq 1\} \smallsetminus \bigcup_{a\in ({\mathcal O}_{\mathbb K} \smallsetminus \pi {\mathcal O}_{\mathbb K})/\pi {\mathcal O}_{\mathbb K}} \{ \zeta \in {\mathbb C}_p \,:\, |\zeta - a| < 1 \} \cr \smallsetminus \bigcup_{b \in \pi {\mathcal O}_{\mathbb K}/\pi^2 {\mathcal O}_{\mathbb K}} \{ \zeta \in {\mathbb C}_p \,:\, |\zeta - b| < q^{-1} \}\,. } It is known from \cite{SchStu} and \cite{DeSha} that the de Rham cohomology of the Drinfeld plane can be computed in terms of certain combinatorial harmonic forms on the Bruhat--Tits tree, with the map realizing this identification given by a $\PGL(2,{\mathbb K})$-equivariant residue map. The results of \cite{SchStu} and \cite{DeSha} in fact hold more generally for harmonic forms on higher rank Bruhat--Tits buildings and de Rham cohomology of higher rank Drinfeld symmetric spaces (the complement in ${\mathbb P}^n({\mathbb C}_p)$ of the ${\mathbb K}$-rational hyperplanes). In fact, the case of rank two, involving the $p$-adic plane and the Bruhat--Tits tree of $\PGL(2,{\mathbb K})$, was also discussed in \cite{Sch} (see also \cite{vdP}), with an extension to the case of quotients by $p$-adic Schottky groups, Mumford curves and quotients of the Bruhat--Tits tree. In the setting discussed in \cite{vdP} one identifies the holomorphic one-forms on the Drinfeld plane with currents on the Bruhat--Tits tree, via a residue map. A current on an oriented locally finite graph ${\mathcal G}$ is a map $\mu: E({\mathcal G}) \to {\mathbb Z}$ from the oriented edges of ${\mathcal G}$ to the integers satisfying the conditions \eqn{}{ \mu(\bar e)=-\mu(e)\,, } where $\bar e$ denotes the edge with the reverse orientation, and, for every vertex $v$ (with $s(e)$ denoting the source vertex of the oriented edge $e$), \eqn{}{ \sum_{e: s(e)=v} \mu(e) =0 \,.
} Currents form an abelian group, denoted by ${\mathcal C}({\mathcal G})$. One can also consider currents on a locally finite directed graph ${\mathcal G}$ with values in a field ${\mathbb K}$ of characteristic zero, by taking ${\mathcal C}({\mathcal G},{\mathbb K})={\mathcal C}({\mathcal G})\otimes_{\mathbb Z} {\mathbb K}$. There is an algebra of $p$-adic holomorphic functions on the Drinfeld upper half plane (see e.g.~\cite{Man}), which we denote by ${\mathcal O}(\Omega)$. There is a short exact sequence (Corollary 2.1.2 of \cite{vdP} and \cite{Sch} p.~225) relating currents ${\mathcal C}(T_{\mathbb K},{\mathbb K})$ on the Bruhat-Tits tree and $p$-adic holomorphic $1$-forms on the Drinfeld upper half plane \begin{equation}\label{Omegaseq} 0 \to {\mathcal O}(\Omega) \stackrel{d}{\to} \Omega^1(\Omega) \to {\mathcal C}(T_{\mathbb K},{\mathbb K}) \to 0\,. \end{equation} Thus, ${\mathbb K}$-valued currents on the Bruhat-Tits tree provide a combinatorial way of describing holomorphic $1$-forms modulo exact forms. The map that assigns a current on $T$ to a 1-form on $\Omega$ is the residue map \eqn{}{ \omega \mapsto \sum_i {\rm res}_{\partial D_i}(\omega) } where the $D_i$ are a collection of finitely many disjoint disks in ${\mathbb P}^1({\mathbb C}_p)$ such that their union contains ${\mathbb P}^1({\mathbb K})$. The group of currents ${\mathcal C}(T_{\mathbb K},{\mathbb K})$ can be identified with the group of finitely additive ${\mathbb K}$-valued measures on ${\mathbb P}^1({\mathbb K})=\partial T_{\mathbb K}$ with zero total mass $\mu({\mathbb P}^1({\mathbb K}))=0$, by the identification $\mu(U(e)):=\mu(e)$, for any edge $e$ in $T_{\mathbb K}$ with $U(e)\subset {\mathbb P}^1({\mathbb K})$ the clopen set consisting of ends of half infinite paths in the tree starting with $e$. 
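The two defining conditions of a current are straightforward to check on a finite piece of a graph. The helper below and its example on the star of a trivalent vertex (as for $p=2$) are our own illustration, not taken from the references; they encode antisymmetry under orientation reversal, $\mu(\bar e)=-\mu(e)$, and conservation $\sum_{e: s(e)=v}\mu(e)=0$ at interior vertices:

```python
def is_current(edges, mu, interior):
    """Check the defining conditions of a current on a finite piece of an
    oriented graph.  edges: ordered pairs (source, target), with both
    orientations of every edge present; mu: integer value on each oriented
    edge; interior: vertices at which conservation is required (on a
    truncated tree, the leaves are exempt)."""
    antisym = all(mu[(s, t)] == -mu[(t, s)] for (s, t) in edges)
    conserved = all(sum(mu[(s, t)] for (s, t) in edges if s == v) == 0
                    for v in interior)
    return antisym and conserved

# a current on the star of a trivalent vertex 'o':
edges = [('o', 'a'), ('a', 'o'), ('o', 'b'), ('b', 'o'), ('o', 'c'), ('c', 'o')]
mu = {('o', 'a'): 1, ('a', 'o'): -1,
      ('o', 'b'): 1, ('b', 'o'): -1,
      ('o', 'c'): -2, ('c', 'o'): 2}
assert is_current(edges, mu, interior={'o'})
assert not is_current(edges, mu, interior={'o', 'a'})   # leaves fail conservation
```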
In turn we can identify the set of these measures with \eqn{}{ {\rm Ker} (\Phi: {\rm Hom}_{\mathbb Z} ({\mathcal C}^\infty({\mathbb P}^1({\mathbb K})),{\mathbb Z}) \to {\mathbb K}) } by identifying $\mathbb{K}$-valued measures as functionals acting by integration on locally constant functions, with the zero mass condition represented by the vanishing of $\Phi: \mu \mapsto \mu(1)$. This gives the case of rank two of \cite{SchStu} and \cite{DeSha}, with the identification of the first de Rham cohomology of the Drinfeld plane \eqn{}{ H^1_{dR}(\Omega) \cong {\rm Ker} (\Phi: {\rm Hom}_{\mathbb Z} ({\mathcal C}^\infty({\mathbb P}^1({\mathbb K})),{\mathbb Z})\to {\mathbb K})\,. } \subsubsection{Other models of $p$-adic planes} There are other possible models of the $p$-adic plane in which one can directly use metric properties. The version considered in section~2.2.8 of \cite{Gel} has the advantage that it carries a hyperbolic metric, with geodesics behaving in many ways (e.g.~the trace formula in \cite{Yas}) like those of the usual hyperbolic plane, but it does not have a nice relation to the Bruhat--Tits tree. The version of \cite{Guill} has a projection to the Bruhat--Tits tree and is defined so as to have metric properties, but, unlike the Drinfeld plane, it does not generalize to higher ranks. For these reasons, especially in view of developing higher rank generalizations of the $p$-adic AdS/CFT correspondence based on Bruhat--Tits buildings, we prefer to use the Drinfeld plane model of a $p$-adic plane. \subsubsection{The dual locus in the Drinfeld plane}\label{ssec:dualpplane} Due to the topological nature of $p$-adic spaces, we cannot quite literally perform the same construction described in the toy model above; instead we aim to identify a locus in $\Omega$ that has properties similar to those of the dual graph ${\mathcal D}(T)$ of the toy model, although it will not be a graph.
As in the previous case, consider two sections $s,s': T \to \Omega$ that lift the projection $\Upsilon: \Omega \to T$, with disjoint images $s(T)$ and $T'=s'(T)$. Also consider a collection of nested sets in $\Omega$ (with strict inclusions) \eqn{}{ T'\subset {\mathcal N}_0^-\subset {\mathcal N}_0^+ \subset {\mathcal N}_1^- \subset {\mathcal N}_1^+ \subset \cdots \subset {\mathcal N}_k^- \subset {\mathcal N}_k^+ \subset \cdots } and such that ${\mathcal N}=\cup_{k,\pm} {\mathcal N}_k^\pm$ is disjoint from $s(T)$. We require that the sets ${\mathcal N}_k^\pm$ have compatible projection maps $\Pi_k^\pm$ to $T'$ with $\Upsilon \circ \Pi_k^\pm = \Upsilon$ and $\Pi_k^- |_{{\mathcal N}^+_{k-1}}=\Pi_{k-1}^+$ and $\Pi_k^+|_{{\mathcal N}_k^-}=\Pi_k^-$. We also require that each ${\mathcal N}_k^\pm$ has trivial cohomology. The explicit description of the de Rham cohomology of $\Omega$ in terms of residues and harmonic forms on the Bruhat--Tits tree, recalled above from \cite{Sch,SchStu}, shows that such regions ${\mathcal N}_k^\pm$ can be constructed, by showing that the restrictions of the holomorphic forms in $\Omega^1(\Omega)$ to ${\mathcal N}$ have trivial residues. Since the residues of the holomorphic forms on $\Omega$ are all supported along circles $\partial D$ that are boundaries of the ``pairs of pants'' regions, it suffices to ensure that the region ${\mathcal N}$ does not contain any of these circles. For instance, one can construct the ${\mathcal N}_k^\pm$ by choosing nested sets $S_k^\pm$ of sections of the projection $\Upsilon: \Omega \to T$ containing the section $s'$ with $s\notin \cup_{k,\pm} S^\pm_k$. Given an edge $e$ in $T$ and a chosen base vertex $v_0$, orient all the edges of $T$ away from $v_0$ and let $T(e)\subset T$ be the subtree with root vertex $v=s(e)$, the source vertex of the oriented edge $e$, consisting of all vertices and edges of $T$ that are reachable from $s(e)$ along an oriented path. Let $\Omega(e):=\Upsilon^{-1}(T(e))$.
Fix a $t$ with $0<t<1$ and let $\Sigma_t =\Upsilon^{-1}(e_t)$ for the point $e_t=(1-t)v + t v'$ on the edge $e$. Then to an edge $e$ in the Bruhat--Tits tree at a distance $N$ from a fixed root vertex $v_0$ we associate a region $L_e$ obtained as the union $L_e = \Gamma_e \cup {\mathcal C}_{N,e}$, where $\Gamma_e$ is the region of $\Omega$ given by \eqn{}{ \Gamma_e = \Sigma_t \smallsetminus (\Sigma_t \cap {\mathcal N}_N^-) } and ${\mathcal C}_{N,e}={\mathcal N}_{N,e}^+\smallsetminus {\mathcal N}_{N,e}^-$, where ${\mathcal N}_{k,e}^\pm$ is the region given by \eqn{}{ {\mathcal N}_{k,e}^\pm={\mathcal N}_k^\pm \cap \Omega(e)\,. } The $L_e$ are mutually disjoint regions in $\Omega$, with endpoints at the boundary at infinity ${\mathbb P}^1({\mathbb K})$ of $\Omega$. We define the analog of the ``dual graph'' of our Archimedean toy model to be the region \eqn{}{ {\mathcal D}(T) := \cup_e L_e \subset \Omega\,. } This depends on the choice of the sections $s,s'$, of $t$, and of the regions ${\mathcal N}_k^\pm$. Just as in the Archimedean setting we think of tensor networks supported on the dual graph, here we consider a tensor network supported in the region ${\mathcal D}(T)$, with ``bonds'' along the loci $L_e$. A geodesic in the tree $s(T)$ cuts a number of such bonds equal to its length in number of edges. We place vertices of the ``dual graph'' ${\mathcal D}(T)$ at its limit points at infinity in ${\mathbb P}^1({\mathbb K})$. These are also limit points of the geodesics in the Bruhat--Tits tree, by construction. \subsection{Genus one case: Tate--Mumford elliptic curves} \label{ssec:geometryTATE} As discussed earlier, in the genus one case, which gives the $p$-adic BTZ black hole, we consider a rank one $p$-adic Schottky group $\Gamma \subset \PGL(2,{\mathbb K})$, generated by a single hyperbolic element $\gamma$ with two fixed points in the boundary ${\mathbb P}^1({\mathbb K})$. We can always identify these fixed points with the points $\{ 0, \infty \}$.
Instead of the Bruhat--Tits tree we then consider the quotient graph $T/\Gamma$. This consists of a polygonal ring with infinite trees attached to the vertices, as illustrated in figure \ref{fig:padicBTZ}. The Mumford curve $X_\Gamma({\mathbb K}) = \partial T/\Gamma$ is a Tate--Mumford $p$-adic elliptic curve with Tate uniformization $X_\Gamma({\mathbb K}) =({\mathbb P}^1({\mathbb K})\smallsetminus \{ 0, \infty \})/\Gamma$. The Schottky group $\Gamma=\gamma^{\mathbb Z}$ also acts on the Drinfeld plane $\Omega={\mathbb P}^1({\mathbb C}_p)\smallsetminus {\mathbb P}^1({\mathbb K})$ and we can consider the quotient $\Omega/\Gamma$. Since the projection map $\Upsilon: \Omega \to T$ is equivariant with respect to the $\PGL(2,{\mathbb K})$ action, hence with respect to $\Gamma$, we obtain an induced projection $\Upsilon: \Omega/\Gamma \to T/\Gamma$. By choosing a lift of this projection we can embed a copy of the graph $T/\Gamma$ inside $\Omega/\Gamma$ as a $1$-skeleton, with boundary at infinity given by the ${\mathbb K}$-rational points of the Tate--Mumford elliptic curve, $\partial T/\Gamma = \partial \Omega/\Gamma = X_\Gamma ({\mathbb K})$.
\begin{figure} \centering \begin{tikzpicture}[scale=1.4] \tikzstyle{vertex}=[draw,scale=0.25,fill=black,circle] \tikzstyle{ver2}=[draw,scale=0.4,fill=red,circle] \tikzstyle{ver3}=[]; \newcommand{0.2}{0.2} \newcommand{0.65ex}{0.65ex} \foreach \x in {0,...,5} { \coordinate (a\x) at (\x*60:1); \foreach \z in {-1,1} { \coordinate (b\x_\z) at ($ (a\x) + (\x*60 + \z*30:1) $) ; \draw[thin, color=red, name path = tree] (b\x_\z) -- (a\x); \draw[thick, dotted] ($ ($ (b\x_\z)!.3!(a\x) $) !0.65ex!90:(a\x)$) to[out=\x*60 + \z*30 , in = \x*60 + \z*30 , looseness = 1.3] ($ ($ (b\x_\z)!.3!(a\x) $) !0.65ex!-90:(a\x)$) ; \draw[thick, name path=circ] ($ ($ (b\x_\z)!.3!(a\x) $) !0.65ex!90:(a\x)$) to[out=\x*60 + \z*30 + 180 , in = \x*60 + \z*30 + 180 , looseness = 1.3] ($ ($ (b\x_\z)!.3!(a\x) $) !0.65ex!-90:(a\x)$) ; \path [name intersections={of=circ and tree}]; \draw (intersection-1) node[vertex] {}; }; }; \foreach \x/\y in {0/1,1/2,2/3,3/4,4/5,5/0} { \draw[thin,color=red, name path = loop] (a\x) -- (a\y); \coordinate (a\x\y) at ($ (a\x)!.2!(a\y) $); \coordinate (a\y\x) at ($ (a\x)!.8!(a\y) $); \coordinate (U\x\y) at ($ (a\x\y)!0.65ex!90:(a\x) $); \coordinate (U\y\x) at ($ (a\y\x)!0.65ex!-90:(a\y) $); \coordinate (L\x\y) at ($ (a\x\y)!0.65ex!-90:(a\x) $); \coordinate (L\y\x) at ($ (a\y\x)!0.65ex!90:(a\y) $); \draw[thick, name path=circ] ($ (a\x)!.5!(a\y) !0.65ex!90:(a\y)$) to[out=\y*60 + 60 , in = \y*60 + 60, looseness = 1.3] ($ ($ (a\x)!.5!(a\y) $) !0.65ex!90:(a\x) $); \draw[thick, dotted] ($ (a\x)!.5!(a\y) !0.65ex!90:(a\y)$) to[out=\y*60 - 120, in = \y*60 - 120, looseness = 1.3] ($ ($ (a\x)!.5!(a\y) $) !0.65ex!90:(a\x) $); \path [name intersections={of=circ and loop}]; \draw (intersection-1) node[vertex] {}; \foreach\z in {-1,1} { \coordinate (V\x_\z) at ($ ($ (a\x)!.35!(b\x_\z) $) !0.65ex!90*\z:(a\x) $); \coordinate (O\x_\z) at ($ ($ (a\x)!.2!(b\x_\z) $) !0.65ex!-90*\z:(a\x) $); \coordinate (Q\x_\z) at ($ ($ (a\x)!.8!(b\x_\z) $) !0.65ex!-90*\z:(a\x) $); \coordinate (W\x_\z) at ($ 
($ (a\x)!.8!(b\x_\z) $) !0.65ex!90*\z:(a\x) $); \draw[thick] (V\x_\z) -- (W\x_\z); \draw[thick] (O\x_\z) -- (Q\x_\z); }; }; \foreach \x/\y/\z in {0/1/2,1/2/3,2/3/4,3/4/5,4/5/0,5/0/1} { \draw[thick] (V\x_1) to[in = \x*60 - 30 + 180, out = \x*60 + 30 + 180, looseness = 1.5] (V\x_-1); \draw[thick] (L\x\y) -- (L\y\x); \draw[thick] (U\x\y) -- (U\y\x); \draw[thick] (U\x\y) to[out = \x*60 - 60, in = \x*60 - 150, looseness = 1.5] (O\x_1); \draw[thick] (O\y_-1) to[out = \y*60 + 150, in = \y*60 + 60, looseness = 1.5] (U\y\x); \draw[thick] (L\y\x) to[out = \y*60 + 60, in = \y*60 - 60, looseness = 1.5] (L\y\z); }; \foreach \n in {0, 1, 2, 3, 4, 5} { \foreach \z in {-1,1} { \foreach \x in {-1,0,1} { \coordinate (d\n\x\z) at ($ (b\n_\z) + (\n*60 + \z*15 + \x*30:1) $) ; \draw[thin,color=red,name path= branch] (d\n\x\z) -- (b\n_\z); \draw[thin, color=red,name path=outer] (d\n\x\z) -- ($ (b\n_\z) + (\n*60 + \z*15 + \x*30:1.15) $); \draw[thick,name path= circ] ($ (d\n\x\z)!0.65ex!90:(b\n_\z) $) to[out=\n*60 + \z*15 + \x*30 + 180, in = \n*60 + \z*15 + \x*30 + 180, looseness=1.25] ($ (d\n\x\z)!0.65ex!-90:(b\n_\z) $) ; \draw[thick,name path=undercirc] ($ (d\n\x\z)!0.65ex!90:(b\n_\z) $) to[out=\n*60 + \z*15 + \x*30, in = \n*60 + \z*15 + \x*30, looseness=1.25] ($ (d\n\x\z)!0.65ex!-90:(b\n_\z) $) ; \path [name intersections={of=circ and branch,by=Bl}]; \draw (Bl) node[vertex] {}; \path [name intersections={of=outer and undercirc,by=Wh}]; \draw (Wh) node[scale=0.3,fill=white,circle] {}; \draw[thin, color=red] (d\n\x\z) -- ($ (b\n_\z) + (\n*60 + \z*15 + \x*30:1.15) $); \draw[thin, color=red, dotted] ($ (b\n_\z) + (\n*60 + \z*15 + \x*30:1.15) $) -- ($ (b\n_\z) + (\n*60 + \z*15 + \x*30:1.3) $); }; \foreach \x in {-1,1} { \draw[thick] ($ (d\n\x\z)!0.65ex!90*\x:(b\n_\z) $) to[out=\n*60 + \z*15 + \x*30 + 180, in = \n*60 + \z*15 + 180, looseness=5.5] ($ (d\n0\z)!0.65ex!-90*\x:(b\n_\z) $) ; \draw[thick] ($ (d\n\x\z)!0.65ex!-90*\x:(b\n_\z) $) -- ($ ($(d\n\x\z)!.8!(b\n_\z)$) !0.65ex!-90*\x:(b\n_\z) 
$); \draw[thick] ($ ($ (b\n_\z)!0.2!(a\n) $) !0.65ex!-90*\x:(a\n) $) to[in= \n*60 + \z*15 + \x*30 + 180, out = + \n*60 + \z*30, looseness = 1.1] ($ ($(d\n\x\z)!.8!(b\n_\z)$) !0.65ex!-90*\x:(b\n_\z) $); }; }; }; \end{tikzpicture} \caption{A sketch of the quotient of the Drinfeld plane associated to a genus-one Mumford curve. The details of the bonds of the dual graph are not shown; the reader should compare figure~\ref{fig:drinfelddual}.} \label{fig:mumforddual} \end{figure} \subsubsection{Dual locus for the Tate--Mumford curve} The same construction described above of a ``dual locus" in the Drinfeld plane to a lift of the projection to the Bruhat--Tits tree, realizing a copy of $T$ as a $1$-skeleton in the $p$-adic plane, can be adapted to the genus one case. We illustrate here how the construction changes using the toy model of a tubular neighbourhood of a tree in $3$-space, which is easier to show visually. The corresponding construction on the $p$-adic Drinfeld plane itself then proceeds as in the previous case following the model of the tubular neighbourhood of the tree. In this case we consider the tubular neighbourhood of the geodesic $L_{\{0,\infty\}}$ in the tree, and we fix a choice of a base point on this tubular neighbourhood, away from both chosen lifts of the tree via disjoint preimages of the projection. We construct the dual graph by considering a family of loops from the chosen base point around each loop $\Pi^{-1}(x_t(e_i))$ for edges along $L_{\{0,\infty\}}$, while for all other edges we repeat the construction as in the genus zero case. This gives a collection of curves whose image in the quotient lies on the surface illustrated in figure \ref{fig:mumforddual}. 
Consider a fundamental domain ${\mathcal F}$ of the action of the group $\Gamma$ that contains the chosen base point, and a collection of curves contained in this fundamental domain, so that the quotient looks like the curves in figure \ref{fig:drinfelddual} drawn on the surface of figure \ref{fig:mumforddual}. These determine the ``dual graph" on the quotient by $\Gamma$. \subsubsection{A measure on the Tate--Mumford curve} In the genus-zero case, the boundary ${\mathbb P}^1({\mathbb K})$ of the Bruhat--Tits tree $T_{\mathbb K}$ of a finite extension ${\mathbb K}$ of ${\mathbb Q}_p$ carries a measure that is the Patterson--Sullivan measure for the action of $\PGL(2,{\mathbb K})$ on $T_{\mathbb K}$, which has the full boundary ${\mathbb P}^1({\mathbb K})$ as limit set. It is known by the general construction of \cite{Coor} that any Gromov-hyperbolic space with a properly discontinuous action of an isometry group determines a Patterson--Sullivan measure on the hyperbolic boundary, with support on the limit set of the group, and quasi-conformal of dimension equal to the Hausdorff dimension of the limit set. In particular, in the case of a $p$-adic Schottky group $\Gamma$ of rank at least two acting on the Bruhat--Tits tree and its boundary, one obtains in this way a Patterson--Sullivan measure supported on the limit set $\Lambda_\Gamma\subset {\mathbb P}^1({\mathbb K})$ of the Schottky group. The properties of this Patterson--Sullivan measure are used to prove rigidity results for Mumford curves, \cite{CornKool}. However, notice that the Patterson--Sullivan measure lives on the limit set $\Lambda_\Gamma$, which is the complement of the boundary region that determines the Mumford curve $X_\Gamma({\mathbb K}) = ( {\mathbb P}^1({\mathbb K}) \smallsetminus \Lambda_\Gamma )/\Gamma$. Thus, unlike in the genus-zero case, the natural construction of a Patterson--Sullivan measure does not produce a measure on the Mumford curve, but only a measure on the limit set.
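For orientation, in the simplest case where the limit set is all of ${\mathbb P}^1({\mathbb K})$, the Patterson--Sullivan measure can be modelled by the rotation-invariant ``visual" measure on the boundary of the $(q+1)$-regular tree. The following is a toy numerical sketch of this standard picture (not a computation from \cite{Coor}): a boundary ball determined by a path of $n$ edges from a fixed root has mass $\tfrac{1}{q+1}\, q^{-(n-1)}$, and these masses are consistent under refinement.

```python
def ball_mass(q: int, n: int) -> float:
    """Mass of the boundary ball of the (q+1)-regular tree determined by
    a path of n >= 1 edges from a fixed root vertex, for the visual
    measure normalized to total mass 1."""
    return (1.0 / (q + 1)) * q ** (-(n - 1))

q = 3                              # e.g. the Bruhat--Tits tree of Q_3
total = (q + 1) * ball_mass(q, 1)  # the q+1 depth-1 balls cover the boundary
refined = q * ball_mass(q, 2)      # each ball splits into q sub-balls
```

The two consistency checks (the depth-one balls cover the boundary with total mass one, and each ball's mass equals the sum of the masses of its refinements) express the fact that these cylinder masses define a genuine Borel measure on the boundary.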
Moreover, in the particular case of genus one, even this measure on the limit set would be uninteresting, since in the genus one case the limit set only consists of two points (which we can always assume to be $0$ and $\infty$), rather than a Cantor-set-type object as in the higher genus cases. One can also see that the other interesting group action that is present in the case of Mumford curves, namely the action of the automorphism group of the curve, also fails to give rise to an interesting Patterson--Sullivan measure (except in genus zero, where the automorphism group of the projective line is $\PGL(2,{\mathbb K})$ and one obtains again the Patterson--Sullivan measure on ${\mathbb P}^1({\mathbb K})$). Indeed, in the case of genus at least two the automorphism group ${\rm Aut}(X)$ of the Mumford curve $X$ is a finite group, so the limit set is empty and we do not have a Patterson--Sullivan measure supported on the Mumford curve $X$ itself. In the case of genus one (the BTZ black hole) the automorphism group of the elliptic curve is a semidirect product of the elliptic curve $E$ itself as a group (acting on itself by translations) by the automorphism group ${\rm Aut}(E)$ of this group. In particular, the action on the Bruhat--Tits tree of arbitrary translations along the geodesic with endpoints $\{0,\infty \}$ induces automorphisms of the Mumford curve, which act on the infinite graph $T/\Gamma$ and its boundary $\partial T/\Gamma=X_\Gamma$ as rotations of the central polygonal ring and of all the outgoing trees attached to it. The change of orientation that exchanges the endpoints $\{0,\infty \}$ also induces a self-map of $T/\Gamma$ and its boundary $X_\Gamma$.
Again we do not obtain a non-trivial limit set on the boundary Mumford curve $X({\mathbb K})=({\mathbb P}^1({\mathbb K})\smallsetminus \{ 0, \infty \})/\Gamma$, hence we cannot just replace the Patterson--Sullivan measure on ${\mathbb P}^1({\mathbb K})$ with a similar Patterson--Sullivan measure on the Mumford curves $X_\Gamma({\mathbb K})$ of genus at least one. However, for the genus one case of Tate--Mumford elliptic curves that we are mainly interested in here, it is possible to define a measure on $X_\Gamma({\mathbb K})$ induced by the Patterson--Sullivan measure on ${\mathbb P}^1({\mathbb K})$. Consider the geodesic $L_{\{ 0, \infty \}}$ in the Bruhat--Tits tree $T$ that connects the fixed points $\{ 0, \infty \}$ of the Schottky group. Fix a fundamental domain ${\mathcal F}_\Gamma$ of the action of the Schottky group $\Gamma \simeq {\mathbb Z}$ on $T$. The intersection ${\mathcal F}_\Gamma\cap L_{\{ 0, \infty \}}$ consists of a finite set of vertices in bijective correspondence with the vertices of the central polygon in the graph $T/\Gamma$. There are then two main choices for how to construct a measure on $X_\Gamma({\mathbb K})$ using the Patterson--Sullivan measure on ${\mathbb P}^1({\mathbb K})$. The first choice generates a measure on $X_\Gamma({\mathbb K})$ that is invariant under the automorphisms of $X_\Gamma$ induced by arbitrary translations along $L_{\{ 0, \infty \}}$, while the second one does not have this invariance property. For the first construction, fix a choice of a root vertex $v_0$ in the tree $T$ contained in ${\mathcal F}_\Gamma\cap L_{\{ 0, \infty \}}$, and consider the tree $T_0$ stemming from $v_0$ with first edges the $q-1$ directions at $v_0$ that are not along $L_{\{ 0, \infty \}}$.
Let $\Omega_0({\mathbb K}) \subset {\mathbb P}^1({\mathbb K})$ be the boundary region $\Omega_0({\mathbb K})=\partial T_0$, endowed with the restriction $\mu_0 = \mu |_{\Omega_0}$ of the Patterson--Sullivan measure $\mu$ on ${\mathbb P}^1({\mathbb K})$. Every other subtree of the Bruhat--Tits tree that has root vertex on $L_{\{ 0, \infty \}}$ and first edges not in the direction of $L_{\{ 0, \infty \}}$ is obtained from $T_0$ via the action of a translation along $L_{\{ 0, \infty \}}$. We can endow the boundary region of these trees with copies of the same measure $\mu_0$. In this way we obtain a measure on ${\mathbb P}^1({\mathbb K})\smallsetminus \{ 0,\infty\}$ that has infinite total mass and that is invariant under arbitrary translations along $L_{\{ 0, \infty \}}$. Since it is in particular invariant under the action of the rank one Schottky group $\Gamma$ with limit set $\{ 0, \infty \}$ it descends to a measure on $X_\Gamma= ({\mathbb P}^1({\mathbb K})\smallsetminus \{ 0,\infty\})/\Gamma$. This measure on $X_\Gamma$ has finite total mass, since it consists of finitely many copies of $\mu_0$ (one for each tree stemming from one of the vertices of the central polygon of $T/\Gamma$), hence we can normalize it to a probability measure on $X_\Gamma$ which is invariant under the automorphisms induced by translations along $L_{\{ 0, \infty \}}$ and also by orientation reversal. The second construction is similar, but instead of considering the tree $T_0$ stemming from the root vertex $v_0$ along the directions complementary to $L_{\{ 0, \infty \}}$, we consider now the forest $T_{{\mathcal F}}$ which is the disjoint union of the trees $T_v$ stemming from the vertices in ${\mathcal F}_\Gamma\cap L_{\{ 0, \infty \}}$. We denote by $\Omega_{\mathcal F}$ the corresponding boundary region $\Omega_{\mathcal F}=\partial T_{{\mathcal F}}\subset {\mathbb P}^1({\mathbb K})$. 
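The two constructions can be contrasted in a small numerical sketch (the subtree masses below are hypothetical placeholders, not values of an actual Patterson--Sullivan measure): the first construction weights the trees stemming from the central polygon uniformly, while the second keeps their generally unequal restricted masses before normalizing.

```python
import numpy as np

def measure_copies(num_vertices):
    """First construction: one copy of mu_0 per central-polygon vertex,
    normalized, giving a uniform weight to the stemming trees."""
    return np.full(num_vertices, 1.0 / num_vertices)

def measure_restriction(subtree_masses):
    """Second construction: restrict mu to Omega_F and normalize, so the
    stemming trees keep their (generally unequal) relative masses."""
    m = np.asarray(subtree_masses, dtype=float)
    return m / m.sum()

# hypothetical masses of the boundaries of the four trees stemming from
# the vertices of a central polygon of length 4
masses = [0.4, 0.2, 0.1, 0.05]
w_uniform = measure_copies(4)
w_restricted = measure_restriction(masses)
```

Both outputs are probability vectors on the set of polygon vertices, but only the second remembers which vertex a boundary point lies over, which is the feature used in the text.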
The normalized restriction $\mu_{{\mathcal F}}$ of the Patterson--Sullivan measure $\mu$ on ${\mathbb P}^1({\mathbb K})$ to the region $\Omega_{\mathcal F}$ induces a $\Gamma$-invariant measure on ${\mathbb P}^1({\mathbb K}) \smallsetminus \{ 0, \infty \}$ of infinite total mass, and a probability measure on the quotient $X_\Gamma({\mathbb K})= ({\mathbb P}^1({\mathbb K})\smallsetminus \{ 0,\infty\})/\Gamma$. While the first construction gives a more ``symmetric" measure on $X_\Gamma({\mathbb K})$, the symmetry under arbitrary translations along $L_{\{ 0, \infty \}}$ has the disadvantage that the boundary measure no longer keeps track of geodesic paths along the central polygonal graph in $T/\Gamma$. The second measure is instead more useful for our purposes: while invariance under translations in $\Gamma$ means that the measure descends to the quotient $X_\Gamma({\mathbb K})$, so that it does not detect the number of times that a path in the bulk $T/\Gamma$ wraps around the central polygon, it still distinguishes the number of polygon edges traversed modulo the total length of the polygon. \subsection{AF algebras and limits of density matrices} \label{AFALGEBRA} Our construction of density matrices using the dual graph, as described in sections~\ref{GENUSZERO}--\ref{COMPUTATION}, is based on fixing a level in the Bruhat--Tits tree, namely considering only the vertices that are at distance at most $n$ steps from a fixed root vertex (which is related to the UV cutoff parameter $\Lambda$ in definition \ref{def:cutoff}). This determines in turn the rank of the tensors in the tensor network and the number of dangling legs at the vertices of the dual graph. In order to consider the entire Bruhat--Tits tree, we need to perform a limiting procedure over this construction at finite levels. This means considering limits, in the appropriate sense, of density matrices of increasing ranks. This limiting procedure can be made precise in the setting of AF-algebras and states.
We now describe this briefly. A Bratteli diagram \cite{Bratteli} is an infinite directed graph with vertex set $V=\cup_{n=0}^\infty V_n$ and edge set $E=\cup_{n=1}^\infty E_n$, where edges $e\in E_n$ have source and target $s(e)\in V_{n-1}$ and $t(e)\in V_n$. We assume here that each $V_n$ and $E_n$ is a finite set and that each vertex $v\in V_n$ emits at least one edge and, when $n\geq 1$, also receives at least one edge. Each vertex $v\in V$ is labelled by a positive integer $N_v \in {\mathbb N}$, with the property that the number of edges $N_{v,v'}=\# \{ e\in E_n \,:\, s(e)=v,\, t(e)=v'\}$, for given $v\in V_{n-1}$ and $v'\in V_n$, satisfies the estimate $N_v \cdot N_{v,v'}\leq N_{v'}$. Equivalently, we can consider only diagrams for which there is at most a single edge $e$ between two given vertices $v,v'$, decorated with a multiplicity $N_{v,v'}\in {\mathbb N}$. We will work with diagrams without multiple edges and with both vertex and edge multiplicities $N_v$, $N_{v,v'}$. A finite dimensional complex $C^*$-algebra is a direct sum $\oplus_i M_{N_i}({\mathbb C})$ of matrix algebras (Wedderburn theorem) and $C^*$-algebra homomorphisms between them are completely specified by assigning multiplicities on the matrix algebra components. Thus, a direct system $(A_n, \varphi_n)$ of finite dimensional $C^*$-algebras and injective homomorphisms $\varphi_n: A_{n-1}\to A_n$ between them can be completely described by a Bratteli diagram with $V_n$ the set of matrix components and $N_v$ the multiplicities, $A_n=\oplus_{v\in V_n} M_{N_v}({\mathbb C})$. The edges $e_{v,v'}\in E_n$ and their multiplicities $N_{v,v'}$ then uniquely specify the injective map $\varphi_n : A_{n-1}\to A_n$ by letting $N_{v,v'}$ be the multiplicity of $M_{N_v}({\mathbb C})$ into $M_{N_{v'}}({\mathbb C})$. Thus, Bratteli diagrams provide a very convenient graphical way of describing direct limits $A=\varinjlim_n (A_n,\varphi_n)$ of finite dimensional $C^*$-algebras.
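The dimension bookkeeping in a Bratteli diagram can be checked mechanically. The following small sketch uses made-up multiplicities (not a diagram occurring in this paper): for a unital embedding the labels satisfy $N_{v'}=\sum_v N_v\, N_{v,v'}$, which in particular implies the per-edge estimate $N_v\, N_{v,v'}\leq N_{v'}$ assumed above.

```python
import numpy as np

# Hypothetical two levels of a Bratteli diagram: blocks M_2 and M_3 at
# level n-1, with edge multiplicities N_{v,v'} stored in M[v, w].
N_prev = np.array([2, 3])
M = np.array([[1, 2],
              [0, 1]])

# For a unital embedding the level-n labels are forced:
# N_{v'} = sum_v N_v * N_{v,v'}
N_next = M.T @ N_prev

def embedding_ok(N_prev, M, N_next):
    """Check the estimate N_v * N_{v,v'} <= N_{v'} for every edge."""
    return all(N_prev[v] * M[v, w] <= N_next[w]
               for v in range(len(N_prev))
               for w in range(len(N_next))
               if M[v, w] > 0)

# dimension of A_n = direct sum of M_{N_v}(C): sum of N_v**2
dim_A_next = int((N_next**2).sum())
```

Here the level-$n$ algebra is $M_2({\mathbb C})\oplus M_7({\mathbb C})$, of linear dimension $4+49=53$.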
The $C^*$-algebras that can be obtained as such limits are called AF-algebras. A particular case of AF-algebras is given by the uniformly hyperfinite algebras, or UHF-algebras. These are direct limits of sequences $(A_n,\varphi_n)$ where the morphisms $\varphi_n$ are unit-preserving. This in particular implies that the restrictions to matrix blocks $M_{N_v}({\mathbb C}) \to M_{N_{v'}}({\mathbb C})$ satisfy $N_v\cdot N_{v,v'}=N_{v'}$, namely the block $M_{N_v}({\mathbb C})$ is mapped into $M_{N_{v'}}({\mathbb C})$ with multiplicity $N_{v'}/N_v$. A state on a $C^*$-algebra $A$ is a continuous linear functional $\omega: A \to {\mathbb C}$ which satisfies the positivity condition $\omega(a^* a)\geq 0$ for all $a\in A$, and is normalized, $\omega(1)=1$, if the algebra is unital. In other words, it is the noncommutative analog of a measure. When $A$ is finite dimensional, states are given by density matrices $\rho$ with $\omega(a)={\rm Tr}(\rho\, a)$. If a $C^*$-algebra $A$ is an AF-algebra, obtained as a direct limit $A=\varinjlim_n (A_n,\varphi_n)$ corresponding to a Bratteli diagram ${\mathbb B}={\mathbb B}(A_n,\varphi_n)$, in general we can describe states on $A$ in terms of density matrices $\rho_n$ on the finite dimensional algebras $A_n$ and a compatibility condition on the diagram ${\mathbb B}$, \cite{Evans}. We consider here the case of a direct system of finite dimensional algebras $(A_n,\varphi_n)$ associated to the Bratteli diagram ${\mathbb B}$, with $A_n=\oplus_{v\in V_n} M_{N_v}({\mathbb C})$.
The associated convex set of density matrices ${\mathcal M}(A_n)$ describing states on $A_n$ can be described as \eqn{}{ {\mathcal M}(A_n)=\left\{ \sum_{v\in V_n} \lambda_v \rho_v\,:\, \lambda=(\lambda_v)\in \Sigma_{V_n},\,\, \rho_v \in {\mathcal M}_{N_v} \right\} } where $\Sigma_{V_n}=\{ \lambda=(\lambda_v)\,:\, \lambda_v\geq 0, \, \sum_v \lambda_v=1\}$ is the simplex on the set $V_n$ and ${\mathcal M}_N=\{ \rho\in M_N({\mathbb C})\,:\, \rho=\rho^*, \, \rho\geq 0,\, {\rm Tr}(\rho)=1\}$ is the set of density matrices of rank $N$. The density matrices $\rho\in {\mathcal M}(A_n)$ have the same matrix block decomposition as the elements of $A_n$. Let ${\mathcal M}(A_n)^0$ denote the set of $\omega \in {\mathcal M}(A_n)$ such that $\omega(e_n)\neq 0$, for the idempotent $e_n=\varphi_n(1)$. In the case of a UHF-algebra $e_n=1$, hence this condition is always satisfied. One can then define maps $R_n: {\mathcal M}(A_n)^0 \to {\mathcal M}(A_{n-1})^0$ by setting \eqn{}{ R_n(\omega)(a_{n-1})=\frac{\omega(\varphi_n(a_{n-1}))}{\omega(e_n)}\,. } This determines a projective system $({\mathcal M}(A_n)^0, R_n)$ with ${\mathcal M}(A)^0$ the inverse limit. For $\psi_n : A_n \to A$ the maps to the direct limit, one has maps $\tilde R_n: {\mathcal M}(A)^0\to {\mathcal M}(A_n)^0$ given by $\tilde R_n (\omega)(a_n)=\omega(\psi_n(a_n))/\omega(\psi_n(1))$. Thus, we can identify elements in the projective limit ${\mathcal M}(A)^0$ with those sequences $\{ \omega_n \}_{n\in {\mathbb N}}$ with $\omega_n\in {\mathcal M}(A_n)$ that have the property that $\omega_{n-1}=R_n(\omega_n)$. A state $\omega$ on the AF-algebra $A$ defines such a sequence by setting $\omega_n(a_n)=\omega(\psi_n(a_n))/\omega(\psi_n(1))$, and conversely every such sequence determines a state by setting $\omega(a)=\omega_n(a_n)$ for $a=\psi_n(a_n)$, which is well defined because of the compatibility $\omega_{n-1}=R_n(\omega_n)$.
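The restriction maps $R_n$ can be illustrated with a small numerical sketch (toy dimensions unrelated to the tree): embed $A_{n-1}=M_2({\mathbb C})$ unitally into $A_n=M_4({\mathbb C})$ with multiplicity two, and check that $R_n$ carries a density matrix on $A_n$ to a density matrix on $A_{n-1}$, which in this case is the partial trace over the multiplicity factor.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_density_matrix(n, rng):
    """A random positive matrix normalized to unit trace."""
    x = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    rho = x @ x.conj().T                 # positive semidefinite
    return rho / np.trace(rho).real      # unit trace

def phi(a):
    """Unital embedding M_2 -> M_4 of multiplicity 2: a |-> 1_2 (x) a."""
    return np.kron(np.eye(2), a)

rho4 = random_density_matrix(4, rng)
omega = lambda b: np.trace(rho4 @ b)     # the state omega on A_n = M_4

# Since phi is unital, omega(phi(1)) = 1, so R_n(omega)(a) = omega(phi(a)).
# The density matrix of R_n(omega) is the partial trace of rho4 over the
# first tensor factor:
rho2 = rho4.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

a = rng.standard_normal((2, 2))
lhs = np.trace(rho2 @ a)                 # R_n(omega)(a) computed via rho2
rhs = omega(phi(a))                      # R_n(omega)(a) by its definition
```

The assertions below confirm that `rho2` is again positive of unit trace and represents the restricted state, a finite-dimensional instance of the compatibility $\omega_{n-1}=R_n(\omega_n)$.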
Thus, in order to obtain a limit of the density matrices $\rho_n$ associated to the boundary of the tensor network on the dual graph of the Bruhat--Tits tree truncated at level $n$, we need to show that they give rise to a sequence of states $\omega_n(a_n)={\rm Tr}(\rho_n a_n)$ for $a_n \in A_n$ that satisfy the compatibility $\omega_{n-1}=R_n(\omega_n)$. In the case we are considering here, the AF-algebra is constructed in the following way, with a Bratteli diagram whose underlying graph is the Bruhat--Tits tree. As in the previous sections, we denote by $\Lambda\in {\mathbb N}$ the cutoff on the Bruhat--Tits tree of ${\mathbb Q}_p$. Thus, at level $\Lambda$, we consider a finite tree with $(p+1) p^{\Lambda -1}$ leaves. As before, we assume the choice of a fixed planar embedding of the Bruhat--Tits tree. Let $A,B$ be two complementary regions in the boundary of the finite tree at level $\Lambda$, determined by the choice of two boundary points $x,y$. We associate to the tree, the level $\Lambda$, and the choice of the regions $A$ and $B=A^c$, a finite dimensional algebra of the form $A_\Lambda = M_{r^\sigma}({\mathbb C})^{\oplus r^{C_{AB}}}$, a direct sum of $r^{C_{AB}}$ copies of the complex algebra of $r^\sigma \times r^\sigma$ matrices, where both $C_{AB}$ and $\sigma$ depend on $\Lambda$ and are defined as in the previous sections. The explicit expression for $\sigma=\sigma(\Lambda)$ is obtained through the vanishing condition of \eqref{TraceCond}. We write $\Delta\sigma(\Lambda)=\sigma(\Lambda+1)-\sigma(\Lambda)$. The explicit expression for $\Delta\sigma(\Lambda)$ can also be computed directly from \eqref{TraceCond} and from table~\ref{tb:vertices}. The quantity $C_{AB}$, which measures the normalized geodesic length in the cutoff tree connecting $x$ to $y$, changes by $2$ when both points are pushed one step forward towards the boundary of the Bruhat--Tits tree when the cutoff $\Lambda$ is increased to $\Lambda +1$. 
The embedding $\varphi_{\Lambda+1}: A_\Lambda \hookrightarrow A_{\Lambda+1}$ is obtained by mapping each $r^{\sigma(\Lambda)} \times r^{\sigma(\Lambda)}$ block of $A_\Lambda$ into an $r^{\sigma(\Lambda+1)} \times r^{\sigma(\Lambda+1)}$ block by repeating the same block $r^{\Delta\sigma(\Lambda)}$ times, and then repeating the resulting configuration of $r^{C_{AB}(\Lambda)}$ blocks of size $r^{\sigma(\Lambda+1)} \times r^{\sigma(\Lambda+1)}$ a total of $r^2$ times. This gives a matrix consisting of $r^{C_{AB}(\Lambda+1)}$ blocks of size $r^{\sigma(\Lambda+1)} \times r^{\sigma(\Lambda+1)}$, which is an element of $A_{\Lambda+1}$. The map $\varphi_{\Lambda+1}: A_\Lambda \hookrightarrow A_{\Lambda+1}$ constructed in this way is unital, hence the resulting AF-algebra $A=\varinjlim_\Lambda A_\Lambda$ is a UHF-algebra. Thus, to show that the density matrices $\rho_\Lambda$ of the form specified in \eqref{RhoAInterval} determine a state on the limit AF-algebra, we need to check that they satisfy the compatibility condition $\omega_\Lambda=R_{\Lambda+1}(\omega_{\Lambda+1})$ where $\omega_\Lambda(a)={\rm Tr}(\rho_\Lambda a)$ is the state on the algebra $A_\Lambda$ determined by the density matrix $\rho_\Lambda$. This condition means that, for all $a\in A_\Lambda$, ${\rm Tr}(\rho_\Lambda \, a) = {\rm Tr}(\rho_{\Lambda+1}\, \varphi_{\Lambda+1}(a))$. The density matrix $\rho_\Lambda$ consists of $r^{C_{AB}(\Lambda)}$ blocks of size $r^{\sigma(\Lambda)}\times r^{\sigma(\Lambda)}$ where all the entries in each of these blocks are equal to $1$, with an overall normalization factor equal to $r^{-(\sigma(\Lambda)+C_{AB}(\Lambda))}$ that makes ${\rm Tr}(\rho_\Lambda)=1$ (see \eqref{RhoAInterval}). An element $a\in A_\Lambda$ is a matrix of the same size that also consists of $r^{C_{AB}(\Lambda)}$ blocks of size $r^{\sigma(\Lambda)}\times r^{\sigma(\Lambda)}$, with each block given by an arbitrary matrix in $M_{r^{\sigma(\Lambda)}}({\mathbb C})$.
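The compatibility condition just stated can be verified numerically in a toy instance. In the sketch below the parameter values $r=2$, $\sigma(\Lambda)=1$, $\Delta\sigma(\Lambda)=1$, $C_{AB}(\Lambda)=2$ are hypothetical placeholders rather than values derived from \eqref{TraceCond}; the code builds $\rho_\Lambda$, $\rho_{\Lambda+1}$, and $\varphi_{\Lambda+1}$ as described and checks ${\rm Tr}(\rho_\Lambda\, a)={\rm Tr}(\rho_{\Lambda+1}\,\varphi_{\Lambda+1}(a))$.

```python
import numpy as np

# Hypothetical toy parameters: r = 2, sigma(L) = 1, Delta sigma = 1,
# C_AB(L) = 2, so sigma(L+1) = 2 and C_AB(L+1) = C_AB(L) + 2 = 4.
r, sigma, dsigma, C = 2, 1, 1, 2
s, S = r**sigma, r**(sigma + dsigma)   # block sizes at levels L and L+1
nb, NB = r**C, r**(C + 2)              # numbers of blocks at the two levels

def block_diag(blocks):
    """Block-diagonal matrix built from a list of square blocks."""
    n = sum(b.shape[0] for b in blocks)
    out = np.zeros((n, n))
    i = 0
    for b in blocks:
        k = b.shape[0]
        out[i:i + k, i:i + k] = b
        i += k
    return out

def rho(num_blocks, size):
    """Density matrix: all-ones blocks, normalized to unit trace."""
    return block_diag([np.ones((size, size))] * num_blocks) / (num_blocks * size)

def phi(a_blocks):
    """Embedding: repeat each block r**dsigma times inside an enlarged
    block, then repeat the whole configuration r**2 times."""
    big = block_diag([np.kron(np.eye(r**dsigma), b) for b in a_blocks])
    return block_diag([big] * r**2)

rng = np.random.default_rng(1)
a_blocks = [rng.standard_normal((s, s)) for _ in range(nb)]
a = block_diag(a_blocks)

lhs = np.trace(rho(nb, s) @ a)              # Tr(rho_L a)
rhs = np.trace(rho(NB, S) @ phi(a_blocks))  # Tr(rho_{L+1} phi(a))
```

The equality of the two traces reflects the cancellation of the normalization factor $r^{-(\sigma+C_{AB})}$ against the $r^{\Delta\sigma}\cdot r^2$ repetitions introduced by the embedding, exactly as in the displayed computation that follows.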
Thus, the evaluation of ${\rm Tr}(\rho_\Lambda \, a)$ just yields the sum of the entries of $a$ normalized by the factor $r^{-(\sigma(\Lambda)+C_{AB}(\Lambda))}$. Under the map $\varphi_{\Lambda+1}: A_\Lambda \hookrightarrow A_{\Lambda+1}$ the matrix $a$ is mapped to $r^{\Delta\sigma(\Lambda)}$ copies of each block and $r^2$ copies of the resulting matrix. Since all the nonzero entries of this resulting matrix $\varphi_{\Lambda+1}(a)$ fall inside one of the blocks where all the entries of the density matrix $\rho_{\Lambda+1}$ are equal to $1$, the evaluation of ${\rm Tr}(\rho_{\Lambda+1}\, \varphi_{\Lambda+1}(a))$ gives the sum of the entries of $a$ repeated as many times as each block of $a$ is repeated in $\varphi_{\Lambda+1}(a)$, normalized by $r^{-(\sigma(\Lambda+1)+ C_{AB}(\Lambda+1))}$. This gives \eqn{}{ {\rm Tr}(\rho_{\Lambda+1}\, \varphi_{\Lambda+1}(a)) &= r^{-(\sigma(\Lambda+1)+ C_{AB}(\Lambda+1))} \cdot (\sum_{ij} a_{ij}) \cdot r^{\Delta\sigma(\Lambda)} \cdot r^2 \cr &= r^{-(\sigma(\Lambda)+C_{AB}(\Lambda))} \cdot (\sum_{ij} a_{ij}) \cr &= {\rm Tr}(\rho_\Lambda \, a) \,, } hence the compatibility condition is satisfied and the density matrices $\rho_\Lambda$ determine a state on the UHF-algebra $A=\varinjlim_\Lambda A_\Lambda$. \section{Outlook} \label{DISCUSSION} The study of holography over nonarchimedean fields such as the $p$-adics is still a very young area, and there is much more to be learned both from the study of models in the continuum and from the relation to tensor network constructions such as we have pursued here. In these paragraphs, we summarize a few questions and directions that seem worthy of further investigation. One lesson of our computations is that, in this $p$-adic setting, it is more natural to think of the entropy as a function of boundary points and configurations of points, rather than as a function of boundary regions (intervals). 
This is in spite of the fact that our computations always rely on a choice of region in the dual tensor network. Thus, our results are consistent with an interpretation in a continuum $p$-adic field theory, for example in terms of correlation functions of twist operators (as used in real two-dimensional CFT computations by~\cite{Holzhey,Calabrese:2004eu}). This is also consistent with the physical intuition that the main contributions to entanglement entropy should arise from UV modes localized near the entangling surface. In our scenario, as in CFT$_2$, the entangling surfaces are just points, and in particular live at the boundary of the Bruhat--Tits tree itself, rather than being associated to the dual tensor network. It would be interesting to find a calculational framework depending only on the positions of entangling surfaces that would work in parallel fashion in real and $p$-adic field theories. One can interpret our results as giving predictions for entanglement entropies in certain continuum $p$-adic field theories, which we expect to be valid up to certain overall theory-dependent factors (such as the overall normalization). For pure states, these predictions include the connected-interval (two-point) entropy in~\eno{RTgenus0} and~\eno{RTgenus0circle}, the disconnected-interval (four-point) entropy~\eno{Sdisconnect}, and the mutual information~\eno{MIbdyGen}; when considering the thermal state dual to a $p$-adic BTZ black hole, we give the form of the connected-interval result in~\eno{lengthBTZ0} and~\eno{lengthBTZ}. Furthermore, our proofs of entropy inequalities---such as subadditivity and strong subadditivity---are evidence in support of such results in continuum $p$-adic field theory. Extending these results to holographic codes~\cite{Pastawski:2015qua} which include bulk logical inputs would be a natural next step.
It would also be interesting to investigate the recently conjectured duality between entanglement of purification and entanglement wedge cross-section~\cite{Terhal,Takayanagi:2017knl,Nguyen:2017yqw} and its extensions~\cite{Bao:2017nhh,Espindola:2018ozt,Bao:2018gck,Umemoto:2018jpc} in this setup, as well as other measures of entanglement for mixed states, such as entanglement negativity~\cite{Rangamani:2014ywa} and the conjectured bulk interpretation (see e.g.~\cite{Chaturvedi:2016rcn,Chaturvedi:2016rft,Jain:2017uhe,Kudler-Flam:2018qjo}). The simplifying features of the tensor networks studied here provide an effective computational framework to explore such questions. Along these lines, we have shown that many aspects of the bulk $p$-adic geometries closely parallel the situation in real AdS/CFT. Even so, it is comparatively simpler to work with the discrete geometries, and this specific network provided a model in which we could efficiently compute many holographic entropy quantities. One might hope this trend will continue, and we expect that more complicated holographic quantities will be computationally easier to study in the $p$-adic setting. A major goal of this program is to reconstruct bulk quantities in smooth AdS from knowledge of the corresponding $p$-adic quantities for all $p$. We hope to return to the reconstruction of real AdS quantities from $p$-adic ones in future work. As mentioned above, it would furthermore be interesting to study the possibility of gluing together the Hamiltonians of section~\ref{CRSS} to give a general semiclassical construction (over finite fields) of spin systems whose vacua are constructed from networks of perfect tensors.
One could imagine that such a construction could produce either a spin system, with a Hamiltonian possibly of commuting-projector type, or a system described by a path integral over discrete ($\FF_p$-valued) classical degrees of freedom; either version of the construction would be interesting, and would lead to a unification of the tensor network perspective with a full-fledged quantum system evolving dynamically in time (see e.g.~\cite{Osborne:2017woa} for a different take on this). One could also examine whether our setup allows insights into the connection between holographic correlators and entropy measures. We note that, as is also familiar from the tensor network literature, the tensor network dual to the Bruhat--Tits tree does not account for sub-AdS effects. However, we showed that these tensor networks can be embedded in the Drinfeld $p$-adic plane. Although the geometry of the Drinfeld plane did not play a role in our entropy computations, it may become relevant in the investigation of sub-AdS holography (see~\cite{Bao:2018pvs} for another perspective on this). Finally, generalizations of the BTZ black hole given by higher genus Mumford curves (quotients by higher rank Schottky groups) and higher dimensional models based on higher rank buildings may also exhibit more intricate relations between entanglement entropy on the boundary $p$-adic varieties and the geometry of the bulk regions, and may help identify a covariant generalization of the RT formula in the context of $p$-adic theories. We look forward to returning to many of these questions in future work. \section*{Acknowledgments} I.A.S.\ thanks D.~Aasen, J.~Keating, and J.~Walcher for conversations, and the Kavli Institute for Theoretical Physics in Santa Barbara for hospitality as this manuscript was being completed; he also gratefully acknowledges partial support by the Deutsche Forschungsgemeinschaft, within the framework of the Exzellenzinitiative an der Universit\"at Heidelberg.
M.H.\ and S.P.\ thank Perimeter Institute for their kind hospitality while this work was in its early stages. The work of M.H.\ and S.P.\ was supported in part by Perimeter Institute for Theoretical Physics. M.H.\ would like to thank S.S.~Gubser and Princeton University for their hospitality while this work was being completed, and work done at Princeton was supported in part by the Department of Energy under Grant No.~DE-FG02-91ER40671, and by the Simons Foundation, Grant 511167 (SSG). M.H.\ is also partially supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics, under Award Number DE-SC0011632. M.M.\ is partially supported by NSF grant DMS-1707882, by NSERC Discovery Grant RGPIN-2018-04937 and Accelerator Supplement grant RGPAS-2018-522593, and by the Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Research, Innovation and Science. Research at the Kavli Institute is supported in part by the National Science Foundation under Grant No.~PHY-1748958.
\section{Introduction} It is well known that each factor of a divisible module over an integral domain is divisible. By \cite[Proposition IX.3.4]{FuSa01} an integral domain is {\bf Pr\"ufer} (each ideal is flat) if and only if each divisible module is FP-injective. So, over any Pr\"ufer domain each factor module of an FP-injective module is FP-injective too. More generally, a ring $R$ is left {\bf hereditary} (each left ideal is projective) if and only if (by \cite[Proposition I.6.2]{CaEi56}) each factor of any injective left $R$-module is injective, and a ring $R$ is left {\bf semihereditary} (each finitely generated left ideal is projective) if and only if (by \cite[Theorem 2]{Meg70}) each factor of any FP-injective left $R$-module is FP-injective. By \cite[Th\'eor\`eme 4]{Cou75} a commutative ring $R$ has {\bf global weak dimension} $\leq 1$ (each ideal is flat) if and only if each finitely cogenerated factor of any finitely cogenerated injective module is FP-injective, and in this case, by using \cite[Th\'eor\`emes 3 et 4]{Cou75} it is possible to show that each factor of any FP-injective module modulo a submodule of finite Goldie dimension is FP-injective. In \cite[Theorem 2.3]{Fac82} there is a characterization of commutative rings for which each factor of any finitely cogenerated injective module is injective. On the other hand, by using \cite[Theorem 3.2]{Ste70} it is not difficult to show that a ring $R$ is left {\bf coherent} (each finitely generated left ideal is finitely presented) if and only if each factor of any FP-injective left $R$-module modulo a pure submodule is FP-injective (each direct limit of a system of FP-injective modules is a factor of the direct sum of all FP-injective modules of the system modulo a pure submodule). \bigskip In this paper the following two questions are studied: \begin{itemize} \item What are the rings $R$ for which $E/U$ is FP-injective for any FP-injective left module $E$ and any submodule $U$ of finite Goldie dimension?
\item What are the rings $R$ for which any left module of finite Goldie dimension is of injective dimension at most one? \end{itemize} A complete answer to these questions is given, but only when $R$ is commutative. However, a result in the general case is given by extending Problem 33 posed by Fuchs and Salce in \cite[p. 306]{FuSa01} and solved by Laradji in \cite{Lar05}. Then, we examine the following question: \begin{itemize} \item What are the rings $R$ for which $E/U$ is FP-injective for any FP-injective left module $E$ and any pure submodule $U$ of finite Goldie dimension? \end{itemize} We study this question only in the case where $R$ is a commutative chain ring, and even in this case, it is not easy to obtain interesting results. \bigskip In this paper all rings are associative and commutative (except at the beginning of Section \ref{S:glo}) with unity and all modules are unital. First we give some definitions. An $R$-module $M$ is said to be \textbf{uniserial} if its set of submodules is totally ordered by inclusion, and $R$ is a \textbf{chain ring}\footnote{we prefer ``chain ring'' to ``valuation ring'' to avoid confusion with ``Manis valuation ring''.} if it is uniserial as an $R$-module. In the sequel, if $R$ is a chain ring, we denote by $P$ its maximal ideal, by $Z$ its subset of zero-divisors and by $Q(=R_Z)$ its quotient ring. Recall that a chain ring $R$ is said to be {\bf Archimedean} if $P$ is the sole non-zero prime ideal. A module $M$ has \textbf{finite Goldie dimension} if its injective hull is a finite direct sum of indecomposable injective modules. A module $M$ is said to be \textbf{finitely cogenerated} if its injective hull is a finite direct sum of injective hulls of simple modules. The \textbf{f.c. topology} on a module $M$ is the linear topology defined by taking as a basis of neighbourhoods of zero all submodules $G$ for which $M/G$ is finitely cogenerated (see \cite{Vam75}). This topology is always Hausdorff.
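To illustrate some of the notions just introduced with an elementary example: let $p$ be a prime number and $R=\mathbb{Z}/p^3\mathbb{Z}$. The ideals of $R$ form the chain \[0\subset p^2R\subset pR\subset R,\] so $R$ is uniserial as an $R$-module, i.e. $R$ is a chain ring, with $P=Z=pR$; moreover $R$ is Archimedean since $pR$ is its sole prime ideal. Similarly, over $\mathbb{Z}$ the injective hull of $\mathbb{Z}\oplus\mathbb{Z}/p\mathbb{Z}$ is $\mathbb{Q}\oplus\mathbb{Z}(p^{\infty})$, a direct sum of two indecomposable injective modules, so $\mathbb{Z}\oplus\mathbb{Z}/p\mathbb{Z}$ has finite Goldie dimension.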
When $R$ is a chain ring which is not a finitely cogenerated $R$-module, the f.c. topology on $R$ coincides with the $R$-topology, which is defined by taking as a basis of neighbourhoods of zero all non-zero principal ideals. A module $M$ is called \textbf{linearly compact} if any family of cosets having the finite intersection property has nonempty intersection. A ring $R$ is said to be \textbf{(almost) maximal} if $R/A$ is linearly compact for any (non-zero) proper ideal $A$. An exact sequence \ $0 \rightarrow F \rightarrow E \rightarrow G \rightarrow 0$ \ is \textbf{pure} if it remains exact when tensoring it with any $R$-module. In this case we say that \ $F$ \ is a \textbf{pure} submodule of $E$. We say that an $R$-module $E$ is \textbf{FP-injective} if $\mathrm{Ext}_R^1(F,E)=0,$ for every finitely presented $R$-module $F.$ A ring $R$ is called \textbf{self FP-injective} if it is FP-injective as an $R$-module. \section{Global case} \label{S:glo} \begin{proposition}\label{P:redu} Let $R$ be a ring and $U$ a left $R$-module. Then the following conditions are equivalent: \begin{enumerate}[(1)] \item $E/U$ is FP-injective for each FP-injective left $R$-module $E$ containing $U$; \item $E/U$ is FP-injective for each injective hull $E$ of $U$. \end{enumerate} \end{proposition} \begin{proof} It is obvious that $(1)\Rightarrow (2)$. $(2)\Rightarrow (1)$. First we assume that $E$ is injective. Then $E$ contains a submodule $E'$ which is an injective hull of $U$. Since $E/E'$ is injective, $E'/U$ is FP-injective and the class of FP-injective modules is closed under extensions, $E/U$ is FP-injective too. Now we assume that $E$ is FP-injective. Let $H$ be the injective hull of $E$. Then $E$ is a pure submodule of $H$, whence $E/U$ is a pure submodule of $H/U$, which is FP-injective by the first part of the proof. We conclude that $E/U$ is FP-injective. \end{proof} The following theorem contains a generalization of \cite[Corollary 4]{Lar05}. \begin{theorem} \label{T:semiher} Let $R$ be a ring and $I$ its injective hull as a left $R$-module.
Then the following conditions are equivalent: \begin{enumerate} \item $R$ is left semihereditary; \item each homomorphic image of any FP-injective left module is FP-injective; \item each homomorphic image of $I$ is FP-injective. \end{enumerate} \end{theorem} \begin{proof} By \cite[Theorem 2]{Meg70} $(1)\Leftrightarrow (2)$, and it is obvious that $(2)\Rightarrow (3)$. $(3)\Rightarrow (2)$. Let $M$ be an FP-injective left $R$-module and $K$ a submodule of $M$. To show that $M/K$ is FP-injective we may assume that $M$ is injective by Proposition \ref{P:redu}. There exist a set $\Lambda$ and an epimorphism $g:R^{(\Lambda)}\rightarrow M$. Since $M$ is injective, we can extend $g$ to an epimorphism from $I^{(\Lambda)}$ onto $M$. Hence, it is enough to show that each homomorphic image of $I^{(\Lambda)}$ is FP-injective for any set $\Lambda$. First we assume that $\Lambda$ is a finite set of cardinality $n$. Let $K$ be a submodule of $I^n$ and $p:I^n=I^{n-1}\oplus I\rightarrow I$ the canonical projection. We denote by $K'$ the image of $K$ under $p$. We get the following exact sequence: \[0\rightarrow I^{n-1}/(K\cap I^{n-1})\rightarrow I^n/K\rightarrow I/K'\rightarrow 0.\] So, by induction on $n$ we get that $I^n/K$ is FP-injective. Now, let $(\Lambda_{\gamma})_{\gamma\in\Gamma}$ be the family of finite subsets of $\Lambda$ where $\Gamma$ is an index set. For each $\gamma\in\Gamma$ we put \[I_{\gamma}=\{x=(x_{\lambda})_{\lambda\in\Lambda}\in I^{(\Lambda)}\mid x_{\lambda}=0,\ \forall \lambda\notin\Lambda_{\gamma}\}.\] If $K$ is a submodule of $I^{(\Lambda)}$ then $I^{(\Lambda)}/K$ is the union of the family of submodules $((I_{\gamma}+K)/K)_{\gamma\in\Gamma}$, where $(I_{\gamma}+K)/K\cong I_{\gamma}/(K\cap I_{\gamma})$ for each $\gamma\in\Gamma$. We use \cite[Corollary 2.3]{Ste70} to conclude. \end{proof} Given a ring $R$ and a left $R$-module $M$, we say that $M$ is \textbf{P-injective} if $\mathrm{Ext}_R^1(R/Rr,M)=0$ for any $r\in R$. When $R$ is a domain, $M$ is P-injective if and only if it is divisible.
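As an elementary illustration of the last remark, take $R=\mathbb{Z}$. For any $\mathbb{Z}$-module $M$ and any integer $n\geq 1$ we have \[\mathrm{Ext}_{\mathbb{Z}}^1(\mathbb{Z}/n\mathbb{Z},M)\cong M/nM,\] so $M$ is P-injective if and only if $M=nM$ for every $n\geq 1$, that is, if and only if $M$ is divisible. Thus $\mathbb{Q}$ and $\mathbb{Q}/\mathbb{Z}$ are P-injective $\mathbb{Z}$-modules, while $\mathbb{Z}$ is not.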
We say that $R$ is a left \textbf{PP-ring} if any principal left ideal is projective. The following theorem can be proved in a similar way to the previous one. \begin{theorem} \label{T:PP} Let $R$ be a ring and $I$ its injective hull as a left $R$-module. Then the following conditions are equivalent: \begin{enumerate} \item[{\rm (1)}] $R$ is a left PP-ring; \item[{\rm (2)}] each homomorphic image of any P-injective left module is P-injective; \item[{\rm (3)}] each homomorphic image of $I$ is P-injective. \end{enumerate} \end{theorem} The following is a slight improvement of \cite[Th\'eor\`eme 4]{Cou75}. \begin{theorem}\label{T:gw1} Let $R$ be a commutative ring. The following conditions are equi\-valent: \begin{enumerate} \item $R$ is of global weak dimension $\leq 1$; \item each finitely cogenerated factor of any finitely cogenerated FP-injective $R$-module is FP-injective; \item each finitely cogenerated $R$-module is of FP-injective dimension $\leq 1$; \item each finitely cogenerated factor of any FP-injective $R$-module of finite Goldie dimension is FP-injective; \item each $R$-module of finite Goldie dimension is of FP-injective dimension $\leq 1$. \end{enumerate} \end{theorem} \begin{proof} By \cite[Th\'eor\`eme 4]{Cou75} $(1)\Leftrightarrow (2)$. It is obvious that $(3)\Rightarrow (2)$, $(5)\Rightarrow (4)$ and $(4)\Rightarrow (2)$. $(2)\Rightarrow (3)$. Let $E$ be an injective $R$-module of finite Goldie dimension and $M$ a factor of $E$. By using \cite[Th\'eor\`eme 3]{Cou75}, it is easy to prove that $M$ is a pure submodule of a module $M'$ with $M'=\prod_{\lambda\in\Lambda}M_{\lambda}$, where $\Lambda$ is an index set and $M_{\lambda}$ is a finitely cogenerated factor of $M$ for each $\lambda\in\Lambda$. Then $M_{\lambda}$ is a factor of $E$, whence it is FP-injective by $(2)$, for each $\lambda\in\Lambda$. We successively deduce that $M'$ and $M$ are FP-injective. $(4)\Rightarrow (5)$.
By Proposition \ref{P:redu} we may assume that $E$ is injective of finite Goldie dimension. To conclude we do as in the proof of $(2)\Rightarrow (3)$. $(1)\Rightarrow (4)$. Let $p:E\rightarrow M$ be an epimorphism where $E$ is an injective $R$-module of finite Goldie dimension and $M$ a finitely cogenerated $R$-module. Let $u$ be the inclusion map from $M$ into its injective hull $F$ and $f=u\circ p$. Then $E=E_1\oplus\dots\oplus E_n$ and $F=F_1\oplus\dots\oplus F_q$ where $E_i$ and $F_j$ are indecomposable for $i=1,\dots,n$ and $j=1,\dots,q$. Since the endomorphism ring of any indecomposable injective module is local, there exist maximal ideals $P_1,\dots,P_n$ and $L_1,\dots,L_q$ of $R$ such that $E_i$ is a module over $R_{P_i}$ for $i=1,\dots,n$ and $F_j$ is a module over $R_{L_j}$ for $j=1,\dots,q$. Let $S=R\setminus(P_1\cup\dots\cup P_n\cup L_1\cup\dots\cup L_q)$. Then $E$ and $F$ are modules over $S^{-1}R$, and $f$ is an $S^{-1}R$-homomorphism. It follows that $M$ is also a module over $S^{-1}R$. Since $S^{-1}R$ is semilocal, $(1)$ implies that it is semihereditary. We conclude that $M$ is FP-injective. \end{proof} Recall that a commutative ring $R$ is said to be {\bf arithmetical} if $R_P$ is a chain ring for each maximal ideal $P$ of $R$. It is well known that a reduced ring is arithmetical if and only if it is of global weak dimension $\leq 1$. \begin{theorem}\label{T:gw2} Let $R$ be a commutative ring. The following conditions are equi\-valent: \begin{enumerate} \item $R$ is of global weak dimension $\leq 1$ and $R/L$ is an almost maximal Pr\"ufer domain for every minimal prime ideal $L$ of $R$; \item $R$ is of global weak dimension $\leq 1$ and each factor of $R_L$ is injective for each minimal prime ideal $L$ of $R$; \item each $R$-module of finite Goldie dimension is of injective dimension $\leq 1$. \end{enumerate} \end{theorem} \begin{proof} Assume that $R$ is a reduced arithmetical ring.
If $L$ is a minimal prime ideal of $R$, then $R/L$ is a submodule of $R_L$ and consequently it is a flat $R$-module. So, each injective $R/L$-module is injective over $R$ too. By \cite[Proposition IX.4.5]{FuSa01} we conclude that $(1)\Leftrightarrow (2)$. $(3)\Rightarrow (2)$. By Theorem \ref{T:gw1} $R$ is a reduced arithmetical ring. Let $L$ be a minimal prime ideal. Then $R_L$ is a field and so it is an injective module of Goldie dimension one. $(2)\Rightarrow (3)$. Let $I$ be an indecomposable injective module, $P$ the prime ideal of $R$ which is the inverse image of the maximal ideal of $\mathrm{End}_R(I)$ by the natural map $R\rightarrow\mathrm{End}_R(I)$ and $L$ the minimal prime ideal of $R$ contained in $P$. Since $I$ is a module over $R_P$, it is annihilated by $L$, and since $R_P$ is almost maximal it is a factor of $R_L$. Now let $U$ be a module of finite Goldie dimension and $E$ its injective hull. Then $E=I_1\oplus\dots\oplus I_n$ where $I_i$ is indecomposable for $i=1,\dots,n$. Let $L_1,\dots,L_p$ be the minimal prime ideals of $R$ such that, for each $i=1,\dots,n$, there exists $j$, $1\leq j\leq p$, such that $I_i$ is annihilated by $L_j$. Then $E$ is annihilated by $L=L_1\cap\dots\cap L_p$. Since $R$ is arithmetical, the minimal prime ideals $L_1,\dots, L_p$ are comaximal. Then $E=E_1\oplus \dots\oplus E_p$, $U=U_1\oplus\dots\oplus U_p$ where $E_k=E/L_kE$, $U_k=U/L_kU$ for $k=1,\dots,p$. So, $E/U\cong E_1/U_1\oplus\dots\oplus E_p/U_p$. From above, for each $k=1,\dots,p$, we deduce that $E_k/U_k$ is a factor of $R_{L_k}^{m_k}$ for some positive integer $m_k$. By induction on $m_k$ we show that $E_k/U_k$ is injective. Hence $E/U$ is injective. \end{proof} \begin{example} Let $R$ be the B\'ezout domain due to Heinzer and Ohm constructed in \cite[Example III.5.5]{FuSa01}. Then the injective dimension of any finitely cogenerated $R$-module is at most one, but $R$ does not satisfy the equivalent conditions of Theorem \ref{T:gw2}.
\end{example} \begin{proof} Since $R_P$ is a Noetherian valuation domain, it is almost maximal and each non-zero prime ideal is contained in a unique maximal ideal. So, by \cite[Theorem 2.3]{Fac82} each finitely cogenerated $R$-module is of injective dimension $\leq 1$. But some elements of $R$ are contained in infinitely many maximal ideals. So, by \cite[Theorem IV.3.9]{FuSa01} $R$ is not an almost maximal domain. \end{proof} \begin{proposition}\label{P:locoh} Let $R$ be a locally coherent commutative ring. For any FP-injective $R$-module $E$ and any pure submodule $U$ of finite Goldie dimension, $E/U$ is FP-injective. \end{proposition} \begin{proof} By Proposition \ref{P:redu} we may assume that $E$ is injective of finite Goldie dimension. If $I$ is an indecomposable injective module then $\mathrm{End}_R(I)$ is a local ring. Let $P$ be the prime ideal which is the inverse image of the maximal ideal of $\mathrm{End}_R(I)$ by the canonical map $R\rightarrow\mathrm{End}_R(I)$. It follows that $I$ is a module over $R_P$. Now let $E=\oplus_{k=1}^nI_k$ be an $R$-module where $I_k$ is indecomposable and injective for $k=1,\dots,n$. Let $P_k$ be the prime ideal defined as above by $I_k$ for $k=1,\dots,n$ and let $S=R\setminus (\cup_{1\leq k\leq n}P_k)$. Then $E$ and $U$ are modules over the semilocal ring $S^{-1}R$. Since $R$ is locally coherent, $S^{-1}R$ is coherent. It follows that $E/U$ is FP-injective. \end{proof} \section{Chain ring case: preliminaries} Some preliminaries are needed to prove our main results: Proposition \ref{P:main} and Theorems \ref{T:main} and \ref{T:mainCoh}. \begin{lemma} \label{L:crucial} Let $R$ be a chain ring, $E$ an FP-injective module, $U$ a pure essential submodule of $E$, $x\in E\setminus U$ and $a\in R$ such that $(0:a)\subseteq (U:x)$. Then: \begin{enumerate} \item if $(0:a)\subset (U:x)$ then $x\in U+aE$; \item if $(0:a)=(U:x)$ then $x\notin U+aE$. \end{enumerate} \end{lemma} \begin{proof} $(1)$. Let $b\in (U:x)\setminus (0:a)$.
Then $bx\in U$. Since $U$ is a pure submodule there exists $u\in U$ such that $bx=bu$. We get that $(0:a)\subset Rb\subseteq (0:x-u)$. The FP-injectivity of $E$ implies that there exists $y\in E$ such that $x-u=ay$. $(2)$. By way of contradiction suppose there exist $u\in U$ and $y\in E$ such that $x=u+ay$. Then we get that $(U:x)=(U:x-u)=(0:x-u)$. So, $U\cap R(x-u)=0$. This contradicts that $E$ is an essential extension of $U$. \end{proof} Let $M$ be a non-zero module over a ring $R$. As in \cite[p.338]{FuSa01} we set: \[M_{\sharp}=\{s\in R\mid \exists 0\ne x\in M\ \mathrm{such\ that}\ sx=0\}\quad\mathrm{and}\quad M^{\sharp}=\{s\in R\mid sM\subset M\}.\] Then $R\setminus M_{\sharp}$ and $R\setminus M^{\sharp}$ are multiplicative subsets of $R$. If $M$ is a module over a chain ring $R$ then $M_{\sharp}$ and $M^{\sharp}$ are prime ideals and they are called the {\bf bottom} and the {\bf top prime ideal}, respectively, associated with $M$. When $I$ is a non-zero proper ideal, it is easy to check that \[I^{\sharp}=\{s\in R\mid I\subset (I:s)\}.\] So, $I^{\sharp}$ is the inverse image of the set of zero-divisors of $R/I$ by the canonical epimorphism $R\rightarrow R/I$. If we extend this definition to the ideal $0$ we have $0^{\sharp}=Z$. A proper ideal $I$ of a chain ring $R$ is said to be \textbf{Archimedean} if $I^{\sharp}=P$. When $R$ is Archimedean, each non-zero ideal of $R$ is Archimedean. \begin{remark}\label{R:P=Z} \textnormal{If $P=Z$ then by \cite[Lemma 3]{Gil71} and \cite[Proposition 1.3]{KlLe69} we have $(0:(0:I))=I$ for each ideal $I$ which is not of the form $Pt$ for some $t\in R$. In this case $R$ is self FP-injective, and the converse holds. So, if $A$ is a proper Archimedean ideal then $R/A$ is self FP-injective and it follows that $(A:(A:I))=I$ for each ideal $I\supseteq A$ which is not of the form $Pt$ for some $t\in R$.} \end{remark} \begin{lemma} \label{L:topbot} Let $G$ be an FP-injective module over a chain ring $R$.
Then $G^{\sharp}\subseteq Z\cap G_{\sharp}$ and $G$ is a module over $R_{G_{\sharp}}$. \end{lemma} \begin{proof} Let $a\in R\setminus G_{\sharp}$ and $x\in G$. Let $b\in (0:a)$. Then $abx=0$, whence $bx=0$. So, $(0:a)\subseteq (0:x)$. It follows that $x=ay$ for some $y\in G$ since $G$ is FP-injective. Hence $a\notin G^{\sharp}$. If $a\notin Z$ then $0=(0:a)\subseteq (0:x)$ for each $x\in G$. \end{proof} \begin{proposition} \label{P:main} Let $R$ be a chain ring, $E$ an FP-injective $R$-module and $U$ a pure submodule of $E$. Assume that $E_{\sharp}\subset Z$. Then $E/U$ is FP-injective. \end{proposition} \begin{proof} Let $E_{\sharp}=L$. Then $E$ and $U$ are modules over $R_L$. Since $L\subset Z$, by \cite[Theorem 11]{Couch03} $R_L$ is coherent, whence $E/U$ is FP-injective. \end{proof} \begin{remark} Let $R$ be a chain ring. Assume that $P$ is not finitely generated and not faithful. Then, for any indecomposable injective $R$-module $E$ and for any non-zero pure submodule $U$ of $E$, $E/U$ is FP-injective over $R/(0:P)$. \end{remark} \begin{proof} Since $P$ is not finitely generated and not faithful, $R$ is not coherent. Let $R'=R/(0:P)$. Since $(0:P)$ is a non-zero principal ideal, $R'$ is coherent by \cite[Theorem 11]{Couch03}. First we assume that $E\ncong E(R/P)$. By \cite[Corollary 28]{Couch03} $E$ is an $R'$-module and it is easy to check that it is injective over $R'$ too. Hence $E/U$ is FP-injective over $R'$. Now suppose that $E=E(R)\cong E(R/P)$. Then $(0:P)$ is a submodule of $U$ and $E$. So, $E/U$ is the factor of $E/(0:P)$ modulo the pure submodule $U/(0:P)$. By \cite[Proposition 14]{Couch03} $E/(0:P)\cong E(R/Rr)$ for some $0\ne r\in P$. Hence $E/(0:P)$ is injective over $R'$. Again we conclude that $E/U$ is FP-injective over $R'$. \end{proof} The following example shows that $E/U$ is not necessarily FP-injective over $R$.
\begin{example} Let $D$ be a valuation domain whose order group is $\mathbb{R}$, $M$ its maximal ideal, $d$ a non-zero element of $M$ and $R=D/dM$. Assume that $D$ is not almost maximal. Then, for any indecomposable injective $R$-module $E$ and for any non-zero pure proper submodule $U$ of $E$, $E/U$ is not FP-injective over $R$. In particular, if $E=E(R)$, then $E/R$ is not FP-injective over $R$. \end{example} \begin{proof} If $I$ is a non-zero proper ideal of $R$ then either $I$ is principal or $I=Pa$ for some $a\in R$. On the other hand $P$ is not finitely generated and not faithful. Let $x\in E\setminus U$. Then $(U:x)$ is not finitely generated. So, $(U:x)=Pb$ for some $b\in R$ and there exists $a\in P$ such that $Pb=(0:a)$. By Lemma \ref{L:crucial} $E/U$ is not FP-injective over $R$. Since $D$ is not almost maximal, $R$ is a proper pure submodule of its injective hull. \end{proof} \begin{lemma} \label{L:arch} Let $R$ be a chain ring. Then: \begin{enumerate} \item $sI$ is Archimedean for each non-zero Archimedean ideal $I$ and for each $s\in P$ for which $sI\ne 0$; \item $(A:I)^{\sharp}=I^{\sharp}$ for each Archimedean ideal $A$ and for each ideal $I$ such that $A\subseteq I$. \end{enumerate} \end{lemma} \begin{proof} $(1)$. Let $t\in R$ be such that $tsI=sI$. If $b\in I$ then there exists $c\in I$ such that $sb=tsc$. If $sb\ne 0$, then by \cite[Lemma 5]{Couch03} $Rb=Rtc$. If $sb=0$, then $b\in (0:s)\subset tI$ since $tsI\ne 0$. So, $tI=I$. It follows that $t$ is invertible. $(2)$. Let $J=I^{\sharp}$. First suppose $J\subset P$. Let $s\in R\setminus J$. Then $sI=I$. It follows that $(A:I)\subset Rs$. Let $r\in (A:I)$. Then $r=st$ for some $t\in R$. We have $tI=tsI=rI\subseteq A$. So, $t\in (A:I)$, $(A:I)=s(A:I)$ and $(A:I)^{\sharp}\subseteq J$. But since $A$ is Archimedean we have $(A:(A:I))=I$ (Remark \ref{R:P=Z}). It follows that $(A:I)^{\sharp}=J$. Now assume that $J=P$. If $P\subseteq (A:I)$ then $(A:I)^{\sharp}=P$. Now suppose that $(A:I)\subset P$.
Let $s\in P\setminus (A:I)$. Therefore $((A:I):s)=(A:sI)\supset (A:I)$ since $A$ is Archimedean. Hence $(A:I)^{\sharp}=J=P$. \end{proof} \begin{lemma} \label{L:ideal} Let $R$ be a chain ring, $I$ a non-zero Archimedean ideal of $R$ which is neither principal nor of the form $Pt$ for some $t\in R$, $0\ne a\in I$ and $A=I(0:a)$. Then: \begin{enumerate} \item If $(0:a)\subset (A:I)$ then there exists $c\in R$ such that $(A:I)=Rc$ and $(0:a)=Pc$; \item $A$ is Archimedean if $Z=P$. \end{enumerate} \end{lemma} \begin{proof} $(1)$. Let $c\in (A:I)\setminus (0:a)$. It is easy to see that $A=cI$. Let $d\in (A:I)$ such that $c=td$ for some $t\in R$. Then $A=cI=tdI=dI$. From Lemma \ref{L:arch} we deduce that $t$ is invertible. So, $(A:I)=Rc$. By way of contradiction suppose there exists $d\in R$ such that $(0:a)\subset Rd\subset Rc$. As above we get that $(A:I)=Rd$. This contradicts that $(A:I)=Rc$. Hence $(0:a)=Pc$. $(2)$. First we show that $A\subset c(0:a)$ if $c\in P\setminus I$. By way of contradiction suppose that $A=c(0:a)$. Since $I\ne Pt$ for each $t\in R$, by \cite[Lemma 29]{Couch03} there exists $d\in P$ such that $I\subset dcR$. We have $dc(0:a)=c(0:a)$. From $a\in I$ we deduce that $a=rdc$ for some $r\in P$. It follows that $rc(0:a)=rdc(0:a)=0$. But $rc\notin Ra$ implies that $rc(0:a)\ne 0$, whence a contradiction. Let $s\in P\setminus I$. Since $I$ is Archimedean there exists $t\in (I:s)\setminus I$. We have $A\subset t(0:a)\subseteq (A:s)$. Hence $A$ is Archimedean. \end{proof} \begin{lemma} \label{L:ann} Let $R$ be a chain ring such that $0\ne Z\subset P$ and $A$ a non-zero Archimedean ideal. \begin{enumerate} \item if $A\subset rZ$ for some $r\in R$ then $(A:rZ)=Qs$ for some $s\in Z$; \item if $I$ is an ideal satisfying $I^{\sharp}=Z$, $A\subset I$ and $I\ne rZ$ for any $r\in R$, then $(A:I)\ne bQ$ for any $b\in Z$. \end{enumerate} \end{lemma} \begin{proof} $(1)$. Let $J=(A:rZ)$. By Remark \ref{R:P=Z} $(A:J)=rZ$. 
By Lemma \ref{L:arch} $J^{\sharp}=Z$, so $J$ is an ideal of $Q$. By way of contradiction suppose that $J$ is not finitely generated over $Q$. Then $J=ZJ$ and $rJ=rZJ\subseteq A$. Whence $rR\subseteq (A:J)=rZ$. This is false. Hence $J=Qs$ for some $s\in Z$. $(2)$. By way of contradiction suppose that $(A:I)=bQ$ for some $b\in Z$. It follows that $bI\subset A$. So, $(bI:I)\subseteq (A:I)$. It is obvious that $b\in (bI:I)$ and since $I$ is a $Q$-module we have $(bI:I)=bQ$. Since $I\ne cZ$ for each $c\in Z$ we have $bI=\cap_{r\notin bI}rQ$ by \cite[Lemma 29]{Couch03}. Let $c\in A\setminus bI$. There exists $t\in Z$ such that $tc\notin bI$. We have $(Rtc:I)=(Rc:I)$. It is obvious that $(Rc:I)\subseteq (Rtc:tI)$. Let $r\in (Rtc:tI)$. For each $s\in I$, $ts=tcv$ for some $v\in R$. If $ts\ne 0$ then $Rs=Rcv$ by \cite[Lemma 5]{Couch03}. If $ts=0$ then $s\in (0:t)\subset Rc$ because $tc\ne 0$. Hence $(Rc:I)=(Rtc:tI)$. But $tI\ne I$ because $t\in Z=I^{\sharp}$. Since $Rtc$ is Archimedean we get that $(Rtc:I)\subset (Rtc:tI)$, whence a contradiction. \end{proof} Let $\widehat{R}$ be the pure-injective hull of $R$ and $x\in\widehat{R}\setminus R$. As in \cite{SaZa85} the breadth ideal $\mathrm{B}(x)$ of $x$ is defined as follows: $\mathrm{B}(x)=\{r\in R\mid x\notin R+r\widehat{R}\}$. \begin{proposition} \label{P:breadth} Let $R$ be a chain ring and $I$ a proper ideal of $R$. Then: \begin{enumerate} \item \cite[Proposition 20]{Couc06} $R/I$ is not complete in its f.c. topology if and only if there exists $x\in\widehat{R}\setminus R$ such that $I =\mathrm{B}(x)$; \item \cite[Proposition 3]{Cou01} if $Z=P$ and $I =\mathrm{B}(x)$ for some $x\in\widehat{R}\setminus R$ then: \begin{enumerate} \item $I = (0 : (R : x))$; \item $(R : x) = P(0 : I)$ and $(R : x)$ is not finitely generated. \end{enumerate} \end{enumerate} \end{proposition} We say that a module $M$ is \textbf{polyserial} if it has a pure-composition series \[0=M_0\subset M_1\subset\dots\subset M_n=M,\] i.e.
$M_k$ is a pure submodule of $M$ and $M_k/M_{k-1}$ is a uniserial module for each $k=1,\dots, n$. The \textbf{Malcev rank} of a module $M$ is defined as the cardinal number \[\mathrm{Mr}\ M=\mathrm{sup}\{\mathrm{gen}\ X\mid X\ \mathrm{finitely\ generated\ submodule\ of}\ M\}.\] \begin{proposition}\label{P:unipoly} Let $U$ be a submodule of an FP-injective module $E$ over a chain ring $R$. Then the following conditions are equivalent: \begin{enumerate} \item $E/U$ is FP-injective if $U$ is uniserial; \item $E/U$ is FP-injective if $U$ is polyserial. \end{enumerate} \end{proposition} \begin{proof} It is obvious that only $(1)\Rightarrow (2)$ needs a proof. By \cite[Proposition 13]{Couc06} $\mathrm{Mr}\ U$ is finite and equals the length of any pure-composition series of $U$. Let $n=\mathrm{Mr}\ U$. Let $U_1$ be a pure uniserial submodule of $U$. Then $U/U_1$ is a pure submodule of $E/U_1$ which is FP-injective. On the other hand $U/U_1$ is polyserial and $\mathrm{Mr}\ U/U_1=n-1$. We conclude by induction on $n$. \end{proof} \section{Chain ring case: main results} \begin{lemma} \label{L:uniserial} Let $R$ be a chain ring, $E$ an indecomposable injective $R$-module and $U$ a pure uniserial submodule of $E$. Then, for each $0\ne e\in E$ there exists a pure uniserial submodule $V$ of $E$ containing $e$. \end{lemma} \begin{proof} There exists $r\in R$ such that $0\ne re\in U$. The purity of $U$ implies there exists $u\in U$ such that $re=ru$. By \cite[Lemma 2]{Couch03} $(0:e)=(0:u)$. Let $\alpha: Re\rightarrow U$ be the homomorphism defined by $\alpha(e)=u$. It is easy to check that $\alpha$ is a monomorphism. So, there exists a homomorphism $\beta: U\rightarrow E$ such that $\beta(\alpha(e))=e$. Then $\beta$ is injective since $\alpha$ is an essential monomorphism. Let $V=\beta(U)$. Thus, using the fact that a submodule of an injective module is pure if and only if it is an FP-injective module, we get that $V$ is a pure submodule of $E$.
\end{proof} \begin{theorem} \label{T:main} Let $R$ be a chain ring. Assume that $Q$ is not coherent. Consider the following two conditions: \begin{enumerate} \item $R/I$ is complete in its f.c. topology for each proper ideal $I$, $I\ne rZ$ for any $r\in R$, satisfying $I^{\sharp}=Z$; \item for each FP-injective $R$-module $E$ and for each pure polyserial submodule $U$ of $E$, $E/U$ is FP-injective. \end{enumerate} Then $(1)\Rightarrow (2)$ and the converse holds if each indecompo\-sable injective module $E$ for which $E_{\sharp}=Z$ contains a pure uniserial submodule. \end{theorem} \begin{proof} $(1)\Rightarrow (2)$. We may assume that $U$ is uniserial by Proposition \ref{P:unipoly} and that $E$ is injective and indecomposable by Proposition \ref{P:redu}. Let $E_{\sharp}=L$. By Proposition \ref{P:main} we may suppose that $Z\subseteq L$. After possibly replacing $R$ with $R_L$, we may assume that $L=P$. Let $x\in E\setminus U$ and $a\in R$ such that $(0:a)\subseteq (U:x)$. Let $A=(0:x)$ and $R'=R/A$. Let $E'=\{y\in E\mid A\subseteq (0:y)\}$. Let $c\in (U:x)\setminus A$. Since $U$ is a pure submodule there exists $e\in U$ such that $cx=ce$ and by \cite[Lemma 2]{Couch03} $(0:e)=A$. By \cite[Lemma 26]{Couch03} $A^{\sharp}=E_{\sharp}=P$. Thus $A$ is Archimedean. Let $v\in U\cap E'$ such that $e=tv$ for some $t\in R$. Then $A\subseteq (0:v)=t(0:e)=tA$. So, $tA=A$. We deduce that $t$ is invertible and $R'\cong Re=E'\cap U$. It follows that $(U:x)=(Re:x)$. We have $\mathrm{B}(x)=I/A$ where either $I^{\sharp}\ne Z$ or $I=rZ$ for some $r\in R$. We deduce that $(Re:x)=P(A:I)$ by Proposition \ref{P:breadth}. By Lemma \ref{L:arch} $(A:I)^{\sharp}=I^{\sharp}$. If $(A:I)$ is not principal then $(Re:x)=(A:I)$. If $(A:I)=Rr$ for some $r\in P$, then $(Re:x)=Pr$, and in this case $(Re:x)^{\sharp}=P=I^{\sharp}$. In both cases $(Re:x)^{\sharp}=I^{\sharp}$. We deduce that $(U:x)^{\sharp}\ne Z$ if $I^{\sharp}\ne Z$.
If $I=rZ$ for some $r\in R$, then $R\ne Q$ and $Z\ne P$ because $Q/rZ$ is complete and $R/rZ$ is not. By Lemma \ref{L:ann} $(A:I)=Qs$ for some $s\in R$. But, since $R_{\sharp}=Z$, $(0:a)^{\sharp}=Z$ by \cite[Lemma 26]{Couch03}, and by \cite[Theorem 10]{Couch03} $(0:a)$ is not finitely generated over $Q$. Hence $(0:a)\subset (U:x)$. By Lemma \ref{L:crucial} there exist $u\in U$ and $y\in E$ such that $x=ay+u$, so $E/U$ is FP-injective. $(2)\Rightarrow (1)$. By way of contradiction suppose there exists an ideal $I$ of $R$, $I\ne rZ$ for any $r\in R$, such that $I^{\sharp}=Z$ and $R/I$ is not complete in its f.c. topology. Since the natural map $R\rightarrow Q$ is a monomorphism, as in \cite[Proposition 4]{Cou10}, we can prove that $Q/I$ is not complete in its f.c. topology. After possibly replacing $R$ by $Q$, we may assume that $Z=P$. Then, $R$ is not coherent and $I$ is Archimedean. Let $s\in P\setminus I$. So, $I\subset (I:s)\subset P$. If $E$ is the injective hull of $R$, by Proposition \ref{P:breadth} there exists $x\in E\setminus R$ such that $I=\mathrm{B}(x)$. Since $s\notin I$, $x=r+sy$ with $r\in R$ and $y\in E\setminus R$. We have $\mathrm{B}(y)=(I:s)$, whence $R/(I:s)$ is not complete too. So, after possibly replacing $I$ with $(I:s)$, we may assume that $I\ne 0$. First assume that $I=Ra$ for some $a\in P$. Let $E$ be the injective hull of $R$ and $x\in E\setminus R$ such that $\mathrm{B}(x)=I$. By Proposition \ref{P:breadth} $(R:x)=P(0:a)=(0:a)$ since $(0:a)$ is not finitely generated by \cite[Theorem 10]{Couch03}. By Lemma \ref{L:crucial} $E/R$ is not FP-injective. Now, suppose that $I$ is not finitely generated. Let $a$ be a non-zero element of $I$. Then $(0:I)\subset (0:a)$. So, if $A=I(0:a)$, then $A\ne 0$ and $A$ is Archimedean by Lemma \ref{L:ideal}. Let $R'=R/A$, $e=1+A$, $E$ the injective hull of $R'$ over $R$ and $E'=\{z\in E\mid A\subseteq (0:z)\}$. Then $E'$ is the injective hull of $R'$ over $R'$.
By hypothesis and Lemma \ref{L:uniserial} $R'$ is contained in a pure uniserial submodule $U$ of $E$. As in the proof of $(1)\Rightarrow (2)$ we get $R'=E'\cap U$. Let $I'=I/A$ and $P'=P/A$. Since $R'/I'$ is not complete in its f.c. topology, there exists $x\in E'$ such that $\mathrm{B}(x)=I'$. Then $(R':_{R'}x)=P'(0:_{R'}I')$. It is easy to see that $(R':_{R'}x)=(U:x)/A$ and $(0:_{R'}I')=(A:I)/A$. So, $(U:x)=P(A:I)$. From Lemma \ref{L:ideal} we deduce that $P(A:I)=(0:a)$. Hence $(U:x)=(0:a)$, whence $E/U$ is not FP-injective by Lemma \ref{L:crucial}. \end{proof} \begin{theorem} \label{T:mainCoh} Let $R$ be a chain ring such that $Z\ne 0$. Assume that $Q$ is coherent. The following conditions are equivalent: \begin{enumerate} \item $R/Z$ is complete in its f.c. topology; \item for each FP-injective $R$-module $E$ and for each pure polyserial submodule $U$ of $E$, $E/U$ is FP-injective. \end{enumerate} \end{theorem} \begin{proof} $(1)\Rightarrow (2)$. We may assume that $Z\subset P$. Since $Q$ is coherent, for each $0\ne a\in Z$, $(0:a)=bQ$ for some $0\ne b\in Z$. Let $E$ be an injective module, $U$ a pure uniserial submodule of $E$ and $L=E_{\sharp}$. We may assume that $E$ is indecomposable. If $L\subseteq Z$ then $E/U$ is FP-injective because $R_L$ is coherent. Now assume that $Z\subset L$. As in the proof of Theorem \ref{T:main} we may suppose that $L=P$. Let $x\in E\setminus U$, $A=(0:x)$, $a\in R$ such that $(0:a)\subseteq (U:x)$ and $c\in (U:x)\setminus A$. As in the proof of Theorem \ref{T:main} we show there exists an ideal $I$ such that $(U:x)=P(A:I)$. If $I^{\sharp}\ne Z$ we proceed as in the proof of Theorem \ref{T:main} to show that $(0:a)\subset (U:x)$. Now suppose that $I^{\sharp}=Z$. By hypothesis $I\ne rZ$ for each $r\in R$. Since $(A:I)^{\sharp}=Z\ne P$, $(U:x)=(A:I)\ne (0:a)$ by Lemma \ref{L:ann}. We conclude by Lemma \ref{L:crucial} that $E/U$ is FP-injective. $(2)\Rightarrow (1)$. Let $0\ne a\in Z$. Then $(0:a)=bQ$ for some $0\ne b\in Z$.
It is obvious that $(bZ:Z)\subseteq (bR:Z)$. Let $c\in (bR:Z)$. Then $cZ\subset bR$. Since $bQ/bZ$ is simple over $Q$ and $cZ$ is a proper $Q$-submodule of $bQ$ we get that $cZ\subseteq bZ$. Hence $(bR:Z)=(bZ:Z)$. Since $bZ$ is an Archimedean ideal over $Q$ and $(bZ:b)=Z$, we get that $(bZ:Z)=(bZ:(bZ:b))=bQ=(0:a)$ by Remark \ref{R:P=Z}. So, $(bR:Z)=(0:a)$. Now, assume that $R/Z$ is not complete in its f.c. topology. Let $E$ be the injective hull of $R/bR$. By \cite[Corollary 22(3)]{Couch03} there exists a pure uniserial submodule $U$ of $E$ containing $e=1+bR$. Now, as in the proof of Theorem \ref{T:main} we show that there exists $x\in E\setminus U$ such that $(U:x)=(bR:Z)=(0:a)$. By Lemma \ref{L:crucial} $E/U$ is not FP-injective. This contradicts the hypothesis. \end{proof} \begin{corollary} \label{C:Znonidem} Let $R$ be a chain ring such that $Z^2\ne Z$. The following conditions are equivalent: \begin{enumerate} \item $R/Z$ is complete in its f.c. topology; \item for each FP-injective $R$-module $E$ and for each pure polyserial submodule $U$ of $E$, $E/U$ is FP-injective. \end{enumerate} \end{corollary} \begin{proof} Since $Z^2\ne Z$, $Z$ is principal over $Q$ and $Q$ is coherent by \cite[Theorem 10]{Couch03}. \end{proof} A chain ring $R$ is said to be {\bf strongly discrete} if $L\ne L^2$ for each non-zero prime ideal $L$ of $R$. \begin{corollary} \label{C:discr} Let $R$ be a strongly discrete chain ring. The following conditions are equivalent: \begin{enumerate} \item $R/Z$ is complete in its f.c. topology; \item for each FP-injective $R$-module $E$ and for each polyserial submodule $U$ of $E$, $E/U$ is FP-injective. \end{enumerate} \end{corollary} \begin{corollary} \label{C:Z=P} Let $R$ be a chain ring such that $Z=P$. Consider the following conditions: \begin{enumerate} \item either $R$ is coherent or $R/I$ is complete in its f.c.
topology for each Archime\-dean ideal $I$; \item for each FP-injective $R$-module $E$ and for each pure polyserial submodule $U$ of $E$, $E/U$ is FP-injective. \end{enumerate} Then $(1)\Rightarrow (2)$ and the converse holds if each indecomposable injective module $E$ for which $E_{\sharp}=P$ contains a pure uniserial submodule. \end{corollary} \begin{proof} It is a consequence of Theorem \ref{T:main}. \end{proof} For each module $M$ we denote by $\mathcal{A}(M)$ its set of annihilator ideals, i.e. an ideal $A$ belongs to $\mathcal{A}(M)$ if there exists $0\ne x\in M$ such that $A=(0:x)$. If $E$ is a uniform injective module over a chain ring $R$, then, for any $A, B\in\mathcal{A}(E)$ with $A\subset B$, there exists $r\in R$ such that $A=rB$ and $B=(A:r)$ (see \cite{Nis72}). \begin{lemma} \label{L:puruni} Let $R$ be a chain ring. Assume that $Z_1\ne Z$, where $Z_1$ is the union of all prime ideals properly contained in $Z$. Let $E$ be an indecomposable injective $R$-module and $0\ne e\in E$. Suppose that $E_{\sharp}=Z$. Then $E$ contains a uniserial pure submodule $U$ such that $e\in U$. \end{lemma} \begin{proof} Since $E$ is a module over $Q$, we may assume that $R=Q$. By Lemma \ref{L:uniserial} it is enough to show that $E$ contains a pure uniserial submodule. Since $R/Z_1$ is Archimedean, $P$ is countably generated by \cite[Lemma 33]{Couch03}. By \cite[Proposition 32]{Couch03} $(0:P)$ is a countable intersection of ideals containing it properly. So, by \cite[Proposition 19]{Couch03} $\mathrm{E}(R/P)$ and $\mathrm{E}(R/rR)$, $r\ne 0$, contain a pure uniserial submodule. If $\mathcal{A}(E)=\mathcal{A}(R)$ then $E\cong\mathrm{E}(R)$. Since $R$ is self FP-injective, it follows that $E$ contains a pure uniserial submodule. Now assume that $\mathcal{A}(E)\ne\mathcal{A}(R)$ and $\mathcal{A}(E)\ne \{rR\mid r\in R\}$. By \cite[Theorem 5.5]{ShLe74} there exists a uniserial $R$-module $U$ such that $\mathcal{A}(U)=\mathcal{A}(E)$ and consequently $E\cong\mathrm{E}(U)$. 
Let $r\in R$ and $u\in U$ such that $(0:r)\subseteq (0:u)$. Then $(0:r)\subset (0:u)$, and $r(0:u)$ is not a principal ideal. So, $(0:P)\subset r(0:u)$, and by \cite[Proposition 27]{Couch03} there exists $v\in U$ such that $(0:v)\subset r(0:u)\subset (0:u)$. It follows that $u=tv$ for some $t\in R$. By \cite[Lemma 2]{Couch03} $(0:v)=t(0:u)\subset r(0:u)$. Hence $t\in rR$ and $u\in rU$. We conclude that $U$ is FP-injective, whence it is isomorphic to a pure submodule of $E$. \end{proof} In the following theorems, observe that the word ``polyserial'' is replaced with ``of finite Goldie dimension''. \begin{theorem} \label{T:Z=P1} Let $R$ be a chain ring such that $Z=P$. Assume that $P\ne P_1$ where $P_1$ is the union of all nonmaximal prime ideals of $R$. The following conditions are equivalent: \begin{enumerate} \item either $R$ is coherent or $R/P_1$ is almost maximal; \item for each FP-injective $R$-module $E$ and for each pure submodule $U$ of $E$ of finite Goldie dimension, $E/U$ is FP-injective. \end{enumerate} \end{theorem} \begin{proof} $(2)\Rightarrow (1)$. By Lemma \ref{L:puruni} each indecomposable injective module $E$ for which $E_{\sharp}=P$ contains a pure uniserial submodule. We conclude by Corollary \ref{C:Z=P}. $(1)\Rightarrow (2)$. We may assume that $R$ is not coherent and $E$ is the injective hull of $U$. Then $E$ is a finite direct sum of indecomposable injective modules. So, it is easy to show that $E=F\oplus G$ where $F_{\sharp}=P$ and $G_{\sharp}=L\subset P$. If $F=0$ then $E$ and $U$ are modules over $R_L$ which is coherent by \cite[Theorem 11]{Couch03}. In this case $E/U$ is FP-injective. Now, $F\ne 0$. Let $a\in R$ and $x\in E\setminus U$ such that $(0:a)\subseteq (U:x)$. We have $x=y+z$ where $y\in F$ and $z\in G$. By \cite[Lemma 26]{Couch03} $(0:y)^{\sharp}=P$ and $(0:z)^{\sharp}\subseteq L$. It is obvious that $(0:x)=(0:y)\cap (0:z)$. So, it is possible that $(0:x)^{\sharp}\subseteq L$. Let $B$ be the kernel of the natural map $R\rightarrow R_L$. 
For any $s\in P\setminus L$ we have $(0:P)\subset (0:s)\subseteq B\subseteq (0:x)$. By \cite[Corollary 28]{Couch03} there exists $f\in F$ such that $(0:f)\subset (0:x)$. There exists $b\in R$ such that $0\ne bf\in U$. Since $U$ is a pure submodule there exists $u\in U$ such that $bf=bu$. By \cite[Lemma 2]{Couch03} $(0:u)=(0:f)$. It is obvious that $(U:x+u)=(U:x)$ and we have $(0:x+u)=(0:u)$ and $(0:x+u)^{\sharp}=P$. So, after possibly replacing $x$ by $x+u$, we may assume that $(0:x)^{\sharp}=P$. For any $c\in (U:x)\setminus (0:x)$ let $e_c\in U$ such that $ce_c=cx$. Then $E$ contains an injective hull $E_c$ of $Re_c$, and clearly $x\in E_c$. So, we proceed as in the proof of Theorem \ref{T:main} to show that $(Re_c:x)^{\sharp}\ne P$ for any $c\in (U:x)\setminus (0:x)$. It is obvious that $(U:x)=\cup_{c\in (U:x)\setminus (0:x)}(Re_c:x)$. It follows that $(U:x)^{\sharp}\subseteq\cup_{c\in (U:x)\setminus (0:x)}(Re_c:x)^{\sharp}\subseteq P_1$. We conclude as in the proof of Theorem \ref{T:main}. \end{proof} \begin{theorem} \label{T:main2} Let $R$ be a chain ring. Assume that $Z\ne Z_1$ where $Z_1$ is the union of all prime ideals properly contained in $Z$. Suppose $R/Z_1$ is almost maximal. Then, for each FP-injective $R$-module $E$ and for each pure submodule $U$ of $E$ of finite Goldie dimension, $E/U$ is FP-injective. \end{theorem} \begin{proof} We may assume that $R$ is not coherent and $E$ is the injective hull of $U$. By Theorem \ref{T:Z=P1} we may suppose that $Z\ne P$. As in the proof of Theorem \ref{T:main} we may assume that $E_{\sharp}=P$. Let $a\in R$ and $x\in E\setminus U$ such that $(0:a)\subseteq (U:x)$. It is possible that $(0:x)=0$. But there exists $b\in R$ such that $0\ne bx\in U$, and since $U$ is a pure submodule there exists $v\in U$ such that $bx=bv$. We get $(0:x-v)\ne 0$ and $(U:x-v)=(U:x)$. Now we argue as in the proof of Theorem \ref{T:Z=P1} to conclude. \end{proof} \begin{corollary} \label{C:Arch} Let $R$ be an Archimedean chain ring. 
The following conditions are equivalent: \begin{enumerate} \item $R$ is either coherent or maximal; \item for each FP-injective $R$-module $E$ and for each pure submodule $U$ of finite Goldie dimension of $E$, $E/U$ is FP-injective; \item for each injective $R$-module $E$ and for each pure submodule $U$ of finite Goldie dimension of $E$, $E/U$ is injective. \end{enumerate} \end{corollary} \begin{proof} $(1)\Rightarrow (2)\ \mathrm{and}\ (3)$. By Proposition \ref{P:redu} we may assume that $E$ is injective of finite Goldie dimension. If $R$ is maximal then $E$ is a finite direct sum of uniserial modules by \cite[Theorem]{Gil71}. By \cite[Theorem XII.2.3]{FuSa01} (this theorem holds even if $R$ is not a domain) $U$ is a direct summand of $E$. So, $U$ and $E/U$ are injective. If $R$ is coherent we apply \cite[Lemma 3]{Cou05}. $(2)\Rightarrow (1)$ by Theorem \ref{T:Z=P1}. \end{proof} \begin{corollary} \label{C:locArch} Let $R$ be an arithmetical ring such that $R_P$ is Archimedean for any maximal ideal $P$ of $R$. Then the following conditions are equivalent: \begin{enumerate} \item $R_P$ is either coherent or maximal for each maximal ideal $P$ of $R$; \item for each FP-injective $R$-module $E$ and for each pure submodule $U$ of finite Goldie dimension of $E$, $E/U$ is FP-injective; \item for each injective $R$-module $E$ and for each pure submodule $U$ of finite Goldie dimension of $E$, $E/U$ is injective. \end{enumerate} \end{corollary} \begin{proof} $(3)\Rightarrow (1)$ by Corollary \ref{C:Arch}. $(1)\Rightarrow (2)$. We may assume that $E$ is injective of finite Goldie dimension. By \cite[Corollary 4]{Cou09} $E_P$ is injective, and $U_P$ is a pure submodule of $E_P$. We must prove that $(E/U)_P$ is FP-injective for each maximal ideal $P$ of $R$. If $R_P$ is coherent it is a consequence of Proposition \ref{P:locoh}. If $R_P$ is maximal and not coherent, we first show that $E_P$ is a finite direct sum of indecomposable injective $R_P$-modules. 
We may assume that $E$ is indecomposable. Since $\mathrm{End}_R(E)$ is local, there exists a maximal ideal $L$ such that $E$ is a module over $R_L$. If $L=P$ then $E_P=E$. If $L\ne P$ then $E_P=0$ because $P$ is also a minimal prime ideal. By Corollary \ref{C:Arch} $(E/U)_P$ is FP-injective. We conclude that $E/U$ is FP-injective. $(2)\Rightarrow (3)$. We have $E=E_1\oplus\dots\oplus E_n$ where $E_k$ is indecomposable for $k=1,\dots,n$. For $k=1,\dots,n$, let $P_k$ be the maximal ideal of $R$ such that $E_k$ is a module over $R_{P_k}$. If $S=R\setminus (P_1\cup\dots\cup P_n)$, then $E$ and $U$ are modules over $S^{-1}R$. So, we replace $R$ with $S^{-1}R$ and assume that $R$ is semilocal. By \cite[Theorem 5]{Jen66} each ideal of $R$ is principal ($R$ is B\'ezout). By using \cite[Corollary 36]{Couch03} it is easy to prove that each ideal of $R$ is countably generated. So, we can repeat the proof of \cite[Lemma 3]{Cou05} to show that $E/U$ is injective. \end{proof} \section*{Acknowledgements} This work was presented at the ``International Workshop on Algebra and Applications'' held in Fes, Morocco, June 18--21, 2014. I thank the organizers of this workshop. I also thank the Nicolas Oresme Mathematics Laboratory of the University of Caen Basse-Normandie, which enabled me to participate in this workshop.
\section{Introduction} A closed quantum system can resist thermalization under its own intrinsic dynamics when it is localized, e.g., induced by static disorder\cite{rmp}, a linear potential with a spatial gradient \cite{stark}, or the presence of a special subspace of the Hilbert space\cite{scar,fragment}. Experimental evidence for the violation of ergodicity was presented, e.g., in ultracold atomic fermions \cite{exp-quasidisorder} and superconducting systems \cite{roushan17}. In practice, no realistic system can be immune to environmental coupling. Recent studies have found that even when the system is eventually driven to thermal equilibrium, the localization decays slowly \cite{opendisorder}. This slow decay of localization opens a large time window during which the nonergodic character of the system becomes apparent \cite{luschen2017}. While localization is generally detrimental to the transport of quantum particles, it was recently discovered that increasing disorder within a system can enhance particle transport \cite{disorderenhance}. Furthermore, combined with environment-induced dephasing, a localized system can display robust quantum transport \cite{opendisorderenhance}. These findings imply that the interplay of localization and environment-induced decoherence can give rise to intriguing and complex dynamics in quantum systems. This motivates us to reconsider the robustness of localization, starting from the fundamental fact that the system is dissipative because of its coupling to the environment. For this purpose, we study the exact evolution of a single excitation in a one-dimensional lattice system coupled to a bosonic environment. Different from the Markovian treatments in previous works \cite{opendisorder}, the exact dynamics of the excitation can display strong dissipation or stable oscillation, depending on the localization of the initial state. 
Moreover, we find that strong localization may enhance the decay of the excitation, rather than preserving it in the system. We argue that this counterintuitive feature is a consequence of the energy exchange between the system and the environment, which induces coherent transitions among the energy levels of the system. The work is divided into five sections. Following the preceding introduction, Section II introduces the model, and the dynamic equation for the excitation is derived. Section III discusses the time evolution of the excitation for different cases by calculating the survival probability of the information of the initial state and the inverse participation ratio, both of which characterize the localization of the system. Subsequently, a physical explanation is provided in Section IV. It is shown that the energy exchange between the system and the environment is responsible for the unusual observation. Finally, the conclusion is presented in Section V. \section{The model and dynamic equation} In this work, we focus on the open dynamics of a single excitation in a generalized Aubry-Andr\'{e}-Harper (GAAH) model described by the Hamiltonian \cite{gAAH} \begin{eqnarray}\label{hs} H_S=\sum_{n=1}^N \lambda \left(c^{\dagger}_{n} c_{n+1} +c^{\dagger}_{n+1} c_{n} \right) + \nonumber \\ \sum_{n=1}^N \Delta \frac{\cos(2\pi \beta n +\phi)}{1 - a \cos(2\pi \beta n +\phi)}c^{\dagger}_n c_{n}, \end{eqnarray} where $N$ denotes the number of lattice sites and $c_n$ ($c^{\dagger}_n$) is the annihilation (creation) operator of the excitation at the $n$-th lattice site. For a quasi-periodic modulation, we adopt $\beta=\left(\sqrt{5}-1\right)/2$, in line with the recent experimental verification of the delocalization-localization transition \cite{exp-quasidisorder}. The onsite potential is a smooth function of the parameter $a$ in the open interval $a\in(-1, 1)$. When $a=0$, Eq. 
\eqref{hs} reduces to the standard AAH model \cite{aah}, in which a delocalization-localization phase transition occurs at $\Delta=2\lambda$. For $a\neq 0$, the GAAH model exhibits an exact mobility edge (ME) following the expression \cite{gAAH} \begin{eqnarray}\label{me} a E_c = \text{sign}\left(\lambda\right)\left(2\left|\lambda\right| - \left|\Delta\right|\right). \end{eqnarray} In the above, $E_c$ is a special eigenenergy of $H_S$ that separates the extended eigenstates from their localized counterparts. The coexistence of localized and delocalized states is a typical feature of the GAAH model, and leads to complex excitation dynamics in the system. To avoid boundary effects, the periodic boundary condition, i.e., $c_n=c_{n+N}$, is adopted. Since the current work focuses on the robustness of localization in the system, $\phi=\pi$ is adopted without loss of generality. For simplicity, $\hbar=\lambda \equiv 1$ is assumed in the following discussion. Recently, the localization properties of the GAAH model have been experimentally investigated in optical lattices \cite{alexan21}. To establish the open dynamics of localization in the GAAH model, a bosonic reservoir with modes characterized by frequencies $\omega_k$ is introduced as the environment. Its Hamiltonian can be written as follows: \begin{eqnarray} H_B= \sum_k \omega_k b_k^{\dagger}b_k, \end{eqnarray} where $b_k$ ($b_k^{\dagger}$) is the annihilation (creation) operator of reservoir mode $k$. The system is coupled to the environment via particle exchange, \begin{eqnarray} H_{int}= \sum_{k, n}\left(g_k b_k c_n^{\dagger} + g_k^* b_k^{\dagger} c_n\right), \end{eqnarray} where $g_k$ is the coupling amplitude between the system and reservoir mode $k$. 
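As a concrete illustration of Eq. \eqref{hs} and the transition at $\Delta=2\lambda$, the single-particle matrix of $H_S$ can be built and diagonalized numerically. The sketch below is our own illustration (not code from this work): it constructs $H_S$ with the periodic boundary condition and compares the inverse participation ratio of the highest excited eigenstate deep in the localized and extended phases of the standard AAH limit ($a=0$).

```python
import numpy as np

def gaah_h(N=21, lam=1.0, delta=2.5, a=0.0,
           beta=(np.sqrt(5) - 1) / 2, phi=np.pi):
    """Single-particle matrix of H_S in Eq. (1), with the periodic
    boundary condition c_{N+1} = c_1."""
    n = np.arange(1, N + 1)
    cosn = np.cos(2 * np.pi * beta * n + phi)
    H = np.diag(delta * cosn / (1.0 - a * cosn))
    for i in range(N):
        H[i, (i + 1) % N] = H[(i + 1) % N, i] = lam
    return H

def ipr(state):
    """Inverse participation ratio: close to 1 for a state localized
    on a single site, of order 1/N for a uniformly extended state."""
    return float(np.sum(np.abs(state) ** 4))

# Standard AAH limit (a = 0): localized for Delta > 2, extended below
es_loc = np.linalg.eigh(gaah_h(delta=4.0))[1][:, -1]   # localized phase
es_ext = np.linalg.eigh(gaah_h(delta=0.5))[1][:, -1]   # extended phase
```

Here `np.linalg.eigh` returns eigenvalues in ascending order, so the last column is the highest excited eigenstate used as the initial state later in the text.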
The complexity of the dynamics is determined by the spectral density \begin{eqnarray} J(\omega)= \sum_k \left| g_k \right|^2 \delta\left(\omega- \omega_k\right), \end{eqnarray} which characterizes the structure of the environment and the system-environment interaction. In this work, an Ohmic-type spectral density is adopted: \begin{eqnarray}\label{j} J(\omega)= \eta \omega \left(\frac{\omega}{\omega_c}\right)^{s-1}e^{-\omega/\omega_c}. \end{eqnarray} The quantity $\eta$ is simply the classical friction coefficient, and thus forms a dimensionless measure of the coupling strength between the system and its environment \cite{leggett}. Eq. \eqref{j} characterizes the damped motion of electrons in a potential, and can simulate a large class of environments. The environment can be classified as sub-Ohmic ($s<1$), Ohmic ($s=1$) or super-Ohmic ($s>1$) \cite{leggett}. Without loss of generality, we focus on the Ohmic case ($s=1$) since it characterizes the typical dynamics of dissipation in the system \cite{leggett}. Here, $\omega_c$ is the cutoff frequency of the environment spectrum, beyond which the spectral density starts to fall off; hence it determines the regime of reservoir frequencies that dominates dissipation. In general, the value of $\omega_c$ depends on the specific environment. In this work, $\omega_c=10$ is set in order to guarantee that the highest energy level of $H_S$ is embedded in the continuum of the environment. Given the absence of particle interactions in Eq. \eqref{hs}, it is convenient to focus our attention on the dynamics of a single excitation. Under this prerequisite, the environment is set to be at zero temperature such that it remains in the vacuum state. For an environment at finite temperature, recent studies have shown that localization may be destroyed by heating dynamics. As a result, the system eventually relaxes into an infinite-temperature state \cite{opendisorder}. 
However, the relaxation is logarithmically slow in time because of the localization in the system \cite{opendisorder}. This means that there is a time window in which the nonergodic feature of localization can be observed in experiments \cite{luschen2017}. So it is expected that the dynamics of localization can display some unusual features at zero temperature because of the strong coupling between the system and the environment. Furthermore, the dynamics of localization can be treated exactly in this case, and thus one can get a comprehensive picture of how the localization in the system is modified by the coupling to the environment. For this purpose, the dynamical equations of the excitation must be derived first. Formally, at any time $t$, the state of the system plus environment can be written as \begin{eqnarray}\label{state} \ket{\psi(t)}&=& \left(\sum_{n=1}^N \alpha_n(t) \ket{1}_n\ket{0}^{\otimes (N-1)} \right)\otimes \ket{0}^{\otimes M} + \nonumber \\ &&\ket{0}^{\otimes N} \otimes \left(\sum_{k=1}^{M} \beta_k(t) \ket{1}_k\ket{0}^{\otimes (M-1)} \right), \end{eqnarray} where $\ket{1}_n = c_n^{\dagger}\ket{0}_n$ denotes the occupation of the $n$-th lattice site, $\ket{0}_k$ is the vacuum state of $b_k$ and $\ket{1}_k=b_k^{\dagger}\ket{0}_k$, and $M$ denotes the number of modes in the environment. Substituting Eq. \eqref{state} into the Schr\"{o}dinger equation and solving first for $\beta_k (t)$, one gets an integro-differential equation for $\alpha_n(t)$, \begin{eqnarray}\label{evolution} \mathbbm{i}\frac{\partial }{\partial t}\alpha_n(t)&=& \left[\alpha_{n+1}(t) + \alpha_{n-1}(t)\right]+ \Delta \frac{\cos(2\pi \beta n +\phi)}{1 - a \cos(2\pi \beta n +\phi)}\alpha_n(t) \nonumber\\ &&- \mathbbm{i} \sum_{m=1}^N \int_0^t \text{d}\tau\alpha_m(\tau)f(t-\tau), \end{eqnarray} where $\mathbbm{i}$ is the square root of $-1$, and the memory kernel $f(t-\tau)$ is defined as \begin{eqnarray} f(t-\tau)= \int_0^{\infty} \text{d} \omega J(\omega)e^{-\mathbbm{i} \omega(t-\tau)}. 
\end{eqnarray} It is evident that the amplitude $\alpha_n(t)$ is significantly correlated with its past values. For the spectral density in Eq. \eqref{j}, $f(t-\tau)= \frac{\eta}{\omega_c^{s-1}} \frac{\Gamma(s+1)}{\left[\mathbbm{i}(t-\tau) + 1/\omega_c\right]^{s+1}}$. It is noted that the Markovian limit could be obtained by replacing $\alpha_n(\tau)$ in Eq. \eqref{evolution} with its current value $\alpha_n(t)$. As a result, the last term on the right-hand side of Eq. \eqref{evolution} reduces to a positive damping term, which depicts the decay of $\alpha_n(t)$ \cite{hnxiong10}. So it is expected that the non-Markovian feature gives rise to distinct excitation dynamics. \section{Dynamical evolution of excitation in the system} \begin{figure*} \center \includegraphics[width=12cm]{f1.pdf} \caption{(Color online) Logarithmic plotting for the time evolution of SP (right column) and IPR (left column) when $a=0$ for different values of $\Delta$. The initial state is chosen as the highest excited state of $H_S$ in all plots. For these plots, $\eta=0.1$, $\omega_c=10$, $s=1$ and $\phi=\pi$ are chosen. } \label{fig:a=0} \end{figure*} Taking the existence of the ME into account, we focus on the evolution of the excitation initially in the highest excited eigenstate (ES) of $H_S$ \cite{alexan21}. According to Eq. \eqref{me}, a higher eigenenergy corresponds to stronger localization of the ES \cite{gAAH}. Moreover, since the ES is embedded in the continuum of the environment, the occurrence of a bound state is excluded from the current discussion \cite{john}. We introduce the survival probability (SP), defined as $SP=\left|\inp{\psi(t)}{ES}\right|^2$, to characterize the dissipation of quantum information. In addition, the inverse participation ratio (IPR), defined as $\text{IPR}=\sum_{n=1}^N \left|\alpha_n(t)\right|^4$, is also calculated to establish the localization variation in the system. 
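The quantities just defined can be obtained from Eq. \eqref{evolution} numerically. The sketch below is our own minimal discretization (not the authors' code): the coherent part is propagated exactly through the eigendecomposition of $H_S$, while the memory integral is approximated by a Riemann sum over the stored history, using the closed-form kernel for $s=1$ and the parameter values quoted in the text.

```python
import numpy as np

# Parameters quoted in the text: lambda = hbar = 1, Ohmic case s = 1
N, delta, a, eta, omega_c = 21, 2.5, 0.0, 0.1, 10.0
beta, phi = (np.sqrt(5) - 1) / 2, np.pi

# Single-particle matrix of H_S, Eq. (1), periodic boundary condition
n = np.arange(1, N + 1)
cosn = np.cos(2 * np.pi * beta * n + phi)
H = np.diag(delta * cosn / (1.0 - a * cosn))
for i in range(N):
    H[i, (i + 1) % N] = H[(i + 1) % N, i] = 1.0

def kernel(t):
    """Closed form of f(t) for s = 1: f(t) = eta / (i t + 1/omega_c)^2."""
    return eta / (1j * t + 1.0 / omega_c) ** 2

def evolve(psi0, t_max=10.0, dt=0.01):
    """Integrate Eq. (8): exact propagation of the coherent part plus
    a Riemann sum over the stored history for the memory term.  The
    bath couples to every site identically, so the memory term involves
    the sum of alpha_n over the chain."""
    evals, vecs = np.linalg.eigh(H)
    U = (vecs * np.exp(-1j * evals * dt)) @ vecs.conj().T
    steps = int(round(t_max / dt))
    kern = kernel(dt * np.arange(steps))
    alpha = psi0.astype(complex)
    hist = [alpha.sum()]              # sum_n alpha_n(tau) at past times
    sp, ipr = [], []
    for step in range(steps):
        mem = dt * np.sum(kern[:step + 1][::-1] * np.array(hist))
        alpha = U @ alpha - dt * mem * np.ones(N)
        hist.append(alpha.sum())
        sp.append(abs(np.vdot(psi0, alpha)) ** 2)
        ipr.append(float(np.sum(np.abs(alpha) ** 4)))
    return np.array(sp), np.array(ipr)

# Initial state: the highest excited eigenstate (ES) of H_S
es = np.linalg.eigh(H)[1][:, -1]
sp, ipr = evolve(es)
```

This split-step update is only first-order accurate in $dt$ for the memory term; the iterative summation of the integral described in the next section plays the same role in the paper's own calculation.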
Both SP and IPR have been extensively used to explore the dynamics of localization in disordered many-body systems \cite{hs15}. The following discussion focuses on two cases, i.e., $a=0$ and $a=0.5$. For the former, the ME is absent. However, a delocalization-localization transition can occur in the system when $\Delta=2$, which separates the delocalized ($\Delta<2$) from the localized ($\Delta>2$) phase. This transition is absent in the latter; instead, the eigenstates are classified as localized ($E>E_c$) or extended ($E<E_c$) owing to the occurrence of the ME. Finally, the difficulty of finding an analytical solution to Eq. \eqref{evolution} is noted; thus, we have to rely on numerical methods. Our method is to transform the integral in Eq. \eqref{evolution} into a summation with a properly chosen step length. By solving Eq. \eqref{evolution} iteratively, $\alpha_n(t)$ can be determined for any time $t$. However, the computational cost grows rapidly with the number of steps and the lattice number $N$. To disclose the long-time behavior of SP and IPR, $N$ is restricted to 21 so that $t=1,200$ could be reached at a moderate computational cost. \subsection{$a=0$} In Fig. \ref{fig:a=0}, the time evolution of SP and IPR is plotted for different values of $\Delta$. As shown in Fig.\ref{fig:a=0}(a1) and (a2), both SP and IPR display a rapid decay when the system is in the delocalized phase ($\Delta<2$). With an increase of $\Delta$, the decay of IPR becomes very slow, as shown in Fig.\ref{fig:a=0}(a2). Meanwhile, a stable oscillation develops for SP, as indicated for $\Delta=2.5$ in Fig. \ref{fig:a=0}(a1) and $\Delta=3$ and $4$ in Fig. \ref{fig:a=0}(b1). Since the ES is localized in these cases, this manifests the robustness of localization against dissipation. This finding is different from the observation in Ref. \cite{opendisorder}, where the localized system eventually reaches thermal equilibrium because of the coupling to the environment. 
It can be attributed to the effect of the memory kernel $f(t-\tau)$, which makes the past and current states of the system interfere. The stability of SP or IPR can also be manifested by the variance of the excitation position over the lattice sites, $\langle\delta^2 n\rangle$. As shown in Fig.\ref{fig:variance}(b) in the Appendix, $\langle\delta^2 n\rangle$ displays a regular oscillation for $\Delta=2.5$. In contrast, it becomes irregular for $\Delta=1$, as shown in Fig.\ref{fig:variance}(a). With a further increase in $\Delta$, we find that both SP and IPR display significant decay, as shown for $\Delta=6$ and $10$ in Fig.\ref{fig:a=0}(b1) and (b2). This unusual feature indicates that strong localization may enhance the loss of quantum information, rather than preserving it against decoherence. However, we also note that, in contrast to the super-exponential decay of the extended state, the strongly localized state decays exponentially instead. Experimentally, this implies that it is still possible to differentiate the extended from the localized phase by checking the process of decoherence. In addition, it is noted that $\langle\delta^2 n\rangle$ can tend to a steady value, as shown in Fig.\ref{fig:variance}(c). This observation can be considered a result of the strong localization in the system. \subsection{$a=0.5$} \begin{figure*} \center \includegraphics[width=12cm]{f2.pdf} \caption{(Color online) Logarithmic plotting for the time evolution of SP (right column) and IPR (left column) when $a=0.5$ for different values of $\Delta$. The initial state is chosen as the highest excited state of $H_S$ in all plots. The other parameters are the same as those shown in Fig. \ref{fig:a=0}. } \label{fig:a=0.5} \end{figure*} To gain a further understanding of the localization-enhanced dissipation, this section focuses on the case of $a\neq 0$. A distinct feature in this situation is the occurrence of the ME \cite{gAAH}. 
Consequently, the ES of $H_S$ may behave in a localized or extended manner, which is decided by the relation between its eigenvalue $E$ and $E_c$ in Eq. \eqref{me}. As a result, the system cannot simply be classified as either localized or extended. It is noted that, depending on the sign of $a$, the maximally localized state can be exchanged between the ground state and the highest excited state of $H_S$ \cite{gAAH}. Since this work focuses on the interplay of localization and dissipation, $a=0.5$ is selected as an example. Accordingly, the ES is maximally localized. For $a<0$, the ground state is maximally localized instead. For $\omega_k>0$, the coupling to the environment renormalizes the ground state as a dissipationless bound state \cite{john}. This unique situation is excluded from our discussion. In Fig. \ref{fig:a=0.5}, the evolution of SP and IPR is presented for different values of $\Delta$. Two selected cases, i.e., $\Delta=0.5$ and $0.76$, are first studied, for which the ES is extended or has an eigenvalue very close to $E_c$. Both SP and IPR decay rapidly at early times before descending very slowly, as shown in Fig. \ref{fig:a=0.5}(a1) and (a2). With the increase of $\Delta$, the ES behaves in a more localized manner. As anticipated, both SP and IPR decay slowly when $\Delta=1.5, 2$ and $3$. In contrast, a steep descent is observed when $\Delta=3, 6$ and $10$, as shown in Fig. \ref{fig:a=0.5}(b1) and (b2). This feature is consistent with the observation in the case of $a=0$: the strong localization of a state can enhance the loss of information of the initial state. Similar to the former case, one finds that both SP and IPR seem to become numerically stable when $\Delta=1$, as shown in Fig. \ref{fig:a=0.5}(a1) and (a2). Meanwhile, a regular oscillation can also be noted for SP. The stability can also be demonstrated by $\langle\delta^2 n\rangle$. 
As shown in Fig.\ref{fig:variance}(e) in the Appendix, a regular oscillation of $\langle\delta^2 n\rangle$ can be observed. In contrast, this picture is absent for the other values of $\Delta$, as exemplified in Fig. \ref{fig:variance}(d) and (f). To conclude, we have shown that localization does not always preserve the excitation in the system. Instead, strong localization enhances the decay of the excitation into the environment. However, one can also note that before this picture occurs, SP or IPR can remain stable. This observation implies that the localization-enhanced dissipation could have a different underlying physical origin, which will be disclosed in the next section. \section{Coupling-induced coherent transition among energy levels $E$} \begin{figure*} \center \includegraphics[width=17cm]{f3.pdf} \caption{(Color online) Comparative plotting of the time evolution of SP (blue color) and IPR (red color) for $\eta=0.1$ (solid line) and $\eta=0.5$ (dashed line). Except for $a$ and $\Delta$, $\omega_c=10$, $s=1$ and $\phi=\pi$ are selected for all plots. } \label{fig:eta} \end{figure*} In general, the periodic oscillation of SP is a manifestation of a coherent transition between two orthogonal states. To verify this point, it is necessary to find the eigenstates by solving the Schr\"{o}dinger equation as follows: \begin{eqnarray}\label{h} H\ket{\psi_E}= E \ket{\psi_E}, \end{eqnarray} in which $H=H_S + H_B +H_{int}$ is the total Hamiltonian of the system plus its environment. Formally, the eigenstate can be written as $\ket{\psi_E}= \left(\sum_{n=1}^N \alpha_n \ket{1}_n\ket{0}^{\otimes (N-1)} \right)\otimes \ket{0}^{\otimes M} +\ket{0}^{\otimes N} \otimes \left(\sum_{k=1}^{M} \beta_k \ket{1}_k\ket{0}^{\otimes (M-1)} \right)$. Substituting $\ket{\psi_E}$ into Eq. 
\eqref{h} and eliminating $\beta_k$, we get \begin{eqnarray}\label{psie} \left(\alpha_{n+1} + \alpha_{n-1}\right) &+& \Delta \frac{\cos(2\pi \beta n +\phi)}{1 - a \cos(2\pi \beta n +\phi)}\alpha_n +\nonumber \\ && \int_0^{\infty} \text{d}\omega\frac{J(\omega)}{E-\omega} \sum_{m=1}^N \alpha_m = E \alpha_n. \end{eqnarray} By solving Eq.\eqref{psie}, the eigenenergy $E$ and the corresponding coefficients $\alpha_n$ can be determined, which characterize the state of the system after tracing out the environment. In this sense, we call the state $\sum_n \alpha_n \ket{1}_n$ the reduced energy eigenstate of the system. As shown in the Appendix, because of the integral $\int_0^{\infty} \text{d}\omega\frac{J(\omega)}{E-\omega}$, $E$ can be complex, and thus Eq.\eqref{psie} depicts the nonunitary dynamics of the single excitation in the atomic chain. Formally, one can introduce an effective non-Hermitian Hamiltonian to characterize the nonunitary dynamics. From this point of view, the state $\sum_n \alpha_n \ket{1}_n$ can be considered an eigenstate of the non-Hermitian Hamiltonian. However, it is difficult to construct this counterpart here since $E$ itself is involved in Eq. \eqref{psie}. Thus, in order to find $E$ and the corresponding $\alpha_n$s, one has to resort to numerical methods. In the Appendix, Eq. \eqref{psie} is solved numerically for $a=0, \Delta=2.5$ and $a=0.5, \Delta=1$, respectively. Focusing on the two highest $E$s, we find that the difference of their real parts is consistent with the oscillation frequency of SP observed in the previous section. For instance, for $a=0$ and $\Delta=2.5$ the oscillation of SP has a frequency of $\sim 2\pi/77.8=0.08076$. Correspondingly, the calculated difference between the real parts of the two highest $E$s is $2.952238 - 2.882305=0.069933$, as shown in Fig. \ref{fig:eta}(a). A similar observation can be found for $a=0.5$ and $\Delta=1$, where SP shows an oscillation with frequency $\sim 2\pi/57.6=0.109083$. 
The calculated difference is $2.705113- 2.605926=0.099187$, as shown in Fig. \ref{fig:eta}(b). The small discrepancy can be attributed to estimation error. Moreover, it is noted that the imaginary part of the highest $E$ has an order of magnitude of $\sim 10^{-6}$ in both cases. This means that the decay of SP and IPR is slow, and neither displays a discernible descent in numerical simulations up to $t\sim 10^3$. Furthermore, we calculate the overlap between the initial state of the system and the state $\ket{\psi_{ES}}$ corresponding to the highest $E$. For $a=0, \Delta=2.5$, $\left|\inp{ES}{\psi_{ES}}\right|^2=0.498776$, which is consistent with the extremal maximum of SP, $0.496809$, extracted from the data in Fig. \ref{fig:a=0}. For $a=0.5, \Delta=1$, $\left|\inp{ES}{\psi_{ES}}\right|^2=0.4471716$, which is consistent with the extremal maximum of SP, $0.405453$, extracted from the data in Fig. \ref{fig:a=0.5}. We have shown that the steady oscillation of SP comes from the coherent transition of the system between the two highest excited states. Physically, the transition is a consequence of the energy exchange between the system and its environment. Thus it is expected that the transition can be varied by changing the coupling strength $\eta$. A comparative study of SP and IPR is presented for $\eta=0.1$ (solid line) and $\eta=0.5$ (dashed line) in Fig. \ref{fig:eta}. It is clear that SP or IPR may show distinct responses to the increase of $\eta$, depending on the localization of the initial state. When the initial state is extended or weakly localized, the evolution of both SP and IPR shows little variation for different $\eta$, as shown in Fig.\ref{fig:eta}(a) and (d). In contrast, when the initial state is highly localized, a significant variation of SP and IPR can be observed. As shown in Figs.\ref{fig:eta}(c) and (f), the evolution of SP or IPR can be stretched significantly by the increase of $\eta$. 
In particular, an oscillation develops with increasing $\eta$, as demonstrated in Fig.~\ref{fig:eta}(c). However, for the steady cases displayed in the previous section, the increase of the coupling strength has no noticeable effect on SP or IPR, as shown in Figs.~\ref{fig:eta}(b) and (e). In order to explain this phenomenon, we also examine the two highest $E$s for different $\Delta$ or $\eta$, as shown in Table \ref{table:eta}. It is noted that the difference between the real parts of the two highest $E$s is enlarged with increasing $\Delta$ or $\eta$. Combining these observations, one can conclude that the localization-enhanced decay of the excitation is a consequence of the environment being unable to provide suitable energy to drive the coherent transition of the excitation between the states corresponding to the two highest $E$s, since their energy gap becomes large with increasing $\Delta$. As a result, the energy flows unidirectionally into the environment, and the excitation may be absorbed completely by the environment. In contrast, when the initial state is extended or weakly localized, the energy gap is small. The environment then provides excess energy, such that the initial state becomes mixed with more than one energy level of the system. As a result, the excitation spreads over all the atomic sites, and the information of the initial state is rapidly erased. The increase of $\eta$ enhances the energy exchange between the system and its environment. This is why the evolution of SP or IPR becomes stretched, as shown in Figs.~\ref{fig:eta}(c) and (f).
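The growth of the gap with $\Delta$ can be checked against the numbers in Table \ref{table:eta}; the short Python sketch below (illustrative only) takes the $a=0$, $\eta=0.1$ column and confirms that both the real-part gap and the decay rate of the second-highest level grow monotonically with $\Delta$:

```python
# Two highest reduced energies for a = 0, eta = 0.1, copied from the table.
table_a0 = {
    1.0: (2.0671 - 1.4187e-10j, 2.0591 - 5.8097e-8j),
    2.5: (2.9522 - 5.0623e-6j, 2.8823 - 5.3124e-5j),
    6.0: (6.1522 - 3.3326e-4j, 5.9587 - 4.0397e-3j),
}

# Real-part gap and decay rate (minus imaginary part of the second level).
gap = {d: e1.real - e2.real for d, (e1, e2) in table_a0.items()}
decay = {d: -e2.imag for d, (e1, e2) in table_a0.items()}
```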
\begin{table} \begin{tabular}{c|c|c} \multicolumn{3}{c}{$a=0$}\\ \hline \hline $\eta$ & 0.1 & 0.5 \\ \hline \multirow{2}*{$\Delta=1$} & $2.0671 - 1.4187\times 10^{-10} \mathbbm{i}$ & $2.0671 - 2.8230\times 10^{-11} \mathbbm{i}$ \\ & $2.0591 - 5.8097 \times 10^{-8}\mathbbm{i}$ & $2.05910 - 1.1555\times 10^{-8} \mathbbm{i}$\\ \hline \multirow{2}*{$\Delta=2.5$} & $2.9522 - 5.0623\times 10^{-6} \mathbbm{i}$ & $2.9522 - 1.0123 \times 10^{-6} \mathbbm{i}$ \\ & $2.8823 - 5.3124\times 10^{-5} \mathbbm{i}$ & $2.8822 - 1.0584\times 10^{-5} \mathbbm{i}$ \\ \hline \multirow{2}*{$\Delta=6$} &$6.1522 - 3.3326\times 10^{-4} \mathbbm{i}$ & $6.1519 - 6.7207 \times 10^{-5} \mathbbm{i}$ \\ & $5.9587 - 4.0397 \times 10^{-3} \mathbbm{i}$ & $5.9539 - 8.2535\times 10^{-4} \mathbbm{i}$ \\ \hline\hline \end{tabular} \begin{tabular}{c|c|c} \multicolumn{3}{c}{$a=0.5$}\\ \hline \hline $\eta$ & 0.1 & 0.5 \\ \hline \multirow{2}*{$\Delta=0.5$} & $ 2.1666 - 2.7500\times 10^{-8} \mathbbm{i}$ & $2.1666 - 5.4767\times 10^{-9} \mathbbm{i}$ \\ & $ 2.1365 - 5.3164\times 10^{-8}\mathbbm{i}$ & $ 2.1365 - 1.0566\times 10^{-8} \mathbbm{i}$\\ \hline \multirow{2}*{$\Delta=1$} & $ 2.7051- 7.0911\times 10^{-6} \mathbbm{i}$ & $ 2.7051 - 1.4174\times 10^{-6} \mathbbm{i}$ \\ & $ 2.6059 - 6.0298\times 10^{-5} \mathbbm{i}$ & $ 2.6058 - 1.1989\times 10^{-5} \mathbbm{i}$ \\ \hline \multirow{2}*{$\Delta=6$} &$ 11.9802 - 0.01578\mathbbm{i}$ & $ 11.9766 - 3.2128\times 10^{-3} \mathbbm{i}$ \\ & $ 11.3612- 0.1374 \mathbbm{i}$ & $ 11.3003 - 0.0305 \mathbbm{i}$ \\ \hline\hline \end{tabular} \caption{\label{table:eta} A comparison of the two highest $E$s for $\eta=0.1$ and $\eta=0.5$. The values of $E$ are determined numerically by solving Eq. \eqref{psie}. The other parameters are the same as those in Fig.
\ref{fig:eta}.} \end{table} \section{Conclusion} In conclusion, the exact dynamics of an excitation, initially embedded in the highest excited state of a GAAH model coupled to an Ohmic-type environment, is studied by evaluating SP and IPR. An important observation is that a stable oscillation and an enhanced decay of SP and IPR can be detected when the localization of the initial state is moderate or strong, respectively. This finding is distinct from the common perception that localization in the system would protect quantum information against dissipation. To gain a further understanding of this result, the eigenenergy $E$ is determined numerically by solving Eq.~\eqref{psie}. It is shown that the stable oscillation of SP and IPR is a result of the coherent transition of the excitation between the states corresponding to the two highest $E$s, which stems from the energy exchange between the system and the environment. Consequently, the environment can feed appropriate energy back into the system, which induces a periodic population transfer between the two states. However, with a substantial increase of $\Delta$, the energy difference between these two states is enlarged, and the environment cannot feed back enough energy to drive this transition. In contrast, when the energy difference is small, the feedback of energy from the environment makes the highest level interfere with the other levels. As a consequence, the information of the initial state is eventually erased. This explanation is further verified by examining the influence of the coupling strength $\eta$. As shown in Figs.~\ref{fig:eta}(c) and (f), the increase of $\eta$ significantly stretches the decay of both SP and IPR when the highest energy level is strongly localized, whereas when the highest energy level is extended or weakly localized, the increase of $\eta$ generally enhances the decay of SP and IPR at short times, as shown in Figs.~\ref{fig:eta}(a) and (d).
Finally, it should be pointed out that the presence of particle interactions can significantly modify the dynamics of an excitation in the system. However, the exact treatment of open dynamics in the context of interacting many-body systems is a challenging task. Recent studies on disordered many-body systems showed that the role of interaction is to delocalize the system \cite{rmp, exp-quasidisorder}. In this sense, the interaction could act as an environment, which thermalizes the system. Since the localization-enhanced dissipation depends only on the state, it may still be observed in the presence of interactions. \section*{Acknowledgments} HTC acknowledges the support of the Natural Science Foundation of Shandong Province under Grant No. ZR2021MA036. MQ acknowledges the support of the National Natural Science Foundation of China (NSFC) under Grant No. 11805092 and the Natural Science Foundation of Shandong Province under Grant No. ZR2018PA012. HZS acknowledges the support of the NSFC under Grant No. 11705025. XXY acknowledges the support of the NSFC under Grant No. 11775048. \renewcommand\thefigure{A\arabic{figure}} \renewcommand\theequation{A\arabic{equation}} \setcounter{equation}{0} \setcounter{figure}{0} \section*{Appendix} The integral $\int_0^{\infty} \text{d}\omega\frac{J(\omega)}{E-\omega}$ in Eq.~\eqref{psie} is divergent when $E>0$. Therefore, to determine $E$, we apply the Sokhotski-Plemelj formula to evaluate this integral. The Sokhotski-Plemelj formula reads \begin{eqnarray}\label{sp} \lim_{\epsilon\rightarrow 0} \frac{1}{x-x_0 \pm \mathbbm{i} \epsilon} = \text{P} \frac{1}{x- x_0} \mp \mathbbm{i} \pi \delta\left(x - x_0\right), \end{eqnarray} in which P denotes the Cauchy principal value. Thus, \begin{eqnarray} \lim_{\epsilon\rightarrow 0} \int_0^{\infty} \text{d}\omega\frac{J(\omega)}{\omega- E - \mathbbm{i} \epsilon} = \text{P} \int_0^{\infty} \text{d}\omega\frac{J(\omega)}{\omega- E } + \mathbbm{i} \frac{\pi}{2} J(E).
\end{eqnarray} In the above derivation, the case of $- \mathbbm{i} \epsilon$ is selected. As shown in the following discussion, this choice guarantees that $E$ has a negative imaginary part, which characterizes the dissipative dynamics. \begin{figure*} \center \includegraphics[width=8.5cm]{fa1a.pdf} \includegraphics[width=8.5cm]{fa1b.pdf} \caption{(Color online) Contour plots for the numerical determination of $E$ in Eq.~\eqref{psie} for (a) $a=0, \Delta=2.5$ and (b) $a=0.5, \Delta=1$. The solid blue and dashed red lines represent, respectively, the vanishing of the real and imaginary parts of the coefficient determinant of Eq.~\eqref{psie}. The insets in (a) and (b) determine the two highest reduced energy levels. $\eta=0.1$, $\omega_c=10$, $s=1$ and $\phi=\pi$ are chosen for these plots.} \label{fig:psie} \end{figure*} The value of $E$ can be established by finding the zeros of the coefficient determinant of Eq.~\eqref{psie}. In Fig. \ref{fig:psie}, the contours of the vanishing real (solid blue lines) and imaginary (dashed red lines) parts of the coefficient determinant are presented for (a) $a=0, \Delta=2.5$ and (b) $a=0.5, \Delta=1$, in which the crossing points of the two lines correspond to the values of the reduced energy $E$. The two insets in each plot show the details for the two $E$s with the largest real parts. The precise value of $E$ and the corresponding $\alpha_n$ can then be determined by iteratively solving the eigenvalue equation Eq.~\eqref{h}. Doing so yields $E_1= 2.952238 - 5.062298\times 10^{-6} \mathbbm{i}$, $E_2= 2.882305 -5.312399\times 10^{-5} \mathbbm{i}$ for case (a) and $E_1= 2.705113 - 7.091364\times 10^{-6} \mathbbm{i}$, $E_2=2.605926 - 6.029\times 10^{-5} \mathbbm{i}$ for case (b). \begin{figure*} \center \includegraphics[width=15cm]{fa2.pdf} \caption{(Color online) Plots of $\langle\delta^2 n\rangle$ for different situations.
Apart from $a$ and $\Delta$, $\eta=0.1$, $\omega_c=10$, $s=1$ and $\phi=\pi$ are chosen for these plots.} \label{fig:variance} \end{figure*} To characterize the stability of SP and IPR observed in Figs. \ref{fig:a=0} and \ref{fig:a=0.5}, the variance of the position of the excitation within the atomic chain, defined as $\langle\delta^2 n\rangle= \frac{\sum_{n=1}^N \left|\alpha_n(t)\right|^2\left(n-\langle n\rangle\right)^2}{\sum_n \left|\alpha_n(t)\right|^2}$ with $\langle n\rangle=\frac{\sum_n\left|\alpha_n(t)\right|^2 n}{\sum_n \left|\alpha_n(t)\right|^2}$, is studied for different cases in Fig. \ref{fig:variance}. It is evident that $\langle\delta^2 n\rangle$ can display regular oscillations for $a=0, \Delta=2.5$ and $a=0.5, \Delta=1$. This feature can also be understood from the coupling-induced coherent transition between the two highest energy levels, as a result of which $\langle\delta^2 n\rangle$ is determined by these two levels. In contrast, for $a=0, \Delta=1$ and $a=0.5, \Delta=0.5$, the time evolution of $\langle\delta^2 n\rangle$ becomes irregular. This observation can be attributed to the extended character of the system, by which the excitation populates the atomic sites uniformly. However, for $a=0, \Delta=6$ and $a=0.5, \Delta=3$, the system is deep in the localized phase. Thus, one finds that $\langle\delta^2 n\rangle$ tends to be stable, as shown in Fig. \ref{fig:variance}.
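The variance defined above is straightforward to evaluate from the site amplitudes; a minimal Python sketch follows, in which the amplitudes are normalized explicitly, since the norm decays under the nonunitary evolution:

```python
def position_moments(alpha):
    """Mean position <n> and variance <delta^2 n> of the excitation,
    computed from the (generally unnormalized) site amplitudes alpha_1..alpha_N."""
    w = [abs(a) ** 2 for a in alpha]     # site populations |alpha_n|^2
    norm = sum(w)
    mean = sum((n + 1) * w[n] for n in range(len(w))) / norm
    var = sum((n + 1 - mean) ** 2 * w[n] for n in range(len(w))) / norm
    return mean, var
```

For an excitation pinned to a single site the variance vanishes, while a uniform distribution over the chain gives the maximal spread, matching the qualitative behavior shown in the figure.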
\section{Introduction} The problem of particles attached to an interface between two immiscible fluids has been extensively studied for many decades and has a wide variety of engineering and medical applications, such as the formation of Pickering emulsions and particle monolayers. There exist a number of excellent texts and review articles focused on this topic \cite{Leal1980,Maldarelli2022,binks2006}. The simplest problem that has been studied consists of a single rigid sphere translating along a flat interface between two immiscible fluids at low Reynolds number. A better estimate of the drag on this particle would have an impact on many applications, such as modeling the collective motion of particles on a drop in an applied electric field \cite{Yi2021}. The hydrodynamic drag force exerted on a sphere can be written as $F_D = -6\pi \mu_1 a U f(\mu_2/\mu_1,b/a,\Delta \rho)$, where $f$ is a dimensionless drag coefficient, $a$ is the radius of the sphere, $b$ is the immersion depth into the upper fluid, $U$ is the translational velocity, and $\mu_1$ and $\mu_2$ are the viscosities of the two fluid phases (see Fig. \ref{fig:single-setup12}(a) for a sketch of the problem). For the simple case of a homogeneous fluid, $f = 1.$ A variety of experimental and theoretical studies have obtained the drag coefficient in terms of the immersion depth and the viscosity ratio. The analytical solution for the drag force acting on two fused spheres obtained by Zabarankin \cite{Zabarankin2007} provides the solution for a translating sphere at a flat gas/liquid interface with immersion depth $b$ for $0 < \theta < \pi/2$ \cite{Dani2015,dorr2016}. The calculation was extended by D\"orr et al. \cite{dorr2016} to cover the full range of contact angles. Dani et al. \cite{Dani2015} and D\"orr et al. \cite{dorr2016} assumed that the three-phase contact line (TCL) is pinned to the particle surface, which prevents particle rotation.
D\"orr and Hardt \cite{dorr2015} accounted for particle rotation by allowing the particle to rotate until the hydrodynamic torque and the torque caused by the interfacial tension balance, and calculated the resulting steady-state interfacial deformation. Numerical calculations of the drag coefficient $f$ were carried out in Refs. \cite{Danov1995,Danov2000,Das2018,Pozrikidis2007}. Danov et al. obtained the drag force acting on a sphere straddling a flat gas-liquid interface \cite{Danov1995, Danov2000}, and Das et al. considered a spherical interface at an arbitrary viscosity ratio \cite{Das2018}. Pozrikidis \cite{Pozrikidis2007} solved the problem of a spherical particle at an interface in the presence of a simple shear flow centered at the sphere. As the immersion depth into the liquid phase increases, the drag coefficient $f$ is found to increase monotonically. A more recent numerical study by Loudet et al. \cite{Loudet2020} calculated the two-dimensional drag on a circular cylinder straddling a deformable fluid interface at an arbitrary viscosity ratio using a phase-field model. Instead of pinning the TCL, the three-phase contact angle was prescribed. Hemauer et al. \cite{Hemauer2021} further extended the work of Loudet et al. \cite{Loudet2020} by including particle rotation. The pair interaction of particles at low Reynolds number in a homogeneous fluid has been well addressed in the literature. For particles attached to a fluid interface, capillary interactions arise from the interfacial deformation around them \cite{Danov2010,Kralchevsky2000}. D\"orr and Hardt \cite{dorr2015} examined the capillary interaction between two particles via the linear superposition of the single-particle interfacial deformations, assuming a large particle separation. The capillary interaction between two mutually approaching particles was studied by Dani et al.
\cite{Dani2015}, where the viscous drag due to the mutual approach is approximated by multiplying the single-particle drag by a mobility function, which accounts for the increased hydrodynamic resistance as the particles travel closer to each other. In this work, we wish to further understand the influence of interfacial deformations on the drag force acting on particles at an interface. The problem is in general complex because of the coupling of fluid flow, interfacial deformation and contact line dynamics; hence a fully numerical approach would be cumbersome. Here we develop an asymptotic solution approach which allows for a direct and straightforward examination of the impact of the physical parameters on the flow. It has been noted in the literature \cite{Maldarelli2022} that uniform flow past a sphere can represent flow past a sphere at an interface in the limit of small Capillary number. This solution is valid for an arbitrary viscosity ratio of the upper and lower fluids as long as the interface intersects the particle at its equator at $90^\circ$. This observation allows for an estimate of the Stokes drag in which the viscosity is the average of the two phases. Here we use this result as the leading order solution in a perturbation analysis for small Ca and small deviation of the contact angle from $90^\circ$. We obtain the analytical solution for the interfacial deformation around a single particle. In the two-particle case, a straightforward numerical solution approach is developed. We apply the Lorentz reciprocal theorem to the zeroth-order approximations for a spherical particle at a flat interface and the first corrections to obtain explicit analytical expressions for the hydrodynamic drag.
We compute the drag force as a function of the three-phase contact angle, the viscosity ratio between the two fluids, the separation distance between the particles, and the Bond number (a dimensionless number that measures the relative importance of gravity and surface tension forces). \section{Fluid motion past a single particle} \label{sect:single-particle} Consider the uniform creeping flow past a fixed spherical particle of radius $a$ located approximately midway at an interface between two viscous fluids. The lower fluid phase is denoted by $i =1$ and the upper fluid phase by $i = 2$ (see Fig. \ref{fig:single-setup12}(a)). The fluid viscosities and densities are denoted by $\mu_i$ and $\rho_i$, respectively. \begin{figure} \centering \includegraphics[width=16cm]{single-setup12_side.pdf} \caption{Illustrations of (a) a spherical particle on a flat interface between two immiscible viscous fluids and (b) the cross-section view of a spherical particle at a deformable interface with contact angle $\theta_s$ and immersion depth $b$.} \label{fig:single-setup12} \end{figure} We assume the fluid interface is deformable and the deformation remains small, which requires that the surface tension forces are large relative to the viscous forces, i.e., that the Capillary number, Ca $=\mu_1 U/\gamma$, is small. Here, $U$ is the absolute value of the uniform background flow velocity, and $\gamma$ is the interfacial tension. At the TCL, we enforce a constant contact angle $\theta_s$ on the particle surface, where the assumption of small interfacial deformation requires $\theta_s$ to be close to $90^\circ$. Note that if the top and bottom fluids were the same and $\theta_s = 90^\circ$, this would represent uniform flow past a sphere in a homogeneous fluid with its TCL located at the equator of the sphere. In addition, we allow for the particle to be displaced vertically and set $\mathbf{x}_P = b \hat{\mathbf{e}}_z$ as the position of the particle's center of mass.
Particle rotation is ignored. The motion of the fluids is governed by the Stokes equations: \begin{align} -\nabla p^{i} + \lambda_i \nabla^2 \mathbf{u}^{i} = & \mathbf{0} \quad \mbox{ in fluid }i, \label{eqn:single-dimensionless-mom1}\\ \nabla\cdot \mathbf{u}^{i} = & 0 \quad \mbox{ in fluid }i, \label{eqn:single-dimensionless-cont} \end{align} where $p^i$, $\mathbf{u}^i$, and $\lambda_i$ are the respective pressure, velocity, and viscosity in fluid phase $i$ ($i = 1, 2$), with $\lambda_1 = 1$ and $\lambda_2 = \lambda = \mu_2/\mu_1$. All variables are made dimensionless using the characteristic unit of length $a$, the characteristic unit of velocity $U$, the characteristic unit of pressure $\mu_1 U/a$, and the viscosity in fluid 1, $\mu_1.$ Let $\Sigma_{P_{i}}$ denote the particle surface immersed in fluid $i$, and $\Sigma_I = \{ (x,y,z) \vert F_s(x,y,z) = z - h(x,y) = 0 \}$ denote the fluid interface. Along the fluid interface $\Sigma_I$, the normal velocity vanishes, the tangential velocity and shear stress are continuous, and the normal stress is discontinuous, i.e., \begin{align} & \mathbf{u}^i \cdot \hat{\mathbf{n}}=0, \label{eqn:single-zero-normal-vel}\\ & \hat{\mathbf{t}}\cdot (\mathbf{u}^2 - \mathbf{u}^1) = 0, \label{eqn:single-cont-tang-vel}\\ & \hat{\mathbf{t}}\cdot (\bm{\sigma}^2 - \bm{\sigma}^1) \cdot \hat{\mathbf{n} } = 0,\label{eqn:single-tang-stress-cond}\\ & (\bm{\sigma}^2 - \bm{\sigma}^1) \cdot \hat{\mathbf{n} } = \frac{1}{\mbox{Ca}}(\nabla\cdot\hat{\mathbf{n}})\hat{\mathbf{n}} + \frac{\mbox{Bo}}{\mbox{Ca}} h \hat{\mathbf{n}}, \label{eqn:single-stressJumpCond1} \end{align} where $\bm{\sigma}^i = -p^i \mathbf{I} + \lambda_i \left(\nabla \mathbf{u}^i + (\nabla \mathbf{u}^i)^T\right)$ is the stress tensor, and $\hat{\mathbf{n}}$ and $\hat{\mathbf{t}}$ are the unit normal and tangential vectors, respectively, to the fluid interface $\Sigma_I$.
The Capillary number, Ca, and the Bond number, Bo, are dimensionless parameters defined as $$ \mbox{Ca}=\frac{\mu_1 U}{\gamma} \quad \mbox{ and } \mbox{Bo} = \frac{(\rho_1 - \rho_2) g a^2 }{\gamma},$$ where $g$ is the acceleration of gravity. At the particle surface $\Sigma_{P_i}$, we impose the no-slip and no-penetration conditions: \begin{align} & \Tilde{\mathbf{n}}\cdot \mathbf{u}^i = 0, \label{eqn:single-noslip-normal}\\ & \Tilde{\mathbf{t}}\cdot \mathbf{u}^i = 0, \label{eqn:single-noslip-tang} \end{align} where $\Tilde{\mathbf{n}}$ and $\Tilde{\mathbf{t}}$ denote the unit normal and tangential vectors, respectively, to the particle surface $\Sigma_{P_i}$. Far from the particle, the velocity field approaches the uniform background flow: \begin{align} \mathbf{u}^i \rightarrow \mathbf{u}^\infty = \hat{\mathbf{e}}_y. \label{eqn:single-far-field-cond} \end{align} At the TCL, we define the contact angle $\theta_s$ to be the angle between the tangent to the particle surface $\Sigma_{P_i}$ and the tangent to the interface $\Sigma_I$, both in the plane containing the normal to the TCL, which is illustrated in Fig. \ref{fig:single-setup12}(b). The constant contact angle condition is given by \begin{align} \Tilde{\mathbf{n}}\cdot \hat{\mathbf{n}} = \cos{\theta_s}.\label{eqn:single-contactAngleCond1} \end{align} \ \subsection{Asymptotic expansions} Assume $$\mathrm{Ca} \ll 1, \quad \theta_s = \pi/2+\delta \tilde{\theta}_s, \quad \mbox{ with } \delta \ll 1,\mbox{ } \tilde{\theta}_s = \mathcal{O}(1), \quad \mbox{ and } \mbox{Bo} = \mathcal{O}(1), $$ where $\delta$ is the small parameter that describes the scale of the contact angle's deviation from $90^\circ$. We consider the following two-parameter asymptotic expansion for any quantity $f$: \begin{align} f= f^{(0,0)}+ & \mbox{Ca} f^{(1,0)} + \delta f^{(0,1)} + \cdots, \label{eqn:single-asmExpansions-u} \end{align} and for convenience, we omit the superscript $i$ when referring to quantities in both fluids.
Although we will only consider the leading order behaviors, we introduce an expansion in Ca and $\delta$ so that the origin of the resulting forces is clear. Here $\mathbf{u}^{(0,0)}$ is given by \begin{align} \mathbf{u}^{(0,0)} =\frac{1}{4\rho^5} \left( - 3xy(\rho^2-1)\hat{\mathbf{e}}_x + (-3y^2(\rho^2-1) + (\rho-1)( 4\rho^2+\rho+1 )\rho^2 ) \hat{\mathbf{e}}_y - 3yz(\rho^2-1) \hat{\mathbf{e}}_z \right), \label{eqn:single-leading-sol-u} \end{align} and, to within an arbitrary additive constant, \begin{align} p^{(0,0)} = \left\{\begin{array}{ll} - 3\lambda y /(2\rho^3) & z> 0 \\ - 3 y /(2\rho^3) & z < 0 \end{array} \right., \label{eqn:single-leading-sol-p} \end{align} where $\rho = \sqrt{x^2 + y^2 + z^2}$ \cite{leal2010}. Note that the velocity \eqref{eqn:single-leading-sol-u} satisfies the no-slip conditions \eqref{eqn:single-noslip-normal} and \eqref{eqn:single-noslip-tang}, the far field condition \eqref{eqn:single-far-field-cond}, and the velocity conditions \eqref{eqn:single-zero-normal-vel} and \eqref{eqn:single-cont-tang-vel}. The tangential stress along $z = 0$ is zero, so Eq. \eqref{eqn:single-tang-stress-cond} is satisfied. The normal stress condition is not identically satisfied except in the Ca $\rightarrow 0$ limit of a flat interface. The effect of the particle density is to raise or lower the center of mass of the particle. This can be accounted for by perturbing the particle position from the origin, i.e., \begin{align} \mathbf{x}_P = \delta \Tilde{b} \hat{\mathbf{e}}_z.\label{eqn:single-asmExpansions-b} \end{align} The actual impact of a specific particle density can then be predicted afterwards by a balance of vertical forces. For simplicity, we scale the immersion depth $b$ as $\delta \tilde{b}$ and assume the effect of the background flow on the particle position is accounted for by the value $\tilde b$.
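The leading-order field \eqref{eqn:single-leading-sol-u} can be spot-checked numerically for the no-slip and far-field limits; the Python sketch below (illustrative only) transcribes the velocity components directly:

```python
import math

# Leading-order velocity u^(0,0) for uniform flow (in e_y) past a unit sphere:
# it must vanish on the sphere (rho = 1) and approach e_y far away.
def u00(x, y, z):
    rho = math.sqrt(x * x + y * y + z * z)
    pre = 1.0 / (4.0 * rho ** 5)
    ux = pre * (-3.0 * x * y * (rho ** 2 - 1.0))
    uy = pre * (-3.0 * y * y * (rho ** 2 - 1.0)
                + (rho - 1.0) * (4.0 * rho ** 2 + rho + 1.0) * rho ** 2)
    uz = pre * (-3.0 * y * z * (rho ** 2 - 1.0))
    return ux, uy, uz
```

On the flow axis the streamwise component reduces to $1 - 3/(2\rho) + 1/(2\rho^3)$, the classical Stokes profile.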
The interface shape $h$ is perturbed from the flat interface, i.e., \begin{align} h= & \mbox{Ca}h^{(1,0)} + \delta h^{(0,1)} + \cdots,\label{eqn:single-asmExpansions-h} \end{align} where $\mbox{Ca}h^{(1,0)}$ describes the interfacial deformation induced by the background flow $\mathbf{u}^\infty$, and $\delta h^{(0,1)}$ is the static deformation induced by the contact angle, which describes the equilibrium interface shape in the absence of flow. Note that for the two-parameter asymptotic solutions to be valid, we require $\delta >\mbox{Ca}^2$ and $\delta^2 < \mbox{Ca}$, which implies $\delta \sim \mbox{Ca}^\alpha$ with $\frac{1}{2} < \alpha <2.$ Also note that this separation of scales allows us to independently consider the impacts of immersion depth and capillarity. Substituting Eqs. \eqref{eqn:single-asmExpansions-u} - \eqref{eqn:single-asmExpansions-h} into Eqs. \eqref{eqn:single-dimensionless-mom1} - \eqref{eqn:single-dimensionless-cont} along with the boundary conditions on the particle surface and the fluid interface, \begin{align} & \Sigma_{P_i} = \{ (x,y,z)\vert x^2 + y^2 + (z-\delta \Tilde{b})^2 = 1 \}, \\ & \Sigma_{I} = \{ (x,y,z) \vert z = \mbox{Ca}h^{(1,0)} + \delta h^{(0,1)} \}, \end{align} we can now collect terms with like powers of Ca and $\delta$. This result is discussed below. Additional details can be found in \cite{zhou2022}. \subsection{Interfacial deformations} To parametrize the interface shape $\Sigma_I$ up to orders Ca and $\delta,$ we introduce cylindrical coordinates $(r,\phi,z)$. The relations between Cartesian and cylindrical coordinates are \begin{align*} x = r \cos\phi, \quad y = r \sin\phi.
\end{align*} The $\mathcal{O}(\mbox{Ca})$ interface shape $h^{(1,0)}$ accounts for the deformation caused by the background flow and satisfies the stress balance equation \begin{align} \nabla^2 h^{(1,0)} -\mbox{Bo} h^{(1,0)} = - \hat{\mathbf{e}}_z\cdot [\bm{\sigma}^{(0,0)} ]\cdot \hat{\mathbf{e}}_z, \label{eqn:single-deformationEqn_Ca} \end{align} and the boundary conditions \begin{align} & 0 = - r \frac{\partial h^{(1,0)}}{\partial r} + h^{(1,0)}\quad \mbox{ at } r = 1, \label{eqn:single-deformationBC_Ca_r=1}\\ & h^{(1,0)}\rightarrow 0 \quad \mbox{ as } r \rightarrow \infty. \label{eqn:single-deformationBC_Ca_inf} \end{align} The $\mathcal{O}(\delta)$ interface shape $h^{(0,1)}$ accounts for the deformation in the absence of the flow, and it satisfies the stress balance equation \begin{align} \nabla^2 h^{(0,1)} -\mbox{Bo} h^{(0,1)} = 0,\label{eqn:single-deformationEqn_delta} \end{align} and the boundary conditions \begin{align} & -\tilde{\theta}_s +\tilde{b} = - r \frac{\partial h^{(0,1)}}{\partial r} + h^{(0,1)}\quad \mbox{ at } r = 1, \label{eqn:single-deformationBC_del_r=1} \\ & h^{(0,1)}\rightarrow 0 \quad \mbox{ as } r \rightarrow \infty. \label{eqn:single-deformationBC_del_inf} \end{align} The RHS of Eq. \eqref{eqn:single-deformationEqn_Ca} can be computed from the leading order solutions \eqref{eqn:single-leading-sol-u} and \eqref{eqn:single-leading-sol-p}, and is given by \begin{align} & -\hat{\mathbf{e}}_z\cdot [\bm{\sigma}^{(0,0)}]\cdot \hat{\mathbf{e}}_z = -(\sigma_{zz}^{2(0,0)} -\sigma_{zz}^{1(0,0)} ) = (1 - \lambda ) \frac{3 y}{2(x^2 + y^2)^{5/2}} =(1-\lambda) \frac{3 }{2 r^4} \sin\phi. \end{align} To solve for $h^{(1,0)}$, we assume the solution form $h^{(1,0)} = R_P(r)\sin\phi$; then Eq. \eqref{eqn:single-deformationEqn_Ca} reduces to an inhomogeneous modified Bessel equation for $R_P(r)$, i.e., \begin{align} r^2 R_P'' + r R_P' - (1+\mbox{Bo}r^2 ) R_P = (1-\lambda)\frac{3}{2 r^2}.
\end{align} Using the method of variation of parameters together with the boundary condition \eqref{eqn:single-deformationBC_Ca_inf}, the solution $R_P$ is obtained. Finally, the boundary condition \eqref{eqn:single-deformationBC_Ca_r=1} is applied and we find that \begin{align} \begin{split} h^{(1,0)}(r,\phi) & = (1-\lambda)\left[C_1 K_1(\sqrt{\mbox{Bo}}r) - \frac{3}{2} K_1(\sqrt{\mbox{Bo}} r) \int_1^r \frac{1}{r'^3} I_1(\sqrt{\mbox{Bo} }r')\mbox{ d}r' \right. \\ & \left.- \frac{3}{2} I_1(\sqrt{\mbox{Bo} }r ) \int_r^{\infty} \frac{1}{r'^3} K_1(\sqrt{\mbox{Bo} }r')\mbox{ d}r' \right]\sin\phi, \end{split} \label{eqn:deformationSol} \end{align} where \begin{align*} C_1 = - \frac{3 M I_2 (\sqrt{\mbox{Bo}})}{2 K_2(\sqrt{\mbox{Bo}})},\quad M =\int_1^\infty \frac{1}{r^3} K_1(\sqrt{\mbox{Bo}}r)\mbox{ d}r. \end{align*} The solution of \eqref{eqn:single-deformationEqn_delta} with boundary conditions \eqref{eqn:single-deformationBC_del_r=1} and \eqref{eqn:single-deformationBC_del_inf} is \begin{align} h^{(0,1)} = \frac{(-\tilde{\theta}_s+\tilde{b})K_0 (\sqrt{\mbox{Bo} }r)}{\sqrt{\mbox{Bo}} K_1(\sqrt{\mbox{Bo}}) + K_0(\sqrt{\mbox{Bo}})}= (-\tilde{\theta}_s + \tilde{b})C_0 K_0(\sqrt{\mbox{Bo} } r ). \label{eqn:single-staticDeformSol} \end{align} Here, $K_n$ and $I_n$ are the modified Bessel functions of the second and first kind, respectively, of order $n$. The leading order interfacial deformation is the sum $h = \mbox{Ca} h^{(1,0)} + \delta h^{(0,1)}.$ Fig. \ref{fig:single-deformations2} shows the $y$-$z$ cross-sections of the static deformation $\delta h^{(0,1)}$ and flow-induced deformation $\mbox{Ca} h^{(1,0)}$ with $\mbox{Ca} = \delta = \tilde{\theta}_s =1,$ $\mbox{Bo} = 1, \lambda=2$, and $\tilde{b}=0$.
The static deformation $\delta h^{(0,1)}$, induced by the contact angle, describes the equilibrium interface shape in the absence of flow and is axisymmetric; the flow-induced deformation $\mbox{Ca} h^{(1,0)}$ represents the deformation caused by the uniform background flow $\mathbf{u}^\infty = \hat{\mathbf e}_y$ and appears anti-symmetric in the $y$ direction. Note that $h^{(0,1)}$ is independent of $\phi$ and $h^{(1,0)}$ depends on $\phi$ as $\sin\phi.$ \begin{figure} \centering \includegraphics[scale=0.65]{single-deformations2_side_crop.pdf} \caption{Cross-sections of the static deformation $\delta h^{(0,1)}$ (a) and the flow-induced deformation $\mbox{Ca}h^{(1,0)}$ (b) with $\mbox{Ca} = \delta =\tilde{\theta}_s=1,$ $\mbox{Bo}=1,$ $\lambda=2$, and $\tilde{b}=0.$} \label{fig:single-deformations2} \end{figure} In Fig. \ref{fig:single-deformations1}, we plot the $y$-$z$ cross-sections of $h = \mbox{Ca} h^{(1,0)} + \delta h^{(0,1)}$ with $\mbox{Ca} = \delta = 1$ and varying values of Bo, from which we see that the amplitude of the deformation decreases as Bo increases, meaning that increasing the density mismatch between the two fluid phases flattens the interface shape. Note that in the limit of small Bond number (Bo $\rightarrow 0$), the asymptotic assumption Bo$=\mathcal{O}(1)$ is violated and the deformation solutions become invalid. Also, the values of the parameters chosen in Fig. \ref{fig:single-deformations1} are outside the limits of applicability of our expansions and are chosen to illustrate how the interface is affected by them. Because of this, and since we ignore higher-order terms in the expansions, we observe a mismatch between the fluid interface and the particle surface.
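For readers who wish to evaluate the static solution \eqref{eqn:single-staticDeformSol} without a special-functions library, the Python sketch below implements $K_0$ and $K_1$ from their integral representation $K_\nu(x) = \int_0^\infty e^{-x\cosh t}\cosh(\nu t)\,\mbox{d}t$ (in practice one would simply call, e.g., scipy.special.kn):

```python
import math

def bessel_k(nu, x, t_max=30.0, n=30000):
    """Modified Bessel function K_nu(x) via the integral representation
    K_nu(x) = int_0^inf exp(-x cosh t) cosh(nu t) dt (midpoint rule)."""
    h = t_max / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        expo = -x * math.cosh(t)
        if expo < -700.0:     # integrand has decayed; remaining tail negligible
            break
        total += math.exp(expo) * math.cosh(nu * t)
    return total * h

def h_static(r, Bo, theta_s=1.0, b=0.0):
    """Static deformation h^(0,1): (-theta_s + b) C0 K0(sqrt(Bo) r), with
    C0 = 1 / (sqrt(Bo) K1(sqrt(Bo)) + K0(sqrt(Bo))); defaults match the figure."""
    sb = math.sqrt(Bo)
    C0 = 1.0 / (sb * bessel_k(1, sb) + bessel_k(0, sb))
    return (-theta_s + b) * C0 * bessel_k(0, sb * r)
```

The deformation is negative at the contact line for $\tilde{\theta}_s > 0$, $\tilde{b}=0$, and its magnitude decays monotonically with $r$, in line with the axisymmetric profile shown in the figure.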
\begin{figure} \centering \includegraphics[scale=0.65]{single-deformations1.pdf} \caption{The $y$-$z$ cross-section views of the interfacial deformation $h = \mbox{Ca} h^{(1,0)} + \delta h^{(0,1)} $ with $\mbox{Ca} = \delta=\tilde{\theta}_s=1$ and varying values of $\mbox{Bo}.$} \label{fig:single-deformations1} \end{figure} \subsection{Calculation of the drag force} The drag force exerted on a spherical particle straddling a fluid interface is \begin{align} F_D = \mathbf{F}_D\cdot (-\mathbf{u}^\infty) = & \sum_{i=1,2} \iint_{\Sigma_{P_i}} \bm{\sigma}\cdot(- \tilde{\mathbf{n}})\cdot (-\mathbf{u}^\infty) \mbox{ d}\Sigma. \label{eqn:single-dragFormula} \end{align} Inserting the expansions of $\bm{\sigma}$ and $h$ into Eq. \eqref{eqn:single-dragFormula}, we obtain to $\mathcal{O}(\mbox{Ca})$ and $\mathcal{O}(\delta)$ { \begin{align} \begin{split} F_D = & \int_{0}^{2 \pi } \left( \int_{0}^{1+\delta b^{(1)}} (\bm{\sigma}^{2(0,0)} + \mbox{Ca} \bm{\sigma}^{2(1,0)} + \delta \bm{\sigma}^{2(0,1)} )\cdot (\tilde{\mathbf{n} }^{(0,0)} + \delta \tilde{\mathbf{n} }^{(0,1)} )\cdot \mathbf{u}^\infty \mbox{ d}z \mbox{ d}\phi \right. \\ & \left. + \int_{-1+{\delta b^{(1)}}}^0 (\bm{\sigma}^{1(0,0)} + \mbox{Ca} \bm{\sigma}^{1(1,0)} + \delta \bm{\sigma}^{1(0,1)} )\cdot (\tilde{\mathbf{n} }^{(0,0)} + \delta \tilde{\mathbf{n} }^{(0,1)} )\cdot \mathbf{u}^\infty \mbox{ d}z \mbox{ d}\phi \right) \\ & - \int_{0}^{2 \pi } \left( \mbox{Ca} h^{(1,0)} + \delta h^{(0,1)}\right) [\bm{\sigma}^{(0,0)}]\cdot (\tilde{\mathbf{n} }^{(0,0)} + \delta \tilde{\mathbf{n} }^{(0,1)} )\cdot \mathbf{u}^\infty \mbox{ d}\phi. \end{split} \label{eqn:single-dragFormula-undulation} \end{align} } To evaluate the surface integrals in Eq.
\eqref{eqn:single-dragFormula-undulation}, we introduce spherical coordinates $(\rho,\vartheta,\varphi)$, defined by \begin{align} x = \rho \sin\varphi \sin\vartheta,\quad y = \rho \cos\vartheta, \quad z = \rho \cos\varphi \sin\vartheta, \end{align} where $0 \leq \vartheta \leq \pi$ and $0 \leq \varphi< 2\pi.$ The particle surface can be described as \begin{align} \Sigma_P = \{(\rho,\varphi,\vartheta) \vert \rho = 1 + \delta \phi^{(1)}(\varphi,\vartheta)\}, \end{align} where $\phi^{(1)} = \tilde{b}\sin\vartheta\cos\varphi$, and the unit normal vector $\tilde{\mathbf{n}}$ to the particle surface is $ \Tilde{\mathbf{n}} = \Tilde{\mathbf{n}}^{(0,0)} + \delta \Tilde{\mathbf{n}}^{(0,1)} $ with \begin{align} \Tilde{\mathbf{n}}^{(0,0)} = \hat{\mathbf{e}}_\rho,\quad \Tilde{\mathbf{n}}^{(0,1)} = -(\tilde{b} \cos\vartheta\cos\varphi /\rho) \hat{\mathbf{e}}_\vartheta + (\tilde{b} \sin\varphi /\rho) \hat{\mathbf{e}}_\varphi . \end{align} Substituting in the leading order solutions for flow past a sphere given by Eqs. \eqref{eqn:single-leading-sol-u} and \eqref{eqn:single-leading-sol-p}, we find that the $\mathcal{O}(1)$ drag force is the classical result \cite{Maldarelli2022}: \begin{align} F_D^{(0,0)} = 3 \pi (\lambda + 1). \end{align} The formulas for the $\mathcal{O}(\mbox{Ca} )$ and $\mathcal{O}(\delta)$ drag forces are given by \begin{align} F_D^{(j,k)} = \underbrace{ \sum_{i=1,2} \iint_{\Sigma_{P_i}^{(0)}} \bm{\sigma}^{(j,k)} \cdot \tilde{\mathbf{n}}^{(0,0)}\cdot \mathbf{u}^\infty \mbox{ d}\Sigma}_{ \circled{1}} \underbrace{ - \int_0^{2\pi} h^{(j,k)} [ \bm{\sigma}^{(0,0)} ] \cdot \tilde{\mathbf{n}}^{(0,0)}\cdot \mathbf{u}^\infty \mbox{ d}\phi}_{ \circled{2}}, \label{eqn:single-correctionDragFormula} \end{align} with $(j,k) = (1,0)$ and $(0,1)$, where $F_D^{(1,0)}$ is the flow-induced correction drag and $F_D^{(0,1)}$ is the contact angle induced correction drag. The integral \textcircled{\small 2} in Eq.
\eqref{eqn:single-correctionDragFormula} can be evaluated directly with \textcircled{\small 2} $= 0$ for $(j,k) = (1,0)$ and \textcircled{\small 2} $= -3 \pi (\lambda-1) (-\tilde{\theta}_s + \tilde{b})C_0 K_0(\sqrt{\mbox{Bo}})$ for $(j,k) = (0,1)$, where $C_0$ is defined in Eq. \eqref{eqn:single-staticDeformSol}. The integral \textcircled{\small 1} containing the correction stress still needs to be evaluated. \subsubsection{Lorentz reciprocal theorem} We use the Lorentz reciprocal theorem to evaluate the term \textcircled{\small 1} in the $\mathcal{O}(\mbox{Ca})$ and $\mathcal{O}(\delta)$ drag formula \eqref{eqn:single-correctionDragFormula}. \begin{figure} \centering \includegraphics[scale=0.45]{single-reciprocal2.pdf} \caption{Geometry for the Lorentz reciprocal theorem.} \label{fig:single-reciprocal} \end{figure} Let $D_i$ denote the region bounded by the particle surface $\Sigma_{P_i}^{(0)}$, the flat interface $\Sigma_I^{(0)}$, and the hemi-spherical surface $\Sigma_{\infty_i }$ at infinity (see Fig. \ref{fig:single-reciprocal}). Let us define $\Sigma_{\infty} = \Sigma_{\infty_1} \cup \Sigma_{\infty_2}$. We apply the Lorentz reciprocal theorem to the following two flow problems: \\ \\ \noindent \textbf{Problem 1:} The first problem is constructed by setting Ca and $\delta$ to zero, which describes a sphere bisected by a flat interface in a uniform flow. The flow field and the stress tensor are denoted by $\mathbf{u}^{(0,0)}$ and $\bm{\sigma}^{(0,0)}$, respectively. We let $\mathbf{u}_D^{(0,0)} = \mathbf{u}^{(0,0)} - \mathbf{u}^\infty$ denote the leading order disturbance field. Then, at the boundaries, \begin{align} &\mathbf{u}_D^{(0,0)} = -\mathbf{u}^\infty \quad \mbox{ for } \mathbf{x} \in \Sigma_{P_i}^{(0)}, \\ &\mathbf{u}_D^{(0,0)} = \mathbf{0} \quad \mbox{ for }\mathbf{x} \in \Sigma_\infty.
\end{align} \\ \\ \noindent \textbf{Problem 2:} The second problem is described by the truncated asymptotic expansions in $\mbox{Ca}$ and $\delta$: \begin{align} \mathbf{u}_D = & \mathbf{u}_D^{(0,0)}+ \mbox{Ca} \mathbf{u}^{(1,0)}_D + \delta \mathbf{u}^{(0,1)}_D,\\ \bm{\sigma} = & \bm{\sigma}^{(0,0)} + \mbox{Ca} \bm{\sigma}^{(1,0)} + \delta \bm{\sigma}^{(0,1)}, \end{align} where $\mathbf{u}_D$ denotes the disturbance field. For $\mathbf{x} \in \Sigma_\infty $, \begin{align} \mathbf{u}_D = \mathbf{0}, \end{align} for $\mathbf{x} \in \Sigma^{(0)}_{P_i},$ \begin{align} & \mathbf{u}^{(0,0)}_D = -\mathbf{u}^\infty,\quad \mathbf{u}^{(1,0)}_D = \mathbf{0}, \quad \mathbf{u}^{(0,1)}_D = -\tilde{b}\frac{\partial \mathbf{u}^{(0,0)}_D}{\partial z },\label{eqn:single-reciprocal-bc1} \end{align} and for $\mathbf{x} \in \Sigma^{(0)}_I$, \begin{align} & \mathbf{u}_D^{(0,0)} \cdot \hat{\mathbf{e}}_z = 0, \label{eqn:single-reciprocal-ic1}\\ & \mathbf{u}_D^{(1,0)} \cdot \hat{\mathbf{e}}_z = -\mathbf{u}_D^{(0,0)}\cdot \hat{\mathbf{n}}^{(1,0)} - \frac{\partial \mathbf{u}_D^{(0,0)}}{\partial z} h^{(1,0)}\cdot \hat{\mathbf{e}}_z ,\label{eqn:single-reciprocal-ic2}\\ & \hat{\mathbf{t}}^{(0,0)} \cdot [\bm{\sigma}^{(1,0)}] \cdot \hat{\mathbf{e}}_z = - \hat{\mathbf{t}}^{(0,0)} \cdot \frac{\partial [\bm{\sigma}^{(0,0)}]}{\partial z }h^{(1,0)} \cdot \hat{\mathbf{e}}_z - \hat{\mathbf{t}}^{(0,0)} \cdot [\bm{\sigma}^{(0,0)}] \cdot \hat{\mathbf{n}}^{(1,0)} - \hat{\mathbf{t}}^{(1,0)} \cdot [\bm{\sigma}^{(0,0)}] \cdot \hat{\mathbf{e}}_z,\label{eqn:single-reciprocal-ic3} \end{align} and similarly for $\mathbf{u}_D^{(0,1)}\cdot \hat{\mathbf{e}}_z$ and $ \hat{\mathbf{t}}^{(0,0)} \cdot [\bm{\sigma}^{(0,1)}] \cdot \hat{\mathbf{e}}_z$.
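As a consistency check on the geometry used in the drag calculation, the expansion of the unit normal on the perturbed particle surface $\rho = 1 + \delta \tilde{b}\sin\vartheta\cos\varphi$ quoted earlier can be verified symbolically. A short sympy sketch (the symbol names are illustrative):

```python
import sympy as sp

delta, b, rho, th, ph = sp.symbols('delta btilde rho vartheta varphi', positive=True)

# Level-set description of the perturbed sphere: F = 0 on the surface
F = rho - 1 - delta * b * sp.sin(th) * sp.cos(ph)

# Gradient of F in spherical coordinates (rho, vartheta, varphi)
grad = sp.Matrix([
    sp.diff(F, rho),                       # e_rho component
    sp.diff(F, th) / rho,                  # e_vartheta component
    sp.diff(F, ph) / (rho * sp.sin(th)),   # e_varphi component
])
n = grad / sp.sqrt(grad.dot(grad))         # unit outward normal

# Expand each component to O(delta) and compare with the quoted n^(0,1)
n1 = [sp.series(comp, delta, 0, 2).removeO() for comp in n]
corr_th = sp.simplify(n1[1].coeff(delta) + b * sp.cos(th) * sp.cos(ph) / rho)
corr_ph = sp.simplify(n1[2].coeff(delta) - b * sp.sin(ph) / rho)
print(corr_th, corr_ph)  # both zero: the quoted n^(0,1) is recovered
```

The normalization contributes only at $\mathcal{O}(\delta^2)$, which is why the $\mathcal{O}(\delta)$ normal follows directly from the gradient components.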
Since the solutions $(\mathbf{u}_D^{(0,0)}, \bm{\sigma}^{(0,0)})$ and $(\mathbf{u}_D, \bm{\sigma})$ are defined in the same geometry, they are related by the reciprocal theorem \begin{align} \iint_{\Sigma_i}(\bm{\sigma}^{(0,0)} \cdot \mathbf{n}) \cdot \mathbf{u}_D \mbox{ d}\Sigma = \iint_{\Sigma_i} (\bm{\sigma} \cdot \mathbf{n}) \cdot \mathbf{u}^{(0,0)}_{D} \mbox{ d}\Sigma,\quad i = 1,2, \label{eqn:single-reciprocalEqn1} \end{align} where $\Sigma_i =\partial D_i = \Sigma^{(0)}_{P_i} \cup \Sigma^{(0)}_{I_i} \cup \Sigma_{\infty_i}$, and $\mathbf{n}$ denotes the outward normal of $\Sigma_i$. The contribution from the far-field integral vanishes, since \begin{align} \vert \vert \mathbf{u}_D^{(0,0)} \vert \vert \sim \vert \mathbf{x}\vert^{-1}, \quad \vert \vert \mathbf{u}_D\vert \vert \sim \vert \mathbf{x}\vert^{-1}, \quad \vert \vert \bm{\sigma}^{(0,0)}\cdot \mathbf{n} \vert \vert \sim \vert \mathbf{x}\vert^{-2}, \quad \mbox{ and } \vert \vert \bm{\sigma}\cdot \mathbf{n} \vert \vert \sim \vert \mathbf{x}\vert^{-2}. \end{align} Collecting coefficients of $\mbox{Ca}$ and $\delta$ in Eq. \eqref{eqn:single-reciprocalEqn1}, we can express the terms \textcircled{\small 1} in Eq.
\eqref{eqn:single-correctionDragFormula} in terms of integrals over the flat interface $\Sigma_{I}^{(0)}$, i.e., \begin{align} \begin{split} \mbox{\textcircled{\small 1}} & = \iint_{\Sigma^{(0)}_I} [\bm{\sigma}^{(0,0)}] \cdot (-\hat{\mathbf{e}}_z) \cdot \mathbf{u}_D^{(1,0)} \mbox{ d}\Sigma -\iint_{\Sigma^{(0)}_I} [\bm{\sigma}^{(1,0)}] \cdot (-\hat{\mathbf{e}}_z) \cdot \mathbf{u}^{(0,0)}_D \mbox{ d}\Sigma, \end{split} \label{eqn:single-reciprocal_drag_10} \end{align} for the $\mathcal{O}(\mbox{Ca})$ drag $F_D^{(1,0)}$, and \begin{align} \begin{split} \mbox{ \textcircled{\small 1}} = & \iint_{\Sigma^{(0)}_I} [\bm{\sigma}^{(0,0)}] \cdot (-\hat{\mathbf{e}}_z) \cdot \mathbf{u}_D^{(0,1)} \mbox{ d}\Sigma -\iint_{\Sigma^{(0)}_I} [\bm{\sigma}^{(0,1)}] \cdot (-\hat{\mathbf{e}}_z) \cdot \mathbf{u}^{(0,0)}_D \mbox{ d}\Sigma \\ &-\sum_{i=1,2}\iint_{\Sigma^{(0)}_{P_i}} \bm{\sigma}^{(0,0)} \cdot (-\tilde{\mathbf{n}}^{(0,0)} ) \cdot \mathbf{u}^{(0,1)} \mbox{ d}\Sigma \end{split} \label{eqn:single-reciprocal_drag_01} \end{align} for the $\mathcal{O}(\delta)$ drag $F_D^{(0,1)}$, where the correction velocities and stress jumps are given in the correction boundary conditions (see Appendix \ref{app:single-leading-corr-problems}). The additional drag contribution when the particle's center is shifted from the origin is \begin{align} -\sum_{i=1,2}\iint_{\Sigma^{(0)}_{P_i}} \bm{\sigma}^{(0,0)} \cdot (-\tilde{\mathbf{n}}^{(0,0)} ) \cdot \mathbf{u}^{(0,1)} \mbox{ d}\Sigma = \frac{27}{16} \pi (\lambda-1)\tilde{b}. \end{align} This result recovers the correction drag in Eq. (3.17) of \cite{dorr2015}, where the particle translates along a flat gas-liquid interface ($\lambda = \mu_2/\mu_1 = 0$) with immersion depth $\delta \tilde{b}.$ We can now calculate the first corrections to the drag. Because of the anti-symmetry of the flow field given by Eqs. \eqref{eqn:app-single-leading-ur} - \eqref{eqn:app-single-leading-uz}, the flow-induced drag $F_D^{(1,0)}$ is zero.
The contact angle induced correction drag $F_D^{(0,1)}$ is given by \begin{align} \begin{split} F_D^{(0,1)} = & \pi \int_1^\infty \left( -B_z(r) \frac{3(\lambda-1)}{2r^4} + B_r(r) (\tilde{u}_r^{(0,0)} (r)-1) + B_\phi(r) (\tilde{u}_\phi^{(0,0)}(r) -1) \right) r \mbox{ d}r \\ & + \frac{27}{16} \pi(\lambda-1)\tilde{b} - 3\pi (\lambda -1)(-\tilde{\theta}_s + \tilde{b})C_0 K_0(\sqrt{\mbox{Bo}}). \end{split} \label{eqn:single-correctionDragFormula_01-2} \end{align} Here, \begin{align} & \tilde{u}_r^{(0,0)} = \frac{1}{4}\left( \frac{-6 r^2 + 2 }{r^3} + 4 \right),\quad \tilde{u}_\phi^{(0,0)} = \frac{1}{4} \left( -\frac{3}{r} - \frac{1}{r^3} + 4 \right), \quad \frac{\partial \tilde{u}^{(0,0)}_z }{\partial z} = \frac{3(1-r^2)}{4 r^4},\\ & B_z= (\tilde{u}^{(0,0)}_r-1) \frac{\mbox{d} h^{(0,1)} }{\mbox{d}r} - \frac{\partial \tilde{u}^{(0,0)}_z }{\partial z} h^{(0,1)},\\ & B_r = \frac{3(\lambda-1)}{2r^5} \left( (4-3r^2)h^{(0,1)} + 3 r(r^2-1)\frac{\mbox{d} h^{(0,1)} }{\mbox{d}r} \right),\\ & B_\phi = \frac{3(\lambda-1)}{2 r^5} \left( r \frac{\mbox{d} h^{(0,1)} }{\mbox{d}r} - h^{(0,1)} \right). \end{align} The contact angle induced drag \eqref{eqn:single-correctionDragFormula_01-2} is numerically evaluated using the trapezoidal rule. We can now write the total drag to $\mathcal{O}(\mbox{Ca})$ and $\mathcal{O}(\delta)$ as \begin{align} F_D = & 3\pi (\lambda+1) + \delta (\lambda-1) \left( (\tilde{\theta}_s - \tilde{b})f^{(1)}(\mbox{Bo}) + \frac{27}{16} \pi \tilde{b}\right), \label{eqn:single-truncatedDrag} \end{align} where $f^{(1)}$, shown in Fig. \ref{fig:single-dragCoeff}, is the correction drag coefficient in terms of Bo. Recall that increasing Bo, which represents the density mismatch between the two fluid phases, flattens the interface shape near the particle. Consequently, an increase in Bo (e.g., by increasing the density mismatch) reduces the correction drag force caused by interfacial deformation, as shown in Fig. \ref{fig:single-dragCoeff}.
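A sketch of the trapezoidal quadrature for Eq. \eqref{eqn:single-correctionDragFormula_01-2} is shown below (Python). The velocity profiles are the ones quoted above; the static deformation profile $h^{(0,1)} = (-\tilde{\theta}_s + \tilde{b})\, C_0 K_0(\sqrt{\mbox{Bo}}\, r)$ and the normalization $C_0 = -1/(\sqrt{\mbox{Bo}}\, K_1(\sqrt{\mbox{Bo}}))$ are assumptions standing in for Eq. \eqref{eqn:single-staticDeformSol}, which is not reproduced here, and the infinite domain is truncated at a finite radius.

```python
import numpy as np
from scipy.special import k0, k1
from scipy.integrate import trapezoid

lam, Bo, b_t, theta_s = 2.0, 1.0, 0.0, 1.0   # example parameters
s = np.sqrt(Bo)
r = np.linspace(1.0, 60.0, 100_000)          # truncated radial domain

# Assumed static deformation h^(0,1)(r) = (-theta_s + b_t) C0 K0(s r);
# the normalization C0 = -1/(s K1(s)) is hypothetical (it enforces the
# slope h'(1) = -theta_s + b_t, not the paper's exact condition).
C0 = -1.0 / (s * k1(s))
h = (-theta_s + b_t) * C0 * k0(s * r)
dh = -(-theta_s + b_t) * C0 * s * k1(s * r)  # d/dr K0(s r) = -s K1(s r)

# Leading-order interfacial velocity profiles quoted above
ur = 0.25 * ((-6 * r**2 + 2) / r**3 + 4)
uphi = 0.25 * (-3 / r - 1 / r**3 + 4)
duz_dz = 3 * (1 - r**2) / (4 * r**4)

Bz = (ur - 1) * dh - duz_dz * h
Br = 1.5 * (lam - 1) / r**5 * ((4 - 3 * r**2) * h + 3 * r * (r**2 - 1) * dh)
Bphi = 1.5 * (lam - 1) / r**5 * (r * dh - h)

integrand = (-Bz * 1.5 * (lam - 1) / r**4 + Br * (ur - 1) + Bphi * (uphi - 1)) * r
F_01 = (np.pi * trapezoid(integrand, r)
        + 27 / 16 * np.pi * (lam - 1) * b_t
        - 3 * np.pi * (lam - 1) * (-theta_s + b_t) * C0 * k0(s))
print(F_01)
```

Since the deformation decays exponentially, truncating the domain at $r \approx 60$ introduces negligible quadrature error.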
\begin{figure} \centering \includegraphics[scale=0.65]{single-dragCoeff_1fig.pdf} \caption{The drag coefficient as a function of Bond number Bo. } \label{fig:single-dragCoeff} \end{figure} In Eq. \eqref{eqn:single-truncatedDrag}, we observe that the correction drag force at order $\delta$ scales linearly with the viscosity difference $(\lambda-1)$, and when the two fluid phases have the same viscosity ($\lambda=1$), the $\mathcal{O}(\delta)$ drag vanishes. This vanishing of the correction drag at $\lambda=1$ would not be expected if higher-order terms were included. In Figs. \ref{fig:single-loudet-fig7} and \ref{fig:single-loudet-fig6}, we compare the normalized drag \begin{align} F_D^* = \frac{F_D}{3\pi (\lambda+1)} \label{eqn:single-normalizedDrag} \end{align} with the 2D numerical results of Loudet et al. \cite{Loudet2020}. The drag forces in \cite{Loudet2020} are calculated with $\mbox{Ca} \sim 10^{-3}-10^{-4}$ and $40^\circ<\theta_s <140^\circ$. In Fig. \ref{fig:single-loudet-fig7}, we plot the normalized drag as a function of the viscosity ratio $\lambda$ in comparison with Loudet et al.'s results for contact angles $\theta_s = 75^\circ$ and $\theta_s = 110^\circ.$ We see that the asymptotic solutions are qualitatively consistent with the numerical results in that, as the viscosity ratio $\lambda$ tends to 1, the effect of deformations on the drag force decreases. Quantitative differences are nevertheless observed, for several possible reasons. First, the comparisons are made between a 3D flow in an unbounded Stokes fluid and a 2D Navier-Stokes flow confined between two parallel planes. Second, the values of the contact angles, $\theta_s = 75^\circ$ and $\theta_s = 110^\circ$, violate the assumption of small correction contact angles in the asymptotic expansion. In Fig. \ref{fig:single-loudet-fig6}, we set $\lambda = 0.75$ and compare the predictions.
The asymptotic solution predicts the drag dependence on $\theta_s$ to first order in $\delta$ (linear effects), whereas the solution of the full flow problem in \cite{Loudet2020} captures the contact angle's higher-order nonlinear effects on the drag force. \begin{figure} \centering \includegraphics[scale=0.65]{single-loudet-fig7-corrected.pdf} \caption{Comparison of the normalized drag $F_D^*$ with numerical results from Loudet et al. (2020), for contact angles $\theta_s = 75^\circ$ and $110^\circ, \mbox{Bo}\approx 0.2, \tilde{b} = 0. $ } \label{fig:single-loudet-fig7} \end{figure} \begin{figure} \centering \includegraphics[scale=0.65]{single-loudet-fig6-corrected.pdf} \caption{Comparison of the normalized drag $F_D^*$ with numerical results from Loudet et al. (2020) for viscosity ratio $\lambda = 0.75, \mbox{Bo} \approx 0.2,$ and $\tilde{b}=0$. } \label{fig:single-loudet-fig6} \end{figure} \section{Pair interactions of particles} In this section, we consider the steady motion of two spherical particles at a fluid interface under creeping flow conditions, where the background flow is arbitrarily oriented relative to the spheres' line-of-centers. The linearity of the Stokes equations and the boundary conditions allows us to decompose the problem into two sub-problems: uniform flow past two spheres at an interface, with the imposed flow direction either perpendicular or parallel to the spheres' line-of-centers. To move forward, we employ the solutions for the motion of two spheres in an unbounded fluid \cite{Stimson1926,Goldman1966}. Stimson and Jeffery \cite{Stimson1926} solved the problem of two spheres translating with a constant velocity parallel to their line-of-centers. Goldman et al. \cite{Goldman1966} calculated the terminal settling motion of two arbitrarily oriented spheres by combining Stimson and Jeffery's solutions \cite{Stimson1926} with the solutions to the side-by-side problem, in which the motion of the spheres is perpendicular to their line-of-centers.
Using the same approach as for the single-particle problem, we study the influence of interfacial deformations on the drag force acting on the particles, where the solutions obtained by Goldman et al. \cite{Goldman1966} and Stimson and Jeffery \cite{Stimson1926} are used to solve the leading order problems. \subsection{Flow perpendicular to the particles' line-of-centers } \subsubsection{Problem formulation} Consider two spherical particles of radius $a$ straddling a fluid interface between two viscous fluids with respective viscosities $\mu_1$ and $\mu_2$ in a uniform flow perpendicular to the line-of-centers of the two spheres (see Fig. \ref{fig:perp-setup1}). We assume the two particles have their centers of mass pinned at $-L/2\hat{\mathbf{e}}_x$ and $L/2 \hat{\mathbf{e}}_x$, respectively, where $L$ denotes the dimensionless separation distance between the two particles. The nondimensionalized background flow is denoted by $\mathbf{u}^\infty_\perp = \hat{\mathbf{e}}_y,$ and we adopt notation similar to that of the previous section. \begin{figure} \centering \includegraphics[width=11cm]{perp-setup1.pdf} \caption{Illustration of two spherical particles straddling a fluid interface between two immiscible viscous fluids, where the background uniform flow is perpendicular to the line-of-centers of the two particles.} \label{fig:perp-setup1} \end{figure} As in section \ref{sect:single-particle}, we perturb the contact angle $\theta_s$ from $90^\circ$, i.e., $\theta_s = \pi/2 + \delta\tilde{\theta}_s$. For any nondimensionalized quantity $f_\perp$, where the subscript $``\perp"$ indicates variables in the perpendicular flow problem, we consider the two-parameter asymptotic expansion in Ca and $\delta$: \begin{align} f_\perp = f_\perp^{(0,0)} + \mbox{Ca} f_\perp^{(1,0)} + \delta f_\perp^{(0,1)} + \cdots. \label{eqn:perp-asym-exp} \end{align} In this discussion, we do not perturb the height of the center of the sphere.
This could easily be included by paralleling the analysis of section \ref{sect:single-particle}. The leading order problem describes two spheres at a flat fluid interface in a perpendicular uniform flow. The equivalent problem of two spheres translating at a constant velocity perpendicular to their line-of-centers in a viscous fluid in the absence of an interface was solved by Goldman et al. \cite{Goldman1966}. The analytical solutions for the pressure and velocity field in \cite{Goldman1966} provide the leading order pressure $p^{(0,0)}_\perp$ and velocity field $\mathbf{u}^{(0,0)}_\perp$ (see Appendix \ref{app:perp-leading-sol}). As in section \ref{sect:single-particle}, we modify these solutions to account for the different viscosities of the two fluids. To better describe the two-sphere geometry, we introduce bicylindrical coordinates $(\sigma,\tau,z)$. The relations between the Cartesian coordinates and the bicylindrical coordinates are \begin{align} x = \frac{c\sinh\tau}{\cosh\tau -\cos\sigma}, \quad y = \frac{c\sin\sigma}{\cosh\tau - \cos\sigma},\quad -\pi < \sigma <\pi, \quad -\tau_1 < \tau < \tau_1, \label{eqn:perp-bicylindricalCoordinates} \end{align} where $c = \sqrt{(L/2)^2-1}$ is the separation coefficient, and $\tau = \pm \tau_1 = \pm \mbox{arccosh}(L/2)$ describe the TCLs at the particle surfaces at leading order.
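The geometric claim above is easy to check numerically: along $\tau = \pm\tau_1$ the bicylindrical map traces unit circles centered at $x = \pm L/2$. A minimal Python sketch:

```python
import numpy as np

def bicyl_to_cartesian(sigma, tau, L):
    """Bicylindrical map for the perpendicular-flow geometry."""
    c = np.sqrt((L / 2) ** 2 - 1)                 # separation coefficient
    denom = np.cosh(tau) - np.cos(sigma)
    return c * np.sinh(tau) / denom, c * np.sin(sigma) / denom

L = 6.0
tau1 = np.arccosh(L / 2)
sigma = np.linspace(-np.pi, np.pi, 201)

# tau = +tau1 should trace the unit circle centered at (L/2, 0)
x, y = bicyl_to_cartesian(sigma, tau1, L)
radii = np.sqrt((x - L / 2) ** 2 + y**2)
print(radii.min(), radii.max())  # both equal 1 up to round-off
```

This follows from the standard bipolar-coordinate identity: the $\tau$ contours are circles of radius $c/|\sinh\tau|$ centered at $(c\coth\tau, 0)$, which reduce to unit circles at $\tau = \pm\tau_1$.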
\subsubsection{Interfacial deformations} Paralleling the analysis of section \ref{sect:single-particle}, the $\mathcal{O}(\mbox{Ca})$ deformation $h_\perp^{(1,0)}$ and $\mathcal{O}(\delta)$ deformation $h^{(0,1)}_\perp$ satisfy the stress balance equations \begin{align} \nabla^2 h^{(1,0)}_\perp -\mbox{Bo} h^{(1,0)}_\perp = -\hat{\mathbf{e}}_z \cdot [\bm{\sigma}_\perp^{(0,0)}]\cdot \hat{\mathbf{e}}_z, \label{eqn:perp-stressBalanceEq_10} \end{align} and \begin{align} \nabla^2 h^{(0,1)}_\perp -\mbox{Bo} h^{(0,1)}_\perp = 0.\label{eqn:perp-stressBalanceEq_01} \end{align} The normal-normal stress difference $ \hat{\mathbf{e}}_z \cdot [\bm{\sigma}_\perp^{(0,0)}]\cdot \hat{\mathbf{e}}_z$ can be calculated from the leading order solutions $p^{(0,0)}_\perp$ and $\mathbf{u}_\perp^{(0,0)}$. Then, Eq. \eqref{eqn:perp-stressBalanceEq_10} in bicylindrical coordinates reads \begin{align} \frac{(\cosh\tau - \cos\sigma)^2 }{c^2 } \left( \frac{\partial^2 h_\perp^{(1,0)}}{\partial \sigma^2 } + \frac{\partial^2 h_\perp^{(1,0)}}{\partial \tau^2 }\right) -\mbox{Bo} h_\perp^{(1,0)} = -\frac{2(\lambda-1)X(\sigma,\tau)}{c \sin\sigma/(\cosh\tau - \cos\sigma)}, \label{eqn:perp-stressBalanceEq_10_bicyl} \end{align} with \begin{align} X = & (\cosh\tau - \cos\sigma)^{1/2} \sin^2\sigma \sum_{n=2}^\infty F_n \cosh(n+1/2)\tau P_n''(\cos\sigma), \end{align} where $P_n$ denotes the Legendre polynomial of order $n$, and the coefficients $F_n$ are given in Eqs. (3.55) and (3.56) in \cite{Goldman1966} and included in Appendix \ref{app:perp-leading-sol} (see \cite{zhou2022} for further details). Likewise, the stress balance equation \eqref{eqn:perp-stressBalanceEq_01} is given by \begin{align} \frac{(\cosh\tau - \cos\sigma)^2 }{c^2 } \left( \frac{\partial^2 h_\perp^{(0,1)}}{\partial \sigma^2 } + \frac{\partial^2 h_\perp^{(0,1)}}{\partial \tau^2 }\right) -\mbox{Bo} h_\perp^{(0,1)} = 0. 
\end{align} The unperturbed fixed contact angle conditions at the TCL are \begin{align} \cos(\pi/2 - \Psi_c) = \pm \hat{\mathbf{e}}_\tau \cdot \hat{\mathbf{n}}_\perp \big\vert_{\tau =\pm \tau_1},\label{eqn:perp-contact-angle1} \end{align} where $\Psi_c$ is the inclination angle, $\hat{\mathbf{e}}_\tau$ is the unit tangent to the $\tau$ contour lines, and $\hat{\mathbf{n}}_\perp \big\vert_{\tau =\pm \tau_1}$ is the unit normal to the fluid interface evaluated at the TCLs (see Fig. \ref{fig:perp-contact-angle}). Substituting the asymptotic expansions into Eq. \eqref{eqn:perp-contact-angle1} and expanding in Ca and $\delta$ yields the boundary conditions for $h_\perp^{(1,0)}$ and $h_\perp^{(0,1)}$ \begin{align} & \left. \left[ \pm \frac{\cosh\tau-\cos\sigma}{c}\frac{\partial h_\perp^{(1,0)}}{\partial \tau} + h_\perp^{(1,0)} \right] \right\vert_{\tau = \pm\tau_1} = 0,\label{eqn:perp-h10-bc}\\ & \left. \left[ \pm \frac{\cosh\tau-\cos\sigma}{c}\frac{\partial h_\perp^{(0,1)}}{\partial \tau} + h_\perp^{(0,1)} \right] \right\vert_{\tau = \pm\tau_1} = -\tilde{\theta}_s \label{eqn:perp-h01-bc}, \end{align} respectively (see Appendix \ref{app:perp-contact-angle-cond} for details). It should be noted that the PDEs \eqref{eqn:perp-stressBalanceEq_10} and \eqref{eqn:perp-stressBalanceEq_01} are defined on the rectangular region $|\tau| < \tau_1$ and $|\sigma|<\pi.$ These second order linear PDEs can be solved with a straightforward centered finite difference scheme, along with second order finite difference approximations of Eqs. \eqref{eqn:perp-h10-bc} and \eqref{eqn:perp-h01-bc}, plus the periodic conditions at $\sigma = \pm \pi.$ The resulting linear system is solved using MATLAB's backslash operator. Second order convergence is observed. The partial derivatives of $h_\perp^{(1,0)}$ and $h_\perp^{(0,1)}$ with respect to $x$ and $y$ are obtained using finite difference approximations, which have linear convergence. Fig. 
\ref{fig:perp-deformations1-2} shows the numerically evaluated interfacial deformation $h = \mbox{Ca}h_\perp^{(1,0)} +\delta h_\perp^{(0,1)}$ around two spherical particles and its cross-section plots with $\mbox{Ca} =1, \delta = \tilde{\theta}_s =1$, Bo $=1,$ and $L = 6.$ As in Figs. \ref{fig:single-deformations2} and \ref{fig:single-deformations1}, we set $\mbox{Ca} = \delta = \tilde{\theta}_s = 1$ to illustrate the deformations. \begin{figure} \centering \includegraphics[scale=0.4]{perp-contact-angle.pdf} \caption{Sketch of the fluid interface near two spherical particles.} \label{fig:perp-contact-angle} \end{figure} \begin{figure} \centering \includegraphics[scale=0.68]{perp-deformations1-2-colorbar.pdf} \caption{The numerically calculated interfacial deformation, $h = \mbox{Ca}h_\perp^{(1,0)} +\delta h_\perp^{(0,1)}$, around two spherical particles with $\mbox{Ca} = \delta=\tilde{\theta}_s =1$, Bo $=1,$ and $L = 6$: (a) 3D visualization, where the colormap shows the interfacial height near the particle, (b) $x$-$z$ cross-section view for $y = 0$, (c) $y$-$z$ cross-section view for $x = L/2$. } \label{fig:perp-deformations1-2} \end{figure} \subsubsection{Calculation of the drag force} We let $\Sigma_{P_{i}^\text{\tiny I}}$ and $\Sigma_{P_{i}^\text{\tiny II}}$ denote the respective surfaces of the two particles in fluid phase $i$. Then, the total drag force acting on the two particles is defined as \begin{align} F_\perp = \mathbf{F}_\perp \cdot (-\mathbf{u}^\infty_\perp) = & \sum_{i=1,2} \iint_{\Sigma_{P^\text{\tiny I}_i}} \bm{\sigma}_\perp\cdot(- \tilde{\mathbf{n}}_\perp)\cdot (-\mathbf{u}_\perp^\infty) \mbox{ d}\Sigma + \iint_{\Sigma_{P^\text{\tiny II}_i}} \bm{\sigma}_\perp\cdot(- \tilde{\mathbf{n}}_\perp)\cdot (-\mathbf{u}_\perp^\infty) \mbox{ d}\Sigma.
\label{eqn:perp-dragFormula1} \end{align} By symmetry arguments, the drag forces on the two spheres are equal and the drag on each sphere is $F_\perp/2.$ Paralleling the drag force calculation \eqref{eqn:single-dragFormula-undulation} for a single sphere, we find that to order Ca and $\delta$, $F_\perp = F_\perp^{(0,0)} + \mbox{Ca} F_\perp^{(1,0)} + \delta F_{\perp}^{(0,1)},$ where the $\mathcal{O}(1)$ drag force \begin{align} F_\perp^{(0,0)} = \sum_{i=1,2} \iint_{\Sigma^{(0)}_{P_i^{\text{\tiny I,II}}}} \bm{\sigma}_\perp^{(0,0)} \cdot \tilde{\mathbf{n} }_{\perp} ^{(0,0)} \cdot \mathbf{u}_\perp^\infty \mbox{ d}\Sigma \end{align} is computed by Goldman et al. \cite{Goldman1966}. The $\mathcal{O}(\mbox{Ca})$ and $\mathcal{O}(\delta)$ drag forces, $F_\perp^{(1,0)}$ and $F_\perp^{(0,1)}$, are defined as \begin{align} & F_\perp^{(j,k)} = \sum_{i=1,2} \iint_{\Sigma^{(0)}_{P_i^{\text{\tiny I,II}}}} \bm{\sigma}_\perp^{(j,k)} \cdot \tilde{\mathbf{n} }_{\perp} ^{(0,0)} \cdot \mathbf{u}_\perp^\infty \mbox{ d}\Sigma - \int_{\Sigma^{(0)}_{\text{\scriptsize TCL}^{\text{\tiny I,II}}}} h_\perp^{(j,k)} [\bm{\sigma}_\perp^{(0,0)}]\cdot \tilde{\mathbf{n} }_{\perp} ^{(0,0)}\cdot \mathbf{u}_\perp^\infty \mbox{ d}s,\label{eqn:perp-corrDrag} \end{align} with $(j,k) = (1,0)$ and $(0,1)$, where the integrals can be evaluated over particles I and II with \begin{align*} \Sigma^{(0)}_{P_i^{\text{\tiny I,II}}} = \{(x,y,z)\vert (x\pm L/2)^2 + y^2 +z^2=1 \}, \quad \mbox{ and }\Sigma^{(0)}_{\text{\scriptsize TCL}^{\text{\tiny I,II}}} =\{(\sigma,\tau,z)\vert \tau =\mp \tau_1, z = 0 \}. \end{align*} The surface integrals in Eq. \eqref{eqn:perp-corrDrag} are calculated using the Lorentz reciprocal theorem, similar to the single particle problem (see details in Appendix \ref{app:lorentz-perp}). The line integrals in Eq. \eqref{eqn:perp-corrDrag} represent the force contribution from the interfacial deformations at the TCLs (see \cite{zhou2022}).
Evaluating $F_\perp^{(1,0)}$ and $F_\perp^{(0,1)}$ using the trapezoidal rule, we obtain the truncated asymptotic expansion for the drag force \begin{align} F_\perp = 6\pi (\lambda+1) f_\perp^{(0)}(L) + \delta \tilde{\theta}_s (\lambda-1) f_\perp^{(1)}(\mbox{Bo},L), \label{eqn:perp-dragFormula3} \end{align} where $f_\perp^{(0)}$ is the leading order drag coefficient obtained by Goldman et al. \cite{Goldman1966}, and $f_\perp^{(1)}$ is the correction drag coefficient in terms of Bo and $L$, which is shown in Fig. \ref{fig:perp-dragBoL}. Note that the flow-induced drag $F_\perp^{(1,0)}$ integrates to zero due to anti-symmetry. Fig. \ref{fig:perp-dragBoL}(a) shows that an increase in Bo reduces the drag coefficient $f_\perp^{(1)}$. In Fig. \ref{fig:perp-dragBoL}(b), we see that $f^{(1)}_\perp$ decreases as the separation distance $L$ decreases. This is because as the particles become closer to each other, the total amount of interfacial deformation around them decreases, and thus the correction drag caused by the deformation decreases. In the limit of large separation, the flow field and the interfacial deformation near each particle converge toward the single-particle solutions, and the correction drag coefficient $f_\perp^{(1)}/2$ converges to the value of $f^{(1)}$ in Eq. \eqref{eqn:single-truncatedDrag}. \begin{figure} \centering \includegraphics[scale=0.65]{perp-dragBoL-corrected.pdf} \caption{The drag coefficient plotted as functions of (a) Bond number Bo and (b) separation $L$. } \label{fig:perp-dragBoL} \end{figure} \subsection{Flow parallel to the particles' line-of-centers } \label{subsect:parallel-flow} Next, we consider the problem of two spherical particles at a fluid interface undergoing uniform flow parallel to their line-of-centers (see Fig. \ref{fig:para-setup1}). 
The centers of mass of the particles are located at $-L/2\hat{\mathbf{e}}_y$ and $L/2\hat{\mathbf{e}}_y$, respectively, and the nondimensionalized background flow is denoted by $\mathbf{u}_\parallel ^\infty= \hat{\mathbf{e}}_y.$ We use the same asymptotic approach as for the previous problems and adopt similar notation. \begin{figure} \centering \includegraphics[width=11cm]{para-setup1.pdf} \caption{Illustration of two spherical particles straddling a fluid interface between two immiscible viscous fluids, where the background uniform flow is parallel to the line-of-centers of the two particles.} \label{fig:para-setup1} \end{figure} Stimson and Jeffery \cite{Stimson1926} solved the problem of two spheres translating parallel to their line-of-centers at low Reynolds number using the stream function method, which gives us the analytical solution for the leading order velocity field $\mathbf{u}_\parallel^{(0,0)}$ (see Appendix \ref{app:para-leading-sol}). A similar bicylindrical coordinate system $(\sigma,\tau,z)$ is introduced: \begin{align} x = \frac{c \sin\sigma}{\cosh\tau - \cos\sigma},\quad y = \frac{c \sinh\tau}{ \cosh\tau -\cos\sigma}, \quad -\pi < \sigma < \pi, \quad -\tau_1 < \tau < \tau_1, \end{align} where $ c = \sqrt{(L/2)^2 -1}$, and $\tau = \pm \tau_1 = \pm \mbox{arccosh} (L/2)$ describe the equilibrium TCLs at the particle surfaces. \subsubsection{Leading order pressure recovery} Unlike the previous cases, the jump in the pressure across the interface needed for the drag calculation is not given explicitly in \cite{Stimson1926}. However, it can be found numerically by solving a differential equation for $(\lambda-1)\tilde{p}_\parallel = p_\parallel^{2(0,0)} - p_\parallel^{1(0,0)}$, the leading order pressure difference across the flat fluid interface $\Sigma_{I}^{(0)}$.
The equation that $\tilde{p}_\parallel$ satisfies is \begin{align} & \frac{\partial^2 \tilde{p}_\parallel}{\partial x^2} + \frac{\partial^2 \tilde{p}_\parallel}{\partial y^2} = - \frac{1}{x} \left( \frac{\partial^2 u_{\parallel x}^{(0,0)} }{\partial x^2 } + \frac{1}{x}\frac{\partial u_{\parallel x}^{(0,0)} }{\partial x } + \frac{\partial^2 u_{\parallel x}^{(0,0)}}{\partial y^2} - \frac{u_{\parallel x}^{(0,0)}}{x^2} \right),\label{eqn:para-pressureEqn4a}\\ & \frac{\partial \tilde{p}_\parallel}{\partial n} = \nabla^2 \mathbf{u}_\parallel^{(0,0)}\cdot \tilde{\mathbf{n}}^{(0,0)}\quad \mbox{ at } \Sigma^{(0)}_{\text{\scriptsize TCL}^{\text{\tiny I,II}}}, \label{eqn:para-pressureEqn4}\\ & \tilde{p}_\parallel \rightarrow 0 \quad \mbox{as } \vert \mathbf{x} \vert \rightarrow \infty, \end{align} where $\partial/\partial n$ denotes the normal derivative at the base TCLs, and the boundary condition \eqref{eqn:para-pressureEqn4} is obtained by taking the normal component of the momentum equation (see Appendix \ref{app:pressure-recovery} for the detailed derivation of Eq. \eqref{eqn:para-pressureEqn4a}).
In bicylindrical coordinates, the problem for $\tilde{p}_\parallel$ is given by \begin{align} &\frac{(\cosh\tau-\cos\sigma)^2}{c^2} \left( \frac{\partial^2 \tilde{p}_\parallel}{\partial \sigma^2 } + \frac{\partial^2 \tilde{p}_\parallel}{\partial \tau^2} \right) = \tilde{F}(\sigma,\tau), \\ &\left.\frac{\partial \tilde{p}_\parallel}{\partial n } \right\vert_{\tau = \pm\tau_1}= \tilde{f}(\sigma,\pm\tau_1),\\ & \tilde{p}_\parallel(-\pi,\tau) = \tilde{p}_\parallel ( \pi,\tau) ,\quad \frac{\partial \tilde{p}_\parallel}{\partial \sigma} (-\pi,\tau) = \frac{\partial \tilde{p}_\parallel}{\partial \sigma} (\pi,\tau),\\ & \tilde{p}_\parallel(0,0) = 0 \quad ( |\mathbf{x}|\rightarrow \infty \implies (\sigma,\tau) \rightarrow (0,0)), \end{align} where \begin{align} & \tilde{F}(\sigma,\tau) = - \frac{1}{x} \left( \frac{\partial^2 u_{\parallel x}^{(0,0)} }{\partial x^2 } + \frac{1}{x}\frac{\partial u_{\parallel x}^{(0,0)} }{\partial x } + \frac{\partial^2 u_{\parallel x}^{(0,0)}}{\partial y^2} - \frac{u_{\parallel x}^{(0,0)}}{x^2} \right),\\ & \tilde{f}(\sigma,\pm\tau_1) = \nabla^2 \mathbf{u}_\parallel^{(0,0)} \cdot \tilde{\mathbf{n}}^{(0,0)}(\sigma,\pm\tau_1) . \end{align} The unit normal vector to the base TCLs, $\tilde{\mathbf{n}}^{(0,0)}$, is \begin{align} & \tilde{\mathbf{n}}^{(0,0)} \big\vert_{\tau=\pm \tau_1} = \mp \hat{\mathbf{e}}_\tau = \mp \left( -\frac{\sin\sigma \sinh(\pm \tau_1)}{\cosh(\pm\tau_1) - \cos\sigma} \hat{\mathbf{e}}_x + \frac{1-\cos\sigma \cosh(\pm\tau_1)}{\cosh(\pm\tau_1) - \cos\sigma} \hat{\mathbf{e}}_y \right). \end{align} This partial differential equation for $\tilde{p}_\parallel$ is solved numerically using the finite difference method and MATLAB's backslash operator to invert the discretized linear system of difference equations, and the numerical solutions show quadratic convergence \cite{zhou2022}.
\subsection{Interfacial deformations and drag force} The static deformation $\delta h_\parallel^{(0,1)}$, induced by the contact angle, describes the equilibrium interface shape in the absence of flow. Thus, the static deformation is unaffected by the flow orientation and $h_\perp^{(0,1)} \equiv h_\parallel^{(0,1)}$ with the proper axis rotation. The $\mathcal{O}(\mbox{Ca})$ interfacial deformation $h^{(1,0)}_\parallel$, induced by the background flow, satisfies the stress balance equation \begin{align} \nabla^2 h_\parallel^{(1,0)} -\mbox{Bo}h_\parallel^{(1,0)} = - \hat{\mathbf{e}}_z \cdot [\bm{\sigma}^{(0,0)}_\parallel]\cdot \hat{\mathbf{e}}_z, \label{eqn:para-deformationEqn1} \end{align} where the stress difference is given by \begin{align} [\bm{\sigma}^{(0,0)}_\parallel] = (\lambda-1) \left( -\tilde{p}_\parallel \mathbf{I} + \nabla\mathbf{u}^{(0,0)}_\parallel + (\nabla\mathbf{u}^{(0,0)}_\parallel )^T \right). \end{align} The stress balance equation \eqref{eqn:para-deformationEqn1} is solved as before using a second order centered finite difference method (see \cite{zhou2022} for details). We use the same approach as in the previous cases to obtain the drag force exerted on the two particles, which is given in the form of a truncated asymptotic expansion: \begin{align} F_\parallel = 6\pi (\lambda+1)f_\parallel^{(0)}(L) + \delta \tilde{\theta}_s (\lambda-1)f_\parallel^{(1)} (\mbox{Bo},L), \label{eqn:para-dragFormula} \end{align} where $f_\parallel^{(0)}$ is the leading order drag coefficient obtained by Stimson and Jeffery \cite{Stimson1926}, and $f_\parallel^{(1)}$ is the correction drag coefficient for the contact angle induced deformation $\delta h^{(0,1)}_\parallel$. The drag contribution from the flow-induced deformation $\mbox{Ca}h^{(1,0)}_\parallel$ integrates to zero due to anti-symmetry. Fig. \ref{fig:para-dragBoL} shows the drag coefficient $f_\parallel^{(1)}$ as a function of Bo and $L$.
The dependence of $f_\parallel^{(1)}$ on Bo and $L$ is similar to that found for the perpendicular flow past two spheres. As the separation distance $L$ increases, the value of $f_\parallel^{(1)}/2$ converges to the single-particle drag coefficient $f^{(1)}$ in Eq. \eqref{eqn:single-truncatedDrag}. However, a slower convergence is observed compared to the case of two particles in a perpendicular flow. This can be explained by the difference in the convergence rates of the leading order solutions, i.e., $f_\parallel^{(0)} \sim 1 - 3/(2L)$ and $ f_\perp^{(0)} \sim 1 - 3/(4L)$ for $L \gg 1$ \cite{Stimson1926,Goldman1966}. \begin{figure} \centering \includegraphics[scale=0.65]{para-dragBoL_corrected.pdf} \caption{The drag coefficient plotted as a function of (a) Bond number Bo and (b) separation $L$. } \label{fig:para-dragBoL} \end{figure} \subsection{Arbitrarily oriented flow} The analyses of two spherical particles undergoing flows perpendicular and parallel to their line-of-centers allow us to treat an arbitrarily oriented flow past two spheres at an interface. As illustrated in Fig. \ref{fig:arb-setup1}, the uniform background flow $\mathbf{u}^\infty$ is oriented at an arbitrary angle relative to the spheres' line-of-centers. The flow $\mathbf{u}^\infty$ can be decomposed into components perpendicular and parallel to the line-of-centers, i.e., \begin{align} \mathbf{u}^\infty = \mathbf{u}_\perp^\infty + \mathbf{u}_\parallel^\infty \quad (\Vert \mathbf{u}^\infty\Vert = 1), \end{align} with \begin{align} \mathbf{u}_\perp^\infty = \sin\Theta\, \hat{\mathbf{e}}_x, \quad \mathbf{u}_\parallel^\infty = \cos\Theta\, \hat{\mathbf{e}}_y, \end{align} where $\Theta$ denotes the angle between the flow direction and the line-of-centers of the two spheres (see Fig. \ref{fig:arb-setup1}).
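Since the two component problems are linear, the drag for an arbitrary orientation follows by combining the perpendicular and parallel contributions vectorially, as worked out next. A minimal numerical sketch (the values of $F_\perp$ and $F_\parallel$ below are placeholders, not results from the text):

```python
import numpy as np

def drag_magnitude(F_perp, F_para, theta_deg):
    """Magnitude of the combined drag for flow at angle Theta to the
    line-of-centers: the perpendicular component carries F_perp*sin(Theta)
    along e_x, the parallel component F_para*cos(Theta) along e_y."""
    th = np.deg2rad(theta_deg)
    return np.hypot(F_perp * np.sin(th), F_para * np.cos(th))
```

With $F_\perp > F_\parallel$ (as found for the two principal orientations), the magnitude grows monotonically as $\Theta$ sweeps from the parallel to the perpendicular alignment.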
\begin{figure} \centering \includegraphics[width=10cm]{arb-setup1.pdf} \caption{Sketch of the $x$-$y$ cross-section view of two particles at a fluid interface undergoing arbitrarily oriented uniform flow in the $x$-$y$ plane.} \label{fig:arb-setup1} \end{figure} The linearity of the Stokes equation and the boundary conditions of the $\mathcal{O}(\mbox{Ca})$ and $\mathcal{O}(\delta)$ problems allows us to calculate the drag force acting on the particles, $\mathbf{F}_D$, by vectorially combining the forces exerted by the perpendicular flow $ \mathbf{u}_\perp^\infty$ and the parallel flow $ \mathbf{u}_\parallel^\infty$, i.e., \begin{align} \mathbf{F}_D = \mathbf{F}_\perp + \mathbf{F}_\parallel \end{align} with \begin{align} \mathbf{F}_\perp = F_\perp \sin\Theta \hat{\mathbf{e}}_x,\quad \mathbf{F}_\parallel = F_\parallel \cos\Theta \hat{\mathbf{e}}_y, \end{align} where $F_\perp$ and $F_\parallel$ are given in Eqs. \eqref{eqn:perp-dragFormula3} and \eqref{eqn:para-dragFormula}, respectively. The magnitude of the drag force is given by \begin{align} F_D = \vert \vert \mathbf{F}_D\vert \vert = \sqrt{ (F_\perp \sin\Theta )^2 + (F_\parallel \cos\Theta)^2}. \end{align} In Fig. \ref{fig:arb-dragAngle}, $F_D$ is plotted as a function of the orientation angle $\Theta.$ Given the same set of parameters, the drag force has a larger magnitude when the background flow is perpendicular to the line-of-centers than when parallel. As $\Theta$ increases from $0^\circ$ to $90^\circ$, the perpendicular component in the background flow becomes more dominant and $F_D$ increases. \begin{figure} \centering \includegraphics[scale=0.65]{arb-dragAngle_smooth.pdf} \caption{The magnitude of the drag force plotted as a function of the orientation angle $\Theta$ with $\delta = \theta_s = 1, \lambda = 1/2$, and varying values of separation $L$ (a) and Bond number Bo (b). 
} \label{fig:arb-dragAngle} \end{figure} \subsection{Capillary attraction force} The dimensionless capillary force, scaled by $\gamma a$, exerted on a spherical particle due to the interfacial deformation can be computed by integrating the capillary stress along the TCL, i.e., \begin{align} \mathbf{F}_C = \int_{\Sigma_\text{\scriptsize TCL}} -\tilde{\mathbf{n}}_C \mbox{ d}s, \end{align} where $\tilde{\mathbf{n}}_C = \tilde{\mathbf{t}}_C \times \hat{\mathbf{n}}$ is the capillarity unit vector that is normal to the TCL and lies in the interface, and $\tilde{\mathbf{t}}_C$ is the unit tangent vector to the TCL. Let $\mathbf{r}_{C}=\hat{\mathbf{e}}_r + h(1,\phi) \hat{\mathbf{e}}_z$ denote the position vector describing a point on the TCL at the particle surface. Substituting all expansions into this vector formula, we find that the capillarity unit vector $\tilde{\mathbf{n}}_C$ is \begin{align} \tilde{\mathbf{n}}_C =& \tilde{\mathbf{t}}_{C}\times \hat{\mathbf{n}} = \hat{\mathbf{e}}_r + \left( \mbox{Ca}\frac{\partial h^{(1,0)}}{\partial r} + \delta \frac{\partial h^{(0,1)}}{\partial r}\right)\hat{\mathbf{e}}_z, \end{align} and to order Ca and $\delta$, the capillary force is \begin{align} \mathbf{F}_C = F_C \hat{\mathbf{e}}_z = & - \int_0^{2\pi} \left( \mbox{Ca}\frac{\partial h^{(1,0)}}{\partial r}(1,\phi) + \delta \frac{\partial h^{(0,1)}}{\partial r}(1,\phi)\right)\hat{\mathbf{e}}_z \mbox{ d}\phi \\ = & - 2 \pi \delta (-\tilde{\theta}_s + \tilde{b}) C_0 K_0(\sqrt{\mbox{Bo}})\hat{\mathbf{e}}_z, \label{eqn:capillary-singleParticle} \end{align} where $C_0$ is defined in Eq. \eqref{eqn:single-staticDeformSol}. We observe that the flow-induced deformation does not contribute to the capillary force at leading order due to the anti-symmetric pattern of the interfacial height at the TCL: the interfacial height depends on the azimuthal angle $\phi$ through $\sin\phi$, so the capillary stress along the TCL integrates to zero.
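Eq. \eqref{eqn:capillary-singleParticle} is straightforward to evaluate numerically. A short sketch, treating the constant $C_0$ as a supplied input (its definition in Eq. \eqref{eqn:single-staticDeformSol} is not reproduced here):

```python
import numpy as np
from scipy.special import k0   # modified Bessel function K_0

def capillary_force_z(delta, theta_s, b, C0, Bo):
    """Leading-order vertical capillary force
       F_C = -2*pi*delta*(-theta_s + b)*C0*K0(sqrt(Bo)),
    in units of gamma*a; C0 is the static-deformation constant."""
    return -2.0 * np.pi * delta * (-theta_s + b) * C0 * k0(np.sqrt(Bo))
```

The monotonic decay of $K_0$ reflects the flattening of the static meniscus as the Bond number grows, so the magnitude of the capillary force decreases with Bo.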
The leading order contribution to $\mathbf{F}_C$ comes from the static deformation, which yields an $\mathcal{O}(\delta)$ capillary force in the vertical ($\hat{\mathbf{e}}_z$) component. Note that the lateral capillary force is zero at leading order because the corrections to the shape of the TCL are at orders $\delta$ and Ca in the vertical component and at higher orders in the horizontal components. Paralleling the single-particle calculation, we obtain the capillary forces exerted by the deformed interface near two spherical particles. We first consider the case where the two particles at the interface undergo uniform flow perpendicular to their line-of-centers (see Appendix \ref{app:capillaryForce} for detailed calculations). The leading order capillary force $F_C$ is given by Eq. \eqref{eqn:app-F_C}. As in the single-particle problem, the flow-induced deformation does not contribute to the capillary force at leading order, by the same anti-symmetry argument. Moreover, because the static interface shape is symmetric in the direction of the line-of-centers, the vertical capillary forces exerted on the two particles are identical. In Fig. \ref{fig:capillary-perp}, the capillary forces due to the single-particle deformation and the two-particle deformations with perpendicular background flows are plotted as a function of the Bond number. As the separation distance increases, the static deformation near each particle converges to the single-particle deformation, and the capillary force exerted on each particle converges to the single-particle capillary force. \begin{figure} \centering \includegraphics[scale=0.65]{Fcap-perp-totalDeformation-reverse.pdf} \caption{The dimensionless capillary force plotted as a function of the Bond number Bo.
The green curve shows the single-particle capillary force; the red, black and blue curves are the capillary forces exerted on one of the two particles due to the two-particle deformations in perpendicular background flows, with separation distance $L = 4, 8$ and 12, respectively. Parameter values: $\mbox{Ca}=1,\delta=\tilde\theta_s =1, \tilde{b}=0$. } \label{fig:capillary-perp} \end{figure} In the case of two particles at an interface in a parallel background flow, the contribution of the flow-induced deformation to the leading order capillary force does not vanish, because the two particles' interaction with the flow breaks the anti-symmetry at the TCLs. As a result, the flow-induced deformations at the two particles' TCLs have equal magnitudes but opposite signs, which implies that the $\mathcal{O}(\mbox{Ca})$ capillary forces acting on the two particles also have equal magnitudes and opposite signs. Fig. \ref{fig:capillary-para-flowInducedDeformation} shows the capillary force exerted on the particle centered at $-L/2\hat{\mathbf{e}}_y$ ($\Sigma_P^\text{\scriptsize I}$) as a function of the Bond number Bo, with different values of separation $L.$ Since the orientation of the background flow does not affect the static deformation, the $\mathcal{O}(\delta)$ capillary force is identical to the one in the perpendicular flow problem. In Fig. \ref{fig:capillary-para-totalDeformations}, we show the leading order capillary force acting on each particle due to the total deformations.
\begin{figure} \centering \includegraphics[scale=0.65]{Fcap-para-flowInducedDeformation-reverse.pdf} \caption{The dimensionless capillary force exerted on particle I ($\Sigma_{P}^\text{\scriptsize I }$) due to the parallel flow-induced deformations plotted as a function of the Bond number Bo, with separation $L = 4, 8,$ and 12 (Ca =1).} \label{fig:capillary-para-flowInducedDeformation} \end{figure} \begin{figure} \centering \includegraphics[scale=0.65]{Fcap-para-twoParticles-reverse.pdf} \caption{The dimensionless capillary force due to the total deformations near two particles undergoing parallel background flows. Parameter values: $\mbox{Ca}=1,\delta=\tilde\theta_s =1. $ } \label{fig:capillary-para-totalDeformations} \end{figure} The lateral capillary force at higher orders was calculated by Vella and Mahadevan \cite{Vella2005}, who derived a formula for the (dimensionless) capillary force in the absence of flow from the Nicolson approximation \cite{Chan1981, Kralchevsky1994, Nicolson1949}: \begin{align} \tilde{F}_C = 2\pi \mbox{Bo}^{5/2} \Sigma^2 K_1(\sqrt{\mbox{Bo}}L), \end{align} where $ \Sigma = \frac{2(\rho_P-\Delta \rho) - 1}{3} -\frac{1}{2}\cos\theta_s + \frac{1}{6}\cos^2\theta_s$ and $\rho_P$ is the particle density. The assumptions that the contact angle is close to $90^\circ$ and that the immersion depth is small ($\theta_s = \pi/2+\delta\tilde\theta_s , b = \delta\tilde{b}$) imply $\Sigma \sim \delta $ and $ \tilde{F}_C \sim \delta^2$, which corroborates our finding that the static deformation does not contribute to the lateral capillary force at order $\delta.$ D\"orr and Hardt \cite{dorr2015} studied the pair interaction of particles by constructing the interfacial deformation around two particles via linear superposition of the single-particle deformation. Under the assumptions of rotated and pinned TCLs and a large particle separation, the interfacial deformation is induced solely by the uniform background flow.
Similar to our analysis, D\"orr and Hardt's calculation shows that the leading order capillary force is in the vertical direction, that the lateral capillary force arises at higher orders, and that the vertical capillary force vanishes when the uniform background flow is perpendicular to the particles' line-of-centers. \section{Conclusions} In this work, we have studied the problems of fluid motion past one and two spherical particles attached to a deformable fluid interface undergoing uniform Stokes flow. Using two-parameter asymptotic expansions for small capillary number and correction contact angle, we have obtained analytical expressions for the flow-induced deformation and the static (contact-angle-induced) deformation around a single particle. In the two-particle problems, where the background flow is perpendicular or parallel to the particles' line-of-centers, similar deformation solutions were calculated numerically using finite difference methods. To study the effects of interfacial deformations on the drag force exerted on the particles, we used the Lorentz reciprocal theorem to derive analytical expressions for the correction drag forces in terms of the zeroth-order approximations and the deformation solutions. For the single-particle problem, the drag force takes the form of Eq. \eqref{eqn:single-truncatedDrag}, where the drag caused by the flow-induced deformation integrates to zero due to its anti-symmetric configuration in the flow direction, and the correction drag caused by the static deformation is shown to depend linearly on the correction contact angle and the viscosity difference, and nonlinearly on the Bond number. The Bond number characterizes the density mismatch between the two fluid phases, and an increase in the density mismatch flattens the interface shape near the particle, which reduces the effect of the interfacial deformation on the drag force. The normalized drag $F_D^*$ (see Eq.
\eqref{eqn:single-normalizedDrag}) is shown to be consistent with the 2D numerical results of Loudet et al. \cite{Loudet2020}. For the two-particle problems, we derived the first-order approximations from the solutions for two spheres translating perpendicular and parallel to their line-of-centers in a viscous fluid \cite{Goldman1966,Stimson1926}. As in the single-particle problem, the flow-induced interfacial deformations do not affect the drag force acting on the particles at leading order. The correction drag forces for the static deformations are given by Eqs. \eqref{eqn:perp-dragFormula3} and \eqref{eqn:para-dragFormula}. A more general solution, in which the uniform background flow is arbitrarily oriented relative to the particles' line-of-centers, has been obtained by vectorially combining the drag forces exerted by the perpendicular and parallel flows. Our predictions for the drag also compare well with the experimental results of Petkov et al. \cite{Petkov1995} (see Appendix \ref{app:petkov}). This is also true of the model of D\"orr et al. \cite{dorr2016}, which is based on different assumptions; additional work is needed to reconcile these predictions. In addition, we calculated the capillary force exerted on the particles due to the interfacial deformation. It is shown that the static deformation contributes to the capillary force at order $\delta$ in the vertical ($\hat{\mathbf{e}}_z$) component, and that the flow-induced deformation does not contribute at order Ca in the single-particle case and in the two-particle case when the background flow is perpendicular to the particles' line-of-centers. In the case of two particles at an interface in a parallel flow, the flow-induced deformation is shown to have a nonzero contribution to the $\mathcal{O}(\mbox{Ca})$ vertical capillary force. \begin{acknowledgments} This work was partially supported by NSF grants DMS 1718114 and DMS 2108502. \end{acknowledgments}
2109.10816
\section{Introduction} It is well known that droplets released by infected persons through coughing, sneezing, speaking or breathing contain microorganisms (bacteria, viruses, fungi, etc.) that cause a large number of diseases~\cite{cflugge,Wells}. Several decades ago it was already recognized that diseases spread by droplets are airborne~\cite{Gelfand}, and recently ten scientific reasons have been provided in support of SARS-CoV-2 being an airborne infection~\cite{10reasons} (see also~\cite{review} for a review). The transmission routes~\cite{kutter} of the virus crucially depend on how the droplets evolve in space with the progression of time. For example, big droplets settle gravitationally on different surfaces such as handrails, door handles and tables, and eventually infect through direct contact. Small droplets, either directly discharged or formed by the evaporation of big droplets, remain suspended in the air for a long time due to the dominance of the diffusive force over the gravitational force. It remains a big challenge to understand the density of virus in the air~\cite{mpan}, its capability to infect~\cite{bynumbers}, its survivability on different surfaces~\cite{doremalen}, and the comparison among these. The survivability of the virus on solid surfaces increases under humid conditions~\cite{AA}. These factors therefore limit our ability to evaluate the risk~\cite{Nazhu}. The release of virus via respiration~\cite{richard,herfst} and speaking is well known~\cite{kkwang,asadi,stadnytskyl}. The saliva of infected persons contains the coronavirus~\cite{kkwang,lazzi}. An increase in the emission of pathogens with the loudness of speech, which may depend on some unknown physiological factors varying among individuals, has also been reported~\cite{asadi}.
Recently it has been reported that droplets and aerosols (droplets with size $<5 \mu$m) can travel much farther than the prescribed 6 feet and remain suspended in the air for hours; in fact, SARS-CoV-2 RNA has been recovered in air samples~\cite{jama20}. The propagation and aerosolisation of the droplets have been investigated within the purview of statistical mechanics and fluid dynamics~\cite{dbouk,mittal}. The Eulerian--Lagrangian approach has been applied to study the effects of the size, ejection velocity and angle of emission of the droplets on their motion during sneezing. The application of stochastic statistical mechanics~\cite{Reif,pathria} is particularly crucial for studying the motion of aerosols (droplets with diameter $ <5 \mu$m~\cite{seta}), for which airborne transmission turns out to be vital. The Brownian motion of the aerosols in the air-bath can be studied by solving the Langevin differential equation, which takes care of the stochastic motion~\cite{SKDas} normally ignored in the Eulerian--Lagrangian approach~\cite{dbouk}. We investigate the spread of these ejected droplets in the neighbourhood of the infected individual under varying weather conditions, which will help in planning preventive strategies under different climatic conditions. The ejected droplets interact with the molecules of the still/flowing air at temperature $T$ and relative humidity (RH) in the presence of gravity. The interaction force between the droplets and the air molecules changes continuously as the molecules change their coordinates with time. This makes the problem very complex, if not exactly unsolvable. The spread of the droplets, however, will depend on these interactions. Under such circumstances the air molecules can be regarded as forming a thermal bath in which the droplets execute Brownian motion with a mass that changes due to evaporation.
The interaction of the evaporating droplets with the bath can then be grouped into drag and diffusive forces, quantified through the drag and diffusion coefficients, acting alongside gravity. Therefore, three types of force act on a droplet: (i) the drag force and (ii) the diffusive force between the droplet and the air molecules, and (iii) the Newtonian gravitational force on the droplet due to its non-zero mass. However, it is crucial to note that the droplets undergo loss of mass due to evaporation and hence all these forces change with time. In the present work we investigate the propagation of the virus-containing droplets subject to continuous evaporation by solving the Langevin stochastic differential equation of statistical mechanics~\cite{Reif,pathria} coupled with the equation that governs the evaporation of the droplets. It is important to mention here that the correction to the drag force required for flow beyond the laminar regime has been taken care of by using an experimental parameterization in terms of the Reynolds number~\cite{Holterman}. The Langevin equation is applicable in the present context because the mass of the droplets is much larger than the mass of the oxygen and nitrogen molecules present in the air. The inclusion of all the forces mentioned above enables us to study the trajectories of droplets with a wide range of sizes. It is interesting to study how the evaporating droplets evolve in space and time under the influence of gravity, which acts to pull the droplet to the ground, in contrast to the diffusive and drag forces, which oppose this. The big (hence massive) droplets are expected to settle gravitationally quickly and the smaller ones are expected to remain suspended in the air for a longer time.
However, under evaporation a droplet suffers continuous loss of mass, and consequently a droplet which would otherwise fall to the ground under gravity may instead remain suspended as smaller droplets/aerosols or isolated virus for a longer time before complete decomposition. The paper is organized as follows. In the next section the solution of the Langevin equation coupled with the equation controlling the evaporation is discussed. Section III contains the results and section IV is devoted to summary and discussions. \section{Monte-Carlo solution of the Langevin equation in the presence of evaporation} The Langevin equation governing the motion of a droplet of mass $M$ in still air in the presence of the gravitational field~\cite{Reif} is given by: \begin{equation} \frac{dr_i}{dt} = v_i \label{eq1} \end{equation} \begin{equation} M \frac{dv_i}{dt} = -\lambda v_i + \xi(t) +F^G \label{eq2} \end{equation} The mass of the droplet is related to its diameter ($D$) by $M=\pi D^3\rho_L/6$, which is bound to change due to evaporation. The rate of decrease of the diameter $D$ of a spherical liquid drop due to evaporation is given by~\cite{Kukkonen,Holterman}: \begin{equation} \frac{dD}{dt}=-\frac{4M_LD_v}{D\rho_L R T_f}\Delta p(1+0.276Re^{1/2}Sc^{1/3}) \label{eq3} \end{equation} This coupled set of equations, {\it i.e.} Eqs.~\ref{eq1}, \ref{eq2} and \ref{eq3}, is solved simultaneously to understand the spread of the virus through droplets. In Eqs.~\ref{eq1} and \ref{eq2}, $dr_i$ and $dv_i$ are the shifts of the coordinate and velocity in each discrete time step $dt$, and $i(=x,y,z)$ stands for the Cartesian components of the position and velocity vectors. The $\lambda$ in Eq.~\ref{eq2} is the drag coefficient, which will be fixed later. The first term on the right hand side of Eq.~\ref{eq2} represents the dissipative force and the second term stands for the diffusive (stochastic) force, where $\xi(t)$ is regulated by the diffusion coefficient $\kappa$.
$\xi(t)$ is also called noise due to its stochastic nature. We study the evolution with a white noise ansatz for $\xi(t)$, {\it i.e.} $\langle{\bf \xi}(t)\rangle = 0$ and $\langle {\bf \xi}(t) {\bf \xi}(t')\rangle = \kappa\delta(t-t')$. White noise describes a fluctuating field without memory, whose correlations have an instantaneous decay called $\delta$ correlation. The third term in Eq.~\ref{eq2}, $F^G$, represents the gravitational force $(=Mg$, $g=9.8$ m/s$^2$) acting on a droplet of mass $M$, which changes with time due to the evaporation. Eq.~\ref{eq3} describes the evaporation of the droplets, where $M_L$ ($=0.018$ kg/mol) and $\rho_L$ are the molecular weight and density of the evaporating liquid respectively, $D_v$ is the diffusion coefficient of the vapor molecules in the saturated film around the droplets, and $T_f$ is the average temperature of the film formed around the droplets due to evaporation. $R=8.3144$ J/(mol K) is the gas constant. $Re$ is the Reynolds number~\cite{Landau}: \begin{equation} Re=\frac{\rho_a D v}{\eta_a} \end{equation} where $v$ is the velocity of the droplet relative to the surrounding air, $\rho_a$ is the density and $\eta_a$ is the viscosity of the air at temperature $T_f$. $Sc$ is the Schmidt number, given by: \begin{equation} Sc=\frac{\eta_a}{\rho_a D_v} \end{equation} $\Delta p$ is the difference between the vapor pressure near the droplet and in the atmosphere, which acts as the driving force for the transport of vapor away from the droplet surface. $\Delta p$ can be related to the saturated vapor pressures at ambient temperature $T$ and wet-bulb temperature $T_w$ as: \begin{equation} \Delta p=p_{sat}-p=\gamma (T-T_w) \end{equation} where $p_{sat}$ is the vapor pressure near the surface of the droplet, $p$ is the vapor pressure in the ambient air and $\gamma$ is approximately constant ($\approx 67$ Pa/K).
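Combining Eq.~\ref{eq3} with $\Delta p = \gamma(T-T_w)$ closes the evaporation law once $T_w$ is known; the empirical wet-bulb fit quoted next supplies it. If the ventilation factor $(1+0.276Re^{1/2}Sc^{1/3})$ is neglected (a slowly moving droplet), Eq.~\ref{eq3} integrates to the classical $d^2$-law, $D^2 = D_0^2 - 2kt$. A minimal sketch; the vapor diffusivity $D_v \approx 2.4\times10^{-5}$ m$^2$/s is an assumed typical value not given in the text, and $RH$ is taken in percent, consistent with the fit coefficients:

```python
def wet_bulb(T, RH):
    """Wet-bulb temperature (deg C) from the quadratic-in-RH fit used in the
    text; T in deg C, RH in percent (note T_w -> T as RH -> 100)."""
    return T - ((5.1055 + 0.4295 * T)
                + (-0.04703 - 0.005951 * T) * RH
                + (-0.00004005 + 0.0000166 * T) * RH**2)

def evaporation_time(D0, T, RH, D_v=2.4e-5, rho_L=1000.0, M_L=0.018,
                     R=8.3144, gamma=67.0):
    """Time (s) for a droplet of initial diameter D0 (m) to evaporate fully
    under the d^2-law D^2 = D0^2 - 2*k*t (ventilation factor neglected)."""
    Tw = wet_bulb(T, RH)
    Tf = 0.5 * (T + Tw) + 273.15        # film temperature, in kelvin
    dp = gamma * (T - Tw)               # vapor-pressure deficit, Pa
    k = 4.0 * M_L * D_v * dp / (rho_L * R * Tf)
    return D0**2 / (2.0 * k)
```

For a $100\,\mu$m-diameter water droplet at $T=20^\circ$C and $RH=50\%$, this gives an evaporation time of order ten seconds, and the $D_0^2$ scaling makes larger droplets far longer lived.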
$T_w$ can be expressed in terms of $T$ and $RH$ as: \begin{equation} T_w=T-[(a_0+a_1T)+(b_0+b_1T)RH+(c_0+c_1T)RH^2] \end{equation} with $a_0$=5.1055, $a_1$=0.4295, $b_0$=-0.04703, $b_1$=-0.005951, $c_0$=-0.00004005 and $c_1$=0.0000166. For details we refer to~\cite{Holterman}. The trajectory of the droplets depends on the initial ejection velocity and on the flow velocity of the ambience, resulting in different paths under still and flowing air conditions [such as in an air-conditioned (AC) room or in open air]. Results for both scenarios, still and flowing air, are derived here. The air flow velocity has been taken into account through a Galilean transformation of the Langevin equation. Since the virus-carrying droplets follow different trajectories in still and flowing air, the preventive strategies for indoor and outdoor conditions should take this fact into account. Here we consider a velocity profile for the air flow of the form $u(x)=u_0(1-\frac{x}{x_{\text{max}}})$, with zero upward and downward components, where $x$ is the running coordinate, $u_0$ is the peak value of $u(x)$ at $x=0$, and $x_{\text{max}}$ is the maximum value of $x$, which may be restricted to the size of an AC room. It is obvious that any non-zero upward (downward) component will influence the time of suspension of droplets in the air. We will also consider a constant (independent of the spatial coordinates) air flow velocity to calculate the distance that an ejected drop will travel in open air. Eqs.~\ref{eq1}, \ref{eq2} and \ref{eq3} have been solved simultaneously by using Monte-Carlo techniques~\cite{SKDas,mc1,mc2} with the following inputs. The initial spatial coordinate of the droplet is $x=y=0$ and $z=H_0$, where $H_0$ is the height (1.7 meter) at which the droplet is released (nose/mouth), {\it i.e.} $(x,y,z)=(0,0,1.7\,{\text {meter}})$ is the point of ejection.
The initial velocity (at $t=0$) is uniformly distributed in the $x$-$y$ plane with $v_z=0$. The calculation is done for droplet radii ($R$) varying from 10 $\mu$m to 200 $\mu$m~\cite{RR} and an ejection velocity $V_0=21$ m/s~\cite{VE}. The value of the drag coefficient $\lambda$ is estimated by using the relation \begin{equation} \lambda=\frac{1}{2}C_D\rho_a\,S\,v \label{eq4} \end{equation} where $S$ is the projected cross sectional area of the spherical droplet of diameter $D$, $v$ is its velocity, and the following expression is used for $C_D$~\cite{hongping} to extend the validity of the model beyond the regime of laminar flow: \begin{equation} C_D=\frac{24}{Re}+\frac{6}{1+\sqrt{Re}}+0.4 \end{equation} The strength of the noise is fixed by the fluctuation--dissipation (Einstein) relation~\cite{pathria}, $\kappa=2\lambda K_BT$, where $K_B=1.38\times 10^{-23}$ J/K is the Boltzmann constant. It may be noted that for $C_D=24/Re$ Stokes' law is recovered. By solving the set of Eqs.~\ref{eq1}, \ref{eq2} and \ref{eq3} we obtain the trajectories of the droplets, which enables us to calculate the horizontal distance traveled by the droplets from the point of ejection as a function of time, $L(t)=\sqrt{x(t)^2+y(t)^2}$. Its maximum value, $L_{\text{max}}$, dictates the stationary distance to be maintained between infected and healthy persons to prevent transmission of the virus. The solution of these equations can also be used to estimate the maximum time ($t_{\text{max}}$) of suspension of the droplets. In the following, results for both $L_{\text{max}}$ and $t_{\text{max}}$ are presented. \section{Results} The virus-carrying droplets are ejected with different sizes and initial velocities into ambient air with widely varying climatic conditions. Therefore, results for different droplet sizes under different meteorological conditions are exhibited here to understand the prevention measures to be adopted to avoid infection.
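The Monte-Carlo integration of Eqs.~\ref{eq1} and \ref{eq2} can be sketched as below. This is a minimal constant-mass version: the evaporation equation, Eq.~\ref{eq3}, is omitted for brevity, the drag follows Eq.~\ref{eq4} with the $C_D$ correlation above, and the noise strength uses the fluctuation--dissipation relation; the air-property values are assumed typical ones, not quoted from the text.

```python
import numpy as np

RHO_A, ETA_A = 1.2, 1.8e-5        # air density (kg/m^3), viscosity (Pa s): assumed values
RHO_L, G, KB = 1000.0, 9.8, 1.38e-23

def drag_coefficient(Re):
    """C_D correlation valid beyond the laminar (Stokes) regime."""
    return 24.0 / Re + 6.0 / (1.0 + np.sqrt(Re)) + 0.4

def simulate_droplet(D, v0=21.0, H0=1.7, T=293.0, dt=2e-4, t_max=60.0, seed=0):
    """Euler--Maruyama integration of the Langevin equation for one droplet
    of fixed diameter D (m) in still air, ejected horizontally at speed v0
    from height H0.  Returns (horizontal distance traveled, flight time)."""
    rng = np.random.default_rng(seed)
    M = np.pi * D**3 * RHO_L / 6.0            # droplet mass
    S = np.pi * D**2 / 4.0                    # projected cross-section
    pos = np.array([0.0, 0.0, H0])
    vel = np.array([v0, 0.0, 0.0])
    t = 0.0
    while pos[2] > 0.0 and t < t_max:
        speed = np.linalg.norm(vel)
        Re = max(RHO_A * speed * D / ETA_A, 1e-12)
        lam = 0.5 * drag_coefficient(Re) * RHO_A * S * speed   # drag coefficient, kg/s
        kappa = 2.0 * lam * KB * T                             # fluctuation--dissipation
        noise = np.sqrt(kappa * dt) * rng.standard_normal(3)
        vel += (-lam * vel / M - np.array([0.0, 0.0, G])) * dt + noise / M
        pos += vel * dt
        t += dt
    return np.hypot(pos[0], pos[1]), t
```

For droplets this large the stochastic term is numerically negligible; it matters for the micron-scale aerosols, for which diffusion competes with gravity.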
We assume that the ejection velocity is 21 m/s (unless stated otherwise), which is close to its highest possible value as found experimentally~\cite{VE,zhu}. For a given wind flow velocity, this gives the maximum distance that a droplet can travel, which in turn provides the maximum risk involved and accordingly helps in deciding the safest possible measures to be taken. In the present work the weather effects of temperature, relative humidity and wind flow have been taken into account. The effects of evaporation and the correction to the drag force for flow beyond the laminar regime have been included in all the results displayed below unless explicitly mentioned. We take $T=20^\circ$C whenever the value of $T$ is not mentioned. \begin{figure}[ht] \includegraphics[width=18pc,clip=true]{C_x_H.eps}\hspace{2pc} \caption{The variation of the height ($H(t)$) with the horizontal distance ($L(t)=\sqrt{x^2+y^2}$) is shown for droplets of various radii for the initial ejection velocity $V_0= 21$ m/s and the peak value of the wind flow velocity $u_0$=0.1 m/s. The results contain the effects of evaporation at an ambient temperature of $20^\circ$ C. } \label{fg1} \end{figure} In Fig.~\ref{fg1} the variation of the height ($H$) with the horizontal distance ($L$) for droplets of different radii released at a height of 1.7 meter is depicted. It is seen that a large droplet of radius 200 $\mu$m propagates up to a distance of 1.4 meter horizontally in still air and 1.45 meter in flowing air with a peak wind velocity $u_0=0.1$ m/s. The interplay between the ejection velocity and the wind velocity can be understood from the following discussion.
It may be mentioned here that the terminal velocity ($v_t$) for a droplet of diameter $D$ in air is given by \begin{equation} v_t=\sqrt{\frac{4}{3}\,g\frac{\rho-\rho_a}{\rho_a}\frac{D}{C_D}} \end{equation} which reduces to the well known expression for $v_t$ in the laminar flow regime (with $C_D=24/Re$): \begin{equation} v_t=\sqrt{\frac{gD}{18}\,Re\,\frac{\rho-\rho_a}{\rho_a}} \end{equation} The effects of wind flow are insignificant if the terminal velocity of a droplet is larger than the wind flow velocity. For a large droplet of size 200 $\mu$m, $v_t$ ($\sim \sqrt{D}$) is larger than $u_0$, which makes the effect of wind flow marginal, as observed in the results displayed in Fig~\ref{fg1}. A droplet of size 100 $\mu$m travels a distance of 0.55 meter and 0.7 meter in still and flowing air respectively, implying that the wind flow has a larger effect on smaller droplets, as their terminal velocity is smaller. For smaller droplets of size 50 $\mu$m the effect of wind flow is larger, as shown in Fig.~\ref{fg1}. We have observed that droplets of size 25 $\mu$m evaporate to aerosols with a mean radius of 2 $\mu$m. The motion of these aerosols is controlled by the diffusive force, and therefore these droplets have the ability to float in the air for a longer time (see also~\cite{SChatterjee}). The results displayed in Fig.~\ref{fg1} illustrate that the social distance to be maintained is about 1.5 meter. For high wind speed the effects of terminal velocity will be small. For example, a droplet of radius $100 \mu$m can travel about 6.6 meter for a wind speed of 2 m/s~\cite{HLi}. \begin{figure}[ht] \includegraphics[width=7.0cm]{C_tH_RH.eps} \includegraphics[width=7.0cm]{C_tH_RH_T.eps} \caption{(Color online) Left panel: The change in the height, $H(t)$, as a function of time for droplets of different sizes. Here the initial ejection velocity is $V_0=21$ m/s and the peak value of the wind flow velocity is $u_0$=0.1 m/s at $T=20^\circ$ C.
Right panel: Same as left panel, showing the sensitivity of the results to the ambient temperature ($T=20^\circ$ C and $35^\circ$ C). The results are derived with the inclusion of evaporation.} \label{fg2} \end{figure} Fig.~\ref{fg2} illustrates the time that droplets of different sizes take to settle gravitationally under different conditions of relative humidity and wind flow. We find that droplets at smaller $RH$ evaporate faster, which weakens the effect of gravity due to the loss of mass. In such cases the time of suspension in the air is prolonged before the droplets settle gravitationally on the ground. For larger values of $RH$ the evaporation process is slowed down, resulting in a smaller loss of mass and hence forcing the droplet to fall to the ground under the action of gravity. Comparison of the results for a droplet of initial radius $50\mu$m displayed in Fig.~\ref{fg2} for $RH=40\%, 60\%$ and $80\%$ substantiates this fact, indicating that the effects of $RH$ are significant. The smaller droplets with initial radius 25 $\mu$m, however, remain suspended in the air after evaporating to droplet nuclei of mean radius 2 $\mu$m, which do not settle under the action of gravity but continue to diffuse in the air, making the use of masks mandatory to stop these aerosols. At higher temperature ($35^\circ$ C) the evaporation becomes faster. It is observed that at high $RH$ and low $T$ the effect of evaporation is small and hence the virus may survive for a longer time: evaporation is less effective at high $RH$ and low $T$, so smaller droplets survive longer in the air under these conditions. The relation between the climatic conditions, an index of the airborne infection rate, and the concentration of particles in saliva can be found in Ref.~\cite{dbouk3} (see also \cite{dbouk2}).
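The terminal-velocity formula quoted above is implicit, since $C_D$ depends on $Re(v_t)$; it can be resolved by a fixed-point iteration starting from the Stokes-regime value. A sketch (the air properties are assumed typical values, not quoted from the text):

```python
import numpy as np

def drag_coefficient(Re):
    """C_D correlation used in the text (reduces to Stokes' 24/Re at small Re)."""
    return 24.0 / Re + 6.0 / (1.0 + np.sqrt(Re)) + 0.4

def terminal_velocity(D, rho=1000.0, rho_a=1.2, eta_a=1.8e-5, g=9.8, n_iter=100):
    """Solve v_t = sqrt((4/3) g ((rho - rho_a)/rho_a) D / C_D(Re)) for a
    droplet of diameter D (m) by fixed-point iteration."""
    v = g * D**2 * (rho - rho_a) / (18.0 * eta_a)     # Stokes initial guess
    for _ in range(n_iter):
        Re = rho_a * v * D / eta_a
        v = np.sqrt(4.0 / 3.0 * g * (rho - rho_a) / rho_a * D / drag_coefficient(Re))
    return v
```

For a droplet of diameter 200 $\mu$m the iteration converges to roughly 0.7 m/s, noticeably below the Stokes estimate, consistent with the statement that wind effects are marginal only when $v_t$ exceeds the wind speed.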
\begin{figure}[ht] \includegraphics[width=7.0cm]{C_x_H1.eps} \includegraphics[width=7.0cm]{C_x_H2.eps} \caption{(Color online) Left panel: The change in the height, $H$, as a function of $L$ for a droplet of radius 100 $\mu$m for different peak values ($u_0$) of the wind velocity profile at $T=20^\circ$ and $RH=60\%$. The droplet is ejected in a room of size 5 meter with an initial ejection velocity of 21 m/s. Right panel: Same as the left panel for wind velocities of constant value (independent of the space coordinate) as indicated in the figure.} \label{fg2a} \end{figure} The effects of wind flow in indoor (left panel) and outdoor (right panel) conditions are demonstrated in Fig.~\ref{fg2a}. The left panel shows the results for the flow profile described above for various values of $u_0$, say, in a room of size 5 meter. The right panel shows the results for wind velocities of constant value (independent of the space coordinate), say, in open air. It is clearly observed that, for given temperature, relative humidity and ejection velocity, the wind velocity affects the results substantially when it is larger than the terminal velocity discussed above. Droplets released by an infected individual can travel a distance much larger than the prescribed 2-meter separation if the wind velocity is large. \begin{figure}[ht] \includegraphics[width=18pc,clip=true]{C_LR.eps}\hspace{2pc} \caption{The variation of the maximum horizontal distance ($L_{max}$) traveled by the droplets as a function of the droplet radius for different relative humidities. Here the initial ejection velocity is $V_0=21$ m/s and the peak value of the wind flow velocity is $u_0$=0.1 m/s.} \label{fg3} \end{figure} In Fig.~\ref{fg3} we display the variation of the maximum horizontal distance traveled by droplets as a function of their radius for different relative humidities.
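The interplay of wind and settling can be bracketed with a back-of-the-envelope estimate: once the initial jet has decayed, the horizontal range is roughly the wind speed times the settling time. A rough sketch under Stokes drag; the property values and the 1.7 m release height are illustrative assumptions, not parameters of this work:

```python
# Assumed properties (illustrative only)
G, RHO_W, RHO_A, MU_A = 9.81, 1000.0, 1.2, 1.8e-5

def settling_time(D, height=1.7):
    """Time (s) to fall from an assumed mouth height at the Stokes terminal velocity."""
    v_t = G * D**2 * (RHO_W - RHO_A) / (18.0 * MU_A)
    return height / v_t

def horizontal_range(D, wind, height=1.7):
    """Advection distance ~ wind speed x settling time (evaporation and jet ignored)."""
    return wind * settling_time(D, height)

D = 100e-6  # 100 um droplet
for wind in (0.1, 2.0):
    print(f"u = {wind} m/s -> L ~ {horizontal_range(D, wind):.1f} m")
```

For a 100 $\mu$m droplet at 2 m/s this crude estimate gives roughly 11 m, the same order of magnitude as the 6.6 m quoted earlier from the full dynamics; the difference reflects the neglected evaporation and flow profile.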
We have checked that in still air bigger droplets travel a larger distance for the same initial ejection velocity, owing to inertia. Droplets of small and intermediate sizes are strongly influenced by the $RH$. We notice only mild effects of flow and $RH$ on larger droplets. Droplets of intermediate radii are influenced the most by variations in $RH$ and flow, mainly because their lifetime in air is longer than that of the smaller droplets, which undergo quick evaporation at lower $RH$. It is also clear that the prevention of mass loss in the absence of evaporation allows small droplets to travel a longer distance. \begin{figure}[ht] \includegraphics[width=18pc,clip=true]{C_tR.eps}\hspace{2pc} \caption{The variation of the maximum time the droplets take to settle on the ground under the action of gravity, and the time the smaller droplets take to evaporate into aerosols of average radius $2\mu$m. The sensitivity of the results to the relative humidity is also shown for $u_0=0.1$ m/s.} \label{fg4} \end{figure} \begin{figure}[ht] \includegraphics[width=18pc,clip=true]{Eva_R_t.eps}\hspace{2pc} \caption{Change in the running radius, $R(t)$, normalized to the initial radius $R_0$, as a function of time for different relative humidities. The values of $V_0$ and $u_0$ are taken as $21$ m/s and $0.1$ m/s, respectively.} \label{fg5} \end{figure} \begin{figure}[ht] \includegraphics[width=18pc,clip=true]{Eva_M_t.eps}\hspace{2pc} \caption{Same as Fig.~\ref{fg5} for the running mass normalized to the initial mass of the droplets.} \label{fg6} \end{figure} Now we estimate the maximum time that evaporating droplets of different radii remain suspended in the air before settling on the ground under different weather conditions. The relevant results are displayed in Fig.~\ref{fg4}. The effects of evaporation are found to be significant for small droplets.
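The $RH$ dependence of the droplet lifetime can be sketched with the classical $d^2$-law, in which the squared diameter shrinks linearly in time at a rate proportional to $(1-RH)$. This is a simplified stand-in for the evaporation model solved in this work, and the rate constant below is an assumed order-of-magnitude value for water near room temperature:

```python
def evaporation_time(D0, RH, K0=1.1e-9):
    """d^2-law lifetime: D(t)^2 = D0^2 - K*t with K = K0*(1 - RH).

    D0 in metres, RH as a fraction; K0 (m^2/s) is an assumed rate constant,
    not a parameter fitted in the paper.
    """
    if RH >= 1.0:
        return float("inf")   # saturated air: no net evaporation
    return D0**2 / (K0 * (1.0 - RH))

for RH in (0.4, 0.6, 0.8):
    t = evaporation_time(50e-6, RH)
    print(f"RH = {RH:.0%}: 50 um droplet evaporates in ~{t:.1f} s")
```

The sketch reproduces the qualitative trend of Figs.~\ref{fg2} and \ref{fg4}: higher $RH$ slows the mass loss and lengthens the droplet's life as a droplet.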
The time of suspension is longer without evaporation. At higher $RH$ the droplets have a higher survival probability and hence travel a larger distance in the air, as indicated by the results shown here. However, the suspension times of intermediate and smaller sized droplets are very sensitive to both evaporation and $RH$. Through the process of evaporation smaller droplets may generate isolated viruses, which may survive for more than an hour~\cite{swxong}. Droplets of larger radius fall to the ground quickly because the gravitational force dominates the diffusive force, whereas droplets of intermediate radius stay in the air longer because the gravitational force is balanced by the diffusive forces. In Fig.~\ref{fg5} we plot the change in the running radius $R$ of the droplets, normalized to the initial value $R_0$, for different relative humidities. The slower loss of mass through evaporation at higher $RH$ allows the droplet to survive longer. It is noticed that smaller droplets are very sensitive to evaporation and generate droplet nuclei before reaching the ground. In Fig.~\ref{fg6} the change in the running mass $M$ of the droplets, normalized to the initial mass $M_0$, is illustrated as a function of time for different relative humidities. The results are consistent with those displayed in Fig.~\ref{fg5} and clearly indicate the effect of $RH$ on the evolution of the droplets. For example, a droplet of radius $50~\mu$m loses approximately $60\%$ and $20\%$ of its mass through evaporation at $RH=60\%$ and $80\%$, respectively. \section{Summary and Discussions} We have investigated the evolution of the droplets ejected during coughing, sneezing or speaking by solving the Langevin equation with the inclusion of evaporation of the droplets. The drag, diffusive and gravitational forces are included in this study.
The correction to the drag force in the non-laminar flow regime due to large Reynolds number has been implemented through an appropriate parametrization of experimental data. Droplets of various sizes have been considered with a large ejection velocity (21 m/s) so as to estimate the upper limit of the distance covered by the droplets. The effects of different weather conditions have been taken into account through the temperature, relative humidity and wind flow. It is found that the maximum distance that a large droplet of size 200 $\mu$m travels is about 1.5 meter ($u_0=0.1$ m/s), which is not affected significantly by a low wind velocity. Smaller droplets evaporate after traversing some distance, and the resulting aerosols can diffuse through the air far away from the point of ejection~\cite{Bouroaiba} owing to the weaker gravitational influence. It was found~\cite{Liu} that in two Wuhan hospitals micrometer and sub-micrometer droplets carrying SARS-CoV-2 were present at a distance of about 3 meters from the bed of an infected person. An infected person emits droplets of varying sizes. In the present work we find that smaller droplets can also be created by the mechanism of evaporation (see also~\cite{FanLia}); they can likewise be created by fragmentation~\cite{xuwang}. Droplets of smaller size evaporate to create aerosols, and the smaller the size, the quicker the evaporation. Isolated viruses generated by the process of evaporation can survive in the air for more than an hour~\cite{swxong}. The evolution of these droplets is controlled by the diffusive forces, and such droplets may remain suspended in the air for hours. However, it has also been reported~\cite{jama} that a multi-phase turbulent gas cloud is produced along with the droplets at the time of coughing and sneezing. Droplets within the envelope of this gas cloud may evade evaporation and prolong their survival as isolated droplets.
Droplets of larger size (say, 100 $\mu$m) can travel a distance as large as 3.6 meter under indoor conditions, and even more outdoors if the wind flow velocity is large. It is appropriate to mention here that intermediate-size droplets are very sensitive to the weather conditions: the climatic conditions determine whether they settle on the ground under the action of gravity or evaporate to aerosols. This indicates that the weather conditions play a crucial role in deciding the severity of the spread of the infection. Therefore, masks and face shields should be used to block the virus~\cite{Howard,pnas,verma}. The maintenance of ventilation in indoor situations is also very important to avoid infection~\cite{NSen,SBathula}. Maintaining only a social distance of 2 meter as a norm to avoid the virus is not corroborated by the present investigation, because the aerosols created by evaporation may remain suspended in the air for more than an hour. The impact of diffusion, represented by the term $\xi(t)$ appearing on the right hand side of Eq.~\ref{eq2}, is important for droplets of smaller size, say $5~\mu$m or less; its impact on larger droplets is insignificant. \section*{Data Availability Statement} The data that support the findings of this study are available within the article. \section*{Acknowledgement} SKD would like to acknowledge IIT Goa for internal funding (No. 2020/IP/SKD/005) and Professor Barada Kanta Mishra for useful discussions and encouragement. Fruitful discussions with Raja Mitra are thankfully acknowledged.
\section{Introduction} Linear codes are widely used for transmitting data over noisy transmission media. The dimension of a code corresponds to the throughput of the link, i.e. the amount of transmitted information, while the codimension measures the so-called redundancy, the amount of information needed to detect and possibly correct errors in the transmitted data. The linearity of a code greatly simplifies the encoding and decoding processes, which makes the implemented algorithms highly efficient. For obvious reasons, binary codes are used most often, which considerably narrows the spectrum of attainable code quality. Using a larger number of states (finite fields of characteristic greater than two) yields a flexible structure for the collection of linear codes. \section{Linear codes} Let ~$k, n, p, w, q \in\mathbb{N}, ~~p~ - \textrm{a prime}, ~~q=p^w, ~k\leqslant n$. \newline \begin{df} Every ~$k$-dimensional vector subspace ~$C$~ of the ~$n$-dimensional space ~$\mathbb{F}_q^n$~ is called a \emph{linear code} of length $n$ and dimension ~$k$ ~over the field ~$\mathbb{F}_q$ ~~(\cite{SB}, \cite{VP}). \end{df} The choice of a basis ~~$B=(b_1,\ldots, b_k), ~b_1,\ldots,b_k\in C \subset \mathbb{F}_q^n$~ induces a monomorphism of vector spaces \begin{displaymath} \iota \colon \mathbb{F}_q^k \longrightarrow \mathbb{F}_q^n, ~~ \left( \begin{array}{c} \xi_1\\ \vdots\\ \xi_k \end{array} \right) \mapsto \sum_{j=1}^{k} \xi_j b_j, ~~\operatorname{im}(\iota) = C. \end{displaymath} called a \emph{linear encoding}. We obtain \begin{displaymath} 0 \longrightarrow \mathbb{F}_q^k \xrightarrow{~~\iota~~} \mathbb{F}_q^n \xrightarrow{~~\pi~~} \mathbb{F}_q^n/C \longrightarrow 0 \end{displaymath} the so-called short exact sequence of vector spaces, with ~$\operatorname{codim} C = \dim \mathbb{F}_q^n/C = n-k$. ~ Composing ~$\pi$~ with an arbitrary isomorphism ~~$\mathbb{F}_q^n/C \xrightarrow{~~\approx~~} \mathbb{F}_q^{n-k}$~~ we again obtain a short exact sequence.
The operator (matrix) ~$H$~ is called the \emph{annihilator, or check matrix,} of the code ~$C$. \begin{displaymath} 0 \longrightarrow \mathbb{F}_q^k \xrightarrow{~~\iota~~} \mathbb{F}_q^n \xrightarrow{~~H~~} \mathbb{F}_q^{n-k} \longrightarrow 0 \end{displaymath} The codimension ~$\operatorname{codim} C = n-k$~ is the ``number of control degrees of the code'', its redundancy, while the dimension ~$\dim C = k$~ is its ``information content''. \newline The basis vectors $b_1,\ldots,b_k \in \mathbb{F}_q^n$ are linearly independent, so we can find a subsequence $1\leqslant j_1 < \ldots < j_k \leqslant n$ such that the matrix \begin{displaymath} b_{j_1,\ldots,j_k} = \left[ \begin{matrix} b_{j_1,1} & \ldots & b_{j_1,k}\\ \vdots & \vdots & \vdots \\ b_{j_k,1} & \ldots & b_{j_k,k} \end{matrix} \right] \end{displaymath} is non-singular. \begin{displaymath} P \cdot B \cdot b_{j_1,\ldots,j_k}^{-1} = \left[ \begin{matrix} 1 & \ldots & 0\\ \vdots & \ddots & \vdots \\ 0 & \ldots & 1 \\ a_{1,1} & \ldots & a_{1,k} \\ \vdots & \ddots & \vdots \\ a_{n-k,1} & \ldots & a_{n-k,k} \end{matrix} \right] \end{displaymath} Here ~$P$~ is a suitable permutation matrix of the coordinate axes of the space ~~$\mathbb{F}_q^n$. ~This is the so-called \emph{standard} basis form of the linear code ~$C$. For the standard form of the code ~$C$, \begin{displaymath} B = \left[\begin{array}{c} I_{k,k}\\ A \end{array}\right] \end{displaymath} the annihilator (check matrix) ~~$H$~~ takes the form \begin{displaymath} H = \left[\begin{array}{cc} -A & I_{n-k,n-k} \end{array}\right], \end{displaymath} where ~$I_{k,k}, ~I_{n-k,n-k}$~ are identity matrices of the appropriate sizes. \subsection{The Grassmannian} \begin{df} The set of all ~$k$-dimensional subspaces of the ~$n$-dimensional space ~$\mathbb{F}_q^n$~ is called the \emph{Grassmann variety} or the \emph{Grassmannian}, denoted \begin{displaymath} Grass( k, n, \mathbb{F}_q ) = \{ V\colon V\subset \mathbb{F}_q^n \wedge \dim_{\mathbb{F}_q} V = k \}.
\end{displaymath} \end{df} From the standard form one sees that the Grassmannian ~$Grass( k, n, \mathbb{F}_q )$~ is a variety of dimension ~$k\cdot (n-k)$ over the field ~$\mathbb{F}_q$~ and that it can be naturally embedded as a quadric in projective space \begin{displaymath} Grass( k, n, \mathbb{F}_q ) \hookrightarrow \mathbb{P}( \varLambda^k \mathbb{F}_q^n ) \end{displaymath} \begin{displaymath} \operatorname{span}( v_1, \ldots, v_k ) \mapsto\operatorname{span}( v_1\wedge \ldots \wedge v_k ). \end{displaymath} The full linear group ~$GL(\mathbb{F}_q^n)$~ acts transitively on the subspaces of a fixed dimension, hence \begin{stw} The Grassmannian is a homogeneous space \begin{displaymath} Grass( k, n, \mathbb{F}_q ) \simeq GL(\mathbb{F}_q^n) / F( k, n, \mathbb{F}_q ) \end{displaymath} where ~$F( k, n, \mathbb{F}_q )$~ is the group of matrices of the form \begin{displaymath} \left[ \begin{array}{cc} a & b\\ 0 & c \end{array} \right] \end{displaymath} with \begin{displaymath} a \in GL(\mathbb{F}_q^k ), ~~~c \in GL(\mathbb{F}_q^{n-k}), ~~~ b \in M( k, n-k, \mathbb{F}_q ). \end{displaymath} Here ~$b$~ is an arbitrary rectangular matrix with ~$k$ ~rows, ~$n-k$ ~columns and entries in the field ~$\mathbb{F}_q$. \end{stw} Since the linear group consists of \begin{align*} \# ~~GL( \mathbb{F}_q^n ) &= (q^n-1)(q^n-q)(q^n-q^2)\cdots (q^n-q^{n-1}) \\ &=(q^n-1)(q^{n-1}-1)\cdots (q-1)\cdot q^{\binom{n}{2}} \end{align*} elements, we obtain \begin{wn} The number of elements of the Grassmann variety equals \begin{align*} \# ~~Grass( k, n, \mathbb{F}_q ) = \dfrac{(q^n-1)(q^{n-1}-1)\cdots (q^{n-k+1}-1)}{(q^k-1)(q^{k-1}-1)\cdots (q-1)}. \end{align*} \end{wn} Below we list a few examples relating the code dimension (~$k$), the code length (~$n$), and the number of elements of the field $\mathbb{F}_q$ (~$q$) to the number of elements of the Grassmannian (~$\# ~[k,n]_q$).
\begin{displaymath} \begin{array}{rrrr} k & n & q & \# ~[k,n]_q\\ \\ 4 & 7 & 2^4 & 301\,490\,686\,407\,185\\ 4 & 8 & 2^4 & 19\,758\,795\,115\,067\,683\,345\\ 8 & 16 & 2 & 63\,379\,954\,960\,524\, 853\,651\\ 3 & 6 & 7^2 & 1\,663\,045\,363\,565\,300 \end{array} \end{displaymath} \section{Code generation} Consider now a prime number $p>2$ and natural numbers $k,~n=2k$. We will look for $k$-dimensional linear codes of length $n$, i.e. the code length will be twice the dimension. The Grassmannian ~$Grass(k, 2k, \mathbb{F}_p)$~ is ``richest in the half dimension'', as it consists of \begin{displaymath} \dfrac{(p^{2k}-1)(p^{2k-1}-1)\cdots (p^{k+1}-1)}{(p^k-1)(p^{k-1}-1)\cdots (p-1)} \end{displaymath} elements. The vector space $\mathbb{F}_p^n$ over the field $\mathbb{F}_p$ can be regarded as the field $\mathbb{F}_{p^n}$, obtained as the degree-$n$ extension of the prime field $\mathbb{F}_p$ by an irreducible polynomial $f\in\mathbb{F}_p[X], ~\deg f = n$. From now on we identify in this way the field $\mathbb{F}_{p^n}$ with the vector space $\mathbb{F}_p^n$ over the field $\mathbb{F}_p$: \begin{displaymath} \mathbb{F}_{p^n} \simeq_f \mathbb{F}_p^n. \end{displaymath} Consider the Frobenius automorphism \begin{displaymath} \sigma: \mathbb{F}_p^n \to \mathbb{F}_p^n, ~~\sigma(x)= x^p, \end{displaymath} whose $n$-th iterate \begin{displaymath} \sigma^n(x)= x^{p^n} \end{displaymath} is the identity ($Id=1$) on $\mathbb{F}_p^n$. Since $n=2k$, the $k$-th iterate $\sigma^k(x)= x^{p^k}$ is an involution. Denote it by $\tau$.
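As a cross-check, the Grassmannian cardinalities tabulated above can be reproduced directly from the counting formula; a short script (the function name is ours):

```python
def grassmannian_size(k, n, q):
    """Number of k-dimensional subspaces of F_q^n (the Gaussian binomial [n,k]_q)."""
    num = den = 1
    for i in range(k):
        num *= q**(n - i) - 1   # (q^n - 1)(q^{n-1} - 1) ... (q^{n-k+1} - 1)
        den *= q**(i + 1) - 1   # (q - 1)(q^2 - 1) ... (q^k - 1)
    return num // den           # always an exact integer

# Classical sanity check: F_2^4 has 35 two-dimensional subspaces
assert grassmannian_size(2, 4, 2) == 35

# One of the table entries: k=3, n=6, q=7^2
print(grassmannian_size(3, 6, 7**2))  # 1663045363565300
```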
We have obtained a linear operator \begin{displaymath} \tau: \mathbb{F}_p^n \to \mathbb{F}_p^n, ~~\tau^2= 1, \end{displaymath} which naturally decomposes the space $\mathbb{F}_p^n$ into the direct sum of two eigenspaces: \begin{displaymath} V^{+} = \ker (\tau-1), ~~V^{-} = \ker (\tau+1) \end{displaymath} \begin{displaymath} \mathbb{F}_p^n = V^{+}\oplus V^{-} \end{displaymath} Since the characteristic of the field is different from two, we obtain two idempotent operators (projections) \begin{displaymath} \pi^{+}, \pi^{-}: \mathbb{F}_p^n \to \mathbb{F}_p^n \end{displaymath} \begin{displaymath} \pi^{+}= \frac{1}{2}(1-\tau), ~~\pi^{-}= \frac{1}{2}(1+\tau), \end{displaymath} satisfying: \begin{displaymath} \pi^{+} + \pi^{-} = 1, \end{displaymath} \begin{displaymath} \pi^{+} \pi^{-} = 0 = \pi^{-} \pi^{+}, \end{displaymath} \begin{displaymath} \ker\pi^{+}= \operatorname{im}\pi^{-}= V^{+}, \end{displaymath} \begin{displaymath} \ker\pi^{-}= \operatorname{im}\pi^{+}= V^{-}. \end{displaymath} On the other hand, note that \begin{displaymath} V^{+} = \ker\pi^{+} = \ker (1-\tau) = \ker (1-\sigma^k), \end{displaymath} which means that $V^{+}$ is a degree-$k$ extension of the prime field $\mathbb{F}_p$, i.e. it is isomorphic to the finite field ~$\mathbb{F}_{p^k}$ with $p^k$ elements. Summing up, we have obtained a chain of fields, successive extensions of the prime field $\mathbb{F}_p$: \begin{displaymath} \mathbb{F}_p \varsubsetneq V^{+} \varsubsetneq \mathbb{F}_p^n \end{displaymath} where \begin{displaymath} V^{+} \simeq \mathbb{F}_{p^k}, ~~\mathbb{F}_p^n \simeq \mathbb{F}_{p^n}. \end{displaymath} \begin{displaymath} |V^{+} : \mathbb{F}_p|=k, ~|\mathbb{F}_p^n : V^{+}|=2, ~|\mathbb{F}_p^n : \mathbb{F}_p|=n=2k. \end{displaymath} Crucial for our construction is the degree-two extension $\mathbb{F}_p^n / V^{+}$ of the $p^k$-element field $V^{+}$ by the $p^n$-element field $\mathbb{F}_p^n$.
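The projection identities above can be verified mechanically. A small Python check over $\mathbb{F}_7$, using the explicit $6\times 6$ matrix of the involution $\tau$ from the worked example in the implementation section (the matrix entries are copied from there):

```python
P = 7
INV2 = pow(2, -1, P)   # 1/2 mod 7 == 4; exists because char != 2

# Involution tau = sigma^3 on F_7^6 (matrix taken from the worked example)
TAU = [
    [3, 5, 5, 0, 5, 0],
    [6, 5, 0, 3, 5, 0],
    [4, 4, 6, 2, 1, 0],
    [1, 2, 5, 1, 6, 0],
    [1, 2, 5, 2, 5, 0],
    [6, 2, 3, 6, 2, 1],
]

def matmul(A, B):
    """6x6 matrix product over F_7."""
    return [[sum(A[i][l] * B[l][j] for l in range(6)) % P
             for j in range(6)] for i in range(6)]

I6 = [[int(i == j) for j in range(6)] for i in range(6)]

assert matmul(TAU, TAU) == I6  # tau is an involution: tau^2 = 1

# Projections pi+ = (1 - tau)/2 and pi- = (1 + tau)/2 (the paper's convention)
pi_p = [[(I6[i][j] - TAU[i][j]) * INV2 % P for j in range(6)] for i in range(6)]
pi_m = [[(I6[i][j] + TAU[i][j]) * INV2 % P for j in range(6)] for i in range(6)]

assert all((pi_p[i][j] + pi_m[i][j]) % P == I6[i][j]
           for i in range(6) for j in range(6))                  # pi+ + pi- = 1
assert matmul(pi_p, pi_m) == [[0] * 6 for _ in range(6)]         # pi+ pi- = 0
assert matmul(pi_p, pi_p) == pi_p and matmul(pi_m, pi_m) == pi_m # idempotency
print("projection identities verified over F_7")
```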
Namely, we treat the field $\mathbb{F}_p^n$ as a two-dimensional vector space over the field $V^{+}$. The automorphism $\tau$ of the field $\mathbb{F}_p^n$, as a linear operator over the prime field $\mathbb{F}_p$, \begin{displaymath} \tau: \mathbb{F}_p^n \to \mathbb{F}_p^n \end{displaymath} leaves the subspace $V^{+}$ invariant, so we may treat it as a linear operator over the field $V^{+}\simeq\mathbb{F}_{p^k}$. If we choose an arbitrary element $\xi\in V^{-}, \xi\neq 0$, then multiplication by $\xi^{-1}$ establishes an isomorphism between the subspaces: \begin{displaymath} \xi^{-1}: V^{-} \to V^{+}, ~~\xi^{-1}(x)= \xi^{-1}\cdot x \end{displaymath} with inverse isomorphism \begin{displaymath} \xi: V^{+} \to V^{-}, ~~\xi(x)= \xi\cdot x \end{displaymath} Since $\xi^2\in V^{+}, \xi\notin V^{+}$, we have $\mathbb{F}_p^n \simeq V^{+}[\xi]$, i.e. the element $\xi$ realizes the degree-two extension $\mathbb{F}_p^n / V^{+}$. In this way we obtain a decomposition of the field $\mathbb{F}_p^n$ into a direct sum of linear subspaces \begin{align*} \mathbb{F}_p^n &\simeq_\xi V^{+}\oplus V^{+}\\ \mathbb{F}_p^n \ni u &\mapsto ( \xi^{-1}\cdot \pi^{+}u, \pi^{-}u ) \in V^{+}\oplus V^{+} \end{align*} To every one-dimensional (over $V^{+} \simeq \mathbb{F}_{p^k}$) vector subspace there naturally corresponds a $k$-dimensional (over $\mathbb{F}_p$) linear subspace of the space $\mathbb{F}_p^n$. \begin{displaymath} \varTheta: \mathbb{P}^1(\mathbb{F}_{p^k}) \longrightarrow Grass( k, 2k, \mathbb{F}_p ). \end{displaymath} Using homogeneous coordinates, the projective line $\mathbb{P}^1(\mathbb{F}_{p^k})$ can be identified with $\mathbb{F}_{p^k} \cup \{\infty\}$ \begin{displaymath} (V^{+}\oplus V^{+})/\mathbb{F}_{p^k} \ni [x, y] \longmapsto \left\{\begin{array}{lll} \dfrac{x}{y} & \text{for} & y\neq 0 \\ \infty & \text{for} & y=0. \end{array} \right.
\end{displaymath} The explicit form of the embedding $\varTheta$ is the following: \begin{displaymath} \mathbb{F}_{p^k} \simeq V^{+} \ni x \longmapsto \operatorname{span}_{\mathbb{F}_{p^k}} \{ x + \xi \} \subset \mathbb{F}_p^n \end{displaymath} \begin{displaymath} \infty \longmapsto V^{+} \subset \mathbb{F}_p^n. \end{displaymath} \section{Implementation} A generator of codes based on the method described above was implemented in C, using the PARI algebraic library (a Computer Algebra System from the University of Bordeaux). The finite field $\mathbb{F}_{7^6}$ of order $7^6 = 117\,649$ was used, with the following parameter values: \begin{displaymath} p=7, ~k=3, ~n=~2k=6, \end{displaymath} the irreducible polynomial $f \in\mathbb{F}_7[X] $ of degree $n=6$: \begin{displaymath} f(X)= X^6 + X^5 + 2X^4 + X^3 + 5X^2 + 3X + 2, \end{displaymath} and the generator (primitive root) $g$ of the field $\mathbb{F}_{7^6}$, of order $7^6-1 = 117\,648$: \begin{displaymath} g(X)= 3X^5 + 4X^4 + 5X^2 + 2X + 2. \end{displaymath} The computations use the library functions (\textit{ffinit(), ffgen(), ffprimroot(), fforder()}):
\begin{verbatim}
void init_kody(long prec)
{
  GEN p1;
  p = pol_x(fetch_user_var("p"));
  k = pol_x(fetch_user_var("k"));
  n = pol_x(fetch_user_var("n"));
  f = pol_x(fetch_user_var("f"));
  t = pol_x(fetch_user_var("t"));
  g = pol_x(fetch_user_var("g"));
  p = stoi(7);
  k = stoi(3);
  n = gmulsg(2, k);
  f = ffinit(p, gtos(n), -1);
  t = ffgen(f, -1);
  g = ffprimroot(t, NULL);
  p1 = fforder(g, NULL);
  {
    GEN j;
    for (j = gen_0; gcmp(j, p1) <= 0; j = gaddgs(j, 1))
      pari_printf(
  }
  return;
}
\end{verbatim}
The field $\mathbb{F}_{7^6}$ is realized as the quotient polynomial algebra: \begin{displaymath} \mathbb{F}_{7^6} \simeq \mathbb{F}_7[X]/(f). \end{displaymath} The computations are carried out in the ordered basis of the six-dimensional vector space $\mathbb{F}_{7^6} \simeq\mathbb{F}_7^6$ over the field $\mathbb{F}_7$: \begin{displaymath} ( X^5, X^4, X^3, X^2, X, 1 ).
\end{displaymath} The matrices of the Frobenius automorphism ~$\sigma: \mathbb{F}_7^6 \rightarrow \mathbb{F}_7^6$~ and of the involution $\tau= \sigma^3$: \begin{displaymath} \sigma = \left[\begin{array}{cccccc} 6 & 0 & 2 & 5 & 6 & 0\\ 4 & 5 & 6 & 4 & 1 & 0\\ 2 & 3 & 5 & 1 & 3 & 0\\ 4 & 2 & 1 & 3 & 2 & 0\\ 2 & 4 & 3 & 6 & 1 & 0\\ 6 & 6 & 4 & 6 & 2 & 1 \end{array} \right], ~~~ \tau = \left[\begin{array}{cccccc} 3 & 5 & 5 & 0 & 5 & 0\\ 6 & 5 & 0 & 3 & 5 & 0\\ 4 & 4 & 6 & 2 & 1 & 0\\ 1 & 2 & 5 & 1 & 6 & 0\\ 1 & 2 & 5 & 2 & 5 & 0\\ 6 & 2 & 3 & 6 & 2 & 1 \end{array} \right] \end{displaymath} the projection operators (idempotents) $\pi^{+}$ and $\pi^{-}$: \begin{displaymath} \pi^{+} = \left[\begin{array}{cccccc} 6 & 1 & 1 & 0 & 1 & 0\\ 4 & 5 & 0 & 2 & 1 & 0\\ 5 & 5 & 1 & 6 & 3 & 0\\ 3 & 6 & 1 & 0 & 4 & 0\\ 3 & 6 & 1 & 6 & 5 & 0\\ 4 & 6 & 2 & 4 & 6 & 0 \end{array} \right], ~~~ \pi^{-} = \left[\begin{array}{cccccc} 2 & 6 & 6 & 0 & 6 & 0\\ 3 & 3 & 0 & 5 & 6 & 0\\ 2 & 2 & 0 & 1 & 4 & 0\\ 4 & 1 & 6 & 1 & 3 & 0\\ 4 & 1 & 6 & 1 & 3 & 0\\ 3 & 1 & 5 & 3 & 1 & 1 \end{array} \right] \end{displaymath} bases of the subspaces ~$V^{+}$~ and ~$V^{-}$~ (as the respective columns of the matrices): \begin{displaymath} V^{+} = \left[\begin{array}{ccc} 6 & 1 & 0\\ 5 & 0 & 0\\ 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{array} \right], ~~~ V^{-} = \left[\begin{array}{ccc} 3 & 1 & 2\\ 0 & 4 & 5\\ 6 & 4 & 6\\ 1 & 0 & 0\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{array} \right] \end{displaymath} and the elements ~$\xi, ~\xi^{-1} \in V^{-}$: \begin{displaymath} \xi= 2X^5 + 6X^4 + 5X^3 + 5X^2 + 4, ~~~\xi^{-1}= 3X^5 + 6X^3 + X^2. \end{displaymath} To unify the notation, the basis of the space ~$\mathbb{F}_7^6$ is \begin{displaymath} ( X^5, X^4, X^3, X^2, X, 1 ) = ( e_1, e_2, e_3, e_4, e_5, e_6 ) \end{displaymath} and the basis of the subspace ~$V^{+}$ is \begin{align*} f_1 &= 6e_1 + 5e_2+e_3\\ f_2 &= e_1 + e_4 + e_5\\ f_3 &= e_6.
\end{align*} Multiplication in the field ~$\mathbb{F}_7^6$~ can be represented as a tensor \begin{displaymath} \mathbb{F}_7^6 \otimes_{\mathbb{F}_7} \mathbb{F}_7^6 \longrightarrow \mathbb{F}_7^6 \end{displaymath} \begin{displaymath} e_j\cdot e_k = \sum_{l=1}^6 c_{j,k}^l e_l, ~~~j, k= 1, \ldots, 6 \end{displaymath} where the coefficients ~$c_{j,k}^l\in\mathbb{F}_7$~ are the structure constants (of multiplication). When multiplication by the left factor is restricted to the subspace ~$V^{+} \simeq \mathbb{F}_{7^3}$~, we obtain a partial tensor in the form of three matrices \begin{displaymath} f_1\cdot = \left[\begin{array}{cccccc} 0 & 6 & 6 & 4 & 6 & 6\\ 2 & 6 & 5 & 3 & 3 & 5\\ 3 & 0 & 4 & 6 & 1 & 1\\ 0 & 2 & 6 & 1 & 5 & 0\\ 5 & 2 & 4 & 5 & 3 & 0\\ 2 & 2 & 6 & 2 & 2 & 0 \end{array} \right], ~~f_2\cdot = \left[\begin{array}{cccccc} 1 & 3 & 3 & 6 & 6 & 1\\ 4 & 4 & 6 & 2 & 5 & 0\\ 1 & 3 & 3 & 4 & 0 & 0\\ 6 & 4 & 6 & 2 & 3 & 1\\ 6 & 0 & 5 & 1 & 4 & 1\\ 1 & 1 & 2 & 2 & 5 & 0 \end{array} \right], ~~f_3\cdot = Id. \end{displaymath} The embedding generating ~$7^3+1=344$~ linear codes \begin{displaymath} \varTheta: \mathbb{P}^1(\mathbb{F}_{7^3}) \longrightarrow Grass( 3, 6, \mathbb{F}_7 ) \end{displaymath} is now realized as follows: \begin{displaymath} \mathbb{F}_{7^3} \simeq V^{+} \ni x \longmapsto \operatorname{span}_{\mathbb{F}_{7^3}} \{ x + \xi \} \subset \mathbb{F}_7^6 \end{displaymath} \begin{displaymath} \infty \longmapsto V^{+} \subset \mathbb{F}_7^6. \end{displaymath} In general, every one-dimensional subspace over ~$\mathbb{F}_{7^3}$~ is mapped to a subspace of dimension three over ~$\mathbb{F}_7$~ by means of the operation \begin{displaymath} \mathbb{F}_7^6 \ni u, ~~~\operatorname{span}_{\mathbb{F}_{7^3}} \{ u \} \mapsto \operatorname{span}_{\mathbb{F}_7} \{ f_1\cdot u, f_2\cdot u, f_3\cdot u \}. \end{displaymath} Below we present some of the functions that carry out the computation of the codes.
The function \textit{tuple()} generates the required tuples, while the function \textit{act()} called by it performs the corresponding matrix operations:
\begin{verbatim}
void tuple( int n, int k, int d,
            int (*f)( int*, int, int[_n][_k] ),
            int *a, int b[_n][_k])
{
  if( d > 0 ) {
    int j;
    for( j=0; j<n; ++j ) {
      a[d-1]= j;
      tuple( n, k, d-1, f, a, b );
    }
  }
  else
    f( a, k, b );
}

int act( int u[_k])
{
  int v[_n], w[_n], r[_n][_k];
  static unsigned long l= 1UL;

  mulV3( v, Vplus, u );
  addV( w, v, xi0 );
  mul3U( r, w );
  print3U( r );
}
\end{verbatim}
As a result of these operations we obtain the desired codes:
\begin{verbatim}
  1. [6 5 1 0 0 0 1 0 0 1 1 0 0 0 0 0 0 1]
  2. [5 2 4 5 4 0 6 2 6 3 4 0 2 6 5 5 0 4]
  3. [6 0 5 0 6 0 2 3 2 2 3 6 1 4 6 5 0 4]
...
344. [2 5 6 4 3 1 3 4 5 6 0 0 2 1 4 4 6 3]
\end{verbatim}
The next step will be a qualitative analysis of the obtained codes. \footnotesize \renewcommand\refname{REFERENCES}
\section*{Supplementary material} Table~\ref{tab:bcyield} shows the ratio $R(\ensuremath{p_{\rm T}},y)$ in each $(\mbox{$p_{\rm T}$}\xspace,y)$ bin. \begin{table}[!hbp] \tabcolsep 4mm \begin{center} \caption{\small \label{tab:bcyield}$R(\ensuremath{p_{\rm T}},y)$ in units of $10^{-2}$ as a function of \ensuremath{p_{\rm T}}\ and $y$. The first uncertainty is statistical and the second systematic.} \resizebox{\textwidth}{!}{% \begin{tabular}{@{}r@{$\,\ensuremath{p_{\rm T}}\,$}lccc|c@{}} \toprule \multicolumn{2}{c}{\ensuremath{p_{\rm T}}(\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace)} & $2.0<y<2.9$ & $2.9<y<3.3$ & $3.3<y<4.5$ & $2.0<y<4.5$ \\ \midrule $ 0<$ & $<2 $ & $ 0.67\pm0.10\pm0.01 $ & $ 0.73\pm0.10\pm0.01 $ & $ 0.35\pm0.06\pm0.01 $ & $ 0.54\pm0.05\pm0.01 $ \\ $ 2<$ & $<3 $ & $ 0.70\pm0.09\pm0.02 $ & $ 0.72\pm0.09\pm0.02 $ & $ 0.50\pm0.06\pm0.01 $ & $ 0.62\pm0.05\pm0.01 $ \\ $ 3<$ & $<4 $ & $ 0.62\pm0.08\pm0.01 $ & $ 0.58\pm0.08\pm0.01 $ & $ 0.57\pm0.07\pm0.02 $ & $ 0.59\pm0.05\pm0.01 $ \\ $ 4<$ & $<5 $ & $ 0.83\pm0.08\pm0.02 $ & $ 0.60\pm0.07\pm0.01 $ & $ 0.81\pm0.08\pm0.02 $ & $ 0.79\pm0.05\pm0.01 $ \\ $ 5<$ & $<6 $ & $ 0.90\pm0.09\pm0.02 $ & $ 0.78\pm0.09\pm0.01 $ & $ 0.76\pm0.09\pm0.02 $ & $ 0.83\pm0.06\pm0.01 $ \\ $ 6<$ & $<7 $ & $ 0.84\pm0.09\pm0.01 $ & $ 0.99\pm0.11\pm0.02 $ & $ 0.64\pm0.08\pm0.01 $ & $ 0.79\pm0.06\pm0.01 $ \\ $ 7<$ & $<8 $ & $ 0.95\pm0.10\pm0.01 $ & $ 0.74\pm0.11\pm0.01 $ & $ 0.65\pm0.09\pm0.01 $ & $ 0.82\pm0.06\pm0.01 $ \\ $ 8<$ & $<10 $ & $ 0.80\pm0.08\pm0.01 $ & $ 0.57\pm0.08\pm0.01 $ & $ 0.80\pm0.09\pm0.02 $ & $ 0.77\pm0.05\pm0.01 $ \\ $ 10<$ & $<14 $ & $ 0.70\pm0.06\pm0.01 $ & $ 0.75\pm0.09\pm0.01 $ & $ 0.60\pm0.08\pm0.01 $ & $ 0.68\pm0.05\pm0.01 $ \\ $ 14<$ & $<20 $ & $ 0.74\pm0.09\pm0.01 $ & $ 0.68\pm0.15\pm0.03 $ & $ 0.55\pm0.13\pm0.02 $ & $ 0.68\pm0.07\pm0.01 $ \\ \midrule $ 0<$ & $<20 $ & $ 0.76\pm0.03\pm0.01 $ & $ 0.70\pm0.03\pm0.01 $ & $ 0.58\pm0.03\pm0.01 $ & $ 0.68\pm0.02\pm0.01 $ \\ \bottomrule \end{tabular} } 
\end{center} \end{table} The results are compared with the theoretical predictions in Fig.~\ref{fig:res_theorycomp} and Fig.~\ref{fig:pt_y_rescomp}. For the ${\ensuremath{\B_\cquark^+}}\xspace$ meson, predictions following the $\alpha_s^4$ approach~\cite{Chang:2005hq} are shown. We use the CTEQ6LL~\cite{Pumplin:2002vw} parton distribution functions and the leading-order running $\alpha_s$; the characteristic energy scale is $Q^2 = \mbox{$p_{\rm T}$}\xspace^2 + m_{{\ensuremath{\B_\cquark^+}}\xspace}^2$, and the masses of the $b$ and $c$ quarks are set to $m_b=4.95\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace$ and $m_c=1.326\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace$. The normalization of the theoretical predictions uses $0.47\ensuremath{{\rm \,\upmu b}}\xspace$ as the {\ensuremath{\B_\cquark^+}}\xspace\ production cross-section in the whole phase space and 0.33\% for $\BF(\ensuremath{\bsubc\to\jpsi \pi^+})$~\cite{Qiao:2012hp}, corrected for the latest measurement of the ${\ensuremath{\B_\cquark^+}}\xspace$ lifetime. The theoretical prediction for the ${\ensuremath{\B^+}}\xspace$ cross-section is based on the fixed order + next-to-leading log (FONLL) framework~\cite{Cacciari:2012ny}. The uncertainties on the theory curves are those of the FONLL calculation, including the uncertainties from the $b$-quark mass, the renormalisation and factorisation scales, and the CTEQ6.6~\cite{Nadolsky:2008zw} parton distribution functions.
The FONLL predictions are scaled according to the measured branching fraction $\BF({\ensuremath{\B^+}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace K^+)=0.106\%$~\cite{PDG2014} and the \ensuremath{B^+}\ production cross-section of $38.9\ensuremath{{\rm \,\upmu b}}\xspace$ measured at $\sqrt{s}= 7\ifthenelse{\boolean{inbibliography}}{\ensuremath{~T\kern -0.05em eV}\xspace}{\ensuremath{\mathrm{\,Te\kern -0.1em V}}}\xspace$~\cite{LHCb-PAPER-2013-004}, increased by 20\% to account for the higher collision energy~\cite{LHCb-PAPER-2013-016}. \begin{figure}[!htbp] \includegraphics[width=7.5cm]{Fig5a} \includegraphics[width=7.5cm]{Fig5b} \includegraphics[width=7.5cm]{Fig5c} \caption{Ratio $R(\ensuremath{p_{\rm T}},y)$ as a function of $\mbox{$p_{\rm T}$}\xspace$ in the regions $2.0<y<2.9$ ({\it top left}), $2.9<y<3.3$ ({\it top right}), and $3.3<y<4.5$ ({\it bottom left}), with theoretical predictions following the $\alpha_s^4$ approach~\cite{Chang:2005hq} overlaid.} \label{fig:res_theorycomp} \end{figure} \begin{figure}[!htbp] \centering \includegraphics[width=7.5cm]{Fig6a} \includegraphics[width=7.5cm]{Fig6b} \caption{Ratio $R(\ensuremath{p_{\rm T}})$ as a function of $\ensuremath{p_{\rm T}}$ integrated over $y$ in the region 2.0$<y<$4.5 ({\it left}) and $R(y)$ as a function of $y$ integrated over $\ensuremath{p_{\rm T}}$ in the region 0$<\mbox{$p_{\rm T}$}\xspace<20\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$ ({\it right}), compared to the theoretical predictions following the $\alpha_s^4$ approach~\cite{Chang:2005hq}. } \label{fig:pt_y_rescomp} \end{figure} \clearpage \addcontentsline{toc}{section}{References} \setboolean{inbibliography}{true} \bibliographystyle{LHCb}
1411.2609
\section{Composed black-hole-scalar-field configurations.} While early no hair theorems have shown that asymptotically flat black holes cannot support regular static scalar configurations in their exterior regions \cite{Chas}, they have not ruled out the existence of non-static composed black-hole-scalar-field configurations. In fact, it has recently \cite{Hodstat} been demonstrated that rotating black holes can support linearized stationary scalar configurations (scalar `clouds' \cite{Notecloud,Barr}) in their exterior regions. Since non-linear (self-interaction) effects tend to stabilize the outside hair \cite{Nun,Hod11}, we conjectured in \cite{Hodstat} the existence of rotating black hole solutions endowed with genuine non-static scalar hair. These non-static hairy black-hole-scalar-field configurations are the non-linear counterparts of the linear scalar clouds studied analytically in \cite{Hodstat}. In a very interesting Letter, Herdeiro and Radu \cite{HerRa} have recently solved numerically the non-linear coupled Einstein-scalar equations, and confirmed the existence of these non-static hairy black-hole configurations. The composed black-hole-scalar-field configurations \cite{Notebos} explored in \cite{Hodstat,HerRa} are intimately related to the intriguing phenomenon of superradiant scattering of bosonic fields in rotating black-hole spacetimes \cite{Zel,PressTeu1,Ins1,Ins2}. In particular, the linearized stationary scalar configurations studied in \cite{Hodstat,HerRa} are characterized by orbital frequencies which are integer multiples of the central black-hole angular frequency \cite{Noteunits}: \begin{equation}\label{Eq3} \omega_{\text{field}}=m\Omega_{\text{H}}\ \ \ \ \text{with}\ \ \ \ m=1,2,3,...\ . \end{equation} It is well-established \cite{Zel,PressTeu1,Ins1,Ins2} that the energy flux of the field into the central spinning black hole vanishes for bosonic modes which satisfy the relation (\ref{Eq3}). 
In this case, the bosonic field is not swallowed by the central black hole. This suggests that stationary bosonic configurations which are in resonance with the central spinning black hole (that is, bosonic fields with orbital frequencies $\omega_{\text{field}}=m\Omega_{\text{H}}$) may survive in the spacetime region exterior to the black-hole horizon. In order to have genuine stationary (non-decaying) field configurations around the central black hole, one should also prevent the field from escaping to infinity. A natural confinement mechanism is provided by the gravitational attraction between the massive field and the central black hole. In particular, for a scalar field of mass $\mu$, low-frequency field modes in the regime \cite{Notedim} \begin{equation}\label{Eq4} \omega^2<\mu^2 \end{equation} are confined to the vicinity of the central black hole. As discussed above, the main goal of the present paper is to test the validity of the `no short hair' conjecture (\ref{Eq1}) \cite{Nun} beyond the regime of spherically-symmetric static black holes. To that end, we shall analyze the physical properties of the non-static (rotating) black-hole-scalar-field configurations \cite{Hodstat,HerRa} in the eikonal regime \begin{equation}\label{Eq5} M\mu\gg1\ , \end{equation} where $M$ is the mass of the central spinning black hole. \section{Description of the system.} The physical system we consider consists of a massive scalar field $\Psi$ linearly coupled \cite{Notelin} to an extremal Kerr-Newman black hole of mass $M$, angular-momentum per unit mass $a$, and electric charge $Q$. In Boyer-Lindquist coordinates $(t,r,\theta,\phi)$ the spacetime metric is given by \cite{Chan,Kerr,Newman} \begin{eqnarray}\label{Eq6} ds^2=-{{\Delta}\over{\rho^2}}(dt-a\sin^2\theta d\phi)^2+{{\rho^2}\over{\Delta}}dr^2+\rho^2 d\theta^2+{{\sin^2\theta}\over{\rho^2}}\big[a dt-(r^2+a^2)d\phi\big]^2 \end{eqnarray} where $\Delta\equiv r^2-2Mr+a^2+Q^2$ and $\rho^2\equiv r^2+a^2\cos^2\theta$.
The extremality condition implies that the degenerate horizon of the black hole is located at \begin{equation}\label{Eq7} r_{\text{H}}=M=\sqrt{a^2+Q^2}\ . \end{equation} The angular velocity of the black hole is given by \cite{Chan,Kerr,Newman} \begin{equation}\label{Eq8} \Omega_{\text{H}}={{a}\over{M^2+a^2}}\ . \end{equation} The dynamics of the linearized massive scalar field $\Psi$ in the Kerr-Newman black-hole spacetime is governed by the Klein-Gordon (Teukolsky) wave equation \begin{equation}\label{Eq9} (\nabla^\nu\nabla_{\nu}-\mu^2)\Psi=0\ . \end{equation} It proves useful to use the ansatz \cite{Noteanz} \begin{equation}\label{Eq10} \Psi(t,r,\theta,\phi)=\int\sum_{l,m}e^{im\phi}{S_{lm}}(\theta;{s}\epsilon){R_{lm}}(r;s,\mu,\omega)e^{-i\omega t}d\omega\ \end{equation} for the scalar wave field in (\ref{Eq9}), where \begin{equation}\label{Eq11} {s}\equiv {{a}\over{M}} \end{equation} is the dimensionless angular-momentum (spin) of the black hole, and \begin{equation}\label{Eq12} \epsilon\equiv M\sqrt{\mu^2-\omega^2}\ . \end{equation} The angular equation for ${S_{lm}}(\theta;{s}\epsilon)$, which is obtained from the substitution of (\ref{Eq10}) into (\ref{Eq9}), is given by \cite{Stro,Heun,Fiz1,Teuk,Abram,Hodasy} \begin{eqnarray}\label{Eq13} {1\over {\sin\theta}}{{d}\over{d\theta}}\Big(\sin\theta {{d S_{lm}}\over{d\theta}}\Big) +\Big[K_{lm}+({s}\epsilon)^2\sin^2\theta-{{m^2}\over{\sin^2\theta}}\Big]S_{lm}=0\ . \end{eqnarray} This angular equation is supplemented by the requirement that the angular functions $S_{lm}(\theta;{s}\epsilon)$ \cite{Noteangu} be regular at the poles $\theta=0$ and $\theta=\pi$. These boundary conditions single out the discrete set of angular eigenvalues $\{K_{lm}({s}\epsilon)\}$ with $l\geq |m|$ \cite{Abram}.
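As a numerical illustration of the background quantities defined above, the sketch below evaluates the extremal horizon radius, the horizon angular velocity, and the critical stationary-cloud frequency $\omega_{\text{c}}=m\Omega_{\text{H}}$ of Eqs. (3), (7), (8) and (12). The parameter values ($s$, the azimuthal index, and the field mass) are arbitrary illustrative choices, in units $G=c=\hbar=1$:

```python
import math

# Illustrative sketch of Eqs. (3), (7), (8), (11), (12): extremal
# Kerr-Newman background quantities. Parameter values are arbitrary.
M = 1.0                      # black-hole mass
s = 0.6                      # dimensionless spin s = a/M (Eq. 11)
a = s * M
Q = math.sqrt(M**2 - a**2)   # extremality: M = sqrt(a^2 + Q^2)  (Eq. 7)

r_H = math.sqrt(a**2 + Q**2)   # degenerate horizon radius, equals M
Omega_H = a / (M**2 + a**2)    # horizon angular velocity (Eq. 8)

m_az = 10                      # azimuthal harmonic index m
omega_c = m_az * Omega_H       # critical frequency omega = m Omega_H (Eq. 3)

mu = 1.05 * omega_c            # field mass chosen so omega_c^2 < mu^2 (Eq. 4)
eps = M * math.sqrt(mu**2 - omega_c**2)  # epsilon of Eq. (12), real for bound states

assert abs(r_H - M) < 1e-12    # extremal horizon sits at r = M
assert omega_c**2 < mu**2      # confinement (bound-state) condition
```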
We shall henceforth consider equatorial scalar modes in the eikonal regime \begin{equation}\label{Eq14} l=m\gg1\ \ \ \ \text{and}\ \ \ \ {s}\epsilon\gg 1\ , \end{equation} in which case the angular eigenvalues are given by \cite{Yang,Notetbp} \begin{equation}\label{Eq15} K_{mm}({s}\epsilon)=m^2-({s}\epsilon)^2+O(m)\ . \end{equation} The radial equation for ${R_{lm}}$, which is obtained from the substitution of (\ref{Eq10}) into (\ref{Eq9}), is given by \cite{Teuk,Stro} \begin{equation}\label{Eq16} \Delta{{d} \over{dr}}\Big(\Delta{{dR_{lm}}\over{dr}}\Big)+\Big[[(r^2+a^2)\omega-ma]^2 +\Delta[2ma\omega-\mu^2(r^2+a^2)-K_{lm}]\Big]R_{lm}=0\ . \end{equation} Note that the radial equation (\ref{Eq16}) for ${R_{lm}}$ is coupled to the angular equation (\ref{Eq13}) for ${S_{lm}}$ through the angular eigenvalues $\{{K_{lm}}({s}\epsilon)\}$ \cite{Notebr}. \section{Stationary bound-state resonances of the composed black-hole-scalar-field system.} In the present paper we shall explore the physical properties of the linearized {\it stationary} scalar configurations which characterize the composed Kerr-Newman-scalar-field system. These stationary bound-state resonances of the bosonic field are characterized by the critical frequency \begin{equation}\label{Eq17} \omega_{\text{field}}=\omega_{\text{c}}\equiv m\Omega_{\text{H}}\ \end{equation} for superradiant scattering in the black-hole spacetime [see Eq. (\ref{Eq3})]. The bound-state solutions of the radial equation (\ref{Eq16}) are characterized by a decaying field at spatial infinity \cite{Ins2}: \begin{equation}\label{Eq18} R(r\to\infty)\sim {{1}\over{r}}e^{-\epsilon r/r_{\text{H}}}\ \end{equation} with $\epsilon^2>0$ \cite{Noteepp}. Regular (finite energy) field configurations are also bounded at the black-hole horizon: \begin{equation}\label{Eq19} R(r=r_{\text{H}})<\infty\ .
\end{equation} The boundary conditions (\ref{Eq18}) and (\ref{Eq19}) single out the discrete family of radial eigenfunctions [along with the associated eigen field-masses, see Eq. (\ref{Eq31}) below] which characterize the bound-state stationary scalar configurations. We shall first obtain a simple analytic formula for the discrete spectrum of field masses, $\{\mu(m,{s};n)\}$ \cite{Notenr}, which characterize the stationary bound-state resonances of the massive scalar fields in the extremal Kerr-Newman black-hole spacetime. To that end, it proves useful to define a new dimensionless radial coordinate \cite{Teuk,Stro} \begin{equation}\label{Eq20} x\equiv {{r-M}\over {M}}\ , \end{equation} in terms of which the radial equation (\ref{Eq16}) becomes \begin{equation}\label{Eq21} x^2{{d^2R}\over{dx^2}}+2x{{dR}\over{dx}}+VR=0\ , \end{equation} where $V\equiv [M\omega_{\text{c}}(x+2)]^2-K+2Mm{s}\omega_{\text{c}}-(M\mu)^2[(x+1)^2+{s}^2]$. Remarkably, this radial equation for $R(x)$ can be solved {\it analytically} \cite{Abram,Hodstat}: \begin{equation}\label{Eq22} R(x)=C_1\times x^{-{1\over 2}+\beta}e^{-\epsilon x}M({1\over 2}+\beta-\kappa,1+2\beta,2\epsilon x)+C_2\times(\beta\to -\beta)\ , \end{equation} where $M(a,b,z)$ is the confluent hypergeometric function \cite{Abram} and $\{C_1,C_2\}$ are normalization constants. Here \begin{equation}\label{Eq23} \kappa\equiv{{\alpha}\over{\epsilon}}-\epsilon\ \ \ \ \text{with}\ \ \ \ \alpha\equiv (M\omega_{\text{c}})^2={{(m{s})^2}\over{(1+{s}^2)^2}}\ , \end{equation} and \cite{Notedel} \begin{equation}\label{Eq24} \beta^2\equiv{K+{1\over 4}-2Mm{s}\omega_{\text{c}}-(2M\omega_{\text{c}})^2+(M\mu)^2(1+{s}^2)}\ . \end{equation} The notation $(\beta\to -\beta)$ in (\ref{Eq22}) means ``replace $\beta$ by $-\beta$ in the preceding term''. Taking cognizance of Eqs.
(\ref{Eq8}), (\ref{Eq15}) and (\ref{Eq17}), one can express $\beta$ in the eikonal regime (\ref{Eq14}) in the form \begin{equation}\label{Eq25} \beta=\sqrt{\beta^2_0+\epsilon^2}\ \ \ \ \ \text{with}\ \ \ \ \ \beta^2_0\equiv m^2{{1-3{s}^2}\over{(1+{s}^2)^2}}[1+O(m^{-1})]\ . \end{equation} We shall now analyze the spatial behavior of the radial wave function (\ref{Eq22}) in the asymptotic regimes $x\to 0$ and $x\to\infty$: \newline (1) The behavior of the radial function (\ref{Eq22}) in the near-horizon $x\ll 1$ region is given by \cite{Abram} \begin{equation}\label{Eq26} R(x\to 0)\to C_1\times x^{-{1\over 2}+\beta}+C_2\times x^{-{1\over 2}-\beta}\ . \end{equation} From Eq. (\ref{Eq26}) one learns that a well-behaved [see Eq. (\ref{Eq19})] stationary field configuration is characterized by \cite{Noteregu,Dolh} \begin{equation}\label{Eq27} C_2=0 \ \ \ \ \text{and}\ \ \ \ \Re\beta\geq {1\over 2}\ . \end{equation} \newline (2) The behavior of the radial function (\ref{Eq22}) in the asymptotic $x\to\infty$ region is given by \cite{Abram} \begin{eqnarray}\label{Eq28} R(x\to\infty)&\to& C_1\times(2\epsilon)^{\kappa-{1\over 2}-\beta}{{\Gamma(1+2\beta)}\over{\Gamma({1\over 2}+\beta+\kappa)}}x^{-1+\kappa}(-1)^{-{1\over 2}-\beta+\kappa}e^{-\epsilon x} \nonumber \\&& + C_1\times(2\epsilon)^{-\kappa-{1\over 2}-\beta}{{\Gamma(1+2\beta)}\over{\Gamma({1\over 2}+\beta-\kappa)}}x^{-1-\kappa}e^{\epsilon x}\ . \end{eqnarray} The bound-state (finite-energy) scalar configurations are characterized by asymptotically decaying eigenfunctions at large distances from the central black hole [see Eq. (\ref{Eq18})]. Thus, the coefficient of the growing exponent $e^{\epsilon x}$ in (\ref{Eq28}) must be identically zero. This boundary condition yields the resonance condition \cite{Noteabr} \begin{equation}\label{Eq29} {1\over 2}+\beta-\kappa=-n\ \ \ \ \text{with}\ \ \ n=0,1,2,...\ . 
\end{equation} for the linearized stationary bound-state resonances of the massive scalar fields in the rotating Kerr-Newman black-hole spacetime. Taking cognizance of Eqs. (\ref{Eq23}) and (\ref{Eq25}), one can express the resonance condition (\ref{Eq29}) in the form $\sqrt{\beta^2_0+\epsilon^2}=\alpha\epsilon^{-1}-\epsilon-(n+1/2)$, which in the eikonal regime ($m\gg n+1/2$) yields the simple relation [see Eqs. (\ref{Eq23}) and (\ref{Eq25})] \begin{equation}\label{Eq30} \epsilon=m{{{s}^2}\over{(1+{s}^2)\sqrt{1-{s}^2}}}[1+O(m^{-1})]\ \end{equation} for the bound-state resonances in the regime $0<{s}<{{1}\over{\sqrt{2}}}$. Finally, taking cognizance of the relation (\ref{Eq12}), one finds \begin{equation}\label{Eq31} M\mu(m,{s})=m{{{s}}\over{(1+{s}^2)\sqrt{1-{s}^2}}}[1+O(m^{-1})] \end{equation} for the scalar field-masses which characterize the stationary bound-state resonances of the composed Kerr-Newman-scalar-field system. \section{Effective lengths of the stationary bound-state scalar configurations.} Motivated by the intriguing `no short hair' theorem (\ref{Eq1}) \cite{Nun}, we shall now analyze the effective lengths of the linearized stationary bound-state scalar configurations. Taking cognizance of Eqs. (\ref{Eq27}) and (\ref{Eq29}), one can write the radial function (\ref{Eq22}) for the stationary bound-state configurations in the compact form $R(x)=Ax^{-{1\over 2}+\beta}e^{-\epsilon x}L^{(2\beta)}_n(2\epsilon x)$, where $A$ is a normalization constant and $L^{(2\beta)}_n(x)$ are the generalized Laguerre Polynomials \cite{Notesea}. In particular, the fundamental ($n=0$) bound-state resonance is characterized by the remarkably simple radial eigenfunction \cite{Noten0} \begin{equation}\label{Eq32} R^{(0)}(x)=Ax^{-{1\over 2}+\beta}e^{-\epsilon x}\ . \end{equation} The radial distribution (\ref{Eq32}) peaks at $x_{\text{peak}}=(\beta-1/2)/\epsilon$, which implies [see Eqs. 
(\ref{Eq25}) and (\ref{Eq30})] \begin{equation}\label{Eq33} x_{\text{peak}}={{1-2{s}^2}\over{{s}^2}}[1+O(m^{-1})]\ . \end{equation} Equation (\ref{Eq33}) reveals the remarkable fact that the bound-state stationary scalar configurations can be made arbitrarily compact. In particular, one finds \begin{equation}\label{Eq34} x_{\text{peak}}\to 0\ \ \ \text{for}\ \ \ {s}\to {{1}\over{\sqrt{2}}}\ . \end{equation} One therefore concludes that rotating black holes can support extremely short-range stationary scalar configurations (linearized scalar `clouds') in their exterior regions. Our analysis thus provides evidence for the failure of the `no short hair' theorem \cite{Nun} beyond the regime of spherically-symmetric static black holes. \section{Summary.} In a very intriguing Letter \cite{Nun} a remarkable observation was made according to which static spherically-symmetric black holes cannot have short hair. In particular, it was proved \cite{Nun} that if a spherically-symmetric static black hole has hair, then this hair must extend beyond $3/2$ the horizon radius [see Eq. (\ref{Eq1})]. The main goal of the present paper was to test the general validity of this `no short hair' conjecture. To that end, we have analyzed the physical properties of non-spherically-symmetric rotating black holes coupled to stationary (rather than static) linear matter configurations. In particular, we have shown that rotating Kerr-Newman black holes can support extremely {\it short}-range stationary scalar configurations (linearized scalar `bristles') in their exterior regions. Our analysis thus provides compelling evidence for the failure of the `no short hair' conjecture (\ref{Eq1}) \cite{Nun} beyond the regime of spherically-symmetric static black holes. \bigskip \noindent {\bf ACKNOWLEDGMENTS} \bigskip This research is supported by the Carmel Science Foundation. I thank C. A. R. Herdeiro and E. Radu for helpful correspondence. I would also like to thank Yael Oren, Arbel M.
Ongo and Ayelet B. Lata for stimulating discussions.
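The closed-form eikonal relations of the paper above can be checked for mutual consistency numerically. The sketch below is only a sanity check of Eqs. (12), (23), (25), (30) and (33), with arbitrary values of the harmonic index and the spin:

```python
import math

# Consistency check (to leading order in m) of the eikonal-limit formulas:
#   eps   = m s^2 / ((1+s^2) sqrt(1-s^2))   [Eq. 30]
#   M*mu  = m s   / ((1+s^2) sqrt(1-s^2))   [Eq. 31]
#   beta0^2 = m^2 (1-3 s^2)/(1+s^2)^2       [Eq. 25]
#   x_peak  ~ (1-2 s^2)/s^2                 [Eq. 33]
m = 1000          # eikonal harmonic index, m >> 1
s = 0.5           # dimensionless black-hole spin, 0 < s < 1/sqrt(2)

eps = m * s**2 / ((1 + s**2) * math.sqrt(1 - s**2))
M_mu = m * s / ((1 + s**2) * math.sqrt(1 - s**2))
M_omega_c = m * s / (1 + s**2)        # M*omega_c from Eqs. (8) and (17)

# Eq. (12): eps = M sqrt(mu^2 - omega_c^2) -- exact identity here
assert abs(eps - math.sqrt(M_mu**2 - M_omega_c**2)) < 1e-6

beta0_sq = m**2 * (1 - 3 * s**2) / (1 + s**2)**2
beta = math.sqrt(beta0_sq + eps**2)   # Eq. (25)

x_peak = (beta - 0.5) / eps           # peak of the radial profile, Eq. (32)
assert abs(x_peak - (1 - 2 * s**2) / s**2) < 1e-2  # Eq. (33), up to O(1/m)
```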
2109.09982
\section{Introduction} \label{sec:introduction} Nowadays effective field theories (EFTs) are a powerful tool for the analysis of Nature \cite{Weinberg2021, Weinberg2016, Weinberg2009}. The general EFT by Kosteleck\'y \cite{Kostelecky2004}, based on General Relativity (GR) coupled to the Standard Model (SM), has been extended by Kosteleck\'y and Li \cite{Kostelecky2021a, Kostelecky2021b} with contributions of interactions caused by beyond-Riemann gravity (BRG). These contributions overlap closely with the contributions of interactions violating Lorentz invariance \cite{Kostelecky1997a, Kostelecky1998a, Kostelecky2002a, Kostelecky2002b, Kostelecky2003, Kostelecky2009, Kostelecky2016}. In \cite{Kostelecky2021b} Kosteleck\'y and Li have proposed to investigate the BRG as well as Lorentz-invariance violation (LV) contributions to the energy spectrum and transition frequencies of the quantum gravitational states of ultracold neutrons (UCNs). This paper is devoted to the analysis of exactly these contributions. For this analysis we follow \cite{Ivanov2019}, where we calculated the LV contributions to the energy spectrum and transition frequencies of the quantum gravitational states of UCNs caused by the effective low-energy potential derived by Kosteleck\'y and Lane \cite{Kostelecky1999} in the framework of the Standard Model Extension (SME) \cite{Kostelecky1997a, Kostelecky1998a} (see also \cite{Kostelecky2004}) by using the Foldy-Wouthuysen transformations \cite{Foldy1950, Itzykson1980}. The paper is organized as follows. In section \ref{sec:potential} we discuss the effective low-energy potential derived by Kosteleck\'y and Li \cite{Kostelecky2021b} for the analysis of BRG interactions in terrestrial laboratories.
We define such a potential in the standard coordinate frame related to the laboratory at the Institut Laue Langevin (ILL) in Grenoble. We specify the BRG and LV contributions to the phenomenological coupling constants of this potential. We present the wave functions of the quantum gravitational states of polarized and unpolarized UCNs. In section \ref{sec:earth} we calculate the BRG and LV contributions to the energy spectrum and transition frequencies of the quantum gravitational states of polarized and unpolarized UCNs. Using the current experimental sensitivity of the {\it q}BOUNCE experiments we give some estimates of the phenomenological constants of the BRG and LV interactions. In section \ref{sec:Abschluss} we discuss the obtained results and prospects for further investigations of the BRG and LV interactions by using the quantum gravitational states of UCNs. \section{Effective non--relativistic potential of beyond-Riemann gravity interactions} \label{sec:potential} For the experimental analysis of the BRG and LV interactions in terrestrial laboratories by using the quantum gravitational states of UCNs, Kosteleck\'y and Li propose to use the following Hamilton operator \cite{Kostelecky2021b} \begin{eqnarray}\label{eq:1} {\rm H} = {\rm H}_0 + \Phi_{\rm nRG} + \Phi_{\rm nBRG} = \frac{\vec{p}^{\,2}}{2m} - m \vec{g}\cdot \vec{z} + \Phi_{\rm nRG} + \Phi_{\rm nBRG}, \end{eqnarray} where the first two terms are the operators of the UCN energy and the Newtonian gravitational potential of the gravitational field of the Earth, respectively, with the gravitational acceleration $\vec{g}$ such that $\vec{g}\cdot \vec{z} = - g z$ \cite{Kostelecky2021b}. Then, $\Phi_{\rm nRG}$ is the effective low-energy potential of the neutron-gravity interaction, calculated to next-to-leading order in the large neutron mass $m$ expansion and related to the contribution of Riemann gravity.
It is equal to \cite{Kostelecky2021b} \begin{eqnarray}\label{eq:2} \Phi_{\rm nRG} = \frac{3}{4m}\,\big(\vec{\sigma}\times \vec{p}\,\big)\cdot \vec{g} - \frac{3}{4m}\,\big(\vec{p}^{\,2}\,\vec{g}\cdot \vec{z} + \vec{g}\cdot \vec{z}\,\vec{p}^{\,2}\big). \end{eqnarray} In turn, the potential $\Phi_{\rm nBRG}$ describes the BRG and LV contributions to neutron-gravity interactions \begin{eqnarray}\label{eq:3} \Phi_{\rm nBRG} = H_{\phi} + H_{\sigma \phi} + H_g + H_{\sigma g}, \end{eqnarray} where the operators $H_j$ for $j = \phi, \sigma\phi, g$ and $ \sigma g$ are equal to \cite{Kostelecky2021b} \begin{eqnarray}\label{eq:4} H_{\phi} &=& (k^{\rm NR}_{\phi})_n \vec{g}\cdot \vec{z} + (k^{\rm NR}_{\phi p})^j_n \frac{1}{2}\Big(p^j(\vec{g}\cdot \vec{z}\,) + (\vec{g}\cdot \vec{z}\,) p^j\Big) + (k^{\rm NR}_{\phi pp})^{jk}_n \frac{1}{2}\Big(p^jp^k (\vec{g}\cdot \vec{z}\,) + (\vec{g}\cdot \vec{z}\,) p^jp^k\Big),\nonumber\\ H_{\sigma\phi} &=&(k^{\rm NR}_{\sigma \phi})^j_n \sigma^j(\vec{g}\cdot \vec{z}\,) + (k^{\rm NR}_{\sigma \phi p})^{jk}_n \frac{1}{2}\,\sigma^j \Big(p^k(\vec{g}\cdot \vec{z}\,) + (\vec{g}\cdot \vec{z}\,) p^k\Big) + (k^{\rm NR}_{\sigma\phi pp})^{jk\ell}_n \frac{1}{2}\,\sigma^j\Big(p^kp^{\ell} (\vec{g}\cdot \vec{z}\,) + (\vec{g}\cdot \vec{z}\,) p^kp^{\ell}\Big),\nonumber\\ H_g &=& (k^{\rm NR}_g)^j_n g^j + (k^{\rm NR}_{g p})^{jk}_n p^j g^k + (k^{\rm NR}_{g pp})^{jk\ell}_n p^j p^k g^{\ell},\nonumber\\ H_{\sigma g} &=& (k^{\rm NR}_{\sigma g})^{jk}_n \sigma^j g^k + (k^{\rm NR}_{\sigma g p})^{jk\ell}_n \sigma^j p^k g^{\ell} + (k^{\rm NR}_{\sigma g pp})^{jk\ell m}_n \sigma^j p^kp^{\ell} g^m. \end{eqnarray} The non-relativistic Hamilton operator Eq.(\ref{eq:1}) is written in the coordinate system shown in Fig.\,\ref{fig:fig1}, where $m$ is the neutron mass, $\vec{z}$ is a radius-vector of a position of an UCN on the $z$-axis, $\vec{p} = - i \nabla$ is the 3-momentum of an UCN and $\vec{\sigma}$ is the Pauli $2 \times 2$ matrix of the UCN spin \cite{Itzykson1980}.
The coefficients $(k^{\rm NR}_{\phi})_n$, $(k^{\rm NR}_{\ldots})^j_n$, $(k^{\rm NR}_{\ldots})^{jk}_n$, $(k^{\rm NR}_{\ldots})^{jk\ell}_n$, and $(k^{\rm NR}_{\ldots})^{jk\ell m}_n$ define the BRG and LV contributions, which can be tested in experiments with neutrons \cite{Kostelecky2021b} in the following way. The system of a Schrödinger quantum particle with mass $m$ bouncing in a linear gravitational field is known as the quantum bouncer \cite{Gibbs1975, Gea-Banacloche1999, Rosu2001, Robinett2004}. Above a horizontal mirror, the linear gravity potential leads to discrete energy eigenstates of a bouncing quantum particle. An UCN, bound on a reflecting mirror in the gravity potential of the Earth, can be found in a superposition of quantum gravitational energy eigenstates. The quantum gravitational states of UCNs have been verified and investigated \cite{Nesvizhevsky2002, Nesvizhevsky2003, Nesvizhevsky2005, Abele2007} at the UCN beamline PF2 at the Institut Laue-Langevin (ILL), where the highest UCN flux is available worldwide. The {\it q}BOUNCE collaboration develops a gravity resonance spectroscopy (GRS) method \cite{Abele2010}, which allows one to measure the energy difference between quantum gravitational states with increasing accuracy. Recent activities are reported in \cite{Sedmik2019}, and a summary can be found in \cite{Jenke2019}. The energy difference can be related to the frequency of a mechanical modulator, in analogy to the Nuclear Magnetic Resonance technique, where the Zeeman energy splitting of a magnetic moment in an outer magnetic field is connected to the frequency of a radio-frequency field. The frequency range used in GRS so far is the acoustic frequency range between 100 and 1000\,${\rm Hz}$. The quantum gravitational states of UCNs have peV energy, on a much lower energy scale compared to other bound quantum systems.
Any gravity-like potential or a deviation from Riemann gravity would shift these energy levels \cite{Jenke2021, Jenke2011, Jenke2014a, Cronenberg2018}, and an observation would point to new physical understanding. Our choice of the laboratory frame is related to the following. Indeed, the {\it q}BOUNCE experiments are being performed in the laboratory at the Institut Laue Langevin (ILL) in Grenoble. The ILL laboratory is fixed to the surface of the Earth in the northern hemisphere. Following \cite{Kostelecky2002a, Kostelecky2002b, Kostelecky2003, Kostelecky2009, Kostelecky2016} (see also \cite{Kostelecky2021b, Ivanov2019}) we choose the ILL laboratory or the standard laboratory frame with coordinates $(t, x, y, z)$, where the $x$, $y$ and $z$ axes point south, east and vertically upwards, respectively. The Earth rotates about the axis through its northern and southern poles with the sidereal frequency $\Omega_{\oplus} = 2\pi/(23\,{\rm hr}\, 56\,{\rm min}\, 4.09\,{\rm s}) = 7.2921159 \times 10^{-5}\,{\rm rad/s}$. The position of the ILL laboratory on the surface of the Earth is determined by the angles $\chi$ and $\phi$, where $\chi = 44.83333^{\circ}$ is the colatitude of the laboratory and $\phi$ is the longitude of the laboratory measured eastward, with the value $\phi = 5.71667^{\circ}$\,E \cite{Grenoble}. The beam of UCNs moves from south to north antiparallel to the $x$--direction and with energies of UCNs quantized in the $z$--direction. The gravitational acceleration in Grenoble is $g = 9.80507\,{\rm m/s^2}$ \cite{Ivanov2019, Grenoble}. Following \cite{Ivanov2019} we may neglect the Earth's rotation, assuming that the ILL laboratory frame is an inertial one.
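As a quick check of the laboratory-frame parameters quoted above, the Earth's sidereal angular frequency follows directly from the length of the sidereal day (a trivial sketch, using only the numbers in the text):

```python
import math

# Earth's sidereal angular frequency from the sidereal day 23 hr 56 min 4.09 s
T_sidereal = 23 * 3600 + 56 * 60 + 4.09   # sidereal day in seconds
Omega_earth = 2 * math.pi / T_sidereal     # rad/s

assert abs(Omega_earth - 7.2921159e-5) < 1e-10  # value quoted in the text
```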
\begin{figure} \includegraphics[height=0.26\textheight]{Erde.eps} \caption{The position of the ILL laboratory of the {\it q}BOUNCE experiments on the surface of the Earth.} \label{fig:fig1} \end{figure} \subsection*{\bf The analysis of contributions to the effective low-energy potential $\Phi_{\rm nBRG}$ in Eq.(\ref{eq:3}) violating Lorentz invariance} Before we proceed to calculating the contributions of the effective potential Eq.(\ref{eq:3}) to the energy spectrum and transition frequencies of the quantum gravitational states of UCNs, we would like to compare the potential $\Phi_{\rm nBRG}$ with the effective low-energy potential $\Phi_{\rm nLV}$ of the LV interactions (see Eq.(\ref{eq:4}) in \cite{Ivanov2019}), calculated in \cite{Kostelecky1999}. The effective low-energy potential $\Phi_{\rm nLV}$ is equal to \begin{eqnarray}\label{eq:5} \hspace{-0.15in}\Phi_{\rm nLV} &=& \Big(- b^n_{\ell} + m d^n_{\ell 0} - \frac{1}{2}\,m \,\varepsilon_{\ell k j} g^n_{k j 0} + \frac{1}{2}\,\varepsilon_{\ell k j} H^n_{k j}\Big) \sigma_{\ell} + \frac{1}{m}\, \Big(- a^n_j + m(c^n_{0j} + c^n_{j0}) + m e^n_j\Big) p_j\nonumber\\ \hspace{-0.15in}&+& \frac{1}{m}\, \Big(b^n_0 \delta_{j\ell} - m(d^n_{\ell j} + d^n_{00} \delta_{\ell j}) - \frac{1}{2}\,m\, \varepsilon_{\ell k m}\big(g^n_{m k j} + 2 g^n_{m00}\delta_{j k}\big) - \varepsilon_{j \ell k} H^n_{k 0}\Big) p_j \sigma_{\ell} - \frac{1}{2 m}\,\Big(2 c^n_{jk} + c^n_{00}\delta_{jk}\Big) p_j p_k \nonumber\\ \hspace{-0.15in}&+& \Big[\frac{1}{4 m}\,\Big(\big(4 d^n_{0j} + 2 d^n_{j0} - \varepsilon_{j m n} g^n_{m n 0}\big)\, \delta_{k\ell} + \varepsilon_{\ell m n} g^n_{mn0}\, \delta_{jk} - 2\,\varepsilon_{j \ell m}\,\big(g^n_{m 0 k} + g^n_{m k 0}\big)\Big) \nonumber\\\hspace{-0.15in}&+& \frac{1}{2 m^2}\,\Big(\big( - b_j - \frac{1}{2}\,\varepsilon_{j m n} H_{mn}\big)\,\delta_{k\ell} + b_{\ell} \delta_{jk}\Big)\Big]\,p_j p_k \sigma_{\ell}.
\end{eqnarray} The LV contributions to the energy spectrum and transition frequencies of the quantum gravitational states of UCNs, induced by the effective low-energy potential Eq.(\ref{eq:5}), have been calculated in \cite{Ivanov2019}. From Eq.(\ref{eq:4}) one may see that the effective low-energy interactions $H_{\phi}$, $H_{\sigma \phi}$ and $(k^{\rm NR}_g)^j_n g^j$ in $H_g$ are new in comparison with Eq.(\ref{eq:5}). This means that the coefficients, i.e. the phenomenological coupling constants, in these interactions are induced by the BRG interactions. Of course, these terms may also contain LV contributions (see Table III of Ref.\cite{Kostelecky2021b}), but such contributions should not dominate in them. In turn, the effective low-energy neutron-gravity interactions, defined by $H_g$ and $H_{\sigma g}$, have the structure of the effective low-energy potential $\Phi_{\rm nLV}$ in Eq.(\ref{eq:5}). From the comparison we may write the following relations \begin{eqnarray}\label{eq:6} (k^{\rm NR}_{\sigma g})^{jk}_ng^k &=& - b^n_j + m d^n_{j 0} - \frac{1}{2}\,m \,\varepsilon_{j k \ell} g^n_{k \ell 0} + \frac{1}{2}\,\varepsilon_{j k \ell} H^n_{k \ell} + \ldots,\nonumber\\ (k^{\rm NR}_{g p})^{jk}_ng^k &=&\frac{1}{m}\, \Big(- a^n_j + m(c^n_{0j} + c^n_{j0}) + m e^n_j\Big) + \ldots,\nonumber\\ (k^{\rm NR}_{\sigma g p})^{jk\ell}_ng^{\ell} &=&\frac{1}{m}\, \Big(b^n_0 \delta_{j k} - m(d^n_{k j} + d^n_{00} \delta_{k j}) - \frac{1}{2}\,m\, \varepsilon_{j t m}\big(g^n_{m t k} + 2 g^n_{m00}\delta_{j k}\big) - \varepsilon_{j k m} H^n_{m 0}\Big) + \ldots,\nonumber\\ (k^{\rm NR}_{g p p})^{jk\ell}_ng^{\ell} &=& - \frac{1}{2 m}\,\Big(2 c^n_{jk} + c^n_{00}\delta_{jk}\Big) + \ldots,\nonumber\\ (k^{\rm NR}_{\sigma g p p})^{jk\ell m}_ng^m &=& \frac{1}{4 m}\,\Big(\big(4 d^n_{0j} + 2 d^n_{j0} - \varepsilon_{j m n} g^n_{m n 0}\big)\, \delta_{k\ell} + \varepsilon_{\ell m n} g^n_{mn0}\, \delta_{jk} - 2\,\varepsilon_{j \ell m}\,\big(g^n_{m 0 k} + g^n_{m k 0}\big)\Big)\nonumber\\ &+&
\frac{1}{2 m^2}\,\Big(\big( - b_j - \frac{1}{2}\,\varepsilon_{j m n} H_{mn}\big)\,\delta_{k\ell} + b_{\ell}\delta_{jk}\Big) + \ldots, \end{eqnarray} where ellipses denote the BRG contributions of neutron-gravity interactions (see Table III in Ref.\cite{Kostelecky2021b}). \subsection*{\bf The rotation-invariant effective low-energy potential $\Phi^{(\rm RI)}_{\rm nBRG}$ for the {\it q}BOUNCE experiments} For the experimental analysis of the BRG as well as LV interactions by the quantum gravitational states of UCNs Kosteleck\'y and Li proposed to use the following rotation-invariant (RI) effective low-energy potential \cite{Kostelecky2021b} \begin{eqnarray}\label{eq:7} \Phi^{(\rm RI)}_{\rm nBRG} &=& \big(k^{\rm NR}_{\phi}\big)_n\, \vec{g}\cdot \vec{z} + \big(k^{\rm NR}_{\sigma g}\big)'_n\,\vec{\sigma}\cdot \vec{g} + \big(k^{\rm NR}_{\sigma g \phi}\big)'_n\,\big(\vec{\sigma}\times \vec{p}\,\big)\cdot \vec{g} + \frac{1}{2}\,\big(k^{\rm NR}_{\sigma \phi p}\big)'_n\,\Big((\vec{\sigma}\cdot \vec{p}\,)(\vec{g}\cdot \vec{z}\,) + (\vec{g}\cdot \vec{z}\,)(\vec{\sigma}\cdot \vec{p}\,)\Big)\nonumber\\ &+& \big(k^{\rm NR}_{\sigma g pp}\big)'_n\,\vec{p}^{\,2}\,\vec{\sigma}\cdot \vec{g} + \big(k^{\rm NR}_{\sigma g pp}\big)''_n\, (\vec{\sigma}\cdot\vec{p}\,)(\vec{g}\cdot \vec{p}\,). \end{eqnarray} In this expression the coefficients with primes denote suitably normalized irreducible representations of the rotation group obtained from the nonrelativistic coefficients in Eq.(\ref{eq:4}) (see \cite{Kostelecky2021b}). Then, according to Kosteleck\'y and Li \cite{Kostelecky2021b}, the rotation-invariant effective low-energy potential Eq.(\ref{eq:7}) is of interest for certain experimental applications, in part because the rotation invariance ensures that all terms take the same form at leading order when expressed either in the laboratory frame or the Sun-centered frame.
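To see schematically how the spin-dependent term $\vec{\sigma}\cdot\vec{g}$ of the rotation-invariant potential splits the two otherwise degenerate spin states, one can evaluate it with explicit Pauli matrices. This is only an illustrative sketch (the coupling value is an arbitrary placeholder, not an experimental bound), not the perturbative calculation performed in the next section:

```python
import numpy as np

# Sketch: the sigma.g term of the rotation-invariant potential.
# With g pointing vertically down, g_vec = (0, 0, -g), sigma.g is diagonal
# in the spin-z basis, so spin-up and spin-down levels shift by -/+ k'*g.
g = 9.80507                 # m/s^2, value quoted for Grenoble
k_sigma_g = 1.0e-3          # arbitrary illustrative coupling (placeholder)

sigma = [np.array([[0, 1], [1, 0]]),       # sigma_x
         np.array([[0, -1j], [1j, 0]]),    # sigma_y
         np.array([[1, 0], [0, -1]])]      # sigma_z

g_vec = np.array([0.0, 0.0, -g])
H_sigma_g = k_sigma_g * sum(gi * si for gi, si in zip(g_vec, sigma))

shifts = np.linalg.eigvalsh(H_sigma_g)     # ascending: spin-up, spin-down shifts
assert np.allclose(shifts, [-k_sigma_g * g, k_sigma_g * g])
```

The splitting is independent of the principal quantum number $k$, which is why this term contributes to the level energies but not to spin-independent transition frequencies.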
The latter implies, for example, no leading-order dependence on the local sidereal time or laboratory colatitude in experimental signals for these terms \cite{Kostelecky2021b}. Since the effective neutron-gravity interactions, proportional to $\big(k^{\rm NR}_{\sigma g \phi}\big)'_n$ and $\big(k^{\rm NR}_{\sigma \phi p}\big)'_n$, do not contribute to the energy spectrum of the quantum gravitational states of UCNs, the possible contributions should be proportional to the coefficients $\big(k^{\rm NR}_{\phi}\big)_n$, $\big(k^{\rm NR}_{\sigma g}\big)'_n$, $\big(k^{\rm NR}_{\sigma g pp}\big)'_n$ and $\big(k^{\rm NR}_{\sigma g pp}\big)''_n$. According to our discussion above, the coefficient $\big(k^{\rm NR}_{\phi}\big)_n$ is caused by the BRG interactions, whereas the coefficients $\big(k^{\rm NR}_{\sigma g}\big)'_n$, $\big(k^{\rm NR}_{\sigma g pp}\big)'_n$ and $\big(k^{\rm NR}_{\sigma g pp}\big)''_n$ should be saturated by the LV ones \cite{Ivanov2019}. \subsection*{\bf Wave functions and energy spectrum of quantum gravitational states of UCNs} The non-perturbed quantum gravitational states of UCNs obey the Schr\"odinger-Pauli equation \cite{Gibbs1975} \begin{eqnarray}\label{eq:8} i\frac{\partial \Psi^{(0)}_{\vec{p}_{\perp} k \sigma}(t,\vec{r}\,)}{\partial t} = \Big(\frac{\vec{p}^{\,2}}{2 m} + m g z\Big)\Psi^{(0)}_{\vec{p}_{\perp} k \sigma}(t,\vec{r}\,), \end{eqnarray} where $\vec{r} = \vec{z} + \vec{r}_{\perp}$ is a radius-vector of a position of an UCN with $\vec{r}_{\perp} =(x,y)$, the wave function $\Psi_{\vec{p}_{\perp} k\sigma}(t,\vec{r}\,)$ is equal to $\Psi_{\vec{p}_{\perp} k\sigma}(t,\vec{r}\,) = \Psi^{(0)}_{k\sigma}(z)\, e^{\,i \vec{p}_{\perp}\cdot \vec{r}_{\perp} - i(E_{\vec{p}_{\perp}} + E^{(0)}_k)t}/2\pi = |\vec{p}_{\perp} k \sigma\rangle$, and $\vec{p}_{\perp}$ and $E_{\vec{p}_{\perp}} = \vec{p}^{\,2}_{\perp}/2m \sim 10^{-7}\,{\rm eV}$ (for $|\vec{p}_{\perp}| \sim 24\,{\rm eV}$ or $v_{\perp} \sim 7\,{\rm m/s}$) are the momentum and kinetic energy of
UCNs. Below all BRG and LV contributions will be calculated at $\vec{p}_{\perp} = 0$ \cite{Ivanov2019}, and $k = 1,2,\ldots$ is the principal quantum number \cite{Gibbs1975}. The wave function $\Psi_{\vec{p}_{\perp} k\sigma}(t,\vec{r}\,)$ is normalized by $\langle \sigma' k' \vec{p}^{\,'}_{\perp}|\vec{p}_{\perp} k \sigma\rangle = \delta^{(2)}(\vec{p}^{\,'}_{\perp} - \vec{p}_{\perp})\,\delta_{k'k}\delta_{\sigma'\sigma}$ \cite{LL1965, Davydov1965}. Then, $\Psi^{(0)}_{k\sigma}(z) = \psi^{(0)}_k(z)\,\chi_{\sigma} = |k\sigma\rangle$ is a two--component spinorial wave function of UCNs in the $k$--gravitational state with the binding energy $E^{(0)}_k$, and in a spin eigenstate $\chi_{\sigma}$ with $\sigma = \uparrow$ or $\downarrow$. They are normalized by $\langle \sigma'k'|k\sigma\rangle = \delta_{k'k}\delta_{\sigma'\sigma}$. The wave functions $\psi^{(0)}_k(z)$ are given by \cite{Gibbs1975} \begin{eqnarray}\label{eq:9} \psi^{(0)}_k(z) = \frac{\displaystyle {\rm Ai}(\xi - \xi_k)}{\sqrt{\ell}\,|{\rm Ai}'(-\xi_k)|}\,e^{\,i\alpha} \quad,\quad \int^{\infty}_0dz\,\psi^{(0)*}_{k'}(z)\psi^{(0)}_k(z) = \delta_{k'k}, \end{eqnarray} where $\xi = z/\ell$, ${\rm Ai}(\xi - \xi_k)$ is the Airy function and ${\rm Ai}'(-\xi_k)$ its derivative at $z = 0$ \cite{Gibbs1975, HMF72, Albright1977, AiryF2004}, $e^{\,i\,\alpha}$ is a constant complex factor, $\ell = (2m^2g)^{-1/3} = 5.88\,{\rm \mu m}$ is the scale of the quantum gravitational states of UCNs and $\xi_k$ is the $k$-th root of the equation ${\rm Ai}(-\xi_k) = 0$, caused by the boundary condition $\psi^{(0)}_k(0) = 0$ \cite{Gibbs1975}. The latter defines the energy spectrum of the quantum gravitational states of UCNs $E^{(0)}_k = E_0\,\xi_k$ for $k = 1,2,\ldots$ with $E_0 = m g \ell = \sqrt[3]{m g^2/2} = 0.6016\,{\rm peV}$ \cite{Ivanov2019}. Experimentally the quantum gravitational states of UCNs have been investigated in \cite{Nesvizhevsky2002, Nesvizhevsky2003, Nesvizhevsky2005}.
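As a quick numerical cross-check of the scales quoted above, the zeros of the Airy function together with the SI values of $\hbar$, $g$ and the neutron mass reproduce $\ell$, $E_0$ and the spectrum $E^{(0)}_k = E_0\,\xi_k$. The following sketch is an illustration added here, not part of the original analysis (the last decimal of $E_0$ depends slightly on the adopted local value of $g$):

```python
import numpy as np
from scipy.constants import hbar, g, e
from scipy.special import ai_zeros

m = 1.67492749804e-27  # neutron mass in kg (CODATA value, assumed here)

# Characteristic length and energy scales of the gravitational quantum states:
# l = (hbar^2 / (2 m^2 g))^(1/3),  E0 = m g l
ell = (hbar**2 / (2 * m**2 * g)) ** (1.0 / 3.0)
E0_peV = m * g * ell / e * 1e12  # convert J -> eV -> peV

# Binding energies E_k = E0 * xi_k, where Ai(-xi_k) = 0
zeros, _, _, _ = ai_zeros(4)     # first four zeros of Ai (negative numbers)
xi = -zeros
E_peV = E0_peV * xi

print(f"l   = {ell*1e6:.2f} um")    # ~5.88 um
print(f"E0  = {E0_peV:.3f} peV")    # ~0.602 peV
print("E_k =", np.round(E_peV, 2))  # ~[1.41, 2.46, 3.33, 4.09] peV
```

For $k=1$ this gives $E^{(0)}_1 \approx 1.41$ peV, consistent with the values used in the text.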
The wave functions $\Psi^{(0)}_{k\sigma}(z) = \psi^{(0)}_k(z)\,\chi_{\sigma}$ describe the quantum gravitational states of polarized UCNs, whereas for the quantum gravitational states of unpolarized UCNs the wave functions are given by \cite{Ivanov2019} \begin{eqnarray}\label{eq:10} \Psi^{(0)}_k(z) = \psi^{(0)}_k(z)\,c_{\uparrow}\,\chi_{\uparrow} + \psi^{(0)}_k(z)\,c_{\downarrow}\,\chi_{\downarrow}, \end{eqnarray} where the coefficients $c_{\uparrow}$ and $c_{\downarrow}$ are normalized by $|c_{\uparrow}|^2 + |c_{\downarrow}|^2 = 1$ and determine the probabilities to find an UCN in the $k$--quantum gravitational state with spin {\it up} and {\it down}, respectively. The quantum gravitational states of UCNs with the wave function Eq.(\ref{eq:10}) are 2--fold degenerate \cite{LL1965, Davydov1965}. \section{The BRG and LV contributions to the energy spectrum and transition frequencies of quantum gravitational states of UCNs} \label{sec:earth} The energy spectrum of the quantum gravitational states of polarized UCNs with the RG, BRG and LV corrections is defined by the integrals \begin{eqnarray}\label{eq:11} E_{k \sigma} = \langle \sigma k|{\rm H}|k \sigma\rangle = \int^{\infty}_0 dz\,\Psi^{(0)\dagger}_{k\sigma}(z){\rm H}\Psi^{(0)}_{k\sigma}(z) = E^{(0)}_k + \langle \sigma k|\Phi_{\rm nRG}|k\sigma\rangle + \langle \sigma k|\Phi^{(\rm RI)}_{\rm nBRG}|k\sigma\rangle. \end{eqnarray} Using the table of integrals in \cite{Albright1977, AiryF2004} we obtain the RG, BRG and LV contributions to the energy spectrum of the quantum gravitational states of polarized UCNs.
We get \begin{eqnarray}\label{eq:12} \langle \sigma k|\Phi_{\rm nRG}|k \sigma\rangle &=& \int^{\infty}_0 dz\,\Psi^{(0)\dagger}_{k\sigma}(z)\Phi_{\rm nRG}\Psi^{(0)}_{k\sigma}(z) = \frac{2}{5}\,\frac{(E^{(0)}_k)^2}{m},\nonumber\\ \langle \uparrow k|\Phi^{(\rm RI)}_{\rm nBRG}|k \uparrow\rangle &=& \int^{\infty}_0 dz\,\Psi^{(0)\dagger}_{k\uparrow}(z)\Phi^{(\rm RI)}_{\rm nBRG}\Psi^{(0)}_{k\uparrow}(z) = - \frac{2}{3}\,\big(k^{(\rm NR)}_{\phi}\big)_n\,\frac{E^{(0)}_k}{m} - g \,\big(k^{(\rm NR)}_{\sigma g}\big)'_n \nonumber\\ &-& \frac{2}{3}\,m g\,E^{(0)}_k \Big(\big(k^{(\rm NR)}_{\sigma g pp}\big)'_n + \big(k^{(\rm NR)}_{\sigma g pp}\big)''_n\Big),\nonumber\\ \langle \downarrow k|\Phi^{(\rm RI)}_{\rm nBRG}|k \downarrow\rangle &=& \int^{\infty}_0 dz\,\Psi^{(0)\dagger}_{k\downarrow}(z)\Phi^{(\rm RI)}_{\rm nBRG}\Psi^{(0)}_{k\downarrow}(z) = - \frac{2}{3}\,\big(k^{(\rm NR)}_{\phi}\big)_n\,\frac{E^{(0)}_k}{m} + g \,\big(k^{(\rm NR)}_{\sigma g}\big)'_n \nonumber\\ &+& \frac{2}{3}\,m g\,E^{(0)}_k \Big(\big(k^{(\rm NR)}_{\sigma g pp}\big)'_n + \big(k^{(\rm NR)}_{\sigma g pp}\big)''_n\Big). \end{eqnarray} Since the binding energies of the quantum gravitational states of UCNs are of the order of a few $10^{-12}\,{\rm eV}$, the RG contribution is of the order of a few $10^{-33}\,{\rm eV}$ and can be neglected. The same applies to the contributions proportional to $\frac{2}{3}\, m g E^{(0)}_k \le 10^{-25} \,{\rm eV^3}$ for $k \le 10$ \cite{Jenke2019, Sedmik2019}. As a result, the energy spectrum of the quantum gravitational states of UCNs together with the BRG and LV contributions is equal to \begin{eqnarray}\label{eq:13} E_{k \uparrow/k \downarrow} = E^{(0)}_k - \frac{2}{3}\,\big(k^{(\rm NR)}_{\phi}\big)_n\,\frac{E^{(0)}_k}{m} \mp g \,\big(k^{(\rm NR)}_{\sigma g}\big)'_n, \end{eqnarray} where $g = 2.15 \times 10^{-23}\,{\rm eV}$ \cite{Ivanov2019}. The LV contribution, proportional to $\big(k^{(\rm NR)}_{\sigma g}\big)'_n$, is the same for all energy levels.
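The $z$-dependent terms in Eq.(\ref{eq:12}) rest on the Airy-state expectation value $\langle z\rangle_k = \frac{2}{3}\,\xi_k\,\ell = 2E^{(0)}_k/(3mg)$, which follows from the tables of Airy integrals cited above. This identity can be verified numerically; the following is an illustrative sketch, not part of the original derivation:

```python
import numpy as np
from scipy.special import airy, ai_zeros
from scipy.integrate import quad

# First three zeros xi_k of the Airy function, Ai(-xi_k) = 0
xi_k = -ai_zeros(3)[0]

for xk in xi_k:
    Aip = airy(-xk)[1]  # Ai'(-xi_k), the normalization factor in Eq.(9)
    # |psi_k(xi)|^2 in the dimensionless variable xi = z / l
    prob = lambda x, xk=xk, Aip=Aip: (airy(x - xk)[0] / abs(Aip)) ** 2
    norm, _ = quad(prob, 0.0, 30.0)
    mean_xi, _ = quad(lambda x: x * prob(x), 0.0, 30.0)
    assert abs(norm - 1.0) < 1e-6                 # states are normalized
    assert abs(mean_xi - 2.0 * xk / 3.0) < 1e-6   # <xi> = (2/3) xi_k

print("verified <z>_k = (2/3) xi_k l for k = 1, 2, 3")
```

With $\vec{g}\cdot\vec{z} = -gz$, this expectation value immediately yields the $-\frac{2}{3}\,\big(k^{(\rm NR)}_{\phi}\big)_n E^{(0)}_k/m$ terms above.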
It depends only on the neutron spin polarization. According to the energy spectrum Eq.(\ref{eq:13}), for the non-spin-flip $|k\sigma\rangle \to |k'\sigma\rangle$ and spin-flip $|k\sigma\rangle \to |k'\sigma'\rangle$ transitions we get \cite{Ivanov2019} \begin{eqnarray}\label{eq:14} \delta \nu_{k'\sigma k\sigma} &=& - \big(k^{\rm NR}_{\phi}\big)_n\, \frac{E^{(0)}_{k'} - E^{(0)}_k}{3\pi m},\nonumber\\ \delta \nu_{k'\sigma' k\sigma} &=& \pm \,\frac{g}{\pi} \,\big(k^{(\rm NR)}_{\sigma g}\big)'_n - \big(k^{\rm NR}_{\phi}\big)_n\, \frac{E^{(0)}_{k'} - E^{(0)}_k}{3\pi m}, \end{eqnarray} for $(\sigma = \uparrow, \sigma' = \downarrow)$ or $(\sigma = \downarrow, \sigma' = \uparrow)$, respectively. For the current experimental sensitivity $\Delta E = 2\times 10^{-15}\,{\rm eV}$ \cite{Jenke2019} (see also \cite{Ivanov2019}) and for the $|1\rangle \to |4\rangle$ transition \cite{Ivanov2019} we obtain the upper bound on the BRG contribution $\big|\big(k^{(\rm NR)}_{\phi}\big)_n\big|$ and an estimate for $\big(k^{(\rm NR)}_{\sigma g}\big)'_n$, i.e., \begin{eqnarray}\label{eq:15} \big|\big(k^{(\rm NR)}_{\phi}\big)_n\big| < 10^{-3}\,{\rm GeV} \quad,\quad \big(k^{(\rm NR)}_{\sigma g}\big)'_n = 0. \end{eqnarray} The upper bound $\big|\big(k^{(\rm NR)}_{\phi}\big)_n\big| < 10^{-3}\,{\rm GeV}$ is one order of magnitude better than the result $\big|\big(k^{(\rm NR)}_{\phi}\big)_n\big| < 1.3 \times 10^{-2}\,{\rm GeV}$ obtained in \cite{Kostelecky2021b}. Our result $\big(k^{(\rm NR)}_{\sigma g}\big)'_n = 0$ agrees well with that by Kosteleck\'y and Li \cite{Kostelecky2021b}. The spin-flip transitions would also admit an upper bound $\big|\big(k^{(\rm NR)}_{\sigma g}\big)'_n \big| < 10^8$. However, such a bound seems unrealistic, since the main contribution to $\big(k^{(\rm NR)}_{\sigma g}\big)'_n$ is caused by LV interactions \cite{Kostelecky2011b}. It is important to emphasize that in the coefficient $(k^{\rm NR}_{\phi})_n$ the dominant role belongs to the BRG interactions.
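The order of magnitude of the bound in Eq.(\ref{eq:15}) follows directly from Eq.(\ref{eq:13}): requiring the BRG shift of the $|1\rangle \to |4\rangle$ transition energy, $\frac{2}{3}\,\big|\big(k^{(\rm NR)}_{\phi}\big)_n\big|\,(E^{(0)}_4 - E^{(0)}_1)/m$, to stay below $\Delta E = 2\times 10^{-15}\,{\rm eV}$ gives $\big|\big(k^{(\rm NR)}_{\phi}\big)_n\big| \lesssim \frac{3}{2}\,m\,\Delta E/(E^{(0)}_4 - E^{(0)}_1)$. A sketch of this arithmetic:

```python
from scipy.special import ai_zeros

m_n = 0.93956542     # neutron mass in GeV
E0 = 0.6016e-21      # GeV  (E0 = 0.6016 peV)
dE = 2.0e-24         # GeV  (current sensitivity, 2e-15 eV)

xi = -ai_zeros(4)[0]             # xi_1 ... xi_4
dE41 = E0 * (xi[3] - xi[0])      # E_4^(0) - E_1^(0)

bound = 1.5 * m_n * dE / dE41    # upper bound on |(k_phi)_n| in GeV
print(f"|(k_phi)_n| < {bound:.2e} GeV")  # ~1e-03 GeV
```

Scaling $\Delta E$ down to $2\times 10^{-17}\,{\rm eV}$ reproduces the projected bound of $10^{-5}\,{\rm GeV}$ quoted in the Discussion.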
According to \cite{Kostelecky2021b}, the coefficient $(k^{\rm NR}_{\phi})_n$ has the following structure (see Table III in Ref.\cite{Kostelecky2021b}): \begin{eqnarray}\label{eq:16} \big(k^{\rm NR}_{\phi}\big)_n = 2\,(m{'}{}^{\rm L})^{ss}_n - 2\,(a^{\rm L})^{tss}_n + 2\,m\,(e^{\rm L}_h)^{tss}_n - 2\,m\,(c^{\rm L}_h)^{tss}_n+ 2\,m^2\,(m{^{(5)}_h}{}^{\rm L})^{ttss}_n - 2\,m^2\,(a{^{(5)}_h}{}^{\rm L})^{ttss}_n, \end{eqnarray} where the phenomenological coupling constants on the right-hand side of Eq.(\ref{eq:16}) are fully induced by the BRG interactions (see Table I in Ref.\cite{Kostelecky2021b}). The energy spectrum of the quantum gravitational states of unpolarized UCNs, calculated by taking into account the 2-fold degeneracy of the energy levels \cite{Ivanov2019} (see also \cite{LL1965, Davydov1965}), is equal to \begin{eqnarray}\label{eq:17} E^{(\pm)}_k = E^{(0)}_k - \frac{2}{3}\,\big(k^{(\rm NR)}_{\phi}\big)_n\,\frac{E^{(0)}_k}{m} \pm \Big|g \,\big(k^{(\rm NR)}_{\sigma g}\big)'_n \Big|. \end{eqnarray} Using Eq.(\ref{eq:15}) in Ref.\cite{Ivanov2019} we obtain the contributions to the transition frequencies of the quantum gravitational states of UCNs \begin{eqnarray}\label{eq:18} \delta \nu^{(\pm \pm)}_{k'k} &=& - \big(k^{(\rm NR)}_{\phi}\big)_n \,\frac{E^{(0)}_{k'} - E^{(0)}_k}{3\pi m}, \end{eqnarray} and \begin{eqnarray}\label{eq:19} \delta \nu^{(\pm \mp)}_{k'k} &=& - \big(k^{(\rm NR)}_{\phi}\big)_n\,\frac{E^{(0)}_{k'} - E^{(0)}_k}{3\pi m} \pm \Big| \frac{g}{\pi} \,\big(k^{(\rm NR)}_{\sigma g}\big)'_n\Big|. \end{eqnarray} One may see that the experimental analysis of the transition frequencies between the quantum gravitational states of unpolarized UCNs should lead to the estimates in Eq.(\ref{eq:15}). \section{Discussion} \label{sec:Abschluss} We have analyzed the possibility of testing the contributions of interactions induced by non-Riemannian geometry beyond standard Riemannian General Relativity, as well as of Lorentz-invariance violation (LV), proposed by Kosteleck\'y and Li \cite{Kostelecky2021b}.
Using the effective low-energy potential derived in \cite{Kostelecky2021b}, we have calculated the beyond-Riemann gravity (BRG) and LV contributions to the energy spectrum and transition frequencies of the quantum gravitational states of polarized and unpolarized UCNs. Such UCNs are used as test particles for probing interactions beyond the Standard Model (SM) and Einstein's gravity \cite{Jenke2019, Sedmik2019, Abele2021, Abele2003, Abele2009, Jenke2009, Abele2010, Jenke2011, Abele2011c, Abele2012, Jenke2014a, Jenke2014b, Schmiedmayer2015, Cronenberg2015, Abele2016, Konrad2017, Cronenberg2018, Ivanov2020}. We have obtained the constraints $\big|\big(k^{(\rm NR)}_{\phi}\big)_n\big| < 10^{-3}\,{\rm GeV}$ and $\big(k^{(\rm NR)}_{\sigma g}\big)'_n = 0$. The upper bound $\big|\big(k^{(\rm NR)}_{\phi}\big)_n\big| < 10^{-3}\,{\rm GeV}$ is one order of magnitude better than the constraint obtained in \cite{Kostelecky2021b}. Moreover, from our analysis of the transition frequencies of the quantum gravitational states of UCNs it follows that $\big(k^{(\rm NR)}_{\sigma g}\big)'_n = 0$, whereas in \cite{Kostelecky2021b} the value $\big(k^{(\rm NR)}_{\sigma g}\big)'_n = 0$ has been imposed by assumption. It is important to emphasize that for the experimental sensitivity $\Delta E = 2 \times 10^{-17}\,{\rm eV}$, which should be reached in the {\it q}BOUNCE experiments in the near future \cite{Abele2010}, we should expect the upper bound $\big|\big(k^{(\rm NR)}_{\phi}\big)_n\big| < 10^{-5}\,{\rm GeV}$.
As has been pointed out by Kosteleck\'y and Li \cite{Kostelecky2021b}, the coefficient $(k^{\rm NR}_{\phi})_n$ should appear in the nonrelativistic Hamilton operator in Minkowski spacetime \cite{Kostelecky1999a}, but it produces no measurable effects in that context because it amounts to an unobservable redefinition of the zero of energy or, equivalently, because it can be removed from the theory via field redefinitions \cite{Kostelecky1997a, Kostelecky1998a}. The observability of $(k^{\rm NR}_{\phi})_n$ is thus confirmed to be a consequence of the coupling to the gravitational potential, the presence of which restricts the applicability of field redefinitions \cite{Kostelecky2021a}. We see prospects for the further analysis of the BRG and LV interactions with the quantum gravitational states of UCNs in the use of i) the total effective low-energy potential Eq.(\ref{eq:3}) for the calculation of the BRG and LV contributions to the energy spectrum and the transition frequencies of the quantum gravitational states of UCNs, and ii) the quantum bouncing ball experiments with a free fall of UCNs in the gravitational field of the Earth \cite{Abele2011c, Abele2012}. \section{Acknowledgements} We are grateful to Alan Kosteleck\'y for fruitful discussions and comments. The work of A. N. Ivanov was supported by the Austrian ``Fonds zur F\"orderung der Wissenschaftlichen Forschung'' (FWF) under the contracts P31702-N27, P26636-N20 and P33279-N, and by the ``Deutsche Forschungsgemeinschaft'' (DFG) under the contract AB 128/5-2. The work of M. Wellenzohn was supported by the MA 23.
1806.06363
\section{Introduction} Creating solid-state quantum emitters and integrating them into micro- and nanophotonic structures is one of the prime tasks in modern quantum engineering. Coupled solid-state quantum emitter-cavity systems rank among the most promising candidates for the realization of highly efficient single photon sources \cite{Ding2016,Schlehahn2016,Unsleber2015,Senellart2017,Chang2006}, spin photon interfaces \cite{DeGreve2012,Gao2012}, quantum sensing probes \cite{Anker2008} as well as building blocks for quantum simulation \cite{Wang2017d} and surface code quantum computing \cite{Greve2013}. While quantum emitters have been identified, studied and engineered in a variety of crystals including III-V \cite{Michler2000} and II-VI quantum dots \cite{Peng2000,Lowisch1996,Xin1996}, colour defects in diamond \cite{Kurtsiefer2000}, and impurities in SiC and organic polymers \cite{Castelletto2013,Castelletto2014}, atomically thin materials \cite{He2015,Srivastava2015a,Tonndorf2015,Koperski2015, Kumar2015} were recently established as a novel platform for quantum photonic devices. Quantum dots in III-V semiconductors and defect centers in diamond certainly belong to the most mature implementations \cite{Balasubramanian2008}, but the quality of site-controlled emitters still needs to be improved, posing a serious obstacle to their scalable fabrication in ordered arrays \cite{VanDerSar2009}. Ordered InAs/GaAs quantum dot arrays have been realized by selective area growth methods and epitaxial growth on patterned substrates \cite{schmidt2007lateral}, but in most cases the costly fabrication methods severely compromised their emission properties. Direct integration of positioned solid-state quantum emitters with photonic resonators has been accomplished \cite{Gallo2008,Schneider2009,Sunner2008a}, but only in selected cases, and genuine scalability has remained elusive.
The formation of quantum emitters in mono- and bilayers of transition metal dichalcogenides has now been observed in various implementations: initially, localized luminescence centers in exfoliated flakes were discovered close to their edges and have been associated with strain wrinkles \cite{Tonndorf2015,Koperski2015, Kumar2015}. In epitaxially grown flakes, random positioning of such spots was observed \cite{He2015}, indicating emission from defect-bound excitons. Recently, the formation of quantum emitters on modulated metal substrates \cite{Kern2015,Tripathi2018} as well as on nanopillars \cite{Palacios-Berraquero2017,Branny2017} was reported and associated with localized, engineered crystal strain fields, which outlines the unique possibility to deterministically induce quantum emitters in a straightforward manner by structuring the sample surface prior to the transfer. While the ordered formation of quantum emitters has thus far been mainly observed on dielectric, nanostructured surfaces, spontaneous emission enhancement was reported on rough metallic surfaces and gold-coated nanopillars, giving rise to localized plasmonic modes \cite{Tripathi2018,Cai2018}. Combining atomically thin materials, which comprise either tightly localized excitons or strongly bound free excitons, with nanoplasmonic cavities yields a promising pathway to study light-matter coupling on the nanoscale, enabled by the enormous field enhancements provided by metallic nanostructures \cite{Krasnok2018, Lalanne2018}. However, the deterministic coupling of well-ordered quantum emitters in atomically thin materials to resonant plasmonic modes has only recently been achieved \cite{Luo2018,Cai2018}. \section{Sample Structure and Setup} In this work, we demonstrate the feasibility of inducing ordered arrays of quantum emitters by defined arrays of metallic nanopillars, fabricated on a SiO$_2$ substrate.
Such structures directly represent a coupled quantum dot-nanocavity system, and act as polarization-controlled single photon sources. \begin{figure} \centering \includegraphics[scale=0.6]{fig1.pdf} \caption[]{\label{Surface structure and modulated monolayer}(a) Scanning electron microscope (SEM) image of the sample surface comprising metallic nanopillars as quantum emitter seeds and plasmonic nano-cavities. Inset: close-up view of a nanopillar. (b) Optical image of the pillar array after successful dry-transfer of an atomically thin WSe$_{2}$ monolayer. (c) Close-up SEM image of a single pillar covered by a strained monolayer, showing the formation of wrinkles.} \end{figure} The sample consists of a semi-insulating silicon substrate, with a \SI{200}{\nano\meter} thick SiO$_2$ layer on top. In order to fabricate the nanopillars, we first spin-coated a thin layer of PMMA and performed electron beam lithography to selectively expose rectangular areas in the resist with dimensions of \SI{20}{\nano\meter} - \SI{240}{\nano\meter}. After developing the resist, an \SI{80}{\nano\meter} thick gold layer was evaporated on the sample, followed by a lift-off step. A scanning electron microscope (SEM) image of a prototype nanopillar array with a pitch of \SI{2}{\micro\meter} is shown in Fig. 1a). On selected samples we additionally deposited a \SI{3}{\nano\meter} thin layer of Al$_2$O$_3$ via atomic layer deposition. Next, we fabricated atomically thin layers of WSe$_2$ via mechanical exfoliation using adhesive tape, and transferred the layers onto the pillar arrays via dry transfer \cite{Castellanos-Gomez2014} (Fig. 1b). We observe that some of the pillars pierced the monolayer, while a substantial number of nanopillars (> 50 $\%$) locally strained the layer, yielding the tent-like structure shown in Fig. 1c). Spatially resolved optical spectroscopy was performed in a micro-photoluminescence setup with optional high spatial resolution (using a fiber-based confocal setting).
The sample, mounted in a liquid-helium-cooled flow cryostat, is excited by a frequency-doubled Nd:YAG laser at \SI{532}{\nano\meter}. \section{Experimental Results and Discussion} Fig. 2a) depicts an exemplary power-dependent luminescence spectrum recorded at the position of a nanopillar with dielectric coating. The spectrum is widely dominated by a multitude of sharp emission lines, a typical signature of strongly localized emission centers in the crystal. In the low-power regime, these emission lines exhibit a slightly sub-linear intensity increase with the pump power prior to saturation (Fig. 2b). This behaviour is mainly due to the gold pillars absorbing part of the incident laser light \cite{Link1999}, as well as part of the light re-emitted by the emitters, which reduces the measured quantum efficiency. \begin{figure} \centering \includegraphics[scale=0.7]{fig2.pdf} \caption[]{\label{Microphotoluminescence spectroscopy of localized excitons} a) Power-dependent spectra on a nanopillar revealing many discrete emitters. b) Power-dependent study of a quantum emitter emission line before saturation starts above \SI{10}{\micro\watt}. c) Spatial map of a WSe$_2$ flake covering the nano-pillar array, showing the integrated intensity from $700 - \SI{800}{\nano\meter}$. The enhanced PL coincides with the \SI{4}{\micro\meter} pillar distance (black pattern). d) Spectral information extracted between the blue arrows in c), revealing a periodic increase in luminescence and the formation of additional localized emission centers.} \end{figure} The ordered formation of emitters on the nanopillar arrays is confirmed by a highly spatially resolved scanning microphotoluminescence study, applying the confocal configuration: here, we scan the sample surface underneath the excitation and collection spot using a pair of motorized linear stages with a step width of \SI{500}{\nano\meter}. The spectrally integrated map (700-\SI{800}{\nano\meter}) is shown in Fig. 2c).
It clearly evidences a regular pattern of bright emission sites, perfectly coinciding with the positions of the metallic pillars (dashed black pattern). Spectral information is best illustrated by a selected linescan between the blue arrows in Fig. 2c). Here, we clearly observe a two-fold effect of the nanopillars (Fig. 2d): a strong luminescence enhancement of the overall signal due to plasmons \cite{Hugall2018}, as well as the regular formation of sharp peaks below the free exciton energy (<\SI{1.74}{\electronvolt}), which we associate with tight exciton localization due to strain.\\ \begin{figure} \centering \includegraphics[scale=0.6]{fig3.pdf} \caption{\label{polarization and auto correlation} a) Second-order autocorrelation function of a quantum emitter on a pillar. The value of $g^{(2)}(\tau=0) = 0.17$ confirms single photon emission. Inset: spectrum of the single photon emitter. b) SEM images of two individual rectangles covered by WSe$_2$. Pillar A is horizontally and pillar B vertically aligned. c) and d) Polarization characteristics of three individual quantum emitters each on the two different \SI{90x30}{\nano\meter} nanopillars shown in b).} \end{figure} In order to provide evidence for the capability to emit single photons from the deterministically localized excitons, we performed second-order correlation measurements by exciting the sample with a \SI{532}{\nano\meter} CW laser (Fig.~3a). We selected a dominant emission feature from one square pillar ($140 \times \SI{140}{\nano\meter}$). The luminescence was spectrally filtered (bandwidth: $\approx$ \SI{300}{\micro\electronvolt}, 300 grooves/mm grating) and passed to a fiber-coupled Hanbury Brown and Twiss (HBT) setup. We observed a well-pronounced anti-bunching signal at zero delay time ($\tau = 0$), allowing us to extract a $g^{(2)}(\tau=0)$ value of 0.17, which clearly puts our system in the regime of single photon emission.
Polarization resolved spectroscopy on different nanopillars revealed a strongly linear polarization of the luminescence from the emitters. In Fig. 3b two exemplary \SI{90x30}{\nano\meter} pillars are shown, which are aligned perpendicular to each other and covered by the monolayer. Comparing the polarization of several emitters from these two pillars shows a strong correspondence between the polarization and the pillar orientation (Fig. 3c,d). This alignment of the polarization along the long axis of the given gold rectangle can be associated with the coupling of the emitter to the plasmonic excitations in the metal, which are much more pronounced along the extended axis, as has been demonstrated with similar plasmonic structures before \cite{Luo2017}. Slight deviations of the polarization angle also depend on the way the monolayer bends around the pillar, which further affects the polarization of the emission \cite{Kern2015}. \begin{figure} \centering \includegraphics[scale=0.6]{fig4.pdf} \caption{FDTD simulation: Vector maps of the field distribution and electric field enhancement E/E$_0$ at the top surface of a pillar with a) a square cross-section (\SI{140x140}{\nano\meter}) and b) a rod-like cross-section (\SI{90x30}{\nano\meter}), excited by an electromagnetic field E$_0$ of 1 V/m at \SI{740}{\nano\meter}. c) Calculated scattering cross-section spectra of square nanopillars with sizes from \SI{140}{\nano\meter} down to \SI{40}{\nano\meter}. d) Two scattering cross-section spectra of a rod-like nanopillar with a size of \SI{90x30}{\nano\meter} for incident light with orthogonal polarizations, E$_x$ (red) and E$_y$ (black).
The x-direction corresponds to the long edge of the rod-like nanopillar.} \end{figure} \section{Theory} In order to understand the optical enhancement of the quantum emitters in WSe$_2$ via a metal nanopillar array, we investigated the plasmon modes excited in a square and a rod-like nanopillar by the finite-difference time-domain (FDTD) method. For a square pillar with a size of \SI{140x140}{\nano\meter}, the electric field distribution on the top surface of the pillar, where a quantum emitter in WSe$_2$ can be placed, is calculated, as shown in Fig. 4(a). At the two vertical side edges orthogonal to the E$_x$ polarization, a strong field enhancement (E/E$_0$), with a maximum value of 20 relative to the incident light, is observed. In addition, the vector plot shows that the field enhancement at the edges stems predominantly from the E$_x$ field component. Because an emitter in 2D materials oscillates in-plane, a strong emission enhancement for defect emitters at the edges can be induced by resonant coupling to the plasmon mode. The mode in Fig. 4a is obtained by assuming E$_x$ linearly polarized incident light; a 90-degree rotated mode (not shown in the figure) is observed for E$_y$ polarized incident light. The scattering cross-section spectra in Fig. 4(c) show the spectral dependence of the plasmonic mode in a square pillar for side-edge sizes from \SI{40}{\nano\meter} to \SI{140}{\nano\meter}, where the resonance peak for a size of \SI{140x140}{\nano\meter} lies at \SI{733}{\nano\meter} with a large FWHM of \SI{234}{\nano\meter}. Therefore, the radiative emission of a quantum emitter with the spectrum shown in Fig. 3(a) can be enhanced by resonant coupling to a plasmonic mode. The plasmon resonance blueshifts with decreasing pillar size.
In order to understand the strong polarization dependence of the emission from a rod-like nanopillar, the mode profile and the scattering cross-section of a rod-like nanopillar with a cross-section of \SI{90x30}{\nano\meter} are investigated. For E$_x$ polarized incident light, the electric field distribution of the plasmonic mode in Fig. 4b is similar to that of the square pillar in Fig. 4a, except that the mode is elongated along the x-direction, following the rod-like shape. In the scattering cross-section spectrum, the rod-like pillar exhibits a strong plasmon resonance peak at a wavelength of \SI{686}{\nano\meter} for E$_x$ polarized light, whereas there is no significant resonance for E$_y$ polarized light. Therefore, the \SI{90x30}{\nano\meter} nanopillar can enhance the x-polarized emission of a quantum emitter and suppress the y-polarized emission, resulting in a strong linear polarization aligned along the long axis, as shown in Figs. 3(c) and (d). \section{Summary} In conclusion, we demonstrated the formation of ordered arrays of quantum emitters in an atomically thin layer of WSe$_2$, transferred onto a metal nanopillar array. The gold nanopillars induce the formation of quantum emitters and can furthermore act as plasmonic resonators, granting active polarization control via deterministic light-matter coupling. Our work is a first step towards highly scalable cavity quantum electrodynamics with engineered quantum emitters in two-dimensional materials. \textbf{Funding:} State of Bavaria, H2020 European Research Council (ERC). National Research Foundation of Korea, Korean Government Grant NRF-2016R1C1B2007007.
1806.06396
\section{Introduction} \vspace{-0.1cm} Due to their high mobility and flexibility, unmanned aerial vehicles (UAVs) have found interesting applications in wireless communications \cite{zeng2016wireless,qingqing18UAVmagazine,Mozaffari2015,Yang2018,Bor2016}. Line-of-sight (LoS) channels usually exist between UAVs and ground nodes in UAV wireless communication systems \cite{Lin2017}. This has recently inspired a proliferation of studies on the new research paradigm of jointly optimizing UAV trajectory design and communication resource allocation, e.g., for the multiple access channel (MAC) and broadcast channel (BC) \cite{JR:wu2017_capacity,JR:wu2017_ofdm}, the interference channel (IFC) \cite{WuGC2017}, and the wiretap channel \cite{Zhang2017GC,Lian2018wcl}. In particular, as shown in \cite{JR:wu2017_capacity} and \cite{JR:wu2017_ofdm}, significant communication throughput gains can be achieved by mobile UAVs over static UAVs/fixed terrestrial BSs by exploiting the new design degree of freedom offered by UAV trajectory optimization, especially for delay-tolerant applications. In \cite{WuGC2017}, a joint UAV trajectory, user association, and power control optimization framework is proposed for cooperative multi-UAV enabled wireless networks. However, legitimate UAV-ground communications are more prone to interception by potential eavesdroppers on the ground than terrestrial wireless communication systems, which gives rise to a new security challenge. Although security can be conventionally handled by cryptographic methods in the higher communication protocol layers, physical layer security is emerging as a promising alternative technology to realize secrecy in wireless communications \cite{Gopala2008}. One widely adopted performance metric in physical layer security design is the so-called secrecy rate \cite{QiangLi2015}, at which confidential information can be reliably conveyed.
For secure UAV communications, a joint UAV trajectory and transmit power control design framework has been proposed in \cite{Zhang2017GC}, where the average secrecy rate is maximized by proactively enhancing the legitimate link and degrading the eavesdropping link via UAV trajectory design in addition to power adaptation. However, the location of the eavesdropper is assumed to be perfectly known in \cite{Zhang2017GC}, which is overly optimistic. {In practice, although the UAV can estimate the location of a potential eavesdropper by applying a camera or synthetic aperture radar \cite{Li2015SAR},} the eavesdropper may remain silent to hide its existence and thus the location estimate is expected to suffer from errors. As a result, existing security-enabled techniques based on the assumption of perfect location information of eavesdroppers may result in significant degradation of the security performance. Moreover, there may be more than one eavesdropper trying to intercept the legitimate UAV-ground communication in practice. In this scenario, the UAV transmitter needs to steer away from multiple eavesdroppers and, at the same time, approach its intended receiver as closely as possible to enhance the secrecy rate. Hence, designing the UAV trajectory in such a scenario is an interesting but challenging problem, which has not been addressed in \cite{Zhang2017GC}. \begin{figure}[!t] \centering \includegraphics[width=0.8\columnwidth]{DLModel.eps} \caption{A UAV (Alice) communicates with a ground node (Bob) with $K$ potential eavesdroppers (Eves) on the ground.} \label{Figphysical} \end{figure} In this paper, we consider secure legitimate UAV-ground communications via robust joint UAV trajectory and transmit power design in a practical scenario, where there are multiple eavesdroppers on the ground, as shown in Fig. \ref{Figphysical}. The UAV only knows the approximate regions in which the eavesdroppers are located, while the exact locations of the eavesdroppers are unknown.
{ We aim to maximize the average worst-case secrecy rate over a given flight duration of the UAV, subject to its mobility constraints as well as its average and peak transmit power constraints. The main contributions are summarized as follows. \vspace{-0.1cm} \begin{itemize} \item The considered problem is intractable and obtaining the globally optimal solution is difficult due to its non-convexity and the semi-infinite number of constraints. To tackle the intractability, we propose an efficient suboptimal algorithm to solve this problem, based on the block coordinate descent method, the $\mathcal{S}$-Procedure, and the successive convex optimization method. \item Since the proposed algorithm accounts for and provides robustness against the imperfect location information of multiple eavesdroppers, it is more suitable for practical applications than the existing work on secure UAV communications \cite{Zhang2017GC,Lian2018wcl}. \item Simulation results show that the proposed algorithm can improve the average worst-case secrecy rate significantly, compared to other benchmark schemes that assume perfect location information of the eavesdroppers or ignore the eavesdroppers. \end{itemize} } \vspace{-0.4cm} \section{System Model And Problem Formulation} \vspace{-0.1cm} {We consider a UAV-ground communication system, where $K$ eavesdroppers (Eves) on the ground try to intercept the legitimate communication from a UAV (Alice) to a ground node (Bob), as shown in Fig. \ref{Figphysical}.} We express locations in the three-dimensional Cartesian coordinate system. Without loss of generality, we assume that Bob is located at $(0,0,0)$, which is perfectly known by Alice. For $k \in \mathcal{K} \triangleq \{1,\ldots,K\}$, the exact location of Eve $k$, denoted by $(x_k,y_k,0)$ in meters (m), is not known, but its estimated location, denoted by $(x_{\text{E}_k}, y_{\text{E}_k}, 0)$ in m, is assumed to be known.
The relation between the actual and the estimated $x$-$y$ coordinates of Eve $k$ is given by \vspace{-0.2cm} \begin{equation} \label{EquLocErr} x_k = x_{\text{E}_k} + \Delta x_k, \; y_k = y_{\text{E}_k} + \Delta y_k, \vspace{-0.2cm} \end{equation} respectively, where $\Delta x_k$ and $\Delta y_k$ denote the estimation errors in $x_k$ and $y_k$, respectively, and satisfy the following condition \vspace{-0.15cm} \begin{equation} \label{EquErrArea} (\Delta x_k, \Delta y_k) \in \mathcal{E}_k \triangleq \{(\Delta x_k, \Delta y_k) | \Delta x_k^2 + \Delta y_k^2 \leq Q_k^2 \}, \vspace{-0.1cm} \end{equation} where $\mathcal{E}_k$ denotes a continuous set of possible errors. Thus, Eve $k$ can be regarded as being located in an uncertain circular region with center $(x_{\text{E}_k}, y_{\text{E}_k}, 0)$ and radius $Q_k$. It is assumed that Alice flies at a constant altitude $H$ in m, which is specified for safety considerations such as building avoidance \cite{WuGC2017}. Thus, Alice's coordinate over time is denoted as $(x(t),y(t),H)$, $0 \leq t \leq T$, where $T$ in seconds (s) is its flight duration. {To facilitate trajectory design for Alice, we discretize the flight duration $T$ into $N$ sufficiently small time slots of equal length $d_t$. Since $d_t$ is small enough, Alice can be regarded as static within each slot. Thus, Alice's trajectory over the duration $T$ can be represented by a sequence $\{( x[n],y[n],H )\}_{n=1}^{N}$.
We let $(x[0],y[0],H)$ and $(x[N+1],y[N+1],H)$ denote Alice's initial and final locations, respectively, and then write the mobility constraints of Alice as \vspace{-0.15cm} \begin{equation} \label{EquMobilityCon} ( x[n+1]-x[n] )^2 + ( y[n+1]-y[n] )^2 \leq (v_{\max} d_t)^2, \; \forall n, \vspace{-0.15cm} \end{equation} where $v_{\max}$ denotes the maximum speed of Alice.} {The channel from Alice to Bob is assumed to be a line-of-sight (LoS) channel \cite{WuGC2017,wu2018JSAC_ICC,wu2017joint_globecom}\footnote{Measurement results in \cite{Lin2017} show that the LoS channel model is a good approximation for UAV-ground communications in practice even if the UAV flies at a moderate altitude, e.g., $85$m.}. Thus, the power gain of the channel from Alice to Bob in slot $n$ is given by } \vspace{-0.2cm} \begin{equation} g_{\text{AB}}[n] = \beta_0 d_{\text{AB}}^{-2}[n] = \frac{ \beta_0 }{ x^2[n] + y^2[n] + H^2 }, \end{equation} {where $\beta_0$ denotes the power gain of a channel with reference distance $d_0=1$m \cite{WuGC2017}, and $d_{\text{AB}}[n] = \sqrt{ x^2[n] + y^2[n] + H^2 }$ denotes the distance between Alice and Bob in slot $n$. Similarly, the channel from Alice to Eve $k$ can be assumed to be an LoS channel, whose power gain in slot $n$ is given by } \vspace{-0.2cm} \begin{equation} g_{\text{AE}_k}[n] = \frac{ \beta_0 }{ (x[n]- x_k)^2 + (y[n] - y_k)^2 + H^2 }. \vspace{-0.1cm} \end{equation} { Let $P[n]$ denote the transmit power of Alice in slot $n$, and $\bar{P}$ and $P_{\text{peak}}$ denote the average power and peak power of Alice, respectively. Thus, we write the average and peak transmit power constraints of Alice as} \vspace{-0.2cm} \begin{subequations} \label{EquPowerCon} \begin{align} &\frac{1}{N} \sum_{n=1}^{N} P[n] \leq \bar{P}, \label{EquAvgPowCon} \\ &0 \leq P[n] \leq P_{\text{peak}}, \; \forall n. \vspace{-0.1cm} \end{align} \end{subequations} To ensure that \eqref{EquAvgPowCon} is a non-trivial constraint, we assume $\bar{P}<P_{\text{peak}}$.
{Then, we can express the achievable rate from Alice to Bob in slot $n$ in bits/second/Hertz (bps/Hz) as} \vspace{-0.2cm} \begin{align} R_{\text{AB}}[n] = & \log_2 \left(1 + \frac{ P[n] g_{\text{AB}}[n] }{ \sigma^2 } \right) \nonumber \\ = & \log_2 \left( 1+ \frac{ \gamma_0 P[n] } { x^2[n] + y^2[n] + H^2 } \right), \label{EquRAB} \vspace{-0.1cm} \end{align} {where $\gamma_0 =\beta_0 / \sigma^2$ and $\sigma^2$ is the Gaussian noise power at the receiver. Similarly, we express the achievable rate from Alice to Eve $k$ in slot $n$ in bps/Hz as } \vspace{-0.2cm} \begin{equation} \label{EquRAE} R_{\text{AE}_{k}}[n] = \log_2 \bigg( 1 + \frac{ \gamma_0 P[n] }{ (x[n]- x_k)^2 + (y[n] - y_k)^2 + H^2 } \bigg). \end{equation} With \eqref{EquRAB} and \eqref{EquRAE}, {the average worst-case secrecy rate from Alice to Bob over the flight duration $T$ in bps/Hz is \cite{QiangLi2015}} \vspace{-0.2cm} \begin{equation} \label{EquSecrecyRate} R_{\text{sec}} = \frac{1}{N} \sum_{n=1}^N \left[ R_{\text{AB}}[n] - \max_{ k\in \mathcal{K} } \max_{ (\Delta x_k, \Delta y_k) \in \mathcal{E}_k } R_{\text{AE}_k}[n] \right]^{+}, \end{equation} where $[x]^+ \triangleq \max(x,0)$. { To secure the communication from Alice to Bob, we jointly design the trajectory and transmit power of Alice to maximize the average worst-case secrecy rate in \eqref{EquSecrecyRate} subject to its mobility and power constraints in \eqref{EquMobilityCon} and \eqref{EquPowerCon}. The optimization variables include Alice's trajectory and transmit power over $N$ time slots, which are denoted as $\mathbf{x} \triangleq \left[x[1], \ldots, x[N]\right]^{\dagger}$, $\mathbf{y} \triangleq \left[y[1], \ldots, y[N]\right]^{\dagger}$, and $\mathbf{P} \triangleq \left[P[1],\ldots,P[N] \right]^{\dagger}$, where $\dagger$ denotes the transpose operation.
The problem is formulated as follows, where the constant term $1/N$ in \eqref{EquSecrecyRate} has been dropped, } \vspace{-0.2cm} \begin{align} \max_{ \mathbf{x}, \mathbf{y}, \mathbf{P} } & \; \sum_{n=1}^N \bigg[ R_{\text{AB}}[n] - \max_{k\in \mathcal{K}} \max_{ (\Delta x_k, \Delta y_k) \in \mathcal{E}_k } R_{\text{AE}_k}[n] \bigg]^+ \label{EquOriginal} \\ \text{s.t.} \; & \; \eqref{EquMobilityCon}, \; \eqref{EquPowerCon}. \nonumber \end{align} {Problem \eqref{EquOriginal} is difficult to solve optimally for the following reasons. First, the operator $[\cdot]^+$ introduces non-smoothness to the objective function. Second, the objective function is still not jointly concave with respect to $\mathbf{x}$, $\mathbf{y}$, and $\mathbf{P}$ even without $[\cdot]^+$. Third, the infinite number of possible $(\Delta x_k, \Delta y_k)$ makes \eqref{EquOriginal} an intractable semi-infinite optimization problem.} In the following section, we propose a computationally efficient iterative suboptimal algorithm to solve problem \eqref{EquOriginal} approximately. \vspace{-0.4cm} \section{Proposed Algorithm for Problem \eqref{EquOriginal}} \vspace{-0.1cm} We first tackle the non-smoothness of the objective function of problem \eqref{EquOriginal} by using the following lemma. \vspace{-0.1cm} \begin{lemma} Problem \eqref{EquOriginal} is equivalent\footnote{In this paper, the word ``equivalent'' means that both problems share the same optimal solution.} to the following problem: \vspace{-0.2cm} \begin{align} \max_{ \mathbf{x}, \mathbf{y}, \mathbf{P} } & \; \sum_{n=1}^N \bigg[ R_{\text{AB}}[n] - \max_{k\in \mathcal{K}} \max_{ (\Delta x_k, \Delta y_k) \in \mathcal{E}_k } R_{\text{AE}_k}[n] \bigg] \label{EquReform} \\ \text{s.t.} \; & \; \eqref{EquMobilityCon}, \; \eqref{EquPowerCon}. \nonumber \vspace{-0.1cm} \end{align} \end{lemma} \begin{proof} Denote $W_1^*$ and $W_2^*$ as the optimal values of problems \eqref{EquOriginal} and \eqref{EquReform}, respectively.
First, since $[x]^+ \geq x, \forall x$, we have $W_1^* \geq W_2^*$. Next, denote $(\mathbf{x}^*, \mathbf{y}^*, \mathbf{P}^*)$ as the optimal solution to \eqref{EquOriginal}, where $\mathbf{P}^*=[ P^*[1],\ldots,P^*[N] ]^\dagger$. Let $f(P[n]) = R_{\text{AB}}[n] - \max_{k\in \mathcal{K}} \max_{ (\Delta x_k, \Delta y_k) \in \mathcal{E}_k } R_{\text{AE}_k}[n]$. We construct a feasible solution $(\hat{\mathbf{x}},\hat{\mathbf{y}},\hat{\mathbf{P}})$ to \eqref{EquReform}, such that $\hat{\mathbf{x}} = \mathbf{x}^*$, $\hat{\mathbf{y}} = \mathbf{y}^*$, and the elements of $\hat{\mathbf{P}}$ are obtained as: if $f(P^*[n])\geq 0$, $\hat{P}[n] = P^*[n]$; otherwise $\hat{P}[n] = 0$. Denote the objective value of \eqref{EquReform} attained at $(\hat{\mathbf{x}},\hat{\mathbf{y}},\hat{\mathbf{P}})$ as $\hat{W}$. The newly constructed solution $(\hat{\mathbf{x}},\hat{\mathbf{y}},\hat{\mathbf{P}})$ ensures that $\hat{W}=W_1^*$. Since $(\hat{\mathbf{x}},\hat{\mathbf{y}},\hat{\mathbf{P}})$ is feasible to \eqref{EquReform}, it follows that $W_2^* \geq \hat{W}$ and thus $W_2^* \geq W_1^*$. Therefore, $W_1^* = W_2^*$, which completes the proof. \end{proof} Although problem \eqref{EquReform} is more tractable, it is still difficult to solve due to its non-convexity. Nevertheless, we observe that the optimization variables can be partitioned into two blocks, i.e., $(\mathbf{x}, \mathbf{y})$ and $\mathbf{P}$, which facilitates the algorithm design for solving problem \eqref{EquReform} via the block coordinate descent method \cite{WuGC2017,Boyd2004}. Specifically, we solve \eqref{EquReform} by solving the following two sub-problems iteratively: sub-problem 1 optimizes $\mathbf{P}$ under given $(\mathbf{x}, \mathbf{y})$; while sub-problem 2 optimizes $(\mathbf{x}, \mathbf{y})$ under given $\mathbf{P}$, as detailed in the next two subsections, respectively. In the end, we summarize the overall algorithm and show its convergence. 
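Purely as a numerical illustration (the function and variable names below are ours, not the paper's), the average worst-case secrecy rate \eqref{EquSecrecyRate} that the two sub-problems alternately improve can be evaluated in closed form, since the inner minimum of the Alice--Eve distance over each disk $\mathcal{E}_k$ is attained at the boundary point nearest to Alice:

```python
import numpy as np

def avg_worst_case_secrecy_rate(x, y, P, eves, H, gamma0):
    """Evaluate the average worst-case secrecy rate for a discretized
    trajectory (x, y) and power profile P.

    eves is a list of uncertainty disks (x_E, y_E, Q). The minimum of the
    Alice-Eve distance over each disk has the closed form: worst-case
    horizontal distance max(d_k - Q_k, 0), plus the altitude term H^2."""
    r_ab = np.log2(1.0 + gamma0 * P / (x**2 + y**2 + H**2))   # legitimate rate
    r_ae = np.zeros_like(r_ab)
    for xE, yE, Q in eves:
        d = np.hypot(x - xE, y - yE)                  # horizontal distance
        worst_sq = np.maximum(d - Q, 0.0)**2 + H**2   # min squared dist over disk
        r_ae = np.maximum(r_ae, np.log2(1.0 + gamma0 * P / worst_sq))
    return np.maximum(r_ab - r_ae, 0.0).mean()        # [.]^+ and average over N
```

For instance, the rate drops to zero whenever Alice hovers inside an uncertainty disk, since the worst-case Eve is then as close as Bob can ever be.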
\vspace{-0.4cm} \subsection{Solution to Sub-Problem 1} \vspace{-0.1cm} For given $(\mathbf{x}, \mathbf{y})$, sub-problem 1 can be written as \vspace{-0.2cm} \begin{equation} \label{EquSubProb1} \max_{\mathbf{P}} \; \sum_{n=1}^N \big[ \log_2(1+\alpha_n P[n]) - \log_2(1+\beta_n) \big] \; \; \text{s.t.} \; \eqref{EquPowerCon}, \vspace{-0.1cm} \end{equation} where \vspace{-0.2cm} \begin{equation} \label{Equa} \alpha_n = \frac{ \gamma_0 } { x^2[n] + y^2[n] + H^2 }, \; \; \beta_n = \frac{ \gamma_0 } { \min_{k \in \mathcal{K}} \theta_{k,n} }, \quad \; \; \vspace{-0.1cm} \end{equation} \vspace{-0.2cm} \begin{equation} \label{Equck} \theta_{k,n} = \min_{ (\Delta x_k, \Delta y_k) \in \mathcal{E}_k } (x[n]-x_k)^2 + (y[n]-y_k)^2 + H^2. \vspace{-0.1cm} \end{equation} By substituting \eqref{EquLocErr} and \eqref{EquErrArea} into \eqref{Equck}, $\theta_{k,n}$ can be obtained as \vspace{-0.2cm} \begin{equation} \theta_{k,n} = \begin{cases} H^2 & d_k \leq Q_k , \\ (d_k-Q_k)^2 + H^2 & d_k > Q_k, \end{cases} \vspace{-0.05cm} \end{equation} where $d_k = \sqrt{ (x[n]-x_{\text{E}_k})^2 + ( y[n]-y_{\text{E}_k} )^2 }$. The optimal solution to problem \eqref{EquSubProb1} can be obtained as \cite{Gopala2008} \vspace{-0.2cm} \begin{equation} \label{EquPowSol} P^*[n] = \begin{cases} \min \left( [ \hat{P}[n] ]^+ , P_{\text{peak}} \right) & \alpha_n > \beta_n, \\ 0 & \alpha_n \leq \beta_n , \end{cases} \vspace{-0.1cm} \end{equation} where \vspace{-0.2cm} \begin{equation} \label{EquOptP} \hat{P}[n] = \sqrt{ \left( \frac{1}{2\beta_n} - \frac{1}{2\alpha_n} \right)^2 + \frac{1}{\lambda \ln 2} \left( \frac{1}{\beta_n} - \frac{1}{\alpha_n} \right) } - \frac{1}{2\beta_n} - \frac{1}{2\alpha_n}. \end{equation} In \eqref{EquOptP}, $\lambda \geq 0$ is a parameter to ensure that the constraint \eqref{EquAvgPowCon} is satisfied at the optimal solution, which can be determined by bisection search \cite{Boyd2004}. 
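As a hedged numerical sketch (not the authors' code), the closed-form solution \eqref{EquPowSol}--\eqref{EquOptP} with the bisection search for $\lambda$ can be implemented as follows, assuming $\alpha_n$ and $\beta_n$ have been precomputed via \eqref{Equa}; the bracketing interval for $\lambda$ and all names are illustrative:

```python
import numpy as np

def optimal_power(alpha, beta, P_avg, P_peak):
    """Closed-form power allocation for sub-problem 1: transmit only in
    slots with alpha_n > beta_n, clip to [0, P_peak], and bisect on the
    dual variable lambda until the average-power constraint is met."""
    alpha = np.asarray(alpha, float)
    beta = np.asarray(beta, float)
    pos = alpha > beta                     # slots where positive secrecy rate is possible

    def P_of(lam):
        a, b = alpha[pos], beta[pos]
        half = 1.0 / (2*b) + 1.0 / (2*a)
        P_hat = np.sqrt((1/(2*b) - 1/(2*a))**2
                        + (1/b - 1/a) / (lam * np.log(2))) - half
        P = np.zeros_like(alpha)
        P[pos] = np.clip(P_hat, 0.0, P_peak)
        return P

    if not pos.any():
        return np.zeros_like(alpha)
    if P_of(1e-12).mean() <= P_avg:        # average-power constraint inactive
        return P_of(1e-12)
    lo, hi = 1e-12, 1e8                    # illustrative bracket for lambda
    for _ in range(200):                   # bisection: P_of is decreasing in lam
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if P_of(mid).mean() > P_avg else (lo, mid)
    return P_of(hi)
```

The structure mirrors water-filling: slots with a larger legitimate-to-eavesdropping channel gap receive more power, and $\lambda$ plays the role of the water level.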
\vspace{-0.4cm} \subsection{Solution to Sub-Problem 2} \vspace{-0.1cm} \begin{figure*}[!t] \normalsize \begin{align} \max_{\mathbf{x},\mathbf{y}} \sum_{n=1}^N \bigg[ \log_2 \left( 1+\frac{P_n}{ x^2[n] + y^2[n] + H^2 } \right) - \log_2 \bigg( 1 + \frac{P_n}{ \min\limits_{k \in \mathcal{K}} \min\limits_{ (\Delta x_k, \Delta y_k) \in \mathcal{E}_k } (x[n]-x_k)^2 + (y[n]-y_k)^2 + H^2 } \bigg) \bigg] \; \text{s.t.} \; \eqref{EquMobilityCon}. \label{EquProbSub2} \end{align} \hrulefill \end{figure*} By setting $P_n = \gamma_0 P[n]$, sub-problem 2 can be expressed as \eqref{EquProbSub2} shown at the top of the next page, which cannot be solved optimally in polynomial time due to its non-convexity. By introducing slack variables $\mathbf{u} \triangleq [u[1],\ldots,u[N]]^\dagger$ and $\mathbf{t} \triangleq [t[1],\ldots,t[N]]^\dagger$, we first consider the following equivalent problem: \vspace{-0.2cm} \begin{subequations} \label{EquReform2} \begin{align} \max_{ \mathbf{x}, \mathbf{y}, \mathbf{u},\mathbf{t} } & \; \sum_{n=1}^N \bigg[ \log_2 \left( 1+ \frac{ P_n } { u[n] } \right) - \log_2 \bigg( 1 + \frac{ P_n }{ t[n] } \bigg) \bigg] \label{EquReform2Obj} \\ \text{s.t.} \; \; & \min_{ (\Delta x_k, \Delta y_k) \in \mathcal{E}_k } (x[n]- x_k)^2 + (y[n] - y_k)^2 + H^2 \nonumber \\ & \geq t[n], \; \forall n,k, \label{EquCont} \\ & x^2[n] + y^2[n] + H^2 - u[n] \leq 0, \; \forall n, \label{EquConu} \\ & t[n] \geq H^2, \; \forall n, \; \eqref{EquMobilityCon}.\nonumber \end{align} \end{subequations} {Problems \eqref{EquProbSub2} and \eqref{EquReform2} have the same optimal solution of $(\mathbf{x},\mathbf{y})$, since the constraints \eqref{EquCont} and \eqref{EquConu} are active at the optimal solution to problem \eqref{EquReform2}. This can be proved by contradiction: if constraints \eqref{EquCont} and \eqref{EquConu} were inactive, the objective value of \eqref{EquReform2} could be improved by increasing $t[n]$ (or decreasing $u[n]$).
Hence, we can focus on solving problem \eqref{EquReform2}.} However, problem \eqref{EquReform2} is still intractable, since there is an infinite number of $(\Delta x_k, \Delta y_k)$ in constraint \eqref{EquCont} due to the continuous nature of $\mathcal{E}_k$. Now, we convert \eqref{EquCont} into equivalent constraints as follows. First, we substitute \eqref{EquLocErr} and \eqref{EquErrArea} into \eqref{EquCont} and rewrite it as \vspace{-0.2cm} \begin{subequations} \begin{align} & \Delta x_k^2 + \Delta y_k^2 -Q_k^2 \leq 0 ,\; \forall k, \label{EquInequ1} \\ & -(x[n]- x_{\text{E}_k} -\Delta x_k )^2 - (y[n] - y_{\text{E}_k} -\Delta y_k)^2 \nonumber \\ &- H^2 + t[n] \leq 0 , \; \forall k. \label{EquInequ2} \vspace{-0.1cm} \end{align} \end{subequations} Next, according to $\mathcal{S}$-Procedure \cite{Boyd2004}, since there exists a point $(\Delta \hat{x}_k, \Delta \hat{y}_k)$ (e.g., $(\Delta \hat{x}_k, \Delta \hat{y}_k)=(0,0)$) such that $\Delta \hat{x}_k^2 + \Delta \hat{y}_k^2 -Q_k^2 < 0$, the implication \eqref{EquInequ1} $\Rightarrow$ \eqref{EquInequ2} holds if and only if there exists $\xi_k[n] \geq 0$ such that \vspace{-0.15cm} \begin{equation} \label{EquSProcMat1} \boldsymbol{\Phi}(x[n], y[n], t[n], \xi_k[n]) \succeq \mathbf{0}, \; \forall k,n, \vspace{-0.1cm} \end{equation} where \vspace{-0.2cm} \begin{displaymath} \begin{split} & \boldsymbol{\Phi}(x[n], y[n], t[n], \xi_k[n]) \\ \; \; \; \; \; \; \; =& \begin{bmatrix} \xi_k[n] + 1& 0 & x_{\text{E}_k}-x[n] \\ 0 & \xi_k[n]+1 & y_{\text{E}_k} - y[n] \\ x_{\text{E}_k}-x[n] & y_{\text{E}_k} - y[n] & -Q_k^2 \xi_k[n] + c_k[n] \end{bmatrix}, \; \text{and} \vspace{-0.1cm} \end{split} \end{displaymath} \vspace{-0.2cm} \begin{align} c_k[n] =& \; x^2[n] - 2 x_{\text{E}_k} x[n] + x_{\text{E}_k}^2 + y^2[n] - 2 y_{\text{E}_k} y[n] + y_{\text{E}_k}^2 \; \; \; \; \; \nonumber \\ & +H^2 - t[n]. 
\label{Equc} \end{align} By replacing \eqref{EquCont} with \eqref{EquSProcMat1} and introducing slack variables $\boldsymbol{\Xi} \triangleq [\boldsymbol{\xi}_1, \ldots, \boldsymbol{\xi}_K ]$, where $\boldsymbol{\xi}_k \triangleq [\xi_k[1],\ldots,\xi_k[N] ]^\dagger$, we rewrite problem \eqref{EquReform2} into an equivalent form: \vspace{-0.2cm} \begin{subequations} \label{EquReform3} \begin{align} \max_{ \mathbf{x}, \mathbf{y}, \mathbf{u},\mathbf{t}, \boldsymbol{\Xi} } \; & \sum_{n=1}^N \bigg[ \log_2 \left( 1+ \frac{ P_n } { u[n] } \right) - \log_2 \bigg( 1 + \frac{ P_n }{ t[n] } \bigg) \bigg] \label{EquReform3Obj} \\ \text{s.t.} \; \;\; & \boldsymbol{\Phi}(x[n], y[n], t[n], \xi_k[n]) \succeq \mathbf{0}, \; \forall k,n, \label{EquReform3Con2} \\ & t[n] \geq H^2, \; \xi_k[n] \geq 0, \; \forall k,n,\label{EquReform3Con3} \\ & \eqref{EquMobilityCon}, \; \eqref{EquConu}. \nonumber \end{align} \end{subequations} The objective function in \eqref{EquReform3Obj} is non-concave, since $ \log_2 ( 1+ \frac{ P_n } { u[n] } )$ is convex in $u[n]$. Moreover, the constraint \eqref{EquReform3Con2} is non-convex, since the terms $x^2[n]$ and $y^2[n]$ contained in $c_k[n]$ (see \eqref{Equc}) are non-linear. Thus, problem \eqref{EquReform3} is difficult to solve optimally due to its non-convexity. We propose an iterative algorithm to find an approximate solution to problem \eqref{EquReform3} as follows. First, the algorithm starts from a point $\mathbf{x}_{\text{fea}} \triangleq \left[ x_{\text{fea}}[1], \ldots, x_{\text{fea}}[N] \right]^{\dagger}$, $\mathbf{y}_{\text{fea}} \triangleq \left[ y_{\text{fea}}[1], \ldots, y_{\text{fea}}[N] \right]^{\dagger}$ and $\mathbf{u}_{\text{fea}} \triangleq \left[ u_{\text{fea}}[1], \ldots, u_{\text{fea}}[N] \right]^{\dagger}$, which is feasible for \eqref{EquReform3}.
Then, by using the first-order Taylor expansions of $\log_2 ( 1+ \frac{ P_n } { u[n] } )$, $x^2[n]$ and $y^2[n]$ at $u_{\text{fea}}[n]$, $x_{\text{fea}}[n]$ and $y_{\text{fea}}[n]$, respectively, \vspace{-0.2cm} \begin{align} \log_2 \left( 1+ \frac{ P_n } { u[n] } \right) \geq & \log_2 \left( 1+ \frac{ P_n } { u_{\text{fea}}[n] } \right) \nonumber \\ & - \frac{ P_n ( u[n] - u_{\text{fea}}[n] ) }{ \ln 2 ( u_{\text{fea}}^2[n] + P_n u_{\text{fea}}[n] ) }, \label{EquTayloru} \end{align} \begin{equation} \label{EquTaylorx} x^2[n] \geq - x_{\text{fea}}^2[n] + 2 x_{\text{fea}}[n] x[n], \vspace{-0.1cm} \end{equation} \begin{equation} \label{EquTaylory} y^2[n] \geq - y_{\text{fea}}^2[n] + 2 y_{\text{fea}}[n] y[n], \vspace{-0.1cm} \end{equation} problem \eqref{EquReform3} is approximately transformed to \vspace{-0.2cm} \begin{subequations} \label{EquSubProb2Recast} \begin{align} \max_{ \mathbf{x}, \mathbf{y}, \mathbf{u},\mathbf{t}, \boldsymbol{\Xi} } & \; \sum_{n=1}^N -\frac{ P_n ( u[n] - u_{\text{fea}}[n] ) }{ \ln2 (u_{\text{fea}}^2[n] + P_n u_{\text{fea}}[n] ) } - \log_2 \bigg( 1 + \frac{ P_n }{ t[n] } \bigg) \\ \text{s.t.} \; \;\; & \tilde{ \boldsymbol{\Phi} } (x[n], y[n], t[n], \xi_k[n]) \succeq \mathbf{0} , \; \forall k,n, \label{EquSubProb2RcCon2} \\ & \eqref{EquMobilityCon}, \; \eqref{EquConu}, \; \eqref{EquReform3Con3}. \nonumber \end{align} \end{subequations} where \vspace{-0.2cm} \begin{align} & \tilde{ \boldsymbol{\Phi} } (x[n], y[n], t[n], \xi_k[n]) \nonumber \\ \quad \; \; \; =& \begin{bmatrix} \xi_k[n] + 1& 0 & x_{\text{E}_k}-x[n] \\ 0 & \xi_k[n]+1 & y_{\text{E}_k}-y[n] \\ x_{\text{E}_k}-x[n] & y_{\text{E}_k}-y[n] & -Q_k^2 \xi_k[n] + \tilde{c}_k[n] \end{bmatrix}, \; \text{and} \nonumber \end{align} \begin{align} &\tilde{c}_k[n] = - x_{\text{fea}}^2[n] + 2 x_{\text{fea}}[n] x[n] - 2 x_{\text{E}_k} x[n] + x_{\text{E}_k}^2 - y_{\text{fea}}^2[n] \; \; \; \nonumber \\ & \quad \quad \; + 2 y_{\text{fea}}[n] y[n] - 2 y_{\text{E}_k} y[n] + y_{\text{E}_k}^2 +H^2 - t[n]. 
\label{Equctilde} \end{align} Note that problem \eqref{EquSubProb2Recast} is a semidefinite programming problem, which can be optimally solved by the interior-point method \cite{Boyd2004}. { \emph{Remark 1:} Since \eqref{EquTaylorx} and \eqref{EquTaylory} are lower bounds for $x^2[n]$ and $y^2[n]$, respectively \cite{Boyd2004}, we have $c_k[n] \geq \tilde{c}_k[n]$ and thus \vspace{-0.2cm} \begin{equation} \label{EquLargerPSM} \boldsymbol{\Phi} (x[n], y[n], t[n], \xi_k[n]) \succeq \tilde{ \boldsymbol{\Phi} } (x[n], y[n], t[n], \xi_k[n]), \vspace{-0.1cm} \end{equation} which means that \eqref{EquSubProb2RcCon2} implies \eqref{EquReform3Con2}. Hence, the solution to problem \eqref{EquSubProb2Recast} must be a feasible solution to problem \eqref{EquReform3}. \emph{Remark 2:} Since \eqref{EquTayloru} is a lower bound of $\log_2 ( 1+ \frac{ P_n } { u[n] } )$ \cite{Boyd2004}, problem \eqref{EquSubProb2Recast} maximizes a lower bound of the objective function of \eqref{EquReform3}. This lower bound is equal to the objective value of \eqref{EquReform3} only at $(\mathbf{x}_{\text{fea}}, \mathbf{y}_{\text{fea}}, \mathbf{u}_{\text{fea}})$, so the objective value of problem \eqref{EquReform3} with the solution to problem \eqref{EquSubProb2Recast} is equal to or greater than that with the solution $(\mathbf{x}_{\text{fea}}, \mathbf{y}_{\text{fea}}, \mathbf{u}_{\text{fea}})$.} \vspace{-0.5cm} \subsection{Overall Algorithm} \label{SecOverallAlg} \vspace{-0.1cm} {We summarize the overall algorithm in Algorithm 1, which solves problems \eqref{EquSubProb1} and \eqref{EquSubProb2Recast} alternately until convergence.
As shown in the previous two subsections, the objective value of problem \eqref{EquReform} attained by the solutions of problems \eqref{EquSubProb1} and \eqref{EquSubProb2Recast} is non-decreasing over iterations; since the optimal value of problem \eqref{EquReform} is finite, Algorithm 1 is guaranteed to converge to a suboptimal solution \cite{WuGC2017}. Algorithm 1 is suitable for UAV applications, since it has a complexity of $\mathcal{O} \left[ N_{\text{ite}} (4N+KN)^{3.5} \right]$ and obtains its solution in polynomial time, where $N_{\text{ite}}$ is the number of iterations \cite{Boyd2004}. } \begin{algorithm}[!t] \caption{Proposed Algorithm for Problem \eqref{EquOriginal}.} \begin{algorithmic}[1] { \STATE Initialize $\mathbf{P}^{(0)}$, $\mathbf{x}^{(0)}$, $\mathbf{y}^{(0)}$, and $\mathbf{u}^{(0)}$. Set $m=0$. \REPEAT \STATE Set $m \gets m+1$. \STATE Let $\mathbf{x}_{\text{fea}} = \mathbf{x}^{(m-1)}$, $\mathbf{y}_{\text{fea}} = \mathbf{y}^{(m-1)}$ and $\mathbf{u}_{\text{fea}} = \mathbf{u}^{(m-1)}$. Solve problem \eqref{EquSubProb2Recast} under given $\mathbf{P}^{(m-1)}$ to obtain $(\mathbf{x}^{(m)}, \mathbf{y}^{(m)})$. \STATE Solve problem \eqref{EquSubProb1} under given $(\mathbf{x}^{(m)}, \mathbf{y}^{(m)} )$ to obtain $\mathbf{P}^{(m)}$.
\UNTIL {The fractional increase of the objective value is smaller than a threshold $\epsilon>0$.} } \end{algorithmic} \end{algorithm} \begin{figure*}[!t] \centering \subfloat[Trajectories of Alice.]{\includegraphics[width=0.31\textwidth]{Traj_TVT.eps} \label{FigTraj}} \hfil \subfloat[Average secrecy rate versus $T$.]{\includegraphics[width=0.31\textwidth]{SRvsT_TVT.eps} \label{FigSRvsT}} \hfil \subfloat[Average secrecy rate versus $\bar{P}$.]{\includegraphics[width=0.31\textwidth]{SRvsP_TVT.eps} \label{FigSRvsP}} \caption{Trajectories and average worst-case secrecy rates of different algorithms.} \label{fig_sim} \end{figure*} \vspace{-0.2cm} \section{Simulation Results} \vspace{-0.05cm} {This section provides simulation results to verify the performance of our proposed robust joint trajectory and transmit power design algorithm, as compared to the following two benchmark algorithms: 1) non-robust joint trajectory and transmit power design; 2) best-effort trajectory design with equal transmit power \cite{Zhang2017GC}}. Specifically, the non-robust algorithm only has the estimated locations of Eves and treats them as perfect information. Thus, it jointly designs the UAV trajectory and transmit power control by using Algorithm 1 assuming $Q_k=0$, $\forall k$. {The best-effort algorithm performs equal transmit power allocation over time and designs Alice's trajectory in the following heuristic best-effort manner: Alice flies to the location right above Bob at speed $v_{\max}$, then hovers there, and finally flies at speed $v_{\max}$ to reach the final location at time $T$. If Alice does not have sufficient time to reach the location above Bob, it will turn its direction midway and fly to the final location directly.} The initial feasible points for the proposed robust and benchmark non-robust algorithms are generated by the best-effort algorithm. 
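For concreteness, the best-effort heuristic described above can be sketched as the following greedy rule (an illustrative reading of the heuristic, not necessarily the exact implementation used in the simulations): in each slot, Alice moves toward the point above Bob as long as the final location is still reachable within the remaining slots, and otherwise turns and heads to the final location.

```python
import numpy as np

def best_effort_trajectory(q0, qF, bob_xy, v_max, dt, N):
    """Greedy best-effort trajectory over N slots with step length v_max*dt."""
    q = np.asarray(q0, dtype=float)
    qF = np.asarray(qF, dtype=float)
    bob = np.asarray(bob_xy, dtype=float)
    step_len = v_max * dt

    def move(p, target):
        d = np.linalg.norm(target - p)
        return target.copy() if d <= step_len else p + (target - p) * step_len / d

    traj = []
    for n in range(N):
        slots_left = N - 1 - n                 # moves remaining after this one
        cand = move(q, bob)                    # try to approach / hover above Bob
        if np.linalg.norm(qF - cand) <= slots_left * step_len:
            q = cand                           # final location still reachable
        else:
            q = move(q, qF)                    # turn midway toward the final location
        traj.append(q.copy())
    return np.array(traj)
```

With the simulation parameters below ($v_{\max}d_t=5$m per slot), this rule reproduces the fly--hover--fly behavior: Alice reaches the point above Bob, hovers, and leaves just in time to arrive at the final location.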
There are $K=2$ Eves, whose estimated horizontal coordinates are $(x_{\text{E},1}, y_{\text{E},1}) = (-200,0)$m and $(x_{\text{E},2}, y_{\text{E},2}) = (200,0)$m, respectively, and $Q_1=20$m and $Q_2=80$m. The other parameters are set as follows: $H=100$m, $v_{\max}=10$m/s, $d_t=0.5$s, $\gamma_0=80$dB, $P_{\text{peak}} =4 \bar{P}$, $(x[0],y[0])=(-400,-200)$m, $(x[N+1],y[N+1])=(400,-200)$m, and $\epsilon=10^{-4}$. Fig. 2(a) shows the trajectories of Alice obtained by the different algorithms when $\bar{P}=-5$dBm. It is observed that when $T$ is small (e.g., $T=80$s), the trajectories obtained by the robust and non-robust algorithms are very similar, since the flexibility in trajectory design is limited as Alice is required to fly from the initial location to the final location in a given duration $T$. As $T$ increases, the flexibility in designing an efficient trajectory increases. This magnifies the differences between the robust and non-robust algorithms. When $T$ is sufficiently large (e.g., $T=160$s), under these two algorithms, Alice first flies at its maximum speed in an arc path to keep away from Eve 1 and reaches a certain point near Bob; it then hovers at that point as long as possible, and finally flies to the final location along an arc path bypassing Eve 2 at its maximum speed. However, in the proposed robust algorithm, the hovering point is to the left of Bob, while in the non-robust algorithm, it is directly above Bob. This is because although the distances from the estimated locations of Eves 1 and 2 to Bob are equal, the radius of the uncertain region of Eve 1 is smaller than that of Eve 2.
{Considering the worst case, the proposed robust algorithm adjusts the hovering point closer to Eve 1 and farther from Eve 2 to strike a balance between enhancing the legitimate link from Alice to Bob and degrading the quality of the eavesdropping links from Alice to Eves 1 and 2, while the non-robust algorithm fails to strike such a balance by treating Eve 1 and Eve 2 equally.} Figs. 2(b) and 2(c) show the corresponding average worst-case secrecy rates of all algorithms versus the flight duration $T$ and average power $\bar{P}$, respectively. In both figures, it can be observed that the secrecy rates of all algorithms increase with $T$ and $\bar{P}$. In particular, the proposed robust algorithm significantly outperforms the other two benchmark algorithms. In Fig. 2(c), it is observed that the secrecy rates of all algorithms saturate when $\bar{P}$ is high. This is because, as shown in \eqref{EquRAB}--\eqref{EquSecrecyRate}, the secrecy rate maximization problem \eqref{EquOriginal} becomes independent of the transmit power $P[n]$ and depends only on the UAV trajectory in the high transmit power regime. Furthermore, Fig. 2(c) shows that although the non-robust algorithm outperforms the best-effort algorithm, the secrecy rate gap between them becomes smaller as $\bar{P}$ increases. This is because the non-robust algorithm ignores the location estimation errors of Eves 1 and 2, and thus suffers from a performance loss. The above results demonstrate the importance and the potential performance gain of the robust joint trajectory and transmit power design. \vspace{-0.1cm} \section{Conclusion} \vspace{-0.05cm} This paper investigated a secure UAV communication system in which the locations of the eavesdroppers are not perfectly known, as is the case in practice.
A robust joint trajectory and transmit power design algorithm was proposed to maximize the average worst-case secrecy rate subject to the UAV's mobility constraints as well as its average and peak transmit power constraints. Simulation results showed that the proposed joint design algorithm, which considers the location uncertainties of Eves, can improve the worst-case secrecy rate performance significantly, as compared to two benchmark algorithms that do not consider the uncertainty in the eavesdroppers' location information. \vspace{-0.1cm}
\section{Introduction} With the rapid growth of online shopping platforms (\emph{e.g., } Amazon and Taobao) and fashion-related social networks (\emph{e.g., } Instagram), fashion recommendation plays an active role in daily life. It can improve user experience and bring great profit to the shopping platforms. Personalized outfit recommendation is an emerging service, which aims to select a set of visually-compatible items as an outfit for a target user. Distinct from traditional fashion item recommendation, which routes only a single product to users, it should satisfy two requirements --- 1) \emph{compatibility of fashion items}, meaning that the items within the same outfit should be visually compatible with each other (\emph{e.g., } the long-sleeve wear matches the high-heel shoes well in the outfit $o_{4}$, as Figure~\ref{fig:intro} shows); and 2) \emph{consistency with personal taste}, meaning that the whole outfit should holistically match user preference, \emph{i.e., } each user might have an individual dressing style (\emph{e.g., } as Figure~\ref{fig:intro} shows, the outfit $o_{1}$ fits the casual style of user $u_{1}$, while user $u_{2}$ is especially interested in the outfit $o_{4}$ due to its long-sleeve style). \begin{figure*}[t] \centering \includegraphics[width=1.9\columnwidth]{images/intro.pdf} \caption{The illustration of our hierarchical fashion graph network, HFGN. HFGN consists of three levels (\emph{i.e., } user, outfit and item level). Messages can propagate from lower levels to higher levels. } \vspace{-10px} \label{fig:intro} \end{figure*} Nevertheless, most existing works focus only on one of the requirements --- that is, either compatibility matching~\cite{NGNN} or outfit recommendation~\cite{FHN} --- while seldom modeling the two tasks simultaneously.
In particular, a research line on compatibility matching solely exploits the mapping relationships between outfits and single items to estimate whether multiple fashion items form a good match (\emph{i.e., } an outfit). For example, NGNN~\cite{NGNN} builds a fashion graph upon a taxonomy of fashion categories, representing an outfit as a graph instance involving compatible items as its nodes. However, these works leave the personal interests of users untouched, hence typically serving as a tool to generate outfit features and failing to meet the second requirement of personalized recommendation. Another straightforward solution to outfit recommendation is to subsume such a task under the framework of traditional recommendation, relying largely on the user-outfit interactions. Specifically, the fashion items are simply regarded as the pre-existing features of the outfit; as such, traditional recommenders can be adopted in such scenarios. For example, VBPR~\cite{VBPR} and ACF~\cite{ACF} can be employed on users' historical interactions with outfits to perform future recommendation. However, these methods are not tailored to outfit recommendation, forgoing the compatibility matching between fashion items. Jointly conducting compatibility modeling and outfit recommendation remained underexplored until the recent FPITF~\cite{FPITF} and FHN~\cite{FHN}. To be more specific, FPITF~\cite{FPITF} aggregates user preference on each item within an outfit, and integrates them with pairwise compatibility scores between items as a user's holistic preference on the outfit. FHN~\cite{FHN} resorts to a similar paradigm.
Examining such a paradigm, we find that the outfit-item mappings are treated as isolated data instances for compatibility matching, while forgoing the relationships among data instances (\emph{e.g., } outfits with co-occurred items); analogously, the user-outfit interactions are fed individually into the recommender, while ignoring their relationships (\emph{e.g., } co-purchased outfits or behaviorally similar users). This paradigm obscures the complex relationships among users, outfits, and items, easily leading to suboptimal representations and further limiting the recommendation performance. In this work, to address the foregoing limitations, we explicitly present the complex relationships among users, outfits, and items as a hierarchical fashion graph. To be more specific, it consists of three levels --- user, outfit, and item levels --- where each level contains the corresponding type of nodes. Distinct from traditional graphs, such a hierarchical graph highlights the connections across levels. For example, if outfit $o$ contains item $i$, the node $o$ in the outfit level will be connected with the node $i$ in the item level; analogously, if user $u$ has purchased outfit $o$, there is an edge between nodes $u$ and $o$ across the user and outfit levels. Thereafter, we build a new framework, termed \textbf{Hierarchical Fashion Graph Network (HFGN)}, upon the hierarchical fashion graph. In particular, HFGN employs the information propagation mechanism from graph neural networks (GNNs) to distill useful signals from the bottom to the top, inject the relationships into representations, and facilitate the compatibility matching and outfit recommendation. More specifically, we assign each user/outfit with an ID embedding while representing each item with its visual features.
The information propagation rule aggregates useful signals from fashion items to update outfit representations, and further refines user representations by integrating messages passed from his/her historical outfits. Furthermore, we propose a joint learning scheme to conduct compatibility matching and outfit recommendation simultaneously. To demonstrate the effectiveness of HFGN, we conduct extensive experiments; the results show that our model outperforms the state-of-the-art models on both tasks. To summarize, we make the following main contributions in this paper: \begin{itemize} \item We propose a Hierarchical Fashion Graph Network (HFGN) to obtain more expressive representations for users and outfits. Benefiting from the message propagation rules, the representations can be updated by the neighbor embeddings iteratively. \item Different from existing methods which only consider item-level semantics for outfits, we incorporate outfit-level semantics into the representations of outfits. \item Instead of considering compatibility matching and personalized recommendation separately, we regard the compatibility information as a message passed in the graph and encode it into item and outfit representations. Experiments show the rationality and effectiveness of this modeling. \end{itemize} \section{HFGN framework} We now present our HFGN framework, which is equipped with three key components: 1) embedding initialization, which initializes embeddings for user, outfit, and item nodes; 2) hierarchical graph convolution, which refines the node embeddings by propagating information from lower levels to higher levels --- that is, gathering information from item nodes to update outfit representations, and then augmenting user representations via the historical outfits; and 3) model prediction, which outputs the prediction scores for personalized recommendation and compatibility matching.
\subsection{Embedding Initialization} As Figure~\ref{fig:intro} shows, we organize users, outfits, and items in the form of a hierarchical fashion graph, where these three types of nodes are at the top, internal, and bottom levels, respectively. To characterize the latent features, we represent each user/outfit/item with a vectorized representation (\emph{aka. } embedding). More formally, we denote the embeddings of user $u$, outfit $o$, and item $i$ as $\Mat{u}\in\Space{R}^{d}$, $\Mat{o}\in\Space{R}^{d}$, and $\Mat{i}\in\Space{R}^{d}$, respectively, where $d$ is the embedding size. As a result, we can obtain an embedding table for all the nodes as follows: \begin{equation} \Mat{E} = [\underbrace{\cdots,\Mat{i},\cdots}_{\rm item\: embeddings},\underbrace{\cdots,\Mat{o},\cdots}_{\rm outfit\: embeddings},\underbrace{\cdots,\Mat{u},\cdots}_{\rm user\: embeddings}], \end{equation} where $\Mat{E}\in\Space{R}^{(N_{U}+N_{O}+N_{I})\times d}$ concatenates the embeddings of users, outfits, and items; $N_{U}$, $N_{O}$, and $N_{I}$ are the numbers of users, outfits, and fashion items, respectively. As only ID information is available for users and outfits, we draw inspiration from mainstream CF models~\cite{BPRMF,NCF} and project each user/outfit ID into an embedding. Such trainable embeddings are used to memorize the latent features of users and outfits. As for each fashion item $i$, we have its visual feature $\Mat{x}_{i}$. Moreover, since items are associated with different fashion categories (\emph{e.g., } jeans, T-shirt), we use category-aware encoders to extract useful information from the visual features as item embeddings. More formally, the initial embedding of an item is encoded as: \begin{gather} \Mat{e}_{(i)} = f_{c}(\Mat{x}_{i}), \end{gather} where $f_{c}(\cdot)$ is the encoder for category $c$, which is implemented as a two-layer MLP.
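As a minimal sketch (not the authors' implementation; the dimensions, random initialization, and ReLU activation are illustrative assumptions), the category-aware two-layer MLP encoder can be written as:

```python
import numpy as np

def make_category_encoder(feat_dim, hidden_dim, emb_dim, rng):
    """Build a two-layer MLP f_c mapping a visual feature x_i to an item embedding."""
    W1 = rng.standard_normal((hidden_dim, feat_dim)) * 0.01
    W2 = rng.standard_normal((emb_dim, hidden_dim)) * 0.01

    def f_c(x):
        h = np.maximum(W1 @ x, 0.0)  # hidden layer; ReLU is an assumed activation
        return W2 @ h

    return f_c

rng = np.random.default_rng(0)
# one independent encoder per fashion category (category names are illustrative)
encoders = {c: make_category_encoder(2048, 256, 64, rng) for c in ["jeans", "t-shirt"]}
x_i = rng.standard_normal(2048)   # e.g. a CNN visual feature of item i
e_i = encoders["jeans"](x_i)      # category-aware initial item embedding, shape (64,)
```

Keeping one encoder per category lets items of different categories be projected by different transformations before they meet in the shared latent space.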
As such, the item features are projected into the same latent space as that of users and outfits, facilitating the further modeling of their complex relationships. \begin{figure*}[ht] \centering \subfigure[Information propagation across items.]{ \label{fig:itemUpdate} \centering \includegraphics[width=0.32\textwidth]{images/item2item.pdf}} \subfigure[Information propagation from item to outfit level.]{ \label{fig:outfitUpdate} \centering \includegraphics[width=0.32\textwidth]{images/item2outfit.pdf}} \subfigure[Information propagation from outfit to user level.]{ \label{fig:userUpdate} \centering \includegraphics[width=0.32\textwidth]{images/outfit2user.pdf}} \caption{Information propagation across three levels in HFGN: (a) across items; (b) from item to outfit level; (c) from outfit to user level.} \label{fig:embeddingUpdate} \end{figure*} \subsection{Hierarchical Graph Convolution} By organizing users, outfits, and items in a hierarchical graph, we can leverage their connectivities to exhibit their underlying relationships. For example, as shown on the right of Figure~\ref{fig:intro}, the path $i_{1}\rightarrow o_{2}$ states the fact that outfit $o_{2}$ includes item $i_{1}$, while $o_{2}\rightarrow u_{1}$ presents the behavior that user $u_{1}$ purchases outfit $o_{2}$; meanwhile, the longer path $i_{1}\rightarrow o_{2}\rightarrow u_{1}$ might reflect user $u_{1}$'s interest in item $i_{1}$, while $\{i_{1},i_{2}\}\rightarrow o_{2}$ and $ o_{5}\rightarrow \{u_{1}, u_{2}\} $ separately indicate the compatibility of items and the behavioral similarity of users. Hence, exploiting such connectivities is of crucial importance to exploring relationships among users, outfits, and items, and is a promising solution to unify compatibility modeling and outfit recommendation.
Recent studies on graph neural networks~\cite{GCN,GAT,graphsage} show that information propagation over the graph structure is able to effectively distill useful information from multi-hop neighbors and encode higher-order connectivity into the representations. Inspired by this, we devise a new hierarchical graph convolution (HGC) to perform the embedding propagation mechanism over our fashion graph, so as to refine the embeddings. In particular, there are three embedding-propagation steps --- 1) information propagation across items, which refines item embeddings by incorporating the compatibility modeling; 2) information propagation from items to outfits, which aggregates item semantics into outfit embeddings; and 3) information propagation from outfits to users, which integrates historical outfits into user representations. In what follows, we elaborate on these three ingredients. \subsubsection{\textbf{Information Propagation Across Items}} Items are at the lowest level of HFGN, providing visual features of individual items, as well as their compatibility relationships. For example, the connectivities $\{i_{1},i_{2}\}\rightarrow o_{2}$ not only describe that items $i_{1}$ and $i_{2}$ belong to the same outfit $o_{2}$, but also reflect that $i_{1}$ and $i_{2}$ are compatible. Hence, such compatibility information suggests that compatible items should have more information interchange than items from different outfits. Towards presenting the compatibility among items in an explicit fashion, we first construct an item graph for each outfit. \vspace{5pt} \noindent\textbf{Item Graph Construction.} Before constructing item graphs for individual outfits, we first build a uniform category graph~\cite{NGNN} for all outfits, where the category information serves as the prior knowledge of items. In particular, each item is assigned only one specific category, such as \emph{shirts}, \emph{sandals}, and \emph{jeans}.
Different category pairs are associated with varying co-occurrence frequencies, reflecting the coarse-grained compatibility of items at the category level. For instance, \emph{necklaces} co-occur more frequently with \emph{coats} in outfits than with \emph{sandals}. Hence, we build a weighted category graph $\Set{G}_{\text{c}}=\{(c,c',w(c,c'))|c,c'\in\Set{C}\}$, where $\Set{C}$ is the set consisting of 60 categories in total (\emph{cf. } data statistics in Table~\ref{tab:fltb_dataset}). Each category pair $(c,c')$ is assigned a weight as follows: \begin{gather} w(c,c') = \frac{g(c,c')/g(c')}{\sum_{c''\in\Set{C}}g(c,c'')/g(c'')}, \end{gather} where $g(c,c')$ denotes the co-occurrence frequency of categories $c$ and $c'$ appearing in the same outfits, while $g(c)$ counts the frequency of $c$ in the outfit-item and item-category mappings. Having established $\Set{G}_{\text{c}}$ for all outfits, we now construct a tailored item graph $\Set{G}_{\text{o}}$ for a single outfit $o$. In particular, we activate the category nodes which appear in outfit $o$ (\emph{e.g., } the orange nodes shown in Figure~\ref{fig:itemUpdate}), while removing the others. More formally, $\Set{G}_{\text{o}}$ is defined as $\{(c,c',w(c,c'))|c,c'\in \Set{N}_{o}\}$, where $\Set{N}_{o}$ is the item set of outfit $o$. Clearly, the weights of $\Set{G}_{\text{o}}$ directly inherit from the original category graph, and only part of the nodes with their connections are activated, as the blue circle in Figure~\ref{fig:itemUpdate} shows. \vspace{5pt} \noindent\textbf{Item-Wise Information Construction.} Having presented the coarse-grained compatibility in the form of an item graph, we now focus on one specific item and distill useful signals from its neighbors, where the compatibility \emph{w.r.t. } categories is encoded.
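As an aside, the weight $w(c,c')$ above can be computed directly from co-occurrence counts. A sketch on toy data (the outfits, function name, and the convention of counting $g(c)$ once per outfit are our own assumptions):

```python
from collections import Counter
from itertools import permutations

def category_weights(outfits):
    """Compute w(c,c') = (g(c,c')/g(c')) / sum_{c''} g(c,c'')/g(c''),
    where each outfit is given as the list of its item categories."""
    g_pair, g = Counter(), Counter()
    for cats in outfits:
        g.update(set(cats))  # frequency of each category (once per outfit, an assumption)
        for c, c2 in permutations(set(cats), 2):
            g_pair[(c, c2)] += 1  # co-occurrence frequency within the same outfit
    w = {}
    for c in g:
        denom = sum(g_pair[(c, c2)] / g[c2] for c2 in g if c2 != c)
        for c2 in g:
            if c2 != c and denom > 0:
                w[(c, c2)] = (g_pair[(c, c2)] / g[c2]) / denom
    return w

# toy outfits: "necklace" co-occurs with "coat" more often than "sandals" does
toy = [["coat", "necklace"], ["coat", "necklace"], ["coat", "sandals"], ["sandals", "jeans"]]
w = category_weights(toy)
```

By construction, the weights leaving a fixed category $c$ sum to one, so they act as a normalized prior over which categories $c$ tends to match with.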
In particular, the information being propagated from the neighboring item $i'$ to the ego item $i$ is formalized as: \begin{gather} \Mat{m}_{i'\rightarrow i}=w(i,i')\sigma(\Mat{W}_{1}(\Mat{i}\odot\Mat{i}')), \end{gather} where $\Mat{W}_{1}\in\Space{R}^{d\times d}$ is a trainable weight matrix to perform transformation; $\sigma(\cdot)$ is a nonlinear activation function, which is set as LeakyReLU~\cite{leakyrelu} here; and $\odot$ is the element-wise product. Hence, the signal $\Mat{i}\odot\Mat{i}'$ accounts for the visual compatibility between items $i$ and $i'$, encouraging compatible items to contribute more. Furthermore, the weight $w(i,i')$ takes the categorical compatibility into consideration to control how much signal is passed across categories. Such weights can also be seen as the discounting factors adopted in GNN models~\cite{GCN,GAT}. \vspace{5pt} \noindent\textbf{Cross-Item Information Aggregation.} For each item node, we can leverage the signals pertinent to its affinity with co-occurring items (\emph{i.e., } neighbors) to update its embedding. Here we employ the sum aggregator on item $i$'s neighbors as follows: \begin{gather} \Mat{i}^{*}=\Mat{i}+\sum_{i'\in\Set{N}_{i}}\Mat{m}_{i'\rightarrow i}, \label{equ:item-representations} \end{gather} where $\Mat{i}^{*}$ is the updated embedding of item $i$. Here only the sum aggregator is applied, leaving the exploration of other aggregators, such as attention networks~\cite{GAT,SCA-CNN}, to future work. As a result, the compatibility information carried in the first-order connectivity is encoded into the item embeddings. We could stack more layers to synthesize richer semantics from higher-order connectivity, which we also leave to future work. \subsubsection{\textbf{Information Propagation from Item to Outfit Level}} Intuitively, an outfit can be described by its involved items.
Taking Figure~\ref{fig:intro} as an example, outfit $o_{1}$ consists of a sweater, jeans, and running shoes, while outfit $o_{2}$ is composed of a pullover, jeans, and sneakers. Rich item features help reveal the underlying relationships between outfits \emph{w.r.t. } visual similarity and categorical compatibility. For example, outfit $o_{1}$ can serve as a substitute for $o_{2}$, due to their closely compatible styles; meanwhile, the style of outfit $o_{4}$ differs from that of $o_{1}$ and $o_{2}$, since they have no overlapping categories or items. We hence augment the ID embeddings of outfits with the representations of the involved items, to improve the embedding quality. In particular, we build a heterogeneous graph involving item and outfit nodes, where the edges are the item-outfit links. \vspace{5pt} \noindent\textbf{Item-Wise Information Construction.} Focusing on an outfit, we refine the information that is influential to it from the involved items. Formally, the message being passed from the neighboring item $i$ to the ego outfit $o$ is: \begin{gather} \Mat{m}_{i\rightarrow o}=\frac{1}{|\Set{N}_{o}|}\sigma(\Mat{W}_{2}\Mat{i}^{*}), \end{gather} where $\Mat{W}_{2}\in\Space{R}^{d\times d}$ is a trainable matrix to perform transformation, and $\frac{1}{|\Set{N}_{o}|}$ is the normalization term to handle the varying number of involved items and stabilize the training. \vspace{5pt} \noindent\textbf{Outfit-Wise Information Aggregation.} Analogous to the cross-item information aggregation, we combine the information from all involved items together as the final representation of an outfit, as follows: \begin{gather} \Mat{o}^{*}=\Mat{o}+\sum_{i\in\Set{N}_{o}}\Mat{m}_{i\rightarrow o}. \end{gather} Obviously, the refined representation $\Mat{o}^{*}$ is composed of the ID embedding and item-aware features.
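Under illustrative assumptions (random parameters, a toy outfit, dense weight matrices of our own choosing), the two propagation steps so far --- cross-item messages with sum aggregation, and item-to-outfit aggregation --- can be sketched as:

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0.0, x, alpha * x)

def propagate_across_items(I, W1, w):
    """i* = i + sum_{i' in N_i} w(i,i') * sigma(W1 (i ⊙ i')).
    I: (n, d) item embeddings of one outfit; w: (n, n) categorical weights."""
    n, _ = I.shape
    I_star = I.copy()
    for i in range(n):
        for j in range(n):
            if j != i:
                # message from neighbor j to ego i, gated by the categorical weight
                I_star[i] += w[i, j] * leaky_relu(W1 @ (I[i] * I[j]))
    return I_star

def aggregate_to_outfit(o, I_star, W2):
    """o* = o + sum_i (1/|N_o|) * sigma(W2 i*)."""
    msgs = leaky_relu(I_star @ W2.T) / I_star.shape[0]
    return o + msgs.sum(axis=0)

rng = np.random.default_rng(0)
n, d = 3, 8                               # toy outfit of 3 items, embedding size 8
I = rng.standard_normal((n, d))
W1, W2 = rng.standard_normal((d, d)), rng.standard_normal((d, d))
w = np.full((n, n), 0.5)                  # toy categorical weights w(i, i')
o_star = aggregate_to_outfit(rng.standard_normal(d), propagate_across_items(I, W1, w), W2)
```

Note that with all categorical weights set to zero the cross-item step reduces to the identity, mirroring the residual form of the aggregation equations.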
Different from prior studies~\cite{FPITF,FHN} that build outfit representations upon the visual features of items solely, this information aggregation additionally considers the compatibility scores. \subsubsection{\textbf{Information Propagation from Outfit to User Level.}} Prior studies~\cite{FISM,NAIS,SVD++} have shown that personal history directly profiles a user's preference. For instance, analyzing the historical outfits (\emph{i.e., } $o_{1}$ and $o_{2}$) of user $u_{1}$, we might summarize her dressing style; moreover, $o_{5}\rightarrow \{u_{1}, u_{2} \}$ indicates the behavioral similarity between users $u_{1}$ and $u_{2}$. Furthermore, collected personal histories reflect CF signals~\cite{NGCF,Hoprec}, suggesting that behaviorally similar users would have similar preferences on outfits. We hence enrich the ID embeddings of users by incorporating the representations of their historical outfits. Towards that, we organize the user-outfit interactions in the form of a heterogeneous graph. \vspace{5pt} \noindent\textbf{Outfit-Wise Information Construction.} For a target user $u$, we focus on the outfits $\Set{N}_{u}$ that he/she adopted before, and extract useful information from each outfit $o$ as follows: \begin{gather} \Mat{m}_{o\rightarrow u}=\frac{1}{|\Set{N}_{u}|}\sigma(\Mat{W}_{3}\Mat{o}^{*}), \end{gather} where $\Mat{W}_{3}\in\Space{R}^{d\times d}$ is the transformation matrix; here we assume that different outfits contribute equally to profiling a user, hence using $\frac{1}{|\Set{N}_{u}|}$ as the prior and leaving the exploration of attentive weights for the future.
\vspace{5pt} \noindent\textbf{User-Wise Information Aggregation.} Thereafter, we employ the sum aggregator on the historical outfits, updating the user's representation as follows: \begin{gather} \Mat{u}^{*}=\Mat{u}+\sum_{o\in\Set{N}_{u}}\Mat{m}_{o\rightarrow u}, \end{gather} where the final representation $\Mat{u}^{*}$ consists of two components --- the ID embedding, which characterizes the intrinsic features of $u$, and the outfit-aware features, which present his/her dressing style explicitly. Distinct from prior works~\cite{FPITF,FHN} which use only ID embeddings for users, our HFGN achieves better representation ability. By propagating information within the hierarchical fashion graph, we allow information to flow from the bottom to the top levels, exploiting the complex relationships among items, outfits, and users to guide the representation learning. \subsection{Model Prediction} We now describe how HFGN outputs predictions for the two tasks, which are later optimized jointly. \subsubsection{\textbf{Personalized Outfit Recommendation}} To predict how likely user $u$ is to purchase outfit $o$, we employ the inner product of their representations: \begin{gather} \hat{y}_{uo}=\Trans{\Mat{u}^{*}}\Mat{o}^{*}, \end{gather} which casts the predictive task as similarity estimation between $u$ and $o$ in the same latent space. As the main focus of this work is representation learning, we leave the exploration of interaction modeling to future work. \subsubsection{\textbf{Compatibility Matching}} To estimate whether multiple fashion items form a good outfit, we utilize the item representations (\emph{cf. } Equation~\ref{equ:item-representations}) to calculate the matching score. Distinct from existing works~\cite{FHN,FPITF} that simply aggregate the pairwise compatibility scores together, we argue that items are of varying importance within an outfit.
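Returning briefly to the recommendation branch, the outfit-to-user aggregation and the inner-product prediction above admit an analogous sketch (shapes and parameters are illustrative, not the trained model):

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0.0, x, alpha * x)

def aggregate_to_user(u, O_star, W3):
    """u* = u + sum_{o in N_u} (1/|N_u|) * sigma(W3 o*).
    O_star: (m, d) refined embeddings of the user's m historical outfits."""
    msgs = leaky_relu(O_star @ W3.T) / O_star.shape[0]
    return u + msgs.sum(axis=0)

def recommend_score(u_star, o_star):
    """y_hat_{uo} = u*^T o*: similarity of user and outfit in the latent space."""
    return float(u_star @ o_star)

rng = np.random.default_rng(1)
d = 8
u_star = aggregate_to_user(rng.standard_normal(d),      # ID embedding of user u
                           rng.standard_normal((4, d)), # 4 historical outfit embeddings
                           rng.standard_normal((d, d)))
score = recommend_score(u_star, rng.standard_normal(d))
```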
For example, in outfit $o_3$, as the long dress determines the holistic style, it is more important than the accessories. We hence differentiate the importance of items in one outfit via a self-attention mechanism, which generates an $R$-view attention map and an $R$-view score map. Formally, the attention map is calculated as: \begin{gather} \Mat{A}=\rho\Big(\Mat{W}_{4}\sigma(\Mat{W}_{5}\Trans{\Mat{I}})\Big), \end{gather} where $\Mat{I}\in\Space{R}^{ n\times d}$ is the embedding matrix of one outfit's items and $n$ is the number of items; $\Mat{W}_{4}\in\Space{R}^{R\times v}$ and $\Mat{W}_{5}\in\Space{R}^{v\times d}$ are two trainable weight matrices. After transformation by $\Mat{W}_{4}$ and $\Mat{W}_{5}$, we obtain an $R$-view attention map $\Mat{A}\in\Space{R}^{R\times n}$; $\rho(\cdot)$ is set as the softmax function to normalize the attention scores over items. Thereafter, we establish the $R$-view score map for each outfit, considering each view as a latent factor which is influential for compatibility, as follows: \begin{gather} \Mat{C}=\sigma\Big(\Mat{W}_{6}\sigma(\Mat{W}_{7}\Trans{\Mat{I}})\Big), \end{gather} where $\Mat{W}_{6}\in\Space{R}^{R\times v}$ and $\Mat{W}_{7}\in\Space{R}^{v\times d}$ are weight matrices. As such, we project the original item representations into a latent space, describing each item from multiple factors. Based on such attention and score maps, we obtain the weighted compatibility score of outfit $o$ as follows: \begin{gather} \hat{s}_{o}=\sum_{r=1}^{R}\Trans{\Mat{a}}_{r}\Mat{c}_{r}, \end{gather} where $\Mat{a}_{r}\in\Space{R}^{n}$ and $\Mat{c}_{r}\in\Space{R}^{n}$ are the $r$-th rows of $\Mat{A}$ and $\Mat{C}$, respectively. \subsection{Optimization} In the following, we introduce the objective functions and the training strategy for our model. \subsubsection{\textbf{Objective function}} We adopt the Bayesian Personalized Ranking (BPR) algorithm~\cite{BPRMF} for both tasks.
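The $R$-view attention and score maps above can be sketched as follows (a toy-sized, randomly initialized example; the sizes $n$, $d$, $R$, $v$ are our own choices):

```python
import numpy as np

def leaky_relu(x, alpha=0.2):
    return np.where(x > 0.0, x, alpha * x)

def softmax(z, axis):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def compatibility(I, W4, W5, W6, W7):
    """s_o = sum_r a_r^T c_r, with R-view attention map A and score map C."""
    A = softmax(W4 @ leaky_relu(W5 @ I.T), axis=1)   # (R, n), each row sums to 1 over items
    C = leaky_relu(W6 @ leaky_relu(W7 @ I.T))        # (R, n), per-view item scores
    return float((A * C).sum()), A

rng = np.random.default_rng(2)
n, d, R, v = 4, 8, 3, 6                  # toy sizes: 4 items, R = 3 views
I = rng.standard_normal((n, d))          # item embeddings i* of one outfit
W4, W5 = rng.standard_normal((R, v)), rng.standard_normal((v, d))
W6, W7 = rng.standard_normal((R, v)), rng.standard_normal((v, d))
s_o, A = compatibility(I, W4, W5, W6, W7)
```

Since $(A \odot C)$ summed over all entries equals $\sum_r \Mat{a}_r^\top \Mat{c}_r$, the scalar returned matches the weighted compatibility score defined above.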
BPR assumes that observed interactions have higher prediction scores than unobserved ones. The objective functions for the two tasks are \begin{equation} \mathbf{\mathop{\mathcal{L}}}_{mf} = \min_{\Theta}\sum_{(u,o,o')\in\Set{H}}-\ln{\mu(\hat{y}_{uo}-\hat{y}_{uo'})}, \end{equation} \begin{equation} \mathbf{\mathop{\mathcal{L}}}_{com} = \min_{\Theta}\sum_{(o,o')\in\Set{H}'}-\ln{\mu(\hat{s}_{o}-\hat{s}_{o'})}, \end{equation} where $\Set{H}=\{(u,o,o')\}$ is the training set for outfit recommendation, in which each triple $(u,o,o')$ denotes user $u$'s historical interaction with outfit $o$ and an unobserved interaction with outfit $o'$; $\Set{H'}=\{(o,o')\}$ is the training set for compatibility modeling, in which each pair $(o,o')$ denotes an observed outfit $o$ (\emph{i.e., } a positive sample) and an unobserved outfit $o'$ (\emph{i.e., } a randomly generated negative sample); $\mu(\cdot)$ is the sigmoid function; and $\Theta$ is the set of model parameters, on which $L_{2}$ regularization is applied to avoid overfitting. \subsubsection{\textbf{Training Strategy}} Due to the imbalance of the training data (\emph{i.e., } the recommendation task has 1.6M positive samples while the compatibility task has 0.027M, as shown in Tables~\ref{tab:recom_dataset} and~\ref{tab:fltb_dataset}) and different model convergence speeds, we set different learning rates for the two tasks, and jointly optimize $\mathbf{\mathop{\mathcal{L}}}_{mf}$ and $\mathbf{\mathop{\mathcal{L}}}_{com}$ within each epoch. \section{Experiment} We perform our experiments on a benchmark dataset, POG~\cite{POG}, and aim to answer the following research questions: \begin{itemize} \item \textbf{RQ1}: How does HFGN perform compared with the state-of-the-art methods on the personalized outfit recommendation task? \item \textbf{RQ2}: How does the message propagation on each level affect HFGN?
\item \textbf{RQ3}: How does HFGN perform compared with the state-of-the-art methods on the compatibility matching task? \end{itemize} \subsection{Experiment settings} \subsubsection{\textbf{Dataset}} As personalized outfit recommendation is a relatively new task in the fashion community, available datasets are limited. POG~\cite{POG} is the only available large-scale dataset that meets our requirements. The dataset collects click actions from 3.57 million users, including 1.01 million outfits and 583 thousand individual items with context information. The POG dataset provides three files: 1) user data, which records the user clicks on both outfits and items; 2) outfit data, which lists the items composing each outfit; and 3) item data, which provides the context details of items, including categories, text descriptions, and image download links. In our experiment, we utilize the user clicks on outfits to construct our dataset. We extract visual features for items from a pretrained ResNet-152. \vspace{5pt} \noindent\textbf{Dataset for recommendation.} To ensure the quality of the dataset, we only retain users with at least 20 interactions and outfits with at least 10 interactions. The number of item categories is $60$. The statistics of the recommendation dataset are shown in Table~\ref{tab:recom_dataset}. To evaluate the recommendation performance, we randomly select $80\%$ of the click history of each user to build the training set, and take the remaining history as the test set. We also split $10\%$ of the training set off as a validation set to tune the hyper-parameters.
\begin{table} \centering \caption{Statistics of the dataset for personalized outfit recommendation.} \label{tab:recom_dataset} \resizebox{0.70\columnwidth}{!}{\begin{tabular}{c|c|c|c} \hline $\#$Users & $\#$Outfits & $\#$Interactions & $\#$Items \\ \hline\hline 53,897 & 27,694 & 1,679,708 & 42,563 \\ \hline \end{tabular}} \end{table} \vspace{5pt} \noindent\textbf{Dataset for compatibility matching.} We take the positive outfits in the recommendation dataset as the training set for compatibility matching, and additionally select $6,924$ ($10\%$ of the training set) outfits that have never appeared in the training set to construct the test set. The statistics are shown in Table~\ref{tab:fltb_dataset}. To evaluate the performance of HFGN on compatibility matching, we adopt a widely used task, Fill-in-the-Blank~\cite{bi-lstm}. \vspace{5pt} \noindent\textbf{Fill-in-the-Blank (FLTB)}. Fill-in-the-Blank (FLTB)~\cite{bi-lstm} is the task of selecting the most compatible item from a set of candidates to fill a blank in an outfit, as Figure~\ref{fig:caseStudy} shows. For each outfit in the test set, we randomly mask one item as the blank, and then randomly select 3 items from other outfits to form a candidate set together with the masked item. We set the masked item as the ground truth, assuming that it is more compatible than the other candidates. \begin{table} \centering \caption{Statistics of the dataset for compatibility matching.} \label{tab:fltb_dataset} \resizebox{0.75 \columnwidth}{!}{\begin{tabular}{c|c|c|c} \hline Dataset & \#Outfits & \#items & \#categories\\ \hline\hline Training set & 27,694 & 42,563 & 60\\ Test set & 6,924 & 15,984 & 60\\ \hline \end{tabular}} \end{table} \subsubsection{\textbf{Parameter Settings}} We perform grid search to tune the hyper-parameters for our model and the baselines.
We search the batch size in $\{256, 512, 1024\}$, and tune the regularization coefficient and learning rate in $\{10^{-5},10^{-4},10^{-3},10^{-2}\}$ and $\{0.0001,0.0005,0.001,0.005,0.01\}$, respectively. Moreover, we optimize all the models with the Adam optimizer. \subsection{Personalized Outfit Recommendation } In this section, we discuss the performance of our model on the personalized outfit recommendation task compared to the state-of-the-art methods. \subsubsection{\textbf{Evaluation Metric}} To assess the performance of top-$K$ recommendation, we adopt four widely used metrics~\cite{NGCF,NCF}: HR@$K$, NDCG@$K$, Recall@$K$, and Precision@$K$. \begin{itemize} \item HR@$K$ measures whether any positive sample appears in the top-$K$ positions; it is 1 if so, and 0 otherwise. \item Recall@$K$ is the proportion of positive samples that have been successfully recommended to the user. \item Precision@$K$ is the proportion of the recommended items that are relevant to the user. \item NDCG@$K$ is a widely used measure of the quality of the ranked list, considering the graded relevance of positive and negative samples within the top-$K$ ranking list. \end{itemize} We set $K$ to $10$ in our experiments. For all the metrics, we report the mean over all users as the final score. While the validation set is used to tune the hyper-parameters, we report the performance on the test set.
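For concreteness, the four metrics can be computed per user as follows (a sketch with our own toy inputs; standard definitions, not code from the paper):

```python
import numpy as np

def metrics_at_k(ranked, positives, k=10):
    """HR@K, Recall@K, Precision@K and NDCG@K for one user's ranked item list."""
    hits = [1.0 if item in positives else 0.0 for item in ranked[:k]]
    hr = 1.0 if any(hits) else 0.0                 # any positive in the top-K?
    recall = sum(hits) / len(positives)            # fraction of positives retrieved
    precision = sum(hits) / k                      # fraction of top-K that are positive
    dcg = sum(h / np.log2(pos + 2) for pos, h in enumerate(hits))
    idcg = sum(1.0 / np.log2(pos + 2) for pos in range(min(len(positives), k)))
    ndcg = dcg / idcg if idcg > 0 else 0.0         # position-discounted, normalized
    return hr, recall, precision, ndcg

# toy example: outfit 7 is the only positive and is ranked second
hr, rec, prec, ndcg = metrics_at_k([3, 7, 5], positives={7}, k=3)
```

The reported score for each metric is then the mean of these per-user values.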
\begin{table}[t] \centering \caption{Overall performance comparison on personalized outfit recommendation. $*$ denotes statistical significance with $p<0.001$.} \label{tab:recommendationCompare} \resizebox{0.95\columnwidth}{!}{ \begin{tabular}{l|c c c c} \hline & HR@10 & NDCG@10 & Recall@10 & Precision@10 \\ \hline\hline FPITF & 0.1006 & 0.0420 & 0.0183 & 0.0112 \\ FHN & 0.1109 & 0.0490 & 0.0208 & 0.0119 \\ \hline MF & 0.2121 & 0.0872 & 0.0434 & 0.0239 \\ VBPR & 0.2201 & 0.0949 & 0.0449 & 0.0248 \\ \hline NGCF & $\Mat{0.2619}$ & $\Mat{0.1143}$ & $\Mat{0.0554}$ & $\Mat{0.0310}$\\ \hline HFGN & ${\Mat{0.2833}^{*}}$ & ${\Mat{0.1241}^{*}}$ & ${\Mat{0.0605}^{*}}$ & ${\Mat{0.0339}^{*}}$ \\ \hline\hline \%Improv. & 8.17\% & 8.57\% & 9.20\% & 9.35\%\\ \hline \end{tabular}} \vspace{-10px} \end{table} \subsubsection{\textbf{Baselines}} As personalized outfit recommendation is a relatively new task, only a few works~\cite{FPITF,FHN} have studied it. To demonstrate the effectiveness of our proposed model, we also select some traditional recommendation models (\emph{e.g., } MF, VBPR) as baselines. Besides, we select a graph-based recommendation model for comparison, since we exploit graph structure in our method. \begin{itemize} \item \textbf{FPITF}~\cite{FPITF}: FPITF represents users and items with ID embeddings and visual features, respectively. The prediction score is composed of the user's preference score for each item within an outfit and the compatibility score for each item pair in the outfit. \item \textbf{FHN}~\cite{FHN}: FHN encodes the visual features with category encoders and then learns binary codes for both user and item embeddings. The final score is also composed of two parts: the preference score for user-item pairs and the compatibility score for item-item pairs. For each part, FHN additionally introduces a diagonal weighting matrix to better model the interactions.
\item \textbf{MF}~\cite{BPRMF}: Matrix Factorization (MF) is one of the most popular techniques in personalized recommendation. Users and items are both projected into the same latent space and represented by vectors in this space, and their interaction score is calculated by the inner product. \item \textbf{VBPR}~\cite{VBPR}: Compared to MF, VBPR additionally considers the user preference on visual factors. Here, we represent the visual feature of an outfit by averaging the visual features of the items within the outfit. \item \textbf{NGCF}~\cite{NGCF}: NGCF utilizes the graph structure to model high-order interactions between users and items. Node embeddings are refined by stacking multiple propagation layers on the interaction graph. Here, we conduct a two-layer propagation, and the nodes in the interaction graph are users, outfits, and items. \end{itemize} \begin{figure}[t] \centering \includegraphics[width=0.98\columnwidth]{images/ablationstudy.png} \vspace{-5px} \caption{Performance comparison of the ablation study on HFGN.} \vspace{-10px} \label{fig:ablationStudy} \end{figure} \subsubsection{\textbf{Performance Comparison (RQ1)}} Table \ref{tab:recommendationCompare} reports the performance results on personalized outfit recommendation. From the table, we have the following observations: \begin{itemize} \item FPITF and FHN perform much worse than the other baselines. Both FPITF and FHN evaluate a user's preference on an outfit only by averaging the preference scores on its items. However, outfits and items have different semantics, so it is insufficient to consider only item-level preference. The other models employ ID embeddings of outfits to capture this information, greatly enhancing performance on the personalized recommendation task. \item Compared to MF, VBPR achieves better performance. Such improvement indicates the importance of incorporating visual signals into the prediction formulation.
\item From the Table~\ref{tab:recommendationCompare}, we can observe that NGCF performs much better than MF and VBPR. Benefiting from multiple propagation layers in interaction graph, NGCF is capable of modeling high-order connectivities among users, outfits and items. \item HFGN yields the best performance on all the metrics. In particular, HFGN achieves the improvement over the strongest baseline (\emph{i.e., } NGCF) of $9.35\% $ \emph{w.r.t. } Precision@10. Benefiting from message propagation mechanism, HFGN is capable of distilling useful signals from the bottom to the up and modeling complex relationships including user-outfit interactions and outfit-item mappings. \end{itemize} \subsubsection{\textbf{Study of HFGN (RQ2)}} To demonstrate the effectiveness of message propagation on each level, we conduct an ablation study. We disable the message propagation for each level and compare the performance, the results are shown in Figure~\ref{fig:ablationStudy}. \begin{itemize} \item First, we disable the message propagation on item level, termed HFGN$_{\text{w/o I}}$. The message discarded in this operation is the compatibility of pair of items in one outfit, which is an important factor that influences user's interest. From Figure~\ref{fig:ablationStudy}, we can see that HFGN$_{\text{w/o I}}$ slightly underperforms HFGN. It seems that discarding this part of information doesn't hurt the performance much in our experiment. That's because the negative samples in our test dataset is the remaining outfits that users haven't clicked. These outfits are collected directly from the website and have a good compatibility. Therefore, the compatibility influences slightly on evaluation results. In fact, our model has a good capability to analysis compatibility, which will be discussed in Section~\ref{sec:compatibilityModeling}. As we mainly research on the architecture of personalized outfit recommendation, we leave the expansion of the test dataset in future work. 
\item Then, we disable the message propagation from the item to the outfit level, termed HFGN$_{\text{w/o I\&O}}$. This means the representation of an outfit consists only of its ID embedding, without item information. From Figure~\ref{fig:ablationStudy}, we can see that HFGN$_{\text{w/o I\&O}}$ performs worse than both HFGN and HFGN$_{\text{w/o I}}$. This makes sense, since item information plays a significant role in modeling outfit representations; it hence illustrates the rationality and effectiveness of our message propagation rule from the item to the outfit level. \item Finally, we disable the message propagation from the outfit level to the user level, termed HFGN$_{ \text{w/o I\&O\&U}}$. This means that we only utilize ID embeddings for both users and outfits, which is equivalent to the MF model. From Figure~\ref{fig:ablationStudy}, we observe that the performance decreases greatly compared to the models mentioned above. Compared to HFGN$_{\text{w/o I\&O}}$, the messages discarded in this operation are the users' historical interactions when modeling user profiles, which demonstrates that historical interactions play a significant role in modeling user preference. \end{itemize} \subsection{Compatibility Matching} \label{sec:compatibilityModeling} \subsubsection{\textbf{Evaluation Protocols}} For each outfit in the test set, we randomly select an item as the blank, and set three negative candidates. The target is to select the correct answer from the four candidates to fill in the blank masked in the outfit. We report the accuracy to assess the performance.
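The FLTB question construction described above can be sketched as follows (function and variable names are ours; the sampling scheme is the one stated in the protocol):

```python
import random

def make_fltb_question(outfit, item_pool, rng, n_neg=3):
    """Mask one item of an outfit and mix it with randomly drawn negative candidates."""
    remaining = list(outfit)
    answer = remaining.pop(rng.randrange(len(remaining)))   # ground-truth blank
    negatives = rng.sample(
        [i for i in item_pool if i != answer and i not in remaining], n_neg)
    candidates = negatives + [answer]
    rng.shuffle(candidates)                                  # hide the answer's position
    return remaining, candidates, answer

rng = random.Random(42)
outfit = ["dress_1", "bag_2", "heels_3"]                     # toy outfit
pool = [f"item_{k}" for k in range(50)] + outfit             # items from other outfits
partial, candidates, answer = make_fltb_question(outfit, pool, rng)
```

A model answers the question by scoring the partial outfit completed with each candidate and picking the highest-scoring one; accuracy is the fraction of questions where that pick equals `answer`.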
\begin{table} \centering \caption{Overall performance comparison on compatibility matching. $*$ denotes statistical significance at $p<0.001$.} \label{tab:compatibilityCompare} \resizebox{0.40\columnwidth}{!}{ \begin{tabular}{l|c } \hline & FLTB \\ \hline \hline Random & 0.2425 \\ SiameseNet & 0.5039 \\ Bi-LSTM & 0.6384 \\ FHN & 0.7422 \\ NGNN & $\Mat{0.8422}$ \\ \hline HFGN & ${\Mat{0.8797}^{*}}$ \\ \hline \end{tabular}} \vspace{-5px} \end{table} \subsubsection{ \textbf{Baselines}} We compare our model with the following baselines: \begin{itemize} \item \textbf{Random}: The answer for FLTB is randomly selected from the 4 candidates. \item \textbf{SiameseNet}~\cite{siamese}: SiameseNet feeds a pair of items into a siamese network that projects them into a style space and compares the distance between them. In our experiment, the compatibility score of the whole outfit is calculated by averaging the pairwise similarities. \item \textbf{Bi-LSTM}~\cite{bi-lstm}: Bi-LSTM treats an outfit as a sequence. It applies a bidirectional LSTM to learn compatibility and predict the masked item. \item \textbf{FHN}~\cite{FHN}: FHN encodes the visual features with category encoders and then learns binary codes for item embeddings. FHN only classifies fashion items into $3$ categories (\emph{i.e., } up, bottom and shoes). The outfit score is the mean of the pairwise compatibility scores of items within the outfit. \item \textbf{NGNN}~\cite{NGNN}: NGNN exploits category information to represent items as nodes in a category graph. Thereafter, the node embeddings are updated through a one-layer graph convolution and a GRU cell. Finally, NGNN calculates a score from each node embedding and introduces a self-attention mechanism~\cite{self-attention} to combine them into the output. \end{itemize} \subsubsection{ \textbf{Performance Comparison}} We report the accuracy of the test results in Table~\ref{tab:compatibilityCompare}.
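Two of the baselines above (SiameseNet and FHN) reduce outfit scoring to an average over item pairs. A minimal sketch of that scoring scheme, with a toy distance-based metric standing in for the learned pairwise model (both the metric and the style vectors are hypothetical):

```python
from itertools import combinations

def outfit_score(items, compat):
    """Mean pairwise compatibility over all item pairs in the outfit."""
    pairs = list(combinations(items, 2))
    return sum(compat(a, b) for a, b in pairs) / len(pairs)

# Toy stand-in metric: items are 2-d style vectors,
# compatibility = negative squared Euclidean distance.
compat = lambda a, b: -sum((x - y) ** 2 for x, y in zip(a, b))
print(outfit_score([(0.0, 1.0), (0.0, 0.9), (0.1, 1.0)], compat))
```

This pair-averaging baseline is exactly what the whole-outfit models (Bi-LSTM, NGNN, HFGN) go beyond, since it never sees more than two items at once.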
From Table~\ref{tab:compatibilityCompare}, we have the following observations: \begin{figure}[t] \centering \vspace{10px} \includegraphics[width=0.95\columnwidth]{images/casestudy.pdf} \caption{Real examples of HFGN and two strong baselines (\emph{i.e., } NGNN and FHN) on the Fill-in-the-Blank task. The green box circles the correct answer.} \label{fig:caseStudy} \end{figure} \begin{itemize} \item Compared to the other methods, SiameseNet achieves poor performance, indicating that merely averaging the compatibility scores of item pairs is insufficient to learn high-order compatibility knowledge, since it overlooks the integrity of the outfit and thus limits performance. \item Bi-LSTM performs better than SiameseNet. The reason might be that Bi-LSTM can better learn latent compatibility knowledge: whereas SiameseNet directly averages the pairwise scores of items, Bi-LSTM treats the whole outfit as a sequence and learns high-order relations beyond item-level comparison. \item FHN performs much better than SiameseNet, although it also just averages pairwise compatibility. This improvement might be attributed to the introduced category knowledge, which verifies the importance of category information in modeling compatibility. Nevertheless, FHN only considers $3$ category labels and therefore misses compatibility relations among fine-grained categories. \item The two graph-based methods, NGNN and HFGN, achieve better performance than the other methods, indicating that the graph structure can better model the complex interactions among items and thus more effectively infer compatibility information. The improvement over Bi-LSTM and FHN verifies that a graph representation can model item interactions better than sequence and pairwise representations. \item Our model, HFGN, achieves the best performance.
Benefiting from the graph structure and the message propagation across items, HFGN is capable of modeling the complex interactions among items within the same outfit. Compared to NGNN, we enhance each item embedding with compatibility information from its neighbours, \emph{i.e., } the other items within the same outfit. Besides, we estimate outfit compatibility by introducing an R-view attention map, which better captures latent compatibility knowledge and further enhances model performance. \end{itemize} \subsubsection{\textbf{Case Study}} In Figure~\ref{fig:caseStudy}, we visualize several test examples of the Fill-in-the-Blank task to compare our model with the strong baselines (\emph{i.e., } NGNN and FHN). From example 1, we can see that all three models infer that the outfit lacks a pair of shoes and correctly select the answer. From example 2, we can see that although FHN correctly selects the complementary category (\emph{i.e., } shoes), it misses the correct answer, as it only considers 3 coarse-grained categories and is incapable of exploiting fine-grained category information (\emph{e.g., } the compatibility distinction between high heels and sports shoes). From example 3, we can see that both NGNN and HFGN infer that the query outfit lacks a bag; nevertheless, HFGN chooses a more compatible bag than NGNN, which might be attributed to the more expressive propagation rules and compatibility formulation in HFGN. These examples demonstrate the rationality and effectiveness of our model on the compatibility matching task. \section{Related Work} In this section, we first review works on graph neural networks, and then introduce the two tasks related to our work: personalized recommendation and compatibility matching. \subsection{Graph Neural Network} Graph neural networks (GNNs) are a class of architectures for modeling a set of elements and their relations.
Due to their great expressive power, GNNs have been widely used in many tasks involving rich relations, including molecular analysis~\cite{MPNN,Protein}, image retrieval~\cite{Richang01,Richang02} and visual comprehension~\cite{CMAT}. Gori~\emph{et al.}~were the first to propose graph neural networks (GNNs)~\cite{GNN}, which are capable of directly processing graphs so as to retain topological information. Although GNNs can be applied to most types of graphs, \emph{e.g., } acyclic, cyclic, directed and undirected, the primitive GNNs are difficult to train to a fixed point. Thereafter, GCN~\cite{GCN} was proposed to generalize convolutions to the graph domain, achieving great success. GCN performs a convolution on the graph and aggregates the information derived from all neighbours to update the node embeddings. Distinct from GCN, GraphSAGE~\cite{graphsage} updates a node embedding by uniformly sampling and aggregating features from its local neighbours. Although GNNs can update a node embedding by propagating information from neighbours at arbitrary depth, long-range message propagation can cause problems. To remedy this, recent works~\cite{GGNN} introduce Gated Recurrent Units (GRUs) into the propagation process. Most of the methods mentioned above were proposed to solve node-focused problems. However, the edges in a graph may also contain rich information. In order to model the relations between nodes, TransE~\cite{TransE} was proposed to embed a graph into a continuous space, where each entity is represented as a point and each relation is modeled as a translating operation. However, TransE has flaws in dealing with complex relations, \emph{e.g., } 1-to-N, N-to-1 and N-to-N. To overcome this flaw, Wang \emph{et al.}~proposed TransH~\cite{TransH} to model a relation as a translating operation on a hyperplane.
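A single GCN-style propagation step of the kind described above, in which each node aggregates symmetrically normalised neighbour features, can be sketched as follows (illustrative only; the toy graph and identity weight are not the update rule of any cited model):

```python
import numpy as np

def gcn_layer(features, adj, weight):
    """One propagation step: symmetrically normalised neighbour aggregation
    D^{-1/2} (A + I) D^{-1/2}, followed by a linear transform and ReLU."""
    a_hat = adj + np.eye(adj.shape[0])               # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    norm = d_inv_sqrt @ a_hat @ d_inv_sqrt
    return np.maximum(norm @ features @ weight, 0)   # ReLU

# Toy graph: 3 nodes on a path, 2-d features, identity transform.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)
feats = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(gcn_layer(feats, adj, np.eye(2)).shape)  # (3, 2)
```

Stacking several such layers is what lets NGCF-style models aggregate high-order connectivity into the node representations.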
Because of their powerful modeling capabilities for complex relationships, GNNs have been widely used in personalized recommendation and fashion analysis, which we introduce in the following. \subsection{Personalized Recommendation} Recommender systems have been widely deployed to capture user preference on online platforms. A series of works has been committed to modeling user behavior effectively~\cite{EAR,MF,NCF,NGCF}. MF~\cite{MF,BPRMF} is one of the most popular techniques; it maps each user and item to a vector based on ID information and models the user-item interaction with an inner product. In order to enhance model performance, some works attempted to incorporate side information into the prediction model. For instance, VBPR \cite{VBPR} incorporated visual features of products to enhance item representations, and NSCR~\cite{NSCR} utilized social relations to help model user behaviors. Besides, a recent work~\cite{KPRN} introduced a knowledge graph to provide complementary information when making recommendations. Apart from the works mentioned above, some efforts were devoted to modeling the user-item interaction with deep learning techniques. As the inner product only linearly models the interaction between users and items, it is insufficient to capture nonlinear and complicated relationships. To remedy this, He \emph{et al.}~proposed NCF~\cite{NCF} to capture nonlinear relationships by leveraging a multi-layer perceptron. Due to the great expressive power of graphs in modeling complex relations, several graph-based recommender systems have been proposed recently. For instance, HOP-Rec~\cite{Hoprec} enriched the training data by performing random walks on the interaction graph. NGCF~\cite{NGCF} stacked multiple propagation layers to aggregate high-order information into node representations. KGAT~\cite{KGAT} integrates the user-item interaction graph and a knowledge graph by linking items with their attributes.
In such a hybrid architecture, node embeddings can be recursively refined by message propagation. In addition, a recent study, LightGCN~\cite{LightGCN}, found that two common designs in GCNs---feature transformation and nonlinear activation---might degrade recommendation performance, and that discarding these operations benefits model effectiveness. \subsection{Compatibility Matching} Fashion analysis~\cite{DeepFashion2,NGNN,MVC,LiaoMM18} has attracted increasing attention in recent years. In this paper, we focus on one fashion-related task, compatibility matching. Some works focus on estimating the compatibility between a pair of items. McAuley \emph{et al.}~proposed to map items into a style space where compatible items fall close to each other~\cite{Stylespace}. Following that, Veit \emph{et al.}~fed a pair of item images into an end-to-end siamese network and measured the distance between them. He \emph{et al.}~proposed Monomer~\cite{Monomer} to model heterogeneous relationships among items beyond mere visual similarity. Beyond visual features, recent works highlight the importance of exploiting multi-modality features in fashion-related tasks. For instance, Song \emph{et al.}~utilized multiple modalities (\emph{e.g., } visual and contextual) to learn a latent compatibility space~\cite{NeuroStylist} via a dual autoencoder network. Afterwards, they proposed a knowledge distillation scheme~\cite{AKD} to learn from both data samples and domain knowledge, so that general knowledge rules can act as a teacher to guide the training process. Recently, outfit-related tasks have aroused interest in the fashion community, and much effort has been devoted to modeling the compatibility of a whole outfit. The key to modeling outfit compatibility is to represent an outfit properly.
Han \emph{et al.}~represented an outfit as a sequence and exploited bidirectional LSTMs to predict the compatibility score of an outfit~\cite{bi-lstm}. Benefiting from the BiLSTM architecture, the model can also predict a compatible item to fill a blank in the outfit. There are also some works that represent an outfit as a graph~\cite{TransNFCM,NGNN}. In NGNN~\cite{NGNN}, Cui \emph{et al.}~represented an outfit as a graph to model the complex relations among items, which has been demonstrated to be more effective than sequence and pairwise representations. \section{Conclusion} In this paper, we have proposed a new framework, Hierarchical Fashion Graph Network (HFGN), to solve the task of personalized outfit recommendation, which requires that the recommended outfits not only exhibit good compatibility but also match the user's personal taste. We build a hierarchical graph structure upon user-outfit interactions and outfit-item mappings. The graph consists of three levels (\emph{i.e., } the user, outfit and item levels), where each level contains the corresponding type of nodes. Through message propagation on this hierarchical graph, the representations of the nodes at each level can be refined by aggregating the interaction information derived from their neighbours. By introducing an ID embedding for each outfit, we incorporate outfit-level semantics into the outfit representation, which has been overlooked by previous works~\cite{FPITF,FHN}. Distinct from these works, which consider compatibility matching and personalized recommendation separately, we regard the compatibility information as a message passed in the graph and encode it into the node representations. Extensive experiments have demonstrated the rationality and effectiveness of HFGN. This work explores the potential of graph neural networks for the personalized outfit recommendation task.
Benefiting from such a hierarchical graph and message propagation, we can extend our work by incorporating additional features (\emph{e.g., } textual features) into the graph network to refine the node embeddings and model higher-order relations among them. In future work, we will focus on exploring the relationship between the two tasks (\emph{i.e., } outfit recommendation and compatibility modeling) and devising a more effective formulation to exploit this knowledge.
\section{Introduction} Recently, the DAMPE collaboration reported new measurements of the cosmic electron flux, which exhibit two exotic features in the energy spectrum: a so-called break structure around 0.9 TeV and a sharp resonance near 1.4 TeV \cite{Ambrosi:2017wek}. The morphology of the excess electron events near 1.4 TeV in the DAMPE data hints at a nearby cosmic ray source; a number of papers have appeared to interpret the DAMPE narrow resonance near 1.4 TeV, including astrophysical sources \cite{Yuan:2017ysv, Fang:2017tvj,Huang:2017egk,Cholis:2017ccs} and dark matter (DM) sources \cite{Liu:2017rgs, Yuan:2017ysv,Fan:2017sor,Duan:2017pkq,Gu:2017gle,Huang:2017egk, Zu:2017dzm,Tang:2017lfb,Chao:2017yjg,Athron:2017drj, Cao:2017ydw,Duan:2017qwj,Gu:2017bdw,Chao:2017emq,Chen:2017tva,Li:2017tmd, Zhu:2017tvk,Gu:2017lir,Nomura:2017ohi,Ghorbani:2017cey,Cao:2017sju, Niu:2017hqe,Yang:2017cjm,Ding:2017jdr,Liu:2017obm,Ge:2017tkd,Zhao:2017nrt, Sui:2017qra,Okada:2017pgr,Cao:2017rjr,Han:2017ars,Niu:2017lts,Nomura:2018jkd, Yuan:2018rys,Pan:2018lhc,Wang:2018pcc}. So far, most papers interpret only the 1.4 TeV excess as the signal of a local source. In this analysis, we argue that both the 0.9 TeV break and the 1.4 TeV resonance in the DAMPE electron spectrum could have a common origin, and we propose a dark matter explanation for both features. Motivated by the morphology of the 1.4 TeV resonance excess, a dark matter subhalo (SH) is assumed to exist in the vicinity of the solar system. We assume a simple cosmic electron background of single power-law form with only two parameters. Both the break and the sharp resonance in the DAMPE data then emerge as the result of excess electrons from dark matter annihilations in the nearby subhalo. We propose a two-mediator dark matter model (2MDM) to interpret the excess electrons seen by DAMPE.
In our model, dark matter can annihilate in the galaxy via two different channels, owing to the two mediators that interact both with dark matter and with the standard model (SM) sector. The two annihilation channels produce distinct signatures in the cosmic electron flux because of the different mass hierarchies between the DM and the mediators. One of the two mediators, denoted $V_1$, has a mass nearly twice that of the DM particle; thus the dominant annihilation channel mediated by the $V_1$ boson is the $\chi \bar \chi \to V_1 \to e^- e^+$ process, which produces cosmic electrons (and positrons) with energy equal to the DM mass. This leads to a sharp resonance in the energy spectrum when DM annihilates in a nearby subhalo. Because the sharp resonance in the DAMPE data occurs around 1.4 TeV, we take the DM mass to be 1.5 TeV. The other mediator, denoted $V_2$, is much lighter than the DM, such that the pair production of on-shell $V_2$ bosons ($\chi \bar \chi \to V_2 V_2$) becomes the primary DM annihilation channel among the processes mediated by the $V_2$ boson. The $V_2$ bosons in the final state further decay to SM fermions. If the $V_2$ boson can decay directly into an electron-positron pair, the electrons (and positrons) have a box-shaped energy spectrum, centered at one-half of the DM mass and with a width determined by the mass ratio between $V_2$ and the DM. The box-shaped electron energy spectrum is further altered during propagation between the source (the DM subhalo) and the observation point (the DAMPE satellite), generating an extended excess in the electron energy spectrum. This then gives rise to a ``break'' structure roughly at one-half of the DM mass ($\sim 750$ GeV in our case) in the electron energy spectrum observed by DAMPE.
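The kinematics behind the box-shaped spectrum can be checked numerically: each $V_2$ carries energy $m_\chi$, and its two-body decay products populate a flat box between $E_\pm = (m_\chi/2)(1 \pm \beta)$ with $\beta = \sqrt{1 - m_{V_2}^2/m_\chi^2}$, so the box is centered at $m_\chi/2$ with width $m_\chi \beta = \sqrt{m_\chi^2 - m_{V_2}^2}$. A quick sketch with the benchmark masses used later in the text:

```python
import math

def box_edges(m_chi, m_v2):
    """Energy edges of the box spectrum for chi chi -> V2 V2 -> f fbar f' fbar':
    each V2 carries E = m_chi, and its decay products populate
    E_pm = (m_chi/2)(1 +- beta), beta = sqrt(1 - (m_v2/m_chi)**2)."""
    beta = math.sqrt(1.0 - (m_v2 / m_chi) ** 2)
    return 0.5 * m_chi * (1.0 - beta), 0.5 * m_chi * (1.0 + beta)

e_lo, e_hi = box_edges(1500.0, 10.0)   # benchmark masses in GeV
print(e_lo + e_hi, e_hi - e_lo)        # centre*2 = m_chi, width = sqrt(m_chi^2 - m_v2^2)
```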
In our model, because the $V_2$ boson is $L_\mu - L_\tau$ gauged, the electrons originating from $V_2$ decays carry only a fraction of the total energy, so the electron energy spectrum is further smeared. Thus, both the break and the sharp resonance observed by DAMPE arise from the electrons produced in DM annihilations in the 2MDM model. We begin in section \ref{sec:model} by presenting the two-mediator DM models in which DM interacts with the SM via two different gauge bosons. In section \ref{sec:BG}, we provide the cosmic electron background used in the analysis. In section \ref{sec:flux}, we describe the method to compute the electron flux from DM annihilations both in the DM subhalo and in the Milky Way (MW) halo, as well as the method to compare our calculations with the DAMPE data. In section \ref{sec:DAMPE}, we compute the DAMPE signals expected in the two-mediator dark matter models. In section \ref{sec:HESS}, we analyze HESS constraints on our DM model. In section \ref{sec:Fermi}, we study the constraints from the Fermi isotropic gamma-ray background measurements. In section \ref{sec:AMS}, we investigate AMS constraints on our DM model. In section \ref{sec:ATLAS}, we calculate ATLAS constraints on our DM model. In section \ref{sec:RD}, we discuss the Sommerfeld enhancement in our model and its impact on the DM relic abundance. In section \ref{sec:sum}, we summarize our findings. \section{The two-mediator dark matter model} \label{sec:model} We consider DM models in which DM is a Dirac fermion charged under two $U(1)$ gauge groups; the corresponding gauge bosons are $V_1$ and $V_2$. Hinted by the sharp resonance near 1.4 TeV in the DAMPE data, we fix the DM mass at 1.5 TeV. The mass of the $V_1$ boson can be near 3 TeV, while the $V_2$ boson is lighter than the DM, such that $V_2$ can be produced on shell in DM annihilations in the DM halo.
Thus the relevant DM annihilation channels are \begin{eqnarray} &&\chi \chi \to V_1 \to f \bar f, \\ &&\chi \chi \to V_2 V_2 \to f \bar f f' \bar f'. \end{eqnarray} The Feynman diagrams for the two annihilation channels are shown in Fig.\ (\ref{fig-diagram}). \begin{figure}[!htbp] \begin{centering} \includegraphics[width=0.65\columnwidth]{plots/2-channels} \caption{Feynman diagrams for the two annihilation channels with different mediators.} \label{fig-diagram} \end{centering} \end{figure} In our analysis, we consider two cases: (1) $V_1$ is electrophilic and $V_2$ is $L_\mu - L_\tau$ gauged; (2) $V_1$ is hidden and kinetically mixes with the SM hypercharge, while $V_2$ is $L_\mu - L_\tau$ gauged. Below we provide one concrete model for the latter case, in which a Dirac fermion dark matter particle $\chi$ couples to two spin-$1$ mediators $V_{1}$ and $V_{2}$: \begin{eqnarray}\label{eq:eff_lag} \mathcal{L} &\supset &-\frac14 V_1^{\mu\nu}V_{1\mu\nu} + g_{1 }\bar{\chi} \gamma_\mu \chi V_1^\mu + \frac\epsilon 2 V_{1}^{\mu\nu}B_{\mu\nu} - \frac14 V_2^{\mu\nu}V_{2\mu\nu} + g_{2 }\bar{\chi} \gamma_\mu \chi V_2^\mu \nonumber \\ &+& g_2 (\bar{\mu} \gamma_\mu \mu -\bar{\tau} \gamma_\mu \tau +\bar{\nu}_\mu \gamma_\mu P_L \nu_\mu -\bar{\nu}_\tau \gamma_\mu P_L \nu_\tau) V_2^\mu, \end{eqnarray} where $\epsilon$ is the kinetic mixing parameter between the gauge boson $V_{1\mu}$ and the SM $U(1)_Y$ hypercharge gauge boson $B_{\mu}$, and $g_1$ ($g_2$) is the gauge coupling of the boson $V_{1\mu}$ ($V_{2\mu}$). Here the DM field carries the $L_\mu - L_\tau$ quantum number. $V_{1\mu\nu}$ ($V_{2\mu\nu}$) is the field strength of $V_{1\mu}$ ($V_{2\mu}$), and $B_{\mu\nu}$ is the SM $U(1)_Y$ field strength.
\section{Cosmic electron background} \label{sec:BG} Because DAMPE does not distinguish the charge of electron/positron events, we assume the following single power-law background (BG) for the total flux of electrons and positrons, \begin{equation} \Phi^{\text{BG}}_{e^\pm} = C E^{-\gamma}, \label{eq:bg} \end{equation} where $C$ and $\gamma$ are free parameters to be determined by data. In our analysis, we use the first eight points and the last eight points in the DAMPE data \cite{Ambrosi:2017wek} to fit this single power-law background, and obtain the following best-fit parameters: $C = 458$ (GeV m$^{2}$ s sr)$^{-1}$ and $\gamma = 3.25$. We use these background parameters throughout our study. Since DAMPE is unable to discriminate electrons from positrons, we will use the word ``electron'' in this paper to collectively denote both electrons and positrons when there is no confusion. \section{Electron flux from DM annihilations} \label{sec:flux} The sharp resonance of the excess events near 1.4 TeV in the DAMPE data hints at a nearby electron/positron source. To fit the spectrum of the DAMPE data, we introduce a nearby DM subhalo with an NFW density profile \cite{Navarro:1996gj} \begin{equation} \rho(r) = \rho_s {(r/r_s)^{-\gamma} \over (1+r/r_s)^{3-\gamma}}. \end{equation} We use the following parameters for the subhalo: $\gamma=0.5$, $\rho_s=100~{\rm GeV/cm}^3$, and $r_s = 0.1~{\rm kpc}$ \cite{Liu:2017rgs}. The distance between the subhalo and us (denoted by $d_s$) is also crucial to the cosmic ray spectrum. We find that the above subhalo with $d_s=0.3$ kpc can fit the DAMPE data well. These values of the four parameters are assumed for the subhalo in our analysis unless specified otherwise. The electron/positron flux can originate from dark matter annihilations both in the Milky Way dark matter halo and in a nearby subhalo. The electron flux from DM annihilations in the MW halo (denoted by $\Phi^{\chi-\text{MW}}$) is computed via PPPC4DMID \cite{Cirelli:2010xx}.
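The background of Eq.~(\ref{eq:bg}) and the subhalo profile above are simple enough to evaluate directly; a sketch with the quoted best-fit and subhalo parameters (units as in the text):

```python
def bg_flux(E, C=458.0, gamma=3.25):
    """Single power-law e+e- background; E in GeV,
    flux in (GeV m^2 s sr)^-1, with the best-fit C and gamma from the text."""
    return C * E ** (-gamma)

def nfw_density(r, rho_s=100.0, r_s=0.1, gamma=0.5):
    """Generalised NFW profile of the subhalo; r and r_s in kpc, rho in GeV/cm^3."""
    x = r / r_s
    return rho_s * x ** (-gamma) / (1.0 + x) ** (3.0 - gamma)

print(bg_flux(100.0))    # background flux at 100 GeV
print(nfw_density(0.1))  # subhalo density at r = r_s
```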
For the electron flux arising from DM annihilations in a nearby subhalo, we use the Green's function method \cite{Ginzburg, Kuhlen:2009is, Delahaye:2010ji, Liu:2017rgs} \begin{equation} \Phi^{\chi-{\rm SH}}({\bf x}, E) = {v_e \over 4\pi} \int d^3 x_s \int dE_s\, G({\bf x}, E; {\bf x}_s, E_s) Q({\bf x}_s, E_s), \end{equation} where $G({\bf x}, E; {\bf x}_s, E_s)$ is the Green's function, $Q({\bf x}_s, E_s)$ is the source term due to DM annihilation, $v_e$ is the electron velocity, and $\Phi^{\chi-{\rm SH}}({\bf x}, E)$ is the electron flux due to DM annihilations in the subhalo, which has the unit of (GeV$^{-1}$ m$^{-2}$ s$^{-1}$ sr$^{-1}$). Here the subscript $s$ indicates the quantities associated with the DM source. The Green's function can be calculated via $G({\bf x}, E; {\bf x}_s, E_s) = b(E)^{-1} (\pi \lambda^2)^{-3/2} \exp\left[-{({\bf x-x}_s)^2/\lambda^2}\right]$ with the propagation scale $\lambda$ being given by $\lambda^2 = \int_E^{E_s} dE' {D(E') / b(E')}$ where the energy loss coefficient $b(E)=b_0 (E/\text{GeV})^2$ with $b_0=10^{-16}$ GeV/s and the diffusion coefficient $D(E)=D_0(E/\text{GeV})^\delta$ with $D_0$ = 11 pc$^2$/kyr, and $\delta=0.7$\ \cite{Cirelli:2008id}. The source function due to DM annihilations is \begin{equation} Q({\bf x_s}, E_s) = {1 \over 4} { \rho^2_\chi({\bf x_s}) \over m_\chi^2 } \langle \sigma v \rangle {dN \over dE_s}(E_s), \end{equation} where $m_\chi$ is the DM mass, $ \rho_\chi({\bf x_s})$ is the DM mass density, $\langle \sigma v \rangle$ is the velocity-averaged DM annihilation cross section, ${dN / dE_s}$ is the electron energy spectrum per DM annihilation. Thus, in our analysis, the total electron flux is given by $\Phi^{\text{th}} = \Phi^{\rm BG} + \Phi^{\chi-\text{MW}} + \Phi^{\chi-\text{SH}}$ where we consider three major contributions: the cosmic ray background, DM annihilations in the MW halo, and DM annihilations in the nearby subhalo. 
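The propagation scale $\lambda$ entering the Green's function has a closed form for the quoted power-law coefficients, since $D(E')/b(E') = (D_0/b_0)\,E'^{\,\delta-2}$. A numerical sketch follows; the kyr-to-second conversion is done by hand and the chosen injection/observation energies are illustrative only:

```python
import math

# Diffusion and energy-loss parameters quoted in the text.
D0_pc2_per_s = 11.0 / (1000.0 * 3.156e7)   # 11 pc^2/kyr converted to pc^2/s
b0 = 1.0e-16                               # GeV/s
delta = 0.7

def propagation_scale_pc(E, E_s):
    """lambda = sqrt( int_E^{E_s} D(E')/b(E') dE' ) in pc, integrated
    analytically: D/b = (D0/b0) E'^(delta-2), an exponent of -1.3 here."""
    p = delta - 1.0   # antiderivative exponent: E^(delta-1)/(delta-1)
    lam2 = (D0_pc2_per_s / b0) * (E_s ** p - E ** p) / p
    return math.sqrt(lam2)

# Electrons injected near the resonance (1.4 TeV), observed at 750 GeV:
print(propagation_scale_pc(750.0, 1400.0), "pc")
```

The resulting scale is a few hundred pc, comparable to the assumed subhalo distance $d_s = 0.3$ kpc, which is why propagation visibly smears the injected spectrum.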
To compare our calculations with the DAMPE data, we further take into account the ``bin effects'' by performing the following computation: \begin{equation} \Phi_i^{\rm th} =\frac{1}{E_i^{\rm max}- E_i^{\rm min}}\int_{E_i^{\rm min}}^{E_i^{\rm max}} \Phi^{\rm th} (E) dE, \end{equation} where $E_i^{\rm min}$ ($E_i^{\rm max}$) is the lower (upper) bound of the $i$-th bin in the DAMPE data. To fit the DAMPE data, we carry out the following $\chi^2$ analysis: \begin{equation} \chi^2=\sum_{i}\frac{( \Phi_i^{\text{th}} - \Phi_i^{\text{exp}} )^2}{\delta_i^2}, \label{eq:chi2} \end{equation} where $\Phi_i^{\text{exp}}$ ($\delta_i$) is the electron flux (uncertainty) reported by the DAMPE experiment \cite{Ambrosi:2017wek}. \section{DAMPE excess events in the two-mediator dark matter models} \label{sec:DAMPE} In this section, we compute the electron flux expected in DAMPE from DM annihilations in the two-mediator DM model. DM annihilations both in the subhalo and in the MW halo are considered in our analysis. We use $\langle \sigma v \rangle_1$ to denote the velocity-averaged DM annihilation cross section for the $\chi\chi\to V_1 \to f \bar f$ process, which is mediated by the heavier gauge boson $V_1$; we use $\langle \sigma v \rangle_2$ to denote the velocity-averaged DM annihilation cross section for the $\chi\chi\to V_2 V_2$ process, in which the lighter gauge boson $V_2$ is produced on shell. The annihilation cross sections $\langle \sigma v \rangle_1$ and $\langle \sigma v \rangle_2$ are mainly responsible for the resonance and the break excess events in the DAMPE electron spectrum, respectively. \subsection{Electrophilic and gauged $L_\mu - L_\tau$} Here we first consider the two-mediator model in which the heavier mediator $V_1$ is electrophilic and the lighter mediator $V_2$ is the $L_\mu - L_\tau$ gauge boson. In this case, for the annihilation process mediated by $V_1$, only $\chi\chi\to V_1 \to e^+ e^-$ can occur.
In the annihilation processes where $V_2$ is produced on shell, $V_2$ further decays into the $\mu\mu$, $\tau\tau$, and $\nu\nu$ final states with branching ratio BR$=1/3$ for each final state. The energy spectrum of the $\mu\mu$ and $\tau\tau$ final states exhibits a box-like distribution centered at $m_\chi/2$ (see e.g.\ \cite{Mardon:2009rc, Ibarra:2012dw, Abdullah:2014lla, Cline:2014dwa, Agrawal:2014oha, Cline:2015qha} for early studies in the context of cosmic rays). In our analysis, because we assume a simple power-law background, there is a wide range of excess electron events, extending from about 50 GeV to over 1 TeV, as shown in the left panel of Fig.\ (\ref{fig-dampe-ds2}). To generate such an extended electron excess, the mass of the $V_2$ boson has to be sufficiently small, since the width of the box-shaped energy spectrum is given by $\sqrt{m_\chi^2 - m_{V_2}^2}$. In addition, the $V_2$ boson in our study is also required to decay into the $\tau\tau$ final state. Thus we take 10 GeV as the benchmark point for the $V_2$ boson mass, which is assumed throughout our analysis. \begin{figure}[!htbp] \begin{centering} \includegraphics[width=0.45\columnwidth]{plots/DAMPE-ds03} \includegraphics[width=0.44\columnwidth]{plots/DAMPE-toymodel} \caption{Left panel: DAMPE electron energy spectrum. Overlaid is the sum of the cosmic ray background and the electron flux from DM annihilations in the MW halo and in the nearby subhalo, which is $d_s= 0.3$ kpc away from us. Right panel: same as the left panel, except that we take different $d_s$ values: $d_s=0.1, 0.2, 0.3, 0.4$ kpc. Here $V_1$ is electrophilic and $V_2$ is $L_{\mu}-L_\tau$ gauged. } \label{fig-dampe-ds2} \end{centering} \end{figure} The left panel of Fig.\ (\ref{fig-dampe-ds2}) shows the DAMPE electron flux data, the cosmic ray background, and the total electron flux from both DM annihilations and the background, where the subhalo takes its default parameters.
Here the 1.4 TeV peak shown in Fig.\ (\ref{fig-dampe-ds2}) mainly comes from the $\chi\chi \to V_1\to e^- e^+$ annihilation channel, while the 0.9 TeV break is primarily due to the $\chi\chi \to V_2V_2$ annihilation channel. The formation of the break structure in the electron energy spectrum is due to several aspects of the problem. Because the mass of $V_2$ in our analysis is taken to be 10 GeV, the box-shaped energy spectra of the $\mu\mu$ and $\tau\tau$ final states have a wide energy range (extending almost from zero to $m_\chi$). The energy loss in charged lepton decays and in cosmic ray propagation additionally shapes the excess electrons. Thus one obtains an extended distribution of the excess electrons with a power-law break around 1 TeV. \begin{table}[htbp] \begin{centering} \begin{tabular}{|c|c|c|c|c|} \hline $d_s$ (kpc) & 0.1 & 0.2 & 0.3 & 0.4 \\ \hline $\sigma v (\chi\chi\to e^+e^-) $(cm$^3$/s) & $7.9\times10^{-27}$ & $2.1\times10^{-26}$ & $4.9\times10^{-26}$ & $1.1\times10^{-25}$ \\ \hline $\sigma v (\chi\chi\to V_2 V_2)$ (cm$^3$/s) & $6.5\times10^{-25}$ & $1.3\times10^{-24}$ & $2.0\times10^{-24}$ & $2.8\times10^{-24}$ \\ \hline \end{tabular} \caption{Best-fit cross sections of the $\chi\chi\to e^+e^-$ and $\chi\chi\to V_2 V_2$ processes for different $d_s$ values. Here $V_2$ is the $L_\mu - L_\tau$ gauge boson. } \label{tab-best-fits} \end{centering} \end{table} We further vary the distance between the subhalo and us in the right panel of Fig.\ (\ref{fig-dampe-ds2}), while keeping the rest of the subhalo parameters fixed. For each $d_s$ value, we find the best-fit DM annihilation cross sections for both channels in fitting the DAMPE data, which are shown in Table~(\ref{tab-best-fits}).
We adopt the case with $d_s=0.3$ kpc as the benchmark model for our analysis, in which the DM annihilation cross sections for the two channels take $\langle \sigma v \rangle_1 = 4.9 \times 10^{-26}$ cm$^3$/s and $\langle \sigma v \rangle_2 = 2.0 \times 10^{-24}$ cm$^3$/s. Taking into account the Sommerfeld enhancement effects (see section (\ref{sec:RD}) for detailed discussions), we find that one should have $g_2=0.68$ in order to obtain $\sigma v (\chi\chi\to V_2 V_2)=2.0 \times 10^{-24}$ cm$^3$/s. \subsection{Kinetic mixing and gauged $L_\mu - L_\tau$} Here we consider the two-mediator model in which the heavier mediator $V_1$ mixes with the SM hypercharge gauge boson via the kinetic mixing (KM) term and the lighter mediator $V_2$ is the $L_\mu - L_\tau$ gauge boson. The Lagrangian of this model is given in Eq.\ (\ref{eq:eff_lag}). Unlike the previous case, in which $V_1$ is electrophilic, the annihilation process mediated by the heavier mediator, $\chi\chi\to V_1 \to f \bar f$, now produces all SM fermions. The analysis regarding the $V_2$ boson is similar to that in the previous section. Regarding the $V_1$ boson in the KM case, there are usually four free parameters in the calculation: the KM parameter $\epsilon$, the gauge coupling $g_1$, the DM mass $m_\chi$, which is now fixed at 1.5 TeV, and the mediator mass $m_{V_1}$, which is typically near $2m_\chi$ to provide a sufficient annihilation rate for the DM relic abundance. However, in our case, there is another, lighter mediator which can significantly change the annihilation cross section mediated by the $V_1$ boson via the Sommerfeld enhancement mechanism.
Thus, to correctly compute the DM annihilation cross section in the halo for the $\chi\chi\to V_1 \to f \bar f$ process, one has to multiply the annihilation cross section due to $V_1$ (see e.g.\ Ref.\ \cite{Cline:2014dwa}) by the Sommerfeld enhancement factor due to the lighter $V_2$ mediator (see section (\ref{sec:RD}) for detailed discussions). In Fig.\ (\ref{fig:DAMPE-KM}), we compute the electron flux arising from the 2MDM model in which the $V_1$ boson kinetically mixes with the SM hypercharge gauge boson and $V_2$ is the $L_\mu - L_\tau$ gauge boson. Here the heavier $V_1$ boson can decay into various SM fermions, with branching ratios determined primarily by the hypercharge quantum numbers of the SM fermions. Since the right-handed charged leptons have a relatively large hypercharge, the total branching ratio of the $V_1$ boson into the charged leptons of the three generations is rather large, $\sum_{\ell=e,\mu,\tau}{\rm BR} (V_1 \to \ell^+\ell^-) \simeq 37\%$. We find that the DM annihilation cross sections $\langle \sigma v \rangle_{1} = 3.9 \times$ 10$^{-25}$ cm$^{3}$/s (for all SM final states in the $\chi \chi \to V_1 \to f \bar f$ process) and $\langle \sigma v \rangle_{2} = 2.0 \times$ 10$^{-24}$ cm$^{3}$/s (for the $\chi \chi \to V_2 V_2$ process) provide the best fit to the DAMPE data in this model. In Fig.\ (\ref{fig:DAMPE-KM}), the peak comes from the contributions of the $\chi \chi \to V_1 \to f \bar f$ processes (mainly due to the $e^+e^-$ final state), whereas the break is primarily due to processes mediated by the $L_{\mu} - L_{\tau}$ boson. \begin{figure}[!htbp] \begin{centering} \includegraphics[width=0.45\columnwidth]{plots/DAMPE} \caption{DAMPE electron energy spectrum. Overlaid is the sum of the cosmic ray background and the electron flux from DM annihilations in the MW halo and in the nearby subhalo. Here $V_1$ kinetically mixes with the SM hypercharge boson and $V_2$ is the gauged $L_{\mu}-L_\tau$ boson.
The DM annihilation cross sections $\langle \sigma v \rangle_{1} = 3.9 \times 10^{-25}$ cm$^{3}$/s (for all SM final states in the $\chi \chi \to V_1 \to f \bar f$ process) and $\langle \sigma v \rangle_{2} = 2.0 \times 10^{-24}$ cm$^{3}$/s (for the $\chi \chi \to V_2 V_2$ process) are used here. } \label{fig:DAMPE-KM} \end{centering} \end{figure} We note that the best-fit cross section for $\langle \sigma v \rangle_{2}$ is about the same as in the previous model, so that $g_2 \simeq 0.68$. Taking into account the Sommerfeld enhancement factor, we find that the model point ($\epsilon$, $g_1$, $m_{\chi}$, $M_{V_1}$) = (0.01, 0.1, 1500 GeV, 2994.2 GeV) in the parameter space can give rise to $\langle \sigma v \rangle_{1} = 3.9 \times 10^{-25}$ cm$^{3}$/s. Here the mass of the $V_1$ boson is smaller than $2m_\chi$, so that the invisible decay $V_1 \to \chi\chi$ cannot occur. In addition, the DM annihilation cross section in the early universe receives another suppression factor relative to that in the DM halo today, because the larger kinetic energy of the DM particles in the early universe moves the characteristic $\sqrt{s}$ of the DM annihilation process further away from the Breit-Wigner resonance than today \cite{Feldman:2008xs}. Because the invisible decay of the $V_1$ boson is kinematically disallowed here and the branching ratios into charged leptons are rather significant, the LHC discovery potential for such a $V_1$ boson is high. The discussion of the LHC constraints on this model is given in section \ref{sec:ATLAS}.
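The $\simeq 37\%$ leptonic branching ratio quoted above can be cross-checked by counting hypercharge quantum numbers. The following Python sketch is an illustration, not code from the analysis: it assumes a purely hypercharge-coupled $V_1$, neglects fermion masses, and takes the invisible decay $V_1\to\chi\chi$ to be closed, as stated above.

```python
# Partial width of a hypercharge-coupled vector into a massless fermion is
# proportional to N_c * (Y_L^2 + Y_R^2), with hypercharge normalized as
# Q = T3 + Y.  One generation suffices: the factor of 3 cancels in ratios.
fermions = {
    # name: (colors, Y_L, Y_R); Y_R = 0 marks the absent right-handed neutrino
    "nu": (1, -0.5,  0.0),
    "e":  (1, -0.5, -1.0),
    "u":  (3,  1/6,  2/3),
    "d":  (3,  1/6, -1/3),
}
weights = {f: nc * (yl**2 + yr**2) for f, (nc, yl, yr) in fermions.items()}
total = sum(weights.values())
br_charged_leptons = weights["e"] / total   # summed over e, mu, tau per generation
print(br_charged_leptons)                   # ~0.375
```

This reproduces $\sum_\ell {\rm BR}(V_1\to\ell^+\ell^-) \simeq 37.5\%$, matching the value quoted in the text.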
\section{HESS constraints} \label{sec:HESS} The gamma ray flux produced by DM annihilations can be calculated as follows \begin{equation} {d\Phi_\gamma \over dE_\gamma} = \sum_{i} { \langle \sigma v \rangle_{i} \over 8 \pi m_{\chi}^2} \left( {d N_\gamma \over d E_\gamma} \right)_{i} J(\Delta\Omega), \label{eq:gammaflux} \end{equation} where $m_{\chi}$ is the DM mass, $\langle \sigma v \rangle_{i}$ is the velocity-averaged DM annihilation cross section for channel $i$, $(d N_\gamma / d E_\gamma)_{i}$ is the gamma ray energy spectrum per annihilation for channel $i$, and $J(\Delta\Omega)$ is the J-factor for the region of interest (ROI). The differential flux $d \Phi_\gamma/ dE_\gamma$ has units of (GeV cm$^2$ s)$^{-1}$. The J-factor is computed via \begin{equation} J(\Delta\Omega)=\int_{\Delta \Omega} d \Omega \int d s\, \rho^{2}_{\chi} , \label{eq:Jfactor} \end{equation} where $\Delta \Omega$ is the solid angle of the ROI, $\rho_{\chi}$ is the DM density, and $s$ is the distance along the line of sight. HESS searched for very high energy $\gamma$-rays in the inner region of the Milky Way halo, a circular region of $1^{\circ}$ radius excluding a $\pm 0.3^{\circ}$ band in Galactic latitude \cite{Abramowski:2011hc,Abdallah:2016ygi}. With the accumulated 254-hour data \cite{Abdallah:2016ygi}, stringent upper bounds can be set on the DM annihilation cross sections for various SM final states. In a recent study \cite{Profumo:2017obk}, the HESS constraints on dark matter annihilations into on-shell mediators were analyzed for various SM final states. Our analysis here is similar to that in Ref.\ \cite{Profumo:2017obk}, but in our case the on-shell mediators decay into a collection of SM final states with branching ratios given by the 2MDM model.
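To make Eq.\ (\ref{eq:Jfactor}) concrete, the line-of-sight integral can be evaluated with a simple midpoint rule. The Python sketch below is illustrative only: the NFW parameters ($r_s = 20$ kpc, normalization giving a local density of about $0.4$ GeV/cm$^3$ at an assumed $R_\odot = 8.5$ kpc) are generic values, not taken from the text, and the $\pm 0.3^{\circ}$ latitude band cut is ignored.

```python
import math

KPC_CM = 3.0857e21          # cm per kpc
R_SUN = 8.5                 # kpc, assumed Sun-Galactic-center distance
R_S, RHO_S = 20.0, 0.345    # kpc, GeV/cm^3: NFW scale radius and normalization
                            # (RHO_S chosen so the local density is ~0.4 GeV/cm^3)

def rho_nfw(r):
    """NFW density in GeV/cm^3 at galactocentric radius r (kpc)."""
    x = r / R_S
    return RHO_S / (x * (1.0 + x) ** 2)

def j_factor(theta_max_deg=1.0, n_theta=250, n_s=2000, s_max=40.0):
    """Midpoint-rule J = int dOmega int ds rho^2 (GeV^2 cm^-5) for a cone of
    half-angle theta_max toward the Galactic center."""
    th_max = math.radians(theta_max_deg)
    dth, ds = th_max / n_theta, s_max / n_s
    total = 0.0
    for i in range(n_theta):
        th = (i + 0.5) * dth            # midpoints keep r > 0 at the cusp
        cth = math.cos(th)
        los = 0.0
        for j in range(n_s):
            s = (j + 0.5) * ds          # kpc along the line of sight
            r = math.sqrt(R_SUN * R_SUN + s * s - 2.0 * R_SUN * s * cth)
            los += rho_nfw(r) ** 2 * ds
        total += 2.0 * math.pi * math.sin(th) * dth * los
    return total * KPC_CM               # convert the ds integral from kpc to cm

J = j_factor()   # of order 10^21 GeV^2 cm^-5 for these assumed parameters
```

The coarse grid is adequate here because the $\sin\theta$ weight tames the central $1/r^2$ cusp of the NFW profile; a production calculation would use adaptive quadrature.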
\subsection{HESS constraints on DM annihilations in the Galactic center} In the following, we calculate the upper limit on the DM annihilation cross section $\langle \sigma v \rangle_{\chi\chi\to V_2 V_2}$ from the HESS data, where $V_2$ is the gauged $L_\mu - L_\tau$ boson. The method we use here is to rescale the limits calculated in Ref.\ \cite{Abdallah:2016ygi}, which analyzed 254 hours of data recorded by HESS. The details of the method can be found in Appendix (\ref{HESSlimit}). \begin{figure}[htbp] \begin{centering} \includegraphics[width=0.45\columnwidth]{plots/hess_rescaled5.pdf} \caption{HESS upper limits on $\langle \sigma v \rangle (\chi\chi\to V_2 V_2)$, where $V_2$ is the gauged ${L_\mu-L_\tau}$ boson. We consider three different DM profiles: NFW (solid), isothermal (dashed), and Einasto (dot-dashed). Here only gamma rays from the MW halo are considered. The limits are computed based on the exclusion limits for the $\chi\chi \to \mu^+ \mu^-$ and $\chi\chi \to \tau^+ \tau^-$ processes given in Ref.\ \cite{Abdallah:2016ygi}. A light $V_2$ mass is assumed in this analysis. } \label{fig-hess-xw-ys} \end{centering} \end{figure} We first analyze the HESS limits on DM annihilations in the center of the Galaxy. Because the DM distribution in the Galactic center is not known to good precision and the gamma rays are very sensitive to the DM density distribution there, several DM profiles are considered in the HESS analyses \cite{Abramowski:2011hc,Abdallah:2016ygi}. We provide a comparison of the $J$-factors from different DM profiles in Appendix (\ref{app:Jfactor}). Here we consider three different DM profiles, NFW, isothermal, and Einasto, to interpret the HESS constraints. Fig.\ (\ref{fig-hess-xw-ys}) shows the 95\% CL limits on the DM annihilation cross section $\langle \sigma v \rangle_{\chi\chi\to V_2 V_2}$, where $V_2$ is the gauged ${L_\mu-L_\tau}$ boson.
For 1.5 TeV DM annihilating into sufficiently light $V_2$ bosons, the HESS constraints are $ \langle \sigma v \rangle_2 \lesssim 1.1\times 10^{-25}\, (4 \times 10^{-24})$ cm$^3$/s for the NFW (isothermal) profile. Thus the DM annihilation cross section $ \langle \sigma v \rangle_2 =2.0\times 10^{-24}$ cm$^3$/s, which is responsible for generating the break in the DAMPE data, is excluded if one considers the NFW or Einasto profile, but is still allowed if the isothermal profile is assumed. \subsection{HESS constraints on the location of the subhalo} DM annihilations in the subhalo also contribute to the gamma ray flux observed by the HESS experiment. Because the HESS search region is $1^\circ$ around the Galactic center, the gamma ray flux observed by HESS from the subhalo is a function of $l_{\rm SH}$, the angle between the Galactic center and the center of the subhalo. We compute the $J$ factor of the subhalo inside the HESS search region in the left panel of Fig.~\ref{fig-subhalo-to-hess-rs} for different $d_s$ values. The subhalo $J$ factor increases when the subhalo moves towards either the Galactic center or us. \begin{figure}[!htbp] \begin{centering} \includegraphics[width=0.47\columnwidth]{plots/J_SH_Hess-ds2.pdf} \includegraphics[width=0.45\columnwidth]{plots/hesstoSH1.pdf} \caption{Left panel: the subhalo $J$ factor in the HESS search region as a function of $l_{\rm SH}$ for different $d_s$ values. Right panel: HESS lower limits on $l_{\rm SH}$ as a function of $d_s$. Here $l_{\rm SH}$ is the angle between the Galactic center and the center of the subhalo. } \label{fig-subhalo-to-hess-rs} \end{centering} \end{figure} We further determine the minimum $l_{\rm SH}$ value by saturating the HESS constraints on DM annihilations.
To determine the minimum $l_{\rm SH}$ value, we use $ \langle \sigma v \rangle_{\rm DAMPE} \times ( J_{\rm MW}^{\rm iso} +J_{\rm SH} (l_{\rm SH}^{\rm min})) = \langle \sigma v \rangle_{\rm HESS} \times J_{\rm MW}^{\rm iso}, $ where $\langle \sigma v \rangle_{\rm DAMPE}$ is the cross section needed for the DAMPE electron excess events, as given in Table\ (\ref{tab-best-fits}), $J_{\rm MW}^{\rm iso} = 7.23\times10^{19}$ GeV$^2$ cm$^{-5}$ is the $J$ factor inside the HESS search region for the MW halo with the isothermal DM density profile, $\langle \sigma v \rangle_{\rm HESS}$ is the HESS 95\% CL upper bound on the DM annihilation cross section with the isothermal profile ($4 \times 10^{-24}$ cm$^3$/s, as given by the isothermal curve in Fig.\ (\ref{fig-hess-xw-ys})), and $J_{\rm SH}$ is the $J$ factor inside the HESS search region for the subhalo. Because the gamma ray flux produced by the process $\chi\chi\to V_1\to e^+e^-$ is much smaller than that from $\chi\chi\to V_2 V_2$, we take $\langle \sigma v \rangle_{\rm DAMPE} \simeq \langle \sigma v \rangle (\chi\chi\to V_2 V_2)$ in the calculation here. The right panel of Fig.~\ref{fig-subhalo-to-hess-rs} shows the lower bound on the $l_{\rm SH}$ angle. When $d_s=0.3$ kpc, the subhalo has to be $> 21^\circ$ away from the Galactic center to avoid the HESS constraints. \subsection{HESS limits for both DM annihilation channels} Here we analyze the HESS constraints for the model in which $V_1$ kinetically mixes with the SM hypercharge and $V_2$ is the $L_\mu - L_\tau$ gauge boson. To take both channels into consideration, we use $ \Phi_\gamma (\langle \sigma v \rangle_1, m_{\chi}) + \Phi_\gamma (\langle \sigma v \rangle_2, m_{\chi}) = \Phi_\gamma^{95} (m_{\chi}), $ where $\Phi_\gamma^{95}$ is the 95\% CL upper bound from the 254-hour HESS data on the total gamma ray flux (in units of cm$^{-2}$ s$^{-1}$) integrated over the energy range $160$ GeV $<E_\gamma<m_\chi$.
Here $\Phi_\gamma (\langle \sigma v \rangle_1, m_{\chi})$ and $\Phi_\gamma (\langle \sigma v \rangle_2, m_{\chi})$ are the gamma ray fluxes from the two annihilation channels, respectively. Fig.\ (\ref{HESS_kineticmixing}) shows the HESS limits on both $\langle \sigma v \rangle_1$ and $\langle \sigma v \rangle_2$ for the case where $m_\chi=1.5$ TeV; only contributions from the MW halo are considered. Our model is excluded if the NFW profile is used, but allowed if the isothermal profile is used for the DM distribution in the MW halo. \begin{figure}[!htbp] \begin{centering} \includegraphics[width=0.45\columnwidth]{plots/HESScs95_2_kineticmixing} \caption{HESS constraints on both annihilation channels. $\langle \sigma v \rangle_1$ is the DM annihilation cross section mediated by the $V_1$ boson that kinetically mixes with the SM hypercharge; $\langle \sigma v \rangle_2$ is the DM annihilation cross section mediated by the $V_2$ boson, the $L_\mu - L_\tau$ gauge boson. Here $m_\chi$ = 1.5 TeV. The model point used in Fig.\ (\ref{fig:DAMPE-KM}) is indicated by the black point. } \label{HESS_kineticmixing} \end{centering} \end{figure} \section{Fermi constraints} \label{sec:Fermi} Similar to the gamma ray flux measured by HESS, the gamma ray flux observed by Fermi due to DM annihilations is calculated as follows, \begin{equation} {d \Phi_\gamma \over dE_\gamma} = \sum_{i} { \langle \sigma v \rangle_{i} \over 8 \pi m_{\chi}^2} \left( {d N_\gamma \over d E_\gamma} \right)_{i} \bar{J}, \label{eq:gammaflux2} \end{equation} where $\bar{J} = J(\Delta \Omega)/\Delta \Omega$ is the J-factor averaged over the region of interest. The Fermi isotropic gamma ray background (IGRB) data are reported as an intensity flux. The gamma ray flux computed in Eq.\ (\ref{eq:gammaflux2}) is an intensity flux in units of (GeV cm$^2$ s sr)$^{-1}$.
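As a quick order-of-magnitude check of the intensity formula, one can evaluate the prefactor $\langle\sigma v\rangle_2 \,\bar J/(8\pi m_\chi^2)$, which multiplies $dN_\gamma/dE_\gamma$ to give the intensity. The Python sketch below is illustrative: it takes the best-fit $\langle\sigma v\rangle_2$ and the MW-halo $J\simeq 6\times 10^{21}$ GeV$^2$ cm$^{-5}$ quoted in the text for the $|b|>20^{\circ}$ region.

```python
import math

M_CHI = 1500.0      # GeV, DM mass
SV2 = 2.0e-24       # cm^3/s, best-fit <sigma v>(chi chi -> V2 V2)
J_MW = 6.0e21       # GeV^2 cm^-5, MW-halo J in the |b| > 20 deg region (from the text)

# Solid angle of the |b| > 20 deg region: 4*pi*(1 - sin 20deg) steradians
OMEGA = 4.0 * math.pi * (1.0 - math.sin(math.radians(20.0)))

jbar = J_MW / OMEGA                              # averaged J-factor per steradian
prefactor = SV2 / (8.0 * math.pi * M_CHI ** 2) * jbar
# Multiplying prefactor by dN_gamma/dE_gamma (GeV^-1) gives the intensity
# in (GeV cm^2 s sr)^-1, directly comparable to the Fermi IGRB data.
```

With these inputs the prefactor is of order $10^{-11}$ cm$^{-2}$ s$^{-1}$ sr$^{-1}$; the predicted intensity then depends on the per-annihilation spectrum of the channel in question.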
The isotropic gamma ray background measured by Fermi is obtained from the all-sky data excluding the $|b| < 20^{\circ}$ band around the Galactic plane \cite{Ackermann:2014usa}. The averaged $J$ factor for the Fermi isotropic gamma ray background region can thus be computed as follows \begin{equation} \bar{J} = \frac{\int d s \int_{|b| > 20^{\circ}} d b \, dl \cos b \, \rho^{2}_{\chi}} {\int_{|b| > 20^{\circ}} d b \, dl \cos b}, \end{equation} where $\rho_{\chi}$ is the DM density, $b$ is the Galactic latitude, $l$ is the Galactic longitude, and $s$ is the distance between us and the point where the DM annihilates. In this study, we take into account both the MW halo and the DM subhalo when calculating the J-factor. In this section, we consider the same isothermal DM profile for the MW halo as in the HESS analysis. \subsection{Fermi isotropic gamma ray background constraints} \label{Fermi isotropic} Here we compare the gamma ray flux produced by dark matter annihilations in the subhalo as well as in the MW halo with the isotropic background measured by Fermi-LAT \cite{Ackermann:2014usa} to obtain constraints on our DM model. Because the Galactic plane is masked in the Fermi IGRB analysis \cite{Ackermann:2014usa}, the constraints from the Fermi IGRB are minimized when the subhalo sits on the Galactic plane. We use $b_{\rm SH}$ to denote the Galactic latitude of the subhalo center. Thus we set $b_{\rm SH}=0$ in our analysis unless specified otherwise. \begin{figure}[!htbp] \begin{centering} \includegraphics[width=0.45\columnwidth]{plots/Fermi_isotropic_20181103} \includegraphics[width=0.45\columnwidth]{plots/Fermi_isotropic_kineticmixing} \caption{Left panel: Fermi IGRB data \cite{Ackermann:2014usa} and gamma rays from DM, where $V_1$ is electrophilic and $V_2$ is the $L_\mu-L_\tau$ gauge boson. Right panel: Fermi IGRB data and gamma rays from DM, where $V_1$ kinetically mixes with the SM hypercharge and $V_2$ is the $L_\mu-L_\tau$ gauge boson with a mass of 10 GeV.
The DM annihilation cross sections are the same as those in Fig.\ (\ref{fig-dampe-ds2}) (left panel) and Fig.\ (\ref{fig:DAMPE-KM}) (right panel). We use the default subhalo parameters for both figures.} \label{fig-subhalo} \end{centering} \end{figure} The left panel of Fig.\ (\ref{fig-subhalo}) shows the Fermi IGRB data \cite{Ackermann:2014usa} and the gamma rays from DM annihilations for the case in which the heavier $V_1$ boson is electrophilic and the lighter $V_2$ boson is the $L_\mu-L_\tau$ gauge boson. The DM annihilation cross sections for the two annihilation channels are ($\langle \sigma v \rangle_1$, $\langle \sigma v \rangle_2$) = (4.9 $\times$ 10$^{-26}$ cm$^3$/s, 2.0 $\times$ 10$^{-24}$ cm$^3$/s), the same as those in the left panel of Fig.\ (\ref{fig-dampe-ds2}). The gamma ray flux arising from the $\chi \chi \to e^+ e^-$ process is only about 3\% of that due to $\chi \chi \to V_2 V_2$ in this case. We find that the J-factor of the subhalo is about the same as the J-factor of the MW halo in the Fermi IGRB search region, $J_{SH} \simeq J_{MW} \simeq 6 \times 10^{21} ~ {\rm GeV^2/cm^5}$. We plot in the left panel of Fig.\ (\ref{fig-subhalo}) the gamma rays from the MW halo alone, as well as the gamma rays from both the MW halo and the subhalo. The predicted total gamma rays in our DM model do not exceed the current Fermi IGRB bound. For the DM model in which the heavier $V_1$ boson kinetically mixes with the SM hypercharge gauge boson and the lighter $V_2$ boson is the $L_\mu-L_\tau$ gauge boson, the predicted gamma rays are shown in the right panel of Fig.\ (\ref{fig-subhalo}). We use the DM annihilation cross sections ($\langle \sigma v \rangle_1$, $\langle \sigma v \rangle_2$) = (3.9 $\times$ 10$^{-25}$ cm$^3$/s, 2.0 $\times$ 10$^{-24}$ cm$^3$/s), the same as those in Fig.\ (\ref{fig:DAMPE-KM}).
Unlike the DM model presented in the left panel of Fig.\ (\ref{fig-subhalo}), the annihilation process mediated by the $V_1$ boson in the right panel of Fig.\ (\ref{fig-subhalo}) has a larger cross section and various SM final states. We plot the gamma rays from both annihilation channels in the right panel of Fig.\ (\ref{fig-subhalo}). We find that the isotropic gamma ray measurements are beginning to probe this DM model in the high energy bins of the Fermi IGRB data. \subsection{Fermi constraints on the subhalo} Here we study the effects on the Fermi IGRB data of changing various parameters of the DM subhalo. The gamma ray flux is very sensitive to the distance between the subhalo and us. We compute the gamma rays expected at Fermi for different $d_s$ values in the left panel of Fig.\ (\ref{fig-fermi-iso-bg6}). Different $d_s$ values not only lead to different J-factors in the Fermi search region, but also to different DM annihilation cross sections, as given in Table (\ref{tab-best-fits}), since one has to fit the DAMPE data. The predicted gamma rays become larger when the subhalo moves towards us. In order to evade the Fermi IGRB constraints, the subhalo has to be at least 0.3 kpc away from us. We also compute the gamma rays from the subhalo when it moves away from the Galactic plane. The gamma ray flux expected at Fermi is shown in the right panel of Fig.\ (\ref{fig-fermi-iso-bg6}) for several different $b_{\rm SH}$ values. If the subhalo moves away from the Galactic plane by more than $10^\circ$, the gamma rays produced in the Fermi IGRB search region become significant compared with the current measurements.
\begin{figure}[!htbp] \begin{centering} \includegraphics[width=0.45\columnwidth]{plots/Fermi-isotropic7.pdf} \includegraphics[width=0.45\columnwidth]{plots/Fermi-bSH.pdf} \caption{ Left panel: Fermi IGRB data \cite{Ackermann:2014usa} and gamma rays from DM, where $V_1$ is electrophilic and $V_2$ is the $L_\mu-L_\tau$ gauge boson, for different $d_s$ values: $d_s=(0.1, 0.2, 0.3, 0.4)$ kpc. The subhalo is placed at $b_{\rm SH}=0^{\circ}$. Right panel: same as the left panel except that we keep $d_s=0.3$ kpc fixed and let $b_{\rm SH}$ vary: $b_{\rm SH} =(0^\circ, 10^\circ, 15^\circ, 20^\circ)$. } \label{fig-fermi-iso-bg6} \end{centering} \end{figure} \begin{figure}[!htbp] \begin{centering} \includegraphics[width=0.45\columnwidth]{plots/Fermi-rs-rhos.pdf} \caption{Fermi IGRB data \cite{Ackermann:2014usa} and gamma rays from DM for different subhalo profiles. Here $V_1$ is electrophilic and $V_2$ is the $L_\mu-L_\tau$ gauge boson, with $d_s=0.3$ kpc and $b_{\rm SH} =0^\circ$. The DM annihilation cross sections are $\langle\sigma v\rangle_1=2.33\times10^{-26}$ cm$^3$/s and $\langle\sigma v\rangle_2=1.06\times10^{-24}$ cm$^3$/s for the case where $r_s$=0.05 kpc and $\rho_s$=400 GeV/cm$^3$, and $\langle\sigma v\rangle_1=2.26\times10^{-26}$ cm$^3$/s and $\langle\sigma v\rangle_2=1.05\times10^{-24}$ cm$^3$/s for the case where $r_s$=0.08 kpc and $\rho_s$=200 GeV/cm$^3$. For the case where $r_s$=0.1 kpc and $\rho_s$=100 GeV/cm$^3$, the DM annihilation cross sections are listed in Table (\ref{tab-best-fits}).} \label{fig-fermi-rs-rhos} \end{centering} \end{figure} We further study the gamma rays by changing the subhalo profile parameters $(r_s,\rho_s)$ in Fig.\ (\ref{fig-fermi-rs-rhos}), where $d_s=0.3$ kpc and $\gamma=0.5$ are fixed. Two sets of parameters in addition to the default values for the subhalo are used here.
For each case, the DM annihilation cross sections for the two channels are chosen to minimize the $\chi^2$ of the fit to the DAMPE data. As shown in Fig.\ (\ref{fig-fermi-rs-rhos}), the Fermi constraints can be significantly alleviated if the DM subhalo becomes smaller and denser. \section{AMS constraints} \label{sec:AMS} We do not attempt to explain the AMS positron excess. However, the two-mediator DM model must not produce so many positrons that they violate the AMS positron fraction measurement \cite{Accardo:2014lma}. \begin{figure}[!htbp] \begin{centering} \includegraphics[width=0.49\columnwidth]{plots/AMS} \includegraphics[width=0.46\columnwidth]{plots/AMSlimit} \caption{Left panel: AMS positron fraction data \cite{Accardo:2014lma} and the predicted value from the $L_\mu - L_\tau$ gauge boson. Here we use $m_\chi=1.5$ TeV, $m_{V_2}=10$ GeV, and $\langle \sigma v \rangle_{2}= 2.0 \times$ 10$^{-24}$ cm$^3$/s for the DM annihilation cross section. Right panel: 95$\%$ C.L. upper bound on $\langle \sigma v \rangle_{2}$ from each AMS data point. } \label{fig:AMS} \end{centering} \end{figure} To compute the AMS constraints on the DM model, we extrapolate our simple cosmic ray electron/positron background given by Eq.\ (\ref{eq:bg}) down to the low electron energy range. We further assume that the positron fraction background takes the simple form $f^{\rm BG}=1/(C_{f} E^{\gamma_{f}} + 1)$. We use the first 15 data points in the AMS positron fraction data \cite{Accardo:2014lma} to find the best-fit parameters: $C_{f} = 11.2$ and $\gamma_{f} = 0.31$. The positron fraction, including contributions both from the background and from DM annihilations, is then computed as \begin{equation} f^{\rm th} = {\Phi^{\rm BG}f^{\rm BG}+\Phi^{\chi}/2 \over \Phi^{\rm BG}+\Phi^{\chi}} \end{equation} where $\Phi^{\chi}$ is the cosmic ray flux of both electrons and positrons due to DM annihilations.
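The background model and the positron-fraction formula above can be summarized in a few lines of Python. This is an illustrative sketch using the best-fit $C_f$ and $\gamma_f$ quoted above; the flux arguments passed in are placeholders, not fitted fluxes.

```python
C_F, GAMMA_F = 11.2, 0.31     # best-fit background parameters quoted in the text

def f_background(E):
    """Positron-fraction background f_BG = 1/(C_f E^gamma_f + 1), E in GeV."""
    return 1.0 / (C_F * E ** GAMMA_F + 1.0)

def f_theory(E, phi_bg, phi_chi):
    """Positron fraction with a DM component: DM annihilations produce e+ and
    e- in equal numbers, so half of phi_chi is positrons.  phi_bg and phi_chi
    are the background and DM-induced (e+ + e-) fluxes in the same units."""
    return (phi_bg * f_background(E) + 0.5 * phi_chi) / (phi_bg + phi_chi)
```

Since $f^{\rm BG} < 1/2$ at these energies while the DM component carries a positron fraction of exactly $1/2$, any added DM flux raises $f^{\rm th}$ above the background, which is what the AMS data constrain.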
We use ($f_i^{\rm AMS}+1.64\, \delta f_i^{\rm AMS}$) at each AMS data point (excluding the first 15 points) to compute the 95\% C.L. upper bound on the DM annihilation cross section, where $f_i^{\rm AMS}$ is the AMS positron fraction data point and $\delta f_i^{\rm AMS}$ is its error bar. Fig.\ (\ref{fig:AMS}) shows the AMS constraints on the DM annihilation cross section mediated by the $L_\mu - L_\tau$ gauge boson using the positron fraction data. The most stringent limit comes from the highest energy bin in the AMS data, which provides the 95\% CL upper bound $\langle \sigma v \rangle_{2} \lesssim 3 \times 10^{-24}$ cm$^{3}$/s for the $L_\mu - L_\tau$ gauge boson. The predicted positron fraction values in the AMS energy range in our model lie below the AMS measurements. We note in passing that the gap between our predicted positron fraction and the actual AMS data could be due to astrophysical sources. \section{LHC constraints} \label{sec:ATLAS} Here we study the LHC constraints on the $V_1$ boson that kinetically mixes with the SM hypercharge. In this case, the $V_1$ boson couples to all SM fermions through the kinetic mixing parameter $\epsilon$ given in Eq.\ (\ref{eq:eff_lag}). Thus, the $V_1$ boson can be produced in the Drell-Yan process at the LHC and can be searched for by reconstructing dilepton final states. Here we utilize the recent ATLAS data \cite{Aaboud:2017buh} to put constraints on the kinetic mixing parameter $\epsilon$ between the $V_1$ boson and the SM hypercharge boson. \begin{figure}[!htbp] \begin{centering} \includegraphics[width=0.454\columnwidth]{plots/atlas} \includegraphics[width=0.44\columnwidth]{plots/epsilon-atlas} \caption{Left panel: ATLAS constraints (13 TeV and 36.1 fb$^{-1}$) \cite{Aaboud:2017buh} on the $Z'$ boson in the dilepton channel. Overlaid are the predictions of the kinetic mixing (KM) model with parameter $\epsilon=0.1$ and of the sequential SM (SSM) model.
Right panel: ATLAS upper bound on $\epsilon$ in the KM model as a function of the $Z'$ mass. } \label{fig:atlas} \end{centering} \end{figure} Fig.\ (\ref{fig:atlas}) shows the ATLAS upper bound on the dilepton production cross section, using 36.1 fb$^{-1}$ of data at the 13 TeV colliding energy. Predicted dilepton signals arising from the kinetic-mixing model and from the sequential standard model are also shown in the left panel of Fig.\ (\ref{fig:atlas}). The dilepton cross section with $\epsilon=0.1$ for a 3 TeV $Z'$ boson in the kinetic-mixing model is below the current LHC limit. We further compute the upper bound on $\epsilon$ from the dilepton final states in the entire ATLAS search range, shown in the right panel of Fig.\ (\ref{fig:atlas}). The limit on $\epsilon$ will certainly improve when all data currently accumulated at the LHC are analyzed (about 150 fb$^{-1}$ of data have been collected by each of ATLAS and CMS so far \cite{zhang}). However, to reach the sensitivity needed to probe the model point considered in our analysis, $\epsilon=0.01$ for a 3 TeV $Z'$ boson, more data from future LHC runs are probably needed. \section{Sommerfeld enhancement} \label{sec:RD} The cross section of the process $\chi\chi \to V_2 V_2$ is larger than the canonical thermal DM annihilation cross section by about two orders of magnitude, which would suppress the DM abundance significantly. However, one should take into account the Sommerfeld enhancement induced by $V_2$ exchanges between the annihilating DM particles, as illustrated in Fig.\ (\ref{fig-SE}), since the mediator $V_2$ is light and the velocity of DM in the MW halo is low.
\begin{figure}[!htbp] \begin{centering} \includegraphics[width=0.65\columnwidth]{plots/2-channel-SE} \caption{Illustrations of light $V_2$ exchanges between annihilating DM particles in the two channels, which induce the Sommerfeld enhancement.} \label{fig-SE} \end{centering} \end{figure} The Sommerfeld enhancement factor $S$ can be approximated by \cite{Cassel:2009wt,Slatyer:2009vg,Cline:2015qha} \begin{equation} S = \left({\pi\over \epsilon_v}\right){\sinh X \over \cosh X - \cos\sqrt{{(2\pi/\bar\epsilon_{2})} - X^2}}, \label{eq:sommerfeld} \end{equation} where $\bar\epsilon_{2}= (\pi/12)\epsilon_{2}$, $X = \epsilon_v/\bar\epsilon_{2}$, $\epsilon_{2} = {m_{V_2}/(\alpha_2 m_\chi)}$, and $\epsilon_v = {v/\alpha_2}$ with $\alpha_2=g_2^2/(4\pi)$. We take $v=10^{-3}$ as the typical DM velocity in the halo. The left panel of Fig.\ (\ref{fig:S}) shows the Sommerfeld enhancement factor as a function of the gauge coupling $g_2$, where the mediator $V_2$ mass is 10 GeV and the dark matter mass is 1.5 TeV. \begin{figure}[htbp] \begin{centering} \includegraphics[width=0.44\columnwidth]{plots/Sommerfeld-factor} \includegraphics[width=0.45\columnwidth]{plots/Sommerfeld-xsec-V2} \caption{Left panel: Sommerfeld enhancement factor $S$ as a function of the coupling $g_2$, where $m_\chi=1.5$ TeV and $m_{V_2}=10$ GeV. Right panel: DM annihilation cross section $\langle \sigma v \rangle_2$ as a function of $g_2$. The blue-dashed line indicates the cross section needed to fit the DAMPE data. } \label{fig:S} \end{centering} \end{figure} For the process $\chi\chi\to V_2 V_2$, the DM annihilation cross section is given by $\langle\sigma v\rangle_2 = \langle\sigma v\rangle_2^0 \times S(g_2)$, where $\langle\sigma v\rangle_2^0 \simeq {g_2^4/(16 \pi m_\chi^2)} $ is the annihilation cross section without taking into account the Sommerfeld enhancement effect.
By equating this expression with $2.0\times10^{-24}$ cm$^3$/s, the cross section needed to fit the DAMPE data, one obtains $g_2 =0.68$; the corresponding Sommerfeld enhancement factor is $S \simeq 93$. We further plot $\langle\sigma v \rangle_2$ as a function of the coupling $g_2$ in the right panel of Fig.\ (\ref{fig:S}). For the process $\chi\chi\to V_1 \to f \bar{f}$, one also has to consider the same enhancement due to the $V_2$ mediator, so that the DM annihilation cross section should be computed via $\langle\sigma v\rangle_1=S \times\langle\sigma v\rangle_1^0$, where the superscript $0$ indicates the cross section without the Sommerfeld enhancement. Using $S\simeq 93$, we find that the model point ($\epsilon$, $g_1$, $m_{\chi}$, $M_{V_1}$) = (0.01, 0.1, 1500 GeV, 2994.2 GeV) in the parameter space of the KM model can give rise to $\langle\sigma v\rangle_1 = 3.9\times10^{-25}\,{\rm cm^3/s}$, which is needed to fit the DAMPE data. The DM relic abundance is primarily determined by the DM annihilation cross section at the so-called freeze-out epoch in the early universe. Typical freeze-out occurs at the temperature $T\simeq m_\chi /(20-25)$, where the DM velocity is approximately $v\simeq1/4$, so that the DM annihilation cross section no longer receives the significant Sommerfeld enhancement that is present in the Galaxy today. We compute the DM annihilation cross sections for the processes $\chi\chi\to V_1\to f\bar{f}$ (KM) and $\chi\chi\to V_2V_2$ at freeze-out and find $\langle\sigma v\rangle_1 = 1.0\times 10^{-28}$ cm$^3$/s and $\langle\sigma v\rangle_2 =2.2\times 10^{-26}$ cm$^3$/s for $T = m_\chi /25$. Thus the total DM annihilation cross section at freeze-out is approximately $2.2\times 10^{-26}$ cm$^3$/s, which is very close to the canonical thermal DM annihilation cross section needed to generate the right DM relic density in the universe.
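The enhancement factor of Eq.\ (\ref{eq:sommerfeld}) and the resulting $\langle\sigma v\rangle_2$ are straightforward to evaluate numerically. The Python sketch below uses the parameters quoted above ($m_\chi = 1.5$ TeV, $m_{V_2}=10$ GeV, $v=10^{-3}$) and a standard $(\hbar c)^2 c$ unit conversion; at $g_2 = 0.68$ it gives $S$ of order $10^2$ and $\langle\sigma v\rangle_2 \approx 2\times 10^{-24}$ cm$^3$/s, consistent with the values quoted in the text (small differences reflect the approximate formula).

```python
import math

M_CHI, M_V2 = 1500.0, 10.0   # GeV: DM and V2 masses
V_HALO = 1.0e-3              # typical DM velocity in the halo (units of c)
GEV2_TO_CM3S = 1.167e-17     # (hbar c)^2 * c: converts GeV^-2 to cm^3/s

def sommerfeld(g2, v=V_HALO):
    """Approximate Sommerfeld factor (the Cassel/Slatyer form used above)."""
    alpha2 = g2 ** 2 / (4.0 * math.pi)
    eps2 = M_V2 / (alpha2 * M_CHI)
    eps2bar = (math.pi / 12.0) * eps2
    eps_v = v / alpha2
    X = eps_v / eps2bar
    arg = 2.0 * math.pi / eps2bar - X ** 2
    return (math.pi / eps_v) * math.sinh(X) / (
        math.cosh(X) - math.cos(math.sqrt(arg)))

def sigma_v2(g2):
    """<sigma v>(chi chi -> V2 V2) ~ g2^4/(16 pi m_chi^2) x S, in cm^3/s."""
    sv0 = g2 ** 4 / (16.0 * math.pi * M_CHI ** 2) * GEV2_TO_CM3S
    return sv0 * sommerfeld(g2)
```

A root finder over `sigma_v2(g2) - 2.0e-24` then recovers a coupling close to the quoted $g_2 = 0.68$.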
We note that there is a three-orders-of-magnitude boost of $\langle\sigma v\rangle_1$ in the Galaxy today relative to the early universe, owing to both the Breit-Wigner enhancement and the Sommerfeld enhancement in this annihilation channel. \section{Conclusions} \label{sec:sum} Two exotic features are present in the new cosmic electron spectrum observed by the DAMPE collaboration: a break at 0.9 TeV and a peak at 1.4 TeV. We propose to simultaneously explain both features in the DAMPE data via annihilations of one DM species that interacts with the SM via two different mediators. The two different DM annihilation channels via the two mediators then generate the two new features in the cosmic electron energy spectrum near TeV. The annihilation process mediated by the heavier $V_1$ boson generates the 1.4 TeV peak; the annihilation process mediated by the lighter $V_2$ boson produces the extended break near 0.9 TeV. In this work, we consider two concrete examples of the two-mediator DM models. In both cases the lighter $V_2$ boson is the $L_\mu-L_\tau$ gauge boson and has a mass of 10 GeV, so that $V_2$ can be produced on-shell in annihilations of the DM, whose mass is taken to be 1.5 TeV. We consider the heavier $V_1$ boson to be either electrophilic or kinetically mixed with the SM hypercharge. We assume a single power-law cosmic electron background which contains only two parameters, and a DM subhalo which is 0.3 kpc from us. Both the electrophilic and the KM $V_1$ bosons provide good fits to the 1.4 TeV excess, with annihilation cross sections of $4.9\times10^{-26}$ cm$^3$/s and 3.9 $\times$ 10$^{-25}$ cm$^3$/s, respectively; the $L_\mu-L_\tau$ gauge boson $V_2$ provides a good fit to the break with an annihilation cross section of 2.0 $\times$ 10$^{-24}$ cm$^3$/s. Several experimental constraints on the DM models are analyzed, including HESS, the Fermi IGRB, the AMS positron fraction, and LHC dilepton searches.
Gamma rays expected in the HESS search region come mainly from annihilations via the $V_2$ boson, owing to the larger cross section. The HESS constraints are very sensitive to the DM density profile of the MW halo. The cross section needed for the $V_2$ process is excluded if one assumes the NFW or Einasto profile for the MW halo, but still allowed if the isothermal profile is considered. In addition, a substantial amount of gamma rays also arises in DM annihilations via the kinetic-mixing $V_1$ boson; we find that the gamma rays from both annihilation channels are consistent with the HESS data assuming the isothermal profile for the MW halo. We also find that the subhalo cannot be located in the direction of the Galactic center, since it would contribute a significant amount of gamma rays to the HESS search region. The Fermi isotropic gamma ray background constraints are sensitive to the distance between the subhalo and us. We find that our models do not violate the Fermi isotropic gamma ray background if the subhalo is placed at 0.3 kpc from us. We also note that one can begin to probe our model with more data accumulated at Fermi. DM annihilations in our model cannot provide a satisfactory explanation of the AMS positron fraction excess. Nonetheless, one can use the AMS data to put constraints on DM models by demanding that the predicted positron fraction in DM models not exceed the AMS measurement. We find that the highest energy bin in the AMS data gives the most stringent bound on the $L_\mu-L_\tau$ gauge boson process and will probe our model in the near future. LHC constraints on the KM $V_1$ boson are analyzed in the dilepton channel. For a 3 TeV $V_1$ boson, the upper bound on $\epsilon$ is about 0.1. The DM annihilation cross sections needed to fit the DAMPE data are much larger than the canonical thermal cross section. This discrepancy can be nicely explained by the Sommerfeld enhancement due to the light $V_2$ mediator in the models.
Taking into account the non-perturbative Sommerfeld enhancement corrections present in the current galaxy, our model is consistent with the relic density requirement in the thermal DM framework. \acknowledgments We thank Farinaldo Queiroz and Lei Zhang for helpful correspondence. The work is supported in part by the National Natural Science Foundation of China under Grant Nos.\ 11775109 and U1738134, by the National Recruitment Program for Young Professionals, by the Nanjing University Grant 14902303, by the National Postdoctoral Program for Innovative Talents under Grant No.\ BX201700116, and by the Jiangsu Planned Projects for Postdoctoral Research Funds under Grant No.\ 1701130B.
\section{Introduction} London was the first to explain attractive interactions between neutral atoms or molecules by applying quantum mechanics~\cite{London}. Nowadays, the attractive forces are called the van der Waals--London forces, and are described by the potential energy decaying as~$R^{-6}$ for $R$ sufficiently large.\footnote{More precisely, if one takes the interactions between electrons and the quantized Maxwell field according to non-relativistic QED into account, the $R^{-6}$ behavior is true for the near-field region (very vaguely ``sufficiently large $R$ but not too large'', as discussed on \cite[p.~157]{Craig} and in \cite{MS2}), but for the far-field region (where ``retardation effects become important'') the presented results show an $R^{-7}$ behavior. In the approximation where the quantum fluctuations of the Maxwell field are ignored, only the electrostatic Coulomb interaction remains. In this case, the binding energy behaves as $R^{-6}$ provided that $R$ is sufficiently large. This $R^{-6}$ behavior is well understood mathematically \cite{AMR,AS, LT, MoSi}.} Here, $R$ denotes the distance between two atoms or molecules. It is recognized that these forces come from the quantum fluctuations of the charges inside the atoms. Because even a simple hydrogen atom displays a fluctuating dipole, the van der Waals--London forces are ubiquitous and therefore very fundamental. Casimir and Polder took the interactions between electrons and the quantized radiation fields into consideration and performed fourth-order perturbative computations~\cite{CP}. They found that the finiteness of the speed of light weakens the correlation between nearby dipoles and causes the attractive potential between atoms to behave as \begin{gather} V_{\mathrm{CP}}(R)\cong -\frac{23}{4\pi} \bigg(\frac{1}{2\pi}\bigg)^2\frac{1}{R^7} \alpha_A \alpha_B,\qquad R\gg 1, \label{CPpo} \end{gather} where $\alpha_A$ and $\alpha_B$ are the static polarizabilities of the atoms.
The potential $V_{\mathrm{CP}}$ is called the Casimir--Polder potential or the retarded van der Waals potential. For reviews, see, e.g., \cite{BMM,Keller, Ex, MB,Milonni}. Although this result is plausible, Casimir--Polder's arguments are heuristic and lack mathematical rigor. There are few rigorous results concerning the Casimir--Polder potential. In \cite{MS1, MS2}, Miyao and Spohn gave a path integral formula for $V_{\mathrm{CP}}$ and applied it to computing the second cumulant. Under the assumption that all higher order cumulants behave as $O\big(R^{-9}\big)$ and their coefficients are small enough to control, they rigorously recovered that $V_{\mathrm{CP}}$ behaves as $R^{-7}$ as $R\to \infty$. Although this assumption appears to be plausible, proving it is extremely hard. Therefore, giving a mathematical foundation to the Casimir--Polder potential is an open problem even today. In the present paper, we will examine the Pauli--Fierz model under the following assumptions \cite[equations~(13.127) and~(13.123)]{Spohn}: \begin{itemize}\itemsep=0pt \item[(C.1)] the dipole approximation (see~(\ref{DIPA})); \item[(C.2)] the electrons are strongly bound around each nucleus (see (\ref{BOUND1}) and (\ref{BOUND2})). \end{itemize} The dipole approximation (C.1) is widely accepted as a convenient procedure in the community of nonrelativistic QED~\cite{Spohn}. The assumption (C.2) is often useful when we study the low energy behavior of the system. Under these assumptions, we prove that the binding energy for two hydrogen atoms actually behaves as $R^{-7}$. In the context of the Born--Oppenheimer approximation, this indicates that the effective potential between two hydrogen atoms behaves as $R^{-7}$ too. This result supports our assumptions for the model without the dipole approximation, and is expected to become a starting point for the study of the non-approximated model.
Our proof relies on the fact that the dipole approximated Hamiltonians can be diagonalized by applying Feynman's representation of the quantized radiation fields~\cite{Feynman}. It has been believed, on the basis of fourth-order perturbation theory, that the dipole approximated model also exhibits~$R^{-7}$ behavior. However, the arguments concerning the error terms are completely missing. Indeed, this part is tacitly assumed to be trivial in the literature. In this paper, we actually perform systematic error estimates which are far from trivial. In mathematical physics, it is known that rigorous studies of the Pauli--Fierz Hamiltonian require extra care due to the infamous infrared problem \cite{BFS, GLL, Spohn}. Fortunately, within the assumptions (C.1) and (C.2), we can control the problem relatively easily. Before we proceed, we have additional remarks. In his Ph.D.~Thesis \cite{Koppen}, Koppen studied the retarded van der Waals potential; he examined the Pauli--Fierz model with the dipole approximation~(C.1), but the condition~(C.2) is not assumed in \cite{Koppen}. In contrast to the present study, he imposed the infrared cutoff $\sigma$ on the Hamiltonian in order to apply naive perturbation theory and obtained an expansion formula for the binding energy: $E_{\sigma}(R)=\sum\limits_{i=0}^{\infty} e^i V_i^{\sigma}(R)$. Then he removed the infrared cutoff from each term: $V_i(R):=\lim\limits_{\sigma\to +0} V_i^{\sigma}(R)$. Finally, he proved that some $V_i(R)$ satisfies~(\ref{CPpo}). His observation could be regarded as a nice starting point for the mathematical analysis of the retarded van der Waals potential; however, there are still some problems to be considered. For example, the magnetic contributions to the~$-R^{-7}$ decay are completely overlooked.
In addition, in the mathematical study of the Pauli--Fierz model, it is well known that proving $\lim\limits_{\sigma\to +0}E_{\sigma}(R)=\sum\limits_{i=0}^{\infty} e^i \lim\limits_{\sigma\to +0}V_i^{\sigma}(R)$ is a very hard problem, due to the aforementioned infrared problem. Our contributions are \begin{itemize}\itemsep=0pt \item to provide a minimal QED model which can rigorously explain the Casimir--Polder potential in a relatively simple way; \item to perform systematic error estimates without the infrared cutoff. \end{itemize} In this way, the present paper and the thesis \cite{Koppen} are complementary to each other. Since the electrons obey Fermi--Dirac statistics, the wave functions of the two-electron system belong to $(\mathfrak{H}\wedge \mathfrak{H})\otimes \Fock\big(L^2\big(\BbbR^3\times \{1, 2\}\big)\big)$, where $\mathfrak{H}=L^2\big(\BbbR^3\big)\otimes \BbbC^2$ is the Hilbert space with spin $1/2$, the symbol $\wedge $ indicates the anti-symmetric tensor product and $\Fock\big(L^2\big(\BbbR^3\times \{1, 2\}\big)\big)$ is the Fock space over $L^2\big(\BbbR^3\times \{1, 2\}\big)$. Usually, the ground state of this system is a spin singlet. Thus, the spatial part of the ground state is symmetric, and we end up minimizing the energy in an unrestricted manner on $\big(L^2\big(\BbbR^3\big)\otimes L^2\big(\BbbR^3\big)\big)\otimes \Fock\big(L^2\big(\BbbR^3\times \{1, 2\}\big)\big)$. For this reason, we perform our analysis on $\big(L^2\big(\BbbR^3\big)\otimes L^2\big(\BbbR^3\big)\big)\otimes \Fock\big(L^2\big(\BbbR^3\times \{1, 2\}\big)\big)$.\footnote{Or we could simply say that one considers ``distinguishable particles''; see Section~\ref{Discuss} for details.} However, it should be mentioned that our observation here cannot be extended to general $N$-electron systems directly. In fairness, we mention the following two difficulties of the assumptions~(C.1) and~(C.2). For details, see the discussions in Section~\ref{Discuss}.
\begin{itemize}\itemsep=0pt \item The condition (C.2) breaks the indistinguishability of the electrons. \item Under the conditions (C.1) and (C.2), we cannot reproduce the exact cancellation of the term with $R^{-6}$ decay (the van der Waals--London potential) by the contribution from the quantized Maxwell field. Note that this cancellation is known to be fundamental for explaining the retarded van der Waals potential \cite{MS1, MS2}. \end{itemize} The present paper is organized as follows. In Section~\ref{SecM}, we introduce the dipole approximated Pauli--Fierz Hamiltonian and state the main result. In Section~\ref{FeynRep}, we switch to the Feynman representation of the quantized radiation fields. This representation enables us to diagonalize the Hamiltonians, as we will see in the following sections. Further, we introduce a canonical transformation which induces the quantized displacement fields in the Hamiltonians in Section~\ref{CanoTr}. Section~\ref{FiniteVApp} is devoted to the finite volume approximation, which is a standard method in the study of quantum field theory~\cite{AH, GJ}. Then we diagonalize the Hamiltonians in Sections~\ref{Dai1} and~\ref{Dai2}. In Section~\ref{PfMainT}, we give a proof of the main theorem. Section~\ref{Discuss} is devoted to a discussion of the approximations~(C.1) and~(C.2). In Appendices~\ref{List},~\ref{NumC} and~\ref{BasicI}, we collect various auxiliary results which are needed in the main sections. \section{Main result}\label{SecM} Let us consider a single hydrogen atom with an infinitely heavy nucleus located at the origin~$0$. The nonrelativistic QED Hamiltonian for this system is given by \begin{gather*} H_{\mathrm{1e}}=\frac{1}{2}\big({-}\im \nabla-eA(x)\big)^2-e^2V(x)+\Hf. \end{gather*} The nucleus has charge $e>0$, and the electron has charge $-e$.
We assume that the charge distribution $ \varrho$ satisfies the following properties: \begin{itemize}\itemsep=0pt \item[(A.1)] $\varrho$ is normalized: $\int_{\BbbR^3} \dm x\, \varrho(x)=1$. \item[(A.2)] $\varrho(x)=\varrho(-x)$. Thus the Fourier transform $\hat{\varrho}$ is real. \item[(A.3)] $\hat{\varrho}$ is rotation invariant, $\hat{\varrho}(k) = \hat{\varrho}_{\mathrm{rad}}(|k|)$, of rapid decrease and smooth. \end{itemize} The smeared Coulomb potential $V$ is given by \begin{gather*} V(x)=\int_{\BbbR^3}\dm k\, \hat{\varrho}(k)^2|k|^{-2}\ex^{-\im k\cdot x}. \end{gather*} The photon annihilation operator is denoted by $a(k, \lambda)$. As usual, this operator satisfies the standard commutation relation: \begin{gather*} [a(k, \lambda), a(k', \lambda')^*]=\delta_{\lambda\lambda'}\delta(k-k'). \end{gather*} The quantized vector potential $A(x)$ is defined by \begin{gather*} A(x)=\sum_{\lambda=1,2}\int_{\BbbR^3}\dm k\, \frac{\hat{\varrho}(k)}{\sqrt{2|k|}}\vepsilon(k, \lambda)\big(\ex^{-\im k\cdot x}a(k, \lambda)^*+\ex^{\im k\cdot x}a(k, \lambda)\big), \end{gather*} where $\vepsilon(k, \lambda)=(\vepsilon_1(k, \lambda), \vepsilon_2(k, \lambda), \vepsilon_3(k, \lambda)),\, \lambda=1, 2$ are polarization vectors. For concreteness, we choose \begin{gather} \vepsilon(k, 1)=\frac{(k_2, -k_1, 0)}{\sqrt{k_1^2+k_2^2}},\qquad \vepsilon(k, 2)=\frac{k}{|k|}\wedge \vepsilon(k, 1).\label{PolariDef} \end{gather} Note that $A(x)$ is essentially self-adjoint. We will denote its closure by the same symbol. The field energy $\Hf$ is given by \begin{gather*} \Hf=\sum_{\lambda=1,2}\int_{\BbbR^3}\dm k\, |k|a(k, \lambda)^*a(k,\lambda). \end{gather*} The operator $H_{\mathrm{1e}}$ acts in the Hilbert space $ L^2\big(\BbbR^3\big)\otimes \Fock\big(L^2\big(\BbbR^3_k\times \{1,2\}\big)\big)$, where $\Fock(\h)$ is the bosonic Fock space over~$\h$: $\Fock(\h)=\bigoplus\limits_{n=0}^{\infty} \h^{\otimes_{\mathrm{s}}n}$. Here, $\otimes_{\mathrm{s}}$ indicates the symmetric tensor product.
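As a sanity check on the choice (\ref{PolariDef}), the orthonormality and transversality of $\{\vepsilon(k, 1), \vepsilon(k, 2), k/|k|\}$ can be verified numerically; a minimal sketch (the sample wave vector is a hypothetical choice, any $k$ with $k_1, k_2\neq 0$ works):

```python
import math

def eps1(k):
    # eps(k,1) = (k2, -k1, 0) / sqrt(k1^2 + k2^2), cf. (PolariDef).
    k1, k2, _ = k
    n = math.hypot(k1, k2)
    return (k2 / n, -k1 / n, 0.0)

def unit(k):
    n = math.sqrt(sum(c * c for c in k))
    return tuple(c / n for c in k)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def eps2(k):
    # eps(k,2) = khat ^ eps(k,1), the vector product in (PolariDef).
    return cross(unit(k), eps1(k))

k = (0.3, -1.2, 0.7)   # hypothetical sample wave vector with k1, k2 != 0
e1, e2, khat = eps1(k), eps2(k), unit(k)
# {e1, e2, khat} should be an orthonormal triad, i.e. the polarization
# vectors are transverse (e1.k = e2.k = 0) and mutually orthogonal.
```

This is exactly the structure used below when the field operators are split along the two polarization directions.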
To examine the Casimir--Polder potential, we consider two hydrogen atoms, one located at the origin and the other at $r=(0, 0, R)$ with $R>0$. For computational convenience, we define the position of the second electron relative to $r$, see Fig.~\ref{Figure1}. \begin{figure}[t]\centering \includegraphics{Miyao-Fig1} \caption{} \label{Figure1} \end{figure} Then the two-electron Hamiltonian reads \begin{gather*} H_{\mathrm{2e}}=\frac{1}{2}\big({-}\im \nabla_1-eA(x_1)\big)^2-e^2V(x_1) +\frac{1}{2}\big({-}\im \nabla_2-eA(x_2+r)\big)^2-e^2V(x_2)\\ \hphantom{H_{\mathrm{2e}}=}{}+e^2V_R(x_1, x_2)+\Hf \end{gather*} with \begin{align*} V_R(x_1, x_2)&=-V(x_1-r)-V(x_2+r)+V(r)+V(r+x_2-x_1)\\ &=\int_{\BbbR^3}\dm k\, \hat{\varrho}(k)^2|k|^{-2}\big(1-\ex^{-\im k\cdot x_1}\big) \big(1-\ex^{\im k\cdot x_2}\big) \ex^{\im k\cdot r}. \end{align*} The operator $H_{\mathrm{2e}}$ acts in $L^2\big(\BbbR^3_{x_1}\big)\otimes L^2\big(\BbbR_{x_2}^3\big)\otimes \Fock\big(L^2\big(\BbbR^3_k\times \{1,2\}\big)\big)$. The dipole approximation (C.1) means the following replacement: \begin{gather} A(x_1)\leadsto A(0),\qquad A(x_2+r)\leadsto A(r). \label{DIPA} \end{gather} By the assumption (C.2), we can take $x_1$ and $x_2$ sufficiently small. Therefore, we assume that the Coulomb potential of the nuclei together with the Coulomb interaction between the electrons can be approximated by harmonic potentials. Then one has \begin{gather} V(x_j)\simeq -\frac{1}{2}\nu_0^2x_j^2+\mathrm{const} \label{BOUND1} \end{gather} with $\nu^2_0=\frac{1}{3}\int \dm k \, \hat{\varrho}(k)^2$ and \begin{gather} V_R(x_1, x_2)\simeq \int\dm k\, \hat{\varrho}(k)^2 \ex^{\im k\cdot r}\big(x_1\cdot \hat{k}\big) \big(x_2\cdot \hat{k}\big) \label{BOUND2} \end{gather} with $\hat{k}=k/|k|$. 
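For completeness, the quadratic approximations (\ref{BOUND1}) and (\ref{BOUND2}) follow from a short Taylor expansion; a sketch of the computation, using (A.2), (A.3) and the rotation invariance of $\hat{\varrho}$:

```latex
% Expand e^{-ik.x} ~ 1 - i k.x - (k.x)^2/2 in V(x); the linear term
% vanishes by (A.2), and by (A.3) the angular average of (khat.x)^2 is x^2/3:
\begin{gather*}
V(x)\simeq V(0)-\frac{1}{2}\int_{\BbbR^3}\dm k\, \hat{\varrho}(k)^2
\big(\hat{k}\cdot x\big)^2
=V(0)-\frac{1}{2}\nu_0^2 x^2,\qquad
\nu_0^2=\frac{1}{3}\int_{\BbbR^3}\dm k\, \hat{\varrho}(k)^2.
\end{gather*}
% For V_R, keep the leading bilinear term
% (1 - e^{-ik.x_1})(1 - e^{ik.x_2}) ~ (i k.x_1)(-i k.x_2) = (k.x_1)(k.x_2),
% which, after dividing by |k|^2, gives
\begin{gather*}
V_R(x_1, x_2)\simeq \int_{\BbbR^3}\dm k\, \hat{\varrho}(k)^2 \ex^{\im k\cdot r}
\big(x_1\cdot \hat{k}\big)\big(x_2\cdot \hat{k}\big).
\end{gather*}
```

Since the potential enters the Hamiltonian as $-e^2V(x_j)$, the first line produces the harmonic terms $+\frac{1}{2}e^2\nu_0^2 x_j^2$ appearing below.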
Hence, we arrive at \begin{gather*} H_{\mathrm{D1e}}=\frac{1}{2}\big({-}\im \nabla -eA(0)\big)^2+\frac{1}{2}e^2\nu_0^2x^2+\Hf \end{gather*} and \begin{gather*} H_{\mathrm{D2e}} =\frac{1}{2}\big({-}\im \nabla_1-eA(0)\big)^2+\frac{1}{2}e^2\nu_0^2 x_1^2 +\frac{1}{2}\big({-}\im \nabla_2-eA(r)\big)^2+\frac{1}{2}e^2\nu_0^2 x_2^2 \\ \hphantom{H_{\mathrm{D2e}} =}{} +e^2 \int_{\BbbR^3}\dm k\, \hat{\varrho}(k)^2 \ex^{\im k\cdot r}(x_1\cdot \hat{k}) (x_2\cdot \hat{k})+\Hf. \end{gather*} Note that $H_{\mathrm{D1e}}$ and $H_{\mathrm{D2e}}$ are self-adjoint and bounded from below~\cite{LHB}, because the cross-term $\int \dm k\, \hat{\varrho}(k)^2 \ex^{\im k\cdot r} \big(x_1\cdot \hat{k}\big) \big(x_2\cdot \hat{k}\big)$ becomes very small provided that~$R$ is large enough. For a physical discussion of the approximations above, see Section~\ref{Discuss}. In what follows, we assume an additional condition: \begin{itemize}\itemsep=0pt \item[(A.4)] We regard $\nu_0$ as a parameter. Thus, $\nu_0$ is independent of $\varrho$. \end{itemize} Hence, there are three parameters $e$, $R$ and $\nu_0$ in our models. \begin{Thm}\label{Rto7} Let $E(R)=\inf \operatorname{spec}(H_{\mathrm{D2e}})$ and let $E=\inf \operatorname{spec}(H_{\mathrm{D1e}})$, where $\operatorname{spec}(X)$ indicates the spectrum of a linear operator $X$. Let \begin{gather*} c_{\infty}=\max \left\{ \sqrt{2}e \big\||k|^{-1} \hat{\varrho}\big\|_{L^2}, \frac{\||k| \hat{\varrho}\|_{L^2}}{\sqrt{2}e\nu_0^2} , \frac{\|\hat{\varrho}\|_{L^2}}{\nu_0} \right\}. \end{gather*} Choose $e$ and $\nu_0$ such that $c_{\infty}<1/2$, $1\le \sqrt{2} e \nu_0$ and $\sqrt{2} e \|\hat{\varrho}\|_{L^2}<1$. Then one has \begin{gather*} \lim_{R\to \infty}R^7(2E-E(R))=\frac{23}{4\pi}\left(\frac{1}{2\pi}\right)^2\left( \frac{1}{4}\alpha_{\mathrm{E, at}}\right)^2, \end{gather*} where $\alpha_{\mathrm{E, at}}=\nu_0^{-2}$. \end{Thm} \begin{rem}\quad \begin{itemize}\itemsep=0pt \item[1.]
The constant $\alpha_{\mathrm{E, at}}$ is the static polarizability of a decoupled atom, i.e., \begin{gather} \alpha_{\mathrm{E, at}}= \frac{2}{3}\big\la \psi_{\mathrm{at}}|x\cdot (h_{\mathrm{at}}-3e \nu_0/2)^{-1}x\psi_{\mathrm{at}}\big\ra, \label{DMoment} \end{gather} where $h_{\mathrm{at}}=-\frac{1}{2}\Delta+\frac{e^2\nu_0^2}{2} x^2$ and $\psi_{\mathrm{at}}$ is the ground state of $h_{\mathrm{at}}$. Note that $x_j \psi_{\rm at}$ is orthogonal to $\psi_{\rm at}$: $\la \psi_{\rm at}|x_j\psi_{\rm at}\ra=0$. Thus, the vectors $(h_{\rm at}-3e\nu_0/2)^{-1}x_j\psi_{\rm at}$ in~(\ref{DMoment}) are mathematically meaningful. \item[2.] The restrictions on the parameters in Theorem~\ref{Rto7} are due to technical reasons: As we will see in the later sections, these are needed in order to control the perturbative expansions for~$E$ and~$E(R)$. \end{itemize} \end{rem} \begin{exam}Let $\eta\in \mathscr{S}\big(\BbbR^3\big)$, the Schwartz space. Suppose that $\eta$ satisfies the following: \begin{itemize}\itemsep=0pt \item $\eta(0)=(2\pi)^{-3/2}$; \item $\eta(k)$ is real-valued; \item $\eta(k)=\eta_{\rm rad}(|k|)$. \end{itemize} For given $\xi>0$, we define $\varrho$ by \begin{gather*} \hat{\varrho}(k)=\eta(\xi k). \end{gather*} Then $\varrho$ satisfies (A.1)--(A.3). In addition, since \begin{gather*} \|\hat{\varrho}\|_{L^2}\propto \xi^{-3/2},\qquad \||k| \hat{\varrho}\|_{L^2} \propto \xi^{-5/2},\qquad \| |k|^{-1}\hat{\varrho} \|_{L^2} \propto \xi^{-1/2}, \end{gather*} all the assumptions in Theorem \ref{Rto7} are fulfilled, provided that $\xi$ is large enough. Note that a~typical choice of $\eta$ is $\eta(k)=(2\pi)^{-3/2}\ex^{-|k|^2}$. \end{exam} \section{Feynman Hamiltonians} \label{FeynRep} \subsection{Preliminaries} To prove our main result, let us introduce Feynman Hamiltonians of nonrelativistic QED~\cite{Feynman}. These Hamiltonians can be diagonalized readily as we will see in Sections~\ref{Dai1} and~\ref{Dai2}.
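The $\xi$-scalings of the three norms claimed in the Example of Section~\ref{SecM} can be spot-checked numerically for the Gaussian choice of $\eta$; a minimal sketch (the midpoint-rule quadrature, cutoff and step count are implementation choices, not taken from the text):

```python
import math

def eta(r):
    # Typical profile from the Example: eta(k) = (2*pi)**(-3/2) * exp(-|k|^2).
    return (2.0 * math.pi) ** -1.5 * math.exp(-r * r)

def weighted_norm(xi, p, rmax=10.0, n=100000):
    # || |k|^p rho_hat ||_{L^2} with rho_hat(k) = eta(xi*k), computed radially:
    # norm^2 = 4*pi * integral_0^infinity r^(2+2p) * eta(xi*r)^2 dr  (midpoint rule).
    h = rmax / n
    total = sum((((i + 0.5) * h) ** (2 + 2 * p)) * eta(xi * (i + 0.5) * h) ** 2
                for i in range(n))
    return math.sqrt(4.0 * math.pi * total * h)

# Substituting u = xi*r shows norm(xi) = xi**(-(3 + 2*p)/2) * norm(1),
# i.e. the xi**(-3/2), xi**(-5/2), xi**(-1/2) scalings for p = 0, 1, -1.
for p in (0, 1, -1):
    ratio = weighted_norm(2.0, p) / weighted_norm(1.0, p)
    print(f"p = {p:2d}: ratio = {ratio:.6f}, "
          f"expected = {2.0 ** (-(3 + 2 * p) / 2):.6f}")
```

In particular, all three norms shrink as $\xi$ grows, which is why the smallness conditions of Theorem~\ref{Rto7} hold for $\xi$ large enough.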
First, note the following orthogonal decomposition: \begin{gather*} L^2\big(\BbbR^3\big)=L^2_e\big(\BbbR^3\big) \oplus L^2_o\big(\BbbR^3\big), \end{gather*} where \begin{gather*} L_e^2\big(\BbbR^3\big)=\big\{f\in L^2\big(\BbbR^3\big)\, |\, f(-x)=f(x)\ \text{a.e.} \big\},\\ L^2_o\big(\BbbR^3\big) =\big\{f\in L^2\big(\BbbR^3\big)\, |\, f(-x)=-f(x)\ \text{a.e.} \big\}. \end{gather*} For notational convenience, we denote by $\vepsilon_j(\cdot, \lambda)$ the multiplication operator by the function~$\vepsilon_j(\cdot, \lambda)$. We begin with the following lemma. \begin{lemm} Let \begin{alignat*}{3} & \mathfrak{H}_1 =\bigcup_{j=1}^3 \overline{\R}\big(\vepsilon_j(\cdot, 1) \restriction L^2_e\big(\BbbR^3\big)\big), \qquad && \mathfrak{H}_2 =\bigcup_{j=1}^3 \overline{\R}\big(\vepsilon_j(\cdot, 2) \restriction L^2_e\big(\BbbR^3\big)\big),& \\ & \mathfrak{H}_3 =\bigcup_{j=1}^3 \overline{\R}\big(\vepsilon_j(\cdot, 1) \restriction L^2_o\big(\BbbR^3\big)\big), \qquad && \mathfrak{H}_4 =\bigcup_{j=1}^3 \overline{\R}\big(\vepsilon_j(\cdot, 2) \restriction L^2_o\big(\BbbR^3\big)\big),& \end{alignat*} where $A\restriction \mathfrak{X}$ indicates the restriction of~$A$ to $\mathfrak{X}$. Then $\mathfrak{H}_1$, $\mathfrak{H}_2$, $\mathfrak{H}_3$ and $\mathfrak{H}_4$ are subspaces of~$L^2\big(\BbbR^3\big)$. \end{lemm} \begin{proof} Let $\mathbb{D}=\big\{k\in \BbbR^3\, |\, k_1\neq 0, k_2\neq 0, k_3\neq 0\big\}$. Trivially, $\vepsilon_j(k, \lambda)$ is well-defined on $\mathbb{D}$. In addition, $\vepsilon_1(k, 1)^{-1}$, $\vepsilon_2(k, 1)^{-1}$, $\vepsilon_1(k, 2)^{-1}$, $\vepsilon_2(k, 2)^{-1} $ and $\vepsilon_3(k, 2)^{-1}$ are well-defined on $\mathbb{D}$.\footnote{These facts immediately follow from (\ref{PolariDef}). Here, note that $\vepsilon(k, 2)$ is written as $\vepsilon(k, 2)=\big(k_1k_3, k_2k_3, -k^2_1-k_2^2\big)\big/|k|\sqrt{k_1^2+k_2^2}.$} Let~$C_0(\mathbb{D})$ be the set of continuous functions on $\mathbb{D}$ of compact support.
Because the Lebesgue measure of $\mathbb{D}^c$, the complement of $\mathbb{D}$, is equal to zero, $C_0(\mathbb{D})$ is dense in $L^2\big(\BbbR^3\big)$. Thus, it holds that \begin{gather} \overline{\R}\big( \vepsilon_j(\cdot, 1) \restriction L^2_e\big(\BbbR^3\big) \big) = \overline{\R}\big( \vepsilon_j(\cdot, 1) \restriction C_{0, e}(\mathbb{D}) \big),\qquad j=1, 2, \label{RanEquiv} \end{gather} where $C_{0, e}(\mathbb{D})=\{f\in C_0(\mathbb{D})\, |\, f(-k)=f(k)\}$. Let $F, G\in \mathfrak{H}_1$. Then there exist $i, j\in \{1, 2\}$ such that $F\in \overline{\R}\big( \vepsilon_j(\cdot, 1) \restriction L^2_e\big(\BbbR^3\big) \big)$ and $G\in \overline{\R}\big( \vepsilon_i(\cdot, 1) \restriction L^2_e\big(\BbbR^3\big) \big)$. By (\ref{RanEquiv}), there exist approximating sequences $(F_n)\subset \R\big( \vepsilon_j(\cdot, 1) \restriction C_{0, e}(\mathbb{D}) \big) $ and $(G_n)\subset \R\big( \vepsilon_i(\cdot, 1) \restriction C_{0, e}(\mathbb{D}) \big) $ such that $\|F-F_n\| \to 0$ and $\|G-G_n\| \to 0$ as $n\to \infty$. Hence, for each $\alpha, \beta\in \BbbC$, it holds that \begin{gather} \alpha F_n+\beta G_n \to \alpha F+\beta G\qquad \mbox{as $n\to \infty$}. \label{LConv} \end{gather} Note that we can write $F_n=\vepsilon_j(\cdot, 1) f_n$ and $G_n=\vepsilon_i(\cdot, 1) g_n$ with $f_n, g_n\in C_{0, e}(\mathbb{D})$. Thus, we have $G_n=\vepsilon_j(\cdot, 1) g_n'$, where $g_n'=\vepsilon_j(\cdot, 1)^{-1} \vepsilon_i(\cdot, 1) g_n$. Because $\vepsilon_j(k, 1)^{-1} \vepsilon_i(k, 1)$ is an even function on $\mathbb{D}$, we see that $g_n'\in C_{0, e}(\mathbb{D})$. Accordingly, $ \alpha F_n+\beta G_n=\vepsilon_j(\cdot, 1) (\alpha f_n+\beta g_n') \in \R\big( \vepsilon_j(\cdot, 1) \restriction C_{0, e}(\mathbb{D}) \big) $. Combining this, (\ref{RanEquiv}) and (\ref{LConv}), we conclude that $\alpha F+\beta G\in \mathfrak{H}_1$, in particular, $\mathfrak{H}_1$~is a subspace of~$L^2\big(\BbbR^3\big)$.
By similar arguments, we can prove that $ \mathfrak{H}_2, \mathfrak{H}_3$ and $\mathfrak{H}_4$ are subspaces of~$L^2\big(\BbbR^3\big)$. \end{proof} \begin{lemm} We have the following identifications: \begin{gather} L^2\big(\BbbR^3\times \{1, 2\}\big) =L^2\big(\BbbR^3\big)\oplus L^2\big(\BbbR^3\big) =\mathfrak{H}_1\oplus \mathfrak{H}_2\oplus\mathfrak{H}_3\oplus \mathfrak{H}_4. \label{4DirectSum} \end{gather} \end{lemm} \begin{proof} The first identification in (\ref{4DirectSum}) is trivial. In what follows, we will concentrate on the proof of the second identification. Note that the multiplication operator $\vepsilon_j(\cdot, 1)^{-1}\, (j=1, 2)$ is self-adjoint and $\D\big(\vepsilon_j(\cdot, 1)^{-1}\big)$ is dense in $L^2\big(\BbbR^3\big)$. Because $\R(\vepsilon_j(\cdot, 1))\supseteq \D\big( \vepsilon_j(\cdot, 1)^{-1}\big) \supseteq C_0(\mathbb{D})$ for $j=1, 2$, we obtain $\overline{\R}(\vepsilon_j(\cdot, 1))=L^2\big(\BbbR^3\big)$. For each $f\in L^2\big(\BbbR^3\big)$, we set $f_e(k)=\frac{1}{2}(f(k)+f(-k))$ and $f_o(k)=\frac{1}{2}(f(k)-f(-k))$. Because $\vepsilon_j(k, 1)^{2}$ is an even function, we have $\la \vepsilon_j(\cdot, 1)f_e|\vepsilon_j(\cdot, 1)f_o\ra=0$ for all $f\in L^2\big(\BbbR^3\big)$ and $j=1, 2$, which implies that $\overline{\R}\big(\vepsilon_j(\cdot, 1)\restriction L^2_e\big(\BbbR^3\big)\big)\perp \overline{\R}\big(\vepsilon_j(\cdot, 1) \restriction L^2_o\big(\BbbR^3\big)\big)$. 
Since \begin{gather*} \underbrace{\vepsilon_j(\cdot, 1)f}_{\in \R(\vepsilon_j(\cdot, 1))}= \underbrace{\vepsilon_j(\cdot, 1)f_e}_{\in \R(\vepsilon_j(\cdot, 1)\restriction L_e^2(\BbbR^3) )}+ \underbrace{\vepsilon_j(\cdot, 1)f_o}_{\in \R(\vepsilon_j(\cdot, 1)\restriction L_o^2(\BbbR^3) )}, \qquad f\in L^2\big(\BbbR^3\big), \end{gather*} we conclude that \begin{gather*} L^2\big(\BbbR^3\big)=\overline{\R}\big(\vepsilon_j(\cdot, 1)\restriction L^2_e\big(\BbbR^3\big)\big)\oplus \overline{\R}\big(\vepsilon_j(\cdot, 1) \restriction L^2_o\big(\BbbR^3\big)\big) \end{gather*} for $j=1, 2$. For $i, i\rq{}\in \{1, 2\}$, we set $ \mu_{ii\rq{}}^{(1)}(k)=\vepsilon_i(k, 1) \vepsilon_{i\rq{}}(k, 1)$. Because $\mu_{ii'}^{(1)}(k)$ is an even function, we see that, for each $f\in L_e^2\big(\BbbR^3\big)$ and $g\in L_o^2\big(\BbbR^3\big)$, \begin{gather*} \la \vepsilon_i(\cdot, 1) f|\vepsilon_{i'}(\cdot, 1) g\ra=\big\la f|\mu_{ii'}^{(1)} g\big\ra=0. \end{gather*} Therefore, $\mathfrak{H}_1\perp \mathfrak{H}_3$ holds. Because $\mathfrak{H}_1\oplus \mathfrak{H}_3 \supseteq \overline{\R}\big(\vepsilon_j(\cdot, 1)\restriction L^2_e\big(\BbbR^3\big)\big)\oplus \overline{\R}\big(\vepsilon_j(\cdot, 1) \restriction L^2_o\big(\BbbR^3\big)\big)$, we finally arrive at $L^2\big(\BbbR^3\big)=\mathfrak{H}_1\oplus \mathfrak{H}_3$. By arguments similar to the above, we get that $ L^2\big(\BbbR^3\big)=\mathfrak{H}_2\oplus \mathfrak{H}_4 $. \end{proof} We will construct a useful identification between $\Fock\big(L^2\big(\BbbR^3\big)\oplus L^2\big(\BbbR^3\big)\big)$ and $\bigotimes\limits_{\lambda=1}^4 \Fock(\mathfrak{H}_{\lambda})$ in Section~\ref{Sec34}. For this purpose, we recall some basic definitions in Sections \ref{Sec32} and \ref{Sec33}. 
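The parity properties of the polarization vectors that underlie the two proofs above ($\vepsilon_j(\cdot, 1)$ is odd and $\vepsilon_j(\cdot, 2)$ is even under $k\to -k$, so that the products $\mu^{(1)}_{ii'}$ and $\mu^{(2)}_{jj'}$ are even functions) can be spot-checked numerically; a minimal sketch at a hypothetical sample point of $\mathbb{D}$, using (\ref{PolariDef}) and the explicit formula for $\vepsilon(k, 2)$ from the footnote:

```python
import math

def eps1(k):
    # eps(k,1) = (k2, -k1, 0) / sqrt(k1^2 + k2^2), cf. (PolariDef).
    k1, k2, _ = k
    n = math.hypot(k1, k2)
    return (k2 / n, -k1 / n, 0.0)

def eps2(k):
    # eps(k,2) = (k1*k3, k2*k3, -(k1^2 + k2^2)) / (|k| * sqrt(k1^2 + k2^2)),
    # the explicit form noted in the footnote.
    k1, k2, k3 = k
    d = math.sqrt(k1 * k1 + k2 * k2 + k3 * k3) * math.hypot(k1, k2)
    return (k1 * k3 / d, k2 * k3 / d, -(k1 * k1 + k2 * k2) / d)

def neg(k):
    return tuple(-c for c in k)

k = (0.4, 1.1, -0.8)   # hypothetical sample point of D (all components nonzero)

# eps(.,1) is odd and eps(.,2) is even under k -> -k; hence every product
# of two components of the same eps(.,lambda) is an even function of k.
odd_ok = all(abs(a + b) < 1e-12 for a, b in zip(eps1(neg(k)), eps1(k)))
even_ok = all(abs(a - b) < 1e-12 for a, b in zip(eps2(neg(k)), eps2(k)))
print(odd_ok, even_ok)
```

These parities are exactly what makes the even/odd subspaces $L^2_e$ and $L^2_o$ invariant enough for the direct-sum decomposition (\ref{4DirectSum}) to work.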
\subsection[Second quantized operators in $\Fock\big(L^2\big(\BbbR^3\big) \oplus L^2\big(\BbbR^3\big)\big)$]{Second quantized operators in $\boldsymbol{\Fock\big(L^2\big(\BbbR^3\big) \oplus L^2\big(\BbbR^3\big)\big)}$}\label{Sec32} Let $a(f_1\oplus f_2)$ be the annihilation operator acting in $\Fock\big(L^2(\BbbR^3\times \{1, 2\}))=\Fock(L^2\big(\BbbR^3\big) \oplus L^2\big(\BbbR^3\big)\big)$. As usual, we express this operator as \begin{gather*} a(f_1\oplus f_2)=\sum_{\lambda=1, 2} \int_{\BbbR^3} \dm k\, f_{\lambda}^*(k) a(k, \lambda). \end{gather*} The Fock vacuum in $\Fock\big(L^2\big(\BbbR^3\big) \oplus L^2\big(\BbbR^3\big)\big)$ is denoted by $\Psi_0$. Let~$F$ be a real-valued function on~$\BbbR^3$ which is finite almost everywhere. The multiplication operator by $F$ is also written as~$F$. The second quantization of $F\oplus F$ is then given by \begin{gather*} \dG(F\oplus F)=0\oplus \Bigg[\bigoplus_{n=1}^{\infty} \sum_{j=1}^n 1\otimes \cdots \otimes \underbrace{(F\oplus F)}_{j^{\mathrm{th}}}\otimes \cdots \otimes 1\Bigg]. \end{gather*} Needless to say, $\dG(F\oplus F)$ acts in $\Fock\big(L^2\big(\BbbR^3\big) \oplus L^2\big(\BbbR^3\big)\big)$. It is known that $\dG(F\oplus F)$ is essentially self-adjoint on a dense subspace \begin{gather*} \big\{\Psi=\{\Psi_n\}_{n=0}^{\infty}\, |\, \Psi_n\in (\D(F)\oplus \D(F))^{\odot n}, \mbox{$\exists\, N\in \BbbN$ s.t.\ $\Psi_m=0$ $ \forall\, m>N$} \big\}, \end{gather*} where $\odot $ indicates the algebraic tensor product. We will denote the closure of $\dG(F\oplus F)$ by the same symbol. Symbolically, we express $\dG(F\oplus F)$ as \begin{gather*} \dG(F\oplus F)=\sum_{\lambda=1, 2} \int_{\BbbR^3} \dm k\, F(k)a(k, \lambda)^*a(k, \lambda). 
\end{gather*} \subsection[Second quantized operators in $\bigotimes\limits_{\lambda=1}^4\Fock(\mathfrak{H}_{\lambda})$]{Second quantized operators in $\boldsymbol{\bigotimes\limits_{\lambda=1}^4\Fock(\mathfrak{H}_{\lambda})}$} \label{Sec33} Let $a_{\lambda}(f_{\lambda})$ be the annihilation operator on $\Fock(\mathfrak{H}_{\lambda})$. We employ the following identifications: $ a_1(f_1)=a_1(f_1)\otimes \one \otimes \one \otimes \one$, $a_2(f_2)=\one \otimes a_2(f_2) \otimes \one \otimes \one $ and so on. Thus, $a_{\lambda}(f_{\lambda})$ can be regarded as a linear operator acting in the Hilbert space $\bigotimes\limits_{\lambda=1}^4\Fock(\mathfrak{H}_{\lambda})$. Let~$F$ be a real-valued function on~$\BbbR^3$. Suppose that $F$ is even: $F(-k)=F(k)$ a.e. We denote by $\dG_{\lambda}(F)$ the second quantization of $F$ acting in $\Fock(\mathfrak{H}_{\lambda})$. As before, we can also regard $\dG_{\lambda}(F)$ as a linear operator acting in $\bigotimes\limits_{\lambda=1}^4\Fock(\mathfrak{H}_{\lambda})$. The Fock vacuum in $\Fock(\mathfrak{H}_{\lambda})$ is denoted by $\Psi_{\lambda}$. We will freely use the following notations: \begin{gather*} a_{\lambda}(f) = \int_{\BbbR^3} \dm k\, f(k)^* a(k, \lambda),\qquad f\in \mathfrak{H}_{\lambda},\\ \dG_{\lambda}(F) =\int_{\BbbR^3} \dm k\, F(k)a^*(k, \lambda)a(k, \lambda),\qquad \lambda=1, 2, 3, 4.
\end{gather*} \subsection[Identifications between $\Fock\big(L^2\big(\BbbR^3\big)\oplus L^2\big(\BbbR^3\big)\big)$ and $\bigotimes\limits_{\lambda=1}^4 \Fock(\mathfrak{H}_{\lambda})$]{Identifications between $\boldsymbol{\Fock\big(L^2\big(\BbbR^3\big)\oplus L^2\big(\BbbR^3\big)\big)}$ and $\boldsymbol{\bigotimes\limits_{\lambda=1}^4 \Fock(\mathfrak{H}_{\lambda})}$}\label{Sec34} For each ${\bs f}=(f_1, f_2)\in L^2\big(\BbbR^3\big) \oplus L^2\big(\BbbR^3\big)$, we set \begin{gather} b_{ij}({\bs f})=a\big( \vepsilon_i(\cdot, 1) f_1 \oplus \vepsilon_j(\cdot, 2)f_2 \big),\nonumber\\ c_{ij}({\bs f})=a_1(\vepsilon_i(\cdot, 1)f_{1, e})+a_2(\vepsilon_j(\cdot, 2)f_{2, e})\label{EquiA} -\im a_3(\vepsilon_i(\cdot, 1)f_{1, o})-\im a_4(\vepsilon_j(\cdot, 2) f_{2, o}), \end{gather} where $f_e(k)=(f(k)+f(-k))/2$ and $f_o(k)=(f(k)-f(-k))/2$. Let $\Psi_0$ be the Fock vacuum in $\Fock\big(L^2\big(\BbbR^3\big)\oplus L^2\big(\BbbR^3\big)\big)\colon \Psi_0=1\oplus 0 \oplus 0 \oplus \cdots$. \begin{lemm}\label{FockIden} We define a linear operator $V\colon \Fock\big(L^2\big(\BbbR^3\big) \oplus L^2\big(\BbbR^3\big)\big) \to \bigotimes\limits_{\lambda=1}^4 \Fock( \mathfrak{H}_{\lambda})$ by \begin{gather*} V\Psi_0=\bigotimes_{\lambda=1}^4\Psi_{\lambda},\\ V\left[\prod_{\ell=1}^N b_{i_{\ell} j_{\ell}}({\bs f}_{\ell})^*\right] \Psi_0 =\left[\prod_{\ell=1}^N c_{i_{\ell} j_{\ell}}({\bs f}_{\ell})^*\right] \bigotimes_{\lambda=1}^4\Psi_{\lambda} \end{gather*} for each ${\bs f}_1, \dots, {\bs f}_N\in L^2\big(\BbbR^3\big) \oplus L^2\big(\BbbR^3\big)$ and $N\in \BbbN$. Then $V$ extends to a unitary operator. In what follows, we denote the extension by the same symbol. Then we have \begin{gather} Vb_{ij}({\bs f})V^{-1}=\overline{c_{ij}({\bs f})} \label{AnCrEq} \end{gather} for each ${\bs f} \in L^2\big(\BbbR^3\big) \oplus L^2\big(\BbbR^3\big)$ and $i, j\in \{1, 2, 3\}$, where the bar indicates the closure of the operator.
\end{lemm} \begin{proof} For $i, i\rq{}\in \{1, 2, 3\}$, we set \begin{gather*} \mu_{ii\rq{}}^{(1)}(k)=\vepsilon_i(k, 1) \vepsilon_{i\rq{}}(k, 1),\qquad \mu_{ii\rq{}}^{(2)}(k)=\vepsilon_i(k, 2) \vepsilon_{i\rq{}}(k, 2). \end{gather*} For ${\bs f}, {\bs f}\rq{}\in L^2\big(\BbbR^3\big)\oplus L^2\big(\BbbR^3\big)$ and $i, j, i\rq{}, j\rq{}\in \{1, 2, 3\}$, define \begin{gather*} D({\bs f}|{\bs f}\rq{})_{ij; i\rq{} j\rq{}}= \big\la f_1|\mu_{ii\rq{}}^{(1)}f_1\rq{} \big\ra+ \big\la f_2|\mu_{jj\rq{}}^{(2)}f_2\rq{} \big\ra. \end{gather*} First, we prove that $\big\{b_{ij}({\bs f})| {\bs f}\in L^2\big(\BbbR^3\big)\oplus L^2\big(\BbbR^3\big),\ i, j\in \{1, 2, 3\} \big\}$ and $\big\{c_{ij}({\bs f})| {\bs f}\in L^2\big(\BbbR^3\big)\oplus L^2\big(\BbbR^3\big),\ i, j\in \{1, 2, 3\}\big\}$ satisfy similar commutation relations, that is, \begin{gather*} [b_{ij}(\bs f), b_{i\rq{}j\rq{}}({\bs f}\rq{})^*]=D({\bs f}|{\bs f}\rq{})_{ij; i\rq{} j\rq{}},\qquad [b_{ij}(\bs f), b_{i\rq{}j\rq{}}({\bs f}\rq{})]=0 \end{gather*} and \begin{gather*} [c_{ij}(\bs f), c_{i\rq{}j\rq{}}({\bs f}\rq{})^*]=D({\bs f}|{\bs f}\rq{})_{ij; i\rq{} j\rq{}},\qquad [c_{ij}(\bs f), c_{i\rq{}j\rq{}}({\bs f}\rq{})]=0. \end{gather*} To see this, note that $\mu_{ii\rq{}}^{(1)}(k)$ and $\mu_{ii\rq{}}^{(2)}(k)$ are even functions. Thus, \begin{gather*} \big\la f_e|\mu_{ii\rq{}}^{(1)} g_o\big\ra=0=\big\la f_e|\mu_{ii\rq{}}^{(2)} g_o\big\ra,\qquad f, g\in L^2\big(\BbbR^3\big). \end{gather*} Accordingly, we have \begin{align*} [c_{ij}(\bs f), c_{i\rq{}j\rq{}}({\bs f}\rq{})^*]&=\big\la f_{1, e}|\mu_{ii\rq{}}^{(1)} f_{1, e}\rq{}\big\ra +\big\la f_{2, e}|\mu_{jj\rq{}}^{(2)} f_{2, e}\rq{}\big\ra +\big\la f_{1, o}|\mu_{ii\rq{}}^{(1)} f_{1, o}\rq{}\big\ra +\big\la f_{2, o}|\mu_{jj\rq{}}^{(2)} f_{2, o}\rq{}\big\ra\nonumber\\ &=D({\bs f}|{\bs f}\rq{})_{ij;i\rq{}j\rq{}}. \end{align*} The other commutation relations are checked easily.
Using the above fact, we readily confirm that \begin{gather*} \Bigg\la \Bigg[\prod_{\ell=1}^N b_{i_{\ell} j_{\ell}}({\bs f}_{\ell})^*\Bigg] \Psi_0\Bigg| \Bigg[\prod_{\ell=1}^{N\rq{} } b_{i_{\ell}\rq{} j_{\ell}\rq{}}({\bs f}_{\ell}\rq{})^*\Bigg] \Psi_0\Bigg\ra\\ \qquad{} = \Bigg\la \Bigg[\prod_{\ell=1}^N c_{i_{\ell} j_{\ell}}({\bs f}_{\ell})^*\Bigg] \bigotimes_{\lambda=1}^4\Psi_{\lambda}\Bigg| \Bigg[\prod_{\ell=1}^{N\rq{} } c_{i_{\ell}\rq{} j_{\ell}\rq{}}({\bs f}_{\ell}\rq{})^*\Bigg] \bigotimes_{\lambda=1}^4\Psi_{\lambda}\Bigg\ra \end{gather*} for every ${\bs f}_1, \dots, {\bs f}_N, {\bs f}_1\rq{}, \dots, {\bs f}_{N\rq{}}\rq{}\in L^2\big(\BbbR^3\big)\oplus L^2\big(\BbbR^3\big)$ and $N, N\rq{}\in \BbbN$. From (\ref{4DirectSum}), it follows that the subspace spanned by the set of vectors $ \Big\{ \Big[\prod\limits_{\ell=1}^N b_{i_{\ell} j_{\ell}}({\bs f}_{\ell})^*\Big] \Psi_0 \Big\} $ is dense in $\Fock\big(L^2\big(\BbbR^3\big) \oplus L^2\big(\BbbR^3\big)\big)$ and the subspace spanned by the set of vectors $ \Big\{\Big[\prod\limits_{\ell=1}^N c_{i_{\ell} j_{\ell}}({\bs f}_{\ell})^*\Big] \bigotimes\limits_{\lambda=1}^4\Psi_{\lambda}\Big\} $ is dense in $\bigotimes\limits_{\lambda=1}^4 \Fock(\mathfrak{H}_{\lambda})$. Hence, $V$ extends to a unitary operator. Checking~(\ref{AnCrEq}) is easy. \end{proof} \begin{lemm}\label{aoaoa} Let $F$ be a real-valued even function on $\BbbR^3$. Assume that $F$ is continuous. Then we obtain \begin{gather*} V\dG(F \oplus F)V^{-1} =\sum_{\lambda=1}^4 \dG_{\lambda}(F). \end{gather*} \end{lemm} \begin{proof} For readers\rq{} convenience, we will provide a~sketch of the proof. We will continue to use the notations in the proof of Lemma~\ref{FockIden}.
Set \begin{gather*} {\bs B}_{{\bs i}{\bs j}}({\bs f}_1, \dots, {\bs f}_N) =\Bigg[\prod_{\ell=1}^N b_{i_{\ell} j_{\ell}}({\bs f}_{\ell})^*\Bigg] \Psi_0,\\ {\bs C}_{{\bs i}{\bs j}}({\bs f}_1, \dots, {\bs f}_N) =\Bigg[\prod_{\ell=1}^N c_{i_{\ell} j_{\ell}}({\bs f}_{\ell})^*\Bigg] \bigotimes_{\lambda=1}^4\Psi_{\lambda} \end{gather*} for ${\bs f}_1, \dots, {\bs f}_N\in L^2\big(\BbbR^3\big) \oplus L^2\big(\BbbR^3\big)$, $ {\bs i}=(i_1, \dots, i_N)$, ${\bs j}=(j_1, \dots, j_N)\in \{1, 2, 3\}^N $ and $N\in \BbbN$. We define dense subspaces of $\Fock\big(L^2\big(\BbbR^3\big)\oplus L^2\big(\BbbR^3\big)\big)$ and $\bigotimes\limits_{\lambda=1}^4\mathfrak{H}_{\lambda}$ by \begin{gather} \mathfrak{V}_1=\operatorname{Lin}\big\{ {\bs B}_{{\bs i}{\bs j}}({\bs f}_1, \dots, {\bs f}_N)\, \big|\, {\bs f}_1, \dots, {\bs f}_N\in C_0\big(\BbbR^3\big)\!\oplus\! C_0\big(\BbbR^3\big),\, {\bs i}, {\bs j} \!\in\! \{1, 2, 3\}^N,\, N\in \BbbN \big\},\nonumber\\ \mathfrak{V}_2=\operatorname{Lin}\big\{ {\bs C}_{{\bs i}{\bs j}}({\bs f}_1, \dots, {\bs f}_N)\, \big|\, {\bs f}_1, \dots, {\bs f}_N\in C_0\big(\BbbR^3\big)\!\oplus\! C_0\big(\BbbR^3\big),\, {\bs i}, {\bs j} \!\in\! \{1, 2, 3\}^N,\, N\in \BbbN \big\}, \!\!\!\!\label{DenseSubS} \end{gather} where $\operatorname{Lin}(S)$ indicates the linear span of~$S$. As is well known, $\dG(F\oplus F)$ and $\sum\limits_{\lambda=1}^4\dG_{\lambda}(F)$ are essentially self-adjoint on $\mathfrak{V}_1$ and $\mathfrak{V}_2$, respectively. We readily confirm that \begin{gather*} \dG(F\oplus F) {\bs B}_{{\bs i}{\bs j}}({\bs f}_1, \dots, {\bs f}_N) = \sum_{\alpha=1}^N {\bs B}_{{\bs i}{\bs j}}({\bs f}_1, \dots, (F\oplus F){\bs f}_{\alpha}, \dots, {\bs f}_N),\\ \sum_{\lambda=1}^4\dG_{\lambda}(F) {\bs C}_{{\bs i}{\bs j}}({\bs f}_1, \dots, {\bs f}_N) = \sum_{\alpha=1}^N {\bs C}_{{\bs i}{\bs j}}({\bs f}_1, \dots, (F\oplus F){\bs f}_{\alpha}, \dots, {\bs f}_N).
\end{gather*} Therefore, by Lemma~\ref{FockIden}, we obtain \begin{gather*} V\dG(F\oplus F) V^{-1}=\sum_{\lambda=1}^4 \dG_{\lambda}(F)\qquad \mbox{on $\mathfrak{V}_2$}. \end{gather*} This concludes the proof of Lemma~\ref{aoaoa}. \end{proof} \subsection{Definition of the Feynman Hamiltonians}\label{Sec35} In this subsection, we introduce the Feynman Hamiltonians. To this end, let \begin{gather*} \A(x)= \int_{\BbbR^3}\dm k\, \frac{\hat{\varrho}(k)}{\sqrt{2|k|}} \big\{ \vepsilon(k, 1)\big( a(k, 1)\cos (k\cdot x)+a(k, 3)\sin (k\cdot x) \big)\\ \hphantom{\A(x)=}{} +\vepsilon(k, 2)\big( a(k, 2)\cos (k\cdot x)+a(k, 4)\sin (k\cdot x) \big)+\mathrm{h.c.}\big\},\\ H_0= \sum_{\lambda=1}^4\int_{\BbbR^3}\dm k\, |k|a(k, \lambda)^* a(k, \lambda). \end{gather*} Here, h.c.\ denotes the Hermitian conjugates of the preceding terms. Note that $\mathscr{A}(x)$ is essentially self-adjoint on $\mathfrak{V}_2$ defined by~(\ref{DenseSubS}). We denote its closure by the same symbol. By~(\ref{EquiA}) and~(\ref{AnCrEq}), we have the following: \begin{gather*} V A(x)V^{-1}=\A(x),\qquad V\Hf V^{-1}=H_0. \end{gather*} Now we define the two-electron Feynman Hamiltonian $H_{\mathrm{F2e}}$ by \begin{gather*} H_{\mathrm{F2e}}= \frac{1}{2}\big({-}\im \nabla_1-e\A(0)\big)^2+\frac{1}{2}e^2\nu_0^2 x_1^2 +\frac{1}{2}\big({-}\im \nabla_2-e\A(r)\big)^2+\frac{1}{2}e^2\nu_0^2 x_2^2 \\ \hphantom{H_{\mathrm{F2e}}=}{} +e^2 \int_{\BbbR^3}\dm k\, \hat{\varrho}(k)^2 \ex^{\im k\cdot r}\big(x_1\cdot \hat{k}\big) \big(x_2\cdot \hat{k}\big)+H_0. \end{gather*} Note that $H_{\mathrm{F2e}}$ acts in $L^2\big(\BbbR_{x_1}^3\big) \otimes L^2\big(\BbbR_{x_2}^3\big)\otimes \Big( \bigotimes\limits_{\lambda=1}^4\Fock(\mathfrak{H}_{\lambda}) \Big)$ and is bounded from below, provided that $R$ is sufficiently large. The following proposition plays an important role in the present paper. \begin{Prop} If $R$ is large enough, $VH_{\mathrm{D2e}}V^{-1}= H_{\mathrm{F2e}}$.
\end{Prop} \begin{proof} Apply Lemmas \ref{FockIden} and \ref{aoaoa}. \end{proof} As for the one-electron Feynman Hamiltonian, we obtain the following. \begin{Prop}Let \begin{gather*} H_{\mathrm{F1e}}=\frac{1}{2}\big({-}\im \nabla-e\A(0)\big)^2+\frac{1}{2}e^2\nu_0^2 x^2+H_0. \end{gather*} We have $VH_{\mathrm{D1e}} V^{-1}=H_{\mathrm{F1e}}$. \end{Prop} In Remark \ref{WhyF?}, we will explain why the Feynman Hamiltonians are useful. \section{Canonical transformations} \label{CanoTr} Let $U$ be a unitary operator on $L^2\big(\BbbR_{x_1}^3\big) \otimes L^2\big(\BbbR_{x_2}^3\big)\otimes \Big( \bigotimes\limits_{\lambda=1}^4\Fock(\mathfrak{H}_{\lambda}) \Big)$ defined by \begin{gather*} U=\exp \{\im e x_1\cdot \A(0)+\im e x_2\cdot \A(r)\}. \end{gather*} Then one readily confirms that \begin{gather*} U^* (-\im \nabla_1) U=-\im \nabla_1+e\A(0),\qquad U^*(-\im \nabla_2)U=-\im \nabla_2+e\A(r) \end{gather*} and \begin{gather*} U^* a(k, \lambda) U= \begin{cases} a(k, \lambda)+\im e\dfrac{\hat{\varrho}(k) }{\sqrt{2|k|}} \vepsilon(k, \lambda)\cdot (x_1+x_2 \cos (k\cdot r)) &\mbox{for $\lambda=1,2$},\vspace{1mm}\\ a(k, \lambda)+\im e \dfrac{\hat{\varrho}(k)}{\sqrt{2|k|}} \vepsilon(k, \lambda-2)\cdot x_2 \sin (k\cdot r) &\mbox{for $\lambda=3,4$}. \end{cases} \end{gather*} Here, we used the following fact: \begin{gather*} \ex^T a(k, \lambda)\ex^{-T}=a(k, \lambda)+G(k, \lambda), \end{gather*} where $T=\sum\limits_{\lambda=1}^4 \{a_{\lambda}(G(\cdot, \lambda))-a_{\lambda}(G(\cdot, \lambda))^*\}^{**}$, $G(\cdot, \lambda)\in \mathfrak{H}_{\lambda}$. Hence, we arrive at\footnote{The reason why the last term on the right-hand side of (\ref{TransformedH}) appears is as follows.
After performing the unitary transformation, we see that $U^*H_{\mathrm{F2e}} U$ contains the terms involving $\big(x_1\cdot \hat{k}\big)\big(x_2\cdot \hat{k}\big)$ and $(\vepsilon(k, \lambda)\cdot x_1)(\vepsilon(k, \lambda)\cdot x_2)$, which are given by \begin{gather} e^2\int_{\BbbR^3}\dm k\, \hat{\varrho}(k)^2 \cos(k\cdot r) \bigg\{ \big(x_1\cdot \hat{k}\big)\big(x_2\cdot \hat{k}\big)+\sum_{\lambda=1, 2} (\vepsilon(k, \lambda)\cdot x_1)(\vepsilon(k, \lambda)\cdot x_2) \bigg\}. \label{x_1x_2Term} \end{gather} Here, we used the fact that $\int \dm k\, \hat{\varrho}(k)^2\sin(k\cdot r) \big(x_1\cdot \hat{k}\big)\big(x_2\cdot \hat{k}\big)=0$, which holds because the integrand is odd under $k\mapsto -k$. By applying the basic property $ \sum\limits_{\lambda=1, 2} |\vepsilon(k, \lambda)\ra \la \vepsilon(k, \lambda)|=\one_3-|\hat{k}\ra\la \hat{k}|$, we conclude that~(\ref{x_1x_2Term}) is equal to $e^2\int_{\BbbR^3}\dm k\, \hat{\varrho}(k)^2\cos( k\cdot r) (x_1\cdot x_2)$.} \begin{gather} U^* H_{\mathrm{F2e}} U= -\frac{1}{2}\Delta_1+\frac{1}{2}e^2\nu^2x_1^2-\frac{1}{2}\Delta_2+\frac{1}{2}e^2\nu^2x_2^2 +ex_1\cdot E(0)+ex_2\cdot E(r)+H_0\nonumber\\ \hphantom{U^* H_{\mathrm{F2e}} U=}{} +e^2\int_{\BbbR^3}\dm k\, \hat{\varrho}(k)^2\cos( k\cdot r) (x_1\cdot x_2), \label{TransformedH} \end{gather} where $\nu^2=2\nu_0^2$ and \begin{gather*} E(x)= \im \int_{\BbbR^3} \dm k\, \sqrt{\frac{|k|}{2}} \hat{\varrho}(k) \big\{\vepsilon(k,1)\big(\cos(k\cdot x) a(k, 1) + \sin (k\cdot x) a(k, 3)\big)\\ \hphantom{E(x)=}{} +\vepsilon(k,2)\big(\cos (k\cdot x) a(k, 2) + \sin (k\cdot x) a(k, 4)\big)-\mathrm{h.c.}\big\}. \end{gather*} Let $\Nf$ be the number operator defined by $\Nf=\sum\limits_{\lambda=1}^4 \dG_{\lambda}(1)$. Applying the ``Fourier transformation'' $\ex^{-\im \pi \Nf/2}$ in the Fock space,\footnote{Let $\pi(k, \lambda)=-\frac{\im}{\sqrt{2}} (a(k, \lambda)-a(k, \lambda)^*)$ and $\phi(k, \lambda) =\frac{1}{\sqrt{2}}(a(k, \lambda)+a(k, \lambda)^*)$.
We can confirm that $ [\pi(k, \lambda), \phi(k', \lambda')]=-\im \delta_{\lambda\lambda'}\delta(k-k') $. Recalling the fact $[-\im {\rm d}/{\rm d}x, x]=-\im $, $\pi(k, \lambda)$ and $\phi(k, \lambda)$ can be regarded as a multiplication operator and a differential operator, respectively. Now, we readily check that $\ex^{\im \pi\Nf/2} \pi(k, \lambda)\ex^{-\im \pi \Nf/2}=\phi(k, \lambda)$ holds, which corresponds to the relation $\mathcal{F} x\mathcal{F}^{-1}=-\im {\rm d}/{\rm d}x$, where $\mathcal{F}$ is the Fourier transformation on~$L^2(\BbbR)$. This similarity is the reason why we refer to the unitary operator $\ex^{\im \pi\Nf/2}$ as the Fourier transformation.} we obtain that \begin{gather} H= \ex^{\im\pi \Nf/2}U^* H_{\mathrm{F2e}}U \ex^{-\im \pi \Nf/2}\nonumber\\ \hphantom{H}{} = -\frac{1}{2}\Delta_1+\frac{1}{2}e^2\nu^2x_1^2-\frac{1}{2}\Delta_2+\frac{1}{2}e^2\nu^2x_2^2+ex_1\cdot\hat{E}(0)+ex_2\cdot \hat{E}(r)+H_0\nonumber\\ \hphantom{H=}{} +e^2\int_{\BbbR^3}\dm k\, \hat{\varrho}(k)^2\cos( k\cdot r) (x_1\cdot x_2),\label{DefH} \end{gather} where \begin{gather} \hat{E}(x)= \int_{\BbbR^3} \dm k\, |k| \hat{\varrho}(k) \big\{\vepsilon(k,1)\big(\cos(k\cdot x) q(k, 1) + \sin (k\cdot x) q(k, 3)\big)\nonumber\\ \hphantom{\hat{E}(x)=}{} +\vepsilon(k,2)\big(\cos (k\cdot x) q(k, 2) + \sin (k\cdot x) q(k, 4)\big)\big\} \label{DefE} \end{gather} and \begin{gather*} q(k, \lambda)=\frac{1}{\sqrt{2|k|}}\big(a(k, \lambda)+a(k, \lambda)^*\big). \end{gather*} Since, by assumption (A.3), the last term in~(\ref{DefH}) gives a contribution to the ground state energy that decreases rapidly as a function of~$R$, we ignore this term from now on. Finally, we define \begin{gather*} K=-\frac{1}{2} \Delta+\frac{1}{2}e^2\nu^2 x^2+ex\cdot \hat{E}(0)+H_0.
\end{gather*} By an argument similar to the construction of $U$, we can construct a unitary operator $u$ on $L^2\big(\BbbR^3\big) \otimes \Big(\bigotimes\limits_{\lambda=1}^4\Fock(\mathfrak{H}_{\lambda})\Big)$ such that $K=\ex^{\im \pi \Nf/2} u H_{\mathrm{F1e}} u^{-1} \ex^{-\im \pi \Nf/2}$. \section{Lattice-approximated Hamiltonians}\label{FiniteVApp} In order to compute the ground state energies of $H$ and $K$ exactly, we first introduce lattice approximations of the Hamiltonians. As we will see in later sections, the approximated Hamiltonians can be regarded as Hamiltonians of finite-dimensional harmonic oscillators, which are exactly solvable. For each $\Lambda>0$, let $\chi_{\Lambda}$ be an ultraviolet cutoff function given by $\chi_{\Lambda}(k)=1$ if $|k|\le \Lambda$, $\chi_{\Lambda}(k)=0$ otherwise. We define a linear operator $\hat{E}_{\Lambda}(x)$ by replacing $\hat{\varrho}(k)$ with $\hat{\varrho}(k) \chi_{\Lambda}(k)$ in the definition of $\hat{E}(x)$, i.e., the equation~(\ref{DefE}). We also define $H_{0, \Lambda}$ by \begin{gather*} H_{0, \Lambda}=\sum_{\lambda=1}^4 \int_{\BbbR^3} \dm k \, |k| \chi_{\Lambda}(k)a(k, \lambda)^*a(k, \lambda) . \end{gather*} The Hamiltonians with a cutoff $\Lambda$ are defined by \begin{gather*} H_{\Lambda} =-\frac{1}{2}\Delta_{1}+\frac{1}{2}e^2\nu^2x_1^2 -\frac{1}{2}\Delta_{2}+\frac{1}{2}e^2\nu^2x_2^2+ex_1\cdot \hat{E}_{\Lambda}(0)+ex_2\cdot \hat{E}_{\Lambda}(r) +H_{0, \Lambda},\\ K_{\Lambda} =-\frac{1}{2}\Delta+\frac{1}{2}e^2\nu^2x^2+ex\cdot \hat{E}_{\Lambda}(0)+H_{0, \Lambda}. \end{gather*} We readily see that $H_{\Lambda}$ and $K_{\Lambda}$ respectively converge to~$H$ and~$K$ in the norm resolvent sense as $\Lambda\to \infty$. Let $M$ be the (momentum) lattice with a cutoff $\Lambda$, namely, \begin{gather*} M=\big\{ l\in (2\pi \BbbZ/L)^3\, \big|\, |l_i|\le 2\pi \Lambda,\, i=1,2,3 \big\}\backslash \{0\}. \end{gather*} For later use, we label the elements of $M$ as \begin{gather*} M=\{ k_1, \dots, k_N\}.
\end{gather*} Then the lattice-approximated Hamiltonians are defined by \begin{gather*} H_{L, \Lambda} =-\frac{1}{2}\Delta_1+\frac{1}{2}e^2\nu^2x_1^2-\frac{1}{2}\Delta_2+\frac{1}{2}e^2\nu^2x_2^2+ex_1\cdot\hat{ E}_{L, \Lambda}(0)+ex_2\cdot \hat{E}_{L, \Lambda}(r)+H_{0,L, \Lambda},\\ K_{L, \Lambda} = -\frac{1}{2}\Delta+\frac{1}{2}e^2\nu^2 x^2+ex\cdot \hat{E}_{L, \Lambda}(0)+H_{0, L, \Lambda}, \end{gather*} where \begin{gather} \hat{E}_{L, \Lambda}(x)= \left(\frac{2\pi}{L}\right)^{3/2}\sum_{k\in M} |k| \hat{\varrho}(k) \big\{ \vepsilon(k,1)\big(\cos(k\cdot x) q(k,1) + \sin (k\cdot x) q(k, 3)\big)\nonumber\\ \hphantom{\hat{E}_{L, \Lambda}(x)=}{} +\vepsilon(k,2)\big(\cos (k\cdot x) q(k, 2) + \sin (k\cdot x) q(k, 4)\big)\big\},\label{EleFielD}\\ H_{0, L, \Lambda}=\frac{1}{2}\sum_{\lambda=1}^4\sum_{k\in M} \big(p(k, \lambda)^2+|k|^2 q(k, \lambda)^2\big)-2\sum_{k\in M}|k|\nonumber \end{gather} with $p(k, \lambda)=\frac{1}{i} \sqrt{\frac{|k|}{2}}(a(k, \lambda)-a(k, \lambda)^*)$. The lattice-approximated operators act in the Hilbert space $L^2\big(\BbbR_{x_1}^3\big)\otimes L^2\big(\BbbR_{x_2}^3\big) \otimes \Big(\bigotimes\limits_{\lambda=1}^4 \Fock(\mathfrak{H}_{L, \Lambda, \lambda})\Big)$ or $L^2\big(\BbbR^3\big) \otimes \Big(\bigotimes\limits_{\lambda=1}^4 \Fock(\mathfrak{H}_{L, \Lambda, \lambda})\Big)$, where $\mathfrak{H}_{L, \Lambda, \lambda}=\ell_*^2(M) \allowbreak \cap \mathfrak{H}_{\lambda}$. Here, $\ell_*^2(M)$ is $\ell^2(M)$ equipped with the modified norm \begin{gather*} \|f\|_*=\left(\frac{2\pi}{L}\right)^{3/4}\left(\sum_{k\in M} |f(k)|^2\right)^{1/2}, \end{gather*} and we regard $\ell^2_*(M)$ as a closed subspace of $L^2\big(\BbbR^3\big)$. Note that $p(k, \lambda)$ and $q(k, \lambda)$ are essentially self-adjoint on the finite particle subspace of $\bigotimes\limits_{\lambda=1}^4 \Fock(\mathfrak{H}_{L, \Lambda, \lambda}) $. In what follows, we denote their closures by the same symbols, respectively.
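With the identification above, each mode $(k, \lambda)$ carries a standard oscillator pair. The behaviour of $p(k, \lambda)$ and $q(k, \lambda)$ can be examined concretely on a truncated Fock space of a single mode, where $a(k, \lambda)$ becomes a finite matrix; the following numerical sketch (the truncation dimension and the value of $|k|$ are ad hoc) is an illustration only.

```python
import numpy as np

n = 12        # truncation dimension for one mode
absk = 0.7    # sample value of |k|

# Truncated annihilation operator: a|m> = sqrt(m)|m-1>.
a = np.diag(np.sqrt(np.arange(1.0, n)), k=1).astype(complex)
ad = a.conj().T

# p(k,l) = (1/i) sqrt(|k|/2) (a - a*),  q(k,l) = (a + a*)/sqrt(2|k|).
p = (1.0 / 1j) * np.sqrt(absk / 2.0) * (a - ad)
q = (a + ad) / np.sqrt(2.0 * absk)

comm = p @ q - q @ p
# [p, q] = -i on the span of the first n-1 number states; only the
# highest level feels the cutoff.
print(np.allclose(comm[:-1, :-1], -1j * np.eye(n - 1)))  # True
```

On the full (untruncated) Fock space the cutoff artifact disappears and one recovers $[p, q]=-\im$ exactly.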
The operators $q(k, \lambda)$ and $p(k, \lambda)$ form a canonical pair, the photonic displacement coordinate and its conjugate momentum, satisfying the standard commutation relations: \begin{gather*} [p(k, \lambda), q(k', \lambda')] =-\im \delta_{kk'}\delta_{\lambda\lambda'},\\ [p(k, \lambda), p(k', \lambda')] =0=[q(k, \lambda), q(k', \lambda')]. \end{gather*} Recall the identification $\Fock(\BbbC)=L^2(\BbbR)$. Using this, we can naturally embed $\bigotimes\limits_{\lambda=1}^4\Fock(\mathfrak{H}_{L, \Lambda, \lambda})$ into $ \Big(\bigotimes\limits_{\lambda=1}^4L^2\big(\BbbR^3\big)\Big)^{\otimes \# M}$. In addition, $p(k, \lambda)$ and $q(k, \lambda)$ can be regarded as differential and multiplication operators, respectively. The following proposition is the basis for our computation. \begin{Prop}\label{ResolCon}For each $z\in \BbbC\backslash \BbbR$, one has \begin{gather*} \lim_{\Lambda\to \infty} \lim_{L\to \infty} (H_{L, \Lambda}-z)^{-1} =(H-z)^{-1},\\ \lim_{\Lambda\to \infty} \lim_{L\to \infty} (K_{L, \Lambda}-z)^{-1} =(K-z)^{-1} \end{gather*} in the operator norm topology. \end{Prop} \begin{proof}See, e.g., \cite{AH, GJ}. \end{proof} \section{Diagonalization I: One-electron Hamiltonian}\label{Dai1} In this section, we diagonalize the one-electron Hamiltonian $K_{L, \Lambda}$.
To this end, let \begin{alignat*}{3} & F_x(k, 1)=\left(\frac{2\pi}{L}\right)^{3/2} |k|\hat{\varrho}(k)\cos (k\cdot x),\qquad && F_x(k, 2)= \left(\frac{2\pi}{L}\right)^{3/2}|k|\hat{\varrho}(k)\cos (k\cdot x),& \\ & F_x(k, 3)= \left(\frac{2\pi}{L}\right)^{3/2} |k|\hat{\varrho}(k)\sin (k\cdot x),\qquad && F_x(k, 4)= \left(\frac{2\pi}{L}\right)^{3/2}|k|\hat{\varrho}(k)\sin (k\cdot x).& \end{alignat*} We define a linear operator $ \mathbb{T}(x)$ from $\ell^2(M\times \{1, \dots, 4\})$ to $\BbbC^3$ by \begin{gather*} \mathbb{T}(x){\boldsymbol f} =\sum_{\lambda=1}^4 \sum_{k\in M} |\vepsilon(k, \lambda)\ra F_x(k, \lambda)f(k, \lambda) \end{gather*} for each ${\boldsymbol f}=\{f(k, \lambda)\,|\, k\in M, \lambda\in \{1, \dots,4\}\}\in \ell^2(M\times \{1, \dots, 4\})$. Here, we used the following notation: $\vepsilon(k, 3):=\vepsilon(k, 1)$ and $\vepsilon(k, 4):=\vepsilon(k, 2)$. The adjoint of $\mathbb{T}(x)$ is denoted by~$\mathbb{T}^*(x)$. Note that \begin{gather*} (\mathbb{T}^*(x)a)(k, \lambda)=\la \vepsilon(k, \lambda)\,|\,a\ra_{3} F_x(k, \lambda),\qquad a\in \BbbC^3, \end{gather*} where $\la \cdot| \cdot \ra_3$ stands for the inner product in $\mathbb{C}^3$. Using the above notations, the interaction term $x\cdot \hat{E}_{L, \Lambda}(r)$ in $K_{L, \Lambda}$ is expressed as $x\cdot \hat{E}_{L, \Lambda}(r)= \la \mathbb{T}(r) {\boldsymbol q}\,|\, x\ra_3=\la x\,|\,\mathbb{T}(r){\boldsymbol q}\ra_3$, where ${\boldsymbol q}=\{q(k, \lambda)\, |\, k\in M, \lambda\in \{1, \dots, 4\}\}$. On the other hand, the field energy can be represented by \begin{gather*} H_{0, L, \Lambda}=\frac{1}{2}\big({\boldsymbol p}^2+\la {\boldsymbol q}| S_0{\boldsymbol q}\ra\big)-\frac{1}{2}\operatorname{tr} \big[\sqrt{S_0}\big], \end{gather*} where $ {\boldsymbol p}=\{p(k, \lambda)\, |\, k\in M, \lambda\in \{1, \dots, 4\}\}$ and \begin{gather*} S_0= \begin{pmatrix} |k_1|^2 \one_4& & & & O\\ &|k_2|^2\one_4& & & & \\ & & & \ddots & & \\ &O & & & |k_N|^2\one_4 \end{pmatrix}. 
\end{gather*} Hence, $K_{L, \Lambda}$ can be rewritten as \begin{gather} K_{L, \Lambda}=-\frac{1}{2}\Delta+\frac{1}{2}e^2\nu^2 x^2+e \la x\,|\,\mathbb{T}(r) {\boldsymbol q}\ra_3 +\frac{1}{2}\big( {\boldsymbol p}^2+\la {\boldsymbol q}\,|\, S_0{\boldsymbol q}\ra\big)-\frac{1}{2}\operatorname{tr} \big[\sqrt{S_0}\big]. \label{OnePartiK} \end{gather} By setting $\phi=(x, {\boldsymbol q})$ and $\pi=(-\im \nabla, {\boldsymbol p})$, one sees that \begin{gather*} K_{L, \Lambda}=\frac{1}{2}\big( \la \pi| \pi\ra+\la \phi\,|\,\omega\phi\ra \big) -\frac{1}{2}\operatorname{tr}\big[\sqrt{\omega_0}\big]+\frac{3}{2}e\nu, \end{gather*} where \begin{gather*} \omega= \omega_0+Q,\qquad \omega_0= \begin{pmatrix} e^2\nu^2 & 0\\ 0& S_0 \end{pmatrix},\qquad Q=e\begin{pmatrix} 0 & \mathbb{T}(r)\\ \mathbb{T}^*(r) & 0 \end{pmatrix}. \end{gather*} The following lemma is a basic input. \begin{lemm}\label{LLL} If $1\le \sqrt{2}e\nu_0$ and $ \sqrt{2}e\|\hat{\varrho}\|_*<1$, then $\omega \ge 0$. \end{lemm} \begin{proof} By (\ref{TT*}), we have $\|\mathbb{T}(r) {\boldsymbol f}\| \le \sqrt{2} \|\hat{\varrho}\|_*\big\|S_0^{1/2} {\boldsymbol f}\big\|$ for all ${\boldsymbol f}\in \ell^2(M\times \{1, \dots, 4\})$. Hence, for all $\vphi=(a, {\boldsymbol f})\in \BbbC^3\oplus \ell^2(M\times \{1, \dots, 4\})$, we have, by the Schwarz inequality, \begin{align*} |\la \vphi|Q\vphi\ra| &\le 2\sqrt{2}e \| \hat{\varrho}\|_* \|a\|_3 \big\|S_0^{1/2} {\boldsymbol f}\big\|\\ &\le \sqrt{2} e\| \hat{\varrho}\|_*\big(\|a\|^2_3+\big\|S_0^{1/2} {\boldsymbol f}\big\|^2\big)\\ &\le \sqrt{2} e\|\hat{\varrho}\|_*\la \vphi|\omega_0\vphi\ra, \end{align*} provided that $1\le e^2\nu^2$, which follows from $1\le \sqrt{2}e\nu_0$ because $\nu^2=2\nu_0^2$. Since $\sqrt{2}e\|\hat{\varrho}\|_*<1$, this yields $\omega=\omega_0+Q\ge \big(1-\sqrt{2}e\|\hat{\varrho}\|_*\big)\omega_0\ge 0$. This concludes the proof of Lemma \ref{LLL}. \end{proof} Therefore, the ground state energy of $K_{L, \Lambda}$ is given by the following formula. \begin{Prop}\label{EnergyTr} Let $E_{L, \Lambda}=\inf \operatorname{spec}(K_{L, \Lambda})$.
If $1\le \sqrt{2} e \nu_0$ and $\sqrt{2}e \|\hat{\varrho}\|_* <1$, then one has \begin{gather*} E_{L, \Lambda}=\frac{1}{2}\operatorname{tr}\big[\sqrt{\omega}-\sqrt{\omega_0}\big] +\frac{3}{2}e\nu. \end{gather*} \end{Prop} \begin{proof} We provide a sketch of the proof. First, we diagonalize $\omega$ as \[ \omega=U^{-1} \operatorname{diag}(\lambda_1, \dots, \lambda_{4N+3}) U, \] where $U$ is a unitary matrix and $\lambda_1, \dots, \lambda_{4N+3}$ are positive eigenvalues of $\omega$. By setting $\tilde{\phi}=U\phi$ and $\tilde{\pi}=U \pi$, we can express $K_{L, \Lambda}$ as \begin{gather} K_{L, \Lambda} =\frac{1}{2}\big\la \tilde{\pi}|\tilde{\pi}\big\ra +\frac{1}{2}\big\la \tilde{\phi}|\operatorname{diag}(\lambda_1, \dots, \lambda_{4N+3})\tilde{\phi}\big\ra -\frac{1}{2}\operatorname{tr}[\sqrt{\omega_0}]+\frac{3}{2}e\nu. \label{ReExK} \end{gather} Because $\tilde{\pi}_j$ and $ \tilde{\phi}_j$ satisfy the Weyl relation: $e^{\im t \tilde{\pi}_i} e^{\im s \tilde{\phi}_j}=e^{\im st \delta_{ij}} e^{\im s \tilde{\phi}_j} e^{\im t \tilde{\pi}_i}$, von Neumann's uniqueness theorem \cite[Theorem~VIII.14]{ReSi1} tells us that there is a unitary operator $\tau\colon L^2\big(\BbbR^{4N+3}\big) \allowbreak \to L^2\big(\BbbR^{4N+3}\big)$ such that $\tau\tilde{\phi}_j\tau^{-1} =x_j$ and $\tau \tilde{\pi}_j\tau^{-1}=-\im \partial/\partial x_j$. Therefore, the right-hand side of~(\ref{ReExK}) can be regarded as the Hamiltonian of a $(4N+3)$-dimensional harmonic oscillator. Since the lowest eigenvalue of the Hamiltonian $-\frac{1}{2}\partial^2/\partial x_j^2+\frac{\lambda_j}{2} x_j^2$ is equal to $\sqrt{\lambda_j}/2$, we obtain that \begin{gather*} E_{L, \Lambda} =\frac{1}{2}\sum_{j=1}^{4N+3} \sqrt{\lambda_j}-\frac{1}{2}\operatorname{tr}\big[\sqrt{\omega_0}\big]+\frac{3}{2}e\nu =\frac{1}{2}\operatorname{tr}\big[\sqrt{\omega}-\sqrt{\omega_0}\big] +\frac{3}{2}e\nu. \end{gather*} This finishes the proof of Proposition \ref{EnergyTr}.
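The one-dimensional ingredient used above, namely that the lowest eigenvalue of $-\frac{1}{2}{\rm d}^2/{\rm d}x^2+\frac{\lambda}{2}x^2$ equals $\sqrt{\lambda}/2$, is easily confirmed by a finite-difference diagonalization. The following sketch (grid parameters chosen ad hoc) is an illustration only.

```python
import numpy as np

lam = 2.5                        # sample value of lambda
n, half_width = 1200, 10.0       # grid resolution and domain [-10, 10]
x = np.linspace(-half_width, half_width, n)
h = x[1] - x[0]

# -0.5 d^2/dx^2 by the three-point stencil, plus the potential (lam/2) x^2.
H = (np.diag(1.0 / h**2 + 0.5 * lam * x**2)
     + np.diag(-0.5 / h**2 * np.ones(n - 1), 1)
     + np.diag(-0.5 / h**2 * np.ones(n - 1), -1))

e0 = np.linalg.eigvalsh(H)[0]
print(e0, np.sqrt(lam) / 2)  # both close to 0.79057
```

The discretization error is of order $h^2$, so the agreement improves as the grid is refined.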
\end{proof} Applying the elementary fact \begin{gather} \frac{1}{\pi}\int_{-\infty}^{\infty}\dm s \, \frac{a}{s^2+a}=\sqrt{a}, \label{I11} \end{gather} we have that \begin{gather} E_{L, \Lambda}= \frac{1}{2\pi}\int_{-\infty}^{\infty}\dm s\, \operatorname{tr}\big[ \omega \big(s^2+\omega\big)^{-1}-\omega_0 \big(s^2+\omega_0\big)^{-1} \big]+\frac{3}{2}e\nu\nonumber\\ \hphantom{E_{L, \Lambda}}{} = \frac{1}{2\pi}\sum_{j=1}^{\infty}(-1)^j\int_{-\infty}^{\infty}\dm s\, \operatorname{tr}\big[ \omega_0 \big(s^2+\omega_0\big)^{-1}\big\{ Q \big(s^2+\omega_0\big)^{-1} \big\}^j\big]\nonumber\\ \hphantom{E_{L, \Lambda}=}{} +\frac{1}{2\pi}\sum_{j=1}^{\infty}(-1)^{j+1}\int_{-\infty}^{\infty}\dm s\, \operatorname{tr}\big[\big\{ Q \big(s^2+\omega_0\big)^{-1}\big\}^j\big]+\frac{3}{2}e\nu\nonumber\\ \hphantom{E_{L, \Lambda}}{} = \frac{1}{2\pi}\sum_{j=1}^{\infty}(-1)^{j+1}\int_{-\infty}^{\infty}\dm s\, s^2 \operatorname{tr}\big[\big(s^2+\omega_0\big)^{-1}\big\{Q \big(s^2+\omega_0\big)^{-1} \big\}^j\big]+\frac{3}{2}e\nu. \label{Expansion} \end{gather} Since $Q$ is off-diagonal, (\ref{Expansion}) becomes \begin{gather} E_{L, \Lambda}= -\frac{1}{2\pi}\sum_{j=1}^{\infty}\int_{-\infty}^{\infty}\dm s \, s^2 \operatorname{tr}\big[\big(s^2+\omega_0\big)^{-1} Q(s)^{2j}\big]+\frac{3}{2}e\nu, \label{ESeries} \end{gather} where $Q(s)=\big(s^2+\omega_0\big)^{-1/2} Q\big(s^2+\omega_0\big)^{-1/2}$. In what follows, we will examine the convergence of the right-hand side of~(\ref{ESeries}). As we will see, this series converges absolutely and~(\ref{ESeries}) is rigorously justified if $\nu_0$ is large enough. We begin with the following basic lemma.
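First, however, both the elementary identity (\ref{I11}) and the trace representation it yields when applied eigenvalue-wise, $\operatorname{tr}\big[\sqrt{\omega}-\sqrt{\omega_0}\big] = \frac{1}{\pi}\int_{-\infty}^{\infty}\dm s\, \operatorname{tr}\big[\omega\big(s^2+\omega\big)^{-1}-\omega_0\big(s^2+\omega_0\big)^{-1}\big]$, can be sanity-checked numerically. The following Python sketch is purely illustrative; the small matrices stand in for $\omega_0$ and $Q$ and are ad hoc choices.

```python
import numpy as np
from scipy.integrate import quad

# Identity (I11): (1/pi) * int a/(s^2 + a) ds = sqrt(a).
a = 3.0
val, _ = quad(lambda s: a / (s**2 + a) / np.pi, -np.inf, np.inf)
print(val, np.sqrt(a))  # both equal sqrt(3) up to quadrature error

# Matrix analogue: omega0 > 0 diagonal, Q symmetric with zero diagonal.
omega0 = np.diag([1.0, 2.0, 3.0])
Q = 0.2 * np.array([[0.0, 1.0, 0.5],
                    [1.0, 0.0, -1.0],
                    [0.5, -1.0, 0.0]])
omega = omega0 + Q  # positive definite: ||Q|| < min spec(omega0)

def integrand(s):
    I = np.eye(3)
    return np.trace(omega @ np.linalg.inv(s**2 * I + omega)
                    - omega0 @ np.linalg.inv(s**2 * I + omega0))

lhs = quad(integrand, -np.inf, np.inf)[0] / np.pi
rhs = (np.sqrt(np.linalg.eigvalsh(omega)).sum()
       - np.sqrt(np.linalg.eigvalsh(omega0)).sum())
print(lhs, rhs)  # tr[sqrt(omega) - sqrt(omega0)] computed both ways
```

Since $Q$ here has zero diagonal, the integrand decays like $s^{-4}$, mirroring the cancellation that makes the expansion (\ref{Expansion}) integrable term by term.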
\begin{lemm}\label{EstT} We have the following bounds: \begin{gather*} \big\| \mathbb{T}(x) \big(s^2+S_0\big)^{-1/2} \big\| \le \sqrt{2} \|\hat{\varrho}\|_{*}, \qquad \big\|\big(s^2+S_0\big)^{-1/2} \mathbb{T}^*(x)\big\| \le \sqrt{2} \|\hat{\varrho}\|_{*} \end{gather*} for all $x\in \BbbR^3$, where $\|f\|_{*}=\sqrt{\left(\frac{2\pi}{L}\right)^{3}\sum\limits_{k\in M} |f(k)|^2}$ for each $f\in \ell^2(M)$. \end{lemm} \begin{proof} Set $\mathbb{T}_s(x)=\mathbb{T}(x)\big(s^2+S_0\big)^{-1/2}$. For each ${\boldsymbol f}\in \ell^2(M\times \{1, \dots, 4\})$, we have, by~(\ref{TT*}), \begin{align*} \big\| \mathbb{T}(x) \big(s^2+S_0\big)^{-1/2} {\boldsymbol f} \big\|^2&=\la {\boldsymbol f}\,|\,\mathbb{T}^*_s(x) \mathbb{T}_s(x) {\boldsymbol f}\ra\\ &=\big\la {\boldsymbol f}\,|\,\big(s^2+S_0\big)^{-1/2} \mathbb{M}(x, x)\big(s^2+S_0\big)^{-1/2} {\boldsymbol f}\big\ra\\ &\le \Bigg( \sum_{k, \lambda} \frac{|F_x(k, \lambda)|}{\big(s^2+k^2\big)^{1/2}} |f(k, \lambda)| \Bigg)^2\\ &\le \big\| \big(s^2+k^2\big)^{-1/2} F_x \big\|^2\|{\boldsymbol f}\|^2. \end{align*} Because $\big\|\big(s^2+k^2\big)^{-1/2} F_x\big\|^2\le 2\|\hat{\varrho}\|^2_*$, which follows since $|k|^2\big(s^2+|k|^2\big)^{-1}\le 1$ and the four polarization labels contribute $2\cos^2(k\cdot x)+2\sin^2(k\cdot x)=2$, we conclude that $\big\|\mathbb{T}(x) \big(s^2+S_0\big)^{-1/2} \big\| \le \sqrt{2} \|\hat{\varrho}\|_{*}$. \end{proof} \begin{lemm}\label{EstInt}Let \begin{gather} D(s)=2e^2s^2\big(s^2+e^2\nu^2\big)^{-1}\nonumber\\ \hphantom{D(s)=}{} \times \big\{ \big(s^2+e^2\nu^2\big)^{-1} \big\| \big(s^2+|k|^2\big)^{-1/2} |k| \hat{\varrho} \big\|_*^2+\big\| \big(s^2+|k|^2\big)^{-1} |k| \hat{\varrho} \big\|_*^2 \big\}. \label{DefD(s)} \end{gather} Then we have the following: \begin{itemize}\itemsep=0pt \item[{\rm (i)}] For all $s\in \BbbR$, \begin{gather*} s^2\operatorname{tr}\big[\big(s^2+\omega_0\big)^{-1} Q(s)^{2n}\big] \le \left( \frac{\sqrt{2}}{\nu} \|\hat{\varrho}\|_* \right)^{2n-2} D(s). \end{gather*} \item[{\rm (ii)}] Let $a=\big( \frac{\sqrt{2}}{\nu}\|\hat{\varrho}\|_*\big)^2$. If $a<1$, then we have \begin{gather*} \sum_{n=1}^{\infty} s^2\operatorname{tr}\big[\big(s^2+\omega_0\big)^{-1}Q(s)^{2n} \big] \le \frac{1}{1-a} D(s).
\end{gather*} Remark that $\lim\limits_{L\to \infty} a\le \big(\frac{\sqrt{2}}{\nu} \|\hat{\varrho}\|_{L^2}\big)^2\le c_{\infty}^2<1/4 $ holds for all $\Lambda>0$ by the assumption in Theorem~{\rm \ref{Rto7}}. Thus, the condition $a<1$ is satisfied provided that~$L$ is sufficiently large. \item[{\rm (iii)}] $D(s)\in L^1(\BbbR)$ and \begin{gather} \frac{1}{2\pi} \int_{\BbbR} \dm s\, D(s) \le \frac{e}{\nu}\|\hat{\varrho}\|_*^2.\label{EstD} \end{gather} \end{itemize} \end{lemm} \begin{proof} We set $\mathbb{T}_s(r)=\mathbb{T}(r)\big(s^2+S_0\big)^{-1/2}$. First, consider the case where $n=1$. Because \begin{gather} Q(s)^2=e^2\big(s^2+e^2\nu^2\big)^{-1} \begin{pmatrix} \mathbb{T}_s(r) \mathbb{T}^*_s(r) & 0\\ 0 & \mathbb{T}^*_s(r) \mathbb{T}_s(r) \end{pmatrix}, \label{Q^2Formula} \end{gather} we obtain that \begin{gather*} s^2 \operatorname{tr}\big[\big(s^2+\omega_0\big)^{-1} Q (s)^{2}\big]\\ \qquad {} = e^2s^2 \big(s^2+e^2\nu^2\big)^{-2} \operatorname{tr}\big[ \mathbb{T}_s(r) \mathbb{T}_s^*(r) \big] + e^2s^2 \big(s^2+e^2\nu^2\big)^{-1} \operatorname{tr}\big[ \big(s^2+S_0\big)^{-1} \mathbb{T}_s^*(r) \mathbb{T}_s(r) \big] \end{gather*} By (\ref{TT*}) and (\ref{T*T}), we have \begin{gather*} \operatorname{tr}\big[ \mathbb{T}_s(r) \mathbb{T}_s^*(r) \big] \le 2 \left(\frac{2\pi}{L}\right)^{3}\sum_{k\in M} \big(s^2+|k|^2\big)^{-1}|k|^2 |\hat{\varrho}(k)|^2,\\ \operatorname{tr}\big[ \big(s^2+S_0\big)^{-1} \mathbb{T}_s^*(r) \mathbb{T}_s(r) \big] \le 2 \left(\frac{2\pi}{L}\right)^{3}\sum_{k\in M}\big(s^2+|k|^2\big)^{-2}|k|^2|\hat{\varrho}(k)|^2. \end{gather*} Thus, we get (i) for $n=1$. To prove the assertion for $n\ge 2$, we remark that $\|Q(s)\| \le \frac{\sqrt{2}}{\nu} \|\hat{\varrho}\|_{*}$, which immediately follows from Lemma~\ref{EstT} and~(\ref{Q^2Formula}). Thus, by using the fact $Q(s)^{2n} \le \|Q(s)\|^{2n-2} Q(s)^2$, we have \begin{gather*} \operatorname{tr}\big[\big(s^2+\omega_0\big)^{-1/2} Q(s)^{2n}\big(s^2+\omega_0\big)^{-1/2}\big]\! 
\le \|Q(s)\|^{2n-2}\!\operatorname{tr}\big[\big(s^2+\omega_0\big)^{-1/2} Q(s)^2\big(s^2+\omega_0\big)^{-1/2} \big]. \end{gather*} Applying the result for $n=1$, we get the desired result for $n\ge 2$. Assertion (ii) immediately follows from~(i). By using the formula (\ref{I111}) with $a=b=e^2\nu^2$ and $c=|k|^2$, we see that \begin{gather*} \frac{1}{2\pi}\int_{-\infty}^{\infty}\dm s \, 2 e^2 s^2 \big(s^2+e^2\nu^2\big)^{-2} \big\| \big(s^2+|k|^2\big)^{-1/2} |k|\hat{\varrho} \big\|_*^2 \\ \qquad{} = \frac{e^2}{2\pi}\left(\frac{2\pi}{L}\right)^{3}\sum_{k\in M}\frac{2 \pi}{2e\nu}\frac{|k|^2|\hat{\varrho}(k)|^2}{(e\nu+|k|)^2} \le \frac{e}{2\nu} \|\hat{\varrho}\|_{*}^2. \end{gather*} Similarly, by using the formula (\ref{I111}) with $a=e^2\nu^2$ and $b=c=|k|^2$, we obtain \begin{gather*} \frac{1}{2\pi}\int_{-\infty}^{\infty}\dm s \, 2e^2 s^2 \big(s^2+e^2\nu^2\big)^{-1} \big\| \big(s^2+|k|^2\big)^{-1} |k| \hat{\varrho} \big\|_*^2 \\ \qquad{} = \frac{e^2}{2\pi} \left(\frac{2\pi}{L}\right)^{3}\sum_{k\in M}\frac{2 \pi}{2|k|(|k|+e\nu)^2} |k|^2 |\hat{\varrho}(k)|^2 \le \frac{e}{2\nu} \|\hat{\varrho}\|^2_{*}. \end{gather*} Inserting these into (\ref{DefD(s)}), we obtain the assertion (iii). \end{proof} \begin{coro}\label{ConvOne} The right-hand side of \eqref{ESeries} converges absolutely, provided that $\sqrt{2} \| \hat{\varrho}\|_{*}<\nu$, $1\le \sqrt{2}e\nu_0$ and $\sqrt{2} e \|\hat{\varrho}\|_*<1$. In addition, the interchange of the series and the integral in \eqref{ESeries} $($or~\eqref{Expansion}$)$ can be justified. \end{coro} \section{Diagonalization II: Two-electron Hamiltonian} \label{Dai2} Next we will diagonalize $H_{L, \Lambda}$. This is actually possible because we employ the Feynman Hamiltonian; see Remark~\ref{WhyF?} for details.
By an argument similar to the one leading to (\ref{OnePartiK}), $H_{L, \Lambda}$ can be expressed as \begin{gather*} H_{L, \Lambda}= -\frac{1}{2}\Delta_1+\frac{1}{2}e^2\nu^2x_1^2 -\frac{1}{2}\Delta_2+\frac{1}{2}e^2\nu^2x_2^2+e \la \mathbb{T}(0) {\boldsymbol q}|x_1 \ra_3+e\la\mathbb{T}(r) {\boldsymbol q}| x_2\ra_3\\ \hphantom{H_{L, \Lambda}=}{} +\frac{1}{2}\big( {\boldsymbol p}^2+\la {\boldsymbol q}| S_0{\boldsymbol q}\ra \big)-\frac{1}{2}\operatorname{tr}\big[\sqrt{S_0}\big]. \end{gather*} By setting $\Phi=(x_1, x_2, {\boldsymbol q})$ and $\Pi=(-\im \nabla_1, -\im \nabla_2, {\boldsymbol p})$, we have that \begin{gather*} H_{L, \Lambda}=\frac{1}{2}\big(\la \Pi| \Pi\ra+\la \Phi| \Omega\Phi\ra \big)-\frac{1}{2}\operatorname{tr}\big[\sqrt{\Omega_0}\big]+3 e\nu, \end{gather*} where \begin{gather*} \Omega= \Omega_0+Q_1+Q_2,\\ \Omega_0= \begin{pmatrix} e^2\nu^2 & 0&0\\ 0&e^2\nu^2 &0\\ 0&0& S_0 \end{pmatrix}, \!\!\qquad Q_1=e \begin{pmatrix} 0&0& \mathbb{T}(0)\\ 0&0&0\\ \mathbb{T}^*(0)&0&0 \end{pmatrix},\!\!\qquad Q_2=e\begin{pmatrix} 0&0&0\\ 0&0& \mathbb{T}(r) \\ 0&\mathbb{T}^*(r)&0 \end{pmatrix}. \end{gather*} By an argument similar to that in the proof of Proposition~\ref{EnergyTr}, we get the following useful formula. \begin{Prop} Let $E_{L, \Lambda}(R)=\inf \operatorname{spec}(H_{L, \Lambda})$. If $1\le \sqrt{2} e \nu_0$ and $\sqrt{2}e \|\hat{\varrho}\|_* <1$, then $\Omega\ge 0$ and \begin{gather*} E_{L, \Lambda}(R)=\frac{1}{2}\operatorname{tr}\big[\sqrt{\Omega}-\sqrt{\Omega_0}\big]+3e\nu. \end{gather*} \end{Prop} \begin{rem}[Why are the Feynman Hamiltonians helpful?]\label{WhyF?} From the expression~(\ref{EleFielD}), we see that $\hat{E}_{L, \Lambda}(x)$ can be written as a sum of multiplication operators~$q(k, \lambda)$. As we have already seen, this fact is the key to the diagonalization of $H_{L, \Lambda}$.
In contrast to the Feynman Hamiltonians, in the standard representation, $\hat{E}_{L, \Lambda}(x)$ corresponds to the following operator: \begin{gather} \left( \frac{2\pi}{L} \right)^{3/2} \sum_{\lambda=1, 2}\sum_{k\in M}\hat{\varrho}(k) \vepsilon(k, \lambda) \big\{\cos(k\cdot x)|k| q(k, \lambda)+\sin(k\cdot x) |k|^{-1}p(k, \lambda)\big\}. \label{QPMix} \end{gather} In (\ref{QPMix}), both multiplication and differential operators appear, provided that $x\neq 0$. At first glance, it appears that diagonalizing the Hamiltonians in this representation requires extra effort. \end{rem} Moreover, it can be readily seen that, by (\ref{I11}), \begin{gather} E_{L, \Lambda}(R) = \frac{1}{2\pi}\sum_{j=1}^{\infty}(-1)^{j+1}\!\int_{-\infty}^{\infty}\!\dm s\, s^2 \operatorname{tr}\big[ \big(s^2+\Omega_0\big)^{-1}\big\{ (Q_1+Q_2) \big(s^2+\Omega_0\big)^{-1} \big\}^j \big]+3e\nu. \!\!\label{TwoElExpansion} \end{gather} To examine this formal series, let us introduce the following notation: \begin{gather*} \la O_1O_2\cdots O_n\ra =\frac{1}{2\pi}\int_{-\infty}^{\infty}\dm s\, s^2\operatorname{tr}\big[ \big(s^2+\Omega_0\big)^{-1}O_1(s)O_2(s)\cdots O_n(s) \big], \end{gather*} where $O_i(s)=\big(s^2+\Omega_0\big)^{-1/2}O_i \big(s^2+\Omega_0\big)^{-1/2}$. Then (\ref{TwoElExpansion}) can be expressed as \begin{gather} E_{L, \Lambda}(R) = \sum_{j=1}^{\infty}(-1)^{j+1}\la \underbrace{(Q_1+Q_2)\cdots (Q_1+Q_2)}_j\ra+3e\nu\nonumber\\ \hphantom{E_{L, \Lambda}(R)}{}= \sum_{j=1}^{\infty}(-1)^{j+1}\la \underbrace{Q_1\cdots Q_1}_j\ra +\sum_{j=1}^{\infty}(-1)^{j+1}\la \underbrace{Q_2\cdots Q_2}_j\ra \nonumber\\ \hphantom{E_{L, \Lambda}(R)=}{} +\sum_{j=1}^{\infty}\sum_{i_1, \dots, i_j\in \{1,2\}\atop \{i_1, \dots, i_j\}\neq \{1, 1, \dots, 1\}, \{2,2, \dots, 2\}} (-1)^{j+1}\la Q_{i_1}\cdots Q_{i_j}\ra+3e\nu.
\label{ExpTwo??} \end{gather} Since $Q_1$ and $Q_2$ are off-diagonal, we have \begin{gather*} E_{L, \Lambda}(R)= -\sum_{j=1}^{\infty}\la \underbrace{Q_1\cdots Q_1}_{2j}\ra -\sum_{j=1}^{\infty}\la \underbrace{Q_2\cdots Q_2}_{2j}\ra\\ \hphantom{E_{L, \Lambda}(R)=}{} -\sum_{j=1}^{\infty}\sum_{i_1, \dots, i_{2j}\in \{1,2\}\atop \{i_1, \dots, i_{2j}\}\neq \{1, 1, \dots, 1\}, \{2,2, \dots, 2\}} \la Q_{i_1}\cdots Q_{i_{2j}}\ra+3e\nu. \end{gather*} On the other hand, we remark that, by Corollary~\ref{ConvOne}, \begin{gather*} E_{L, \Lambda}= -\sum_{j=1}^{\infty}\la \underbrace{Q_1\cdots Q_1}_{2j}\ra+\frac{3}{2}e\nu =-\sum_{j=1}^{\infty}\la \underbrace{Q_2\cdots Q_2}_{2j}\ra+\frac{3}{2}e\nu, \end{gather*} provided that $\sqrt{2}\|\hat{\varrho}\|_*<\nu$, $1\le \sqrt{2} e \nu_0$ and $\sqrt{2}e \|\hat{\varrho}\|_* <1$. Thus, we formally arrive at the following formula: \begin{gather} E_{L, \Lambda}(R)-2E_{L, \Lambda} =-\sum_{j=1}^{\infty}\sum_{i_1, \dots, i_{2j}\in \{1,2\}\atop \{i_1, \dots, i_{2j}\}\neq \{1, 1, \dots, 1\}, \{2,2, \dots, 2\} } \la Q_{i_1}\cdots Q_{i_{2j}}\ra.\label{BindingEn} \end{gather} Our next task is to prove the convergence of the right-hand side of (\ref{BindingEn}). For this purpose, we need some preliminaries. Let \begin{gather*} \mathcal{I}_{2j}=\big\{I=\{i_1, \dots, i_{2j}\},\, i_1,\dots, i_{2j}\in \{1, 2\}\, \big|\, I\neq \{1, 1, \dots, 1\}, \{2, 2, \dots, 2\} \big\}. \end{gather*} For each $I\in \mathcal{I}_{2j}$, we set $|I|=i_1+i_2+\cdots+i_{2j}$. Furthermore, we use the following notation: \begin{gather*} Q_I=Q_{i_1}Q_{i_2} \cdots Q_{i_{2j}},\qquad I=\{i_1, \dots, i_{2j}\}\in \mathcal{I}_{2j}. \end{gather*} \begin{lemm}\label{Odd0} Let $I\in \mathcal{I}_{2j}$. If $|I|$ is an odd number, then $\la Q_I\ra =0$. \end{lemm} \begin{proof} Note that $\big(s^2+\Omega_0\big)^{-1}$, $Q_1(s)^2$ and $Q_2(s)^2$ are diagonal operators, while $Q_1(s)Q_2(s)$ and $Q_2(s)Q_1(s)$ are off-diagonal operators, see Appendix \ref{List}. 
Hence, if $|I|$ is an odd number, then $\big(s^2+\Omega_0\big)^{-1}Q_{i_1}(s)\cdots Q_{i_{2j}}(s)$ is an off-diagonal operator. Accordingly, \begin{gather*} \Tr\big[ \big(s^2+\Omega_0\big)^{-1}Q_{i_1}(s)\cdots Q_{i_{2j}}(s)\big]=0. \end{gather*} This concludes the proof of Lemma \ref{Odd0}. \end{proof} Let $\mathcal{I}_{2j}^{(e)}=\{I\in \mathcal{I}_{2j}\, |\, \mbox{$|I|$ is even}\}$. By Lemma~\ref{Odd0}, we have \begin{gather} \mbox{the r.h.s.\ of (\ref{BindingEn})}=-\sum_{j=1}^{\infty} \sum_{I\in \mathcal{I}_{2j}^{(e)}} \la Q_I\ra. \label{ExpansionF} \end{gather} \begin{lemm} For each $s\in \BbbR$ and $I=\{i_1, \dots, i_{2j}\}\in \mathcal{I}_{2j}^{(e)}$, we set \begin{gather*} Q_I(s)=Q_{i_1}(s)\cdots Q_{i_{2j}}(s) \end{gather*} and \begin{gather*} E_I(s)=s^2\operatorname{tr}\big[ \big(s^2+\omega\big)^{-1} Q_{I\backslash \{i_1\}}^*(s)Q_{I\backslash \{i_1\}}(s) \big], \end{gather*} where $Q_{I\backslash \{i_1\}}^*(s)=\big(Q_{I\backslash \{i_1\}}(s)\big)^*$. For all $R>0$, we have the following: \begin{itemize}\itemsep=0pt \item[{\rm (i)}] For each $s\in \BbbR$ and $I\in \mathcal{I}_{2j}^{(e)}$, \begin{gather} s^2 \big|\operatorname{tr} \big[ \big(s^2+\omega\big)^{-1}Q_I(s) \big]\big| \le D(s)^{1/2} E_I(s)^{1/2}, \label{IDS} \end{gather} where $D(s)$ is given by \eqref{DefD(s)}. \item[{\rm (ii)}] Recall that $a$ is defined by $a=\big(\frac{\sqrt{2}}{\nu}\|\hat{\varrho}\|_*\big)^2$. If $a<1/4$, then \begin{gather} \sum_{j=1}^{\infty}\sum_{I\in \mathcal{I}_{2j}^{(e)}} E_I(s)^{1/2} \le D(s)^{1/2} \frac{4}{1-4a}. \label{SumEI} \end{gather} Thus, $D(s)^{1/2}\sum\limits_{j=1}^{\infty}\sum\limits_{I\in \mathcal{I}_{2j}^{(e)}} E_I(s)^{1/2}\in L^1(\BbbR)$ and \begin{gather*} \sum_{j=1}^{\infty} \sum_{I\in \mathcal{I}_{2j}^{(e)}} |\la Q_I\ra| \le \frac{e}{\nu} \|\hat{\varrho}\|_*^2\frac{4}{1-4a}. \end{gather*} Note that as we mentioned in Lemma~{\rm \ref{EstInt}}, the condition $a<1/4$ is satisfied provided that~$L$ is large enough. 
\end{itemize} \end{lemm} \begin{proof}For notational simplicity, we set $G=\big(s^2+\omega_0\big)^{-1}$. By the Schwarz inequality $|\operatorname{tr}[A^*B]| \allowbreak \le \operatorname{tr}[A^*A]^{1/2} \operatorname{tr}[B^*B]^{1/2}$, we obtain \begin{gather*} \big|\operatorname{tr} \big[ G^{1/2} Q_{i_1}(s)\cdots Q_{i_{2j}}(s) G^{1/2} \big]\big| \le \operatorname{tr} \big[ G^{1/2}Q_{i_1}(s)Q_{i_1}(s) G^{1/2} \big]^{1/2} \\ \qquad{}\times \operatorname{tr} \big[ G^{1/2}Q_{i_{2j}}(s) Q_{i_{2j-1}}(s) \cdots Q_{i_2}(s)Q_{i_2}(s) \cdots Q_{i_{2j}}(s)G^{1/2} \big]^{1/2}, \end{gather*} which implies that \begin{gather*} s^2\big|\operatorname{tr}\big[\big(s^2+\omega\big)^{-1} Q_I(s) \big]\big|\le \big\{s^2 \operatorname{tr}\big[\big(s^2+\omega\big)^{-1} Q_{i_{1}}(s)Q_{i_{1}}(s)\big]\big\}^{1/2} E_I(s)^{1/2}.\label{QSch} \end{gather*} Because \begin{gather} s^2 \operatorname{tr}\big[\big(s^2+\omega\big)^{-1} Q_{i}(s)Q_{i}(s)\big] \le D(s),\label{Q0} \end{gather} we conclude (i). From Lemma \ref{EstT} and (\ref{Q^2Formula}), we obtain that \begin{gather*} \|Q_i(s)\| \le \frac{\sqrt{2}}{\nu} \|\hat{\varrho}\|_*.\label{Q1} \end{gather*} Hence, $E_I(s) \le a^{2j-2} s^2 \operatorname{tr}\big[\big(s^2+\omega\big)^{-1} Q_{i_{2j}}(s)Q_{i_{2j}}(s)\big] \le a^{2j-2}D(s)$ by~(\ref{Q0}). Therefore, we obtain~(\ref{SumEI}). One observes that \begin{align*} \sum_{j=1}^{\infty}\sum_{I\in \mathcal{I}_{2j}^{(e)}} s^2\big|\operatorname{tr}\big[\big(s^2+\omega\big)^{-1} Q_I(s) \big]\big|&\underset{(\ref{IDS})}{\le} \sum_{j=1}^{\infty}\sum_{I\in \mathcal{I}_{2j}^{(e)}} D(s)^{1/2} E_I(s)^{1/2} \le \sum_{j=1}^{\infty}2^{2j}a^{j-1}D(s)\\ &\underset{(\ref{SumEI})}{\le} \frac{4}{1-4a}D(s). 
\end{align*} In the second inequality, we have used the fact that $\#\mathcal{I}_{2j}^{(e)} \le 2^{2j}$. Accordingly, we get \begin{gather*} \sum_{j=1}^{\infty}\sum_{I\in \mathcal{I}_{2j}^{(e)}} |\la Q_I\ra| \le \frac{4}{1-4a} \frac{1}{2\pi} \int_{\BbbR}\dm s \, D(s) \le \frac{4}{1-4a} \frac{e}{\nu}\|\hat{\varrho}\|_*^2 \end{gather*} by (\ref{EstD}). \end{proof} \begin{coro}If $\sqrt{2}\|\hat{\varrho}\|_{*}<\nu$, $1\le \sqrt{2} e \nu_0$ and $\sqrt{2}e \|\hat{\varrho}\|_* <1$, then the r.h.s.\ of \eqref{BindingEn} converges absolutely for every $R>0$. In addition, the interchange of the series with the integral $\la \cdots \ra$ in~\eqref{BindingEn} $($or \eqref{ExpTwo??}$)$ can be justified. \end{coro} \section{Proof of Theorem \ref{Rto7}}\label{PfMainT} For each $I\in \mathcal{I}_{2j}^{(e)}$, $\# I$ indicates the cardinality of $I$. Notice that $\# I$ is different from $|I|=i_1+\cdots+i_{2j}$. \subsection[Analysis of $\la Q_I\ra$ with $\#I=2$]{Analysis of $\boldsymbol{\la Q_I\ra}$ with $\boldsymbol{\#I=2}$} We claim that \begin{gather} \la Q_1 Q_2\ra=\la Q_2 Q_1\ra=0. \label{QI=2} \end{gather} To see this, let $I=\{1, 2\}$ or $\{2, 1\}$. Trivially, $|I|=1+2=3$. By Lemma~\ref{Odd0}, we conclude~(\ref{QI=2}). \subsection[Analysis of $\la Q_I\ra$ with $\# I=4$]{Analysis of $\boldsymbol{\la Q_I\ra}$ with $\boldsymbol{\# I=4}$} In this subsection, we will examine the following terms: \begin{gather*} \sum_{i_1, \dots, i_{4}\in \{1,2\}\atop \{i_1, \dots, i_{4}\}\neq \{1, 1, 1, 1\}, \{2,2,2, 2\} }\la Q_{i_1}Q_{i_2}Q_{i_3} Q_{i_{4}}\ra = \mathscr{A}+\mathscr{B}, \end{gather*} where \begin{gather} \mathscr{A}= \la Q_1 Q_1 Q_2 Q_2\ra +\la Q_2Q_2 Q_1 Q_1\ra\label{DefmathA} \end{gather} and \begin{gather} \mathscr{B}=\la Q_1Q_2Q_1Q_2\ra+\la Q_2 Q_1 Q_2 Q_1\ra+\la Q_2 Q_1 Q_1 Q_2\ra +\la Q_1 Q_2 Q_2 Q_1\ra.\label{DefmathB} \end{gather} In Appendix~\ref{NumC}, we will prove the following lemmas. 
\begin{lemm}\label{MainTerm} We have \begin{gather*} \lim_{R\to \infty} \lim_{\Lambda\to \infty} \lim_{L\to \infty} R^7 \mathscr{A} =\frac{23}{4\pi}\left(\frac{1}{2\pi}\right)^2\left( \frac{1}{4}\alpha_{\mathrm{E, at}} \right)^2. \end{gather*} \end{lemm} \begin{lemm}\label{Error}We have \begin{gather*} \lim_{R\to \infty} \lim_{\Lambda\to \infty} \lim_{L\to \infty} R^9\la Q_1Q_2Q_2Q_1\ra =\lim_{R\to \infty} \lim_{\Lambda\to \infty} \lim_{L\to \infty} R^9\la Q_2Q_1Q_1Q_2\ra =\frac{g}{e^2\nu^6}, \end{gather*} where $g$ is a constant independent of $e$, $\nu_0$ and~$R$. Moreover, $\la Q_1Q_2Q_1Q_2\ra=\la Q_2Q_1Q_2Q_1\ra=0$. Thus, $\lim\limits_{R\to \infty} \lim\limits_{\Lambda\to \infty} \lim\limits_{L\to \infty} R^9\mathscr{B}=2g/e^2\nu^6$. \end{lemm} \subsection[Analysis of $\la Q_I\ra$ with $\# I\ge 6$]{Analysis of $\boldsymbol{\la Q_I\ra}$ with $\boldsymbol{\# I\ge 6}$} Let $I=\{i_1, \dots, i_{2j}\}\in \mathcal{I}_{2j}^{(e)}$. We will examine the following two cases, separately. \begin{itemize}\itemsep=0pt \item[]Case 1: There exists a unique number $\ell\in \{1, 2,\dots, 2j-1\}$ such that $i_{\ell}+i_{\ell+1}=3$. \item[]Case 2: There exist at least two numbers $m, n\in \{1, 2, \dots, 2j-1\}$ such that $i_{m}+i_{m+1}=i_{n}+i_{n+1}=3$. \end{itemize} \begin{exam}For readers' convenience, we provide some examples below: \begin{itemize}\itemsep=0pt \item[] Case 1: $ I=\big\{1,1,\overbrace{1,2}^{i_3+i_4=3},2,2,2, 2\big\}$, $\big\{1,1,1,1,\overbrace{1,2}^{i_5+i_6=3},2,2\big\}$. \item[] Case 2: $I=\big\{1,1,\overbrace{1,2}^{i_3+i_4=3},2,2,2,\overbrace{2,1}^{i_8+i_9=3},1\big\}$, $\big\{1,1,1,1,1,\overbrace{1,2}^{i_6+i_7=3},\overbrace{2,1}^{i_8+i_9=3},1\big\}$. \end{itemize} \end{exam} \subsubsection{Case 1} In Appendix \ref{NumC}, we will prove the following lemma. \begin{lemm}\label{Case1} Assume that $I$ satisfies the condition in Case~$1$. 
If $R$ is sufficiently large, then we have \begin{gather*} \lim_{\Lambda\to \infty} \lim_{L\to \infty} |\la Q_I\ra|\le R^{-9} \alpha_{\mathrm{E, at}}^2\bigg( \frac{\|\hat{\varrho}\|^2_{L^2}}{3\nu^2} \bigg)^{\# I/2-2} C, \end{gather*} where $C$ is a positive number independent of $e$, $I$, $R$ and $\nu_0$. \subsubsection{Case 2} The purpose here is to prove Lemma \ref{QIEst} below. To this end, we begin with the following lemma. \begin{lemm}\label{EstQG}Let $G=\big(s^2+\Omega_0\big)^{-1}$. For each $j\in \{1, 2\}$, we have \begin{gather*} \|Q_j G\| \le D(\hat{\varrho}), \end{gather*} where $D(\hat{\varrho})=\max\big\{ \sqrt{2}e \big\||k|^{-1} \hat{\varrho}\big\|_{*}, \frac{\sqrt{2}}{e\nu^2} \||k| \hat{\varrho}\|_{*}\big\}$. \end{lemm} \begin{proof} By (\ref{TT*}) and (\ref{T*T}), we readily show that \begin{gather*} \big\|\mathbb{T}(r) \big(s^2+S_0\big)^{-1}\big\| \le\sqrt{2}\big\| |k|^{-1} \hat{\varrho}\big\|_{*},\\ \big\|\big(s^2+e^2\nu^2\big)^{-1} \mathbb{T}^*(r)\big\| \le \frac{\sqrt{2}}{e^2\nu^2} \||k| \hat{\varrho}\|_{*}. \end{gather*} This concludes the proof of Lemma \ref{EstQG}. \end{proof} \begin{lemm}\label{QIEst} Let $j\ge 3$. For each $I\in \mathcal{I}_{2j}^{(e)}$ satisfying the condition in Case~$2$, we have \begin{gather*} |\la Q_I\ra| \le c_L^{2j-4} \la Q_2Q_1Q_1Q_2\ra, \end{gather*} where $c_L=\max\big\{D(\hat{\varrho}), \frac{\sqrt{2}} {\nu} \|\hat{\varrho}\|_{*}\big\}$. \end{lemm} \begin{proof} By the condition of Case~2, there exist at least two numbers $m, n\in \{1, 2, \dots, 2j-1\}$ such that $i_m+i_{m+1}=i_n+i_{n+1}=3$. Hence, $I$ can be decomposed as $I=A \cup \{i_m, i_{m+1}\}\cup B \cup \{i_n, i_{n+1}\} \cup C$. Without loss of generality, we may assume that $\{i_m, i_{m+1}\}=\{i_n, i_{n+1}\}=\{1, 2\}$. Thus, \begin{gather*} \la Q_I\ra= \la Q_A Q_1Q_2 Q_B Q_1 Q_2 Q_C\ra. \end{gather*} Let $Q_I(s)=Q_{i_1}(s)Q_{i_2}(s)\cdots Q_{i_{2j}}(s)$. 
By the Schwarz inequality, we have \begin{gather} \big|\operatorname{tr} \big[G^{1/2} Q_I(s) G^{1/2}\big]\big|\le \Phi_1^{1/2} \Phi_2^{1/2}, \label{Phi} \end{gather} where \begin{gather*} \Phi_1=\operatorname{tr} \big[G^{1/2}Q_A(s)Q_1(s) Q_2(s) Q_2(s)Q_1(s) Q_A^*(s) G^{1/2}\big],\\ \Phi_2= \operatorname{tr}\big[G^{1/2} Q_C^*(s)Q_2(s)Q_1(s) Q_B^*(s)Q_B(s) Q_1(s)Q_2(s) Q_C(s) G^{1/2}\big]. \end{gather*} First, we estimate $\Phi_1$. By the cyclic property of the trace, we have \begin{align} \Phi_1 &=\operatorname{tr} \big[ Q_2(s)Q_1(s)Q_A^*(s) G^{1/2} G^{1/2} Q_A(s) Q_1(s)Q_2(s) \big]\nonumber\\ &= \operatorname{tr}\big[Q_2(s)Q_1(s)G^{1/2} G^{-1/2}Q_A^*(s) G^{1/2} G^{1/2} Q_A(s)G^{-1/2} G^{1/2} Q_1(s)Q_2(s)\big]. \label{Cycle} \end{align} Because \begin{gather*} G^{1/2} Q_A(s) G^{-1/2}=(GQ_{a_1})(GQ_{a_2}) \cdots (GQ_{a_{\# A}}), \end{gather*} where $A=\{a_1, a_2, \dots, a_{\#A}\}$, we have, by Lemma~\ref{EstQG}, \begin{gather*} \big\|G^{1/2} Q_A(s)G^{-1/2}\big\| \le D(\hat{\varrho})^{\# A}. \end{gather*} Thus, by (\ref{Cycle}) and the cyclic property of the trace, \begin{gather} \Phi_1 \le D(\hat{\varrho})^{2\# A} \operatorname{tr} \big[G^{1/2} Q_1(s)Q_2(s)Q_2(s)Q_1(s) G^{1/2}\big]. \label{Cycle2} \end{gather} As for $\Phi_2$, we have \begin{gather*} \Phi_2 \le \|Q_B(s)\|^2 \operatorname{tr}\big[ G^{1/2} Q_C^*(s) Q_2(s)Q_1(s)Q_1(s)Q_2(s)Q_C(s)G^{1/2} \big]. \end{gather*} By an argument similar to the one in the proof of (\ref{Cycle2}), one obtains that \begin{gather} \operatorname{tr} \big[ G^{1/2} Q_C^*(s) Q_2(s)Q_1(s) Q_1(s)Q_2(s)Q_C(s) G^{1/2} \big] \nonumber\\ \qquad {}\le D(\hat{\varrho})^{2\# C} \operatorname{tr} \big[G^{1/2} Q_2(s)Q_1(s)Q_1(s)Q_2(s) G^{1/2}\big]. 
\label{TrQC} \end{gather} By using the fact $\|Q_B(s)\| \le \big(\frac{\sqrt{2}}{\nu} \|\hat{\varrho}\|_{*}\big)^{2\# B}$ and (\ref{TrQC}), we have \begin{gather} \Phi_2 \le \left( \frac{\sqrt{2}}{\nu} \| \hat{\varrho}\|_{*} \right)^{2\# B} D(\hat{\varrho})^{2\# C} \operatorname{tr} \big[ G^{1/2} Q_2(s)Q_1(s)Q_1(s)Q_2(s) G^{1/2} \big]. \label{Cycle3} \end{gather} Combining (\ref{Phi}), (\ref{Cycle2}) and (\ref{Cycle3}), we arrive at \begin{align*} |\la Q_I\ra| & \le \left( \frac{\sqrt{2}}{\nu} \| \hat{\varrho} \|_{*} \right)^{\# B} D(\hat{\varrho})^{\# A+\# C} \la Q_1Q_2Q_2Q_1\ra^{1/2}\la Q_2Q_1Q_1Q_2\ra^{1/2}\\ & \le c_L^{2j-4} \la Q_1Q_2Q_2Q_1\ra^{1/2} \la Q_2Q_1Q_1Q_2\ra^{1/2}. \end{align*} Because $\la Q_1Q_2Q_2Q_1\ra=\la Q_2Q_1Q_1Q_2\ra$, we obtain the desired result. \end{proof} \subsection{Completion of the proof of Theorem \ref{Rto7}} First, remark that $\lim\limits_{\Lambda\to \infty} \lim\limits_{L\to \infty}E_{L, \Lambda}=E $ and $\lim\limits_{\Lambda\to \infty} \lim\limits_{L\to \infty}E_{L, \Lambda}(R)=E(R) $ by Proposition~\ref{ResolCon}. We divide $\mathcal{I}_{2j}^{(e)}$ as $\mathcal{I}_{2j}^{(e)}=\mathcal{I}^{(e)}_{2j, 1}\cup \mathcal{I}_{2j, 2}^{(e)}$, where \begin{gather*} \mathcal{I}_{2j ,\alpha}^{(e)}=\big\{I\in \mathcal{I}_{2j}^{(e)}\, |\, \mbox{$I$ satisfies the condition in Case $\alpha$}\big\},\qquad \alpha=1, 2. \end{gather*} Note that $\#\mathcal{I}^{(e)}_{2j, 1}=2j-1$ and $\# \mathcal{I}_{2j, 2}^{(e)}\le 2^{2j}$. By (\ref{ExpansionF}) and (\ref{QI=2}), one obtains that \begin{gather*} 2E_{L, \Lambda}-E_{L, \Lambda}(R)=\mathscr{A}+\mathscr{B}+\sum_{j\ge 3}\sum_{I\in \mathcal{I}_{2j, 1}^{(e)}} \la Q_I\ra+\sum_{j\ge 3} \sum_{I\in \mathcal{I}_{2j, 2}^{(e)}} \la Q_I\ra, \end{gather*} where $\mathscr{A}$ and $\mathscr{B}$ are defined by (\ref{DefmathA}) and (\ref{DefmathB}), respectively. 
Therefore, \begin{gather} \big| R^7 \big\{ 2E_{L, \Lambda}-E_{L, \Lambda}(R)-\mathscr{A} \big\} \big| \le R^7\mathscr{B}+\sum_{j\ge 3} \sum_{I\in \mathcal{I}_{2j, 1}^{(e)}} R^7 |\la Q_I\ra|+\sum_{j\ge 3} \sum_{I\in \mathcal{I}_{2j, 2}^{(e)}} R^7 |\la Q_I\ra|. \label{Last0} \end{gather} We will estimate the three terms on the right-hand side of (\ref{Last0}). By Lemma~\ref{Error}, we can easily control the first term. As for the second term, by Lemma~\ref{Case1}, we have \begin{align} \lim_{\Lambda\to \infty} \lim_{L\to \infty}\sum_{j\ge 3} \sum_{I\in \mathcal{I}_{2j, 1}^{(e)}} R^7 |\la Q_I\ra| \underset{{\rm Lemma~\ref{Case1}}}\le & R^{-2} \sum_{j\ge 3}\big( \# \mathcal{I}^{(e)}_{2j, 1} \big) C \alpha_{\mathrm{E, at}}^2 \left( \frac{\|\hat{\varrho}\|_{L^2}^2}{3\nu^2} \right)^{j-2}\nonumber\\ =\ \ \ \ &R^{-2} C \alpha_{\mathrm{E, at}}^2\sum_{j\ge 3}(2j-1) \left( \frac{\|\hat{\varrho}\|_{L^2}^2}{3\nu^2} \right)^{j-2}. \label{Last1} \end{align} Note that because $\|\hat{\varrho}\|_{L^2}^2/3\nu^2<1$, the right-hand side of (\ref{Last1}) converges. On the other hand, using Lemma~\ref{QIEst}, one obtains that \begin{align} \sum_{j\ge 3} \sum_{I\in \mathcal{I}_{2j, 2}^{(e)}} R^7 |\la Q_I\ra| \underset{{\rm Lemma~\ref{QIEst}}}{\le}& \sum_{j\ge 3} \big( \# \mathcal{I}_{2j, 2}^{(e)} \big) c_L^{2j-4} R^7\la Q_2Q_1Q_1Q_2\ra\nonumber\\ \le\ \ \ \ & \sum_{j\ge 3} 2^{2j} c_L^{2j-4} R^7\la Q_2Q_1Q_1Q_2\ra. \label{Last2} \end{align} Note that because $\lim\limits_{L\to \infty}c_L=c_{\infty}<1/2$, the right-hand side of~(\ref{Last2}) converges, provided that~$L$ is sufficiently large. 
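As a purely numerical sanity check of the convergence claim for the series in (\ref{Last2}) (an illustration only, not part of the proof; the value $c=0.3$ is an arbitrary stand-in for some $c_{\infty}<1/2$), note that $\sum_{j\ge 3} 2^{2j}c^{2j-4}=c^{-4}\sum_{j\ge 3}(2c)^{2j}$ is geometric in $(2c)^2$ and can be summed in closed form:

```python
# Sanity check (not part of the proof): for c < 1/2 the series
# sum_{j>=3} 2**(2j) * c**(2j-4) is geometric with ratio (2c)**2 < 1,
# with closed form c**(-4) * (2c)**6 / (1 - (2c)**2).

def tail_sum(c, terms=200):
    """Partial sum of sum_{j>=3} 2**(2*j) * c**(2*j - 4)."""
    return sum(2 ** (2 * j) * c ** (2 * j - 4) for j in range(3, 3 + terms))

def closed_form(c):
    r = (2 * c) ** 2  # common ratio of the geometric series
    return c ** (-4) * (2 * c) ** 6 / (1 - r)

c = 0.3  # any c < 1/2, mimicking c_infinity < 1/2
assert abs(tail_sum(c) - closed_form(c)) < 1e-9
```

This matches the prefactor $c_{\infty}^{-4}\sum_{j\ge 3}(2c_{\infty})^{2j}$ that appears in the final estimate.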
Combining (\ref{Last0}), (\ref{Last1}) and (\ref{Last2}), and using Lemma~\ref{Error}, we finally arrive at \begin{gather*} \lim_{R\to \infty}\left|R^7\left\{2E-E(R)-\frac{23}{4\pi}\left(\frac{1}{2\pi}\right)^2\left( \frac{1}{4}\alpha_{\mathrm{E, at}}\right)^2\right\} \right|\\ \qquad {} \le \left\{2 +c_{\infty}^{-4}\sum_{j\ge 3}(2c_{\infty})^{2j}\right\} \lim_{R\to \infty } \lim_{\Lambda\to \infty} \lim_{L\to \infty} R^7\la Q_1Q_2Q_2Q_1\ra\\ \qquad\quad{} +\lim_{R\to \infty}R^{-2}C \alpha_{\mathrm{E, at}}^2\sum_{j\ge 3}(2j-1) \left(\frac{\|\hat{\varrho}\|_{L^2}^2}{3\nu^2}\right)^{j-2} =0. \end{gather*} This concludes the proof of Theorem~\ref{Rto7}. \section{Discussions}\label{Discuss} \subsection{Indistinguishability of the electrons} The original Hamiltonian $H_{\mathrm{2e}}$ respects the indistinguishability of the electrons, i.e., the Hamiltonian is unchanged under the exchange $x_1 \leftrightarrow x_2+r$. In contrast to this, the approximated Hamiltonian~$H_{\mathrm{D2e}}$ breaks the indistinguishability. Nevertheless, the Hamiltonian~$H_{\mathrm{D2e}}$ does explain the Casimir--Polder potential, as we show in Theorem~\ref{Rto7}. The distinguishability comes from the assumptions (C.1) and (C.2); however, justifying these assumptions remains an open problem. One way to avoid the unjustified derivation of~$H_{\mathrm{D2e}}$ is to directly start with the Hamiltonian~$H$ given by~(\ref{DefH}) without the last term, which can for instance be directly taken from \cite[equation~(13.127)]{Spohn} and then extended to the two-particle case. Alternatively and equivalently, the many-particle case is presented, e.g., in \cite[Section~4]{Loudon}. If we start from this form, the necessary assumptions are stated as follows: \begin{itemize}\itemsep=0pt \item We assume distinguishability of the two electrons by localizing electron $1$ at $0$, such that electron 1 experiences the field $\hat{E}(0)$, while electron $2$ is localized at~$r$ and hence experiences the field $\hat{E}(r)$. 
\item We discard all self-interaction terms and approximate the atomic Coulomb potential by a~harmonic potential. \end{itemize} In this manner, we can construct a minimal QED model which describes the Casimir--Polder potential. Note that, since particles $1$ and $2$ communicate only via the photon field, and due to distinguishability, the actual choice of coordinate systems is immaterial, so that we can choose for particle~$2$ a coordinate system that is centered at $r$. \subsection{Cancellation mechanism of the van der Waals--London force} As we showed in \cite{MS1}, the attractive $R^{-7}$ decay (the retarded van der Waals potential) appears due to the exact cancellation of the terms with~$R^{-6}$ decay (the van der Waals--London potential) originating from $V_R$ by the contribution from the quantized radiation field. Note that the conditions (A.1)--(A.3) are assumed in~\cite{MS1} as well, but (C.1) and (C.2) are not. As we saw in the present paper, this kind of cancellation mechanism cannot be reproduced under the conditions (C.1), (C.2) and (A.1)--(A.4). In this sense, our assumptions, especially~(C.1) and~(C.2), would be unphysical. In much of the literature, the retardation of the van der Waals potential is examined under the condition (C.1) alone. In these studies, the cancellation of the terms with $R^{-6}$ decay is presupposed, and only fourth-order perturbation theory is performed, without estimating higher-order terms.\footnote{A kind of weak cancellation mechanism is discussed in~\cite{Koppen} by imposing an infrared cutoff.} As far as we know, establishing the exact cancellation mechanism under the condition (C.1) alone is still an open problem. This problem could be a key to achieving a mathematically complete understanding of the retarded van der Waals potential.
\section{Introduction} \label{sec:introduction} \begin{figure*} \includegraphics[width=\textwidth]{overview.pdf} \caption{Method overview: Wikipedia articles contain a textual description of a subject and are accompanied by illustrative images supporting the text. These images are often accompanied by captions. A Latent Dirichlet Allocation (LDA) \cite{blei2003latent} topic modeling framework generates a global contextual representation of the textual information from the entire text article. The same LDA model generates a local contextual representation from the per-image caption. These two text representations are jointly used to supervise the training of a deep CNN.} \label{fig:overview} \end{figure*} The emergence of large-scale annotated datasets such as ImageNet~\cite{deng2009imagenet}, Places~\cite{zhou2014learning} and MS-COCO~\cite{lin2014microsoft} has undoubtedly been one of the key ingredients for the tremendous impact of deep learning on almost every computer vision task. However, the supervised learning setup has a major issue at large scale: collecting and manually annotating those datasets requires a great amount of human effort. On the other hand, the annotations on such datasets are usually limited to discrete sets of popular visual classes, which may not be an optimal training setup for cross-modal retrieval datasets that usually cover a broader and richer set of semantic concepts. As an alternative to the fully supervised setup, self-supervised learning methods aim at learning discriminative visual features by designing auxiliary tasks for which the target labels are free to obtain. 
These labels provide supervision for the training of computer vision models the same way as in supervised learning, but the supervisory signal can be directly obtained from the training data, either from the images themselves~\cite{doersch2015unsupervised,pathak2016context} (uni-modal training) or from a complementary modality that is naturally correlated with them~\cite{agrawal2015learning,owens2016ambient,gomez2017self} (multi-modal training). Unlike supervised learning, where visual features are learnt from human-generated labels, in self-supervised learning labels are automatically obtained from the training data. In this paper we present a self-supervised cross-modal retrieval framework that leverages as supervisory signal the correlations found between images and text in a large collection of illustrated articles, in order to learn discriminative visual features that could potentially transfer well to any general computer vision task, such as image classification or object detection. We hypothesize that representations learned with such an approach can be used more naturally in a cross-modal retrieval framework than representations learned from datasets annotated for image classification. Our intuition follows from the observation that illustrated encyclopedic articles, such as those of Wikipedia, are well organized and contain a detailed textual description of their subject, while certain aspects of the subject are illustrated by images. Those images complement the text and at the same time provide context to our imagination. Furthermore, the captions associated with these images specifically describe their contents. These observations, and the large-scale availability of such articles, lead us to treat representation learning for cross-modal retrieval as a self-supervised visual representation learning task. 
We demonstrate that rich visual representations can be learned by training a network to predict the global (article-level) and local (caption-level) semantic contexts in which an image appears, and that, at the same time, the learned representations can be used to perform cross-modal retrieval with promising results. Gomez and Patel~\mbox{\emph{et al.}}~\cite{gomez2017self,patel2018texttopicnet} previously proposed self-supervised representation learning using Wikipedia articles. Their method consists of learning a Latent Dirichlet Allocation (LDA) model from the entire corpus of text articles, and then training a CNN to predict the semantic context of images by using as training labels the semantic-level representations (the probability distributions over semantic topics) of the articles in which they appear, as provided by the LDA model. An assumption made in their method is that all the images within a given text article have the same target semantic representation, which is obtained from the LDA model. However, images within a Wikipedia article can be drastically different in terms of appearance and semantic content. To overcome this, we create a new \emph{Wikipedia dataset with captions}, which is similar to the one used in the TextTopicNet~\cite{gomez2017self,patel2018texttopicnet} method but also contains the image captions from Wikipedia. Thus, as illustrated in Figure~\ref{fig:overview}, the training data in our method comes in a triplet form (image, text article, image caption). Our intuition is that adding another target representation based on image captions provides more image-specific self-supervision during training. Furthermore, we experimentally show that our training procedure leads to significantly better results for both cross-modal retrieval and image classification. 
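The two-target supervision just described can be sketched numerically. The sketch below is a minimal illustration under our own assumptions (random stand-ins for the image feature, the two prediction heads and the LDA topic distributions; the topic count, feature dimension and loss weighting are arbitrary), and not the actual training code: the network output is pushed toward the article-level (global) topic distribution and the caption-level (local) one via a summed cross-entropy loss.

```python
import numpy as np

def cross_entropy(pred, target, eps=1e-12):
    """Cross-entropy between a predicted and a target topic distribution."""
    return -np.sum(target * np.log(pred + eps))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

# Illustrative setup: 40 LDA topics, a fake image feature, and two linear
# "heads" standing in for the CNN's final layers (all hypothetical).
rng = np.random.default_rng(0)
n_topics, feat_dim = 40, 128
image_feat = rng.normal(size=feat_dim)
W_global = rng.normal(scale=0.01, size=(n_topics, feat_dim))
W_local = rng.normal(scale=0.01, size=(n_topics, feat_dim))

# Targets: LDA topic distributions of the whole article (global) and of
# the image caption (local); here random stand-ins normalized to sum to 1.
t_global = rng.random(n_topics); t_global /= t_global.sum()
t_local = rng.random(n_topics); t_local /= t_local.sum()

p_global = softmax(W_global @ image_feat)
p_local = softmax(W_local @ image_feat)

lam = 1.0  # weighting between global and local terms (an assumption)
loss = cross_entropy(p_global, t_global) + lam * cross_entropy(p_local, t_local)
assert loss >= 0.0
```

In the actual method the two heads sit on top of a shared CNN and the whole model is trained end-to-end; the point here is only the structure of the two-target objective.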
Following are the major contributions made in this paper: \begin{itemize} \item We propose a multi-task learning framework to train a CNN that predicts text representations obtained from text articles (global context) and per-image captions (local context). \item We experimentally demonstrate that the visual features learned in a self-supervised manner are generic enough for other computer vision tasks and outperform other self-supervised and naturally supervised approaches on standard benchmarks. \item Without using any form of semantic information, our method outperforms both unsupervised and supervised approaches on cross-modal retrieval (image-to-text and text-to-image) benchmarks on the Wikipedia \cite{rasiwasia2010new} and Pascal sentences \cite{farhadi2010every} datasets. \item The Wikipedia image-article dataset \cite{patel2018texttopicnet} consists only of images and text articles; as an auxiliary contribution, we release a large-scale dataset obtained from English Wikipedia consisting of images, per-image captions and co-occurring text articles. \end{itemize} \begin{figure*} \includegraphics[width=\textwidth]{dataset_samples.jpg} \caption{Samples from the Wikipedia dataset with captions. In the method, the shown captions provide local, image-specific information, whereas the entire text article provides global subject information.} \label{fig:dataset_samples} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{supervised_self_supervised.pdf} \caption{Supervised (left) vs. self-supervised training (right). In supervised training the ground-truth labels $y_{i}$ are collected by human annotation. In self-supervised training, a transformation of a part of the input data is used as the target label for training.} \label{fig:supervised_self_supervised} \end{figure*} The rest of the paper is structured as follows. In Section \ref{sec:related_work}, previous work is reviewed. 
In Section \ref{sec:dataset}, the details of the training dataset are elaborated. In Section \ref{sec:method} the proposed method is described and in Section \ref{sec:experiments} evaluated. The paper is concluded in Section \ref{sec:conclusion}. \section{Related Work} \label{sec:related_work} \subsection{Self-Supervised Visual Representations} As an alternative to fully-supervised algorithms, there has recently been a growing interest in self-supervised or naturally-supervised approaches that make use of non-visual signals, intrinsically correlated with the image, as a form of supervision for visual feature learning. The objective of those methods is to learn visual representations (without human annotations) that are generic enough to work well across a wide range of object classes and at the same time are discriminating enough to be useful for classical computer vision tasks such as image classification, object detection, semantic segmentation, etc. \subsubsection{Unsupervised Visual Representation Learning} Work on unsupervised data-dependent methods for learning visual features has mainly focused on algorithms that learn filters one layer at a time. A number of unsupervised algorithms have been proposed to that effect, such as sparse coding, restricted Boltzmann machines (RBMs), auto-encoders~\cite{zhao2015stacked}, and K-means clustering~\cite{coates2010analysis,dundar2015convolutional,krahenbuhl2015data}. However, despite the success of such methods on several unsupervised learning benchmark datasets, a generic unsupervised method that works well with real-world images does not exist. Bojanowski and Joulin~\cite{bojanowski2017unsupervised} present an approach for unsupervised learning of visual features using a Noise As Target (NAT) label for training. Their approach is domain agnostic and makes use of a fixed set of target labels for training. 
The primary difference between our work and \cite{bojanowski2017unsupervised} is that in our work the final network predictions are directly useful for a specific task: cross-modal matching and retrieval. \subsubsection{Uni-modal Self-Supervised Methods} In contrast to the purely supervised approaches, uni-modal self-supervised algorithms make use of the structure in the visual data itself for the purpose of representation learning. Agrawal \mbox{\emph{et al.}}~\cite{agrawal2015learning} make use of egomotion information obtained by odometry sensors mounted on a vehicle. They train a network using a contrastive loss formulation \cite{mobahi2009deep} to predict the camera transformations between image pairs. Wang and Gupta~\cite{wang2015unsupervised,wang2017transitive} make use of videos as training data and use the relative motion of objects as the supervisory signal for training. The relative motion information is obtained by using a standard unsupervised tracking algorithm. A Siamese-triplet network is then trained using a ranking loss function. Pathak \mbox{\emph{et al.}}~\cite{pathak2016context} take inspiration from auto-encoders and propose a context encoder. They train a network using a combination of an L2 loss and an adversarial loss to generate arbitrary image regions conditioned on their surroundings. Doersch \mbox{\emph{et al.}}~\cite{doersch2015unsupervised} use spatial context, such as the relative position of patches within an image, to make the network learn objects and object parts. Our proposed method is different from all of these methods since it makes use of a multi-modal auxiliary task for training. Further, by training the network to predict the local and global contexts in which an image appears as an illustration, the learned representations can be used directly for cross-modal retrieval. 
Our work is more closely related to the multi-modal self-supervised approaches, as elaborated in the next section. \subsubsection{Multimodal Self-Supervised Methods} Multi-modal self-supervised learning algorithms attempt to utilize the structure in one modality to provide training supervision for a co-occurring modality. Owens \mbox{\emph{et al.}}~\cite{owens2016ambient} make use of sound as a modality to provide the supervisory signal. They do so by training a deep CNN to predict a hand-crafted statistical summary of the sound associated with a video frame. Gomez and Patel~\mbox{\emph{et al.}} \cite{gomez2017self} make use of Wikipedia documents, which consist of text articles and co-occurring images. First, a Latent Dirichlet Allocation (LDA) \cite{blei2003latent} topic model is learned on the entire Wikipedia dataset. Second, text articles are represented in the form of topic probabilities using the learned LDA model. Finally, a convolutional neural network is trained on the images in Wikipedia, where the target label is the representation of the corresponding text article. Our work is most closely related to \cite{gomez2017self,patel2018texttopicnet,patel2016dynamic}; however, as previously mentioned, their approach makes use of the same target representation for all images within a text article. This not only leads to sub-optimal performance but also completely ignores the local context of an image. \subsection{Cross-Modal Representation Learning} Representation learning methods for cross-modal retrieval fall into two general categories: (a) \textit{real-valued}, (b) \textit{binary-valued}. The binary methods are more focused on efficiency and aim to map the items from different modalities onto a common binary Hamming space \cite{zhu2013linear,song2013inter,shen2016fast,xu2017learning}. Our approach falls in the category of \textit{real-valued} methods. 
Within this category of methods, the training for cross-modal retrieval can be unsupervised \cite{andrew2013deep,hardoon2004canonical,srivastava2012multimodal,feng2014cross,yan2015deep} or supervised \cite{gong2014multi,wang2016joint,wang2013learning,zhai2014learning}. Zhang~\mbox{\emph{et al.}} \cite{zhao2015stacked} propose a multimodal hashing method, called semantic correlation maximization (SCM), which integrates semantic labels into the hash learning procedure. This method uses label vectors to obtain a semantic similarity matrix and tries to reconstruct it through the learned hash codes. Gong~\mbox{\emph{et al.}} \cite{gong2014multi} propose a novel three-view CCA (CCA-3V) framework, which explicitly incorporates the dependence of visual features and text on the underlying semantics. Wang~\mbox{\emph{et al.}} \cite{wang2013learning} propose a novel regularization framework for the cross-modal matching problem, called LCFS (Learning Coupled Feature Spaces). It unifies coupled linear regressions, the $l_{21}$-norm and the trace norm into a generic minimization formulation so that subspace learning and coupled feature selection can be performed simultaneously. Furthermore, they extend this framework to the case of more than two modalities in \cite{wang2016joint}, where the extended version is called JFSSL (Joint Feature Selection and Subspace Learning). Wang~\mbox{\emph{et al.}} \cite{wang2017adversarial} propose an adversarial learning approach for cross-modal retrieval. The method is built around the idea of a min-max game between two players: a modality classifier that distinguishes items in terms of their modalities, and a feature projector that generates modality-invariant and discriminative representations, aiming to confuse the modality classifier. 
While most of these supervised or unsupervised approaches attempt to learn a common embedding space for the purpose of cross-modal retrieval, they assume that the visual representations are provided by a CNN (either AlexNet \cite{krizhevsky2012imagenet} or VGG-16 \cite{simonyan2014very}) pre-trained on the ImageNet dataset \cite{russakovsky2015imagenet}. The cost (human annotation effort) of this pre-training is not accounted for by cross-modal retrieval methods. Further, the underlying assumption is that ImageNet pre-trained features transfer well to cross-modal retrieval. The proposed method investigates these two aspects: first, we do not make use of ImageNet pre-training and instead use self-supervised visual representations. Second, in the experiments, we train the network just once on our dataset and perform no form of adaptation on the test datasets. This demonstrates that our proposed method is capable of learning a general-purpose, category-agnostic cross-modal retrieval system. \section{Wikipedia Dataset with Captions} \label{sec:dataset} In order to obtain a training dataset for our method, we scraped the entire English Wikipedia, considering only articles with at least 50 words that are illustrated with at least one image. Similarly to the preprocessing of the ImageCLEF dataset, we filtered out small images ($<256$ pixels) and images with formats other than JPG. Furthermore, we only considered images for which captions are available. With these constraints, our dataset consists of $1.8$ million images with captions and the associated text articles in which they appear. Figure \ref{fig:dataset_samples} shows samples from the dataset. \section{Method} \label{sec:method} In this section, we first elaborate on the core distinction between supervised and self-supervised training.
Then we discuss Latent Dirichlet Allocation (LDA) \cite{blei2003latent}, which is used for representing text articles and image captions, and thus for generating target representations for training the CNN. Finally, we go over the training of the CNN. \subsection{Self-Supervised Learning} \label{sec:self_supervised} Supervised methods learn rich visual representations from large collections of training data. This data always carries human annotations, $D=\{(x_{1},y_{1}), (x_{2},y_{2}), \ldots, (x_{N},y_{N})\}$, and the deep network is trained to minimize the overall risk term: \begin{equation} \small R=\sum_{i=1}^{N}[loss(f(x_{i},\Theta),y_{i})] \end{equation} where $\Theta$ are the parameters of the deep network. Unlike supervised approaches, self-supervised methods train without making use of any human annotations. The training data, $D=\{(x_{1}), (x_{2}), \ldots, (x_{N})\}$, can be sub-divided into components, and one or more components can be used to provide self-supervision for the others. The data is thus represented as $D=\{(x_{1}^{+}, x_{1}^{-}), (x_{2}^{+}, x_{2}^{-}), \ldots, (x_{N}^{+}, x_{N}^{-})\}$ and the training for one component is governed by the other, changing the overall risk term to: \begin{equation} \small R=\sum_{i=1}^{N}[loss(f(x_{i}^{+},\Theta),x_{i}^{-})] \end{equation} Fig.~\ref{fig:supervised_self_supervised} illustrates the difference between supervised and self-supervised approaches. \subsection{Latent Dirichlet Allocation} \label{sec:text_rep} LDA \cite{blei2003latent} is a generative statistical model of a text corpus where each document can be viewed as a mixture of various topics, and each topic is characterized by a probability distribution over words. LDA can be represented as a three-level hierarchical Bayesian model.
Given a text corpus consisting of $M$ documents and a dictionary with $N$ words, Blei \mbox{\emph{et al.}} define the generative process~\cite{blei2003latent} for a document $d$ as follows: \begin{itemize} \item{Choose $\theta \sim Dirichlet(\alpha)$.} \item{For each of the $N$ words $w_n$ in $d$:} \begin{itemize} \item{Choose a topic $z_n \sim Multinomial(\theta)$.} \item{Choose a word $w_n$ from $P(w_n \mid z_n, \beta)$, a multinomial probability conditioned on the topic $z_n$.} \end{itemize} \end{itemize} \noindent where $\theta$ is the mixing proportion and is drawn from a Dirichlet prior with parameter $\alpha$, and both $\alpha$ and $\beta$ are corpus-level parameters, sampled once in the process of generating a corpus. Each document is generated according to the topic proportions $z_{1:K}$ and the word probabilities $\beta$. The probability of a document $d$ in a corpus is defined as: \small \begin{equation} P(d\mid\alpha, \beta) = \int_{\theta}P(\theta \mid\alpha)\left(\prod_{n=1}^{N}\sum_{z_{K}}^{ } P(z_{K} \mid \theta)P(w_{n}\mid z_{K},\beta)\right)d\theta \nonumber \end{equation} \normalsize Learning LDA~\cite{blei2003latent} on a document corpus provides two sets of parameters: word probabilities given topic, $P(w\mid z_{1:K})$, and topic probabilities given document, $P(z_{1:K} \mid d)$. Therefore each document is represented in terms of topic probabilities $z_{1:K}$ ($K$ being the number of topics) and word probabilities over topics. Any new (unseen) document can be represented in terms of a probability distribution over the topics of the learned LDA model by projecting it into the topic space. \subsection{Network Architecture} \label{sec:network_architecture} Throughout our experiments, we make use of the AlexNet architecture \cite{krizhevsky2012imagenet}.
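As an illustration, the generative process above can be simulated with a few lines of stdlib Python. This is a toy sketch: the function name, the tiny `alpha`/`beta` inputs and the Gamma-based Dirichlet draw are our own choices, not part of any LDA implementation used in the paper.

```python
import random

def sample_document(alpha, beta, n_words, rng=random.Random(0)):
    """Sample one toy document from the LDA generative process.

    alpha : list of K Dirichlet concentration parameters.
    beta  : K x V list of per-topic word distributions P(w | z).
    """
    # Draw theta ~ Dirichlet(alpha) via normalized Gamma draws.
    gammas = [rng.gammavariate(a, 1.0) for a in alpha]
    total = sum(gammas)
    theta = [g / total for g in gammas]

    def categorical(probs):
        # Inverse-CDF sampling from a discrete distribution.
        r, acc = rng.random(), 0.0
        for i, p in enumerate(probs):
            acc += p
            if r <= acc:
                return i
        return len(probs) - 1

    words = []
    for _ in range(n_words):
        z = categorical(theta)    # topic z_n ~ Multinomial(theta)
        w = categorical(beta[z])  # word  w_n ~ P(w | z_n, beta)
        words.append(w)
    return theta, words
```

With a peaked `beta`, each topic emits a distinct vocabulary, which is exactly the structure that LDA inference later recovers when projecting a document into the topic space.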
The choice of AlexNet is justified because most of the existing self-supervised methods make use of this same architecture \cite{wang2016unsupervised,agrawal2015learning,owens2016ambient,gomez2017self,patel2018texttopicnet,pathak2016context}. Further, we compare to cross-modal retrieval methods with reported performance using AlexNet \cite{hardoon2004canonical,rasiwasia2010new,sharma2012generalized,gong2014multi,wang2013learning,wang2016joint}. The use of the AlexNet architecture is thus essential for fair comparisons. As shown in Figure \ref{fig:overview}, the architecture is identical to standard AlexNet \cite{krizhevsky2012imagenet} up to the \textit{fc7} layer, which is followed by two fully-connected branches: one predicting caption-level topic probabilities and the other predicting article-level topic probabilities. \begin{table*} \resizebox{\textwidth}{!}{ \begin{tabular}{l | c | c | c | c | c | c | c | c | c | c | c | c | c | c | c | c | c | c | c | c } \toprule Method &aer &bk &brd &bt &btl &bus &car &cat &chr &cow &din &dog &hrs &mbk &prs &pot &shp &sfa &trn &tv \\ \midrule Ours &\textbf{73} &\textbf{56} &\textbf{49} &\textbf{65} &\textbf{26} &\textbf{50} &\textbf{73} &\textbf{46} &\textbf{48} &\textbf{38} &\textbf{45} &\textbf{42} &73 &\textbf{64} &\textbf{86} &\textbf{34} &\textbf{44} &\textbf{44} &\textbf{74} &\textbf{48} \\ \midrule TextTopicNet (Wikipedia) \cite{patel2018texttopicnet} & 71 & 52 & 47 & 61 & 26 & 49 & 71 & 46 & 47 & 36 & 44 & 41 & 72 & 62 & 85 & 31 & 40 & 42 & 72 & 44\\ TextTopicNet (ImageCLEF) \cite{gomez2017self} &67 &44 &39 &53 &20 &49 &68 &42 &43 &33 &41 &35 &70 &57 &82 &30 &31 &39 &65 &41 \\ Sound~\cite{owens2016ambient} &69 &45 &38 &56 &16 &47 &65 & 45 &41 &25 &37 &28 &\textbf{74} &61 &85 &26 &39 &32 &69 &38 \\ Texton-CNN &65 &35 &28 &46 &11 &31 &63 &30 &41 &17 &28 &23 &64 &51 &74 &9 &19 &33 &54 &30 \\ K-means &61 &31 &27 &49 &9 &27 &58 &34 &36 &12 &25 &21 &64 &38 &70 &18 &14 &25 &51 &25\\ Motion~\cite{wang2015unsupervised} &67 &35 &41 &54 &11
&35 &62 &35 &39 &21 &30 &26 &70 &53 &78 &22 &32 &37 &61 &34 \\ Patches~\cite{doersch2015unsupervised} &70 &44 &43 &60 &12 &44 &66 &52 &44 &24 &45 &31 &73 &48 &78 &14 &28 &39 &62 &43 \\ Egomotion~\cite{agrawal2015learning} &60 &24 &21 &35 &10 &19 &57 &24 &27 &11 &22 &18 &61 &40 &69 &13 &12 &24 &48 &28 \\ \midrule ImageNet~\cite{krizhevsky2012imagenet} &79 &\textbf{71} &\textbf{73} &75 &\textbf{25} &60 &80 &\textbf{75} &51 &\textbf{45} &60 &\textbf{70} &\textbf{80} &\textbf{72} &\textbf{91} &42 &\textbf{62} &56 &82 &62 \\ Places~\cite{zhou2014learning} &\textbf{83} &60 &56 &\textbf{80} &23 &\textbf{66} &\textbf{84} &54 &\textbf{57} &40 &\textbf{74} &41 &\textbf{80} &68 &90 &\textbf{50} &45 &\textbf{61} &\textbf{88} &\textbf{63} \\ \bottomrule \end{tabular}} \vspace{0.25em} \caption{PASCAL VOC2007 per-class average precision (AP) scores for the classification task with pool5 features.} \label{tab:pascal_pool5_AP} \end{table*} \subsection{Learning Self-Supervised Representations} Following the formal definition of self-supervised learning in Section \ref{sec:self_supervised}, a multimodal document from Wikipedia can be thought of as a training sample $x_{i}$. This multimodal document consists of a text article $x_{i}^{A}$, image captions $x_{i}^{C}$ and images $x_{i}^{I}$. Let $\Phi(x_{i}^{A})$ and $\Phi(x_{i}^{C})$ be the text topic probability distributions given by LDA (Section \ref{sec:text_rep}) for the document text and the image captions, respectively. The deep CNN is trained to predict the above topic distributions given the corresponding article image, producing as outputs $f_{A}(x_{i}^{I},\Theta)$ (for the article) and $f_{C}(x_{i}^{I},\Theta)$ (for the caption). The loss is computed as the cross entropy between the LDA topic distribution and the predicted distribution.
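The per-sample loss just described, i.e. the cross entropy between each LDA topic distribution and the corresponding branch prediction, can be sketched in stdlib Python. This is illustrative only; the function names and the `eps` smoothing constant are our own choices.

```python
import math

def cross_entropy(target, predicted, eps=1e-12):
    """H(target, predicted) = -sum_k target_k * log(predicted_k)."""
    return -sum(t * math.log(p + eps) for t, p in zip(target, predicted))

def sample_loss(phi_article, phi_caption, pred_article, pred_caption):
    """Sum of article-branch and caption-branch cross entropies
    for one (image, article, caption) training triple."""
    return (cross_entropy(phi_article, pred_article)
            + cross_entropy(phi_caption, pred_caption))
```

Averaging `sample_loss` over the dataset gives the overall empirical risk that the CNN minimizes, with the two branches sharing all layers up to \textit{fc7}.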
The overall risk term on the training data is then: \begin{equation} \small \begin{split} R = -\sum_{i=1}^{N}\lbrack\sum_{topic=1}^{K}\Phi(x_{i}^{A})_{topic}\log(f_{A}(x_{i}^{I},\Theta)_{topic}) \\ + \sum_{topic=1}^{K}\Phi(x_{i}^{C})_{topic}\log(f_{C}(x_{i}^{I},\Theta)_{topic})\rbrack \end{split} \label{eq:overall_risk} \end{equation} where $N$ is the total number of samples in the training data, $K$ is the number of topics in the LDA model \cite{blei2003latent} and $\Theta$ denotes the learned CNN parameters. Note that $K$ is a hyper-parameter and we fix $K=40$ throughout the experiments. \subsection{Training Details} \label{sec:training_details} To learn to predict the target topic probability distributions, we minimize a sigmoid cross-entropy loss, as shown in the overall risk term of Eq. \ref{eq:overall_risk}. We use a Stochastic Gradient Descent (SGD) optimizer with a base learning rate of $0.001$, a step decay by a factor of $0.1$ after every $200,000$ iterations, and momentum of $0.9$. The batch size is set to $128$. With these settings the network converges after $500,000$ iterations of training. \section{Experiments} \label{sec:experiments} We first compare the learned visual representations with other self-supervised methods on the task of image classification on two standard benchmark datasets (Section \ref{sec:image_classification}). Next, we compare our method with various cross-modal retrieval methods (Section \ref{sec:cross_modal_exp}). \subsection{Self-Supervised Features for Image Classification} \label{sec:image_classification} \subsubsection{PASCAL VOC} Self-supervised learned features are tested for image classification on the PASCAL VOC 2007 \cite{everingham2010pascal} dataset. In total there are 9,963 images and 20 semantic classes. The data is split into 50\% for training/validation and 50\% for testing. The classification is multi-label, that is, each image can be classified into multiple classes.
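The step-decay schedule described in the training details above (base learning rate $0.001$, decayed by a factor of $0.1$ every $200{,}000$ iterations) amounts to the following rule. This is a minimal sketch; the function and argument names are ours.

```python
def learning_rate(iteration, base_lr=0.001, step=200_000, gamma=0.1):
    """Step-decay schedule: multiply base_lr by gamma once per
    completed block of `step` iterations."""
    return base_lr * gamma ** (iteration // step)
```

Under this rule the rate is $10^{-3}$ for the first $200{,}000$ iterations, $10^{-4}$ for the next $200{,}000$, and $10^{-5}$ until convergence at $500{,}000$ iterations.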
We extract features from the top layers of the CNN (fc7, fc6, pool5) for each image of the dataset. Then, for each class we perform a grid search over the parameter space of a one-vs-all linear SVM classifier~\footnote{Liblinear implementation from \url{http://scikit-learn.org/}} to optimize its validation accuracy. We then use the best performing parameters to retrain the one-vs-all SVM using both training and validation images. \begin{table} \centering \begin{tabular}{ l | c | c | c | c} \toprule Method & max5 & pool5 & fc6 & fc7 \\ \midrule Ours & - & \textbf{53.8} & \textbf{54.9} & \textbf{56.8} \\ \midrule TextTopicNet (Wikipedia) \cite{patel2018texttopicnet} & - & 51.9 & 54.2 & 55.8\\ TextTopicNet (ImageCLEF) \cite{gomez2017self} & - & $47.4$ & $48.1$ & $48.5$ \\ Sound ~\cite{owens2016ambient} & 39.4 & 46.7 & 47.1 & 47.4 \\ Texton-CNN & 28.9 & 37.5 & 35.3 & 32.5 \\ K-means~\cite{krahenbuhl2015data} & 27.5 & 34.8 & 33.9 & 32.1 \\ Tracking~\cite{wang2015unsupervised} & 33.5 & 42.2 & 42.4 & 40.2 \\ Patch pos.~\cite{doersch2015unsupervised} & 26.8 & 46.1 & - & - \\ Egomotion~\cite{agrawal2015learning} & 22.7 & 31.1 & - & - \\ \midrule ImageNet~\cite{krizhevsky2012imagenet} & \textbf{63.6} & \textbf{65.6} & \textbf{69.6} & \textbf{73.6} \\ Places~\cite{zhou2014learning} & 59.0 & 63.2 & 65.3 & 66.2 \\ \bottomrule \end{tabular} \vspace{0.25em} \caption{PASCAL VOC2007 mAP comparison for image classification with supervised (bottom), and self-supervised (middle) methods. } \label{pascal_SVM_mAP} \end{table} In Tables \ref{tab:pascal_pool5_AP} and \ref{pascal_SVM_mAP}, we compare our results on the PASCAL VOC2007 test set with different state-of-the-art self-supervised learning algorithms, using features from different top layers and SVM classifiers. Our method, which leverages global and local contexts for self-supervised training, achieves state-of-the-art performance, as seen in Table \ref{pascal_SVM_mAP}.
This demonstrates that a network that identifies the global and local semantic contexts in which an image is likely to appear learns better visual representations. In Table \ref{tab:pascal_pool5_AP}, we provide a per-class comparison with various self-supervised and supervised visual representation learning algorithms. Our method performs better than the other self-supervised methods for most of the classes. In the case of the ``bottle'' class, our method even outperforms the fully supervised networks. \subsubsection{SUN 397} Table~\ref{tab:sun_SVM_mAP} compares our results on the SUN397 \cite{xiao2010sun} test set with state-of-the-art self-supervised and supervised learning algorithms. SUN397 \cite{xiao2010sun} consists of 50 training and 50 test images for each of the 397 scene classes. We follow the same evaluation protocol as \cite{owens2016ambient,agrawal2015learning} and make use of 20 images per class for training and the remaining 30 for validation. We evaluate our method on three different partitions of training and testing data and report the average performance. This scene classification dataset is well suited for the evaluation of self-supervised approaches, as it contains less frequently occurring classes and is thus more challenging than the PASCAL VOC 2007 dataset. Our method outperforms all other self-supervised methods in this experiment. We observe that features from the \textit{fc6} layer give better performance than features from the \textit{fc7} layer. This indicates that the \textit{fc6} and \textit{pool5} layers of our network are more robust towards uncommon classes.
\begin{table} \begin{tabular}{ l | c | c | c | c} \toprule Method & max5 & pool5 & fc6 & fc7 \\ \midrule Ours & - & \textbf{30.3} & \textbf{33.5} & \textbf{28.2} \\ \midrule TextTopicNet (Wikipedia) \cite{patel2018texttopicnet} & - & 28.8 & 32.2 & 27.7 \\ Sound ~\cite{owens2016ambient} & 17.1 & 22.5 & 21.3 & 21.4 \\ Texton-CNN & 10.7 & 15.2 & 11.4 & 7.6 \\ K-means~\cite{krahenbuhl2015data} & 11.6 & 14.9 & 12.8 & 12.4 \\ Tracking~\cite{wang2015unsupervised} & 14.1 & 18.7 & 16.2 & 15.1 \\ Patch pos.~\cite{doersch2015unsupervised} & 10.0 & 22.4 & - & - \\ Egomotion~\cite{agrawal2015learning} & 9.1 & 11.3 & - & - \\ \midrule ImageNet~\cite{krizhevsky2012imagenet} & 29.8 & 34.0 & 37.8 & 37.8 \\ Places~\cite{zhou2014learning} & \textbf{39.4} & \textbf{42.1} & \textbf{46.1} & \textbf{48.8} \\ \bottomrule \end{tabular} \vspace{0.25em} \caption{SUN397 accuracy for image classification with supervised (bottom), and self-supervised (middle) methods.} \label{tab:sun_SVM_mAP} \end{table} \subsection{Cross-Modal Retrieval} \label{sec:cross_modal_exp} As seen in Fig. \ref{fig:overview}, the final layer of the network projects images onto the same representation space as the text, as obtained by the LDA model (Section \ref{sec:text_rep}). Therefore, cross-modal retrieval can be performed directly by using the LDA topic probabilities for text and the final network predictions for images. Since both the LDA encoding and the CNN output represent probability distributions, we use the KL divergence as a distance metric to sort the samples of the target modality. Note that our comparisons are made with existing methods with reported performance using the ImageNet pre-trained AlexNet \cite{krizhevsky2012imagenet} architecture for image representations and LDA \cite{blei2003latent} or BoW representations for text.
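The retrieval rule just described, sorting target-modality samples by KL divergence between topic distributions, can be sketched in stdlib Python. This is illustrative; the helper names and the `eps` smoothing constant are our own choices.

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) for two discrete topic distributions."""
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

def rank_by_kl(query_topics, candidates):
    """Sort candidate (id, topic_distribution) pairs by increasing
    KL divergence from the query's topic distribution."""
    return sorted(candidates, key=lambda c: kl_divergence(query_topics, c[1]))
```

For text-to-image retrieval, `query_topics` is the LDA projection of the query text and each candidate distribution is the CNN output for an image; for image-to-text retrieval the roles are swapped.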
\subsubsection{Wikipedia} We use the Wikipedia retrieval dataset \cite{rasiwasia2010new}, which consists of 2,866 image-article pairs split into a train and a test set of 2,173 and 693 pairs, respectively. Further, each image-document pair is labeled with one of ten semantic classes \cite{rasiwasia2010new}. In Table \ref{table:multi_modal_retrieval_wiki} we compare our results with the supervised and unsupervised multi-modal retrieval methods discussed in ~\cite{wang2016comprehensive} and ~\cite{kang2015cross}. Supervised methods make use of the class or categorical information associated with each image-document pair, whereas unsupervised methods do not. All of these methods use LDA for text representation and CNN features from a pre-trained CaffeNet, trained on the ImageNet dataset in a supervised setting. We observe that the self-supervised baseline method outperforms the unsupervised approaches, and is competitive with the supervised methods without using any labeled data. \begin{table} \centering \begin{tabular}{l | c | c | c} \toprule Method & \shortstack{Image\\Query} & \shortstack{Text\\Query} & Average \\ \midrule Ours & $39.10$ & $\textbf{43.40}$ & $\textbf{41.25}$\\ TextTopicNet (Wikipedia) \cite{patel2018texttopicnet}& $37.63$ & $40.25$ & $38.94$\\ TextTopicNet (ImageCLEF) \cite{gomez2017self} & $39.58$ & $38.16$ & $38.87$\\ \midrule CCA \cite{hardoon2004canonical,rasiwasia2010new}& $19.70$ & $17.84$ & $18.77$ \\ PLS \cite{rosipal2006overview} & $30.55$ & $28.03$ & $29.29$ \\ \midrule SCM* \cite{rasiwasia2010new}& $37.13$ & $28.23$ & $32.68$ \\ GMMFA* \cite{sharma2012generalized} & $38.74$ & $31.09$ & $34.91$ \\ CCA-3V* \cite{gong2014multi} & $40.49$ & $36.51$ & $38.50$ \\ GMLDA* \cite{sharma2012generalized} & $40.84$ & $36.93$ & $38.88$ \\ LCFS* \cite{wang2013learning}& $41.32$ & $38.45$ & $39.88$ \\ JFSSL* \cite{wang2016joint}& $\textbf{42.79}$ & $39.57$ & $41.18$ \\ \bottomrule \end{tabular} \vspace{0.25em} \caption{Mean average precision (MAP) comparison on
Wikipedia dataset \cite{rasiwasia2010new} with supervised (bottom), unsupervised (middle) and self-supervised (top) methods. Methods marked with an asterisk make use of document (image-text) class category information.} \label{table:multi_modal_retrieval_wiki} \end{table} In Table \ref{table:multi_modal_retrieval_wiki}, we also observe that our method, which leverages global and local contexts for self-supervised training, achieves state-of-the-art performance, even when compared to fully supervised approaches. This demonstrates that training a network to predict both the global and local semantic contexts in which an image is likely to appear leads to representations that are better suited for retrieval. Further note that, except for ours and TextTopicNet \cite{gomez2017self,patel2018texttopicnet}, all other methods use an ImageNet pre-trained network. \subsubsection{Pascal Sentences} We also evaluate our method on the Pascal Sentences dataset \cite{farhadi2010every}, which is a subset of the PASCAL VOC dataset. It contains $1000$ images, each paired with several sentences, from $20$ categories. While other methods randomly split the dataset into $600$ training and $400$ testing samples, we test on all $1000$ samples, since we do not make use of this dataset for training at any point. Table \ref{table:multi_modal_retrieval_pascal} provides an extensive comparison with existing methods. Compared to other retrieval methods that use self-supervised visual representations \cite{gomez2017self,patel2018texttopicnet}, our method achieves $1.6\%$ higher MAP with a quarter of the training data. This demonstrates the efficacy of jointly using global and local self-supervision signals.
\begin{table} \centering \begin{tabular}{l | c | c | c} \toprule Method & \shortstack{Image\\Query} & \shortstack{Text\\Query} & Average \\ \midrule Ours & $32.6$ & $\textbf{36.0}$ & $\textbf{34.3}$ \\ TextTopicNet (Wikipedia)\cite{patel2018texttopicnet}& $30.1$ & $35.2$ & $32.7$\\ TextTopicNet (ImageCLEF) \cite{gomez2017self} & $26.4$ & $31.6$ & $29.0$ \\ \midrule CCA \cite{hardoon2004canonical,rasiwasia2010new}& $9.90$ & $9.7$ & $9.8$ \\ CFA \cite{li2003multimedia} & $18.7$ & $21.6$ & $20.2$ \\ KCCA (Poly) \cite{hardoon2004canonical} & $20.7$ & $19.1$ & $19.9$ \\ KCCA (RBF) \cite{hardoon2004canonical} & $23.3$ & $24.9$ & $24.1$ \\ Bimodal AE \cite{ngiam2011multimodal} & $24.5$ & $25.6$ & $25.1$ \\ Multimodal DBN \cite{srivastava2012multimodal} & $19.7$ & $18.3$ & $19.0$ \\ Corr-AE \cite{feng2014cross} & $26.8$ & $27.3$ & $27.1$ \\ JRL \cite{zhai2014learning} & $30.0$ & $28.6$ & $29.3$ \\ CMDN \cite{peng2016cross} & $\textbf{33.4}$ & $33.3$ & $33.4$ \\ \bottomrule \end{tabular} \vspace{0.25em} \caption{Mean average precision (MAP) comparison on the Pascal Sentences dataset \cite{farhadi2010every} with supervised image representations (bottom) and self-supervised image representations (top).} \label{table:multi_modal_retrieval_pascal} \end{table} \subsection{Qualitative Retrieval Results} \label{sec:qualitative_results} Finally, in this section we provide additional qualitative experiments for an image retrieval task. Figure~\ref{fig:text_to_img_qualitative} shows the top-8 nearest neighbors for a given text query (from left to right and top to bottom: ``car''+``fast'', ``car''+``slow'', ``aeroplane''+``passenger'', ``aeroplane''+``fighter'', ``people''+``eating'', and ``people''+``playing'') in the learned topic space of our model (without fine-tuning). We appreciate that, by leveraging textual semantic information, our method learns rich visual representations that can correctly disambiguate between those combined queries.
Figure~\ref{fig:img_to_img_qualitative} shows the 4 nearest neighbors for a given query image (left-most), where each row makes use of features obtained from different layers of our model (again without fine-tuning). Query images are randomly selected from the PASCAL VOC 2007 dataset and are never shown at training time. It can be appreciated that when retrieval is performed in the semantic space layers (prob-article and prob-caption), the results are semantically close, although not necessarily visually similar. As features from earlier layers are used, the results tend to be more visually similar to the query image. \begin{figure*} \centering \includegraphics[width=\textwidth]{Text_to_image_nn.pdf} \caption{Qualitative examples of text query to image retrieval using nearest neighbour search, comparing the network output from the caption branch $(f_{C}(x,\Theta))$ with LDA topic probabilities ($\Phi(x^{C})$).} \label{fig:text_to_img_qualitative} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{Image_to_Image_Retrieval.pdf} \caption{Top 4 nearest neighbors for a given query image (left-most). Each row makes use of features obtained from different layers of our network (without fine-tuning). From top to bottom: \textit{prob-article} $(f_{A}(x,\Theta))$, \textit{prob-caption} $(f_{C}(x,\Theta))$, \textit{fc7}, \textit{fc6}, \textit{pool5}.} \label{fig:img_to_img_qualitative} \end{figure*} \section{Conclusion} \label{sec:conclusion} In this article we put forward a self-supervised method that takes advantage of the natural correlation between an article's text and the images used to illustrate it, in order to learn useful visual representations. The proposed method is capable of exploiting the rich semantics and broad coverage of illustrated articles, making use of both article-wide semantics and the specific image semantics captured by the image caption.
We demonstrated that the learned visual features transfer well to general computer vision tasks such as image classification, and that they can be used directly in a cross-modal retrieval framework, yielding state-of-the-art results on both the Wikipedia retrieval dataset and the Pascal Sentences dataset. Notably, the obtained model improves the state of the art not only in comparison to other self-supervised methods, but also when compared to supervised models. \clearpage \bibliographystyle{ACM-Reference-Format}
\section{Introduction} \label{sec:intro} Hydrodynamic instabilities are responsible for the frequent occurrence of turbulence in nature. Although instabilities are connected to the onset of turbulence and the generation of small scales, in many situations instabilities are also responsible for the formation of large-scale structures. In such situations, flows of a given coherence length-scale are unstable to larger-scale perturbations, transferring energy to those scales. A classical example of a large-scale instability is the $\alpha$-effect \cite{steenbeck_berechnung_1966,moffatt_field_1978} in magneto-hydrodynamic (MHD) flows, to which the origin of large-scale planetary and solar magnetic fields is attributed. In $\alpha$-dynamo theory, small-scale helical flows self-organize to generate magnetic fields at the largest scale of the system. While large-scale instabilities have been extensively studied for the dynamo problem, limited attention has been drawn to large-scale instabilities in the pure hydrodynamic case. Hence, most direct numerical simulations (DNS) and turbulence experiments are designed so that the energy injection scale $\ell$ is close to the domain size $L$. This allows one to focus on the forward energy cascade and the formation of the Kolmogorov spectrum \cite{frisch_turbulence:_1995}. Scales larger than the forcing scale, where no energy cascade is present, are expected \cite{frisch1985,Ganga_Fauve14} to reach a thermal equilibrium with a $k^2$ spectrum \cite{LEE:1952p4100,OrszagAnalytTheo,kraichnan73,krstulovicetal09}. Recent studies, using (hyper-viscous) simulations of turbulent flows randomly forced at intermediate scales \cite{Dallas}, have shown that the energy spectrum at large scales deviates from the thermal equilibrium prediction and forms a strong peak at the largest scale of the system. A possible explanation for this intriguing result is that a large-scale instability is present.
In pure hydrodynamic flows, the existence of large-scale instabilities has been known for some time. An asymptotic expansion based on scale separation was used in \cite{frisch_large-scale_1987,frisch_new_1988} to demonstrate the existence of a mechanism similar to the MHD $\alpha$-dynamo, called the anisotropic-kinetic-alpha (\AKA{}) instability. The \AKA{} instability is present in a certain class of non-parity-invariant, time-dependent and anisotropic flows. It appears for arbitrarily small values of the Reynolds number and leads to a growth rate $\sigma$ proportional to the wavenumber $q$ of the unstable mode: $\sigma\propto q$. However, the necessary conditions for the presence of the \AKA{} instability are stricter than those of the $\alpha$-dynamo. Thus, most archetypal flows studied in the literature do not satisfy the \AKA{} conditions for instability. This, however, does not imply that the large scales are stable, since other mechanisms may be present. In the absence of an \AKA{} effect, higher-order terms in the large-scale expansion may lead to a so-called eddy-viscosity effect \cite{kraichnan_eddy_1976}. This eddy viscosity can be negative and thus produce a large-scale instability \cite{dubrulle_eddy_1991,wirth_eddy_1995}. A negative eddy-viscosity instability appears only above a critical value of the Reynolds number. It results in a weaker growth rate than the \AKA{} effect, proportional to the square of the wavenumber of the unstable mode: $\sigma\propto q^2$. Furthermore, the calculation of the eddy-viscosity coefficient can be much more difficult than that of the \AKA{} $\alpha$ coefficient. This difficulty originates in the order at which the Reynolds number enters the expansion, as we explain below. In the present paper, the Reynolds number is defined as $Re \equiv U_{rms} \ell / \nu$, where $U_{rms}$ is the root mean square value of the velocity and $\nu$ is the viscosity.
Note that we have chosen to define the Reynolds number based on the energy injection scale $\ell$. An alternative choice would be to use the domain length scale $L$, which leads to the large-scale Reynolds number that we denote as $Re^L=UL/ \nu =(L/ \ell) Re$. For the \AKA{} effect, the large-scale Reynolds number $Re^L$ is large, while the Reynolds number $Re$, based on the forcing scale $\ell$, is small. This makes it possible to explicitly solve for the small-scale behavior and obtain analytic results. This is not possible for the eddy-viscosity calculation, where there are two regimes to consider. Either the Reynolds number is small, in which case the eddy viscosity only provides a small correction to the regular viscosity, or the Reynolds number is large, in which case the inversion of an advection operator is needed. This last case can be treated analytically only for very simple one-dimensional shear flows \cite{dubrulle_eddy_1991,wirth_eddy_1995}. To illustrate the basic mechanisms involved in such multi-scale interactions, we depict in fig.~\ref{fig:3modmod} a toy model demonstrating the main ideas behind these instabilities. This toy model considers a driving flow $\bm{U}$ at wavenumber ${\bf K}\sim1/\ell$ that couples to a small-amplitude large-scale flow $\bm{v}_{\bm{q}}$ at wavenumber ${\bf q}\sim1/L$ with $\bf |q|\ll |K|$. The advection of $\bm{v}_{\bm{q}}$ by $\bm{U}$ and vice versa then generates a secondary flow $\bm{v}_{\bm{Q}}$ at wavenumbers $\bm{Q}=\bm{K\pm q}$. This small-scale perturbation in turn couples to the driving flow and feeds back onto the large-scale flow. If this feedback is constructive enough to overcome viscous dissipation, it amplifies the large-scale flow, and this process leads to an exponential increase of $\bm{v}_{\bm{q}}$ and $\bm{v}_{\bm{Q}}$. This toy model has most of the ingredients required for the instabilities to occur.
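For intuition only, the three-mode toy model can be caricatured by two coupled amplitudes with viscous damping and a mutual coupling through the driving flow; the growth rate is then the largest eigenvalue of a $2\times2$ matrix. The coupling constant `c` below is a placeholder and is not a coefficient derived from the Navier-Stokes equation:

```python
import math

def toy_growth_rate(U, q, Q, nu, c=1.0):
    """Largest eigenvalue of the caricature two-amplitude system
        d(v_q)/dt = c*U*v_Q - nu*q**2 * v_q
        d(v_Q)/dt = c*U*v_q - nu*Q**2 * v_Q
    i.e. of the matrix [[-nu*q^2, c*U], [c*U, -nu*Q^2]]."""
    a, d = -nu * q**2, -nu * Q**2
    b = c * U
    return 0.5 * (a + d) + 0.5 * math.sqrt((a - d)**2 + 4 * b**2)
```

In this caricature the large scale grows ($\sigma>0$) only when $c^2U^2 > \nu^2 q^2 Q^2$, i.e. above a critical Reynolds number, loosely mirroring the threshold behavior of the negative eddy-viscosity instability.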
\begin{figure}[!ht] \centering \includegraphics[width=\fwidth, trim= 750 0 0 0, clip=true]{fig1} \caption{(Color online) Sketch of the three-mode model. $U$ represents the small-scale driving flow of wavenumber $K$ (full arrow), $v_q$ is the large-scale perturbation of wavevector $q$ (dashed arrow) and $v_Q$ is the small-scale perturbation of wavevector $Q=K \pm q$ (dotted arrow).} \label{fig:3modmod} \end{figure} In order to study large-scale instabilities, they must be isolated from other, small-scale competing instabilities that might coexist. This can be achieved by using Floquet theory \cite{floquet_sur_1883} (also referred to as Bloch theory in quantum mechanics \cite{ashcroft_solid_1976}). Indeed, Floquet theory can track modes with large and small spatial periodicity separately. In what follows, we use direct numerical simulations (DNS) in the Floquet framework to study different flows, either in the presence of the \AKA{} effect, using the flow introduced in \cite{frisch_large-scale_1987}, or in the absence of the \AKA{} effect, using the \dombre{} flow (A=B=C) \cite{dombre_chaotic_1986} and the Roberts flow \cite{roberts_spatially_1970}. Our study extends to values of $Re$ and $\ell/L$ beyond the range of validity of the asymptotic expansions. Finally, we compare the results of Floquet DNS to those of full Navier-Stokes DNS. \section{Methods} \label{sec:methods} \subsection{Navier-Stokes} \label{subsec:navierStokes} Our starting point is the incompressible Navier-Stokes equation in the periodic $[0, 2\pi L]^3$-cube: \begin{align} \dt \Ugen = \Ugen \X \roT \Ugen - \grad \Pgen + \visco \Laplace \Ugen +\Fgen \, , \label{eq:fullNS} \end{align} with $\diV \Ugen =0$ and where $\Ugen$, $\Fgen$, $\Pgen$ and $\nu$ denote the velocity field, the forcing field, the generalized pressure field and the viscosity coefficient, respectively. The geometry imposes that all fields be $2\pi L$-periodic.
We further assume that the forcing has a shorter spatial period $2\pi\ell$ with $L/\ell$ an arbitrarily large integer. We denote the wavenumber of this periodic forcing as $\bf K$, with $K=|{\bf K}|=1/\ell$ for the flows examined. If the initial conditions of $\Ugen$ satisfy the same periodicity as $\Fgen$, then this periodicity is preserved by the solutions of the Navier-Stokes equations and corresponds to the preservation of the discrete symmetries $x\to x+2\pi\ell$, $y\to y+2\pi\ell$ and $z\to z+2\pi \ell$. However, these solutions can be unstable to arbitrarily small perturbations that break this symmetry and grow exponentially. To investigate the stability of the periodic solutions, we decompose the velocity and pressure fields into a driving flow and a perturbation component: \begin{align} \Ugen= \Ulam+\vlin \quad , \quad \Pgen = \Pgen_{\Ulam} + \Pgen_{\vlin} \end{align} where $\Ulam$ denotes the driving flow, which has the same periodicity $2\pi \ell$ as the forcing, and $\vlin$ is the velocity perturbation. The linear stability analysis amounts to determining the evolution of small-amplitude perturbations, so that only the first-order terms in $\vlin$ are kept. The evolution equation of the driving flow is thus: \begin{align} \dt \Ulam = \Ulam \X \roT \Ulam - \grad \Pgen_{\Ulam} + \visco \Laplace \Ulam + \Fgen \, . \label{eq:linNS:K} \end{align} The remaining terms give the linearized Navier-Stokes equation for the perturbation: \begin{align} \dt \vlin = \Ulam \X \roT \vlin+& \vlin \X \roT \Ulam - \grad \plin_{\vlin} + \visco \Laplace \vlin \, . \label{eq:linNS:rK} \end{align} The two pressure terms enforce the incompressibility conditions $\diV\Ulam=0$ and $\diV\vlin=0$. The flow $\Ulam$ is not necessarily laminar (but respects the $2\pi \ell$ periodicity). In general, the linear perturbation $\vlin$ does not only consist of modes that break the periodicity of the forcing.
Linear unstable modes respecting the periodicity may also exist: they correspond to small-scale instabilities. We show how these modes can be distinguished from periodicity-breaking large-scale modes in the following section devoted to Floquet analysis. \subsection{Floquet Analysis} \label{subsec:FLASH} Studying large-scale flow perturbations with a code that solves the full Navier-Stokes equation requires considerable computational power, as all scales from the domain size $L$ down to the smallest viscous scale $\ell_\nu\ll \ell$ must be resolved. This is particularly difficult in our case, where scale separation $\ell\ll L$ is required. In order to overcome this limitation, we adopt the Floquet framework \cite{floquet_sur_1883}. In Floquet theory, the velocity perturbation can be decomposed into modes that are expressed as the product of a complex harmonic wave, $e^{\imath \qvec \cdot \rvec}$, and a periodic vector field $\vfloq(\rvec,t)$ with the same periodicity $2\pi \ell$ as that of the driving flow: \begin{align} \vlin(\rvec,t) = \vfloq(\rvec,t)e^{\imath \qvec \cdot \rvec } + c.c. \, , \end{align} and similarly for the pressure, \begin{align} \plin_{\vlin}(\rvec,t) = \pfloq(\rvec,t)e^{\imath \qvec \cdot \rvec } + c.c. \, , \end{align} where \textit{c.c.} denotes the complex conjugate of the previous term. Perturbations for which at least one component of $\qvec$ is not an integer multiple of $1/\ell$ break the periodicity of the driving flow. The perturbation field $\vlin$ then involves all Fourier wavenumbers of the type $\bf Q=q+k$, where $\bf k$ is a wavevector corresponding to the $2\pi\ell$-periodic space dependence of $\vfloq$. We restrict the study to values of $q=|\bf q|$ satisfying $0 < q \le K$. For finite domain sizes $\bf q$ is a discrete vector with ${q}\ge 1/L$, while for infinite domain sizes $\qvec$ can take any arbitrarily small value.
In the limit ${ q/K}\ll 1$ the perturbation involves scales much larger than $\ell$. Therefore, scale separation is achieved without solving intermediate scales, as would be required if the full Navier-Stokes equations were used. Furthermore, this framework has the advantage of isolating perturbations that break the forcing periodicity ($\qvec \ell \notin \mathbb{Z}^3$) from other small-scale unstable modes with the same periodicity ($\qvec \ell \in \mathbb{Z}^3$) that might also exist in the system. A drawback of the Floquet decomposition is that some operators have somewhat more complicated expressions than in the simple periodic case. For instance, taking a derivative requires taking into account the variations of both the harmonic and the amplitude. Separating the amplitude into its real and imaginary parts, $\vfloq(\rvec,t)=\vfloq^{r} + \imath \vfloq^{i}$, we obtain \begin{align} \partial_x \vlin = \left[ \partial_x \vfloq^{r} - q_x \vfloq^{i} + \imath (q_x \vfloq^{r} +\partial_x \vfloq^{i}) \right] e^{\imath \qvec \cdot \rvec } + c.c. \,, \label{eq:der:Floq} \end{align} where $\partial_x$ denotes the $x$-derivative and $q_x$ denotes the $x$-component of the $\qvec$ wavevector. Using eqs.~\eqref{eq:linNS:rK} and \eqref{eq:der:Floq}, the linearized Navier-Stokes equation can be written as a set of $3+1$ complex scalar equations: \begin{align} \nonumber \partial_t \vfloq =& (\roT\Ulam) \times \vfloq + (\imath \qvec \X \vfloq + \roT \vfloq ) \times \Ulam \\ & - (\imath \qvec + \grad) \pfloq + \visco ( -\qvec^2 + \Delta) \vfloq \, , \label{eq:ns:Flq} \\ \text{with} \quad & \imath \qvec \cdot \vfloq + \diV \vfloq=0 \, . \label{eq:incomp:Flq} \end{align} We use standard pseudo-spectral methods to solve this system of equations in the $2\pi \ell$-periodic cube. The complex velocity field $\vfloq$ is decomposed in Fourier space, where derivatives are reduced to a multiplication by $\imath \kvec$, with $\kvec$ the Fourier wavevector.
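Equivalently, eq.~\eqref{eq:der:Floq} states that a derivative of the full perturbation acts on the envelope as $\partial_x + \imath q_x$, i.e. as multiplication by $\imath(k_x+q_x)$ on the envelope's Fourier modes. A minimal numerical check of this shifted-wavenumber rule (a sketch with assumed grid size and $q$, independent of the FLASH implementation):

```python
import numpy as np

# Check of the Floquet derivative rule: for v(x) = v_env(x) * exp(i q x) with a
# 2*pi-periodic envelope v_env, the derivative acts on the envelope as
# (d/dx + i q), i.e. as i (k + q) on its Fourier modes. Values are illustrative.
N, q = 64, 0.1
x = 2.0 * np.pi * np.arange(N) / N          # periodic grid (ell = 1)
k = np.fft.fftfreq(N, d=1.0 / N)            # integer wavenumbers of the envelope

v_env = np.exp(3j * x)                      # single-mode envelope, k = 3
dv_env = np.fft.ifft(1j * (k + q) * np.fft.fft(v_env))  # Floquet derivative
dv_full = dv_env * np.exp(1j * q * x)       # derivative of the full field
```

The full field here is $e^{\imath(3+q)x}$, so its exact derivative is $\imath(3+q)e^{\imath(3+q)x}$, which the shifted-wavenumber rule reproduces to machine precision.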
Multiplicative terms are computed in real space. These methods have been implemented in the Floquet Linear Analysis for Spectral Hydrodynamics (FLASH) code, whose details are given in appx.~\ref{sec:apx:FLASH}. In order to find the growth-rate of the most unstable mode, we integrate eqs.~\eqref{eq:ns:Flq} and \eqref{eq:incomp:Flq} for a time long enough for a clear exponential behaviour to be observed. The growth-rate of this most unstable mode can then be measured by linear fitting. Note that this process only leads to the measurement of the fastest growing mode. \subsection{Three-modes model} \label{subsec:3mm} Although the Floquet framework is very convenient for solving the equations numerically, it does not easily yield analytic results. Rigorous results must be based on asymptotic expansions and can only be derived in the limit of small \Rn{} or for simple shear layers \cite{dubrulle_eddy_1991,wirth_eddy_1995}. To obtain a basic understanding of the processes involved, we will use the idea represented in the toy model of fig.~\ref{fig:3modmod}. This model also has the major advantage of using a formalism that can easily be related to the physical aspects of the problem. In our derivation, we only consider the evolution of the two most intense modes of the perturbation and of the driving flow. The velocity perturbation is thus decomposed as a series of velocity fields of different modes: \begin{align} \vlin(\rvec, t) &= \pmv(\rvec, t) + \pmV(\rvec, t) + \rmv(\rvec,t) \, , \\ \pmv(\rvec, t) &= \vfloq(\qvec, t) e^{\imath \qvec \cdot \rvec } + c.c. \, , \\ \pmV(\rvec, t) &= \sum_{\knrm=1} \vfloq(\qvec,\kvec, t) e^{\imath ( \qvec \cdot \rvec + \kvec \cdot \rvec ) } + c.c. \, , \\ \rmv(\rvec, t) &= \sum_{\knrm > 1} \vfloq(\qvec,\kvec, t) e^{\imath ( \qvec \cdot \rvec + \kvec \cdot \rvec ) } + c.c. \, , \end{align} where $\qvec$ denotes the wavevector of the large-scale modes and $\Qvec$ denotes the wavevectors of the modes directly coupled to $\qvec$ via the driving flow, since $K=1$.
At wavenumber $\qvec$, the linearized Navier-Stokes equation can be rewritten as: \begin{align} \dt \pmv = \Ulam\X\roT\pmV +\pmV \X \roT \Ulam - \grad \pmp + \visco \Laplace \pmv \,. \label{eq:linNS:q} \end{align} Assuming that the coupling with the truncated velocity, $\rmv$, is negligible with respect to the coupling with the large-scale velocity, $\pmv$, the linearized equation at $\Qvec$ reads: \begin{align} \dt \pmV = \Ulam \X \roT\pmv + \pmv\X \roT\Ulam - \grad \pmP + \visco \Laplace \pmV \, , \label{eq:linNS:Kpmq} \end{align} where $\pmp$ and $\pmP$ denote the pressures enforcing the incompressibility conditions $\diV \pmv=0$ and $\diV \pmV=0$, respectively. The modes are represented in fig.~\ref{fig:floquetSpec}. \begin{figure}[!ht] \centering \includegraphics[width=\fwidth]{fig2} \caption{Fourier modes of the Floquet decomposition used in the FLASH code and the three-modes model.} \label{fig:floquetSpec} \end{figure} The derivation is restricted to stationary, positively helical driving flows, satisfying: $ \Uhel ( \rvec ) = K^{-1} \roT \Uhel( \rvec ) \,. $ The problem can then be solved by making use of the vorticity fields: \begin{align} \pmomega = \roT \pmv \quad \text{and} \quad \pmOmega = \roT \pmV \,, \label{eq:pmOmegas} \end{align} and the adiabatic approximation: $\partial_t \pmV \ll \nu \Laplace\pmV $. The system of equations of the three-modes model is thus: \begin{align} \visco \Laplace \pmOmega &= - \roT \left [ \Uhel \times ( \pmomega - K \pmv ) \right] \, , \\ \dt \pmomega &= \roT \left [ \Uhel \X (\pmOmega - K \pmV )\right ] + \visco \Laplace \pmomega \, . \end{align} The greatest eigenvalue of the system, $\sigma$, gives the growth-rate of the perturbation. The growth-rate can be derived analytically for an \ABC{} driving flow: \begin{align} U^{ABC}_x &= C \sin( K z ) + B \cos( K y ) \, , \\ U^{ABC}_y &= A \sin( K x ) + C \cos( K z ) \, , \\ U^{ABC}_z &= B \sin( K y ) + A \cos( K x ) \, . 
\label{eq:ABC:flow} \end{align} For \abc{} flows (\lamABC{}), one finds: \begin{align} \sigma = \beta q^2 - \nu q^2 \quad &\text{with} \quad \beta = \bc \rek^2 \nu \, , \label{eq:sigABC0} \\ \bc = \frac{1-\lambda^2}{4+2\lambda^2} \quad &\text{and} \quad \rek = \tfrac{U}{K \nu} \, , \label{eq:sigABC} \end{align} where \rek{} denotes the small-scale \Rn{} defined using the driving flow. The fastest growing mode is found to be fully helical. This simple model indicates that some driving flows that do not satisfy the hypotheses of the \AKA{} effect described in \cite{frisch_large-scale_1987} can nevertheless generate a negative eddy-viscosity instability satisfying $\sigma \propto q^2$. The largest growth-rate is obtained for $\lambda=0$, while no $q^2$ instability is predicted for $\lambda=1$. For $\lambda \ne 1$ the flow becomes unstable when the $\beta$ term overcomes the viscosity, $\beta>\nu$. This happens when $Re$ is above a critical value: $\rec= \bc^{-1/2}$. \section{Results} \label{sec:results} \subsection{\AKA{}} \label{subsec:aka} We begin by examining a flow that satisfies the conditions for an \AKA{} instability. Such a flow was proposed in \cite{frisch_large-scale_1987} (from now on $Fr87$) and is given by: \begin{align} U^{Fr87}_x &= U_0 \cos \left( K y + \visco K^2 t \right) \, , \nonumber \\ U^{Fr87}_y &= U_0 \sin \left( K x - \visco K^2 t \right) \, , \label{eq:Fr87:flow} \\ U^{Fr87}_z &= U^{Fr87}_x + U^{Fr87}_y \, . \nonumber \end{align} The growth-rate of large-scale unstable modes can be calculated in the small Reynolds number limit and is given by: \begin{align} \sigma = \alpha q - \nu q^2 \, , \label{eq:Fr87:alpha} \end{align} with $\alpha=a Re U_0$ and $a=\frac{1}{2}$. The fastest growing mode has negative helicity and \qvec{} along the $z$-direction. Setting \qvec{} along the $z$-direction, we integrated eqs.~\eqref{eq:ns:Flq} and \eqref{eq:incomp:Flq} numerically and measured the growth-rate $\sigma$.
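The growth-rate measurement described in sec.~\ref{sec:methods} amounts to a linear fit of $\log E(t)$ at late times. A minimal sketch with synthetic, illustrative numbers (not actual FLASH output):

```python
import numpy as np

# Once the fastest-growing Floquet mode dominates, |v| ~ exp(sigma*t), so the
# perturbation energy grows as E(t) ~ exp(2*sigma*t). The growth-rate is the
# slope of log E versus t, divided by two. All values below are synthetic.
sigma_true = 0.05
t = np.linspace(50.0, 200.0, 300)             # late times, transients discarded
E = 1e-8 * np.exp(2.0 * sigma_true * t)       # stand-in for the measured energy

slope, _ = np.polyfit(t, np.log(E), 1)        # linear fit in lin-log coordinates
sigma = 0.5 * slope                           # factor 1/2: E is quadratic in v
```

In practice the fit window must start after the initial transients, once a single exponential clearly dominates; only the fastest growing mode is accessible this way, as noted above.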
Fig.~\ref{fig:sigVqVreAKA_lin} displays the growth-rate of the most unstable mode as a function of the wavenumber amplitude $q=|\qvec |$ for three different values of $Re$ measured by the Floquet code and compared to the theoretical prediction. \begin{figure}[!ht] \centering \includegraphics[width=\fwidth , trim= 5 10 5 5, clip=true]{fig3} \caption{Growth-rate \textit{vs.} Floquet wavenumber, $\sigma(q)$, at different $\rek$ plotted in linear scale for a $Fr87$ flow, eq.~\eqref{eq:Fr87:flow}, compared to the theoretical prediction, eq.~\eqref{eq:Fr87:alpha}.} \label{fig:sigVqVreAKA_lin} \end{figure} The agreement is good for small values of $q$ and for small values of $Re$, where the asymptotic limit is valid. For $q$ small enough, the flow is unstable and satisfies $\sigma \propto q$. Fig.~\ref{fig:sigVqVreAKA} shows in log-log scale the growth-rate of the perturbation as a function of $q$ for different Reynolds numbers. The solid line in the graph indicates the $\sigma \propto q$ scaling, which is satisfied for all $Re$. \begin{figure}[!ht] \centering \includegraphics[width=\fwidth,trim= 5 10 5 5, clip=true]{fig4} \caption{Growth-rate \textit{vs.} Floquet wavenumber, $\sigma(q)$, at different $\rek$ plotted in log-log scale for a $Fr87$ flow, eq.~\eqref{eq:Fr87:flow}.} \label{fig:sigVqVreAKA} \end{figure} In fig.~\ref{fig:alphaVreAKA}, we compare the theoretical and numerically calculated prefactor $a$ of the $\alpha$~coefficient. The $\alpha$~coefficient increases linearly with $Re$ and is seen to be in good agreement with the theoretical prediction up to $Re\simeq 10$. For larger values of $Re$, $a$ deviates from the linear prediction and saturates.
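For the dispersion relation of eq.~\eqref{eq:Fr87:alpha}, the fastest-growing wavenumber follows from $d\sigma/dq=0$, giving $q_{max}=\alpha/2\nu$ and $\sigma_{max}=\alpha^2/4\nu$. A quick numerical confirmation with illustrative values of $\alpha$ and $\nu$ (a sketch, not tied to the simulations):

```python
import numpy as np

# AKA dispersion relation sigma(q) = alpha*q - nu*q**2: maximizing over q gives
# q_max = alpha/(2*nu) and sigma_max = alpha**2/(4*nu). Values are illustrative.
alpha, nu = 0.05, 1.0
q = np.linspace(1e-4, alpha / nu, 2001)   # unstable band 0 < q < alpha/nu
sigma = alpha * q - nu * q**2

q_max = q[np.argmax(sigma)]               # numerically located maximum
sigma_max = sigma.max()
```

The unstable band closes at $q=\alpha/\nu$; only modes below this wavenumber can grow, which is why large-scale (small $q$) modes dominate at small $Re$.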
\begin{figure}[!htb] \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] (image) at (0,0) {\includegraphics[width=\fwidth,trim= 0 25 35 15, clip=true]{fig5a}}; \begin{scope}[x={(image.south east)},y={(image.north west)}] \node[anchor=south west,inner sep=0] (image) at (0.2,0.6) {\includegraphics[width=3.7cm,trim= 0 25 35 15, clip=true]{fig5b}}; \end{scope} \end{tikzpicture} \caption{\alphaMes{}~coefficient \textit{vs.} \Rn{}, plotted in log-log scale for an instability generated by a $Fr87$ flow, eq.~\eqref{eq:Fr87:flow}. The inset shows \alphaReMes{} \textit{vs.} \Rn{} plotted in lin-log scale.} \label{fig:alphaVreAKA} \end{figure} A positive growth-rate for a small $q$ mode does not guarantee the dominance of large scales. We should also consider what fraction of the perturbation energy is concentrated in the large scales. Fig.~\ref{fig:specAKA} shows the energy spectra for different Reynolds numbers. The energy spectrum of the complex Floquet field $\vfloq{}$ is defined as $E(k) = \sum_{k-\frac{1}{2}\le \vert \kvec \vert < k+\frac{1}{2}} \vert \vfloq{} \vert^2$, with $E(k=0)$ the energy at the large scales $1/q$. While at small Reynolds numbers the smallest wavenumber $k=0$ dominates, as the Reynolds number increases more energy is concentrated in the wavenumber of the driving flow, $k=1$. \begin{figure}[!ht] \centering \includegraphics[width=\fwidth,trim= 5 10 5 5, clip=true]{fig6} \caption{Spectrum of the Floquet perturbation, $E(k)$, for different small-scale Reynolds numbers, $\rek$, with $\qvec = (0; 0; 0.025)$ generated by a $Fr87$ flow, eq.~\eqref{eq:Fr87:flow}.} \label{fig:specAKA} \end{figure} To quantify this behavior, we plot in fig.~\ref{fig:specAKAfracq} the fraction of the energy in the zero mode, $E_0 = E(0)$, divided by the total energy of the perturbation, $E_{tot}=\sum_{k=0}^{\infty}E(k)$, as a function of the wavenumber $q$ for different values of $Re$.
In the small $q$ limit, this ratio reaches an asymptote that depends on the Reynolds number. This asymptotic value is shown as a function of $Re$ in fig.~\ref{fig:specAKAfracRe}. The small-scale energy ($E_{tot}-E_0$) is then shown to follow a power law $1-\frac{E_{0}}{E_{tot}}\propto Re^{2}$ for small values of $Re$. Therefore, for the \AKA{} instability, at small $Re$, the energy is concentrated in the large scales, whereas, at large $Re$, the most unstable mode has a small projection on the large scales. \begin{figure}[!ht] \centering \includegraphics[width=\fwidth,trim= 5 8 5 5, clip=true]{fig7} \caption{Fraction of energy in the large scales, $E_0/E_{tot}$, \textit{vs.} Floquet wavenumber $q$, at different $\rek$ plotted in log-log scale for a $Fr87$ flow, eq.~\eqref{eq:Fr87:flow}.} \label{fig:specAKAfracq} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=\fwidth,trim= 5 10 35 25, clip=true]{fig8} \caption{Asymptotic energy ratio $E_0/E_{tot}$ \textit{vs.} Reynolds number, plotted in log-log scale for a $Fr87$ flow, eq.~\eqref{eq:Fr87:flow}.} \label{fig:specAKAfracRe} \end{figure} \subsection{Roberts flow: $\lambda=0$ } \label{subsec:rob} We now investigate non-\AKA{}-unstable flows. We consider the family of \ABC{} flows, for which we expect large-scale instabilities of the form given in eq.~\eqref{eq:sigABC}. The three-modes model predicts that, within the family of \ABC{} flows, the most unstable is the \rob{} flow, commonly referred to as the Roberts flow in the literature \cite{roberts_spatially_1970}. The model predicts a positive growth-rate when $Re>2$. Fig.~\ref{fig:linRob} shows the growth-rate $\sigma$ as a function of $q$ for various Reynolds numbers calculated using the Floquet code. For small values of the Reynolds number all modes $q$ have negative growth-rates. Above a critical value $\rec\simeq 2$, unstable modes appear at small values of $q$, in agreement with the model predictions.
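The critical value $\rec=2$ quoted for the Roberts flow follows directly from the model coefficients of eqs.~\eqref{eq:sigABC0},\eqref{eq:sigABC}. A direct evaluation (a sketch, independent of the FLASH code):

```python
import numpy as np

# Three-modes-model coefficients:
#   beta = b(lambda) * Re**2 * nu,  b(lambda) = (1 - lambda**2)/(4 + 2*lambda**2),
# and the instability condition beta > nu gives the threshold Re_c = b**(-1/2).
def b_coeff(lam):
    return (1.0 - lam**2) / (4.0 + 2.0 * lam**2)

def Re_crit(lam):
    b = b_coeff(lam)
    return np.inf if b <= 0.0 else b**-0.5

# Roberts flow, lambda = 0: b = 1/4, hence Re_c = 2, the value quoted above.
# Equilateral case, lambda = 1: b = 0, no q^2 instability is predicted.
```

The same two lines of algebra reproduce the two limiting cases discussed in the text: the strongest instability at $\lambda=0$ and the absence of a predicted $q^2$ instability at $\lambda=1$.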
\begin{figure}[!ht] \centering \includegraphics[width=\fwidth,trim= 5 5 5 5, clip=true]{fig9} \caption{Growth-rate $\sigma$ \textit{vs.} Floquet wavenumber $q$, for different $\rek$ for the Roberts flow.} \label{fig:linRob} \end{figure} To investigate the behavior of the instability for small values of $q$, we plot in Fig.~\ref{fig:sigVqVreRob} the absolute value of the growth-rate as a function of $q$, in a logarithmic scale, for \Rn{} ranging from $0.312$ to $160$. Dashed lines indicate positive growth-rates while dotted lines indicate negative growth-rates. The solid black line indicates the $\sigma \propto q^2$ scaling followed by all curves. Therefore, the scaling predicted by the model (eqs.~\eqref{eq:sigABC0} and \eqref{eq:sigABC}) is verified. We will refer to the instabilities that follow this $\sigma \propto q^2$ scaling as negative eddy-viscosity instabilities. \begin{figure}[!ht] \centering \includegraphics[width=\fwidth,trim= 5 5 5 5, clip=true]{fig10} \caption{Growth-rate \textit{vs.} Floquet wavenumber, $\sigma(q)$, for different $\rek$ plotted in log-log scale for a Roberts flow. The full markers with dashes represent the value of positive growth-rates whereas the empty markers with dots represent the absolute value of negative growth-rates.} \label{fig:sigVqVreRob} \end{figure} To further test the model predictions, we measure the proportionality coefficient of the $q^2$ power law obtained from the Floquet code. Fig.~\ref{fig:betaVreRob} compares the \bc{}~coefficient predicted by the three-modes model with the results of the Floquet code. The figure shows $(\langle \sigma/q^2\rangle + \nu)/\nu$ measured from the data for different values of $Re$, while the $Re^2/4$ prediction of the model is shown by a solid black line. The two calculations agree over nearly two orders of magnitude. A positive growth-rate for the large-scale modes implies $\betaMes>1$.
The critical value of the Reynolds number, at which the instability begins, can be obtained graphically at the intersection of the numerically obtained curve with the $\betaMes{}=1$ line plotted with a dash-dot green line. The prediction of the model, $\rec=2$, and the numerically obtained values are in excellent agreement. \begin{figure}[!ht] \centering \includegraphics[width=\fwidth,trim= 5 10 5 5, clip=true]{fig11} \caption{\betaMes{} \textit{vs.} \Rn{}, plotted in lin-log scale for the Roberts flow. } \label{fig:betaVreRob} \end{figure} Similarly to the \AKA{} flow, the fraction of energy concentrated in the large scales ($k=0$) becomes independent of $q$ in the small $q$ limit. This is demonstrated in fig.~\ref{fig:efVqRob}, where the ratio $E_0/E_{tot}$ is plotted as a function of $q$. In fig.~\ref{fig:e0VqVreRob}, we show the asymptotic value of this ratio as a function of the Reynolds number. As in the case of the \AKA{} instability, the projection on the large scales depends on the Reynolds number, and at large $Re$, it follows the power law $\frac{E_0}{E_{tot}} \propto Re^{-2}$. \begin{figure}[!ht] \centering \includegraphics[width=\fwidth,trim= 5 8 5 5, clip=true]{fig12} \caption{Fraction of energy in the large scales, $E_0/E_{tot}$, \textit{vs.} Floquet wavenumber $q$, at different $\rek$ plotted in log-log scale for a Roberts flow.} \label{fig:efVqRob} \end{figure} \begin{figure}[!ht] \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] (image) at (0,0) {\includegraphics[width=\fwidth,trim= 20 20 30 20, clip=true]{fig13a}}; \begin{scope}[x={(image.south east)},y={(image.north west)}] \node[anchor=south west,inner sep=0] (image) at (0.35,0.325) {\includegraphics[width=5.4cm,trim= 5 10 35 15, clip=true]{fig13b}}; \end{scope} \end{tikzpicture} \caption{ Fraction of large-scale energy, \EoEt{}, for different \Rn{} for the most unstable mode of the Roberts flow. 
} \label{fig:e0VqVreRob} \end{figure} \subsection{\Dombre{} flow: $\lambda=1$} \label{subsec:Dom} For the \dom{} flow, the three-modes model predicts that the \bc{}~coefficient is zero. Therefore, the model does not predict a negative eddy-viscosity instability with $\sigma\propto q^2$. Fig.~\ref{fig:ABC} shows the growth-rate as a function of the wavenumber $q$ calculated using the Floquet code for different values of the Reynolds number. Clearly, the small $q$ modes still become unstable, but the dependence on $Re$ appears different from the previously examined cases. We thus examine separately the small $Re$ and large $Re$ behaviors. \begin{figure}[!ht] \centering \includegraphics[width=\fwidth,trim= 5 5 5 5, clip=true]{fig14} \caption{Growth-rate \textit{vs.} Floquet wavenumber $\sigma (q)$, for different $\rek$ for the \ABC{} flow.} \label{fig:ABC} \end{figure} \subsubsection{Small values of $Re$} First, we examine the instability for small values of $Re\le10$, for which the growth-rate $\sigma$ tends to zero as $q\to0$. Fig.~\ref{fig:sigVqVreDom} shows the growth-rate of the instability for the \dombre{} flow as a function of the wavenumber $q$ in logarithmic scale for different values of $Re$ ranging from $0.312$ to $10$. In this range, the growth-rate behaves much like that of the Roberts flow, in contradiction with the three-modes model. The numerically calculated growth-rates show a clear negative eddy-viscosity scaling $\sigma \propto q^2$. The growth-rate becomes positive above a critical value of $Re$. \begin{figure}[!ht] \centering \includegraphics[width=\fwidth,trim= 5 5 5 0, clip=true]{fig15} \caption{Growth-rate \textit{vs.} Floquet wavenumber, $\sigma(q)$, for different $\rek$ plotted in log-log scale for a \dombre{} flow. 
The full markers with dashes represent the value of positive growth-rates whereas the empty markers with dots represent the absolute value of negative growth-rates.} \label{fig:sigVqVreDom} \end{figure} In fig.~\ref{fig:betaVreDom}, the measured value of $\betaMes{}$ is represented as a function of the \Rn{}. In the inset, the lin-log plot of $\betaReMes{}$ provides a measurement of the $b$~coefficient. This expression becomes larger than one (signifying the instability boundary, marked by a dash-dot line) for $Re \gtrsim 3$. This value $\rec \simeq 3$ is slightly higher than the critical Reynolds number of the Roberts flow, $\rec=2$. At very small Reynolds numbers, the value of $b=\betaReMes{}$ approaches zero very quickly, which indicates that the model prediction is recovered as $Re\to 0$. \begin{figure}[!htb] \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] (image) at (0,0) {\includegraphics[width=\fwidth,trim= 5 10 15 5, clip=true]{fig16a}}; \begin{scope}[x={(image.south east)},y={(image.north west)}] \node[anchor=south west,inner sep=0] (image) at (0.52,0.15) {\includegraphics[width=3.9cm,trim= 5 10 15 5, clip=true]{fig16b}}; \end{scope} \end{tikzpicture} \caption{\betaMes{}~coefficient \textit{vs.} \Rn{}, plotted in log-log scale for the \dombre{} flow. The inset shows $b=\betaReMes{}$. } \label{fig:betaVreDom} \end{figure} To investigate further the discrepancy between the Floquet results and the three-modes model, fig.~\ref{fig:xiVlambda} shows the \bc{}~coefficient (measured as $b=\betaReMes{}$) for values of the $\lambda$~parameter ranging from $0$ (Roberts flow) to $1$ (\dombre{} flow). All the DNS are carried out at $\rek=10$.
\begin{figure}[!ht] \centering \includegraphics[width=\fwidth , trim= 20 25 35 30, clip=true]{fig17} \caption{\bc{}~coefficient \textit{vs.} $\lambda$~parameter, $\bc(\lambda)$, at $\rek=10$ for \lamABC{} flows with parameter: $A=1:B=1:C=\lambda\,$.} \label{fig:xiVlambda} \end{figure} The results indicate that the three-modes model and the results from the Floquet code agree for $\lambda \lesssim 0.5$ but deviate as $\lambda$ becomes larger. To identify where this discrepancy between the model and the DNS occurs, we modified the FLASH code in order to test the assumptions of the model. This is achieved by enforcing the adiabatic approximation in the Floquet code and by controlling the number of modes that play a dynamical role. The latter is performed by using a Fourier truncation of the Floquet perturbation at a value $k_{cut}$, so that only modes with $k<k_{cut}$ are present. Fig.~\ref{fig:xiVkcut} shows the dependence of the \bc{}~coefficient on the truncation mode, \kcut{}. For $\kcut \geq 3$, the growth-rate reaches the asymptotic value that is also observed in the inset of fig.~\ref{fig:betaVreDom} for $Re=10$, obtained from the ``untampered'' FLASH code. This confirms the assumption that modes in the smallest scales have little impact on the evolution of the large-scale perturbation. However, the \bc{}~coefficient varies strongly for $\kcut \leq 3$. The model predictions are recovered only when $k_{cut}=1$, which amounts to keeping only the modes used in the model. Therefore, the model's hypothesis of restricting the interaction of the perturbation to its first two Fourier modes does not seem to hold for the \dombre{} flow at moderate \Rn{}, $1\leq \rek \leq 10$. The adiabatic hypothesis does not appear to affect the results. Therefore, the discrepancy between the three-modes model and the numerical results is due to the coupling with the truncated velocity $\rmv$ that was neglected in the model.
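The Fourier truncation used in this test can be sketched as a spherical low-pass mask applied in the envelope's spectral space (an illustration with an assumed grid size, not the modified FLASH code; whether the boundary shell is retained is a convention):

```python
import numpy as np

# Sketch of a k_cut truncation: zero all Fourier modes of a (real) 3-D field
# whose wavenumber magnitude exceeds k_cut. Grid size and field are illustrative.
N, k_cut = 16, 3
k = np.fft.fftfreq(N, d=1.0 / N)                      # integer wavenumbers
kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
mask = np.sqrt(kx**2 + ky**2 + kz**2) <= k_cut        # keep |k| <= k_cut

v = np.random.default_rng(0).standard_normal((N, N, N))
v_trunc = np.fft.ifftn(np.fft.fftn(v) * mask).real    # truncated field
```

Because the mask depends only on $|\kvec|$, it preserves the Hermitian symmetry of the spectrum, so the truncated field remains real.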
\begin{figure}[!ht] \centering \includegraphics[width=\fwidth, trim= 20 25 30 30, clip=true]{fig18} \caption{\bc{}~coefficient \textit{vs.} Fourier truncation mode, $\bc(\kcut)$, at $\rek=10$ of instabilities generated by \lamABC{} flows.} \label{fig:xiVkcut} \end{figure} \subsubsection{Large values of $Re$} We now turn our focus to large values of the Reynolds number, which display a finite growth-rate $\sigma$ at $q\to0$, see fig.~\ref{fig:ABC}. Fig.~\ref{fig:ABC_highReG} shows the growth-rate $\sigma$ in a lin-log scale for four different values of the Reynolds number. Unlike the small values of $Re$ examined before, here it is clearly demonstrated that above a critical value of $Re$ the growth-rate $\sigma$ reaches an asymptotic value independent of $q$. At first, this finite growth-rate seems to violate momentum conservation. Indeed, momentum conservation requires that modes with $q=0$, corresponding to uniform flows, do not grow. \begin{figure}[!ht] \centering \includegraphics[width=\fwidth , trim= 0 8 12 5 , clip=true]{fig19} \caption{ Growth-rate as a function of $q$ for the \ABC{} flow and for large values of $Re$.} \label{fig:ABC_highReG} \end{figure} This conundrum is resolved by looking at the projection of the unstable modes on the large scales. In fig.~\ref{fig:ABC_E0}, we plot the ratio $E_0/E_{tot}$ as a function of $q$ for the same values of $Re$ as used in fig.~\ref{fig:ABC_highReG}. Unlike the small $Re$ cases examined previously, for large $Re$ this energy ratio decays to zero at small values of $q$ and appears to follow the power law $E_0/E_{tot}\propto q^4$. Therefore, at $q=0$, the energy at large scales $E_0$ is zero and momentum conservation is not violated in the $q=0$ limit.
\begin{figure}[!ht] \centering \includegraphics[width=\fwidth , trim= 0 8 5 5 , clip=true]{fig20} \caption{ $E_0/E_{tot}$ ratio as a function of $q$ for the \ABC{} flow and for large values of $Re$.} \label{fig:ABC_E0} \end{figure} \subsubsection{Small and large-scale instabilities} \label{sec:small} It appears that there are two distinct behaviors: the first one for which $\lim_{q\to0}\sigma =0$ and $\lim_{q\to0}E_0/E_{tot}>0$ when $Re$ is small, and the second one for which $\lim_{q\to0}\sigma >0$ and $\lim_{q\to0}E_0/E_{tot}=0$ when $Re$ is large. We argue that there is a second critical Reynolds number $\recS$ such that flows for which $\rec < Re < \recS$ show the first behavior while flows with $\recS < Re$ show the second behavior. This second critical value is related to the onset of small-scale instabilities. To demonstrate this claim we use a simple model. We consider the evolution of two modes, one at large scales, $v_q$, and one at small scales, $v_{_Q}$. These modes are coupled together by an external field $U$. In the absence of this coupling, the large-scale mode $v_q$ decays while the evolution of the small-scale mode $v_{_Q}$ depends on the value of the Reynolds number. The simplest model that satisfies these constraints, is dimensionally correct, and leads to either an \AKA{}-type instability $\sigma\propto q$ or a negative eddy-viscosity instability $\sigma \propto q^2$ is: \begin{eqnarray} \frac{d}{dt} v_q =& -\nu q^2 v_q &+ U q^nQ^{1-n} v_{_Q} \, ,\\ \frac{d}{dt} v_{_Q} =& U Q v_q &+ \sigma_{_Q} v_{_Q} \, . \end{eqnarray} The index $n$ takes the value $n=1$ if an \AKA{} instability is considered and $n=2$ if a negative eddy-viscosity instability is considered. Note that for $q=0$ the growth of $v_q$ is zero, as required by momentum conservation. $\sigma_{_Q}=sUQ -\nu Q^2$ gives the small-scale instability growth-rate, which is positive if $Re=U/(\nu Q)>1/s=\recS$.
The simplicity of the model allows for an analytical calculation of the growth-rate and the eigenmodes. Despite its simplicity, it can reproduce most of the results obtained here in the $q\ll Q$ limit. The general expression for the growth-rate is given by $\sigma = \frac{1}{2}\left[\sigma_{_Q} -\nu q^2 \pm \sqrt{ (\sigma_{_Q}+\nu q^2)^2 + 4Q^{2-n}q^nU^2 }\, \right]$ and the eigenmode satisfies $v_q/v_{_Q} =U q^nQ^{1-n}/(\sigma+\nu q^2).$ First, we focus on large values of $\nu$ such that $\sigma_{_Q}\simeq-\nu Q^2 <0$. For $n=1$, the growth-rate $\sigma$ and the energy ratio ${E_0}/{E_{tot}}= {v_q^2}/{(v_q^2+v_{_Q}^2)}$ are given to first order in $q$ by \begin{align} \sigma \simeq \frac{U^2 q}{\nu Q} \quad \mathrm{and}\quad \frac{E_0}{E_{tot}} \simeq \frac{1}{1+Re^2}. \label{eq:mdlA} \end{align} In the same limit, for $n=2$ we obtain \begin{align} \sigma \simeq \nu (Re^2-1)q^2 \quad \mathrm{and}\quad \frac{E_0}{E_{tot}} \simeq \frac{1}{1+Re^2}. \label{eq:mdlB} \end{align} The critical Reynolds number for the large-scale instability is given by $\rec=1$. Both of these results, eqs.~\eqref{eq:mdlA} and \eqref{eq:mdlB}, are in agreement with the results demonstrated in figs.~\ref{fig:sigVqVreAKA}, \ref{fig:specAKAfracq}, \ref{fig:specAKAfracRe}, \ref{fig:sigVqVreRob}, \ref{fig:efVqRob}, \ref{fig:e0VqVreRob}. The behavior changes when a small-scale instability exists, $\sigma_{_Q}>0$. This occurs when $sUQ>\nu Q^2$, i.e. above the critical Reynolds number $\recS=1/s$. For large $Re\gg \recS$ we thus expect $\sigma_{_Q} \simeq sUQ>0$. In this case, for $n=1$ to first order in $q$, we have: \begin{align} \sigma \simeq \sigma_{_Q} \quad \mathrm{and}\quad \frac{E_0}{E_{tot}} \simeq \frac{q^2}{s^2Q^2} \label{eq:mdlC} \end{align} while for $n=2$, we obtain: \begin{align} \sigma \simeq \sigma_{_Q} \quad \mathrm{and}\quad \frac{E_0}{E_{tot}} \simeq \frac{q^4}{s^2Q^4}. 
\label{eq:mdlD} \end{align} The model is thus also in agreement with the scalings observed in figs.~\ref{fig:ABC_highReG},~\ref{fig:ABC_E0}. The transition from one behavior to the other occurs at the onset of small-scale instability, $\recS$. It is thus worth pointing out that the results of the FLASH code showed that the transition from $\lim_{q\to0}\sigma =0$ modes to $\lim_{q\to0}\sigma > 0$ occurs at the value of $Re$ for which the small-scale instability of the \ABC{} flow starts, $\recS \simeq13$ \cite{podvigina_non-linear_1994}. This further verifies that the observed transition is due to the development of small-scale instabilities. We also note here that both the Roberts flow and the $Fr87$ flow given in eq.~\eqref{eq:Fr87:flow} are invariant under translations along the $z$-direction. This implies that each $q_z$ mode evolves independently without coupling to other $k_z$ modes. The onset of small-scale instabilities $\recS$ for $q=0$ in this case then corresponds to the onset of two-dimensional instabilities. Two-dimensional flows forced at the largest scale of the system, however, are known to be stable at all Reynolds numbers \cite{marchioro1986example}. This result originates from the fact that two-dimensional flows conserve both energy and enstrophy, so that small scales cannot be excited without exciting large scales at the same time. This is the reason why no $\recS$ was observed for these flows. Finally, this model provides a way to distinguish between the presence or absence of the \AKA{} effect for values of $Re$ larger than the critical Reynolds number for small-scale instabilities, $\recS$, by looking at the scaling of the energy in the large scales with respect to the scale separation $q/Q$. In the presence of an \AKA{} effect the scaling of eq.~\eqref{eq:mdlC} is expected, while, in the absence of an \AKA{} effect, the scaling of eq.~\eqref{eq:mdlD} is expected if a negative eddy-viscosity is present.
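The asymptotic expressions above can also be checked directly by diagonalizing the $2\times2$ system numerically. A sketch for the stable small-scale branch ($\sigma_{_Q}\simeq-\nu Q^2$) with $n=1$ and illustrative parameter values:

```python
import numpy as np

# Two-mode model, n = 1 (AKA type), stable small-scale branch sigma_Q = -nu*Q**2:
#   d v_q/dt = -nu*q**2 * v_q + U * q**n * Q**(1-n) * v_Q
#   d v_Q/dt =  U*Q     * v_q - nu*Q**2             * v_Q
# Parameter values are illustrative, chosen with q << Q.
nu, U, Q, q, n = 5.0, 1.0, 1.0, 1e-3, 1
M = np.array([[-nu * q**2, U * q**n * Q**(1 - n)],
              [U * Q,      -nu * Q**2]])

evals, evecs = np.linalg.eig(M)
i = np.argmax(evals.real)                     # fastest-growing eigenmode
sigma = evals[i].real
vq, vQ = evecs[:, i]
E0_frac = abs(vq)**2 / (abs(vq)**2 + abs(vQ)**2)
Re = U / (nu * Q)
```

To leading order in $q$ this reproduces eq.~\eqref{eq:mdlA}: $\sigma\simeq U^2q/\nu Q$ and $E_0/E_{tot}\simeq1/(1+Re^2)$; the other three limits can be checked the same way by changing $n$ and the sign of $\sigma_{_Q}$.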
\subsection{Turbulent \dombre{} flows} \label{subsec:turb} As discussed in the introduction, the driving flow does not need to be laminar in order to use Floquet theory. It is only required to obey the $2\pi \ell$-periodicity. It is thus worth considering large-scale instabilities of a turbulent \ABC{} flow that satisfies the forcing periodicity. This amounts to a turbulent flow driven by an \ABC{} forcing in a periodic cube whose size equals the forcing period $2\pi \ell$. Due to the stationarity of the laminar \ABC{} flow, it can be excluded as a possible candidate for an \AKA{} instability. However, this is not true of a turbulent \ABC{} flow, since it evolves in time. We thus cannot infer a priori whether or not a turbulent \ABC{} flow leads to an \AKA{} instability. To test this possibility, we consider the linear evolution of the large-scale perturbations $\vlin$ driven by an \dombre{} flow at $Re=50$, that is, beyond the onset of the small-scale instability $\recS \simeq 13$. The turbulent \dombre{} flow $\bf U$ is obtained by solving the Navier-Stokes eqs.~\eqref{eq:linNS:K} in the domain $(2\pi \ell)^3$ driven by the forcing function ${\bf F}^{ABC} ={\bf U}^{ABC}$. The code is executed until the flow reaches saturation. The evolution of the large-scale perturbations is then examined by solving eq.~\eqref{eq:incomp:Flq} with the FLASH code coupled to the Navier-Stokes eqs.~\eqref{eq:linNS:K}. The kinetic energy $E_U$ of the turbulent \dombre{} flow $\bf U$ is shown in fig.~\ref{fig:linET_TRB}. \begin{figure}[!ht] \centering \includegraphics[width=\fwidth , trim= 0 10 5 5 , clip=true]{fig21} \caption{Energy evolution of the turbulent \dombre{} driving flow at $Re=122$.} \label{fig:linET_TRB} \end{figure} The energy $E_U$ fluctuates strongly around a mean value. The evolution of the energy $E_{tot}$ of the perturbations $\bf v$ for different values of $q$ is shown in the inset of fig.~\ref{fig:TRB_GR}.
$E_{tot}$ shows an exponential increase, from which the growth-rate can be measured. The growth-rate $\sigma$ as a function of the wavenumber $q$ is shown in fig.~\ref{fig:TRB_GR}, while the ratio $E_{0}/E_{tot}$ is shown in fig.~\ref{fig:TRB_E0}. \begin{figure}[!ht] \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] (image) at (0,0) {\includegraphics[width=\fwidth,trim= 5 5 10 5, clip=true]{fig22a}}; \begin{scope}[x={(image.south east)},y={(image.north west)}] \node[anchor=south west,inner sep=0] (image) at (0.2,0.425) {\includegraphics[width=5cm,trim= 5 5 5 5, clip=true]{fig22b}}; \end{scope} \end{tikzpicture} \caption{Growth-rate of the perturbations to the turbulent \dombre{} driving flow \textit{vs.} the wavenumber, $\sigma(q)$. The inset shows the exponential growth of the large-scale perturbations for various $q$.} \label{fig:TRB_GR} \end{figure} \begin{figure}[!ht] \centering \includegraphics[width=\fwidth]{fig23} \caption{$E_0/E_{tot}$ ratio \textit{vs.} the wavenumber $q$.} \label{fig:TRB_E0} \end{figure} The growth-rate of the large-scale instabilities appears to reach a finite value in the limit $q\to0$, just like for laminar \ABC{} flows above the small-scale critical Reynolds number $\recS$. However, the ratio $E_0/E_{tot}$ does not scale like $q^4$, as for laminar \dombre{} flows, but like $q^2$. As discussed in the previous section, this indicates that the turbulent \dombre{} flow is \AKA{}-unstable. This can have possible implications for the saturated stage of the instability, which we examine next. \subsection{Non-linear calculations and bifurcation diagram} \label{subsec:fullNS} We further pursue our investigation of large-scale instabilities by examining the non-linear behavior of the flow close to the instability onset. We restrict ourselves to the case of the \dombre{} flow, whose non-linear behavior has been extensively studied, albeit in the absence of scale separation \cite{dombre_chaotic_1986}.
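The growth-rates quoted in this section are extracted from the exponential rise of $E_{tot}(t)$; a minimal sketch of such a measurement on synthetic data (the growth-rate value and the noise model are illustrative assumptions, not output of the simulations):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma_true = 0.35                       # assumed growth-rate (illustrative)
t = np.linspace(0.0, 40.0, 400)
# the energy is quadratic in v, so E_tot grows as exp(2 sigma t);
# multiplicative noise mimics the fluctuations of a turbulent driving flow
E_tot = 1e-8 * np.exp(2.0 * sigma_true * t) \
        * np.exp(0.05 * rng.standard_normal(t.size))

# least-squares fit of log E_tot(t): the slope is 2 sigma
slope, intercept = np.polyfit(t, np.log(E_tot), 1)
sigma_fit = 0.5 * slope
print(sigma_fit)
```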
The linear stability of the \ABC{} flow in the minimum domain size has been studied in \cite{podvigina_non-linear_1994} and more recently in \cite{jones_dynamo_2014}. These studies have shown that the \ABC{} flow destabilizes at $\recS \simeq 13$. To investigate the non-linear behavior of the flow in the presence of scale separation, we perform a series of DNS of the forced Navier-Stokes equation (eq.~\eqref{eq:fullNS}) in triply periodic cubic boxes of size $2\pi L$. The forcing maintaining the flow is ${\bf F}^{ABC} = \frac{\sqrt{2}}{\sqrt{3}} \nu |{\bf K}|^2 {\bf U}^{ABC}$ so that the laminar solution of the flow is the \ABC{} flow \cite{dombre_chaotic_1986} normalized to have unit energy. Four different box sizes are considered: $KL=1,5,10$ and $20$. For each box size and each value of $Re$, the flow is initialized with random initial conditions and evolved until a steady state is reached. Fig.~\ref{fig:NrgVreK} shows the saturation level of the total energy $E_V$ at steady state as a function of $Re$ for the four different values of $KL$. At low \Rn{}, the laminar solution ${\bf V=U}^{ABC}$ is the only attractor, and so the energy is $E_V=1$. At the onset of the instability the total energy decreases. A striking difference appears between the $KL=1$ case and the other three cases. For the $KL=1$ case, the first instability appears at $\recS \simeq 13$, in agreement with previous work \cite{podvigina_non-linear_1994,jones_dynamo_2014}. By definition, only small-scale instabilities are present in the $KL=1$ case ({\it i.e.} instabilities that do not break the forcing periodicity). For the other three cases, which allow the presence of modes of larger scale than the forcing scale, the flow becomes unstable at a much smaller value: $\rec \simeq 3$. This value of $\rec$ is in agreement with the results obtained in section \ref{subsec:Dom} for the large-scale instability driven by the negative eddy-viscosity mechanism.
\begin{figure}[!htb] \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] (image) at (0,0) {\includegraphics[width=\fwidth,trim= 35 25 25 25, clip=true]{fig24a}}; \begin{scope}[x={(image.south east)},y={(image.north west)}] \node[anchor=south west,inner sep=0] (image) at (0.52,0.27) {\includegraphics[width=3.85cm,trim= 25 25 25 25, clip=true]{fig24b}}; \end{scope} \end{tikzpicture} \caption{Bifurcation: total energy \textit{vs.} \Rn{}, $E_{tot}(Re)$, for different scale separations $K\in \lbrace 1 ; 5; 10 ; 20 \rbrace $. The inset shows a zoom of the graph for $\rek \in [ 2;5 ]$.} \label{fig:NrgVreK} \end{figure} The energy curves for the forcing modes $KL \geq 5$ all collapse on the same curve. This indicates that not only the growth-rate but also the saturation mechanism for these three simulations are similar. Further insight into the saturation mechanism can be obtained by looking at the energy spectra. Fig.~\ref{fig:Ek} shows the energy spectrum of the velocity field at the steady state of the simulations. Two types of spectra are plotted. In fig.~\ref{fig:Ek}, spectra plotted using lines and denoted as $k$-bin display the energy spectrum collected in bins where modes $\bf k$ satisfy $ n_1-1/2 < {|\bf k}|L \leq n_1+1/2$, with $n_1$ a positive integer. $E(k)$ then represents the energy in the bin $n_1=k$. In fig.~\ref{fig:Ek}, spectra plotted using red dots and denoted by $k^2\!$-bin display the energy spectrum collected in bins where modes $\bf k$ satisfy $ |{\bf k}|^2L^2=n_2$, with $n_2$ a positive integer. Since ${\bf k}L$ is a vector with integer components $m_x$, $m_y$ and $m_z$, its norm $k^2L^2=m_x^2+m_y^2+m_z^2$ is also a positive integer. $E(k)$ then represents the energy in the bin $n_2=k^2L^2$. This type of spectrum provides more precise information about the energy distribution among modes. In our case, it helps separate $K$ modes from $K\pm1/L$ modes and highlights the three-modes interaction.
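The two binning conventions described above can be sketched as follows (random modal energies stand in for the DNS output; this is an illustration, not the production diagnostic):

```python
import numpy as np

N, L = 16, 1.0
m = np.fft.fftfreq(N, d=1.0 / N).astype(int)     # integer mode numbers m_x, m_y, m_z
mx, my, mz = np.meshgrid(m, m, m, indexing="ij")
k2L2 = mx**2 + my**2 + mz**2                     # |k|^2 L^2, always an integer

rng = np.random.default_rng(1)
E_modes = rng.random(k2L2.shape)                 # stand-in modal energies

# k-bin spectrum: bin n1 collects modes with n1 - 1/2 < |k| L <= n1 + 1/2
n1 = np.rint(np.sqrt(k2L2)).astype(int)
E_k = np.bincount(n1.ravel(), weights=E_modes.ravel())

# k^2-bin spectrum: bin n2 collects modes with |k|^2 L^2 = n2 exactly
E_k2 = np.bincount(k2L2.ravel(), weights=E_modes.ravel())

# e.g. |k|^2 L^2 = 25 and 26 fall in distinct k^2-bins but share the k-bin n1 = 5;
# both conventions conserve the total energy
print(E_k.sum(), E_k2.sum(), E_modes.sum())
```

The $k^2\!$-bin array is indexed directly by $n_2=k^2L^2$, which is why its length grows like the square of that of the $k$-bin array.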
The $k=K\pm1/L$ modes as well as the largest scale mode $kL=1$ that were used in the three-modes model are shown by blue circles in the spectra. The drawback of $k^2\!$-bin spectra is their memory consumption. They have a number of bins equal to the square of the number of bins of standard $k$-bin spectra. However, since spectra are not output at every time-step, this inconvenience is limited. \begin{figure}[!htb] \centering \begin{tikzpicture} \node[anchor=south west,inner sep=0] (image) at (0,0) {\includegraphics[width=\fwidth]{fig25a}}; \begin{scope}[x={(image.south east)},y={(image.north west)}] \node[anchor=south west,inner sep=0] (image) at (0.0,0.5) {\includegraphics[width=\hwidth,trim= 0 5 5 5, clip=true]{fig25b}}; \node[anchor=south west,inner sep=0] (image) at (0.5,0.5) {\includegraphics[width=\hwidth,trim= 0 5 5 5, clip=true]{fig25c}}; \node[anchor=south west,inner sep=0] (image) at (0.0,0.0) {\includegraphics[width=\hwidth,trim= 0 5 5 5, clip=true]{fig25d}}; \node[anchor=south west,inner sep=0] (image) at (0.5,0.0) {\includegraphics[width=\hwidth,trim= 0 5 5 5, clip=true]{fig25e}}; \end{scope} \end{tikzpicture} \caption{Energy spectra, $E(k)$, for different scale separations $K\in \lbrace 1 ; 5; 10 ; 20 \rbrace $.} \label{fig:Ek} \end{figure} The plots of the spectra show that the most energetic modes are the modes close to the forcing scale and the largest scale mode $kL=1$. This is true even for the largest scale separation examined, $KL=20$. We note that the largest scale mode is not the most unstable one, as seen in all the cases examined (see figs.~\ref{fig:sigVqVreAKA_lin}, \ref{fig:linRob} and \ref{fig:ABC}). Despite this fact, it appears that the $kL=1$ mode is the dominant one controlling saturation. The exact saturation mechanism, however, is beyond the scope of this work. \section{Conclusion} \label{sec:ccl} In this work, we examined in detail the large-scale hydrodynamic instabilities of a variety of flows.
Using the Floquet framework as well as simplified models, we were able to investigate the stability of periodic flows to large-scale perturbations over a wide parameter range. Our work verifies the asymptotic results derived in the past but also covers cases beyond their range of validity, including turbulent flows. For the $Fr87$ flow (see eq.~\eqref{eq:Fr87:flow}) at small values of $Re$, the instability growth rate scales like $\sigma \propto q\; Re\,$, with most of the energy in the large scales, $1-E_0/E_{tot} \propto Re^2$. It is present for any arbitrarily small value of the Reynolds number, provided that the scale separation is large enough. When $Re$ becomes of order one, this behavior changes. The growth-rate saturates in $Re$, and most of the energy of the most unstable mode is concentrated in the small scales. Flows in the absence of an \AKA{} effect, like the \ABC{} and Roberts flows, show a negative eddy-viscosity scaling. The instability appears only above a critical value of the Reynolds number $\rec$, which was found to be $\rec \simeq 2$ for the Roberts flow and $\rec \simeq 3$ for the \dombre{} flow. The growth-rate follows the scaling $\sigma \propto \nu(b Re^2-1) q^2$. The value of $b$ can be calculated from a three-modes model for the Roberts flow and was found to be $b=1/4$. The three-modes model, however, failed to predict the \bc{}~coefficient of the \dombre{} flow, because more modes contribute to the instability. For the \dombre{} flow, the negative eddy-viscosity instability was shown to stop at a second critical Reynolds number, $\recS \simeq 13$, where the flow becomes unstable to small-scale perturbations. For values of $Re$ larger than $\recS$, the growth-rate remains finite and independent of $q$ even in the $q\to 0 $ limit. On the contrary, the fraction of energy at the largest scale becomes dependent on $q$, decreasing as $E_0/E_{tot} \propto q^4$ in the $q\to0$ limit.
This behavior is well described by the two-modes model explained in sec.~\ref{sec:small}. This model also predicts that, in the case of an \AKA{} instability, the ratio $E_0/E_{tot}$ scales like $E_0/E_{tot} \propto q^2$ for $Re>\recS$. This scaling was indeed found by examining the large-scale instability of a turbulent \ABC{} flow, indicating that a turbulent \dombre{} flow is \AKA{}-unstable. Our study was further carried to the non-linear regime, where it was shown that, in the presence of scale separation, the forcing scale and the largest scale of the system are the most dominant energetically. The persistence of this behavior at larger values of $Re$ remains to be examined. \acknowledgments This work was granted access to the HPC resources of MesoPSL financed by the Region Ile de France and the project Equip@Meso (reference ANR-10-EQPX-29-01) of the programme Investissements d'Avenir supervised by the Agence Nationale pour la Recherche and the HPC resources of GENCI-TGCC-CURIE \& GENCI-CINES-JADE (Project No. x20162a7620) where the present numerical simulations have been performed. \section{Appendix: FLASH} \label{sec:apx:FLASH} A pseudo-spectral method is adopted to compute numerically eqs.~\eqref{eq:ns:Flq} and \eqref{eq:incomp:Flq}. The linear terms are computed in Fourier space. All the terms involving the driving flow are computed in physical space and made incompressible by solving the Poisson problem in the periodic domain, using: \begin{align} \bm{\Psi}^{(2)} = -\Laplace^{-1} (\roT)^2 \bm{\Psi}^{(1)} \,. \end{align} The main steps of the algorithm are written below. In this algorithm, \Four{} and $\Four^{-1}$ denote direct and inverse fast Fourier transforms. $\vect{AUX}^{(1)}$ and $\vect{AUX}^{(2)}$ are two auxiliary vector fields. $\vect{AUX}^{(1)}$ is real and $\vect{AUX}^{(2)}$ is complex.
\begin{algorithm}[!htb] \caption*{Floquet Linear Analysis of Spectral Hydrodynamic (FLASH)} \begin{algorithmic}[1] \REQUIRE $\nu$, $T$, $dt$, $\qvec$, $\vfloq{}^{(0)}$, $\Ulam$ \STATE $\Wlam= \roT \Ulam$ \STATE $n=0$ \STATE $\vect{V}^{(n)}=\Four(\vlin{}^{(n)})$ \WHILE{$t < T$} \STATE $\vect{AUX}^{(1)}= \Ulam \times \Four^{-1} (\imath ( \kvec + \qvec) \times \vect{V}^{(n)}) - \Wlam \times \Four^{-1} (\vect{V}^{(n)}) $ \STATE $\vect{AUX}^{(2)}=- \vert\vert \kvec + \qvec\vert\vert^{-2} ( \kvec + \qvec) \times ( \kvec + \qvec) \times \mathcal{F} [\vect{AUX}^{(1)}]$ \STATE $\vect{V}^{(n+1)} = \vect{V}^{(n)} + dt ( \vect{AUX}^{(2)} - \nu \vert\vert \kvec + \qvec\vert\vert^{2} \vect{V}^{(n)} )$ \STATE $n=n+1$ , $t=t+dt$ \ENDWHILE \end{algorithmic} \end{algorithm} To carry out the computations with greater precision, a fourth-order Runge-Kutta method is used instead of the simple Euler method at line 7 of the algorithm. The Fourier expansions are also truncated at $1/3$ to avoid aliasing errors. The code is parallelized with MPI and uses many routines from the GHOST code \cite{mininni_nonlocal_2008}. Most of the DNS are done at $32^3$ and $64^3$ resolutions. Convergence tests show that these resolutions are sufficient for the range of \Rn{} studied.
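A minimal Python transcription of one time step of the algorithm, on a small grid with illustrative parameters (the production code instead uses the fourth-order Runge-Kutta step, $1/3$ truncation and MPI mentioned above):

```python
import numpy as np

N, nu, dt = 8, 0.1, 1e-3                         # illustrative grid and parameters
q = np.array([0.0, 0.0, 0.1])                    # Floquet wavenumber (non-integer)
k1 = np.fft.fftfreq(N, d=1.0 / N)
k = np.stack(np.meshgrid(k1, k1, k1, indexing="ij"))   # shape (3, N, N, N)
kq = k + q[:, None, None, None]
kq2 = np.sum(kq**2, axis=0)                      # never zero since q_z is not an integer

x = 2.0 * np.pi * np.arange(N) / N
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
# ABC driving flow U; for ABC flows curl U = U, so W = U
U = np.stack([np.sin(Z) + np.cos(Y), np.sin(X) + np.cos(Z), np.sin(Y) + np.cos(X)])
W = U.copy()

fft = lambda f: np.fft.fftn(f, axes=(1, 2, 3))
ifft = lambda f: np.fft.ifftn(f, axes=(1, 2, 3))

def project(Vhat):
    # incompressible projection, equivalent to -(kq x kq x Vhat)/|kq|^2 (line 6)
    return Vhat - kq * np.sum(kq * Vhat, axis=0) / kq2

rng = np.random.default_rng(2)
V = project(fft(rng.standard_normal((3, N, N, N))))    # spectral perturbation

# lines 5-7 of the algorithm, with a simple Euler step
aux1 = np.cross(U, ifft(1j * np.cross(kq, V, axis=0)), axis=0) \
       - np.cross(W, ifft(V), axis=0)
aux2 = project(fft(aux1))
V_new = V + dt * (aux2 - nu * kq2 * V)
print(np.abs(np.sum(kq * V_new, axis=0)).max())        # divergence stays ~ 0
```

The incompressibility of $\vect{V}^{(n+1)}$ with respect to $\kvec+\qvec$ is preserved to machine precision, as required by eq.~\eqref{eq:incomp:Flq}.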
\section{Introduction} \label{introduction} $N=4$ supersymmetric Yang-Mills theory is believed to possess an electric-magnetic duality symmetry. When the gauge group is maximally broken, to an Abelian subgroup, this duality interchanges the massive electrically charged particles corresponding to the elementary fields with magnetically charged states arising from classical soliton solutions. If the unbroken symmetry has a non-Abelian component, then there exist massless particles carrying electric-type charge. Although there are no isolated massless solitons, certain multisoliton solutions have degrees of freedom that are naturally interpreted as corresponding to massless magnetic monopoles. These are manifested as clouds of non-Abelian field that surround one or more massive monopoles and shield part of their non-Abelian magnetic charge~\cite{Lee:1996vz}. Our aim in this paper will be to explore the properties of these massless monopoles by studying the interactions between these clouds. To explain this in more detail, consider an SU($N$) gauge theory with an adjoint Higgs field whose asymptotic value can be brought into the form \begin{equation} \Phi = {\rm diag}\, (s_1, s_2, \dots, s_N) \end{equation} with $s_1 \le s_2 \le \dots \le s_N$. If the $s_i$ are all distinct, the gauge symmetry is broken maximally, to U(1)$^{N-1}$, and there are $N-1$ topological charges. If the asymptotic magnetic field is then written in the form $F_{ij} = \epsilon_{ijk} r_k \, Q_M /r^3$, with \begin{equation} Q_M = {\rm diag}\, (n_1, n_2-n_1, \dots, -n_{N-1}) \, , \end{equation} these topological charges are the $n_k$. We will refer to an SU($N$) monopole solution with such charges as being an $(n_1, n_2, \dots, n_{N-1})$ solution.
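As a small illustration of this convention, the charge matrix can be assembled directly from the topological charges $n_k$; the $(2,2,2,2,2)$ entry below corresponds to the SU(6) solutions studied later in this paper (the helper function is ours, introduced only for illustration):

```python
import numpy as np

def charge_matrix(n):
    """Q_M = diag(n_1, n_2 - n_1, ..., n_{N-1} - n_{N-2}, -n_{N-1})
    for an SU(N) solution with topological charges n = (n_1, ..., n_{N-1})."""
    n = np.asarray(n)
    return np.diag(np.concatenate(([n[0]], n[1:] - n[:-1], [-n[-1]])))

Q = charge_matrix([2, 2, 2, 2, 2])      # SU(6) (2,[2],[2],[2],2) charges
print(np.diag(Q))                        # diagonal entries 2, 0, 0, 0, 0, -2
print(np.trace(Q))                       # Q_M is traceless, as it must be in su(N)
```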
With this maximal symmetry breaking, one can identify $N-1$ fundamental monopoles~\cite{Weinberg:1979zt}, each carrying a single unit of one of the topological charges, with the mass of the $k$th being\footnote{For the remainder of this paper, we will assume that the gauge fields have been rescaled so as to set the gauge coupling $e$ to unity.} $(4\pi/e)(s_{k+1}-s_k)$. Each of these has four degrees of freedom --- three position variables and one U(1) phase. A BPS solution with arbitrary magnetic charges can be understood as being composed of appropriate numbers of the various species of fundamental monopoles and as living on a moduli space whose dimension is four times the total number of component monopoles. Our interest here is not in maximal gauge symmetry breaking, but rather in the alternative possibility, where $\Phi$ has degenerate eigenvalues. The unbroken symmetry is then enhanced to a non-Abelian group, and some of the fundamental monopoles become massless. It is instructive to follow the behavior of the classical solutions as this case is obtained from the maximally broken one by smoothly varying the eigenvalues of $\Phi$. One finds that an isolated fundamental monopole solution goes over to the vacuum solution as it becomes massless. The behavior of multimonopole solutions is more complex~\cite{Lu:1998br}. If the total magnetic charge remains purely Abelian, then the solution rapidly approaches its limiting form once the inverse of the smallest monopole mass becomes larger than the separations of the component monopoles. In addition, the moduli space of solutions and its naturally defined metric have smooth limits; in the examples where the metric for the case with nonmaximal breaking has been found directly, it is indeed the limit of the metric for the maximally broken case~\cite{Lee:1996vz,Houghton:1999qu}. If instead the magnetic charge has a non-Abelian component, none of these are the case. 
Not only do the moduli spaces not have smooth limits, but cases that are gauge equivalent in the massless limit have different dimensions for any finite monopole mass. These pathologies are clearly related to the long-range behavior of the non-Abelian fields and the non-normalizability of certain zero modes in the massless limit. A simple example of this arises in an SU(3) gauge theory. With maximal breaking, the unbroken symmetry is U(1)$\times$U(1), and there are two species of fundamental monopoles. If two of the eigenvalues of $\Phi$ are equal, the unbroken symmetry is SU(2)$\times$U(1) and one of the fundamental monopoles is massless. In the maximally broken case, the moduli spaces of the (2,0) and the (2,2) solutions have dimensions four and twelve, respectively. Yet, when the breaking is nonmaximal, an SU(2) gauge transformation can turn a (2,[0]) solution into a (2,[2]) solution, where the square brackets denote the massless species. On the other hand, the (2,[1]) solutions, which have a purely Abelian long-range field, have a well-defined moduli space, with a metric that is a smooth limit of the (2,1) metric. In this paper we will restrict ourselves to configurations with purely Abelian magnetic charge. Several such solutions with a single massless monopole are known. These include solutions~\cite{Weinberg:1982jh} with one massive and one massless monopole for SO(5) broken to SU(2)$\times$U(1), as well as SU(4) (1,[1],1) solutions~\cite{Weinberg:1998hn} and the SU(3) (2,[1]) solutions~\cite{Dancer:1992kj} referred to above, each with one massless and two massive monopoles. These SU(3) solutions will be of particular importance in our analysis. Their properties were studied in considerable detail by Dancer and collaborators~\cite{Dancer:1992kj,Dancer:1992kn,Dancer:1992hf,Dancer:1997zx}, and we will refer to them, and their $(N-1,[N-2],\dots,[1])$ generalizations for SU($N$), as Dancer solutions.
In all of these solutions there is a single non-Abelian cloud (spherical in the first case, ellipsoidal in the latter two) that encloses the massive monopole(s). Well inside this cloud, the magnetic field approximates the field, with both Abelian and non-Abelian magnetic components, that would be expected to arise just from the massive monopoles. The cloud effectively shields the non-Abelian components, so that outside the cloud one sees the purely Abelian field corresponding to the sum of the massive and massless monopole charges. The size of the cloud is determined by a single parameter, whose value has no effect on the total energy of the solution. Since our goal in this paper is to investigate the interaction between massless monopole clouds, we need solutions that have more than one cloud. One might have expected that these could be obtained simply by having more than one massless monopole. This turns out not to be so. For example, the SU($N$) (1,[1],\dots,[1],1) solutions contain $N-3$ massless monopoles, but only a single cloud, no matter how large $N$ becomes~\cite{Lee:1996vz}. [In fact, for $N>4$ these solutions are essentially embeddings of (1,[1],1) SU(4) solutions into the larger group.] A set of solutions that do have multiple clouds, and which we will focus on, are the (2,[2],[2],[2],2) solutions --- with four massive and six massless monopoles --- in the theory with SU(6) broken to U(1)$\times$SU(4)$\times$U(1). The structure of these solutions was studied in Ref.~\cite{Houghton:2002bz}. They can be viewed as containing two SU(3) Dancer solutions, each with a ``Dancer cloud'' enclosing two massive monopoles, embedded in disjoint subgroups of the SU(6). In addition, there are two larger ``SU(4) clouds'', enclosing both Dancer clouds, that are somewhat analogous to the cloud in the SU(4) (1,[1],1) solutions. 
As we will describe in more detail later, these four clouds are characterized not only by individual cloud-size parameters, but also by a number of additional parameters specifying their orientations with respect to the unbroken gauge group. Note that, even in this case, the number of clouds is less than the number of massless monopoles. A useful tool for studying the low-energy dynamics of multimonopole systems such as these is the moduli space approximation~\cite{Manton:1981mp}, which reduces the full field theory dynamics to that of a finite number of collective coordinates $z^a$. The latter are governed by the Lagrangian \begin{equation} L = {1\over 2} g_{ab} \dot z^a \dot z^b \, , \end{equation} where $g_{ab}$ is the metric on the moduli space of BPS solutions. This method was used to study the (1,[1],1) SU(4) solutions~\cite{Chen:2001ge}. Because the two massive monopoles lie in mutually commuting subgroups of the SU($4$), one would not expect them to interact directly with each other. This turns out to be the case; indeed, they can pass through each other undeflected. On the other hand, they do interact with, and exchange energy with, the cloud. In these interactions the cloud acts as if it were a thin shell, with the monopole-cloud interactions concentrated in the times when the massive monopole positions coincide with the shell. At large times the massive monopoles and the cloud decouple from each other and evolve independently. The interactions between the cloud and the massless monopoles are similar in the SU(3) Dancer solutions~\cite{Dancer:1992kn,Dancer:1992hf}, which have the additional feature that the massive monopoles, being both of the same type, also interact directly with each other. A word of caution is in order here. The essential idea underlying the moduli space approximation is that for a slowly moving soliton fluctuations off of the moduli space are energetically suppressed.
If the theory contains massless particles, this needs further examination, since excitation of these modes by radiation of massless particles is always energetically possible. However, since the source of the radiation is proportional to the time derivative of the bosonic fields, the radiation rate is expected to be small for low soliton velocities. This has been shown rigorously for configurations involving pairs of monopoles in theories where the massless gauge fields are all Abelian \cite{Manton:1988bn,Stuart:1994tc}. Although the validity of the approximation has not been demonstrated rigorously for the case where there are massless non-Abelian gauge fields, explicit comparison of the predictions of the moduli space approximation with numerical evolution of the full field equations in a spherically symmetric example~\cite{Chen:2001qt} shows that they agree as long as the configurations are slowly varying. Of course, this method requires that the moduli space metric be known. It can be obtained directly from the BPS solutions, if they are known explicitly. In some other cases indirect methods based on the mathematical properties of the moduli space can be used to obtain $g_{ab}$~\cite{Atiyah:1985dv,Lee:1996if,Gauntlett:1996cw}. However, for other cases it turns out to be easiest to resort to an alternative approach. The Atiyah-Drinfeld-Hitchin-Manin-Nahm (ADHMN) construction~\cite{Nahm:1982jb,Nahm:1981xg,Nahm:1981nb,Nahm:1983sv} is a powerful tool for obtaining BPS solutions. It is based on an equivalence between the Bogomolny equation for the fields in three-dimensional space and an ordinary differential equation for a set of matrix functions of a single variable, known as the Nahm data. The moduli space of Nahm data has its own naturally defined metric.
It has been shown for both the SU(2) theory~\cite{nakajima} and for the case of SU($N+1$) broken to U($N$)~\cite{takahasi}, and is believed to be true in general, that the moduli spaces of Nahm data and of BPS solutions are isometric.\footnote{In fact, we will demonstrate this equivalence for yet another example, the (1,[1],1) SU(4) solutions, in Sec.~\ref{oneoneone}.} [In particular, Dancer used this equivalence in his investigation of the dynamics of the SU(3) solutions, and worked with the Nahm data metric.] In this paper we will assume that the equivalence of the two metrics holds as well for our SU(6) example, and will work with the Nahm data metric. The remainder of this paper is organized as follows. In Sec.~\ref{Nahmsec} we review the relevant parts of the ADHMN construction, as well as the metric on the moduli space of Nahm data. We then show that if the metric for the SU($M+1$) Dancer solutions is known, then that for the $(M,[M],\dots,[M],M)$ solutions of SU($2M+2$) broken to U(1)$\times$SU($2M$)$\times$U(1) can be easily obtained. In Sec.~\ref{oneoneone} we illustrate this method by obtaining the metric for the (1,[1],1) SU(4) solutions and verify that the Nahm data metric thus obtained is in fact equivalent to the metric on the moduli space of BPS solutions, which had been found previously~\cite{Lee:1996vz,Chen:2001ge,Lee:1996kz}. Next, in Sec.~\ref{twotwotwo}, we turn to the moduli space metric for the SU(6) (2,[2],[2],[2],2) solutions. These are described by a total of 40 collective coordinates. Although the full metric can be obtained by the methods of Sec.~\ref{Nahmsec}, the result would be rather unwieldy for exploring the nature of the cloud dynamics. We therefore consider a restricted problem, that of a lower-dimensional submanifold of axially symmetric solutions, for which the metric has a relatively simple closed form expression but which still has enough structure to allow us to investigate nontrivial cloud dynamics. 
In Sec.~\ref{dynamics} we use the metric obtained in the previous section to explore the cloud-cloud interactions. Section~\ref{conclude} contains some concluding remarks. There is an Appendix, which contains some of the details of the moduli space metric calculation. \section{The ADHMN construction and the metric for $\bm{(M,[M],\dots,[M],M)}$ solutions} \label{Nahmsec} As was discussed in Sec.~\ref{introduction}, the moduli spaces of BPS solutions and of Nahm data are believed to be isometric. We work in this paper with the Nahm data metric, which is the more accessible of the two for the examples that we study. In this section we first review the essential elements of the ADHMN construction~\cite{Nahm:1982jb,Nahm:1981xg,Nahm:1981nb,Nahm:1983sv}, emphasizing the points that are relevant for the $(M,[M],\dots,M)$ solutions of SU($2M+2$). We then obtain a general expression for the metric of these SU($2M+2$) solutions, and briefly discuss an asymptotic special case. \subsection{Nahm data} The basic elements in the ADHMN construction\footnote{For a fuller description of the ADHMN construction, including a discussion of how the spacetime fields are obtained from the Nahm data, see Ref.~\cite{Weinberg:2006rq}.} are the Nahm data, a quadruple of matrix functions $T_\mu(s)$ ($\mu=0,1,2,3$) that obey the Nahm equation, \begin{equation} 0= {dT_i \over ds} + i[T_0,T_i] + {i\over 2} \epsilon_{ijk}[T_j,T_k] \, , \qquad i,j,k = 1,2,3 \, . \label{nahmeq} \end{equation} For charge $k$ solutions in an SU(2) theory with Higgs vacuum expectation value $v$, the $T_\mu(s)$ are $k\times k$ Hermitian matrices defined for $-v/2 \le s \le v/2$. For a general SU($N$) theory, the eigenvalues of the Higgs vacuum expectation value divide the range of $s$ into $N-1$ intervals, on each of which Eq.~(\ref{nahmeq}) must hold. (The boundary conditions at the ends of these intervals are somewhat involved; we describe them below for the cases we need.) 
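As an aside not contained in the text: for $2\times 2$ Nahm data with $T_0=0$, the Hermitian ansatz $T_i = \frac{1}{2} f_i(s)\sigma_i$ (no sum) reduces Eq.~(\ref{nahmeq}) to the Euler-top equations $f_i' = f_j f_k$ with $(i,j,k)$ cyclic. A short numerical sketch (with illustrative initial data) verifying this reduction and the conservation of the Euler constants $f_i^2 - f_j^2$:

```python
import numpy as np

sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]

def euler_rhs(f):
    # f_i' = f_j f_k, (i, j, k) cyclic
    return np.array([f[1] * f[2], f[2] * f[0], f[0] * f[1]])

def rk4_step(f, ds):
    k1 = euler_rhs(f)
    k2 = euler_rhs(f + 0.5 * ds * k1)
    k3 = euler_rhs(f + 0.5 * ds * k2)
    k4 = euler_rhs(f + ds * k3)
    return f + ds / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

f = np.array([1.0, 2.0, 3.0])          # illustrative initial data, away from the poles
c12 = f[0]**2 - f[1]**2                 # Euler-top constant of motion
for _ in range(100):
    f = rk4_step(f, 1e-3)
print(f[0]**2 - f[1]**2, c12)           # conserved along the flow

# matrix residual of the Nahm equation for T_0 = 0, T_i = f_i sigma_i / 2
T = [0.5 * f[i] * sigma[i] for i in range(3)]
dT = [0.5 * euler_rhs(f)[i] * sigma[i] for i in range(3)]
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
res = 0.0
for i in range(3):
    comm = sum(eps[i, j, k] * (T[j] @ T[k] - T[k] @ T[j])
               for j in range(3) for k in range(3))
    res = max(res, np.abs(dT[i] + 0.5j * comm).max())
print(res)                              # ~ 0: the ansatz solves the Nahm equation
```

The pole boundary condition of the text corresponds to $f_i \sim -1/(s-s_L)$, whose residues indeed form the irreducible two-dimensional representation of SU(2).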
The dimension of the $T_\mu$ varies from interval to interval, being determined on each by the corresponding magnetic charge. If the $T_\mu$ are of the same size, $k\times k$, on two adjacent intervals, then there are additional ``jumping data'', forming a $2k$-component complex vector, associated with the boundary between these intervals. For later reference, note that if the $T_\mu(s)$ satisfy the Nahm equation, then so do the $\tilde T_\mu$ defined by \begin{eqnarray} \tilde T_i(s) &=& R_{ij} T_j(s) + B_i \, , \cr \tilde T_0(s) &=& T_0(s) \, , \label{rotANDtrans} \end{eqnarray} where $R_{ij}$ is an orthogonal matrix with determinant one. The $\tilde T_\mu(s)$ lead to a spacetime solution that is obtained from the original one by a combination of the spatial rotation specified by $R$ and a spatial translation by the vector $\bf B$. For SU($2M+2$) broken to U(1)$\times$SU($2M$)$\times$U(1) the eigenvalues of the Higgs vacuum expectation value are $s_L < s_0 <s_R$, with $s_0$ being $2M$-fold degenerate. These divide the range of $s$ into a ``left'' interval $[s_L,s_0]$, a ``right'' interval $[s_0,s_R]$, and $2M-1$ intervals of zero width at $s=s_0$. These correspond to two species of massive monopoles, with masses \begin{equation} M_L = 4\pi(s_0 - s_L) \, , \qquad M_R = 4\pi( s_R- s_0) \, , \end{equation} and $2M-1$ species of massless fundamental monopoles. For the $(M, [M], \dots, [M], M)$ solutions we need two sets of $M\times M$ matrices, $T_\mu^L(s)$ and $T_\mu^R(s)$, defined on the left and right intervals, respectively.\footnote{The spacetime fields are obtained from sums of integrals over the various intervals, with the integrands obtained by solving a differential equation involving the Nahm matrices on the corresponding interval. Because the integrals for the zero-width intervals vanish, the corresponding Nahm matrices have no effect on the spacetime fields. 
For more details, see Ref.~\cite{Weinberg:2006rq}.} These obey Eq.~(\ref{nahmeq}) subject to the boundary condition that, for $M>1$, the $T_i^L$ ($T_i^R$) have poles at $s_L$ ($s_R$) with the residues forming an $M$-dimensional irreducible representation of SU(2). Except for these poles, the $T_\mu^L$ and $T_\mu^R$ must be everywhere nonsingular. Because the magnetic charge is the same on adjacent intervals, there are jump data associated with each of the $2M$ coincident boundaries at $s_0$. We write the data associated with the $F$th boundary as a $2M$-component vector $a_{\alpha r}^F$, where $r=1,2,\dots, M$ and $\alpha=1,2$. These jump data are required to satisfy the constraint \begin{equation} \left(\Delta T_j\right)_{rs} = \left[ T_j^L(s_0)\right]_{rs} - \left[T_j^R(s_0)\right]_{rs} = {1 \over 2}\sum_{F\alpha\beta} a_{\alpha s}^{F*} (\sigma_j)_{\alpha \beta} a_{\beta r}^F \, . \label{jumpeq} \end{equation} If we assemble the jump data into a $2M \times 2M$ matrix $A$ with $A_{\alpha r, F} = a_{\alpha r}^F$, and define an $M\times M$ matrix \begin{equation} (T_4)_{rs} = {1 \over 2}\sum_{F\alpha} a_{\alpha s}^{F*} a_{\alpha r}^F \end{equation} and a $2M\times 2M$ matrix \begin{equation} K_{\alpha r; \beta s} \equiv (\Delta T_j)_{rs} (\sigma_j)_{\alpha\beta} + (T_4)_{rs} \delta_{\alpha\beta} = \sum_F a_{\beta s}^{F*} a_{\alpha r}^F \, , \label{KmatrixDef} \end{equation} the jump data constraint becomes simply $K = AA^\dagger$. For later reference, it is important to note that this implies that the eigenvalues of $K$ must be positive. The general solution of this constraint is \begin{equation} A = K^{1/2} V \, , \label{Adecomp} \end{equation} where $V$ is unitary. 
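The decomposition in Eq.~(\ref{Adecomp}) is easy to verify numerically: build a Hermitian $K$ with positive eigenvalues, take its Hermitian square root, and check that $A = K^{1/2}V$ reproduces $K = AA^\dagger$ for any unitary $V$ (the dimensions below correspond to $M=2$, i.e. the SU(6) case; the random matrices are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
M = 2                                       # SU(2M+2) = SU(6), so K is 4 x 4
dim = 2 * M
B = rng.standard_normal((dim, dim)) + 1j * rng.standard_normal((dim, dim))
K = B @ B.conj().T                          # Hermitian with positive eigenvalues

# Hermitian square root of K via its eigendecomposition
w, P = np.linalg.eigh(K)
K_half = P @ np.diag(np.sqrt(w)) @ P.conj().T

# an arbitrary unitary V (from a QR decomposition of a random matrix)
V, _ = np.linalg.qr(rng.standard_normal((dim, dim))
                    + 1j * rng.standard_normal((dim, dim)))
A = K_half @ V
print(np.max(np.abs(A @ A.conj().T - K)))   # ~ 0: K = A A^dagger for any unitary V
```

The positivity of the eigenvalues of $K$ is what makes the Hermitian square root well defined, consistent with the constraint noted above.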
The freedom in the choice of $V$ reflects the existence of an unbroken U(1)$\times$SU($2M$) subgroup of the original SU($2M+2$) gauge group.\footnote{In the next subsection we will see how the effects of the remaining unbroken U(1) factor are manifested.} The crucial observation for us is that the dimensions and boundary conditions for the $T_\mu^L$ and $T_\mu^R$ are the same as those for the Nahm data of the SU($M+1$) Dancer solution. Apart from the gauge orientation angles and phases encoded in $V$, the new information specific to the SU($2M+2$) problem --- i.e., the parameters that describe the SU($2M$) clouds --- enters only through $T_4$, which can be chosen to be any Hermitian matrix, subject only to the constraint that the eigenvalues of $K$ must be positive. Thus, if the Nahm equation has already been solved for the SU($M+1$) Dancer problem, the only additional data needed for the $(M, [M], \dots, [M], M)$ solutions are the jump data, which are given by Eq.~(\ref{Adecomp}). \subsection{Gauge action} In addition to the spacetime symmetries described by Eq.~(\ref{rotANDtrans}), the Nahm equations have a set of invariances that are analogous to, although distinct from, the gauge transformations on the spacetime fields. If $g(s)$ is a unitary matrix of appropriate dimension, the Nahm equation (\ref{nahmeq}) and the jump equation (\ref{jumpeq}) are invariant under\footnote{Such transformations are often used to make $T_0(s)$ vanish identically.} \begin{eqnarray} T_\mu(s) &\rightarrow& \tilde T_\mu(s) = g(s) T_\mu(s) g^{-1}(s) + i \delta_{\mu 0} {dg\over ds} g^{-1}(s) \, , \cr \cr A_{\alpha r, F} & \rightarrow& \tilde A_{\alpha r, F} = g(s_0)_{rs} A_{\alpha s, F} \, . \end{eqnarray} When considering spacetime gauge transformations one distinguishes between local gauge transformations, which approach the identity at spatial infinity, and global gauge transformations, which are nontrivial as $r \rightarrow \infty$. 
While the former simply reflect the presence of redundant field components, the latter correspond to symmetries that are related to conserved gauge charges. When the gauge symmetry is unbroken, the number of global gauge transformations is equal to the dimension of the gauge group. If the symmetry is spontaneously broken, as it must be when monopoles are present, only those global gauge transformations that leave the asymptotic Higgs field invariant (i.e., those in the unbroken gauge group) lead to normalizable zero modes about a static solution, and only these correspond to physical motions on the moduli space. Similarly, we can distinguish between local gauge actions on the Nahm data, for which $g(s)=I$ at both boundaries, and global gauge actions, which have $g \ne I$ at one or both boundaries. Any of the latter that act on pole terms in the Nahm data will lead to nonnormalizable modes, and therefore do not contribute to the moduli space dynamics. This means that for the $(M, [M], \dots, M)$ solutions, which have poles at both boundaries, the only relevant global gauge actions are those that leave the pole terms invariant. These are proportional to the unit matrix and are of the form $g(s) = e^{i\chi(s)}I$. Because an $s$-independent gauge action proportional to the unit matrix would have no effect on the Nahm data, it is sufficient to consider the case where $\chi$ vanishes at one boundary, but not at the other; this leads to a zero mode corresponding to an unbroken U(1). The zero modes corresponding to the other unbroken generators do not arise from gauge actions, but instead correspond to variations of $V$. For the SU($M+1$) Dancer solutions there must be a pole at one boundary, but the only constraint at the other is that the Nahm data be nonsingular. There are then a total of $M^2$ independent normalizable global gauge zero modes, corresponding to the generators of the unbroken U($M$). 
These can be obtained from gauge actions for which $g$ is an $M$-dimensional unit matrix at the pole and proportional to one of the generators of U($M$) at the other boundary. \subsection{The moduli space metric} \label{generalitiesSec} The moduli space of Nahm data is the space of solutions of the Nahm and jump equations, but with solutions that are related by local gauge actions considered equivalent. The coordinates on this space are the collective coordinates $z_a$. Its tangent space at a given point is spanned by the variations of the Nahm data that preserve the Nahm and jump equations and that are orthogonal to the variations due to local gauge actions. For our solutions, the Nahm data consist of two quadruples of Nahm matrices, one for each interval, and jump data at the boundary at $s_0$. We can write these collectively as ${\cal T}=\{T^L_\mu(s), T^R_\mu(s), A\}$. The tangent vectors to the Nahm data moduli space must have a similar structure, and so can be written as ${\cal Y} = \{ Y^L_\mu(s), Y^R_\mu(s), Y\}$. The inner product of two such vectors is defined to be\footnote{Note that in the trace in the last term the indices run over $2M$ values, whereas in the first two terms the traces are of $M\times M$ matrices.} \begin{equation} \langle {\cal Y}, {\cal Y'} \rangle = \int_{s_L}^{s_0} ds \, {\rm Tr\,} Y^L_\mu(s) Y^{'L}_\mu(s) + \int_{s_0}^{s_R} ds \, {\rm Tr\,} Y^R_\mu(s) Y^{'R}_\mu(s) + {1\over 2}{\rm Tr\,} (Y Y^{'\dagger} + Y' Y^\dagger ) \, . \end{equation} An infinitesimal local gauge action is specified by a Hermitian matrix function $\Lambda(s)$ that is everywhere continuous and vanishes at $s_L$ and $s_R$. We denote its values on the left and right intervals by $\Lambda^L(s)$ and $\Lambda^R(s)$, and define $\Lambda(s_0) \equiv \Lambda^0$.
Its action on the Nahm data defines a vector ${\cal Y}_{\rm gauge}$ with \pagebreak \begin{eqnarray} (Y_{\rm gauge})_\mu^{L,R} &=& \delta_\Lambda T_\mu^{L,R} = \delta_{\mu 0} {d\Lambda^{L,R} \over ds} +i[T_\mu^{L,R}, \Lambda^{L,R}] \equiv D_\mu \Lambda^{L,R} \, , \cr \cr (Y_{\rm gauge})_{\alpha r, F} &=& (\delta_\Lambda A)_{\alpha r, F} = -i \Lambda^0_{rs} A_{\alpha s, F} \, , \end{eqnarray} where $D_j = i[T_j(s),~]$ and $D_0= d/ds + i[T_0(s),~]$. In order that $\cal Y$ be orthogonal to ${\cal Y}_{\rm gauge}$ for any choice of $\Lambda$, we must require that \begin{eqnarray} 0 &=& D_\mu Y_\mu^{L,R}(s) \, , \qquad s\ne s_0 \, , \label{backgroundgauge} \\ 0 &=& \left[Y_0^L(s_0) - Y_0^R(s_0)\right]_{rs} + {i\over 2} (YA^\dagger - AY^\dagger)_{\alpha r, \alpha s} \, . \label{jumpgaugecondition} \end{eqnarray} By analogy with the corresponding constraint on the variations of the spacetime fields, we will refer to these as the background gauge conditions. A basis for the tangent space is given by a set of vectors ${\cal Y}_a$ of the form \begin{eqnarray} Y_{a\mu}^{L,R}(s) &=& {\partial T_\mu^{L,R}(s;z) \over \partial z^a} + D_\mu \Lambda_a^{L,R} \, , \cr\cr Y_a &=& {\partial A \over \partial z^a} - i(\Lambda_a(s_0) \otimes I_2)A \, , \label{YisTplusgauge} \end{eqnarray} with the gauge action $\Lambda_a(s)$ chosen so that ${\cal Y}_a$ is in background gauge. The metric on the moduli space is then defined to be\footnote{The factor of $4\pi$ here is chosen to make the normalizations of the Nahm data metric and the BPS solution metric the same; it can be easily checked by noting the coefficient of the terms quadratic in the center-of-mass position.} \begin{equation} ds^2 = g_{ab}\, dz^a dz^b = 4 \pi \langle {\cal Y}_a, {\cal Y}_b \rangle \, dz^a dz^b \, .
\label{metricdef} \end{equation} \subsection{The metric for the $\bm {(M, [M], \dots, M)}$ solutions of SU($\bm {2M+2}$)} \label{Nahmmetric} As we have just seen, to calculate the moduli space metric one must first vary the Nahm data with respect to the coordinates, and then find a gauge action $\Lambda(s)$ that brings the resulting tangent vector into background gauge. The one part of this procedure that is not completely straightforward is the solution of the differential equation for $\Lambda(s)$ that is implied by Eqs.~(\ref{backgroundgauge}) and (\ref{YisTplusgauge}). However, if this has already been done for the SU($M+1$) Dancer solutions, the determination of the moduli space metric for the $(M, [M], \dots, M)$ solutions of SU($2M+2$) reduces to an algebraic problem. To see this, first note that the Dancer problems give us two sets of basis vectors, $Y^{DL}_{a\mu}$ and $Y^{DR}_{a\mu}$, that satisfy Eq.~(\ref{backgroundgauge}) on their respective domains. Each of these sets includes $M^2$ vectors corresponding to the global U($M$) gauge freedom. These are of the form $Y^{L}_{f\mu}=D_\mu\chi_f^L$ and $Y^{R}_{f\mu}=D_\mu\chi^R_f$ ($f=1,2,\dots, M^2$), where $\chi_f^L$ ($\chi_f^R$) vanishes at $s_L$ ($s_R$) and is nonzero and proportional to one of the U($M$) generators at $s_0$. Because the Dancer basis vectors satisfy Eq.~(\ref{backgroundgauge}), \begin{equation} D_\mu D_\mu \chi_f^L = D_\mu D_\mu \chi_f^R = 0 \, . \label{d2Lambda} \end{equation} Now suppose that the coordinates for the $(M, [M], \dots, M)$ solutions are chosen so that one subset consists of those originating with the left Dancer problem, a second subset consists of those from the right Dancer problem, and the remainder are associated only with the jump data.
The tangent vector corresponding to the coordinate $z^a$ can then be written in the form \begin{equation} {\cal Y}_a = \left\{ Y_{a\mu}^{DL} + D_\mu \Lambda^L_a, Y_{a\mu}^{DR} + D_\mu \Lambda^R_a, {\partial A \over \partial z^a} -i(\Lambda_a^0 \otimes I_2)A\right\} \, , \end{equation} where $Y_{a\mu}^{DL}$ and $Y_{a\mu}^{DR}$ are the vectors from the left and right Dancer problems.\footnote{Of course, with coordinates chosen as described above, at most one of these Dancer vectors will be nonzero for a given $z^a$.} Although these Dancer vectors already satisfy the background gauge condition on their respective domains, an additional gauge action may be needed to satisfy Eq.~(\ref{jumpgaugecondition}) at $s_0$. Its gauge function $\Lambda$ must vanish at $s_L$ and $s_R$ and, in order to maintain the background gauge condition on the two intervals, it must satisfy $D_\mu D_\mu \Lambda =0$ for $s \ne s_0$. It is therefore a linear combination of Dancer global gauge modes, with \begin{eqnarray} \Lambda^L_a &=& c^L_{af} \chi_f^L \, , \cr \Lambda^R_a &=& c^R_{af} \chi_f^R \, . \label{gaugemodeexpansion} \end{eqnarray} Thus, for each ${\cal Y}_a$ there are a total of $2M^2$ constants to be determined. Requiring continuity of the gauge action at the boundary, $\Lambda^0_a= \Lambda^L_a(s_0)= \Lambda^R_a(s_0)$, gives $M^2$ algebraic equations. The background gauge condition at the boundary becomes \begin{eqnarray} \left[ D_0\Lambda^L_a(s_0) - D_0\Lambda^R_a(s_0) + \Lambda^0_a T_4 + T_4 \Lambda^0_a \right]_{rs} &=& \left[ Y_{a0}^{DR}(s_0) - Y_{a0}^{DL}(s_0) \right]_{rs} \cr\cr && \qquad +{i \over 2} \left(A {\partial A^\dagger \over \partial z_a} - {\partial A \over \partial z_a} A^\dagger\right)_{\alpha r,\alpha s} \label{jumpcondition} \end{eqnarray} and gives $M^2$ more equations, thus determining the $c^L_{af}$ and $c^R_{af}$, and hence ${\cal Y}_a$. We can now evaluate the metric. 
Because the ${\cal Y}_a$ satisfy the background gauge conditions of Eqs.~(\ref{backgroundgauge}) and (\ref{jumpgaugecondition}) and, in addition, $D_\mu Y_{a\mu}^{DL} = D_\mu Y_{a\mu}^{DR} =0$, many of the terms involving the gauge actions can be eliminated by integrations by parts. With the aid of Eq.~(\ref{jumpcondition}), one eventually obtains \begin{eqnarray} g_{ab} &=& 4\pi \int_{s_L}^{s_0} ds \, {\rm Tr\,} Y_{a\mu}^{DL} Y_{b\mu}^{DL} + 4\pi \int_{s_0}^{s_R} ds \, {\rm Tr\,} Y_{a\mu}^{DR} Y_{b\mu}^{DR} + g_{ab}^0 \cr \cr &=& g_{ab}^{DL} + g_{ab}^{DR} + g_{ab}^0 \, , \label{metricform} \end{eqnarray} where $g_{ab}^{DL}$ and $g_{ab}^{DR}$ are the metric components from the corresponding Dancer solutions and \begin{eqnarray} g_{ab}^0 &=& 4\pi \, {\rm Tr\,} \left[Y_{a0}^{DL}(s_0) - Y_{a0}^{DR}(s_0)\right] \Lambda^0_b + 2\pi \, {\rm Tr\,} \left({\partial A\over \partial z^a} {\partial A^\dagger\over \partial z^b} + {\partial A\over \partial z^b} {\partial A^\dagger\over \partial z^a} \right) \cr &&\qquad + 2 \pi i \, {\rm Tr\,} \left[ {\partial A\over \partial z^a} A^\dagger (\Lambda_b^0 \otimes I_2) - (\Lambda_b^0 \otimes I_2) A {\partial A^\dagger\over \partial z^a} \right] \cr\cr &=& 2 \pi \, {\rm Tr\,} \left({\partial A\over \partial z^a} {\partial A^\dagger\over \partial z^b} + {\partial A\over \partial z^b} {\partial A^\dagger\over \partial z^a} \right) + 4\pi\, {\rm Tr\,} \left[D_0\Lambda^R_a(s_0) - D_0\Lambda^L_a(s_0)\right] \Lambda_b^0 \cr &&\qquad - 4\pi \,{\rm Tr\,} T_4 \left(\Lambda_a^0 \Lambda_b^0 + \Lambda_b^0 \Lambda_a^0 \right) \label{jumpmetric} \end{eqnarray} contains the entire contribution from the jump data.\footnote{Although the middle term in the final expression for $g_{ab}^0$ appears not to be symmetric under interchange of $a$ and $b$, it actually is. 
This can be shown by an integration by parts and making use of the facts that the $\Lambda_c$ obey $D_\mu D_\mu \Lambda_c =0$ and vanish at $s_L$ and $s_R$.} These two equations are the main result of this section. \subsection{Large SU($2M$) clouds} \label{largeCloudSec} Before focusing on the specific examples of $M=1$ and $M=2$, it is worth commenting briefly on the case where the length scales in $T_\mu^{DL}$ and $T_\mu^{DR}$ are smaller than all those entering $T_4$ by a factor of $\epsilon \ll 1$. This corresponds to the situation where the SU($2M$) clouds are large compared to both the Dancer clouds and the separations between the massive monopoles. In this case Eq.~(\ref{Adecomp}) can be written as \begin{equation} A = (T_4^{1/2}\otimes I_2)V +\delta A \, , \end{equation} where $\delta A$, which contains all of the information about the Dancer data, is suppressed by a factor of $\epsilon$. Therefore, if $z_a$ is one of the Dancer coordinates, $\partial A/\partial z_a = O(\epsilon)$. Furthermore, by noting the $\Lambda_a^0 T_4$ terms on the left-hand side of Eq.~(\ref{jumpcondition}), we see that the $\Lambda_a$ corresponding to these coordinates are also suppressed by a factor of $\epsilon$. It follows from these facts that (1) if either $a$ or $b$ refers to a Dancer coordinate, then $g_{ab}^0$ is suppressed relative to $g_{ab}^{DL}$ or $g_{ab}^{DR}$ and (2) if $a$ and $b$ both refer to jump coordinates, then all of the dependence of $g_{ab}^0$ on Dancer parameters is through subleading terms. Hence, to leading order in $\epsilon$ the moduli space Lagrangian separates into two parts, one depending only on the Dancer parameters and one depending only on the jump parameters. In other words, large SU($2M$) clouds are effectively decoupled from both the massive monopoles and the Dancer clouds. \section{(1,[1],1) solutions in SU(4)} \label{oneoneone} We will first consider the (1,[1],1) solutions for a theory with SU(4) broken to U(1)$\times$SU(2)$\times$U(1). 
This will not only serve as an illustration of our method but, because the metric on the space of BPS solutions is already known~\cite{Lee:1996vz,Chen:2001ge,Lee:1996kz}, it will also provide one more example supporting the conjecture that the moduli spaces of Nahm data and of BPS solutions are isometric. The ``Dancer'' solutions in this case are just embeddings of the unit SU(2) monopole, each with a four-dimensional moduli space. The Nahm data are numbers rather than matrices and are given (with a standard choice of the gauge action) on the left interval by $T^L_0 =0$, $T^L_j = - X^L_j$, where the $X^L_j$ are the coordinates of the monopole center. Differentiating these with respect to the $X^L_j$ gives three Dancer tangent vectors \begin{equation} \left[Y_{X^L_j}^{DL}\right]_\mu = - \delta_{\mu j} \end{equation} that satisfy the background gauge condition without needing any compensating gauge action. The fourth tangent vector corresponds to a U(1) phase, and so must be of the form \begin{equation} \left[Y_{\rm U(1)}^{DL}\right]_\mu = D_\mu \chi^L = \delta_{\mu 0} {d\chi^L \over ds} \, . \end{equation} In order that this be in background gauge, we need that $d^2 \chi^L/ds^2 =0$. Fixing the normalization by requiring that $\chi^L(s_0) =1$ and $\chi^L(s_L)=0$, we find that \begin{equation} \chi^L(s) = {4\pi(s-s_L) \over M_L} \, , \qquad \left[Y_{\rm U(1)}^{DL}\right]_\mu = \delta_{\mu 0} \left( {4\pi \over M_L} \right) \, . \end{equation} The Nahm data and tangent vectors for the right Dancer data are completely analogous. They are obtained simply by replacing $L$ by $R$ in the above equations, except for a sign change that yields \begin{equation} \chi^R(s) = -{4\pi(s-s_R) \over M_R} \, , \qquad \left[Y_{\rm U(1)}^{DR}\right]_\mu = - \delta_{\mu 0} \left( {4\pi \over M_R} \right) \, .
\end{equation} In the absence of jump data, these two sets of four vectors would give a moduli space metric \begin{equation} ds_L^2 + ds_R^2 = M_L \,d{\bf X}_L^2 + M_R \,d{\bf X}_R^2 + {(4\pi)^2\over M_L} d\chi_L^2 + {(4\pi)^2\over M_R} d\chi_R^2 \, . \end{equation} In the standard fashion, we can rewrite the positions in terms of center-of-mass and relative positions $\bf X_{\rm CM}$ and $\bf R$. We can also replace $\chi_L$ and $\chi_R$ by a global U(1) phase and a relative U(1) phase. The former, given by $\xi = \chi_L + \chi_R$, corresponds to a simultaneous phase rotation of the two monopoles, and is described by a tangent vector $\{\left[Y^{DL}_\xi\right]_\mu, \left[Y^{DR}_\xi\right]_\mu\} = 4\pi\delta_{\mu 0}/(M_L+M_R) \{1,1\} = 1/(M_L+M_R) \, D_\mu \{M_L\,\chi^L, -M_R\,\chi^R\}$. Note that although $\left[Y^{DL}_\xi\right]_\mu$ and $\left[Y^{DR}_\xi\right]_\mu$ correspond to pure gauge actions on their separate intervals, the combined vector is not a gauge action because the corresponding left and right gauge functions are not equal at $s=s_0$. The relative U(1) phase $\psi= (M_L\chi_R-M_R\chi_L)/(M_L+M_R) $ corresponds to an orthogonal combination of vectors and, in the context of just the Dancer data, is a pure gauge action. We can therefore choose the gauge so that $\left[Y^{DL}_\psi\right]_\mu$ and $\left[Y^{DR}_{\psi}\right]_\mu$ both vanish and the $g_{a\psi}$ are given completely by the jump data term $g^0_{a\psi}$. We now have to consider the contributions from the jump data. We start by defining $T_4 = b$; examination of the spacetime solutions shows that $b$ measures the size of the non-Abelian cloud.
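The coefficient $(4\pi)^2/M_L$ of the U(1) term above follows directly from $g = 4\pi\int ds\, Y_0^2$ with $Y_0 = d\chi^L/ds$ constant on the interval. A numerical sketch (the interval endpoints are arbitrary placeholder values, not data from the text):

```python
import numpy as np

# Placeholder interval endpoints; M_L = 4*pi*(s_0 - s_L) is the monopole mass
s_L, s_0 = 0.0, 0.7
M_L = 4 * np.pi * (s_0 - s_L)

s = np.linspace(s_L, s_0, 200001)
chi = 4 * np.pi * (s - s_L) / M_L   # the gauge function chi^L(s)

# Normalization: chi^L vanishes at s_L and equals 1 at s_0
assert np.isclose(chi[0], 0.0) and np.isclose(chi[-1], 1.0)

# The U(1) zero mode Y_0 = d(chi^L)/ds is the constant 4*pi/M_L, so the
# metric coefficient 4*pi * int ds Y_0^2 reduces to (4*pi)^2 / M_L.
Y0 = np.gradient(chi, s)
ds_grid = s[1] - s[0]
g_U1 = 4 * np.pi * np.sum(0.5 * (Y0[1:]**2 + Y0[:-1]**2)) * ds_grid
assert np.isclose(g_U1, (4 * np.pi)**2 / M_L, rtol=1e-6)
```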
We then have \begin{equation} K = b I_2 + {\bf R}\cdot {{\mbox{\boldmath $\sigma$}}} = U K_0 U^{-1} \, , \end{equation} where $K_0 = {\rm diag}\, (b+R, b-R)$, and can write the general solution for the jump data as \begin{equation} A = U K_0^{1/2} W e^{i\psi} \, , \end{equation} where $W$, like $U$, is an SU(2) matrix.\footnote{In the notation of Eq.~(\ref{Adecomp}), $W=U^{-1}V$. We have written $A$ in this form to facilitate comparison with the results in Ref.~\cite{Chen:2001ge}.} Neither the center-of-mass position nor the global U(1) phase $\xi$ appears in $A$, so the tangent vectors corresponding to these variables have no jump component and are specified completely by the $Y_\mu^{DL}$ and $Y_\mu^{DR}$ inherited from the Dancer problems. It is easily verified that these vectors are in background gauge and that they are orthogonal to the vectors for the relative coordinates. The calculation of the remaining metric terms is simplified by noting that, because the spatial rotations represented by $U$ and the global U(2) symmetry represented by $W$ and $\psi$ are isometries, we can calculate the metric from the tangent vectors at a point with $U=W=I$, $\psi=0$. At this point the tangent vector for the intermonopole separation $R$ gets a jump data contribution \begin{equation} {\partial A\over \partial R} = {1 \over 2} \,{\rm diag}\, \left({1\over \sqrt{b+R}}, -{1\over \sqrt{b-R}} \right) \end{equation} that combines with the contributions from the Dancer vectors to give a tangent vector ${\cal Y}_R$ that is already in background gauge. For the cloud size parameter $b$ we have \begin{equation} {\partial A\over \partial b} = {1 \over 2} \,{\rm diag}\, \left({1\over \sqrt{b+R}}, {1\over \sqrt{b-R}} \right) \, . \end{equation} Because $b$ does not enter the Dancer solutions, ${\cal Y}_b$ has no $Y_{\mu}^{DL}$ or $Y_{\mu}^{DR}$ contribution. It, too, satisfies the background gauge conditions without the need for a compensating gauge action.
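The jump-data tangent vectors just quoted follow from differentiating $A = K_0^{1/2}$ at the point $U=W=I$, $\psi=0$. A short symbolic check, offered only as an aside (the matrices below reproduce the diagonal form used in the text):

```python
import sympy as sp

b, R = sp.symbols('b R', positive=True)

# At U = W = I, psi = 0 the jump data reduce to A = K_0^{1/2}, with
# K_0 = diag(b + R, b - R):
A = sp.diag(sp.sqrt(b + R), sp.sqrt(b - R))

# Tangent-vector components quoted in the text
dA_dR = sp.diag(sp.Rational(1, 2) / sp.sqrt(b + R),
                -sp.Rational(1, 2) / sp.sqrt(b - R))
dA_db = sp.diag(sp.Rational(1, 2) / sp.sqrt(b + R),
                sp.Rational(1, 2) / sp.sqrt(b - R))

assert (A.diff(R) - dA_dR).applyfunc(sp.simplify) == sp.zeros(2, 2)
assert (A.diff(b) - dA_db).applyfunc(sp.simplify) == sp.zeros(2, 2)

# Here A and its derivatives are diagonal, hence commuting, so the
# background gauge condition needs no compensating gauge action:
assert (A.diff(b) * A - A * A.diff(b)).applyfunc(sp.simplify) == sp.zeros(2, 2)
```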
To obtain the remaining tangent vectors, which correspond to rotations and U(2) transformations, we first write an infinitesimal variation of $A$ (with $K_0$ held fixed) as \begin{equation} dA = {i\over 2} \sum_{j=1}^2 \sigma_j K_0^{1/2} \, d\alpha_j + {i\over 2} \sum_{j=1}^3 K_0^{1/2} \sigma_j \, d\beta_j + {i\over 2} K_0^{1/2} \, d\psi \, , \end{equation} where the $d\alpha_j$ and $d\beta_j$ are the invariant one-forms for the rotational SO(3) and gauge SU(2) symmetries.\footnote{We have not included an $\alpha_3$ term because its effect can be absorbed by a redefinition of $\beta_3$; this term would correspond to the Euler angle that leaves $\bf R$ invariant.} The $d\alpha_1$ and $d\alpha_2$ terms combine with contributions, due to rotations of $\bf R$, from the left and right intervals to give background gauge tangent vectors. The $\beta_1$ and $\beta_2$ vectors have no contributions from these intervals, but are also in background gauge. However, the $\beta_3$ and $\psi$ vectors do not satisfy Eq.~(\ref{jumpgaugecondition}) and so must be supplemented by compensating gauge actions. As explained in Sec.~\ref{Nahmmetric}, these gauge actions must have gauge functions of the form $\Lambda_a^L = c^L_a \chi^L$, $\Lambda_a^R = c^R_a \chi^R$. Continuity of $\Lambda$ at $s_0$ implies that $c^L_a=c^R_a\equiv c_a$. Using Eq.~(\ref{jumpcondition}), we then find that \begin{equation} c_{\beta_3} = {R\mu \over 4\pi +2b\mu} \, , \qquad c_{\psi} = {b\mu \over 4\pi +2b\mu} \, , \end{equation} where $\mu = M_L M_R /(M_L+M_R)$ is the reduced mass.
With the $\Lambda_a$ thus determined, we can use Eqs.~(\ref{metricform}) and (\ref{jumpmetric}) to show that the metric for the eight-dimensional relative moduli space [i.e., with the center-of-mass motion and overall U(1) phase factored out] is \begin{eqnarray} ds^2 &=& \left[ \mu + {2\pi b \over (b^2 - R^2)} \right] dR^2 + {2\pi b \over (b^2 - R^2)} db^2 - {4\pi R \over b^2 - R^2}\, db \, dR \cr\cr &&\quad + \left( \mu R^2 + {2\pi b}\right) (d\alpha_1^2 + d\alpha_2^2) + {2\pi b} (d\beta_1^2 + d\beta_2^2) + 4\pi \sqrt{b^2-R^2} (d\alpha_1 \, d\beta_1 + d\alpha_2\, d\beta_2) \cr\cr &&\quad + {4\pi^2 b \over 2\pi + b\mu }d\psi^2 + \left[{2\pi b} - {2\pi \mu R^2 \over (2\pi + b\mu) }\right] d\beta_3^2 + {8\pi^2 R\over (2\pi + b\mu) }\, d\psi \, d\beta_3 \, . \end{eqnarray} \pagebreak This agrees with the metric on the moduli space of BPS solutions that was previously obtained.\footnote{This is verified most easily by comparing with the form given in Ref.~\cite{Chen:2001ge}. For the angular and phase parts of the metric our expression, written in terms of angular velocities, is related to the one in Eq.~(3.18) of that paper, given in terms of angular momenta, by a Legendre transformation.} This provides another example where the moduli spaces of Nahm data and of BPS solutions are isometric, lending further support to the conjecture that this is true in general. For later reference, we note that when the angular momenta and charges all vanish, the system reduces to one governed by the Lagrangian \begin{equation} L = {\pi\over 2} {(\dot b + \dot R)^2\over (b+R)} + {\pi\over 2} {(\dot b - \dot R)^2\over (b-R)} + {\mu \over 2} \dot R^2 \, .
\label{su4Lag} \end{equation} \section{(2,[2],[2],[2],2) solutions in SU(6)} \label{twotwotwo} \subsection{Nahm data} The Nahm data are $2 \times 2$ matrix functions $T_\mu^L(s)$ and $T_\mu^R(s)$ on the left and right intervals, respectively, plus jump data that are obtained from $T_\mu^L(s_0)$, $T_\mu^R(s_0)$, and \begin{equation} T_4 = p I_2 + {\bf q} \cdot {\mbox{\boldmath $\tau$}} \label{pqdef} \end{equation} by using Eqs.~(\ref{KmatrixDef}) and (\ref{Adecomp}). The parameters in $T_4$ determine the properties of the two SU(4) clouds. Examination of the spacetime solutions~\cite{Houghton:2002bz} shows that, roughly speaking, $p+q$ and $p-q$ (with $q=|{\bf q}|$) determine the sizes of the clouds, while the direction of $\bf q$ specifies an orientation in the unbroken SU(4). The $T_\mu^L(s)$ are themselves the Nahm data for an SU(3) (2,[1]) Dancer solution. By an appropriate gauge action one can set $T_0^L=0$ and then write~\cite{Dancer:1992kn} \begin{equation} T_i^L = {1\over 2} \sum_j A^L_{ij} f_j^L(s) \hat\tau_j^L + R_i^L {\rm I}_2 \, , \label{dancermetricform} \end{equation} where $A_{ij}^L$ is an orthogonal matrix and the three $\hat\tau_j^L = U_L \tau_j U_L^{-1}$ are a rotated set of Pauli matrices. The $f_j^L(s)$ obey \begin{equation} {df_1^L \over ds} = f_2^L f_3^L \label{topeq} \end{equation} and its two cyclic permutations. If we adopt the convention that $f_1^2 \le f_2^2 \le f_3^2$, they are given in terms of Jacobi elliptic functions by \begin{eqnarray} f_1^L(s) &=& - {D_L \cn_{\kappa_L} [D_L(s-s_L)] \over \sn_{\kappa_L} [D_L(s-s_L)] } \, , \cr \cr f_2^L(s) &=& - {D_L \dn_{\kappa_L} [D_L(s-s_L)] \over \sn_{\kappa_L} [D_L(s-s_L)] } \, , \cr \cr f_3^L(s) &=& - {D_L \over \sn_{\kappa_L} [D_L(s-s_L)] } \, .
\label{topfunctions} \end{eqnarray} The requirement that $f_j^L(s)$ only have a pole at $s_L$ imposes the conditions $0\le \kappa_L \le 1$ and $0 \le D_L \le 2K(\kappa_L)/(s_0-s_L)$, where $K(\kappa)$ is the complete elliptic integral of the first kind. The $T_\mu^R(s)$ are similar, but with $D_L$ and $\kappa_L$ replaced by $D_R$ and $\kappa_R$. \begin{figure} \begin{center} \leavevmode \epsfysize=3in \epsffile{fig1-dancer.eps} \end{center} \caption{A geodesically complete submanifold illustrating the SU(3) Dancer solutions. The long straight lines correspond to the axially symmetric hyperbolic solutions, with two widely separated monopoles, while the short straight lines correspond to the axially symmetric trigonometric solutions. The limiting curved boundaries, which are not part of the manifold, correspond to SU(2) two-monopole solutions.} \label{dancerspace} \end{figure} The left and right Nahm data each contain eleven parameters: three center-of-mass variables $R_j$, the three Euler angles in $A_{ij}$ that specify the spatial orientation, the three angles needed to define the $\hat\tau_j$, and the elliptic function parameters $D$ and $\kappa$. The significance of the latter two is clarified by referring to the plot in Fig.~\ref{dancerspace}. The change of variables~\cite{Dancer:1992hf} \begin{eqnarray} x &=& (2 - \kappa^2) D^2 \, , \cr y &=& -\sqrt{3}\, \kappa^2 D^2 \end{eqnarray} maps the allowed range of $D$ and $\kappa$ onto the lower right sextant of the plot (including the straight boundaries, but excluding the curved outer boundary, which is geodesically infinitely far from any point in the interior). By adjoining five other copies (corresponding to the other possible orderings of the $f_j^2$), one obtains a geodesically complete two-dimensional manifold. Points far out on the long arms of the figure correspond to solutions with two well-separated massive monopoles, with the distance between the two approximately equal to $D$.
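As a numerical aside, one can verify that the profile functions of Eq.~(\ref{topfunctions}) satisfy Eq.~(\ref{topeq}) and its two cyclic permutations. The parameter values below are arbitrary placeholders, and SciPy's Jacobi elliptic routine takes the parameter $m=\kappa^2$:

```python
import numpy as np
from scipy.special import ellipj

# Placeholder values for s_L, D_L, and the modulus kappa_L
s_L, D_L, kappa = 0.0, 1.3, 0.6
m = kappa**2   # scipy's ellipj uses the parameter m = kappa^2

def f(s):
    """The three profile functions f_j^L(s) of Eq. (topfunctions)."""
    sn, cn, dn, _ = ellipj(D_L * (s - s_L), m)
    return (-D_L * cn / sn, -D_L * dn / sn, -D_L / sn)

# Check df1/ds = f2*f3 and cyclic permutations by central differences
s, h = 0.4, 1e-6
f1p = (f(s + h)[0] - f(s - h)[0]) / (2 * h)
f2p = (f(s + h)[1] - f(s - h)[1]) / (2 * h)
f3p = (f(s + h)[2] - f(s - h)[2]) / (2 * h)
f1, f2, f3 = f(s)
assert np.isclose(f1p, f2 * f3, rtol=1e-5)
assert np.isclose(f2p, f3 * f1, rtol=1e-5)
assert np.isclose(f3p, f1 * f2, rtol=1e-5)
```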
The straight lines down the centers of the arms, on which $\kappa =1$, correspond to minimal Dancer cloud size, while the limiting curve corresponds to embeddings of SU(2) two-monopole solutions that can be thought of as having infinite Dancer clouds. The central point, where $D=0$ and $\kappa$ is undefined, corresponds to a solution with coincident massive monopoles and a minimal size cloud. On the short straight lines emanating from this point $\kappa = 0$. Points on these lines correspond to solutions with coincident massive monopoles and clouds varying from minimal to infinite size~\cite{Dancer:1997zx,Irwin:1997ew}. For $\kappa$ equal to 0 or 1, the elliptic functions reduce to trigonometric or hyperbolic functions, respectively~\cite{Dancer:1992kj}. Two of the $f_j$ are then equal and the spacetime solution has an axial symmetry. When $D=0$, all three of the $f_j$ are equal and the solution is spherically symmetric. A straight trajectory passing from a $\kappa =0$ line through the central point and out along the opposite $\kappa = 1$ line is a geodesic of the Dancer metric. \subsection{Reduction to cylindrical symmetry with vanishing charges} \label{cylindrical} As noted above, the left and right sets of Dancer data each contain eleven parameters. In addition, there is an overall U(1) phase associated with each set. These, plus the four parameters from the elements of $T_4$ given in Eq.~(\ref{pqdef}) and the sixteen moduli arising from the U(4) matrix $V$ would seem to give a total of 44 moduli. This cannot be correct, because a solution with ten monopoles should lie on a 40-dimensional moduli space. The discrepancy is resolved by noting that there is a U(2) subgroup of the U(4) whose effect is gauge equivalent to that obtained by simultaneously rotating the U(1) phases and SU(2) orientations of the two Dancer solutions and the SU(2) orientation of the vector $\bf q$. 
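The parameter counting just described is simple bookkeeping; it can be summarized as follows (all of the numbers are taken from the text, with $M=2$):

```python
# Moduli count for the (2,[2],[2],[2],2) solutions of SU(6), i.e. M = 2
M = 2
dancer = 11            # R_j (3) + Euler angles (3) + tau-hat angles (3) + D, kappa
u1_phases = 2          # one overall U(1) phase for each Dancer solution
t4 = 4                 # p and the three components of q in T_4
v_moduli = (2 * M)**2  # the U(2M) = U(4) matrix V

total = 2 * dancer + u1_phases + t4 + v_moduli
assert total == 44     # the naive count quoted in the text

# A U(2) subgroup of U(4) is gauge equivalent to rotations already counted
# among the Dancer and T_4 parameters, removing dim U(2) = 4 moduli:
redundant = 2**2
assert total - redundant == 40

# This matches 4 moduli for each of the ten fundamental monopoles carried
# by the (2,[2],[2],[2],2) solutions:
assert 4 * (2 + 2 + 2 + 2 + 2) == 40
```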
Because the moduli space metric for the Dancer data is already known, the methods of Sec.~\ref{Nahmsec} can be applied to obtain the metric for the full 40-dimensional moduli space. However, the result would be rather unwieldy for exploring the nature of the cloud dynamics. We will therefore reduce the problem to a more manageable one by restricting ourselves to a considerably smaller, but geodesically complete, submanifold. A geodesically complete submanifold can be obtained by restricting to the maximal subspace left invariant by some isometry of the full manifold. In particular, we will require that the solutions be axially symmetric about the $z$-axis. This means that the Nahm data must be invariant under the combination of a rotational transformation of the form given in Eq.~(\ref{rotANDtrans}) and an appropriately chosen gauge action. In each set of Dancer data two of the $f_j$ must then be equal (which is only possible if $\kappa=0$ or 1), which implies that the solution acts as a symmetric top in the SU(2) space. Furthermore, the symmetry axes of the two Dancer solutions must be aligned with each other and with $\bf q$. More specifically, the $\hat \tau_j^L$ and the $\hat \tau_j^R$ can differ only by a U(1) rotation. Making use of the redundant U(2) freedom noted above, we can take the U(1) rotation to be about the $\tau_3$ direction and fix the $\hat \tau_j^L$ and $\hat \tau_j^R$ to be \begin{eqnarray} \hat \tau_j^L &=& \left\{e^{-i\psi \tau_3} \tau_1 e^{i\psi \tau_3}, \, e^{-i\psi \tau_3} \tau_2 e^{i\psi \tau_3}, \, \tau_3 \right\} \, , \cr \hat \tau_j^R &=& \left\{e^{i\psi \tau_3} \tau_1 e^{-i\psi \tau_3}, \, e^{i\psi \tau_3} \tau_2 e^{-i\psi \tau_3}, \, \tau_3 \right\} \, . \end{eqnarray} Although rotation of the relative phase $\psi$ is not an isometry, there is a $Z_2$ symmetry that reverses its sign.
We can require invariance under this symmetry as well, and set $\psi=0$.\footnote{Invariance under this $Z_2$ symmetry could also be achieved by setting $\psi=-\pi/2$; we will not explore this possibility here.} If we now set $T_0^L = T_0^R= 0$ and write $T_4 = p +q\tau_3$, the Nahm matrices on the left and right intervals then become \begin{eqnarray} T_j^L(s) &=& \left[{1\over 2}\,g_1^L(s) \tau_1 , \, {1\over 2}\,g_1^L(s) \tau_2 , \, {1\over 2}\,g_3^L(s) \tau_3 + Z_L {\rm I}_2\right] \, , \cr T_j^R(s) &=& \left[{1\over 2}\,g_1^R(s) \tau_1 , \, {1\over 2}\,g_1^R(s) \tau_2 , \, {1\over 2}\,g_3^R(s) \tau_3 + Z_R {\rm I}_2\right] \, , \label{axialT} \end{eqnarray} with \begin{eqnarray} g_1^L(s) &=& \cases{ -D_L \csc[D_L(s-s_L)] \, ,& $\kappa_L =0$ \cr -D_L \,\mbox{cosech}[D_L(s-s_L)] \, , & $\kappa_L =1$ } \cr \cr g_3^L(s) &=& \cases{ -D_L \cot[D_L(s-s_L)] \, ,& $\kappa_L =0$ \cr -D_L \coth[D_L(s-s_L)] \, , & $\kappa_L =1$ } \cr\cr g_1^R(s) &=& \cases{ D_R \csc[D_R(s_R-s)] \, ,& $\kappa_R =0$ \cr D_R \,\mbox{cosech}[D_R(s_R-s)] \, , & $\kappa_R =1$ } \cr \cr g_3^R(s) &=& \cases{ D_R \cot[D_R(s_R-s)] \, ,& $\kappa_R =0$ \cr D_R \coth[D_R(s_R-s)] \, , & $\kappa_R =1$ . } \label{axialfunctions} \end{eqnarray} We can further simplify matters by requiring that the conserved charges from the unbroken U(1)$\times$SU(4)$\times$U(1) symmetry all vanish. One's first thought might be that the phases associated with these vanishing charges could be simply dropped from the Lagrangian. This is not so, because there are couplings between the angular velocities $\omega^i$ of these phases and the six non-phase moduli ($D_L$, $D_R$, $p$, $q$, $Z_L$, and $Z_R$) that remain after our symmetry constraints are imposed.
If we denote the latter moduli by $y^a$, the moduli space Lagrangian can be written as \begin{equation} L_{\rm MS} = {1\over 2} C_{ab} \,\dot y^a \dot y^b + B_{ai}\,\dot y^a \omega^i + {1\over 2} E_{ij}\,\omega^i \omega^j \, , \label{LMS} \end{equation} where the metric coefficients $C_{ab}$, $B_{ai}$, and $E_{ij}$ depend only on the $y^a$. By means of a Legendre transformation we can convert this to an effective Lagrangian in which the dependence on the $\omega^i$ is replaced by a dependence on the conserved charges \begin{equation} Q_j = E_{ij}\omega^i + B_{ja}\dot y^a \, . \end{equation} If all of the $Q_j$ vanish, this effective Lagrangian reduces to \begin{equation} L_{\rm MS,\,eff} = {1\over 2}\left[ C_{ab} - B_{ai} E^{-1}_{ij} B_{jb} \right] \dot y^a \dot y^b \, . \label{LMSeffDef} \end{equation} As we did for the (1,[1],1) example, we will take advantage of the isometries of the moduli space and calculate the metric at the point $V=I$. We start our calculation by displaying the Nahm data. The $T^L_\mu(s)$ and $T^R_\mu(s)$, as well as $T_4$, were given above. With $V=I$, $A=K^{1/2}$, where \begin{equation} K = \left(\matrix{p+q+C+R & 0 & 0 & 0 \cr \cr 0 & p-q-C+R & 2B & 0 \cr \cr 0 & 2B & p+q-C-R & 0 \cr \cr 0 & 0 & 0 & p-q+C-R } \right) \label{Kdisplay} \end{equation} with \begin{eqnarray} B &\equiv& {1\over 2}\left[ g_1^L(s_0) - g_1^R(s_0)\right] \, , \cr\cr C &\equiv& {1\over 2}\left[g_3^L(s_0) - g_3^R(s_0)\right] \, , \cr\cr R &\equiv& Z_L - Z_R \, . \end{eqnarray} \pagebreak Note that $K$ has been written so that the Greek indices in Eq.~(\ref{KmatrixDef}) label $2\times 2$ blocks; the elements within each block are labeled by the indices $r$ and $s$. 
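The block structure of $K$ makes its spectrum easy to read off: the first and last diagonal entries are themselves eigenvalues, while the middle $2\times 2$ block contributes $p - C \pm \sqrt{4B^2 + (q-R)^2}$; these are the eigenvalues used below. A quick numerical confirmation (a sketch, assuming Python with numpy; the sample moduli values are arbitrary, chosen only so that $K$ is positive definite):

```python
import numpy as np

# Sample moduli values (hypothetical)
p, q, B, C, R = 5.0, 0.7, 0.4, 1.1, 0.3

K = np.array([[p + q + C + R, 0.0,           0.0,           0.0],
              [0.0,           p - q - C + R, 2 * B,         0.0],
              [0.0,           2 * B,         p + q - C - R, 0.0],
              [0.0,           0.0,           0.0,           p - q + C - R]])

# Closed-form eigenvalues: the outer diagonal entries plus the pair
# p - C +/- sqrt(4B^2 + (q-R)^2) from the middle 2x2 block
lam = sorted([p + q + R + C,
              p - q - R + C,
              p - C + np.sqrt(4 * B**2 + (q - R)**2),
              p - C - np.sqrt(4 * B**2 + (q - R)**2)])
assert np.allclose(np.linalg.eigvalsh(K), lam)
```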
Given this Nahm data, the calculation of $L_{\rm MS,\,eff}$ can be organized as follows: 1) {\bf Calculate the derivatives of the Nahm data with respect to the $\bm{y^a}$.} On the left and right intervals the only nonvanishing derivatives are those of the $T_\mu$ with respect to the corresponding $D$ and $Z$, but $A$ has nonzero derivatives with respect to all of the $y^a$. The calculation of these is somewhat involved, and so we relegate it to the Appendix. The explicit forms of the results are actually not needed until the final step, step 6. 2) {\bf Determine whether the tangent vectors obtained in step 1 require any additional gauge actions to put them into background gauge.} It is easy to see that the derivatives of the $T_\mu$ on the left and right intervals obey Eq.~(\ref{backgroundgauge}). The pieces arising from the data at $s_0$ require a bit more care. With $V$ taken to be the identity, $A$ is Hermitian. Equation~(\ref{jumpgaugecondition}) then implies that a compensating gauge action is only needed if \begin{equation} \left[ {\partial A \over \partial y^a} , A \right]_{\alpha r,\alpha s} \ne 0 \end{equation} for any coordinate $y^a$. To see that this quantity always vanishes, first note that both $A$ and $\partial A/\partial y^a$ have the same block diagonal form as $K$, and that a nonvanishing commutator can only arise from the middle $2\times 2$ block. Within this block, the matrices are all linear combinations of the identity and the Pauli matrices $\rho_x$ and $\rho_z$. Any commutator must then be proportional to $\rho_y$, and thus would not contribute after the trace over $\alpha$ was taken.\footnote{This cancellation is a consequence of the axial symmetry, because otherwise there is also a $\rho_y$ contribution to $K$.} 3) {\bf Determine which $\bm{B_{ja}}$ are nonvanishing.} Because the tangent vectors for the $y^a$ do not require compensating gauge actions, $B_{ja}$ is given just by the first term on the last line of Eq.~(\ref{jumpmetric}).
This gives \begin{equation} B_{ja} = -{2\pi i}\, {\rm Tr\,} \left( \left[ {\partial A \over \partial y^a} , A \right] t_j \right) \, , \end{equation} where $t_j$ is the Hermitian generator corresponding to the $j$th phase. From the remarks of the previous paragraph, we see that we can choose the $t_j$ so that the only nonzero $B_{ja}$ come from the generator that has a $\rho_y$ in the middle $2\times 2$ block and zeros elsewhere; we label this generator $t_2$. 4) {\bf Show that the tangent vector corresponding to the U(4) action generated by $\bm{t_2}$ does not need a compensating gauge action.} Referring to Eq.~(\ref{jumpgaugecondition}), we see that this is equivalent to showing that \begin{equation} 0 = \left(A t_2 A \right)_{\alpha r, \alpha s} \end{equation} for all values of $r$ and $s$. It is easy to verify that this follows from the symmetric block diagonal form of $A$. 5) {\bf Calculate $\bm{E^{-1}_{22}}$. } Because the $B_{ja}$ vanish if $j\ne 2$, we only need this one element of the matrix $E^{-1}$. Using the fact that the $t_2$ tangent vector has no compensating gauge action, Eq.~(\ref{jumpmetric}) gives \begin{equation} E_{2j} = 2\pi \, {\rm Tr\,} ( A\{t_2,t_j \} A ) = 2\pi \, {\rm Tr\,} (K\{t_2,t_j \}) \, . \end{equation} This vanishes unless $j=2$, implying that \begin{equation} E^{-1}_{22} = \left( E_{22} \right)^{-1} = {1 \over 8\pi (p-C) } \, . \label{Etwotwo} \end{equation} 6) {\bf Evaluate the $\bm{C_{ab}}$ and the $\bm{B_{2a}}$ and substitute the results into Eq.~(\ref{LMSeffDef}) to obtain $\bm{ L_{\rm MS, eff}}$.} The details of this are given in the Appendix. 
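The normalization of the generator matters in step 5. Taking $t_2$ to be the matrix with a full Pauli $\rho_y$ in the middle $2\times 2$ block and zeros elsewhere (an assumed normalization, chosen because it reproduces Eq.~(\ref{Etwotwo})), the result can be confirmed numerically; a sketch, assuming Python with numpy:

```python
import numpy as np

# Sample moduli values (hypothetical)
p, q, B, C, R = 5.0, 0.7, 0.4, 1.1, 0.3
K = np.array([[p + q + C + R, 0.0,           0.0,           0.0],
              [0.0,           p - q - C + R, 2 * B,         0.0],
              [0.0,           2 * B,         p + q - C - R, 0.0],
              [0.0,           0.0,           0.0,           p - q + C - R]])

# Assumed normalization: t_2 carries a Pauli rho_y in the middle 2x2 block
t2 = np.zeros((4, 4), dtype=complex)
t2[1, 2], t2[2, 1] = -1j, 1j

# E_22 = 2*pi*Tr(K {t_2, t_2}) = 2*pi*Tr(K * 2 * t2 @ t2)
E22 = (2 * np.pi * np.trace(K @ (2 * t2 @ t2))).real
assert np.isclose(E22, 8 * np.pi * (p - C))  # so E^{-1}_{22} = 1/(8*pi*(p - C))
```

Since $t_2^2$ is the identity on the middle block, the trace picks out exactly the combination $2p - 2C$ of the middle-block diagonal entries.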
Instead of writing the result directly in terms of the $y^a$, it is more convenient to express it in terms of the four eigenvalues of $K$, \begin{eqnarray} \lambda_1 &=& p + q + R + C \, , \cr \lambda_2 &=& p - q - R + C \, , \cr \lambda_+ &=& p - C + \sqrt{4B^2 +(q-R)^2} \, , \cr \lambda_- &=& p - C -\sqrt{4B^2 +(q-R)^2} \, , \end{eqnarray} and the variable \begin{equation} \theta = \tan^{-1}\left({2B \over R-q} \right) \, . \end{equation} We can then write \begin{equation} L_{\rm MS, eff} = M_L\, \dot Z_L^2 + M_R \, \dot Z_R^2 + {1 \over 2}I_{DD}^L \, \dot D_L^2 + {1 \over 2} I_{DD}^R \, \dot D_R^2 + {\pi \over 2} \sum_\sigma { \dot\lambda_\sigma^2 \over \lambda_\sigma} + {\pi \over 2} {\left(\lambda_+ - \lambda_- \right)^2 \over \left(\lambda_+ + \lambda_- \right)} \dot\theta^2 \, , \label{twotwotwoLag} \end{equation} where $I_{DD}$ is the function given in Eq.~(\ref{IDDdef}). (Of course, when obtaining the equations of motion from this Lagrangian one must remember that the $\lambda_\sigma$ and $\theta$ are not independent variables.) \subsection{The large-mass limit} Considerable simplification can be achieved by working in the ``large-mass limit'' in which the massive monopole core radii, $M_L^{-1}$ and $M_R^{-1}$, are much less than all other relevant distance scales.\footnote{It must be kept in mind that this limit involves a comparison between the monopole masses and the cloud sizes and massive monopole separations. While the masses are, of course, constant, the evolution of the other quantities may invalidate this limit at large times. This would happen, for example, in a geodesic motion that started with a large-mass $\kappa=0$ Dancer solution, passed through the spherically symmetric point where the symmetry axes in Fig.~\ref{dancerspace} meet, and then moved out toward the large-mass $\kappa=1$ solutions.} There are four possible cases, depending on the values of $\kappa_L$ and $\kappa_R$. We will examine the two with $\kappa_L = \kappa_R$. 
\subsubsection{Hyperbolic solutions, $\kappa_L = \kappa_R = 1$ } Here we take $\mu_L= M_LD_L/4\pi$ and $\mu_R = M_R D_R/4\pi$ both large, with $D_L$ and $D_R$ held fixed. In this limit the Dancer clouds have minimum size and $D_L$ and $D_R$ are the separations between the massive monopoles of the same species. Up to exponentially small corrections, \begin{equation} B=0\, , \qquad C = -{1\over 2}(D_L + D_R) \, . \end{equation} Substituting these values, as well as the asymptotic values of $I_{DD}^L$ and $I_{DD}^R$ from Eq.~(\ref{IDDhyperlimit}), into Eq.~(\ref{twotwotwoLag}) gives \begin{eqnarray} L_{\rm MS, eff} &=& {1 \over 2} M_L \, \dot Z_1^2 + {1 \over 2} M_R \,\dot Z_4^2 + {2\pi (Z_1 -Z_4) \over [( p- q)^2 - ( Z_1 - Z_4)^2]} \, (\dot p-\dot q) \, (\dot Z_1 -\dot Z_4) \cr\cr &+& {\pi (p-q) \over [(p-q)^2 - (Z_1 -Z_4)^2]} \left[(\dot p-\dot q)^2 + (\dot Z_1 -\dot Z_4)^2 \right] \cr\cr &+& {1 \over 2} M_L \, \dot Z_2^2 + {1 \over 2} M_R \, \dot Z_3^2 + {2\pi(Z_2 -Z_3) \over [(p+q)^2 - (Z_2 -Z_3)^2]} \, (\dot p+\dot q) \, (\dot Z_2 -\dot Z_3) \cr\cr &+& {\pi(p+q) \over [(p+q)^2 - (Z_2 -Z_3)^2]} \left[(\dot p+\dot q)^2 + (\dot Z_2 -\dot Z_3)^2 \right] \, , \label{hypermetric} \end{eqnarray} where \begin{eqnarray} Z_1 &=& Z_L + {D_L\over 2} \, , \cr \cr Z_2 &=& Z_L - {D_L\over 2} \, , \cr \cr Z_3 &=& Z_R + {D_R\over 2} \, , \cr \cr Z_4 &=& Z_R - {D_R\over 2} \, . \end{eqnarray} Examination of Eq.~(\ref{hypermetric}) shows that the metric is the sum of two independent pieces, one involving $Z_1$, $Z_4$, and $p-q$, and one involving $Z_2$, $Z_3$, and $p+q$. Each of these describes a (1,[1],1) SU(4) system. [Indeed, this could have been foreseen by recalling the results of Ref.~\cite{Houghton:2002bz}, where it was shown that the SU(6) solutions with two minimal Dancer clouds and all SU(2) orientations aligned were essentially superpositions of two independent SU(4) (1,[1],1) solutions.] 
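This splitting can also be seen directly at the level of the matrix $K$: with $B=0$ it is diagonal, and the products of its diagonal entries reproduce the two denominators appearing in Eq.~(\ref{hypermetric}). A short symbolic check (a sketch, assuming Python with sympy):

```python
import sympy as sp

p, q, ZL, ZR, DL, DR = sp.symbols('p q Z_L Z_R D_L D_R')

# Large-mass hyperbolic limit: B = 0, C = -(D_L + D_R)/2, R = Z_L - Z_R,
# so K is diagonal with entries
C = -(DL + DR) / 2
R = ZL - ZR
k1, k2 = p + q + C + R, p - q - C + R
k3, k4 = p + q - C - R, p - q + C - R

# Positions of the four massive monopoles
Z1, Z2 = ZL + DL / 2, ZL - DL / 2
Z3, Z4 = ZR + DR / 2, ZR - DR / 2

# Products of diagonal entries reproduce the denominators of Eq. (hypermetric),
# consistent with the split into two independent (1,[1],1) systems
assert sp.expand(k2 * k4 - ((p - q)**2 - (Z1 - Z4)**2)) == 0
assert sp.expand(k1 * k3 - ((p + q)**2 - (Z2 - Z3)**2)) == 0
```

The pairing follows from $Z_1 - Z_4 = R - C$ and $Z_2 - Z_3 = R + C$ in this limit.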
The splitting of the metric here implies that the two nontrivial clouds are completely decoupled from each other. Hence, this limiting case does not shed light on the interactions between clouds, which is our primary interest in this paper. We therefore turn to the second limiting case. \subsubsection{Trigonometric solutions, $\kappa_L = \kappa_R=0$} For these, we take $\mu_L = (s_0-s_L)D_L= M_LD_L/4\pi$ and $\mu_R =(s_R-s_0)D_R = M_R D_R/4\pi$ to be just less than the maximum allowed value, $\pi$. In this regime the approximate radius of the Dancer cloud is \begin{equation} a = {D \over 2( \pi - \mu)} \gg M^{-1} \, . \end{equation} To leading order, then, we can write \begin{equation} C = -B = (a_L + a_R) \equiv\tilde a \, . \end{equation} In addition, using Eq.~(\ref{IDDtriglimit}), we find, again to leading order, that \begin{equation} I_{DD}^L \, dD_L^2 + I_{DD}^R \, dD_R^2 = 4\pi \,{da_L^2 \over a_L} + 4\pi \, {da_R^2 \over a_R} = 16\pi \left(d\sqrt{\tilde a}\right)^2 + 16 \pi \,\tilde a d \phi^2 \, , \end{equation} where $\phi = \tan^{-1}(\sqrt{a_L/a_R})$. We can take the center of mass, $M_LZ_L + M_RZ_R$, to be at rest and define a reduced mass ${\cal M} = M_L M_R/(M_L + M_R)$. The effective moduli space Lagrangian of Eq.~(\ref{LMSeffDef}) then reduces to \begin{equation} L_{\rm MS, eff} = {\cal M}\, \dot R^2 + {\pi\over 2} \sum_\sigma { \dot\lambda_\sigma^2 \over \lambda_\sigma} + {\pi \over 2} {\left(\lambda_+ - \lambda_- \right)^2 \over \left(\lambda_+ + \lambda_- \right)} \dot\theta^2 + 4\pi \, {\dot{\tilde a}^2 \over \tilde a} + 16\pi\, \tilde a \, \dot\phi^2 \, . \label{effLag} \end{equation} Note that, except in the $\dot \phi^2$ term, the Dancer cloud size parameters $a_L$ and $a_R$ only enter the Lagrangian through their sum $\tilde a$. This is a consequence of our having aligned the U(1) phases of the two Dancer clouds, as described in Sec.~\ref{cylindrical}. 
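The change of variables used above is the parametrization $a_L = \tilde a \sin^2\phi$, $a_R = \tilde a \cos^2\phi$ implied by the definition of $\phi$. A short symbolic verification of the metric identity (a sketch, assuming Python with sympy):

```python
import sympy as sp

atil, phi = sp.symbols('atil phi', positive=True)
datil, dphi = sp.symbols('datil dphi')   # formal differentials

# Parametrization implied by phi = atan(sqrt(a_L/a_R)), atil = a_L + a_R
aL, aR = atil * sp.sin(phi)**2, atil * sp.cos(phi)**2

def d(f):
    """Formal exterior derivative in the (atil, phi) chart."""
    return sp.diff(f, atil) * datil + sp.diff(f, phi) * dphi

lhs = 4 * sp.pi * (d(aL)**2 / aL + d(aR)**2 / aR)
rhs = 16 * sp.pi * d(sp.sqrt(atil))**2 + 16 * sp.pi * atil * dphi**2
assert sp.simplify(lhs - rhs) == 0
```

The cross terms in $da_L$ and $da_R$ cancel between the two clouds, which is why only $\tilde a$ and $\phi$ survive.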
Finally, in the limit of large monopole mass we can treat $R = Z_L -Z_R$ as being constant in time, and so drop the first term on the right-hand side of Eq.~(\ref{effLag}). For the sake of simplicity, we will set $R=0$. In the large-mass limit in which we are working, this makes the system essentially spherically symmetric. It also sets $\theta = - \tan^{-1}(2\tilde a/q)$. \section{Cloud dynamics} \label{dynamics} We now focus on the dynamics of the trigonometric solutions discussed at the end of the previous section. We work in the large-mass limit with $R=0$. The eigenvalues $\lambda_\sigma$ of the matrix $K$ are then \begin{eqnarray} \lambda_1 &=& p + q +\tilde a \, , \cr \lambda_2 &=& p - q +\tilde a \, , \cr \lambda_+ &=& p -\tilde a + \sqrt{q^2 + 4 \tilde a^2} \, , \cr \lambda_- &=& p - \tilde a -\sqrt{q^2 + 4 \tilde a^2} \, . \end{eqnarray} As noted in Sec.~\ref{Nahmsec}, these eigenvalues must all be positive. Applying this constraint to the smallest eigenvalue, $\lambda_-$, gives the inequality\footnote{For a fixed static solution, $q$ is naturally defined to be positive. However, when describing time-dependent solutions it is convenient to allow $q$ to change sign when it goes through a zero.} \begin{equation} p - |q| \ge \tilde a \ge 0 \, . \label{lambdabound} \end{equation} \subsection{Asymptotic behavior} The system is particularly easy to analyze at large times (either positive or negative). The eigenvalues are then all large, with $p\pm q \gg \tilde a= a_L + a_R$, and \begin{eqnarray} \lambda_1 &\approx& \lambda_+ \approx p + q \, , \cr \lambda_2 &\approx& \lambda_- \approx p - q \, . \end{eqnarray} Substituting these into Eq.~(\ref{effLag}), and noting that the $\dot \theta^2$ term in the Lagrangian is suppressed, we see that the dynamics is well described by the Lagrangian \begin{equation} L_{\rm asym} = \pi {(\dot p + \dot q)^2\over p+q} + \pi {(\dot p - \dot q)^2\over p-q} + 4\pi {\dot a_L^2 \over a_L} + 4\pi {\dot a_R^2 \over a_R} \,
\end{equation} This can be viewed as describing a system composed of four noninteracting spherical clouds: two ``SU(4) clouds'', with cloud parameters $(p+q)$ and $(p-q)$, and two Dancer clouds, with cloud parameters $a_L$ and $a_R$. (We will refer to these cloud parameters as radii, but it should be kept in mind that the cloud structure does not allow a precise and unambiguous definition of its radius.) These evolve according to \begin{eqnarray} p\pm q &=& {1\over 4} C_\pm(t - t_\pm)^2 \, , \cr a_{L,R} &=& {1\over 4} C_{L,R}(t - t_{L,R})^2 \, , \end{eqnarray} where the $C_i$ and $t_i$ are arbitrary constants.\footnote{These formulas imply that at very large times the clouds would be expanding at speeds greater than that of light. A more detailed analysis of cloud behavior~\cite{Chen:2001qt} shows that at these times the moduli space approximation breaks down, and that instead the cloud expansion is best described as a wavefront moving at the speed of light.} The total energy is divided into four separately conserved parts, \begin{eqnarray} E_{p+q} &=& \pi {(\dot p + \dot q)^2\over p+q} = \pi C_+ \, , \cr E_{p-q} &=& \pi {(\dot p - \dot q)^2\over p-q} =\pi C_- \, , \cr E_L &=& 4\pi {\dot a_L^2 \over a_L} = 4\pi C_L \, , \cr E_R &=& 4\pi {\dot a_R^2 \over a_R} = 4\pi C_R \, . \end{eqnarray} Note that this asymptotic separation into noninteracting clouds did not require that the $(p+q)$-cloud and the $(p-q)$-cloud be very different in size, but only that they both be much larger than the Dancer clouds. This can be understood by recalling the description of the corresponding static solutions in Ref.~\cite{Houghton:2002bz}.
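Each decoupled term of $L_{\rm asym}$ has the form $\pi \dot x^2/x$, and the parabolic trajectories follow from its Euler-Lagrange equation. A short symbolic check that $x(t) = (E/4\pi)(t-t_0)^2$ solves the equation of motion with conserved energy $E$ (a sketch, assuming Python with sympy):

```python
import sympy as sp

t, E, t0 = sp.symbols('t E t_0', positive=True)
x = (E / (4 * sp.pi)) * (t - t0)**2   # trial trajectory, cf. Eq. (asymInitCond)

# Euler-Lagrange equation for one decoupled cloud term, L = pi * xdot**2 / x
xf = sp.Function('x')(t)
L = sp.pi * xf.diff(t)**2 / xf
EL = sp.diff(L, xf.diff(t)).diff(t) - sp.diff(L, xf)
residual = (EL.subs(xf.diff(t, 2), x.diff(t, 2))
              .subs(xf.diff(t), x.diff(t))
              .subs(xf, x))
assert sp.simplify(residual) == 0                      # the parabola is a solution
assert sp.simplify(sp.pi * x.diff(t)**2 / x - E) == 0  # conserved energy equals E
```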
By analyzing the magnetic field in the regions between the cloud radii, it was found that the non-Abelian part of the effective magnetic charge, $Q_{\rm NA}$, in each of the regions can be diagonalized, with\footnote{We have arbitrarily chosen $q>0$ and $a_R > a_L$.} \begin{equation} Q_{\rm NA} = \cases{ {\rm diag(0,0,0,0)} \, , & $r \gg p+q$ \cr {\rm diag(0,-1,0,1)} \, , & $p+q \gg r \gg p-q$ \cr {\rm diag(-1,-1,1,1)} \, , & $p-q \gg r \gg a_R$ \cr {\rm diag(-2,0,1,1)} \, , & $a_R \gg r \gg a_L$ \cr {\rm diag(-2,0,0,2)} \, , & $a_L \gg r $ . } \end{equation} In other words, the clouds act as if they have magnetic charges \begin{eqnarray} Q_{p+q} &=& {\rm diag(0,1,0,-1)} \, , \cr Q_{p-q} &=& {\rm diag(1,0,-1,0)} \, , \cr Q_{a_R} &=& {\rm diag(1,-1,0,0)} \, , \cr Q_{a_L} &=& {\rm diag(0,0,1,-1)} \, . \end{eqnarray} Thus, the $(p+q)$- and the $(p-q)$-clouds lie in mutually commuting SU(2) subgroups of the unbroken SU(4), and so can only affect each other via interactions mediated by one or both of the Dancer clouds. When $p \pm q \gg \tilde a$, these interactions are negligible, in accordance with the discussion in Sec.~\ref{largeCloudSec}. Similarly, the two Dancer clouds decouple from each other in this asymptotic regime, regardless of their relative sizes. \subsection{Scattering} We have a system of four clouds that are asymptotically noninteracting. The asymptotic solutions indicate that they are all contracting at large negative times, and expanding at large positive times. Their interactions at intermediate times can be viewed as a series of one or more scattering processes. These can be studied by starting with an initial configuration containing well-separated clouds and then, using numerical simulations, letting the system evolve under the equations of motion that follow from the Lagrangian of Eq.~(\ref{effLag}). We show two typical examples of this in Fig.~\ref{multicloudscatter}. 
Both of these simulations were performed with the constant of motion $J={\tilde a}^2\dot\phi$ set equal to zero, so that the ratio of the Dancer cloud radii remains constant throughout. The evolution does not depend on this ratio, but only on the sum of the Dancer radii, $\tilde a$, which is shown by the solid line in these plots. There is some ambiguity in defining the size of the two SU(4) clouds [e.g., the differences between $\lambda_1$, $\lambda_+$, and $p+q$ are negligible at large times, but not necessarily when the SU(4) and Dancer clouds are comparable in size]. We have, somewhat arbitrarily, chosen to plot $p+q$ (dotted line) and $p-q$ (dashed line). \begin{figure} \centering \begin{tabular}{cc} \epsfig{file=fig2a-3clouds2.eps,width=0.49\linewidth,clip=} & \epsfig{file=fig2b-3clouds1.eps,width=0.49\linewidth,clip=} \\ \end{tabular} \caption{Two examples of cloud collisions. The horizontal axis represents time, and the vertical axis cloud size. The incoming $p-q$ and $p+q$ clouds, represented by the dotted and dashed lines respectively, collide with the Dancer cloud (solid line) and then expand to infinity. } \label{multicloudscatter} \end{figure} These plots show several features, common to all of the examples that we have examined, that should be noted. First, the SU(4) clouds always remain larger than the Dancer clouds (in fact, larger than the sum of the Dancer radii), as should be expected from the bound in Eq.~(\ref{lambdabound}). In the asymptotic solutions, the SU(4) cloud radii have parabolic dependences on time, with a minimum radius of zero. In the actual interacting solutions, their behavior is rather similar, except that the vertex of the parabola is raised so that it occurs at or near the point when the SU(4) cloud radius is equal to $\tilde a$. 
(Given the ambiguity in defining the cloud radii, the distinction between exact coincidence of these values, as in Fig.~\ref{multicloudscatter}a, or a slight gap between them, as in Fig.~\ref{multicloudscatter}b, is not meaningful.) In particular, the overlap (or near overlap) between the SU(4) clouds and the Dancer clouds is relatively brief, suggestive of a rather short and sharp interaction. Also, from examining simulations for a variety of initial conditions, we see that, just as in the asymptotic limit, the $(p+q)$- and $(p-q)$-clouds do not appear to interact directly with each other. This suggests that we focus on the interaction of just one of these SU(4) clouds with the Dancer clouds. We can do this by choosing initial conditions such that the $(p+q)$-cloud is very large (and therefore essentially not interacting with the rest of the system) at the time that the $(p-q)$- and Dancer clouds interact. In fact, we can simplify our analysis by taking the $(p+q)$-cloud to be at infinity; i.e., by taking the limit $p \to \infty$, with $\dot p^2/p$ and $\delta \equiv p-q$ held fixed. In this limit $\lambda_1 = 2p-\delta +\tilde a$ and $\lambda_+ = 2p-\delta - \tilde a +O(1/p)$ tend to infinity, while \begin{eqnarray} \lambda_2 &=& \delta + \tilde a \, , \cr \lambda_- &=& \delta - \tilde a \, . \end{eqnarray} If we drop the terms proportional to $\dot p^2$ that decouple from everything else, and restrict ourselves to the $J=0$ case where the ratio of $a_L/a_R$ remains constant, the effective Lagrangian of Eq.~(\ref{effLag}) reduces to \begin{equation} L_{\rm MS, red} = {\pi \over 2} {(\dot \delta + \dot{\tilde a})^2 \over (\delta +\tilde a)} + {\pi \over 2} {(\dot \delta - \dot {\tilde a})^2 \over (\delta -\tilde a)} + 4\pi {\dot {\tilde a}^2 \over \tilde a} \, . 
\label{aDeltaLag} \end{equation} Keeping in mind that we expect $\delta \gg \tilde a$ at large times, and noting that this purely kinetic Lagrangian is equal to the energy, we can write \begin{eqnarray} E &=& \left[{\pi\delta {\dot \delta}^2 \over \delta^2-\tilde a^2} - {\pi \tilde a \dot{\tilde a}\dot\delta\over \delta^2-\tilde a^2}\right] + \left[{4\pi{\dot {\tilde a}}^2 \over \tilde a} + {\pi \delta {\dot {\tilde a}}^2 \over \delta^2-\tilde a^2} - {\pi \tilde a \dot{\tilde a}\dot\delta\over \delta^2-\tilde a^2}\right] \cr &\equiv& E_\delta + E_a \, , \end{eqnarray} where we have defined SU(4) and Dancer cloud energies whose asymptotic values at large $|t|$ are \begin{eqnarray} E_\delta &=& \pi \, {\dot \delta^2 \over \delta} \, , \cr E_a &=& 4 \pi \, {\dot {\tilde a}^2 \over \tilde a} \, . \label{asymE} \end{eqnarray} It follows that the trajectories at large negative times, when $\delta \gg \tilde a$, are of the form \begin{eqnarray} \delta (t) &=& { E_\delta \over 4\pi} \, (t-t_\delta)^2 \, , \cr\cr \tilde a(t) &=& { E_a \over 16\pi} \, (t-t_a)^2 \, . \label{asymInitCond} \end{eqnarray} The trajectories at large positive times are of the same form, except that the values of the various constants of motion are changed as a result of the interactions between the clouds. \begin{figure} \centering \begin{tabular}{cc} \epsfig{file=fig3a-delta7.eps,width=0.49\linewidth,clip=} & \epsfig{file=fig3b-delta8.eps,width=0.49\linewidth,clip=} \end{tabular} \caption{A typical collision between the Dancer cloud and an SU(4) cloud. The horizontal axis represents time, and the vertical axis cloud size. The actual cloud trajectories are shown as solid lines.
In (a) the dashed lines indicate the initial and final asymptotic trajectories of the Dancer cloud, while in (b) they indicate the asymptotic trajectories of the SU(4) cloud.} \label{asymcurves} \end{figure} The form of Eq.~(\ref{aDeltaLag}) is strikingly similar to that of Eq.~(\ref{su4Lag}), with the Dancer cloud parameter $\tilde a$ playing a similar role to $R$, the separation between the massive monopoles in the SU(4) (1,[1],1) solution, and the fixed monopole reduced mass $\mu$ replaced by the variable $\tilde a^{-1}$. This seems surprising, since the previous case involved massive monopoles hitting an ellipsoidal cloud at two distinct points, while in the present case two nested spherical clouds are meeting each other at all points. Nevertheless, the similarity in the Lagrangians suggests that the interactions should be similar. In particular, the analysis of the SU(4) dynamics in Ref.~\cite{Chen:2001ge} found that the interaction between the cloud and the massive monopoles was relatively brief, taking place over a distance of order $\mu^{-1}$. This suggests similarly brief interactions in the present case, with the interaction largely restricted to the time when $\delta - \tilde a$ is itself of order $\tilde a$. We saw some indication of this, with all of the clouds present, in Fig.~\ref{multicloudscatter}. We illustrate this more clearly in the two-cloud case in Fig.~\ref{asymcurves}, where we show the transition from the initial asymptotic trajectories to the final ones. Equation~(\ref{asymInitCond}) suggests that an arbitrary solution depends on four initial constants, $t_a$, $t_\delta$, $E_a$, and $E_\delta$. It is clear that time-translation invariance can be used to eliminate one of these. In addition, the Lagrangian of Eq.~(\ref{aDeltaLag}) has some interesting scaling properties. 
The only effect of the rescalings \begin{eqnarray} \tilde a &\rightarrow& \tilde a' = \lambda \tilde a \, , \cr \delta &\rightarrow& \delta' = \lambda \delta \, ,\cr t &\rightarrow& t' = \kappa t \end{eqnarray} is to multiply the Lagrangian by an overall factor of $\lambda/\kappa^2$. Hence, given any solution of the equations of motion, these rescalings will generate a two-parameter set of solutions. Thus, to study the full range of possible solutions we really only need to vary a single continuous parameter, which we choose to be $E_a/E_\delta$. (Note that applying the constraint $\delta > \tilde a$ in the asymptotic region implies that $E_a/E_\delta <4$.) Also, since the rescaling cannot reverse the time ordering, we must consider separately the cases $t_a -t_\delta >0$ and $t_a - t_\delta <0$. \begin{figure}[t] \centering \begin{tabular}{cc} \epsfig{file=fig4a-delta1.eps,width=0.49\linewidth,clip=} & \epsfig{file=fig4b-delta2.eps,width=0.49\linewidth,clip=} \\ \epsfig{file=fig4c-delta3.eps,width=0.49\linewidth,clip=} & \epsfig{file=fig4d-delta4.eps,width=0.49\linewidth,clip=} \\ \epsfig{file=fig4e-delta5.eps,width=0.49\linewidth,clip=} & \epsfig{file=fig4f-delta6.eps,width=0.49\linewidth,clip=} \end{tabular} \caption{Typical interactions between a Dancer cloud (solid black line) and an SU(4) cloud (dashed purple line). The horizontal axis represents time, and the vertical axis cloud size.} \label{manyInt} \end{figure} The range of possibilities is illustrated in Fig.~\ref{manyInt}. If $t_a -t_\delta >0$, the collapsing SU(4) cloud collides with the Dancer cloud while the latter is also collapsing. Three examples of this are shown in Fig.~\ref{manyInt}a-c, with the value of $E_a/E_\delta$ increasing from one to the next. In all three cases the SU(4) cloud loses energy to the Dancer cloud. 
In the last case, where $E_a/E_\delta$ is initially close to its maximum allowed value, the SU(4) cloud loses so much energy that the inequality $E_a/E_\delta <4$ is temporarily violated. Because the cloud radii both increase like $Et^2$, there must be a second interaction in which the Dancer cloud overtakes the SU(4) cloud and transfers back enough energy that the inequality is satisfied at large times. The crossover from the behavior shown in Fig.~\ref{manyInt}b to that in Fig.~\ref{manyInt}c occurs when $E_a/E_\delta \approx 2$. In the borderline case, $t_a - t_\delta =0$, the two clouds arrive at the origin simultaneously, as shown in Fig.~\ref{manyInt}d. In this case the asymptotic solution of Eq.~(\ref{asymInitCond}) is exact for all times, and no energy is exchanged between the clouds. Finally, we come to the case where $t_a -t_\delta <0$. Here, the collapsing SU(4) cloud only reaches the Dancer cloud after the latter has already reached its minimum size and begun to expand. If $E_a/E_\delta$ is sufficiently large (though still less than 4), as in Fig.~\ref{manyInt}e, the Dancer cloud loses some energy to the SU(4) cloud, but continues to expand, although at a reduced speed. (This is then a time-reversed version of a solution with $t_a -t_\delta >0$.) However, if $E_a/E_\delta$ is small enough, as in Fig.~\ref{manyInt}f, the collision can reverse the expansion of the Dancer cloud and have it shrink to zero radius a second time. The boundary between these two regimes is at $E_a/E_\delta \approx 2.6$. \begin{figure} \centering \begin{tabular}{cc} \epsfig{file= fig5a-catchup_elastic.eps,width=0.49\linewidth,clip=} & \epsfig{file= fig5b-headon_elastic.eps,width=0.49\linewidth,clip=} \\ \epsfig{file= fig5c-catchup_inelastic.eps,width=0.49\linewidth,clip=} & \epsfig{file= fig5d-headon_inelastic.eps,width=0.49\linewidth,clip=} \end{tabular} \caption{Energy change between the SU(4) and Dancer clouds.
In (a) and (c), both clouds are contracting at the time of collision, while in (b) and (d) a contracting SU(4) cloud collides with an expanding Dancer cloud. In all four plots the dotted blue curve shows the transfer observed in the simulations. In (a) and (b) the solid purple curve shows the prediction for an elastic collision, while in (c) and (d) it indicates the prediction for an inelastic collision with $\Delta = -{1\over 5} (3 v_{\delta i}^2 + 4v_{\delta i}v_{ai} -4 v_{ai}^2)$.} \label{energyTransFig} \end{figure} While these plots are sufficient to provide a qualitative understanding of the interactions, it would be nice to have some more quantitative results as well. Let us first consider the energy transferred during the collision. The fact that the interaction between the clouds takes place over a relatively short time interval suggests a naive model that treats the interaction as an instantaneous elastic collision of two rigid shells, with kinetic energy and radial momentum ($\sum_a M_a \dot r_a$) conserved. Because the Dancer cloud has four times the kinetic energy of the SU(4) cloud for the same value of the velocity [see Eq.~(\ref{asymE})], we treat it as having four times the mass. It is then a straightforward matter to calculate the fractional energy transfer. The result is compared with the actual data from numerical simulations in Fig.~\ref{energyTransFig}. We see that the model captures the important features of the collisions. It accurately predicts that if the two shells collide while traveling in the same direction, the faster one will lose energy. It also agrees with the data in predicting that in a head-on collision the SU(4) cloud will lose almost all of its energy for large values of $E_a/E_\delta$. Finally, it correctly asserts that for a head-on collision there is a critical value of the initial energy ratio below which the direction of energy transfer is reversed. 
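Both ingredients of this comparison can be sketched in a few lines (assuming Python with numpy, sympy, and scipy; the initial data are hypothetical, and the coordinates $u = \delta + \tilde a$, $w = \sqrt{\delta - \tilde a}$ are introduced here only to keep the numerical integration regular near $\delta = \tilde a$):

```python
import numpy as np
import sympy as sp
from scipy.integrate import solve_ivp

# Eq. (aDeltaLag) rewritten in u = delta + atil, w = sqrt(delta - atil)
t = sp.symbols('t')
u, w = sp.Function('u')(t), sp.Function('w')(t)
ud, wd = u.diff(t), w.diff(t)
L = (sp.pi/2 * ud**2 / u + 2*sp.pi * wd**2
     + 2*sp.pi * (ud - 2*w*wd)**2 / (u - w**2))

# Euler-Lagrange equations, solved symbolically for the accelerations
eqs = [sp.diff(L, v.diff(t)).diff(t) - sp.diff(L, v) for v in (u, w)]
acc = sp.solve(eqs, [u.diff(t, 2), w.diff(t, 2)], dict=True)[0]
U, W, Uv, Wv = sp.symbols('U W Uv Wv')
rep = [(u.diff(t), Uv), (w.diff(t), Wv), (u, U), (w, W)]
f_u = sp.lambdify((U, W, Uv, Wv), acc[u.diff(t, 2)].subs(rep), 'numpy')
f_w = sp.lambdify((U, W, Uv, Wv), acc[w.diff(t, 2)].subs(rep), 'numpy')
f_E = sp.lambdify((U, W, Uv, Wv), L.subs(rep), 'numpy')  # kinetic L = energy

def rhs(_, y):
    return [y[2], y[3], f_u(*y), f_w(*y)]

# Hypothetical initial data on the asymptotic trajectories, Eq. (asymInitCond):
# delta = t**2 and atil = 0.2*(t - 5)**2, so E_a/E_delta = 0.8 and the faster
# SU(4) cloud overtakes the Dancer cloud while both are collapsing.
t0, d0, a0, dv0, av0 = -10.0, 100.0, 45.0, -20.0, -6.0
y0 = [d0 + a0, np.sqrt(d0 - a0), dv0 + av0, (dv0 - av0) / (2*np.sqrt(d0 - a0))]
sol = solve_ivp(rhs, (t0, 0.0), y0, method='DOP853', rtol=1e-10, atol=1e-12)

# Recover the cloud radii and velocities at the end of the run
Uf, Wf, Uvf, Wvf = sol.y[:, -1]
d_f, a_f = (Uf + Wf**2) / 2, (Uf - Wf**2) / 2
dv_f, av_f = (Uvf + 2*Wf*Wvf) / 2, (Uvf - 2*Wf*Wvf) / 2

assert abs(f_E(*sol.y[:, -1]) - f_E(*y0)) < 1e-6 * f_E(*y0)  # energy conserved
assert np.pi * dv_f**2 / d_f < np.pi * dv0**2 / d0  # SU(4) cloud lost energy

# Naive rigid-shell model: elastic 1D collision with mass ratio 1:4, using the
# free-trajectory velocities at their crossing time t = -4.05
v1, v2 = 2 * (-4.05), 0.4 * (-4.05 - 5)
v1f = ((1 - 4) * v1 + 2 * 4 * v2) / 5  # model's final SU(4) shell velocity
frac_retained = (v1f / v1)**2          # model's retained-energy fraction
```

The energy transfer can then be read off by comparing $\pi\dot\delta^2/\delta$ before and after the collision, and the last lines give the corresponding rigid-shell prediction of the kind compared against the simulations in Fig.~\ref{energyTransFig}.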
This model works better than one might have hoped, but there is no mystery as to why the predicted and observed energy transfers disagree. First, the cloud trajectories are only approximate, and are altered by additional interaction terms that only become significant when the cloud radii are comparable. Second, the interactions are not truly instantaneous, but occur over a finite time interval as the clouds move through one another. Let us modify the statement of conservation of energy of the clouds by including an inelastic term $\Delta$, defined in terms of the initial and final cloud velocities by \begin{equation} \frac12 v_{\delta i}^2 + \frac12 v_{ai}^2 = \frac12 v_{\delta f}^2 + \frac12 v_{af}^2 + \Delta \, . \end{equation} Because all of the terms in the conservation of energy equation are quadratic in velocities, we looked for an expression for $\Delta$ that was quadratic in the initial velocities and that provided good agreement with the observed energy transfer. By trial and error, we found that taking $\Delta = -{1\over 5} (3 v_{\delta i}^2 + 4v_{\delta i}v_{ai} -4 v_{ai}^2)$ provides excellent agreement with the results obtained from numerical simulations, as can be seen from the plots in Fig.~\ref{energyTransFig}. The exact dynamics that give rise to this formula are still unclear to us. We argued previously that the interaction between the clouds is largely restricted to the time when $\delta - \tilde a$ is itself of order $\tilde a$; this gives us a measure of the thickness of the clouds. To describe this more precisely let us define the beginning of the interaction to be the time when 20\% of the total energy has been transferred from one cloud to the other, and the end of the interaction to be the time when 80\% has been transferred. We also define $\delta_0$ and $\tilde a_0$ to be the values of these variables at the beginning of the interaction and \begin{equation} \rho = {\delta_0 - \tilde a_0 \over \tilde a_0 } \,
\end{equation} \begin{figure} \centering \begin{tabular}{cc} \epsfig{file= fig6a-rho_cc.eps,width=0.49\linewidth,clip=} & \epsfig{file= fig6b-rho_ce.eps,width=0.49\linewidth,clip=} \end{tabular} \caption{The parameter $\rho$, which is a measure of cloud thickness, as a function of the energy ratio. The result for two collapsing clouds is shown in (a), and for a collapsing SU(4) cloud and expanding Dancer cloud in (b). } \label{RhoFig} \end{figure} Figure~\ref{RhoFig} shows $\rho$ as a function of energy ratio for two different regimes. The left plot shows $\rho$ for an interaction in which a collapsing SU(4) cloud overtakes a collapsing Dancer cloud. Because the cloud velocities are equal when $E_a/E_\delta = 4$, approaching this value from below corresponds to decreasing the relative velocity of the clouds. This plot therefore shows that $\rho$ decreases as the relative velocity is decreased. We see that the two clouds can approach quite close to one another before exchanging a significant amount of energy if they are moving slowly relative to one another. The right plot is for an interaction in which a collapsing SU(4) cloud collides with an expanding Dancer cloud. In this case the relative velocity increases with $E_a/E_\delta$. We see that as this increases the clouds begin to transfer their energy sooner, and hence at a greater separation. The maximum value of $\rho$ is about 0.17 in the former case and 0.15 in the latter. The similarity of the values for these two collision scenarios would seem to indicate that the cloud thickness is a relatively small fraction of the cloud radius, approximately $(0.15 - 0.20)\, \tilde a_0$. The behavior described by these two plots suggests that the clouds act as dissipative media, and that when the SU(4) cloud moves through the Dancer cloud, the energy loss increases with the relative velocity of the clouds.
This explains why when both clouds are collapsing and their relative velocity is small, they can come very close together before significant energy is transferred. In the other situation, where the clouds collide head-on, the energy transfer begins very quickly because the relative velocity is large. This is also consistent with the behavior of the inelasticity in the collisions that we found previously. \section{Summary and concluding remarks} \label{conclude} In this paper we have used moduli space methods to investigate the properties of the massless magnetic monopoles that arise when a gauge theory is spontaneously broken to a non-Abelian subgroup. We have shown how the natural metric on the Nahm data for a class of SU($2M+2$) solutions with $2M$ massive and $M(2M-1)$ massless monopoles can be obtained from the metric of a simpler class of SU($M+1$) solutions. Using this approach, we have explicitly verified for the SU(4) (1,[1],1) case that the moduli spaces for the Nahm data and for the BPS solutions are isomorphic, thus lending further support to the conjecture that such an isomorphism holds in general. We then applied this method to the problem of obtaining the metric for the SU(6) (2,[2],[2],[2],2) solutions from the (2,[1]) SU(3) metric studied by Dancer. This gave us an effective Lagrangian for a class of axially symmetric solutions. This Lagrangian was then used to study the interactions of the clouds that are the semiclassical manifestation of the massless monopoles. By examining explicit spacetime solutions, it has been known for some time that the spacetime fields evaluated at the cloud radius are not qualitatively different from those at points slightly further from or closer to the origin. One might therefore expect the interactions between clouds to take place as if the shells of these clouds were diffuse. However, our simulations show instead that the clouds interact more like relatively thin, hard shells. 
In the collisions between an SU(4) cloud and the Dancer clouds the energy transfer takes place over a short interval before and after the cloud radii coincide, suggesting an effective cloud thickness that is roughly 15-20\% of the cloud radius. Some intriguing open questions remain. It is known that in Type IIB string theory one can interpret D1-branes stretched between D3-branes as the analogs of massive magnetic monopoles. This suggests that massless monopoles should, in some sense, correspond to D1-branes of zero length connecting coincident D3-branes. It would be desirable to clarify these ideas, and to see if they would help explain the properties of the clouds that we have found. One would also like to understand better the role of massless monopoles in the electric-magnetic duality of $N = 4$ supersymmetric Yang-Mills theory, where they should be the duals of the ``gluons'', the massless gauge particles of the unbroken subgroup. We hope that our results will help shed light on these questions. \begin{acknowledgments} This work was supported in part by the U.S.~Department of Energy. \end{acknowledgments}
\section{Introduction} Let $A$ be an $n$ by $n$ matrix or a bounded linear operator on a complex Hilbert space $(H, \langle \cdot , \cdot \rangle , \| \cdot \|)$. A closed set $\Omega \subset \mathbb{C}$ is a $K$-spectral set for $A$ if the spectrum of $A$ is contained in $\Omega$ and if, for all rational functions $f$ bounded in $\Omega$, the following inequality holds: \begin{equation} \| f(A) \| \leq K \| f \|_{\Omega} , \label{Kspectral} \end{equation} where $\| \cdot \|$ on the left denotes the norm in $H$ and $\| \cdot \|_{\Omega}$ on the right denotes the $\infty$-norm on $\Omega$. It was shown in \cite{CP} that the closure of the numerical range \begin{equation} W(A) := \{ \langle Aq,q \rangle : q \in H,~\| q \| = 1 \} \label{numericalrange} \end{equation} is a $(1 + \sqrt{2})$-spectral set for $A$. This was extended in \cite{CG} to show that other regions in the complex plane are $K$-spectral sets. In particular, it was shown that the numerical range with a circular hole or cutout is a $(3 + 2 \sqrt{3})$-spectral set. In this paper, we use theorems proved in \cite{CG} to derive values of $K$ for which (\ref{Kspectral}) holds for other regions $\Omega$. A simple way to find such a $K$ value for a given region $\Omega$ containing the spectrum of $A$ in its interior is to use the Cauchy integral formula, replacing the norm of the integral by the integral of the resolvent norm: \[ f(A) = \frac{1}{2 \pi i} \int_{\partial \Omega} ( \zeta I - A )^{-1} f( \zeta )\,d \zeta \Rightarrow \| f(A) \| \leq \frac{1}{2 \pi} \left( \int_{\partial \Omega} \| ( \zeta I - A )^{-1} \|~| d \zeta | \right) \| f \|_{\Omega} . \] Thus one can always take \begin{equation} K = \frac{1}{2 \pi} \int_{\partial \Omega} \| ( \zeta I - A )^{-1} \|~ | d \zeta | . 
\label{KCauchy} \end{equation} The main goal of \cite{CG} was to produce $K$ values that are independent of $A$ for certain regions $\Omega$ (that do depend on $A$), but it was also hoped that the values derived there would be smaller than those in (\ref{KCauchy}). We will compare these $K$ values for various sets $\Omega$. For some sets, we will also compare these values to what we believe to be the optimal $K$ value. This is computed numerically using an optimization code and, at least, provides a {\em lower bound} on $K$. The main theorem in \cite{CG} (Theorem \ref{thm:main} in this paper) relates the value of $K$ not to $\frac{1}{2 \pi}$ times a boundary integral of the resolvent norm but to $\frac{1}{\pi}$ times a boundary integral of the absolute value of the minimum point in the spectrum of the Hermitian part of a certain unit scalar times the resolvent. This is $\frac{1}{\pi}$ times the infimum of the real part of the numerical range of this unit scalar times the resolvent. If the absolute value of this infimum turns out to be much less than the {\em numerical radius} (the supremum of the absolute values of points in the numerical range of the resolvent, which is between $\frac{1}{2}$ and $1$ times the norm of the resolvent), then Theorem \ref{thm:main} may give a much smaller $K$ value than that in (\ref{KCauchy}); on the other hand, if the absolute value of this infimum turns out to be almost equal to the numerical radius of the resolvent, then the two $K$ values may be close, with formula (\ref{KCauchy}) actually producing a somewhat smaller value. We show that this latter situation holds in a number of cases of interest and we give a partial explanation as to why. The organization of this paper is as follows. In section \ref{previous results} we establish notation and review results from \cite{CG}. 
In section \ref{extensions} we extend these results slightly and show how they can be applied to an arbitrary region containing the spectrum of $A$ to determine a value of $K$ for which the region is a $K$-spectral set. In section \ref{applications} we apply the extended results to a variety of problems. We consider block diagonal matrices and show how the numerical range can be divided into disjoint components that constitute a $K$-spectral set for the matrix. We also consider relevant $K$-spectral sets for describing the behavior of continuous and discrete time dynamical systems. In section \ref{comparisons} we give further comparisons between the values of $K$ determined by the methods of sections \ref{previous results} and \ref{extensions} and those defined by (\ref{KCauchy}), and we provide an explanation for the observed results. \section{Results from \cite{CG}} \label{previous results} \subsection{Notation} Let $f$ be a rational function bounded in a closed set $\Omega$ containing the spectrum of $A$. Assume that the boundary $\partial \Omega$ is rectifiable and has a finite number of connected components. From the Cauchy integral formula, we can write \[ f(z) = \frac{1}{2 \pi i} \int_{\partial \Omega} \frac{f( \zeta )}{\zeta - z}\,d\zeta ,~~ f(A) = \frac{1}{2 \pi i} \int_{\partial \Omega} ( \zeta I - A )^{-1} f( \zeta )\,d\zeta . \] Letting $s$ denote arc length, going in a counter-clockwise direction along $\partial \Omega$, and letting $\partial \omega \subset \mathbb{R}$ denote the values of $s$ as $\zeta (s)$ traverses $\partial \Omega$, the above equations can be written in the form \[ f(z) = \frac{1}{2 \pi i} \int_{\partial \omega} \frac{f( \zeta (s) )}{\zeta (s) - z} \zeta' (s)\,ds ,~~ f(A) = \frac{1}{2 \pi i} \int_{\partial \omega} ( \zeta (s) I - A )^{-1} f( \zeta (s) ) \zeta' (s)\,ds . 
\] We will also use the Cauchy transform of the complex conjugate $\bar{f}$: \[ g(z) := C( \overline{f},z) := \frac{1}{2 \pi i} \int_{\partial \omega} \frac{\overline{f( \zeta (s))}}{\zeta (s) - z} \zeta' (s)\,ds ,~~ g(A) := \frac{1}{2 \pi i} \int_{\partial \omega} ( \zeta (s) I - A )^{-1} \overline{f( \zeta (s))} \zeta' (s)\,ds . \] Finally we define the transform of $f$ by the double layer potential kernel, \begin{equation} \mu ( \zeta (s),z ) := \frac{1}{\pi} \frac{d}{ds} ( \arg ( \zeta (s) - z ) ) = \frac{1}{2 \pi i} \left( \frac{ \zeta' (s)}{\zeta (s) - z} - \frac{ \overline{\zeta' (s)}}{\overline{\zeta (s)} - \bar{z}} \right) , \label{mu_defn} \end{equation} \begin{equation} \mu ( \zeta (s),A ) = \frac{1}{2 \pi i} \left( ( \zeta (s) I - A )^{-1} \zeta' (s) - ( \overline{\zeta (s)} I - A^{*} )^{-1} \overline{\zeta' (s)} \right) . \label{muA_def} \end{equation} With these definitions, we can write \[ S(f,z) := f(z) + \overline{g(z)} = \int_{\partial \omega} f( \zeta (s)) \mu ( \zeta (s),z)\,ds , \] \[ S(f,A) := f(A) + g(A )^{*} = \int_{\partial \omega} f( \zeta (s)) \mu ( \zeta (s),A)\,ds . \] Further, note that $S(1,A) = 2I$ since \[ \int_{\partial \omega} \mu ( \zeta (s),A)\,ds = \frac{1}{2 \pi i} \int_{\partial \omega} ( \zeta (s) I - A )^{-1} \zeta' (s)\,ds + \left( \frac{1}{2 \pi i} \int_{\partial \omega} ( \zeta (s) I - A )^{-1} \zeta' (s)\,ds \right)^{*} = I + I^{*} = 2I . \] \subsection{Main Results from \cite{CG}} Define \[ c_1 := \sup \{ \max_{z \in \Omega } | C( \bar{f}, z ) | : f \mbox{ a rational function}, \| f \|_{\Omega} \leq 1 \} . \] It is shown in \cite[Lemma 1]{CG} that $c_1$ satisfies \begin{equation} c_1 \leq \supess_{\zeta_0 \in \partial \Omega} \int_{\partial \omega} | \mu ( \zeta (s), \zeta_0 ) |\,ds . \label{c1_bound} \end{equation} Define \begin{equation} c_2 := \frac{1}{2} \sup \{ \| S(f,A) \| : f \mbox{ a rational function}, \| f \|_{\Omega} \leq 1 \} . 
\label{c2_def} \end{equation} Following is (a part of) the main theorem of \cite[Theorem 2]{CG}: \begin{theorem} \label{thm:main} With $c_1$ and $c_2$ as defined above, $\Omega$ is a $K$-spectral set for $A$, where \[ K = c_2 + \sqrt{ c_2^2 + c_1 } . \] \end{theorem} One can use (\ref{c1_bound}) and definition (\ref{mu_defn}) to bound $c_1$ in the theorem. If we fix $\zeta_0 \in \partial \Omega$ and let $\zeta(s)$ move around a curve $\Gamma_j$ that is all or part of $\partial \Omega$ then, from the definition in (\ref{mu_defn}), $\int_{s: \zeta (s) \in \Gamma_j} | \mu ( \zeta (s), \zeta_0 ) |\,ds$ is equal to $\frac{1}{\pi}$ times the total variation in the argument of $\zeta (s) - \zeta_0$. For example, if $\partial \Omega$ is a circle or the boundary of a convex set such as in Figure \ref{fig:regions}(a), then the argument of $\zeta (s) - \zeta_0$ changes by $\pi$ as $\zeta (s)$ traverses the curve $\partial\Omega$ so that $\int_{\partial \omega} | \mu ( \zeta (s), \zeta_0 ) |\,ds = 1$. If $\zeta_0$ lies inside a circle or the boundary curve of a convex set such as in Figure \ref{fig:regions}(b), then the integral of $| \mu ( \zeta (s), \zeta_0 ) |$ over that piece of the boundary is $2$. If $\zeta_0$ lies outside a circle of radius $r$ such as in Figure \ref{fig:regions}(c), then the argument of $\zeta (s) - \zeta_0$ goes from its initial value, say, $0$ to $\arcsin (r/R)$, where $R$ is the distance from $\zeta_0$ to the center of the circle, back to $0$, to $- \arcsin (r/R)$, and back to $0$, for a total change of $4 \arcsin (r/R)$. Note that for any region $\Omega$, the upper bound (\ref{c1_bound}) on $c_1$ can be computed numerically, by testing many points $\zeta_0 \in \partial \Omega$ and finding the one that leads to the largest total variation in the argument of $\zeta (s) - \zeta_0$, as $\zeta (s)$ traverses $\partial \Omega$. \begin{figure}[ht] \centerline{\epsfig{file=regionsnew.eps,width=4in}} \caption{Various boundary configurations. 
The blue asterisk represents $\zeta_0$, and the red lines show how the angle of the vector $\zeta (s) - \zeta_0$ changes as $\zeta (s)$ traverses the boundary curve.} \label{fig:regions} \end{figure} \medskip To obtain upper bounds on $c_2$, we first note that if $\mu ( \zeta (s) , A )$ is positive semidefinite (PSD) for $s \in [ s_{min} , s_{max} ]$, then \begin{equation} \left\| \int_{s_{min}}^{s_{max}} f( \zeta (s) ) \mu ( \zeta (s), A )\,ds \right\| \leq \max_{s \in [s_{min} , s_{max}]} | f( \zeta (s)) |~\left\| \int_{s_{min}}^{s_{max}} \mu ( \zeta (s), A )\,ds \right\| . \label{PSDresult} \end{equation} A proof can be obtained by noting that \[ \left\| \int_{s_{min}}^{s_{max}} f( \zeta (s)) \mu ( \zeta (s), A )\,ds \right\| = \sup_{\| x \| = \| y \| = 1} \left| \int_{s_{min}}^{s_{max}} f( \zeta (s) ) \left\langle \mu ( \zeta (s) ,A )y, x \right\rangle \,ds \right| , \] and following the arguments in \cite[Lemma 2.3]{CP}. Thus if $\mu ( \zeta , A )$ is PSD for all $\zeta \in \partial \Omega$, then $c_2 \leq 1$, since for any rational function $f$ with $\| f \|_{\Omega} \leq 1$, \[ \| S(f,A) \| \leq \left\| \int_{\partial \omega} \mu ( \zeta (s), A )\,ds \right\| = \| 2I \| = 2 , \] and from definition (\ref{c2_def}), $c_2$ is bounded by half this value. For $\Omega$ a convex set containing $W(A)$, Theorem \ref{thm:main} yields the Crouzeix-Palencia result \cite{CP} that $\Omega$ is a $(1 + \sqrt{2} )$-spectral set for $A$, since in this case $c_1 \leq 1$ and $c_2 \leq 1$. When $\mu ( \zeta (s) , A )$ is not PSD, we will add a multiple of the identity to $\mu ( \zeta (s) , A)$ to obtain a PSD operator. For this, we need bounds on the minimum value in the spectrum of $\mu ( \zeta (s) , A )$: \begin{equation} \lambda_{min}( \mu ( \zeta (s), A)) := \min \{ \lambda : \lambda \in \mbox{Sp} ( \mu ( \zeta (s) , A ) ) \} . \label{lambdamin_def} \end{equation} First, fix a point $\zeta_0 = \zeta ( s_0 )$ in $\partial \Omega$ where the unit tangent $\zeta_0' := \left. 
\frac{d \zeta}{ds} \right|_{s_0}$ exists. Since $\mu ( \zeta (s),A)$ depends on $\zeta' (s)$, when we fix a point $\zeta_0$, we will write $\mu ( \zeta_0 , \zeta_0' , A)$ to make this dependence clear. Since the magnitude of $\zeta_0'$ is $1$, it can be written in the form $e^{i \theta_0}$ for some $\theta_0 \in [0, 2 \pi )$. Therefore, using definition (\ref{muA_def}), we can write \begin{equation} \mu ( \zeta_0 , \zeta_0' , A ) = \frac{1}{2 \pi} \left[ e^{i ( \theta_0 - \pi/2)} ( \zeta_0 I - A )^{-1} + e^{-i ( \theta_0 - \pi /2)} \left( ( \zeta_0 I - A )^{-1} \right)^{*} \right] . \label{mu_expression} \end{equation} It follows that $\lambda_{min} ( \mu ( \zeta_0 , \zeta_0' , A ) )$ is $\frac{1}{\pi}$ times the smallest real part of points in $\mbox{cl} ( W ( e^{i ( \theta_0 - \pi / 2 )} ( \zeta_0 I - A )^{-1} ))$ (where $\mbox{cl}( \cdot )$ denotes the closure), and therefore $\lambda_{min} ( \mu ( \zeta_0 , \zeta_0' , A ) )$ is greater than or equal to $- \frac{1}{\pi}$ times the numerical radius of this matrix, $w( e^{i ( \theta_0 - \pi /2)} ( \zeta_0 I - A )^{-1} ) := \sup \{ |z| : z \in W( e^{i ( \theta_0 - \pi / 2 )} ( \zeta_0 I - A )^{-1} ) \}$. Equivalently, we can write \begin{equation} \lambda_{min} ( \mu ( \zeta_0 , \zeta_0' , A )) \geq - \frac{1}{\pi} w ( ( \zeta_0 I - A )^{-1} ) . \label{lambdaminlb} \end{equation} The following theorem is from \cite[Lemmas 5, 7, and 8]{CG}. Again letting $\zeta_0$ denote a point on $\partial \Omega$, the half-plane $\Pi_0 := \{ z \in \mathbb{C} : \mbox{Im}( \zeta_0' (\overline{\zeta_0} - \bar{z})) \geq 0 \}$ has the same outward normal as $\Omega$ at $\zeta_0$. For a disk about a point $\xi$ of radius $r$, the assumption $\zeta_0 - \xi = i r \zeta_0'$ in the theorem means that $\partial \Omega$ and the boundary of the disk are tangent at $\zeta_0$ and the outward normal to $\Omega$, $\zeta_0' / i$, is the same as the inward normal to the disk. 
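The lower bound (\ref{lambdaminlb}) is easy to test numerically. The following Python sketch is an illustration only: the $2 \times 2$ Jordan block, the boundary point $\zeta_0$, and the tangent direction are arbitrary choices of ours. It builds $\mu ( \zeta_0 , \zeta_0' , A )$ from (\ref{mu_expression}) and compares its smallest eigenvalue with $- \frac{1}{\pi}$ times the numerical radius of the resolvent:

```python
import numpy as np

def mu(zeta0, tangent, A):
    """Hermitian kernel mu(zeta_0, zeta_0', A) of (mu_expression):
    (1/2pi)[e^{i(theta_0 - pi/2)} R + e^{-i(theta_0 - pi/2)} R*],
    where R = (zeta_0 I - A)^{-1} and tangent = e^{i theta_0}."""
    n = A.shape[0]
    R = np.linalg.inv(zeta0 * np.eye(n) - A)
    phase = tangent / 1j                 # e^{i(theta_0 - pi/2)}
    return (phase * R + np.conj(phase) * R.conj().T) / (2 * np.pi)

def numerical_radius(B, nangles=720):
    """w(B) = sup over theta of lambda_max of the Hermitian part of e^{i theta} B."""
    thetas = np.linspace(0.0, 2.0 * np.pi, nangles, endpoint=False)
    return max(np.linalg.eigvalsh((np.exp(1j * t) * B
                + np.exp(-1j * t) * B.conj().T) / 2)[-1] for t in thetas)

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # test matrix with Sp(A) = {0}
zeta0 = 2.0 + 0.5j                       # a point away from the spectrum
tangent = np.exp(1j * np.pi / 3)         # an arbitrary unit tangent direction
M = mu(zeta0, tangent, A)
lam_min = np.linalg.eigvalsh(M)[0]
R = np.linalg.inv(zeta0 * np.eye(2) - A)
# the bound lambda_min(mu) >= -(1/pi) w(R) of (lambdaminlb)
assert lam_min >= -numerical_radius(R) / np.pi - 1e-12
```

The same computation also illustrates the bracketing of the numerical radius between half the norm and the norm of the resolvent.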
\begin{theorem} \label{thm:lambdamin} If $W(A) \subset \Pi_0$, then $\lambda_{min} ( \mu ( \zeta_0 , \zeta_0' , A )) \geq 0$, with equality if $\zeta_0 \in \partial W(A)$. If, for some $\xi \in \mathbb{C} \backslash \mbox{Sp} (A)$, $\zeta_0 - \xi = i r_1 \zeta_0'$, where $r_1 \leq 1/ \| (A - \xi I )^{-1} \|$, then $\lambda_{min} ( \mu ( \zeta_0 , \zeta_0' , A )) \geq - \frac{1}{2 \pi r_1}$. If $\zeta_0 - \xi = i r_2 \zeta_0'$, where $r_2 \leq 1/w((A - \xi I )^{-1} )$, then $\lambda_{min} ( \mu ( \zeta_0 , \zeta_0', A ) ) \geq - \frac{1}{\pi r_2}$. \end{theorem} Note that the interiors of the disks $\{ z \in \mathbb{C} : | z - \xi | < 1 / \| ( A - \xi I )^{-1} \| \}$ and $\{ z \in \mathbb{C} : | z - \xi | < 1/ w( (A - \xi I )^{-1} ) \}$ alluded to in the theorem contain no points in the spectrum of $A$ since $\| (A - \xi I )^{-1} \| \geq w( ( A - \xi I )^{-1} ) \geq | ( \lambda - \xi )^{-1} |$ for all $\lambda \in \mbox{Sp} (A)$; that is, the inverses of these quantities, which are the radii of the disks, are less than or equal to $| \lambda - \xi |$. Theorems \ref{thm:main} and \ref{thm:lambdamin} can be used together to obtain $K$ values for certain types of sets, such as the numerical range with a circular hole or cutout, where it will be shown in the next subsection that $K \leq 3 + 2 \sqrt{3}$. Unlike the $K$ value in (\ref{KCauchy}), this value, $3+2\sqrt{3}$, is a fixed constant independent of $A$ and does not require knowledge of the resolvent norm on the boundary of the set. However, one must have information about the resolvent at the point $\xi$ where the disk is to be removed in order to know how large a disk can be removed. The resolvent, $( A - \xi I )^{-1}$, now plays a role in the definition of the set, rather than in the constant $K$ as in (\ref{KCauchy}). For any set $\Omega$ containing the spectrum of $A$, Theorem \ref{thm:main} can be used directly to derive a value of $K$ that depends on $\lambda_{min} ( \mu ( \zeta , A ) )$. 
Since $\lambda_{min} ( \mu ( \zeta , A ) )$ is twice the minimum real part of all points in the closure of the numerical range of $( \zeta' / (2 \pi i) ) ( \zeta I - A )^{-1}$, it follows that $| \lambda_{min} ( \mu ( \zeta , A ) ) | \leq (1/ \pi ) w(( \zeta I - A )^{-1} ) \in [ (1/ (2 \pi )) \| ( \zeta I - A )^{-1} \| , (1/ \pi ) \| ( \zeta I - A )^{-1} \| ]$. If it turns out that $| \lambda_{min} ( \mu ( \zeta , A ) ) | \approx (1/ \pi ) w (( \zeta I - A )^{-1} )$ for $\zeta \in \partial \Omega$, then the value of $K$ in (\ref{KCauchy}) may be somewhat smaller than that from Theorem \ref{thm:main}, since the value in (\ref{KCauchy}) involves the integral of $( 1/(2 \pi )) \| ( \zeta I - A )^{-1} \|$. If $| \lambda_{min} ( \mu ( \zeta , A )) | \ll (1/ \pi ) w (( \zeta I - A )^{-1} )$ (as, for example, if $\Omega = W(A)$, where $\lambda_{min} ( \mu ( \zeta , A ) ) = 0$), then the $K$ value from Theorem \ref{thm:main} may be {\em much} smaller than that in (\ref{KCauchy}). \subsection{Example from \cite{CG}}\label{example} Using these results, it is shown in \cite{CG} that if $\Omega = \Omega_0 \backslash {\cal D} ( \xi , r )$, where $\Omega_0$ is a convex domain containing $\mbox{cl}(W(A))$ and ${\cal D} ( \xi , r )$ is the disk about a point $\xi \in \mathbb{C} \backslash \mbox{Sp}(A)$ of radius $r$, where $r \leq 1/w( (A - \xi I )^{-1})$, then $\Omega$ is a $(3 + 2 \sqrt{3} )$-spectral set for $A$. This assumes that $\partial {\cal D} ( \xi , r ) \subset \Omega_0$ or the number of intersection points of $\partial \Omega_0$ and $\partial {\cal D}( \xi , r )$ is finite. To bound $c_1$ in this case, suppose first that $\partial {\cal D} ( \xi , r ) \subset \Omega_0$. If $\zeta_0 \in \partial \Omega_0$, then as $\zeta (s)$ traverses $\partial \Omega_0$, the argument of $\zeta (s) - \zeta_0$ changes by $\pi$, as illustrated in Figure \ref{fig:regions}(a). 
As $\zeta (s)$ traverses $\partial {\cal D} ( \xi , r )$, the argument of $\zeta (s) - \zeta_0$ changes by $4 \arcsin ( r/ | \zeta_0 - \xi | ) < 2 \pi$, as illustrated in Figure \ref{fig:regions}(c). Thus, in this case, \[ \int_{\{ s : \zeta (s) \in \partial \Omega_0 \}} | \mu ( \zeta (s), \zeta_0 ) |\,ds = 1 ,~~ \int_{\{ s: \zeta (s) \in \partial {\cal D} ( \xi , r ) \}} | \mu ( \zeta (s), \zeta_0 ) |\,ds < 2 . \] [To simplify notation, throughout the rest of the paper we will write simply $\int_{\partial \Omega_j} \ldots\,ds$ in place of $\int_{\{ s: \zeta (s) \in \partial \Omega_j \}} \ldots\,ds$.] Now suppose $\zeta_0 \in \partial {\cal D} ( \xi , r )$. Then as $\zeta (s)$ traverses $\partial \Omega_0$, the argument of $\zeta (s) - \zeta_0$ changes by $2 \pi$, as illustrated in Figure \ref{fig:regions}(b), while as $\zeta (s)$ traverses $\partial {\cal D}( \xi , r )$, the argument of $\zeta (s) - \zeta_0$ changes by $\pi$, as illustrated in Figure \ref{fig:regions}(a). Thus, in this case, we have \[ \int_{\partial \Omega_0} | \mu ( \zeta (s), \zeta_0 ) |\,ds = 2 ,~~ \int_{\partial {\cal D} ( \xi , r )} | \mu ( \zeta (s), \zeta_0 ) |\,ds = 1 . \] It follows that for $\zeta_0$ anywhere on the boundary of $\Omega$, the change in argument of $\zeta (s) - \zeta_0$ as $\zeta (s)$ traverses $\partial \Omega$ is at most $3 \pi$; that is, $c_1 \leq 3$. If, instead, the disk ${\cal D} ( \xi , r )$ intersects $\partial \Omega_0$ as in Figure \ref{fig:regions}(d), then it is clear that the total variation in the argument of $\zeta (s) - \zeta_0$ as $\zeta(s)$ traverses $\partial \Omega$ is smaller and thus $c_1$ is again bounded by $3$. To bound $c_2$, let $\Gamma_0 = \partial \Omega_0 \backslash \mbox{cl}( {\cal D} ( \xi , r ))$ and let $\Gamma_1 = \partial {\cal D} ( \xi , r ) \cap \mbox{cl}( \Omega_0 )$, so that $\partial \Omega = \Gamma_0 \cup \Gamma_1$. 
Let $f$ be a function with $\| f \|_{\Omega} \leq 1$ and write $S(f,A) = S_0 + S_1 + S_2$, where \[ S_0 = \int_{\Gamma_0} f( \zeta (s)) \mu ( \zeta (s),A)\,ds,~~ S_1 = \int_{\Gamma_1} f( \zeta (s)) \left( \mu ( \zeta (s),A ) + \frac{1}{\pi r} I \right)\,ds ,~~ S_2 = - \frac{1}{\pi r} \int_{\Gamma_1} f( \zeta (s) ) I\,ds . \] It follows from Theorem \ref{thm:lambdamin} that for $\zeta \in \partial \Omega_0$, $\mu ( \zeta , A )$ is PSD. Since adding PSD operators to a PSD operator does not decrease the norm, we can extend the integral over $\Gamma_0$ to an integral over the entire boundary $\partial \Omega_0$ to obtain: \[ \| S_0 \| \leq \left\| \int_{\partial \Omega_0} \mu ( \zeta (s),A)\,ds \right\| = \| 2I \| = 2 . \] If $\zeta \in \partial {\cal D} ( \xi , r)$, since $r \leq 1/w((A- \xi I )^{-1})$, Theorem \ref{thm:lambdamin} shows that $\mu ( \zeta ,A) + \frac{1}{\pi r} I$ is PSD, and hence \[ \| S_1 \| \leq \left\| \int_{\Gamma_1} \left( \mu ( \zeta (s),A) + \frac{1}{\pi r} I \right) \,ds \right\| \leq \left\| \int_{\partial {\cal D} ( \xi , r )} \left( \mu ( \zeta (s),A) + \frac{1}{\pi r} I \right)\,ds \right\| = \frac{1}{\pi r} \int_{\partial {\cal D} ( \xi , r )} ds = 2. \] Here we have used the fact that the spectrum of $A$ lies outside ${\cal D} ( \xi , r)$ and hence $\int_{\partial {\cal D} ( \xi , r )} \mu ( \zeta (s),A)\,ds = 0$. It is clear that $\| S_2 \| \leq 2$, since the length of $\Gamma_1$ is less than or equal to the length of $\partial {\cal D} ( \xi , r )$, which is $2 \pi r$. Thus $\| S(f,A) \| \leq 6$ and $c_2 \leq 3$. Applying Theorem \ref{thm:main} with $c_1 = c_2 = 3$ yields the result from \cite{CG} that $\Omega$ is a $(3 + 2 \sqrt{3})$-spectral set for $A$. \section{Some Simple Extensions} \label{extensions} The arguments in section \ref{example} can be extended in some simple ways. 
Suppose, for example, that $\Omega = \Omega_0 \backslash {\cal D} ( \xi , r )$ where $\Omega_0$ and ${\cal D} ( \xi , r )$ are as in section \ref{example}, but where the intersection of $\Omega_0$ and ${\cal D}( \xi , r )$ is at most a half-disk, as pictured in Figure \ref{fig:regions}(d). The greatest variation in the argument of $\zeta (s) - \zeta_0$ is attained when $\zeta_0$ is in the position of the asterisk in the figure. The argument of $\zeta (s) - \zeta_0$ can then change by as much as $\pi / 2$ as $\zeta (s)$ traverses $\Gamma_1$. It changes by the same amount as $\zeta (s)$ moves along $\Gamma_0$ to the point where the argument of $\zeta (s) - \zeta_0$ matches $\zeta_0'$ or $- \zeta_0'$, with a change of $\pi$ in between. The total change could therefore be as large as $2 \pi$. It follows that in this case, for any $\zeta_0$ on $\partial \Omega$, \[ \int_{\partial \Omega_0} | \mu ( \zeta (s), \zeta_0 ) |\,ds \leq 2 , \] and therefore $c_1 \leq 2$ when at most a half-disk is removed from $\Omega_0$. Using the same definitions of $S_0$, $S_1$, and $S_2$ as in section \ref{example}, we now observe that the length of $\Gamma_1$ is at most $\pi r$ instead of $2 \pi r$, so that $\| S_2 \| \leq 1$, leading to the estimate $\| S(f,A) \| \leq 5$ and $c_2 \leq 5/2$. Using these values of $c_1$ and $c_2$ in Theorem \ref{thm:main} leads to the result that $\Omega$ is a $( 2.5 + \sqrt{8.25} )$-spectral set for $A$. If the radius $r$ of the disk removed from $\Omega_0$ satisfies $r \leq 1/ \| ( A - \xi I )^{-1} \|$, then from Theorem \ref{thm:lambdamin}, it follows that $\lambda_{min} ( \mu ( \zeta_0 , A )) \geq - \frac{1}{2 \pi r}$. In this case, we can replace $S(f,A) = S_0 + S_1 + S_2$ by $S(f,A) = S_0 + \tilde{S}_1 + \tilde{S}_2$, where \[ \tilde{S}_1 = \int_{\Gamma_1} f( \zeta (s)) \left( \mu ( \zeta (s),A ) + \frac{1}{2 \pi r} I \right)\,ds ,~~ \tilde{S}_2 = - \frac{1}{2 \pi r} \int_{\Gamma_1} f( \zeta (s)) I\,ds . 
\] Now \[ \| \tilde{S}_1 \| \leq \left\| \int_{\Gamma_1} \left( \mu ( \zeta(s),A ) + \frac{1}{2 \pi r} I \right)\,ds \right\| \leq \frac{1}{2 \pi r} \int_{\partial {\cal D} ( \xi , r)} ds = 1 , \] and $\| \tilde{S}_2 \| \leq 1$. With $c_1 = 3$ and $c_2 = 2$, it follows from Theorem \ref{thm:main} that $\Omega$ is a $(2 + \sqrt{7})$-spectral set, and if the intersection of $\Omega_0$ and ${\cal D} ( \xi , r )$ is at most a half-disk, then with $c_1 = 2$, and $\| \tilde{S}_2 \| \leq 1/2$, we can take $c_2 = 7/4$, and then it follows from Theorem \ref{thm:main} that this is a $4$-spectral set for $A$. \subsection{Removing More Disks} \label{disks} The techniques of section \ref{example} can be used to bound $K$ when multiple disks are removed from $\Omega_0 \supset \mbox{cl}(W(A))$. Consider the simplest case, where the disks ${\cal D}_1 ( \xi_1 , r_1 ), \ldots , {\cal D}_m ( \xi_m , r_m )$ do not overlap and lie entirely inside $\Omega_0$. For $\zeta_0 \in \partial \Omega_0$, the total variation in $\arg ( \zeta (s) - \zeta_0 )$ becomes \[ \pi + 4 \sum_{j=1}^{m} \arcsin \left( \frac{r_j}{| \zeta_0 - \xi_j |} \right) \leq \pi + 2 m \pi . \] If $\zeta_0$ lies on $\partial {\cal D}_i$, then the change in $\arg ( \zeta (s) - \zeta_0 )$ is $2 \pi$ as $\zeta (s)$ traverses $\partial \Omega_0$ and $\pi$ as $\zeta (s)$ traverses $\partial {\cal D}_i$. The total change is \[ 3 \pi + 4 \sum_{\stackrel{j=1}{j \neq i}}^{m} \arcsin \left( \frac{r_j}{| \zeta_0 - \xi_j |} \right) \leq 3 \pi + 2(m-1) \pi . \] In either case, the total variation of $\arg ( \zeta (s) - \zeta_0 )$ is at most $(2m+1) \pi$, so that $c_1 \leq 2m+1$. 
To bound $c_2$, write $S(f,A) = S_0 + \sum_{j=1}^m S_j + \sum_{j=1}^m S_{m+j}$, where \[ S_0 = \int_{\partial \Omega_0} f( \zeta (s)) \mu ( \zeta(s), A)\,ds ,~~ S_j = \int_{\partial {\cal D}_j} f( \zeta (s)) \left( \mu ( \zeta (s),A ) + \frac{p_j}{ 2 \pi r_j} I \right)\,ds , \] \[ S_{m+j} = - \frac{p_j}{ 2 \pi r_j} \int_{\partial {\cal D}_j} f( \zeta (s)) I\,ds ,~~ j=1, \ldots , m , \] where $p_j = 1$ if $r_j = 1/ \| ( A - \xi_j I )^{-1} \|$ and $p_j = 2$ if $r_j = 1/ w (( A - \xi_j I )^{-1} )$. Then \[ \| S_0 \| \leq 2 ,~~ \| S_j \| \leq p_j ,~~\| S_{m+j} \| \leq p_j ,~~ j=1, \ldots , m . \] It follows that \[ \| S(f,A) \| \leq 2 + 2 \sum_{j=1}^m p_j , \] and $c_2 \leq 1 + \sum_{j=1}^m p_j$. Assuming, for simplicity, that each $p_j$ is the same, say, $p_j = p$, and applying Theorem \ref{thm:main} with $c_1 = 2m+1$ and $c_2 = 1 + mp$, we find that $\Omega$ is a \begin{equation} \left( 1 + mp + \sqrt{(1 + mp )^2 + 2m+1 } \right) \label{mdisks} \end{equation} spectral set for $A$. This bound on $K$ holds for other configurations where disks overlap or only partially intersect with $\Omega_0$, although better bounds on $c_1$ and/or $c_2$ may be attainable by considering each geometry individually. \subsection{Other $K$-Spectral Sets} \label{OtherKSpectral} In the previous subsection, we made use of Theorem \ref{thm:lambdamin} to derive values of $K$ that are independent of the operator $A$ for special types of regions $\Omega$ (that {\em do} depend on $A$). For a given operator $A$ and region $\Omega$ containing the spectrum of $A$, one can use Theorem \ref{thm:main} directly to derive $K$ values, but in most cases, these values will have to be computed numerically. A bound on the parameter $c_1$ depends only on the geometry of $\Omega$, while $c_2$ can be bounded using computed values of $\lambda_{min} ( \mu ( \zeta (s) , A)) $. 
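Whether the bounds on $c_1$ and $c_2$ come in closed form or from a computation, Theorem \ref{thm:main} converts them to a $K$ value. A short Python sketch (the function names are ours) encodes this conversion and uses the closed-form multiple-disk case (\ref{mdisks}) as a consistency check against the constants derived earlier:

```python
import math

def K_from_c1_c2(c1, c2):
    """K value of Theorem thm:main: K = c2 + sqrt(c2^2 + c1)."""
    return c2 + math.sqrt(c2 ** 2 + c1)

def K_mdisks(m, p):
    """Closed-form case (mdisks): m removed disks with equal penalty p
    (p = 1 for radius 1/||(A - xi I)^{-1}||, p = 2 for 1/w((A - xi I)^{-1})),
    using c_1 <= 2m + 1 and c_2 <= 1 + mp."""
    return K_from_c1_c2(2 * m + 1, 1 + m * p)

# one disk with p = 2 recovers the (3 + 2 sqrt 3) constant;
# one disk with p = 1 recovers 2 + sqrt 7.
assert math.isclose(K_mdisks(1, 2), 3 + 2 * math.sqrt(3))
assert math.isclose(K_mdisks(1, 1), 2 + math.sqrt(7))
```

The two assertions reproduce the single-disk constants of the preceding sections, as they must, since (\ref{mdisks}) was derived by the same argument.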
For continuous time systems of differential equations, the solution to the initial value problem $y' (t) = A y(t)$, $t > 0$, is $y(t) = e^{tA} y(0)$. If the spectrum of $A$ lies in the open left half-plane, then $\lim_{t \rightarrow \infty} y(t) = 0$, but if $W(A)$ extends into the right half-plane, then initially $\| y(t) \|$ may grow at a rate determined by the numerical abscissa of $A$: $\alpha (A) := \sup \{ \mbox{Re}(z) : z \in W(A) \}$. If the left half-plane, or the part of $W(A)$ that lies in the left half-plane, is a $K$-spectral set for $A$, however, then $K$ is an upper bound on the growth of $\| y(t) \|$. When dealing with powers of a matrix or operator $A$, it is well known that if the spectrum of $A$ lies inside the open unit disk, then $A^k \rightarrow 0$ as $k \rightarrow \infty$. If the numerical range of $A$ extends outside the unit disk, however, then $\| A^k \| \leq 2 w(A )^k$ may grow with $k$ before asymptotically decreasing to $0$. If the unit disk, or the intersection of $W(A)$ with the unit disk, is a $K$-spectral set for $A$, however, then the growth of $\| A^k \|$ is limited to a factor of $K$. In either of these cases, the set $\Omega = W(A) \cap \mbox{(left half-plane)}$ or $\Omega = W(A) \cap \mbox{(unit disk)}$ is convex, so $c_1 = 1$. To bound $c_2$, let $\Gamma_0$ denote the part of $\partial W(A)$ that is retained as part of $\partial \Omega$ and let $\Gamma_1$ denote the line segment or circular arc resulting from the intersection of $W(A)$ with the imaginary axis or the unit circle. Then $\partial \Omega = \Gamma_0 \cup \Gamma_1$. For $f \in {\cal A} ( \Omega )$ with $\| f \|_{\Omega} \leq 1$, define \[ S_0 = \int_{\Gamma_0} f( \zeta (s)) \mu ( \zeta (s), A)\,ds ,~~ S_1 = \int_{\Gamma_1} f( \zeta (s)) ( \mu ( \zeta (s),A) + \gamma (s) I )\,ds ,~~ S_2 = - \int_{\Gamma_1} f( \zeta (s)) \gamma (s) I\,ds , \] where $\gamma (s) \geq - \lambda_{min} ( \mu ( \zeta (s), A))$. 
Proceeding as in section \ref{example}, since $\mu ( \zeta (s) , A )$ is PSD for $\zeta (s) \in \partial W(A)$, we can write \[ \| S_0 \| \leq \left\| \int_{\Gamma_0} \mu ( \zeta (s),A)\,ds \right\| \leq \left\| \int_{\partial W(A)} \mu ( \zeta (s),A )\,ds \right\| = \| 2I \| = 2 . \] Similarly, since $\mu ( \zeta (s),A) + \gamma (s) I$ is PSD on $\Gamma_1$ and $\mu ( \zeta (s),A)$ is PSD on $\partial W(A)$, if we let $\Gamma_2$ denote the part of $\partial W(A)$ that was discarded and define $\gamma (s)$ to be $0$ on $\Gamma_2$, then we have \[ \| S_1 \| \leq \left\| \int_{\Gamma_1} ( \mu ( \zeta (s),A) + \gamma (s) I )\,ds \right\| \leq \left\| \int_{\Gamma_1 \cup \Gamma_2} ( \mu ( \zeta (s),A) + \gamma (s) I )\,ds \right\| = \left| \int_{\Gamma_1 \cup \Gamma_2} \gamma (s)\,ds \right| = \int_{\Gamma_1} | \gamma (s) |\,ds . \] Finally, we can write \[ \| S_2 \| \leq \int_{\Gamma_1} | \gamma (s) |\,ds . \] Since $S(f,A) = S_0 + S_1 + S_2$, it follows that $\| S(f,A) \| \leq 2 + 2 \int_{\Gamma_1} | \gamma (s) |\,ds$ and therefore \begin{equation} c_2 \leq 1 + \int_{\Gamma_1} | \gamma(s) |\,ds . \label{c2formula} \end{equation} In general, suppose a set $\Omega$ consists of $m$ disjoint regions $\Omega_1 , \ldots , \Omega_m$ with boundaries $\Gamma_1 , \ldots , \Gamma_m$. An example might be the $\epsilon$-pseudospectrum of $A$: \[ \Lambda_{\epsilon} (A) := \{ z \in \mathbb{C} : \| (zI-A )^{-1} \| > \epsilon^{-1} \} . \] For this set, the value (\ref{KCauchy}) is easy to compute: \[ K = \frac{{\cal L}( \partial \Lambda_{\epsilon} )}{2 \pi \epsilon} , \] where ${\cal L} ( \cdot )$ denotes the length of the curve. In this case, it may be difficult to come up with an analytic expression for the bound (\ref{c1_bound}) on $c_1$. 
This bound can be estimated numerically (to any desired accuracy), however, by first discretizing $\partial \Lambda_{\epsilon} (A)$, then considering each discretization point as a possible value for $\zeta_0$ in (\ref{c1_bound}), determining the total variation of the argument of $\zeta (s) - \zeta_0$ as $\zeta (s)$ traverses the discretized $\partial \Lambda_{\epsilon} (A)$, and finally taking $c_1$ to be $\frac{1}{\pi}$ times the maximum value of this total variation. To compute a bound on $c_2$, let $f$ be any rational function with $\| f \|_{\Lambda_{\epsilon} (A)} \leq 1$, and write $S(f,A) = S_1 + S_2$, where \[ S_1 = \int_{\cup_j \Gamma_j} f( \zeta (s) ) ( \mu ( \zeta (s),A ) + \gamma (s) I )\,ds ,~~ S_2 = - \int_{\cup_j \Gamma_j} f( \zeta (s)) \gamma (s) I\,ds . \] Taking $\gamma (s)$ to be greater than or equal to $- \lambda_{min} ( \mu ( \zeta (s), A) )$, so that $\mu ( \zeta (s),A) + \gamma (s) I$ is PSD, we can write \[ \| S_1 \| \leq \left\| \int_{\cup_j \Gamma_j} ( \mu ( \zeta (s),A) + \gamma (s) I )\,ds \right\| \leq 2 + \left\| \int_{\cup_j \Gamma_j} \gamma (s) I\,ds \right\| \leq 2 + \int_{\cup_j \Gamma_j} | \gamma (s) |\,ds , \] and similarly, \[ \| S_2 \| \leq \int_{\cup_j \Gamma_j} | \gamma (s) |\,ds . \] In this case, $\| S(f,A) \| \leq 2 + 2 \int_{\cup_j \Gamma_j} | \gamma (s) |\,ds$ and therefore \[ c_2 \leq 1 + \int_{\cup_j \Gamma_j} | \gamma (s) |\,ds . \] \section{Applications} \label{applications} Throughout this section and the next, we will always assume that the space $H$ in which we are working is Euclidean space and the norm of interest is the 2-norm, which will be denoted as $\| \cdot \|_2$. 
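Several of the computations in this section and the next require the boundary of $W(A)$. The following is a minimal sketch (our own hypothetical example, in Python rather than the MATLAB tools used for the figures) of the standard support-function computation: for each angle $t$, the top eigenvector $v$ of $\mathrm{Re}(e^{-it}A)$ yields the boundary point $v^{*} A v$.

```python
import numpy as np

def numrange_boundary(A, m=360):
    """Boundary points of W(A): for each angle t, the eigenvector of
    Re(e^{-it} A) for its largest eigenvalue gives a boundary point v* A v."""
    pts = []
    for t in np.linspace(0, 2 * np.pi, m, endpoint=False):
        B = np.exp(-1j * t) * A
        H = (B + B.conj().T) / 2
        w, V = np.linalg.eigh(H)        # ascending eigenvalues
        v = V[:, -1]                    # eigenvector of the largest one
        pts.append(v.conj() @ A @ v)
    return np.array(pts)

# Hypothetical block diagonal (here normal) example with blocks
# diag(-1, 1) and diag(7, 9): W(A) is the convex hull of W(A11) and
# W(A22), i.e. the real segment [-1, 9].
A = np.diag([-1.0, 1.0, 7.0, 9.0]).astype(complex)
pts = numrange_boundary(A)
print(pts.real.min(), pts.real.max())   # ~ -1 and ~ 9

# A point of W(A) lying in neither W(A11) = [-1,1] nor W(A22) = [7,9]:
v = np.array([1.0, 0.0, 1.0, 0.0]) / np.sqrt(2)
print((v @ A @ v).real)                 # ~3.0, in the "gap" (1, 7)
```

This illustrates the point made below: $W(A)$ is the convex hull of the blocks' numerical ranges and may be strictly larger than their union.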
\subsection{Block Diagonal Matrices} \label{blockd} If $A$ is a block diagonal matrix, say, \[ A = \left[ \begin{array}{cc} A_{11} & 0 \\ 0 & A_{22} \end{array} \right] , \] then since \[ f(A) = \left[ \begin{array}{cc} f( A_{11} ) & 0 \\ 0 & f( A_{22} ) \end{array} \right] , \] it is clear that $\| f(A) \|_2$ can be bounded based on the size of $f$ on $W( A_{11} ) \cup W( A_{22} )$. Yet $W(A)$ is a possibly larger set: the convex hull of $W( A_{11} ) \cup W( A_{22} )$. Of course, if one knew that $A$ was block diagonal, then one could take advantage of this property, but the same observation holds when $A$ is unitarily similar to a block diagonal matrix, and then it is an NP-hard problem to identify the blocks \cite{Gu1995}. Instead, one might start with $W(A)$ and try to remove one or more disks that would cut the region into disjoint pieces corresponding to the blocks of $A$. An example is illustrated in Figure \ref{fig:blockdiag}. For this matrix, $A_{11}$ was a real random $4$ by $4$ matrix and $A_{22}$ was equal to $8I$ plus a real random $4$ by $4$ matrix, where the random matrix entries were drawn from a standard normal distribution. The disk removed was centered at $\xi = 3.5$ and had radius $1/w( ( \xi I - A )^{-1} )$. According to the results of section \ref{example}, the remaining region (outlined with a thick black line in the figure) is a $(3 + 2 \sqrt{3})$-spectral set for $A$. For comparison, if one evaluates the resolvent norm integral in (\ref{KCauchy}) over the boundary of this set, one obtains the slightly larger value of $8.01$. Also shown in red in the figure are the numerical ranges of each block. \begin{figure}[ht] \centering \includegraphics[width = 3 in]{blockA.eps} \caption{Eigenvalues and numerical range of a block diagonal matrix cut into two pieces by removing a disk about $\xi = 3.5$ of radius $1/w ( ( A - \xi I )^{-1} )$. Resulting region is outlined in black; numerical ranges of the blocks are shown in red.
} \label{fig:blockdiag} \end{figure} For a matrix with more diagonal blocks, one could remove more disks from $W(A)$ and obtain a $K$-spectral set with three or more disjoint regions, where $K$ is bounded by expression \eqref{mdisks}. In other cases, a single disk may not be wide enough to split the numerical range into disjoint pieces. Then multiple disks could be removed, and $K$ would again be bounded by expression (\ref{mdisks}). A better bound might be obtained by using Theorem \ref{thm:main} directly and numerically determining bounds on $c_1$ and $c_2$, as described in section \ref{OtherKSpectral}. Figures \ref{fig:block1} and \ref{fig:block0} show additional illustrations, along with the $K$ value obtained from formula (\ref{mdisks}) and one computed directly from Theorem \ref{thm:main}. \begin{figure}[h!] \centering \includegraphics[width = 3in]{block1_paper.eps} \caption{$A$ is a block diagonal matrix with three blocks. Each block is the sum of a multiple of the identity and a real random matrix $R$ with entries from a standard normal distribution. Block $A_{11} = -20I + R_1$ is $10$ by $10$, block $A_{22} = R_2$ is $5$ by $5$, and block $A_{33} = 20I + R_3$ is $10$ by $10$. The disks removed had radii $1/ \| ( \xi_{1,2} I - A )^{-1} \|_2$, where $\xi_1 = -9.5$ and $\xi_2 = 10$. Based on formula (\ref{mdisks}), the remaining region is a $K$-spectral set with $K = 3 + \sqrt{14} \approx 6.74$, and using Theorem \ref{thm:main} directly, as described in section \ref{OtherKSpectral}, we computed $c_1 \leq 2.60$, $c_2 \leq 1.78$, and $K = 4.19$. Using formula (\ref{KCauchy}), the value of $K$ was computed to be $11.88$.} \label{fig:block1} \end{figure} \begin{figure}[h!] \centering \includegraphics[width = 3 in]{block0_paper.eps} \caption{$A$ is a block diagonal matrix with two blocks. Each block is the sum of a multiple of the identity and a real random matrix $R$ with entries from a standard normal distribution.
Block $A_{11} = -5I + R_1$ is $10$ by $10$, and block $A_{22} = (10+5i)I + R_2$ is $10$ by $10$. Two disks of radius $1/ \| ( \xi_{1,2} I - A )^{-1} \|_2$, where $\xi_1 = 4 + 1.5i$ and $\xi_2 = 3+4i$, were needed to split the numerical range of $A$ into two disjoint sets. Based on formula (\ref{mdisks}), the remaining region is a $K$-spectral set with $K = 3 + \sqrt{14} \approx 6.74$, and using Theorem \ref{thm:main} directly, as described in section \ref{OtherKSpectral}, we computed $c_1 \leq 3.20$, $c_2 \leq 1.73$, and $K = 4.21$. Using formula (\ref{KCauchy}), the value of $K$ was computed to be $7.94$.} \label{fig:block0} \end{figure} \subsection{Bounding Solutions to the Initial Value Problem} \label{ds} The results from section \ref{OtherKSpectral} can be used to bound the solutions to both continuous and discrete time dynamical systems, assuming that the spectrum of $A$ lies in the left half-plane or the unit disk, respectively, by determining a $K$ value for the set $\Omega$ equal to the intersection of $W(A)$ with the left half-plane or the unit disk. In this case, since $\Omega$ is simply connected, one {\em may} be able to find the {\em optimal} $K$ value numerically. If $A$ is an $n$ by $n$ matrix, then the function $f$ with $\| f \|_{\Omega} = 1$ that maximizes $\| f(A) \|$ is known to be of the form $B \circ \varphi$, where $\varphi$ is any conformal mapping from $\Omega$ to the unit disk and $B$ is a finite Blaschke product of degree at most $n-1$: \[ B (z) = \prod_{j=1}^{n-1} \frac{z - \alpha_j}{1 - \bar{\alpha}_j z} ,~~ | \alpha_j | \leq 1 . \] We use the Kerzman-Stein procedure \cite{KS,KT} as implemented in \verb+chebfun+ \cite{chebfun} to conformally map $\Omega$ to the unit disk. We then try many different initial guesses for the roots $\alpha_j$ of $B$ and use the optimization code \verb+fmincon+ in MATLAB to search for roots that maximize $\| B( \varphi (A)) \|_2$.
We can check a number of conditions that are known to hold for the optimal Blaschke product $B$ to give us some confidence that we have indeed found the global maximum. See \cite{BGGRSW} for details. Still, these conditions are not sufficient to guarantee a global maximum, but at least the maximum value of $\| B( \varphi (A) ) \|_2$ returned by the optimization code is a {\em lower bound} on the optimal $K$ value for the region $\Omega$. As an example, the left plot in Figure \ref{fig:tuesday} shows the behavior of $\| e^{tA} \|_2$ for a matrix $A$ from \cite{NC} that models the ecosystem of Tuesday Lake in Wisconsin after the introduction of piscivorous largemouth bass. The plot shows initial growth and then decay of the phosphorus turnover rate. The right plot in the figure shows the eigenvalues and numerical range of the matrix and the part of the numerical range in the left half-plane. In this case we found, by integrating $| \lambda_{min} ( \mu ( \zeta (s),A )) |$ along the segment of the imaginary axis inside $W(A)$ and using Theorem \ref{thm:main}, that $K$ could be bounded by $2.66$, while formula (\ref{KCauchy}) gave the slightly larger value $K = 3.72$. Based on results from our optimization code, we believe that the optimal value of $K$ for this region is $1.95$, and, as noted earlier, this is at least a lower bound on $K$. In this case the different bounds on $K$ are all very close and somewhat larger than the maximum value of $\| e^{tA} \|_2$, $t > 0$. \begin{figure}[ht] \centerline{\epsfig{file=tuesday.eps,width=4in}} \caption{Matrix modeling the ecosystem in Tuesday Lake after introducing piscivores \cite{NC}. Left plot shows $\| e^{tA} \|_2$ growing before decaying; right plot shows $W(A)$ extending into the right half-plane (dashed curve) and eigenvalues ($x$'s) in the left half-plane.} \label{fig:tuesday} \end{figure} As another example, we consider the matrix \texttt{transient\_demo(20)} available in the eigtool package \cite{eigtool}.
The upper left plot in Figure \ref{fig:transient20} shows the behavior of $\| e^{tA} \|_2$, $t > 0$, which grows to about $16.61$ before starting to decrease. The upper right plot shows the eigenvalues, in the left half-plane, and the numerical range, extending into the right half-plane, together with the region $\Omega$ consisting of the part of $W(A)$ in the left half-plane. Integrating $| \lambda_{min} ( \mu ( \zeta (s),A )) |$ along the segment of the imaginary axis forming the right boundary of $\Omega$ and using Theorem \ref{thm:main}, we determined that $K = c_2 + \sqrt{ c_2^2 + c_1 } \approx 2 c_2 = 40.13$. In this case, formula (\ref{KCauchy}) gave a smaller value, $K = 27.95$. The reason for this smaller value can be seen in the lower plots of Figure \ref{fig:transient20}. The large values of $| \lambda_{min} ( \mu ( \zeta (s),A ) ) |$ and of $ \frac{1}{2 \pi} \| ( \zeta I - A )^{-1} \|_2$ occur on the segment of the imaginary axis, and, while $| \lambda_{min} ( \mu ( \zeta (s),A )) |$ is always less than or equal to $\frac{1}{2 \pi} \| ( \zeta I - A )^{-1} \|_2$, the difference is small. Since the value of $K$ from Theorem \ref{thm:main} is approximately equal to $2 c_2$, which is approximately twice the integral of $| \lambda_{min} ( \mu ( \zeta (s),A ) ) |$ over this segment, and the value of $K$ from (\ref{KCauchy}) is the integral of $\frac{1}{2 \pi} \| ( \zeta I - A )^{-1} \|_2$ over this segment (and over the remainder of $\partial \Omega$, where $\| ( \zeta I - A )^{-1} \|_2$ is much smaller), the result is a smaller value of $K$ from formula (\ref{KCauchy}). The lower right plot shows why $| \lambda_{min} ( \mu ( \zeta ,A )) |$ might be almost as large as $\frac{1}{2 \pi} \| ( \zeta I - A )^{-1} \|_2$. It shows the numerical ranges of several of the matrices $\frac{\zeta'}{2 \pi i} ( \zeta I - A )^{-1} = \frac{1}{2 \pi} ( \zeta I - A )^{-1}$ for $\zeta$ on this segment of the imaginary axis.
While the smaller numerical ranges lie mostly in the right half-plane, for the larger ones, the absolute value of the real part of the leftmost point in these numerical ranges (which is $\frac{1}{2} | \lambda_{min} ( \mu ( \zeta , A ) ) |$) is almost as large as the numerical radius. We will later see why this might be expected when $\zeta$ is close to an ill-conditioned eigenvalue. In this example, our optimization code found a function $B \circ \varphi$ for which $\| B( \varphi (A) ) \|_2 = 21.54$, and we believe that this is the optimal value of $K$ for this set $\Omega$. \begin{figure}[ht] \centerline{\epsfig{file=transientlhp.eps,width=4in}} \caption{Matrix from the eigtool command \texttt{transient\_demo(20)} \cite{eigtool}. Upper left shows $\| e^{tA} \|_2$ growing before decaying; upper right shows $W(A)$ extending into the right half-plane (dashed curve) and eigenvalues of $A$ (x's) in the left half-plane. Lower left shows $| \lambda_{\min}(\mu(\zeta ,A)) |$ and $\frac{1}{2 \pi} \| ( \zeta I - A )^{-1} \|_2$ for $\zeta$ on the segment of the imaginary axis forming the right boundary of $\Omega$. Lower right shows numerical ranges of several of the matrices $\frac{1}{2 \pi} ( \zeta I - A )^{-1}$ for $\zeta$ on this segment of the imaginary axis; for the larger numerical ranges, the absolute value of the minimal real part, which is $\frac{1}{2} | \lambda_{min}( \mu ( \zeta , A ) ) |$, is almost as large as the numerical radius, explaining why $| \lambda_{min} ( \mu ( \zeta , A ) ) |$ is of the same order of magnitude as $\frac{1}{2 \pi} \|(\zeta I-A)^{-1}\|_2$.} \label{fig:transient20} \end{figure} Using the same matrix, \texttt{transient\_demo(20)}, we computed norms of powers of $A$ and found that they grew to about $20.72$ before starting to decrease, as shown in the upper left plot of Figure \ref{fig:transientpowers}.
The upper right plot shows the numerical range of the matrix, which extends beyond $\mathcal D(0,1)$, and the eigenvalues which all lie within $\mathcal D(0,1)$. If we take $\Omega$ to be $W(A) \cap \mathcal D(0,1)$, whose boundary is the wide solid line in the upper-right plot, then we can use Theorem \ref{thm:main} to calculate a value of $K$ for which $\Omega$ is a $K$-spectral set. Integrating $| \lambda_{min} ( \mu ( \zeta (s),A )) |$ along the arc of the unit circle inside $W(A)$, we determined that $K = c_2 + \sqrt{ c_2^2 + c_1 } = 70.44$. Again in this case, formula (\ref{KCauchy}) gave a smaller value, $K = 36.03$. The reason can be seen in the lower plots of Figure \ref{fig:transientpowers}. The large values of $| \lambda_{min} ( \mu ( \zeta ,A ) ) |$ and of $ \frac{1}{2 \pi} \| ( \zeta I - A )^{-1} \|_2$ occur on the arc of the unit circle inside $W(A)$, as shown in the lower left plot. In this case, $| \lambda_{min} ( \mu ( \zeta (s),A )) |$ is greater than $\frac{1}{2 \pi} \| ( \zeta I - A )^{-1} \|_2$. The lower right plot shows why $| \lambda_{min} ( \mu ( \zeta ,A )) |$ might be larger than $\frac{1}{2 \pi} \| ( \zeta I - A )^{-1} \|_2$. It shows the numerical ranges of several of the matrices $\frac{\zeta'}{2 \pi i} ( \zeta I - A )^{-1} = \frac{e^{i \theta}}{2 \pi} ( \zeta I - A )^{-1}$ for $\zeta = e^{i \theta}$ on this arc of the unit circle. For the larger numerical ranges, the absolute value of the real part of the leftmost point in these numerical ranges (which is $\frac{1}{2} | \lambda_{min} ( \mu ( \zeta , A ) ) |$) is almost as large as the numerical radius. Again, we will give a partial explanation for this in the last section. In this example, our optimization code found a function $B \circ \varphi$ for which $\| B( \varphi (A) ) \|_2 = 21.06$, and we believe that this is the optimal value of $K$ for this set $\Omega$.
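The qualitative power behavior just described can be reproduced with a toy matrix of our own (a hypothetical stand-in, not \texttt{transient\_demo(20)}): the spectral radius is $0.5 < 1$, so $A^k \rightarrow 0$, but $W(A)$ is the disk ${\cal D}(0.5, 2.5)$, which extends well outside the unit disk, so $\| A^k \|_2$ grows before it decays. Here $w(A) = 3$, so the bound $\| A^k \|_2 \leq 2\, w(A)^k$ from the beginning of this section applies.

```python
import numpy as np

# Spectral radius 0.5 < 1, but W(A) = D(0.5, 2.5), so w(A) = 3 > 1.
A = np.array([[0.5, 5.0],
              [0.0, 0.5]])

norms = [np.linalg.norm(np.linalg.matrix_power(A, k), 2)
         for k in range(25)]
print(norms[0], max(norms), norms[-1])  # 1.0, ~5.05, then decay to ~1e-5
```

The norms grow to about $5$ at $k = 1$ and then decay geometrically, always staying below $2 \cdot 3^k$.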
\begin{figure}[h] \centering \includegraphics[width=4in]{transientpowers.eps} \caption{Matrix from the eigtool command \texttt{transient\_demo(20)} \cite{eigtool}. Upper left shows $\|A^k\|_2$ growing before decaying; upper right shows $W(A)$ extending beyond $\mathcal D(0,1)$ (dashed curve) and eigenvalues of $A$ (x's) in the unit disk. Lower left shows $| \lambda_{\min}(\mu(\zeta ,A)) |$ and $\frac{1}{2 \pi} \| ( \zeta I - A )^{-1} \|_2$ for $\zeta$ on the arc of the unit circle inside $W(A)$. Lower right shows numerical ranges of several of the matrices $\frac{e^{i \theta}}{2 \pi} ( \zeta I - A )^{-1}$ for $\zeta = e^{i \theta}$ on this arc of the unit circle; for the larger numerical ranges, the absolute value of the minimal real part, which is $\frac{1}{2} | \lambda_{min}( \mu ( \zeta , A ) ) |$, is almost as large as the numerical radius, explaining why $| \lambda_{min} ( \mu ( \zeta , A ) ) |$ is of the same order of magnitude as $\frac{1}{2 \pi} \|(\zeta I-A)^{-1}\|_2$.} \label{fig:transientpowers} \end{figure} \newpage \section{Further Comparisons and Some Explanations} \label{comparisons} It was noted earlier that if $\Omega = W(A)$, then Theorem \ref{thm:main} yields the Crouzeix-Palencia result \cite{CP} that $K = 1 + \sqrt{2} \approx 2.415$. For a normal matrix, some of the eigenvalues lie on the boundary of $W(A)$, and if $\Omega$ is taken to be a slightly larger set containing $W(A)$, then as $\Omega \rightarrow W(A)$, the integral in (\ref{KCauchy}) may approach $\infty$. Clearly, Theorem \ref{thm:main} provides a {\em much} smaller bound on $K$. The same holds for large Jordan blocks. For an $n$ by $n$ Jordan block, the numerical range is a disk about the eigenvalue of radius $\cos ( \pi / (n+1) )$, and for $z$ on the boundary of $W(A)$, $\| (zI - A )^{-1} \|_2$ increases with the matrix size. Thus, the value $K = 1 + \sqrt{2}$ is {\em much} smaller than the bound from (\ref{KCauchy}).
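The Jordan-block radius $\cos(\pi/(n+1))$ quoted above is easy to verify numerically. Since a Jordan block $J$ with eigenvalue $0$ is unitarily similar to $e^{i\theta} J$ for every $\theta$, its numerical range is a disk centered at the origin, so the numerical radius equals the top eigenvalue of the Hermitian part $(J + J^{T})/2$, a tridiagonal matrix whose eigenvalues are $\cos(k\pi/(n+1))$:

```python
import numpy as np

n = 50
J = np.diag(np.ones(n - 1), k=1)   # n x n Jordan block, eigenvalue 0

# Numerical radius of J = largest eigenvalue of (J + J^T)/2,
# since W(J) is a disk centered at 0.
w = np.linalg.eigvalsh((J + J.T) / 2).max()
print(w, np.cos(np.pi / (n + 1)))  # both ~0.9981, matching the table entry
```

For $n = 50$ this gives $\cos(\pi/51) \approx 0.998$, in agreement with the $W(A) = {\cal D}(0, 0.998)$ entry in Table \ref{table:comp1}.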
Table \ref{table:comp1} shows a comparison of the bound from Theorem \ref{thm:main} when $\Omega = W(A)$ with that from (\ref{KCauchy}) for a variety of matrices, including the \verb+transient_demo+ matrix mentioned earlier and a \verb+boeing_demo+ matrix also from eigtool \cite{eigtool}. \begin{table}[ht] \begin{center} \begin{tabular}{|l|c|c|c|c|c|} \hline matrix & $n$ & $\kappa (V)$ & $W(A)$ & $\frac{1}{2 \pi} \int_{\partial W(A)} \| R \|_2$ & CP \\ \hline normal & $\infty$ & 1 & conv(Sp($A$)) & $\infty$ & 2.415 \\ \hline normal + E, & 50 & 1.01 & $\sim$conv(Sp($A$)) & 44.2 & 2.415 \\ $\| E \| = 10^{-3}$ & & & & & \\ \hline \hline randn(n)/$\sqrt{n}$ & 50 & 75.3 & $\sim {\cal D} ( 0, \sqrt{2} )$ & 3.72 & 2.415 \\ \hline transient & 50 & 6.5e+6 & $\sim {\cal D} ( -.5,.78)$ & 13.83 & 2.415 \\ demo & & & & & \\ \hline boeing & 55 & 9.5e+6 & $\sim {\cal D}(-433,$ & 2.415 & 2.415 \\ demo & & & $8.5e+6)$ & & \\ \hline random complex & 50 & 1.3e+12 & $\sim {\cal D} (0,9.2)$ & 3.60 & 2.415 \\ triangular & & & & & \\ \hline \hline Jordan block & 50 & $\infty$ & ${\cal D}(0, 0.998)$ & 33.41 & 2.415 \\ \hline Jordan block & $\infty$ & $\infty$ & ${\cal D} (0,1)$ & $\infty$ & 2.415 \\ \hline \end{tabular} \end{center} \caption{Comparison of $K$ value from Theorem \ref{thm:main} when $\Omega = W(A)$ with that from (\ref{KCauchy}). Column 2 shows the size of the matrix and column 3 gives the condition number of a matrix of normalized eigenvectors. Column 4 gives an exact or approximate ($\sim$) description of $W(A)$. Column 5 contains the value of $K$ from (\ref{KCauchy}) and column 6 contains the Crouzeix-Palencia bound (i.e., the bound from Theorem \ref{thm:main}) on $K$.} \label{table:comp1} \end{table} When we start applying Theorem \ref{thm:main} to other regions, however, it is less clear that it will offer a significant improvement over (\ref{KCauchy}), as illustrated in the examples of the previous section.
This was observed earlier in \cite{CGL}, where it was shown that $| \lambda_{min} ( \mu ( \zeta , A) ) |$ may grow very quickly as one moves inside $W(A)$. An example in \cite{CG} involved the Grcar matrix of size $100$ (\verb+gallery('grcar',100)+ in MATLAB), where $\Omega$ was taken to be $W(A) \backslash {\cal D}(0,1/w( A^{-1} ))$. This is pictured in Figure \ref{fig:grcar100}. The results in \cite{CG} show that this is a $(3 + 2 \sqrt{3}) \approx 6.46$-spectral set for $A$, and direct application of Theorem \ref{thm:main} shows that it is a $4.35$-spectral set, with $c_1 \leq 2.84$ and $c_2 \leq 1.85$. In contrast, evaluating $K$ in (\ref{KCauchy}) gives the result $34.86$. \begin{figure}[h] \centering \includegraphics[width=3in]{grcar100.eps} \caption{Matrix from the MATLAB command \texttt{gallery('grcar',100)}. Eigenvalues (dots), $W(A)$ and ${\cal D}(0,1/ w( A^{-1} ))$ (dashed curves), and $W(A) \backslash {\cal D}(0,1/w( A^{-1} ))$ (thick solid curve). It was shown in \cite{CG} that this is a $(3 + 2 \sqrt{3} ) \approx 6.46$-spectral set for $A$. Direct application of Theorem \ref{thm:main} shows that it is a $4.38$-spectral set for $A$, and the value of $K$ from (\ref{KCauchy}) is $34.85$.} \label{fig:grcar100} \end{figure} Using a somewhat smaller Grcar matrix ($n=32$), we computed the $\epsilon$-pseudospectrum for $\epsilon = 10^{-3}$. This has multiple components, as pictured in Figure \ref{fig:grcar32}. For this region we also computed a value of $K$ from Theorem \ref{thm:main} and found that $c_1 \leq 3.02$, $c_2 \leq 2.10 \times 10^3$, and $K = 4.20 \times 10^3$. Evaluating $K$ in (\ref{KCauchy}), however, gives the smaller value $K = 2.12 \times 10^3$. As in some of the examples of section \ref{applications}, the value of $K$ from Theorem \ref{thm:main} is almost twice that from (\ref{KCauchy}).
\begin{figure}[h] \centering \includegraphics[height=2in]{grcar32.eps} \caption{Matrix from the MATLAB command \texttt{gallery('grcar',32)}. Eigenvalues (dots) and components of the $10^{-3}$-pseudospectrum (solid curves). Direct application of Theorem \ref{thm:main} shows that this is a $4.20 \times 10^3$-spectral set for $A$, but the value of $K$ from (\ref{KCauchy}) is $2.12 \times 10^3$.} \label{fig:grcar32} \end{figure} In summary, while for some matrices $A$ and some regions $\Omega$ (like $\Omega = W(A)$), Theorem \ref{thm:main} provides {\em much} smaller $K$ values than (\ref{KCauchy}), for other regions, the $K$ value from Theorem \ref{thm:main} may be slightly larger or about the same size as that in (\ref{KCauchy}). This most often happens when the eigenvalues of the matrix are ill-conditioned and the boundary of $\Omega$ comes close to some eigenvalues. Unfortunately, the boundary of $\Omega$ is near ill-conditioned eigenvalues for many matrices and regions of interest in applications. To see why this might lead to larger $K$ values in Theorem \ref{thm:main}, we will argue that in these cases, the numerical range of $( \zeta I - A )^{-1}$ is close to a disk about a value whose distance from the origin is much less than the radius of the disk so that every point on the boundary of this disk has absolute value close to the numerical radius of $( \zeta I - A )^{-1}$. First note that if $x$ and $y$ are two unit vectors that are orthogonal to each other, then the rank one matrix $x y^{*}$ has numerical range equal to a disk about the origin of radius $\frac{1}{2}$. To see this, consider a unitary similarity transformation $Q^{*} x y^{*} Q$, where the columns of $Q$ are $[x, y, q_3 , \ldots , q_n ]$. The matrix $Q^{*} x y^{*} Q$ is the direct sum of a $2$ by $2$ Jordan block with eigenvalue $0$ and an $n-2$ by $n-2$ block of zeros, and the numerical range of this matrix is a disk about the origin of radius $\frac{1}{2}$.
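This rank-one fact can be checked numerically. The sketch below builds a random orthonormal pair $x, y$ (a hypothetical example of our own) and confirms that the numerical radius of $x y^{*}$ is $\frac{1}{2}$, consistent with $W(x y^{*})$ being a disk of radius $\frac{1}{2}$ about the origin:

```python
import numpy as np

rng = np.random.default_rng(0)

# Orthonormal complex pair x, y via QR of a random n x 2 matrix.
n = 6
Z = rng.standard_normal((n, 2)) + 1j * rng.standard_normal((n, 2))
Q, _ = np.linalg.qr(Z)
x, y = Q[:, 0], Q[:, 1]
M = np.outer(x, y.conj())              # rank one matrix x y*

def numerical_radius(A, m=720):
    """max over angles t of the top eigenvalue of Re(e^{-it} A)."""
    vals = []
    for t in np.linspace(0, 2 * np.pi, m, endpoint=False):
        B = np.exp(-1j * t) * A
        vals.append(np.linalg.eigvalsh((B + B.conj().T) / 2).max())
    return max(vals)

print(numerical_radius(M))             # ~0.5
```

In fact, for orthonormal $x, y$ the Hermitian part of $e^{-it} x y^{*}$ has top eigenvalue exactly $\frac{1}{2}$ for every $t$, which is why no fine angular discretization is needed here.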
Let $\lambda$ be a simple eigenvalue of $A$ and suppose that a point $\zeta \in \partial \Omega$ is {\em very} close to $\lambda$. Then the resolvent $( \zeta I - A )^{-1}$ is close to the rank one matrix \begin{equation} \frac{1}{\zeta - \lambda} \frac{x y^{*}}{y^{*} x} , \label{rank1} \end{equation} where $x$ and $y$ are normalized right and left eigenvectors, respectively, corresponding to the eigenvalue $\lambda$: $A x = \lambda x$, $y^{*} A = \lambda y^{*}$. The condition number of $\lambda$ is defined as $1/ | y^{*} x |$, and if $\lambda$ is ill-conditioned this means that $| y^{*} x |$ is tiny. Let $q_1 = ( x - \frac{1}{2} ( y^{*} x ) y ) / \| x - \frac{1}{2} ( y^{*} x ) y \|$, $q_2 = ( y - \frac{1}{2} ( x^{*} y ) x ) / \| y - \frac{1}{2} ( x^{*} y ) x \|$, and let $q_3 , \ldots , q_n$ be any orthonormal vectors that are orthogonal to $q_1$ and $q_2$. Note that if $| y^{*} x |$ is small, then $q_1$ is almost orthogonal to $q_2$: \[ q_2^{*} q_1 = \frac{\frac{1}{4} ( y^{*} x ) | y^{*} x |^2}{1 - \frac{3}{4} | y^{*} x |^2} . \] The rank one matrix in (\ref{rank1}) is therefore very nearly unitarily similar to the direct sum of the following $2$ by $2$ matrix and an $n-2$ by $n-2$ block of zeros: \[ \frac{1}{( \zeta - \lambda ) ( y^{*} x )} \left[ \begin{array}{cc} ( q_1^{*} x ) ( y^{*} q_1 ) & ( q_1^{*} x ) ( y^{*} q_2 ) \\ ( q_2^{*} x ) ( y^{*} q_1 ) & ( q_2^{*} x ) ( y^{*} q_2 ) \end{array} \right] = \frac{1}{( \zeta - \lambda ) ( y^{*} x )} \left[ \begin{array}{cc} \frac{1}{2} ( y^{*} x ) & 1 \\ 0 & \frac{1}{2} ( y^{*} x ) \end{array} \right] + O( | y^{*} x |^2 ) . \] Ignoring the $O( | y^{*} x |^2 )$ terms, the numerical range of this matrix is $\frac{1}{( \zeta - \lambda ) ( y^{*} x )}$ times a disk of radius $\frac{1}{2}$ about $ \frac{1}{2} y^{*} x$. 
If the boundary of $\Omega$ comes only {\em fairly} close to an ill-conditioned eigenvalue, as in the examples of section \ref{applications}, the matrix in (\ref{rank1}) may not be close to $( \zeta I - A )^{-1}$ because the other nearby eigenvalues still have an effect. The closest rank one matrix to $( \zeta I-A )^{-1}$ is $\sigma_1 u_1 v_1^{*}$, where $\sigma_1$ is the largest singular value of $( \zeta I - A )^{-1}$ and $u_1$ and $v_1$ are the associated left and right singular vectors, respectively. In this case, if $u_1$ and $v_1$ are almost orthogonal to each other, then the same argument shows that if $( \zeta I - A )^{-1} \approx \sigma_1 u_1 v_1^{*}$, then the numerical range of $( \zeta I - A )^{-1}$ is approximately equal to $\sigma_1$ times a disk of radius $\frac{1}{2}$ about $\frac{1}{2} v_1^{*} u_1$. Again, the radius is much larger than the absolute value of the center, so all points on the boundary of this disk have absolute value close to the numerical radius. To see that the right and left singular vectors corresponding to the largest singular value of $( \zeta I - A )^{-1}$ are almost orthogonal to each other when $\zeta$ is close to a simple but ill-conditioned eigenvalue $\lambda$ of $A$, we can use a theorem of Stewart \cite{Stewart1973}. First note that these are the left and right singular vectors corresponding to the {\em smallest} singular value of $\zeta I - A$. Let us start with the matrix $\lambda I - A$, which has a null space of dimension one. The normalized right and left eigenvectors, $x$ and $y$, corresponding to the eigenvalue $\lambda$ of $A$ satisfy $( \lambda I - A ) x = 0$ and $( \lambda I - A )^{*} y = 0$. It follows that these are right and left singular vectors of $\lambda I - A$ corresponding to the smallest singular value, $0$. Write the SVD of $\lambda I - A$ as $Y \Sigma X^{*}$, where $X = [x, X_2 ]$ and $Y = [y, Y_2]$, and we have put the smallest singular value first. 
Define $E := ( \zeta - \lambda ) I$ so that $( \lambda I - A)+E = \zeta I - A$. Define \[ \gamma := \left\| \left[ \begin{array}{c} Y_2^{*} E x \\ X_2^{*} E^{*} y \end{array} \right] \right\|_F = \left\| \left[ \begin{array}{c} ( \zeta - \lambda ) Y_2^{*} x \\ ( \bar{\zeta} - \bar{\lambda} ) X_2^{*} y \end{array} \right] \right\|_F \leq \sqrt{2}~| \zeta - \lambda | , \] \begin{eqnarray*} \delta & := & \sigma_{n-1} ( \lambda I - A ) - \| y^{*} E x \|_2 - \| Y_2^{*} E X_2 \|_2 \\ & = & \sigma_{n-1} ( \lambda I - A ) - | \zeta - \lambda | \left( | y^{*} x | + \| Y_2^{*} X_2 \|_2 \right) \\ & \geq & \sigma_{n-1} ( \lambda I - A ) - | \zeta - \lambda | ( 1 + | y^{*} x | ) , \end{eqnarray*} where $\sigma_{n-1} ( \lambda I - A )$ is the second smallest singular value of $\lambda I - A$. Assuming that $\gamma / \delta < 1/2$, it is shown in \cite[Theorem 6.4]{Stewart1973} that there are vectors $p$ and $q$ satisfying \[ \left\| \left[ \begin{array}{c} p \\ q \end{array} \right] \right\|_F < 2 \frac{\gamma} {\delta} \] such that $x + X_2 p$ and $y + Y_2 q$ are (multiples of) right and left singular vectors of $(\lambda I - A )+E = \zeta I - A$, corresponding to the smallest singular value; i.e., they are left and right singular vectors of $( \zeta I - A )^{-1}$, corresponding to the largest singular value. It follows that if $x$ and $y$ are almost orthogonal to each other and if $\| p \|_2$ and $\| q \|_2$ are small, then the singular vectors $u_1$ and $v_1$ corresponding to the largest singular value of $( \zeta I - A )^{-1}$ are almost orthogonal to each other: \[ \left| \frac{( x + X_2 p )^{*} ( y + Y_2 q )}{\| x + X_2 p \|_2 \| y + Y_2 q \|_2} \right| = \frac{| x^{*} y + x^{*} Y_2 q + p^{*} X_2^{*} y + p^{*} X_2^{*} Y_2 q |} {\| x + X_2 p \|_2 \| y + Y_2 q \|_2} \leq \frac{| x^{*} y | + \| q \|_2 + \| p \|_2 + \| p \|_2 \| q \|_2} {\sqrt{( 1 - \| p \|_2^2 ) ( 1 - \| q \|_2^2 )}} . \]
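The near-orthogonality of $u_1$ and $v_1$ and the resulting disk-like numerical range of $(\zeta I - A)^{-1}$ can be illustrated with a small hypothetical example (our own, not from the text): $\lambda = 0$ is a simple eigenvalue of $A$ with condition number $1/|y^{*}x| \approx 10^3$, and $\zeta$ is taken close to it.

```python
import numpy as np

# lambda = 0 is ill-conditioned: right eigenvector x = e1, left
# eigenvector y ~ (1, -M)/sqrt(1 + M^2), so |y* x| ~ 1/M.
M = 1.0e3
A = np.array([[0.0, M],
              [0.0, 1.0]])
zeta = 0.01
R = np.linalg.inv(zeta * np.eye(2) - A)

# Singular vectors for the largest singular value of (zeta I - A)^{-1}.
U, s, Vh = np.linalg.svd(R)            # singular values in descending order
u1, v1 = U[:, 0], Vh[0, :]
overlap = abs(u1 @ v1)
print(overlap)                          # ~1e-3: u1 and v1 nearly orthogonal

# Disk approximation W(R) ~ sigma_1 * D(v1* u1 / 2, 1/2): the radius
# dwarfs the distance of the center from the origin.
radius = s[0] / 2
center = s[0] * (v1 @ u1) / 2
print(radius, abs(center))              # radius >> |center|
```

Since $|$center$| = \sigma_1 |v_1^{*} u_1|/2$ is tiny relative to the radius $\sigma_1/2$, every boundary point of the approximating disk has absolute value close to the numerical radius, as argued above.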
\section{Introduction} \label{Introduction} Suppose that $n$ candidates are running for a single office. There are many different social choice procedures one can use to select a winner. In this article, we study a particular class called \emph{positional voting systems}. A positional voting system is an electoral method in which each voter submits a ranked list of the candidates. Points are then assigned according to a fixed \emph{weighting vector} $\mathbf{w}$ that gives $w_i$ points to a candidate every time they appear in position $i$ on a ballot, and candidates are ranked according to the total number of points received. For example, plurality is a positional voting system with weighting vector $\mathbf{w} = [\,\setlength\arraycolsep{3pt}\begin{matrix} 1 & 0 & 0 & \cdots & 0\end{matrix}\,]^\mathsf{T}$. One point is assigned to each voter's top choice, and the candidate with the most points wins. The Borda count is another common positional voting system in which the weighting vector is given by $\mathbf{w} =[\,\setlength\arraycolsep{3pt}\begin{matrix} n-1 & n-2 & \cdots & 1 & 0\end{matrix}\,]^\mathsf{T}$. Other examples include the systems used in the Eurovision Song Contest, parliamentary elections in Nauru, and the AP College Football Poll \cite{BesRob,FraGro,HodKlim}. By tallying points in this manner, a positional voting system outputs not just a winner, but a complete (though not necessarily strict) ranking of all candidates, called the \emph{societal ranking}. The societal ranking produced by a positional voting system depends not only on the set of ballots (called the \emph{profile}) but also on the choice of weighting vector. With the freedom to choose different weighting vectors, one can achieve many different outcomes from the same profile. 
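The point tallying just described is straightforward to sketch in code. The following is a minimal illustration on a hypothetical $3$-candidate profile of our own, showing that plurality and the Borda count can produce different winners from the same ballots:

```python
import numpy as np

def tally(profile, w):
    """Positional tally: ballot is a tuple of candidate indices (best
    first); the candidate in position i receives w[i] points per voter."""
    scores = np.zeros(len(w))
    for ballot, count in profile.items():
        for pos, cand in enumerate(ballot):
            scores[cand] += count * w[pos]
    return scores

# Hypothetical profile: ballot -> number of voters, 15 voters total.
profile = {(0, 1, 2): 6, (1, 2, 0): 5, (2, 1, 0): 4}

plurality = tally(profile, [1, 0, 0])
borda = tally(profile, [2, 1, 0])
print(plurality)   # [6, 5, 4]: candidate 0 wins under plurality
print(borda)       # [12, 20, 13]: candidate 1 wins under the Borda count
```

With the same profile, the weighting vector alone determines which of two different candidates wins, previewing the dependence on $\mathbf{w}$ discussed next.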
An immediate question is ``Given a profile, how many different societal rankings are possible?'' In \cite{Saari}, Donald Saari famously showed that for any profile on $n$ alternatives, there are at most $n!-(n-1)!$ possible strict societal rankings depending on the choice of weighting vector. Moreover, this bound is sharp in that there exist profiles for which exactly this many different strict rankings are possible. In \cite{DEMO}, Daugherty, Eustis, Minton, and Orrison provided a new approach to analyzing positional voting systems and extended some of Saari's results to cardinal (as opposed to ordinal) rankings, as well as to partial rankings; see also \cite{CrisOrr}. Our article serves to complement these works by providing alternative derivations that afford more explicit constructions, new perspectives, and arguments which some may find more accessible. After establishing some conceptual foundations, we proceed by proving the main result in \cite{DEMO} using facts about doubly stochastic matrices (Theorem~\ref{thm:main}). With a little more linear algebra, we are then able to recover Saari's findings (Theorems \ref{number rankings lower bound} and \ref{number rankings upper bound}). An advantage of our methodology is that it gives a concrete means of constructing ``paradoxical profiles.'' It also enables us to provide a simple geometric characterization of the possible outcomes resulting from a given profile (Theorem~\ref{convex hull}). In addition, our work illustrates the utility of thinking about doubly stochastic matrices and the braid arrangement in problems related to social choice procedures. Doubly stochastic matrices arise very naturally in our analysis, and the braid arrangement provides a nice geometric realization of rankings. 
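Saari's bound can be probed numerically. The sketch below samples many strict weighting vectors for a hypothetical $3$-candidate profile of our own and collects the distinct strict societal rankings that arise; by Saari's theorem there can be at most $3! - 2! = 4$ of them:

```python
import numpy as np

rng = np.random.default_rng(1)

def scores(profile, w):
    s = np.zeros(3)
    for ballot, count in profile.items():
        for pos, cand in enumerate(ballot):
            s[cand] += count * w[pos]
    return s

# Hypothetical profile on 3 candidates (ballot -> number of voters).
profile = {(0, 1, 2): 4, (1, 2, 0): 3, (2, 0, 1): 2, (2, 1, 0): 2}

rankings = set()
for _ in range(2000):
    w = np.sort(rng.random(3))[::-1]        # strict weights w1 > w2 > w3
    s = scores(profile, w)
    if len(set(s)) == 3:                    # keep strict rankings only
        rankings.add(tuple(int(c) for c in np.argsort(-s)))
print(len(rankings))                        # at most 4, by Saari's bound
```

For this profile, varying the weighting vector really does change the strict societal ranking, while the number of distinct rankings observed never exceeds the bound $n! - (n-1)! = 4$.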
While our arguments do not depend on any deep facts about hyperplane arrangements, many of the objects we work with have a natural interpretation within this framework, and it provides a useful vernacular for thinking about such matters. As with the algebraic voting theory from \cite{DEMO}, the hope is that by formulating problems in terms of different mathematical constructs, new tools and perspectives become available. The connection with hyperplane arrangements has received some attention in previous works---for instance, Terao's proof of Arrow's impossibility theorem \cite{Terao}---but doubly stochastic matrices seem to have been given much less consideration in the context of voting. Finally, our approach serves to partially bridge the perspectives from \cite{Saari} and \cite{DEMO}. Saari provides a geometric realization of positional voting procedures in terms of certain high-dimensional simplices; see also \cite{Saari95}. We use some similar ideas, but cast them in the language of the braid arrangement. On the other hand, Daugherty et al. describe positional voting using Young tableaux, which enables them to harness results from the representation theory of the symmetric group. Our analysis employs ideas from linear algebra to capture the essence of these arguments. \section{Notation and Terminology} \label{Notation} Throughout this article, $n \geq 3$ is a fixed integer representing the number of candidates. The number of voters is $N$, which we only assume to be rational (though Proposition~\ref{convenient p} shows that we may take $N\in\mathbb{N}$ if we are just interested in ordinal rankings). While $n$ is arbitrary and given in advance, $N$ may have some implicit constraints depending on the context. We work exclusively over the field $\mathbb{Q}$ of rational numbers, and we write $\mathbb{N}_{0}$ for the set of nonnegative integers. 
Vectors are always written in boldface with the components of a generic $n$-dimensional vector $\mathbf{v}$ denoted by $v_1, v_2, \dots, v_n$. We write $\mathbf{1}$ for the vector of all ones and $J$ for the matrix of all ones, where the dimensions are clear from context. Finally, we use the notation $1\{\cdot\}$ to represent the indicator function, so that, for example, $1\{i<j\}$ equals $1$ if $i<j$ and $0$ otherwise. Let $\sigma$ be a permutation in $S_{n}$ and define the following subsets of $\mathbb{Q}^n$: \begin{gather*} V_{0}=\big\{\mathbf{x} \in\mathbb{Q}^{n}:\, x_{1}+\cdots+x_{n}=0\big\},\\ C_{\sigma}=\big\{\mathbf{x} \in\mathbb{Q}^{n}:\, x_{\sigma(1)}>x_{\sigma(2)}>\cdots>x_{\sigma(n)}\big\},\\ W=C_{id}\cap V_{0}=\big\{\mathbf{x} \in\mathbb{Q}^{n}:\, x_{1}>x_{2}>\cdots>x_{n},\;x_{1}+\cdots+x_{n}=0\big\}. \end{gather*} The set $W$ will be of particular importance for us and can be thought of as the set of strict weighting vectors. Clearly such vectors should have decreasing entries, and the sum-zero normalization is essentially because adding a multiple of $\mathbf{1}$ to a weighting vector does not affect the ranking of candidates. (Further elaboration is given at the end of this section.) If we wish to allow the same point value to be assigned to multiple candidates, we can consider the closure $\overline{W}$. Now label the permutations in $S_{n}$ lexicographically according to one-line notation, and let $R_{\ell}=R_{\sigma_{\ell}}$ be the $n\times n$ permutation matrix corresponding to $\sigma_{\ell}$, defined by $R_{\ell}(i,j)=1\{\sigma_{\ell}(j)=i\}$. 
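The action of a permutation matrix $R_{\sigma}$ on a weighting vector can be checked mechanically. The following sketch (an illustration only, using $0$-indexed one-line notation in place of the article's $1$-indexed convention) builds $R_{\sigma}$ from the definition $R_{\sigma}(i,j)=1\{\sigma(j)=i\}$ and verifies the identity $(\sigma\mathbf{w})_{i}=w_{\sigma^{-1}(i)}$ used in the next paragraph.

```python
# Sketch of the permutation matrices R_sigma with R_sigma(i, j) = 1{sigma(j) = i}.

def perm_matrix(sigma):
    """sigma in 0-indexed one-line notation: sigma[j] is the image of j."""
    n = len(sigma)
    return [[1 if sigma[j] == i else 0 for j in range(n)] for i in range(n)]

def apply(R, w):
    """Ordinary matrix-vector product."""
    n = len(w)
    return [sum(R[i][j] * w[j] for j in range(n)) for i in range(n)]

sigma = (1, 3, 0, 2)   # 0-indexed stand-in for the permutation (2, 4, 1, 3)
w = [3, 1, -1, -3]     # a strict weighting vector in W (decreasing, sum zero)

R = perm_matrix(sigma)
inv = [sigma.index(i) for i in range(len(sigma))]  # sigma^{-1}

# (R_sigma w)_i = w_{sigma^{-1}(i)}, as asserted in the definition of T_w
assert apply(R, w) == [w[inv[i]] for i in range(4)]
```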
Given a weighting vector $\mathbf{w}\in W$, define the $n\times n!$ matrix $T_{\mathbf{w}} = \big[\,\setlength\arraycolsep{3pt}\begin{matrix} \sigma_{1}\mathbf{w} & \sigma_{2}\mathbf{w} & \cdots & \sigma_{n!}\mathbf{w}\end{matrix}\,\big]$ having $\ell^{\text{th}}$ column \[ \sigma_{\ell}\mathbf{w}:=R_{\ell}\mathbf{w} =\big[\,\setlength\arraycolsep{3pt}\begin{matrix} w_{\sigma_{\ell}^{-1}(1)} & w_{\sigma_{\ell}^{-1}(2)} & \cdots & w_{\sigma_{\ell}^{-1}(n)} \end{matrix}\,\big]^{\mathsf{T}}. \] For a given profile $\mathbf{p} \in\mathbb{Q}^{n!}$, the \emph{results vector} for the positional voting procedure associated with $\mathbf{w}$ is given by \begin{equation} \label{resultsvector} \mathbf{r} =T_{\mathbf{w}} \mathbf{p} =p_{1}R_{1}\mathbf{w} +p_{2}R_{2}\mathbf{w} +\cdots+p_{n!}R_{n!}\mathbf{w} =Q_{\mathbf{p}} \mathbf{w} \end{equation} where $Q_{\mathbf{p}}$ is a convenient shorthand for $\sum_{\ell=1}^{n!}p_{\ell}R_{\ell}$. Each $\sigma\in S_{n}$ corresponds to the ranking of the candidates (labeled $1$ through $n$) in which candidate $\sigma(k)$ is the $k^{\text{th}}$ favorite. The profile $\mathbf{p}$ encodes preferences of the electorate so that $p_{\ell}$ is the number of voters with preference $\sigma_{\ell}$. The $(i,j)$-entry of $Q_{\mathbf{p}}$ is thus the number of voters ranking candidate $i$ in $j^{\text{th}}$ place. If each voter assigns $w_{k}$ points to their $k^{\text{th}}$ favorite candidate, then $r_{j}$ is the total number of points given to candidate $j$. The societal ranking for this election procedure is $\pi\in S_{n}$ with $\mathbf{r}\in C_{\pi}$. (We are assuming for the moment that a strict ranking is achieved. The possibility of ties will be addressed after Example~\ref{election}.) Note that we have given two different ways of computing the results vector $\mathbf{r}$ in equation~\eqref{resultsvector}. 
On one hand, we have $T_\mathbf{w}$, an $n \times n!$ matrix that encodes all the possible permutations of the weighting vector, which can be combined with the $n!$-dimensional profile vector $\mathbf{p}$ to yield the result. On the other hand, we have $Q_\mathbf{p}$, an $n \times n$ matrix that encodes the number of votes each candidate receives in each place, which can be combined with the $n$-dimensional weighting vector $\mathbf{w}$ to yield the result. \begin{example} \label{election} Consider an election with $4$ candidates and voter preferences described by the following table. \begin{center} \begin{tabular}{ |c|c| } \hline \textnormal{Voting Preference} & \textnormal{Number of Votes} \\ \hline $(2, 3, 4, 1)$ & $8$ \\ \hline $(1, 3, 2, 4)$ & $5$ \\ \hline $(4, 3, 2, 1)$ & $10$ \\ \hline $(2, 3, 1, 4)$ & $8$ \\ \hline $(4, 1, 3, 2)$ & $7$ \\ \hline \end{tabular} \end{center} \smallskip \noindent Then \begin{align*} Q_{\mathbf{p}} & = 8\begin{bmatrix}0 & 0 & 0 & 1 \\1 & 0 & 0 & 0 \\0 & 1 & 0 & 0 \\0 & 0 & 1 & 0 \end{bmatrix} + 5\begin{bmatrix}1 & 0 & 0 & 0 \\0 & 0 & 1 & 0 \\0 & 1 & 0 & 0 \\0 & 0 & 0 & 1 \end{bmatrix} + 10\begin{bmatrix}0 & 0 & 0 & 1 \\0 & 0 & 1 & 0 \\0 & 1 & 0 & 0 \\1 & 0 & 0 & 0 \end{bmatrix}\\ & \qquad\qquad\; + 8\begin{bmatrix}0 & 0 & 1 & 0 \\1 & 0 & 0 & 0 \\0 & 1 & 0 & 0 \\0 & 0 & 0 & 1 \end{bmatrix} + 7\begin{bmatrix}0 & 1 & 0 & 0 \\0 & 0 & 0 & 1 \\0 & 0 & 1 & 0 \\1 & 0 & 0 & 0 \end{bmatrix} = \begin{bmatrix}5 & 7 & 8 & 18 \\ 16 & 0 & 15 & 7 \\ 0 & 31 & 7 & 0 \\ 17 & 0 & 8 & 13 \end{bmatrix}. \end{align*} To implement the Borda count, we use $\mathbf{w} =[\,\setlength\arraycolsep{3pt}\begin{matrix} 1.5 & 0.5 & -0.5 & -1.5\end{matrix}\,]^\mathsf{T}$, which is obtained from $[\,\setlength\arraycolsep{3pt}\begin{matrix} 3 & 2 & 1 & 0\end{matrix}\,]^\mathsf{T}$ by subtracting $\frac{1}{4}(3+2+1+0)\mathbf{1}$. 
This yields \[ Q_{\mathbf{p}} \mathbf{w} = \begin{bmatrix}5 & 7 & 8 & 18 \\ 16 & 0 & 15 & 7 \\ 0 & 31 & 7 & 0 \\ 17 & 0 & 8 & 13 \end{bmatrix} \cdot \begin{bmatrix}1.5 \\0.5 \\-0.5 \\-1.5 \end{bmatrix} = \begin{bmatrix} -20 \\6 \\12 \\2 \end{bmatrix}, \] so that the societal ranking is $(3,2,4,1)$. If instead we use the plurality method, $\mathbf{w} = [\,\setlength\arraycolsep{3pt}\begin{matrix} 0.75 & -0.25 & -0.25 & -0.25\end{matrix}\,]^\mathsf{T}$, we find that \[ Q_{\mathbf{p}} \mathbf{w} =\begin{bmatrix}5 & 7 & 8 & 18 \\ 16 & 0 & 15 & 7 \\ 0 & 31 & 7 & 0 \\ 17 & 0 & 8 & 13 \end{bmatrix} \cdot \begin{bmatrix}0.75 \\-0.25 \\-0.25 \\-0.25\end{bmatrix} = \begin{bmatrix}-4.5 \\6.5 \\-9.5 \\7.5 \end{bmatrix}, \] resulting in $(4,2,1,3)$. By changing the weights, we moved the first place candidate to last place! \end{example} Of course, it is also possible that $\mathbf{r} \not\in C_{\pi}$ for any $\pi \in S_{n}$ because there are ties between candidates. In this case, $\mathbf{r}$ lies on one or more of the hyperplanes $H_{ij}=\big\{\mathbf{x}\in\mathbb{Q}^{n}:\, x_{i}=x_{j}\big\}$ comprising the \emph{braid arrangement}. The sets $C_{\sigma}$ described above are known as \emph{chambers} of the braid arrangement. If we allow for nonstrict inequalities in their definition, the resulting objects are called \emph{faces}. The faces of the braid arrangement correspond to ordered set partitions of $[n]$ via $G\sim\big(B_{1},\ldots,B_{m}\big)$ when $G$ consists of all $\mathbf{x}\in\mathbb{Q}^{n}$ such that $x_{i}>x_{j}$ if and only if there exist $k<\ell$ with $i\in B_{k}$ and $j\in B_{\ell}$. If the results vector $\mathbf{r}$ belongs to $G$, then $B_{k}$ is the set of candidates tied for $k^{\text{th}}$ place. The chambers are $n$-dimensional faces representing strict rankings, and two results vectors in $\mathbb{Q}^{n}$ lie in the same face if they correspond to identical rankings of the candidates. 
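The tallies in Example~\ref{election} can be reproduced mechanically. The sketch below (illustrative code, not part of the formal development) rebuilds $Q_{\mathbf{p}}$ from the ballot table, computes the results vectors for the Borda and plurality weights, and recovers both societal rankings.

```python
# Reproducing Example 1: ballots are one-line rankings sigma, with sigma[k]
# the k-th favorite candidate, so Q_p(i, j) counts voters placing candidate i
# in j-th place.

ballots = [((2, 3, 4, 1), 8), ((1, 3, 2, 4), 5), ((4, 3, 2, 1), 10),
           ((2, 3, 1, 4), 8), ((4, 1, 3, 2), 7)]

Q = [[0] * 4 for _ in range(4)]
for sigma, count in ballots:
    for place, cand in enumerate(sigma):
        Q[cand - 1][place] += count

def results(Q, w):
    """Results vector r = Q_p w."""
    return [sum(Q[i][j] * w[j] for j in range(4)) for i in range(4)]

def ranking(r):
    """Candidates (1-indexed) sorted by score, best first."""
    return tuple(sorted(range(1, 5), key=lambda c: -r[c - 1]))

borda = results(Q, [1.5, 0.5, -0.5, -1.5])            # [-20.0, 6.0, 12.0, 2.0]
plurality = results(Q, [0.75, -0.25, -0.25, -0.25])   # [-4.5, 6.5, -9.5, 7.5]
```

Running this recovers the rankings $(3,2,4,1)$ and $(4,2,1,3)$ computed in the example.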
Finally, we observe that if $\mathbf{w}\in V_{0}$, then $\sigma\mathbf{w}\in V_{0}$ for all $\sigma\in S_{n}$, so $\mathbf{r}$ as defined in equation~\eqref{resultsvector} is a linear combination of sum-zero vectors and thus lies in $V_{0}$ as well. Also, the decomposition $Q_{\mathbf{p}}=\sum_{\ell=1}^{n!}p_{\ell}R_{\ell}$ shows that every row and column of $Q_{\mathbf{p}}$ sums to $N=\sum_{\ell=1}^{n!}p_{\ell}$, the total number of ballots cast. Thus the condition that $\mathbf{w}\in V_{0}$ is not much of a restriction since any $\mathbf{y}\in C_{id}$ can be decomposed as $\mathbf{y}=\overline{\mathbf{y}}+a_{\mathbf{y}}\mathbf{1}$ where $\overline{\mathbf{y}}\in W$ and $a_{\mathbf{y}} = \frac{1}{n} \sum_{i=1}^n y_i$. Moreover, $Q_{\mathbf{p}}\mathbf{y}$ lies in the same face as $Q_{\mathbf{p}}\overline{\mathbf{y}}$ because \[ Q_{\mathbf{p}}\mathbf{y} =Q_{\mathbf{p}}\overline{\mathbf{y}}+a_{\mathbf{y}}Q_{\mathbf{p}}\mathbf{1} =Q_{\mathbf{p}}\overline{\mathbf{y}}+Na_{\mathbf{y}}\mathbf{1}. \] Indeed, $\bigcap_{i<j}H_{ij}=\{c\mathbf{1}:\, c\in\mathbb{Q}\}$, so it is natural to project the braid arrangement onto the orthogonal complement, $V_{0}$. This is called its \emph{essentialization}. \section{Main Results} \label{Results} One of the key insights of this article is that we can construct paradoxical profiles by considering matrices that have all row and column sums equal. This is because such a matrix can be shifted and scaled to obtain a \emph{doubly stochastic matrix} (all entries nonnegative and all row and column sums equal to $1$). This then enables us to appeal to the \emph{Birkhoff--von Neumann theorem} \cite{Birk,vonN}, which states that every doubly stochastic matrix $P$ is a convex combination of permutation matrices, $P=\sum_{\ell=1}^{n!}\lambda_{\ell}R_{\ell}$ with $\lambda_{1},\ldots,\lambda_{n!}\geq 0$ and $\lambda_{1}+\cdots+\lambda_{n!}=1$.
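For small $n$, a Birkhoff--von Neumann decomposition can even be found by brute force over $S_{n}$: repeatedly pick a permutation supported on the positive entries (one always exists, by Birkhoff's theorem) and subtract off as much of it as possible. The following sketch (an illustration only; practical algorithms use bipartite matching rather than enumeration) carries this out with exact rational arithmetic.

```python
# Brute-force Birkhoff-von Neumann decomposition for small n (illustrative).
from fractions import Fraction
from itertools import permutations

def birkhoff(P):
    """P: doubly stochastic matrix with Fraction entries.
    Returns (coefficient, permutation) pairs whose weighted sum is P."""
    n = len(P)
    P = [row[:] for row in P]  # work on a copy
    decomposition = []
    while any(P[i][j] for i in range(n) for j in range(n)):
        # a permutation supported on positive entries exists by Birkhoff's theorem
        sigma = next(s for s in permutations(range(n))
                     if all(P[i][s[i]] > 0 for i in range(n)))
        lam = min(P[i][sigma[i]] for i in range(n))
        decomposition.append((lam, sigma))
        for i in range(n):
            P[i][sigma[i]] -= lam  # zeroes at least one entry per step
    return decomposition

P = [[Fraction(1, 2), Fraction(1, 2), Fraction(0)],
     [Fraction(1, 4), Fraction(1, 4), Fraction(1, 2)],
     [Fraction(1, 4), Fraction(1, 4), Fraction(1, 2)]]
decomp = birkhoff(P)
assert sum(lam for lam, _ in decomp) == 1  # coefficients form a convex combination
```

Each iteration zeroes at least one entry, so the loop terminates after at most $n^{2}$ steps.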
The following result, which is known but included for the sake of completeness, distills these observations into a convenient form which will be crucial for subsequent arguments. \begin{prop} \label{lin comb} Let $\mathcal{P}$ be any collection of $(n-1)^{2}+1$ linearly independent $n\times n$ permutation matrices, and let $\mathcal{M}_{n}$ be the vector space of $n\times n$ matrices over $\mathbb{Q}$ with all row and column sums equal. Then every matrix in $\mathcal{M}_{n}$ can be written as a linear combination of matrices in $\mathcal{P}$. \end{prop} \begin{proof} Suppose $S$ is an $n\times n$ matrix with all row and column sums equal to $t$. If $S=\frac{t}{n}J$, then $S$ is a linear combination of permutation matrices because $J$ is; see below. Otherwise, let $m=\min_{(i,j)\in[n]^{2}}S(i,j)$. Then $P=(t-mn)^{-1}(S-mJ)$ is doubly stochastic, so the Birkhoff--von Neumann theorem shows that $P$ can be written as a convex combination of permutation matrices, $P=\sum_{\ell=1}^{n!}\lambda_{\ell}R_{\ell}$. Similarly, $\frac{1}{n}J=\sum_{\ell=1}^{n!}\kappa_{\ell}R_{\ell}$ as it too is doubly stochastic. It follows that $S=(t-mn)P+mJ$ is a linear combination of permutation matrices. Since every linear combination of permutation matrices has all row and column sums equal, $\mathcal{M}_{n}$ is precisely the linear span of the permutation matrices. To see that the dimension of $\mathcal{M}_{n}$ is $(n-1)^{2}+1$, define $B_{i,j}$ to be the $n\times n$ matrix with $1$'s in positions $(i,j)$ and $(n,n)$, $-1$'s in positions $(i,n)$ and $(n,j)$, and $0$'s elsewhere for each $(i,j)\in[n-1]^{2}$. If $Z=[z_{i,j}]_{i,j=1}^{n}$ is any matrix with all row and column sums equal to $0$, then it is easy to see that $Z=\sum_{i=1}^{n-1}\sum_{j=1}^{n-1}z_{i,j}B_{i,j}$. Now let $S$ be any matrix with all row and column sums equal to $t$. Then $S-tI$ has all row and column sums zero; hence $S$ can be expressed as a linear combination of the $B_{i,j}$'s and $I$.
As these $(n-1)^{2}+1$ matrices are clearly linearly independent, the assertion follows. \end{proof} \begin{remark} Proposition~\ref{lin comb} can also be proved without invoking the Birkhoff--von Neumann theorem by showing that the collection of permutation matrices \[ \mathcal{B}=\big\{R_{(i,j,n)}:\, i,j\in[n-1]\text{ are distinct}\big\}\cup\big\{R_{(i,n)}:\, i\in[n-1]\big\}\cup\{I\} \] is a basis for $\mathcal{M}_{n}$. (The subscripts represent permutations in cycle notation and $R_{\sigma}$ is as previously defined.) Linear independence follows by looking at the final rows and columns of the matrices, and $\mathcal{M}_{n}=\text{span}(\mathcal{B})$ follows from the dimension argument in the proof of Proposition~\ref{lin comb} upon observing that $B_{i,j}=I-R_{(i,n)}-R_{(j,n)}+R_{(j,i,n)}$ for distinct $i,j<n$ and $B_{k,k}=I-R_{(k,n)}$ for $k<n$. \end{remark} The following theorem was proved in \cite{DEMO} using facts about the representation theory of $S_{n}$. Our proof is based on the same general reasoning---essentially, that one can write $T_{\mathbf{w}}\mathbf{p}=Q_{\mathbf{p}}\mathbf{w}$---but uses only linear algebra. In words, one can fix in advance a number of different positional voting procedures, along with desired election outcomes for each procedure, and then find (infinitely many) profiles such that each procedure yields the corresponding outcome! \begin{theorem} \label{thm:main} \textnormal{(\cite[Theorem 1]{DEMO})} Given any linearly independent weighting vectors $\mathbf{w}_{1},\ldots,\mathbf{w}_{n-1}\in W$ and any results vectors $\mathbf{r}_{1},\ldots,\mathbf{r}_{n-1}\in V_{0}$, there are infinitely many profiles $\mathbf{p}\in\mathbb{Q}^{n!}$ with $T_{\mathbf{w}_{k}}\mathbf{p}=\mathbf{r}_{k}$ for $k=1,\ldots,n-1$. 
\end{theorem} \begin{proof} The general strategy will be to construct a matrix $Q$ such that $Q\mathbf{w}_{k}=\mathbf{r}_{k}$ for $k=1,\ldots,n-1$ and show that this matrix has right and left eigenvectors $\mathbf{1}$ and $\mathbf{1}^{\mathsf{T}}$; hence its row and column sums are constant. Proposition~\ref{lin comb} then gives $Q=\sum_{\ell}p_{\ell}R_{\ell}$, and thus $T_{\mathbf{w}_{k}}\mathbf{p}=Q\mathbf{w}_{k}=\mathbf{r}_{k}$. To begin, define $\mathbf{r}_{0}=\mathbf{w}_{0}=\mathbf{1}$ and set \[ F=\big[\,\setlength\arraycolsep{3pt}\begin{matrix} \mathbf{w}_{0} & \mathbf{w}_{1} & \cdots & \mathbf{w}_{n-1}\end{matrix}\,\big],\:\, R=\big[\,\setlength\arraycolsep{3pt}\begin{matrix} \mathbf{r}_{0} & \mathbf{r}_{1} & \cdots & \mathbf{r}_{n-1}\end{matrix}\,\big],\;\text{ and }\; Q=RF^{-1}. \] ($F$ is invertible because $\mathbf{w}_{1},\ldots,\mathbf{w}_{n-1}$ are linearly independent and all orthogonal to $\mathbf{w}_{0}$.) Then $Q\mathbf{w}_{k}=\mathbf{r}_{k}$ for $k=0,\ldots,n-1$ since \[ \setlength\arraycolsep{3pt}\big[\,\begin{matrix} Q\mathbf{w}_{0} & Q\mathbf{w}_{1} & \cdots & Q\mathbf{w}_{n-1}\end{matrix}\,\big]=QF=R =\setlength\arraycolsep{3pt}\big[\,\begin{matrix} \mathbf{r}_{0} & \mathbf{r}_{1} & \cdots & \mathbf{r}_{n-1}\end{matrix}\,\big]. \] The condition $Q\mathbf{w}_{0}=\mathbf{r}_{0}$ implies that the rows of $Q$ sum to $1$. To see that the columns sum to $1$, we first observe that \[ \mathbf{w}_{0}^{\mathsf{T}}R=\setlength\arraycolsep{3pt}\big[\,\begin{matrix} \left\langle \mathbf{w}_{0},\mathbf{r}_{0}\right\rangle & \left\langle \mathbf{w}_{0},\mathbf{r}_{1}\right\rangle & \cdots & \left\langle \mathbf{w}_{0},\mathbf{r}_{n-1}\right\rangle \end{matrix}\,\big] =\setlength\arraycolsep{3pt}[\,\begin{matrix} n & 0 & \cdots & 0\end{matrix}\,] \] since $\mathbf{w}_{0}=\mathbf{r}_{0}=\mathbf{1}$ is orthogonal to each of $\mathbf{r}_{1},\ldots,\mathbf{r}_{n-1}$ by assumption. 
As such, \[ \mathbf{w}_{0}^{\mathsf{T}}Q=\mathbf{w}_{0}^{\mathsf{T}}RF^{-1} =\setlength\arraycolsep{3pt}[\,\begin{matrix} n & 0 & \cdots & 0\end{matrix}\,]F^{-1} =n\bm{f} \] where $\bm{f}$ is the first row of $F^{-1}$. Since $F^{-1}F=I$, we must have $\left\langle \bm{f}^{\mathsf{T}},\mathbf{w}_{0}\right\rangle =1$ and $\left\langle \bm{f}^{\mathsf{T}},\mathbf{w}_{k}\right\rangle =0$ for $k=1,\ldots,n-1$. The latter condition implies that $\bm{f}=C\mathbf{1}^{\mathsf{T}}$ for some $C$, so the former implies $1=\left\langle \bm{f}^{\mathsf{T}},\mathbf{w}_{0}\right\rangle =C\left\langle \mathbf{1},\mathbf{1}\right\rangle =nC$. Therefore, \[ \mathbf{w}_{0}^{\mathsf{T}}Q=n\bm{f}=\mathbf{w}_{0}^{\mathsf{T}}, \] so the columns of $Q$ sum to $1$ as well. Since $Q$ has all rows and columns summing to $1$, it follows from Proposition~\ref{lin comb} that it is a linear combination of permutation matrices. In other words, there exists $\mathbf{p}\in\mathbb{Q}^{n!}$ such that $Q=\sum_{\ell=1}^{n!}p_{\ell}R_{\ell}$. Accordingly, $T_{\mathbf{w}_{k}}\mathbf{p}=Q\mathbf{w}_{k}=\mathbf{r}_{k}$ for $k=1,\ldots,n-1$. In fact there are infinitely many such $\mathbf{p}$: the $n!$ permutation matrices span the $(n^{2}-2n+2)$-dimensional space $\mathcal{M}_{n}$, so the solutions of $Q=\sum_{\ell=1}^{n!}p_{\ell}R_{\ell}$ form an affine subspace of $\mathbb{Q}^{n!}$ of dimension at least $n!-(n^{2}-2n+2)>0$. \end{proof} The preceding proof works just as well if one takes the weighting vectors to lie in $\overline{W}$, the closure of $W$. This allows for voting schemes in which the same point value can be assigned to multiple candidates, as in the ``vote for your favorite $k$'' system given by $\mathbf{w} =[\,\setlength\arraycolsep{3pt}\begin{matrix}n-k & \cdots & n-k & -k & \cdots & -k\end{matrix}\,]^{\mathsf{T}}$. One may impose additional constraints such as all weighting vectors having the same positions tied by restricting to some lower-dimensional face $G\subset\overline{W}$, but then the linear independence condition dictates that there are only $d=\dim(G)$ weighting/results vectors.
To treat this case, take $\mathbf{w}_{1},\ldots,\mathbf{w}_{d}$ to be linearly independent vectors in $G$ and $\mathbf{r}_{1},\ldots,\mathbf{r}_{d}$ to be the desired results vectors in $V_{0}$. Then choose $\mathbf{w}_{d+1},\ldots,\mathbf{w}_{n-1}$ to be any vectors in $\overline{W}$ for which $\mathbf{w}_{1},\ldots,\mathbf{w}_{n-1}$ are linearly independent and let $\mathbf{r}_{d+1},\ldots,\mathbf{r}_{n-1}$ be any results vectors in $V_{0}$. Also, observe that $Q=RF^{-1}$ is explicit and may be easily realized as a linear combination of doubly stochastic matrices. (The entries of $Q$ may be negative, so one must add an appropriate multiple of the all ones matrix and rescale to obtain a doubly stochastic matrix $P$ as in the proof of Proposition~\ref{lin comb}.) As there are algorithms for finding a Birkhoff--von Neumann decomposition of any doubly stochastic matrix \cite{DufUcar}, our method actually provides a construction of the paradoxical profile. \begin{example} Suppose that \begin{equation*} \mathbf{w}_{1}=\begin{bmatrix} 3 \\ 1 \\ -1 \\ -3 \end{bmatrix}, \mathbf{w}_{2}=\begin{bmatrix} 1 \\ 1 \\ 1 \\ -3 \end{bmatrix}, \mathbf{w}_{3}=\begin{bmatrix} 17 \\ 1 \\ -7 \\ -11 \end{bmatrix}, \mathbf{r}_{1}=\begin{bmatrix} -2 \\ -11 \\ 4 \\ 9 \end{bmatrix}, \mathbf{r}_{2}=\begin{bmatrix} 4 \\ 5 \\ 3 \\ -12 \end{bmatrix}, \mathbf{r}_{3}=\begin{bmatrix} 13 \\ -2 \\ -6 \\ -5 \end{bmatrix}. 
\end{equation*} Then \begin{equation*} Q=\begin{bmatrix} 1 & -2 & 4 & 13\\ 1 & -11 & 5 & -2\\ 1 & 4 & 3 & -6\\ 1 & 9 & -12 & -5\end{bmatrix} \begin{bmatrix} 1 & 3 & 1 & 17\\ 1 & 1 & 1 & 1\\ 1 & -1 & 1 & -7 \\ 1 & -3 & -3 & -11\end{bmatrix}^{\displaystyle{-1}} =\dfrac{1}{8} \begin{bmatrix} 27 & -64 & 51 & -6\\ 49 & -146 & 113 & -8\\ -17 & 50 & -21 & -4 \\ -51 & 168 & -135 & 26\end{bmatrix} \end{equation*} and \[ P=\big(1+4\cdot\tfrac{146}{8}\big)^{-1}\big(Q+\tfrac{146}{8} J\big) =\frac{1}{592} \begin{bmatrix} 173 & 82 & 197 & 140 \\ 195 & 0 & 259 & 138 \\ 129 & 196 & 125 & 142 \\ 95 & 314 & 11 & 172\end{bmatrix} \] is doubly stochastic. Using built-in functionality in the computer algebra system \emph{SageMath} \cite{sage}, a Birkhoff--von Neumann decomposition of $P$ is given by \begin{align*} P & =\tfrac{71}{296}R_{(1,4,2,3)}+\tfrac{31}{592}R_{(1,4,3,2)}+\tfrac{43}{148}R_{(2,3,1,4)}+\tfrac{11}{592}R_{(2,3,4,1)} +\tfrac{3}{148}R_{(2,4,3,1)}\\ & \qquad +\tfrac{3}{148}R_{(3,4,1,2)}+\tfrac{117}{592}R_{(3,4,2,1)}+\tfrac{41}{296}R_{(4,1,3,2)}+\tfrac{13}{592}R_{(4,3,1,2)}. \end{align*} Here the subscripts represent permutations in one-line notation. Since $Q=74P-\tfrac{73}{4}J$ and $J=R_{(1,2,3,4)}+R_{(2,1,4,3)}+R_{(3,4,1,2)}+R_{(4,3,2,1)}$, we see that \begin{align*} Q & = -\tfrac{73}{4}R_{(1,2,3,4)}+\tfrac{71}{4}R_{(1,4,2,3)}+\tfrac{31}{8}R_{(1,4,3,2)}-\tfrac{73}{4}R_{(2,1,4,3)} +\tfrac{43}{2}R_{(2,3,1,4)}\\ & \qquad +\tfrac{11}{8}R_{(2,3,4,1)}+\tfrac{3}{2}R_{(2,4,3,1)}-\tfrac{67}{4}R_{(3,4,1,2)}+\tfrac{117}{8}R_{(3,4,2,1)} +\tfrac{41}{4}R_{(4,1,3,2)}\\ & \qquad\qquad+\tfrac{13}{8}R_{(4,3,1,2)}-\tfrac{73}{4}R_{(4,3,2,1)}. \end{align*} Thus one profile for which the weight $\mathbf{w}_{k}$ produces the result $\mathbf{r}_{k}$ consists of $-\tfrac{73}{4}$ votes for $1$ above $2$ above $3$ above $4$, $\frac{71}{4}$ votes for $1$ above $4$ above $2$ above $3$, and so forth. 
\end{example} In typical settings, we are concerned with ordinal rather than cardinal rankings, and Theorem~\ref{thm:main} can then be used to generate significantly more outcomes. To facilitate the ensuing argument, we record the following simple lemma. \begin{lemma} \label{scaling lemma} For any $\mathbf{w}\in W$, $\mathbf{x}\in V_{0}$, there is some rational number $\eta_{0}>0$ such that $\eta\mathbf{w}+\mathbf{x}\in W$ for all $\eta\geq\eta_{0}$. \end{lemma} \begin{proof} Let $m=\min_{1\leq k\leq n-1}(w_{k}-w_{k+1})$ and $M=\max_{1\leq k\leq n}\left|x_{k}\right|$, and set $\eta_{0}=3M/m$ (taking $\eta_{0}=1$, say, if $M=0$). For any $\eta\geq\eta_{0}$, the successive entries of $\eta\mathbf{w}$ differ by at least $3M$, while adding $\mathbf{x}$ changes each entry by at most $M$, so the relative order of the entries is preserved. \end{proof} Also, recall that a conical combination of vectors is a linear combination in which all coefficients are nonnegative, and observe that $W$ is closed under nontrivial conical combinations: if $\mathbf{w}_{1}, \mathbf{w}_{2}, \ldots, \mathbf{w}_{k} \in W$, then $ c_{1}\mathbf{w}_{1}+c_{2} \mathbf{w}_{2} + \cdots + c_{k} \mathbf{w}_{k} \in W$ whenever $c_{1}, \ldots, c_{k} \geq 0$ with $c_{i} \neq 0$ for at least one $i$. A set with this property is called a \emph{convex cone}. Our next theorem implies the result from \cite{Saari} that there exist profiles from which $\big(\frac{n-1}{n}\big)n!$ different ordinal rankings can be obtained by judicious choices of weighting vectors in $W$. \begin{theorem} \label{number rankings lower bound} There exist infinitely many profiles $\mathbf{p}\in\mathbb{Q}^{n!}$ such that for every $\pi\in S_{n}$ satisfying $\pi(n)\neq1$, there is some $\mathbf{w}(\pi)\in W$ with $T_{\mathbf{w}(\pi)}\mathbf{p}\in C_{\pi}$. \end{theorem} \begin{proof} To begin, let $\mathbf{w}_{1},\ldots,\mathbf{w}_{n-1}$ be any linearly independent vectors in $W$. By Lemma~\ref{scaling lemma}, we may scale $\mathbf{w}_{1}$ so that $\mathbf{w}_{1}-n\binom{n+1}{2}\mathbf{w}_{k}\in W$ for $k=2,\ldots,n-1$.
(This will be important later.) Set $\mathbf{f}_{k}=\mathbf{e}_{k}-\mathbf{e}_{k+1}$ for $k=1,\ldots,n-1$ where $\mathbf{e}_{1},\ldots,\mathbf{e}_{n}$ are the standard basis vectors in $\mathbb{Q}^{n}$, and let $\mathbf{p}$ be such that $T_{\mathbf{w}_{k}}\mathbf{p} = \mathbf{f}_{k}$ for each $k$. There are infinitely many such $\mathbf{p}$ by Theorem~\ref{thm:main}. Fix $\pi\in S_{n}$ with $\pi(n) \neq1$. The result will follow if we can find $\alpha_{1},\ldots,\alpha_{n-1}$ so that \vspace{-.6cm} \[ \mathbf{w}=\mathbf{w}(\pi)=\sum_{k=1}^{n-1}\alpha_{k}\mathbf{w}_{k} \] belongs to $W$ and \vspace{-.6cm} \[ \mathbf{s}=\mathbf{s}(\pi)=T_{\mathbf{w}}\mathbf{p}=Q_{\mathbf{p}}\mathbf{w}=\sum_{k=1}^{n-1}\alpha_{k}\mathbf{f}_{k} \] belongs to $C_{\pi}$. Now for $1\leq i<j\leq n$, define $\mathbf{f}_{ij}=\mathbf{e}_{i}-\mathbf{e}_{j}=\sum_{k=i}^{j-1}\mathbf{f}_{k}$ and $\mathbf{w}_{ij}=\sum_{k=i}^{j-1}\mathbf{w}_{k}$. The latter are contained in $W$ since it is a convex cone. For any collection of numbers $\{\beta_{ij}\}_{i<j}$, if we define $\alpha_{k}=\sum_{i=1}^k \sum_{j=k+1}^n \beta_{ij}$, then we have \[ \sum_{i<j}\beta_{ij}\mathbf{f}_{ij}=\sum_{i<j}\beta_{ij}\sum_{k=i}^{j-1}\mathbf{f}_{k} =\sum_{k=1}^{n-1}\alpha_{k}\mathbf{f}_{k} \] and \vspace{-.6cm} \[ \sum_{i<j}\beta_{ij}\mathbf{w}_{ij}=\sum_{k=1}^{n-1}\alpha_{k}\mathbf{w}_{k}. \] Accordingly, it suffices to construct $\mathbf{w}=\sum_{i<j}\beta_{ij}\mathbf{w}_{ij}\in W$ with $\mathbf{s}=Q_{\mathbf{p}}\mathbf{w}=\sum_{i<j}\beta_{ij}\mathbf{f}_{ij}\in C_{\pi}$. Note that each $\mathbf{w}_{ij}$ is in $W$, so $\mathbf{w}$ will be as well whenever the $\beta_{ij}$'s are nonnegative (and not all $0$). First consider the case in which $\pi(n)=n$. Then we can take $\beta_{kn}=n-\pi^{-1}(k)$ for $k=1,\ldots,n-1$ and $\beta_{ij}=0$ for $j\neq n$. As all $\beta_{ij}$ are nonnegative, $\mathbf{w}=\sum_{i<j}\beta_{ij}\mathbf{w}_{ij}\in W$. 
To see that $\mathbf{s} \in C_{\pi}$, we observe that $s_k = n-\pi^{-1}(k)$ for $k=1, \ldots, n-1$ and $s_n = -\sum_{k=1}^{n-1}s_{k}$. This gives the $k^{\text{th}}$ place candidate $s_{\pi(k)}=n-\pi^{-1}(\pi(k))=n-k>0$ points for $k=1,\ldots,n-1$ and gives $-\binom{n}{2}<0$ points to candidate $n$. If $\pi(n) = b$ with $1<b<n$, let $\widetilde{\pi}$ be the permutation formed from $\pi$ by moving $n$ to last place (so $\widetilde{\pi}(i)=\pi(i)$ for $i<\pi^{-1}(n)$, $\widetilde{\pi}(i)=\pi(i+1)$ for $\pi^{-1}(n)\leq i<n$, and $\widetilde{\pi}(n)=n$), and let $\widetilde{\mathbf{w}}=\mathbf{w}(\widetilde{\pi})$, $\widetilde{\mathbf{s}}=\mathbf{s}(\widetilde{\pi})$ be constructed as above. Now set $\mathbf{w}=\widetilde{\mathbf{w}}-\gamma_{bn}\mathbf{w}_{bn}$ where \vspace{-.6cm} \[ \gamma_{bn}=\binom{n}{2}+n-\pi^{-1}(n)+\frac{1}{2}. \] Then $\mathbf{s}=Q_{\mathbf{p}}\mathbf{w}=\widetilde{\mathbf{s}}-\gamma_{bn}\mathbf{f}_{bn}$, and we will be done upon establishing that $\mathbf{s} \in C_{\pi}$ and $\mathbf{w} \in W$. We first note that $\widetilde{\mathbf{s}} \in C_{\widetilde{\pi}}$, and $\mathbf{s}$ differs from $\widetilde{\mathbf{s}}$ by adding $\gamma_{bn}$ in position $n$ and subtracting $\gamma_{bn}$ in position $b$. As a result, candidate $n$ now has $n-\pi^{-1}(n)+\frac{1}{2}$ points, candidate $b$ has a negative number of points, and all other candidates have the same point values as in $\widetilde{\mathbf{s}}$. It follows that $\mathbf{s} \in C_\pi$. Now we turn our attention to $\mathbf{w}$, showing that it can be written as a linear combination of vectors in $W$ with nonnegative coefficients. Recall that $\mathbf{w}_{1}$ is scaled so that $\mathbf{w}_{1}-n\binom{n+1}{2}\mathbf{w}_{k}\in W$ for $k=2,\ldots,n-1$. Also, since $1<b<n$, we have \[ (n-b)\gamma_{bn}=(n-b)\left[\binom{n+1}{2}-\pi^{-1}(n)+\frac{1}{2}\right]<n\binom{n+1}{2}.
\] Thus for each $1<k<n$, $\mathbf{w}_{1}-(n-b)\gamma_{bn}\mathbf{w}_{k}\in W$ since it is obtained by adding a positive multiple of $\mathbf{w}_{k}$ to $\mathbf{w}_{1}-n\binom{n+1}{2}\mathbf{w}_{k}$. This shows that \[ \mathbf{w}_{1}-\gamma_{bn}\mathbf{w}_{bn}=\mathbf{w}_{1}-\gamma_{bn}\sum_{k=b}^{n-1}\mathbf{w}_{k} =\frac{1}{n-b}\sum_{k=b}^{n-1}\Big(\mathbf{w}_{1}-(n-b)\gamma_{bn}\mathbf{w}_{k}\Big)\in W. \] To complete the proof, note that $\widetilde{\mathbf{w}}=\sum_{k=1}^{n-1}\big(n-\widetilde{\pi}^{-1}(k)\big)\mathbf{w}_{k}$ and $n-\widetilde{\pi}^{-1}(k)\geq1$ for all $k<n$, so \begin{align*} \mathbf{w} & =\widetilde{\mathbf{w}}-\gamma_{bn}\mathbf{w}_{bn}\\ & =(n-\widetilde{\pi}^{-1}(1)-1)\mathbf{w}_{1} +\sum_{k=2}^{n-1}\big(n-\widetilde{\pi}^{-1}(k)\big)\mathbf{w}_{k} +\big(\mathbf{w}_{1}-\gamma_{bn}\mathbf{w}_{bn}\big) \end{align*} is a conical combination of vectors in $W$ and so belongs to $W$. \end{proof} \begin{remark} The $\pi(n)=n$ part of the above argument can be interpreted as saying that if we have $n-1$ ``serious candidates,'' then by introducing ``dummy candidate'' $n$, who is assured to lose, there are profiles that achieve any relative ordering of the serious candidates by choosing appropriate weighting vectors. Note that one can always left-multiply $Q_{\mathbf{p}}$ by a permutation matrix to relabel the candidates, so there is nothing special about $n$. \end{remark} It is often more convenient to work with $Q_{\mathbf{p}}$ than $T_{\mathbf{w}}$, so we take a moment to observe that if one only cares about the ordinal rankings of candidates, then it can always be assumed that each profile consists of nonnegative integers or that the matrix $Q_{\mathbf{p}}$ is doubly stochastic. 
\begin{prop} \label{convenient p} Using the notation from above, \begin{enumerate} \item For any $\mathbf{p}\in\mathbb{Q}^{n!}$, there exists $\widehat{\mathbf{p}}\in\mathbb{N}_{0}^{n!}$ with $Q_{\widehat{\mathbf{p}}}\mathbf{w}$ lying in the same face as $Q_{\mathbf{p}}\mathbf{w}$ for all $\mathbf{w}\in \overline{W}$. \item For any $\mathbf{p}\in\mathbb{Q}^{n!}$, there exists $\widetilde{\mathbf{p}}\in\mathbb{Q}^{n!}$ such that $\widetilde{p}_{\ell}\geq 0$ for all $\ell$, $\sum_{\ell=1}^{n!}\widetilde{p}_{\ell}=1$, and $Q_{\widetilde{\mathbf{p}}}\mathbf{w}$ lies in the same face as $Q_{\mathbf{p}}\mathbf{w}$ for all $\mathbf{w}\in \overline{W}$. \end{enumerate} \end{prop} \begin{proof} For the first claim, set $x=\min_{\ell}p_{\ell}$, let $d$ be the least common denominator of $p_{1},\ldots,p_{n!}$, and define $\widehat{\mathbf{p}}=d(\mathbf{p}-x\mathbf{1})\in\mathbb{N}_{0}^{n!}$. If $Q_{\mathbf{p}}\mathbf{w}=T_{\mathbf{w}}\mathbf{p}=\mathbf{r}$, then \[ Q_{\widehat{\mathbf{p}}}\mathbf{w}=T_{\mathbf{w}}d(\mathbf{p}-x\mathbf{1}) =dT_{\mathbf{w}}\mathbf{p}-dxT_{\mathbf{w}}\mathbf{1}=d\mathbf{r} \] since $T_{\mathbf{w}}\mathbf{1}=\mathbf{0}$. Thus $Q_{\widehat{\mathbf{p}}}\mathbf{w}$ is a positive multiple of $Q_{\mathbf{p}}\mathbf{w}$ and so lies in the same face. For the second claim, set $m=\min_{i,j}Q_{\mathbf{p}}(i,j)$, $c=\big(\sum_{\ell=1}^{n!}p_{\ell}-mn\big)^{-1}>0$, and $\widetilde{Q}_{\mathbf{p}}=c(Q_{\mathbf{p}}-mJ)$ where $J$ is the all ones matrix. (Here we are assuming that $Q_{\mathbf{p}}$ does not have all entries equal. If it does, one can just take $\widetilde{Q}_{\mathbf{p}}=\tfrac{1}{n}J$.) Then $\widetilde{Q}_{\mathbf{p}}$ is nonnegative with rows and columns summing to $1$, so the Birkhoff--von Neumann theorem guarantees the existence of a nonnegative $\widetilde{\mathbf{p}}\in\mathbb{Q}^{n!}$ with entries summing to $1$ that satisfies $Q_{\widetilde{\mathbf{p}}}=\widetilde{Q}_{\mathbf{p}}$. 
The claim follows since \[ \widetilde{Q}_{\mathbf{p}}\mathbf{w}=c(Q_{\mathbf{p}}-mJ)\mathbf{w} =cQ_{\mathbf{p}}\mathbf{w}-mcJ\mathbf{w}=cQ_{\mathbf{p}}\mathbf{w} \] lies in the same face as $Q_{\mathbf{p}}\mathbf{w}$. \end{proof} The first part of Proposition~\ref{convenient p} is only helpful inasmuch as one might object to having negative or fractional votes cast. The second part is useful from a more mathematical perspective: since $W$ is a convex cone, its image under $Q_{\mathbf{p}}$ is the same as its image under the doubly stochastic matrix $Q_{\widetilde{\mathbf{p}}}$. Thus we can study possible ordinal outcomes of positional voting procedures by looking at the image of $W$ (or its closure $\overline{W}$) under multiplication by doubly stochastic matrices. An example of the utility of this observation is that, by Proposition~\ref{lin comb}, every paradoxical outcome of a positional voting procedure can be realized by a profile supported on any fixed set of $(n-1)^{2}+1$ linearly independent permutation matrices. In other words, one needs only this many distinct preference orders among the electorate. In fact, the construction from Proposition~\ref{convenient p} shows that one may take the doubly stochastic matrix to have at least one entry equal to zero (or all entries equal), and a result from \cite{Brua} then implies that a Birkhoff--von Neumann decomposition of size at most $(n-1)^{2}$ exists. This is the content of Theorem 4 in \cite{Saari}. The final result of this section uses the doubly stochastic matrix perspective to show that a profile can give rise to at most $\frac{n-1}{n}n!$ strict societal rankings; Theorem~\ref{number rankings lower bound} shows that this is sharp. The result was first proved in \cite{Saari} by analyzing certain simplicial constructs geometrically.
Our strategy is to show that the set of possible societal rankings for any profile lies in a closed half-space whose boundary contains the origin, and then argue that the complement of this half-space properly contains at least $(n-1)!$ of the $n!$ chambers $C_\pi$ (corresponding to impossible rankings). \begin{theorem} \label{number rankings upper bound} \textnormal{(\cite[Theorem 3a]{Saari})} For any $\mathbf{p}\in\mathbb{Q}^{n!}$, there are at most $n!-(n-1)!$ permutations $\pi\in S_{n}$ such that $T_{\mathbf{w}}\mathbf{p}\in C_{\pi}$ for some $\mathbf{w}\in\overline{W}$. \end{theorem} \begin{proof} Given any profile $\mathbf{p}\in\mathbb{Q}^{n!}$, there is a doubly stochastic matrix $Q_{\widetilde{\mathbf{p}}}$ such that the possible strict societal rankings arising from $\mathbf{p}$ are precisely those $\pi\in S_{n}$ for which $C_{\pi}\cap Q_{\widetilde{\mathbf{p}}}\overline{W}\neq\emptyset$. Equivalently, since doubly stochastic matrices map sum-zero vectors to sum-zero vectors, writing $T:V_{0}\rightarrow V_{0}$ for the linear map defined by $T(\mathbf{v})=Q_{\widetilde{\mathbf{p}}}\mathbf{v}$ and denoting $\widetilde{C}_{\pi}=C_{\pi}\cap V_{0}$, $\pi$ is a possible ranking for $\mathbf{p}$ if and only if $\widetilde{C}_{\pi}\cap T\left(\overline{W}\right)\neq\emptyset$. Now $T\left(\overline{W}\right)$ is the linear image of a convex set and thus is convex. Also, $T\left(\overline{W}\right)$ is a proper subset of $V_{0}$ because if $T$ is not bijective then $T\left(\overline{W}\right)\subseteq T\left(V_{0}\right)\subset V_{0}$, and if $T$ is bijective then \[ T\left(\overline{W}\right)\cap-T\left(\overline{W}\right) =T\left(\overline{W}\right)\cap T\left(-\overline{W}\right)\subseteq T(\overline{W}\cap-\overline{W}) =\{\mathbf{0}\}. 
\] This means that there is some $\mathbf{v}\in V_{0}\setminus T\left(\overline{W}\right)$, hence $\varepsilon \mathbf{v}\in V_{0}\setminus T\left(\overline{W}\right)$ for every $\varepsilon>0$, so $\mathbf{0}\in\partial\,T\left(\overline{W}\right)$. Accordingly, the supporting hyperplane theorem shows that there is a hyperplane $H$ through the origin in $V_{0}$ with $T\left(\overline{W}\right)$ contained entirely in the associated closed positive half-space \cite{Lang}. In particular, there is a nonzero $\mathbf{h}\in V_{0}$ such that $\pi$ is a possible ranking for $\mathbf{p}$ only if there is an $\mathbf{r}\in \widetilde{C}_{\pi}$ such that $\langle \mathbf{h},\mathbf{r}\rangle \geq 0$. We will establish the result by showing that this constraint precludes $(n-1)!$ rankings from being possible outcomes associated with $\mathbf{p}$. To this end, observe that $\left\langle \mathbf{h},\mathbf{r}\right\rangle <0$ for all $\mathbf{r}\in \widetilde{C}_{\pi}$ if and only if $\left\langle \pi\mathbf{h},\mathbf{w}\right\rangle <0$ for all $\mathbf{w}\in W$ where $\pi\mathbf{h}=[\,\setlength\arraycolsep{3pt}\begin{matrix} h_{\pi^{-1}(1)} & \cdots & h_{\pi^{-1}(n)}\end{matrix}\,]^\mathsf{T}$, so it suffices to show that there are at least $(n-1)!$ ways to permute the entries of $\mathbf{h}$ so that it has negative inner product with every vector in $W$. Now if $\mathbf{v},\mathbf{w}\in V_{0}$, then \begin{equation} \label{eq: inner product with partial sums} \begin{aligned} \hspace{-.15cm}\left\langle \mathbf{v},\mathbf{w}\right\rangle & =\sum_{k=1}^{n}v_{k}w_{k} =v_{1}(w_{1}-w_{2})+(v_{1}+v_{2})(w_{2}-w_{3})\\ & \:\;+(v_{1}+v_{2}+v_{3})(w_{3}-w_{4})+\cdots+(v_{1}+\cdots+v_{n-1})(w_{n-1}-w_{n}), \end{aligned} \end{equation} as there is cancellation among successive terms and $-w_{n}(v_{1}+\cdots+v_{n-1})=w_{n}v_{n}$.
If $\mathbf{w}\in W$, then $w_{k}-w_{k+1}>0$ for each $k=1,\ldots,n-1$, so \eqref{eq: inner product with partial sums} shows that $\left\langle \mathbf{v},\mathbf{w}\right\rangle <0$ whenever the partial sums satisfy $\sum_{k=1}^{m}v_{k}\leq0$ for $m=1,\ldots,n-1$. (At least one of these inequalities would have to be strict for $\mathbf{v}\in V_{0}\setminus\{\mathbf{0}\}$.) Combining these observations, we conclude that there are at least $(n-1)!$ impossible rankings as long as there are $(n-1)!$ ways to permute the entries of $\mathbf{h}$ so that the partial sums are all nonpositive. To see that this is so, let $\pi'\in S_{n-1}$ and define $\pi\in S_{n}$ by $\pi(k)=\pi'(k)$ for $k<n$ and $\pi(n)=n$. Define $\pi_{m}\in S_{n}$ by $\pi_{m}(k)=\pi(k+m)$ for $k=1,\ldots,n$ and $m=0,1,\ldots,n-1$, where the addition in the argument is performed modulo $n$. The assertion will follow if we can show that at least one of these cyclic shifts of $\pi$ has the property that $s_{k}(m):=\sum_{j=1}^{k}h_{\pi_{m}(j)}\leq0$ for $k=1,\ldots,n-1$. For this, choose $m$ so that $s_{m}(0)=\sum_{j=1}^{m}h_{\pi(j)}$ is maximal. We claim that $s_{k}(m)\leq0$ for all $1\leq k<n$. Indeed, for $1\leq k\leq n-m$, \[ s_{k}(m)=\sum_{j=m+1}^{m+k}h_{\pi(j)}\leq 0 \] by maximality of $s_{m}(0)$. Also, $\mathbf{h}\in V_{0}$ implies that $\sum_{j=m+1}^{n}h_{\pi(j)}=-s_{m}(0)$, so for $n-m<\ell\leq n-1$, we have \[ s_{\ell}(m)=\sum_{j=m+1}^{n}h_{\pi(j)}+\sum_{i=1}^{\ell+m-n}h_{\pi(i)}\leq\sum_{j=m+1}^{n}h_{\pi(j)}+s_{m}(0)=0. \] This completes the claim and the proof. \end{proof} \begin{remark} \label{rem: hyperplane} Demonstrating that at least $(n-1)!$ chambers lie on the opposite side of $H$ from $T\left(\overline{W}\right)$ was a bit involved because we sought to keep the discussion self-contained.
However, if one brings the full power of the theory of hyperplane arrangements to bear on the problem (which is one of the advantages of introducing this perspective), then much more efficient arguments are possible. For instance, Corollary 14.2 and equation (6.9) in \cite{AguMah} give the number of chambers in a \emph{generic half-space} of the braid arrangement---that is, a half-space defined by a hyperplane through the origin that does not contain any one-dimensional faces---as $(n-1)!$. Since $H$ can be perturbed so as to be generic and this can only decrease the number of chambers properly contained on either side, the desired inequality follows immediately. \end{remark} \section{Choosing Weighting Vectors} \label{Weighting Vectors} In this final section, we characterize the possible results vectors arising from a given profile $\mathbf{p}$ as the conical hull of a set of vectors constructed from the columns of $Q_{\mathbf{p}}$. The construction is surprisingly simple and provides a straightforward procedure to reverse engineer an election by selecting desirable weights for a given profile. We begin by providing a convenient description of the space of (nonstrict) weighting vectors $\overline{W}$. 
Define $\mathbf{v}_{1},\ldots,\mathbf{v}_{n-1}\in\mathbb{Q}^{n}$ by $\mathbf{v}_{k}=\frac{k}{n}\mathbf{1}-\sum_{j=n-k+1}^{n}\mathbf{e}_{j}$, so that \begin{align*} \mathbf{v}_{1} & = \big[\,\setlength\arraycolsep{3pt}\begin{matrix} \frac{1}{n} & \cdots & \frac{1}{n} & -\frac{n-1}{n} \end{matrix}\,\big]^{\mathsf{T}}\\ \mathbf{v}_{2} & =\big[\,\setlength\arraycolsep{3pt}\begin{matrix} \frac{2}{n} & \cdots & \frac{2}{n} & -\frac{n-2}{n} & -\frac{n-2}{n} \end{matrix}\,\big]^{\mathsf{T}}\\ & \qquad\qquad\qquad\quad \vdots \\ \mathbf{v}_{n-2} & =\big[\,\setlength\arraycolsep{3pt}\begin{matrix} \frac{n-2}{n} & \frac{n-2}{n} & -\frac{2}{n} & \cdots & -\frac{2}{n} \end{matrix}\,\big]^{\mathsf{T}}\\ \mathbf{v}_{n-1} & =\big[\,\setlength\arraycolsep{3pt}\begin{matrix} \frac{n-1}{n} & -\frac{1}{n} & \cdots & -\frac{1}{n} \end{matrix}\,\big]^{\mathsf{T}}. \end{align*} \begin{prop} \label{basis} Let $\mathbf{v}_{1},\ldots,\mathbf{v}_{n-1}$ be as above. Then \[ \overline{W}=\big\{c_{1}\mathbf{v}_{1}+\cdots+c_{n-1}\mathbf{v}_{n-1}:\, c_{1},\ldots,c_{n-1}\geq 0\big\}. \] \end{prop} \begin{proof} Clearly $\mathbf{v}_{k}\in\overline{W}$ for $k=1,\ldots,n-1$, and thus so is any conical combination thereof. Conversely, given any $\mathbf{w} \in\overline{W}$, we have \[ \mathbf{w}= \sum_{k=1}^{n-1} a_{k} \mathbf{v}_{k}, \] where \[ a_{k} = w_{n-k}-w_{n-k+1}\geq 0. 
\] Indeed, the $i^{\text{th}}$ coordinate of $\sum_{k=1}^{n-1} a_{k} \mathbf{v}_{k}$ is \begin{align*} \frac{1}{n}\sum_{j=1}^{n-i}ja_{j} & - \frac{1}{n}\sum_{j=n-i+1}^{n-1}a_{j}(n-j)\\ & = \frac{1}{n}\sum_{j=1}^{n-1}j(w_{n-j}-w_{n-j+1})-\sum_{j=n-i+1}^{n-1}(w_{n-j}-w_{n-j+1}) \\ & = \frac{1}{n}\Big((n-1)w_{1}-\sum_{k=2}^{n}w_{k}\Big) - \sum_{k=1}^{i-1}(w_{k}-w_{k+1}) \\ & = \frac{1}{n}\Big(nw_{1}-\sum_{k=1}^{n}w_{k}\Big) - (w_{1}-w_{i})=w_{1}-(w_{1}-w_{i})=w_{i}.\qedhere \end{align*} \end{proof} Now for any profile $\mathbf{p}\in\mathbb{Q}^{n!}$, there is an $n\times n$ matrix $Q_{\mathbf{p}}$ with all row and column sums equal to $N$ such that the possible ordinal outcomes are those whose associated faces intersect $Q_{\mathbf{p}}\overline{W}$. As the vectors in $\overline{W}$ are precisely the conical combinations of $\mathbf{v}_{1},\ldots,\mathbf{v}_{n-1}$, $Q_{\mathbf{p}}\overline{W}$ consists of the conical combinations of $\mathbf{s}_{1},\ldots,\mathbf{s}_{n-1}$ where $\mathbf{s}_{k}=Q_{\mathbf{p}}\mathbf{v}_{k}$. Writing $Q_{\mathbf{p}}=\big[\,\setlength\arraycolsep{3pt}\begin{matrix}\mathbf{q}_{1} & \cdots & \mathbf{q}_{n}\end{matrix}\,\big]$, we see that \[ \mathbf{s}_{k}=Q_{\mathbf{p}}\Big(\frac{k}{n}\mathbf{1}-\sum_{j=n-k+1}^{n}\mathbf{e}_{j}\Big) = \frac{k}{n}Q_{\mathbf{p}}\mathbf{1} - \sum_{j=n-k+1}^{n}Q_{\mathbf{p}}\mathbf{e}_{j} = \frac{kN}{n}\mathbf{1} - \sum_{j=n-k+1}^{n}\mathbf{q}_{j}. \] Since adding multiples of $\mathbf{1}$ will not change the face a vector lies in, it suffices to consider conical combinations of \[ \mathbf{t}_{k}=\mathbf{s}_{n-k}+\frac{Nk}{n}\mathbf{1}=N\mathbf{1} - \sum_{j=k+1}^{n}\mathbf{q}_{j} = \sum_{j=1}^{k}\mathbf{q}_{j},\quad k=1,\ldots,n-1. \] Also, scaling by a positive constant has no effect on which face a vector lies in, so the preceding observations can be stated as follows. 
\begin{theorem} \label{convex hull} Let $\mathbf{p}\in\mathbb{Q}^{n!}$ be any profile and let $Q_{\mathbf{p}}=\sum_{\ell=1}^{n!}p_{\ell}R_{\ell}$ be given in column form by $Q_{\mathbf{p}}=\big[\,\setlength\arraycolsep{3pt}\begin{matrix}\mathbf{q}_{1} & \cdots & \mathbf{q}_{n}\end{matrix}\,\big]$. Define $\mathbf{t}_{k}=\sum_{j=1}^{k}\mathbf{q}_{j}$. Then the possible outcomes for a positional voting procedure with input $\mathbf{p}$ are those whose corresponding faces intersect the convex hull of $\mathbf{t}_{1},\ldots,\mathbf{t}_{n-1}$. \end{theorem} This suggests a way for a nefarious election official to obtain the most desirable possible outcome for themselves: Given the preferences of the electorate, construct the matrix $Q_{\mathbf{p}}$ and take $\mathbf{t}_{k}$ to be the sum of its first $k$ columns for $k=1,\ldots,n-1$. Then choose the most preferable outcome whose face intersects the convex hull of $\mathbf{t}_{1},\ldots,\mathbf{t}_{n-1}$, pick a point $\mathbf{r}$ in this intersection, and decompose it as $\mathbf{r}=\sum_{k=1}^{n-1}b_{k}\mathbf{t}_{k}$. The favored outcome is assured by declaring the weighting vector to be $\mathbf{w}=\sum_{k=1}^{n-1}b_{k}\mathbf{v}_{n-k}$. In practice, this could be accomplished by repeatedly generating a random probability vector $\mathbf{b} = [\,\setlength\arraycolsep{3pt}\begin{matrix} b_{1} & \cdots & b_{n-1}\end{matrix}\,]^\mathsf{T}$ and recording the ranking corresponding to the face containing $\mathbf{s}=T\mathbf{b}$, $T = \big[\,\setlength\arraycolsep{3pt}\begin{matrix}\mathbf{t}_{1} & \cdots & \mathbf{t}_{n-1}\end{matrix}\,\big]$. (This is just a matter of keeping track of the indices when $\mathbf{s}$ is sorted in descending order.) After a sufficiently large number of iterations, one should have a nearly exhaustive list of possible ordinal rankings to choose from and can construct the desired weighting vector from the vector $\mathbf{b}$ corresponding to the favorite. 
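The sampling procedure just described is straightforward to implement. The sketch below (Python with NumPy; the matrix is the $Q_{\mathbf{p}}$ from Example~\ref{election}, and the Dirichlet sampler is merely one convenient way to draw probability vectors) collects the ordinal rankings reachable in this way:

```python
import numpy as np

# Q_p for the profile of Example [election]; the sum of its first
# k columns gives t_k, k = 1, ..., n-1.
Q = np.array([[ 5,  7,  8, 18],
              [16,  0, 15,  7],
              [ 0, 31,  7,  0],
              [17,  0,  8, 13]])

n = Q.shape[0]
T = np.cumsum(Q, axis=1)[:, :n - 1]   # columns t_1, ..., t_{n-1}

def ranking(r):
    """Candidates listed by descending score (1-based indices)."""
    return tuple(int(i) + 1 for i in np.argsort(-r))

# Repeatedly draw a probability vector b and record the ranking
# of s = T b; ties occur with probability zero for generic draws.
rng = np.random.default_rng(1)
reachable = {ranking(T @ rng.dirichlet(np.ones(n - 1)))
             for _ in range(20000)}

print(sorted(reachable))
```

By Theorem~\ref{number rankings upper bound}, the collected set can contain at most $n!-(n-1)!$ rankings, so for moderate $n$ the sampling terminates long before exhausting $S_{n}$.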
\begin{example} To illustrate this process, return to the profile given in Example~\ref{election}. We have \[ Q_{\mathbf{p}} = \begin{bmatrix}5 & 7 & 8 & 18 \\ 16 & 0 & 15 & 7 \\ 0 & 31 & 7 & 0 \\ 17 & 0 & 8 & 13 \end{bmatrix}. \] In Example~\ref{election}, we saw that using the Borda count, which corresponds to weighting vector $[1.5, 0.5, -0.5, -1.5]^\mathsf{T}$, the resulting societal ranking was $(3, 2, 4, 1)$, but using plurality, which corresponds to weighting vector $[0.75, -0.25, -0.25, -0.25]^\mathsf{T}$, the resulting societal ranking was $(4, 2, 1, 3)$. Suppose instead we want candidate 2 to win the election. We add the first $k$ columns of $Q_{\mathbf{p}}$ for $k=1, 2, 3$ to get \[ \mathbf{t}_{1} = \begin{bmatrix} 5 \\ 16 \\ 0 \\ 17 \end{bmatrix}, \mathbf{t}_{2} = \begin{bmatrix} 12 \\ 16 \\ 31 \\ 17 \end{bmatrix}, \mathbf{t}_{3} = \begin{bmatrix} 20 \\ 31 \\ 38 \\ 25 \end{bmatrix}. \] Testing a few probability vectors reveals that $b_1 = 0.6, b_2 = 0.2,$ and $b_3=0.2$ yields \[ \mathbf{r} = b_{1} \mathbf{t}_{1} + b_{2} \mathbf{t}_{2} + b_{3} \mathbf{t}_{3} = \begin{bmatrix} 9.4 \\ 19 \\ 13.8 \\ 18.6 \end{bmatrix}. \] Thus, we can achieve societal ranking $(2, 4, 3, 1)$ using the weighting vector \begin{equation*} \mathbf{w} = b_{1} \mathbf{v}_{3} + b_{2} \mathbf{v}_{2} + b_{3} \mathbf{v}_{1} = 0.6 \begin{bmatrix} 0.75 \\ -0.25 \\ -0.25 \\ -0.25 \end{bmatrix} + 0.2 \begin{bmatrix} 0.5 \\ 0.5 \\ -0.5 \\-0.5 \end{bmatrix} + 0.2 \begin{bmatrix} 0.25 \\ 0.25 \\ 0.25 \\ -0.75 \end{bmatrix} = \begin{bmatrix} 0.6 \\ 0 \\ -0.2 \\ -0.4 \end{bmatrix}. \end{equation*} \end{example} \section*{Acknowledgment} The authors wish to thank Marcelo Aguiar and Dan Katz for enlightening conversations. They are also grateful to the anonymous referees whose thoughtful comments improved the exposition substantially. \bibliographystyle{plain}
cond-mat/9902168
\section{Introduction} \subsection{Definition of the problem} We consider in this paper a system that is described by a Hamiltonian ${\cal H}(Q,P;x)$ where $(Q,P)$ are canonical variables and $x$ is a parameter. It is assumed that ${\cal H}(Q,P;x)$ with $x=\const$ generates classically chaotic motion. We are mainly interested in the case of time-dependent $x(t)$. However, it is assumed that $\dot{x}=V$ is a classically small velocity. The notion of classical slowness will be defined in Sec.\ref{s_flc}. The theory that we are going to present is quite general. In some particular applications $x(t)$ may represent, for example, a time-dependent electric field. However, the theory is best illustrated by considering the `piston' example: In this example $x$ represents the position of a small rigid body that is translated inside a large cavity, and $(Q,P)$ are the coordinates of a tiny gas particle. See Fig.\ref{f_pistons}. \begin{figure} \begin{center} \leavevmode \epsfysize=2.2in \epsffile{piston_ab.eps} \end{center} \caption{\protect\footnotesize The `piston' example: The slow degree of freedom is the `piston', and the `bath' consists of one gas particle. In this paper the position of the `piston' $x(t)=Vt$ is treated as a classical parameter. Dissipation means a systematic growth of the bath-energy. For simplicity we assume throughout the paper that the conservative work is zero. This is not the case in the right illustration. } \label{f_pistons} \end{figure} It is assumed that initially the system is characterized by some energy distribution $\rho(E)$. In particular we may assume a microcanonical preparation. For $V=0$ energy is a constant of the motion, and therefore the energy distribution $\rho(E)$ will not change as a function of time. On the other hand, for $V\ne 0$ the energy will be re-distributed and $\rho(E)$ will become time dependent. In this paper we are interested in the study of this time dependence.
Of particular interest is the time dependence of the {\em first} and of the {\em second} moments. A systematic increase of the {\em average} energy has, by definition, the meaning of dissipation. In case of the `piston' example, dissipation means that the gas particle is being `heated up'. \subsection{Restrictive sense of `Quantum dissipation'} The subject of this paper is the quantum-mechanical (QM) theory of energy-spreading and dissipation, as defined in the previous subsection. In short we may say that we are interested in the theory of {\em Quantum Dissipation}. However, it is important to realize that we are using the term `Quantum Dissipation' in a quite {\em restrictive sense}. This is mainly for two reasons: (a) We assume a classical driving force; (b) We are not considering a many-body bath. Note that an infinite number of degrees-of-freedom is not important for having stochastic behavior: this is the main idea behind the term `chaos' when applied to dynamical systems. We can have dissipation even if $(Q,P)$ represents a `bath' with only a few degrees of freedom. The interest in Quantum Dissipation is very old \cite{FV,ZG,textbook,wall,koonin,MS,CL}. However, in most of the literature, the term `Quantum Dissipation' is used in a more {\em general sense}. Namely, $x$ becomes a dynamical variable, and one looks for its reduced dynamics. Thus, in most of the literature, dissipation-of-energy becomes only one aspect of a much more complicated problem. The `grand problem' includes, besides `dissipation', other issues such as `dephasing' and `thermalization'. It should also be noticed that the standard literature usually adopts an effective-bath approach (see subsection 1.4) or other effective formulations \cite{textbook} that do not necessarily reflect the actual dynamics of the bath degrees-of-freedom. Important exceptions are works such as \cite{kolovsky} and \cite{srednicki}. Of particular interest is the `piston' model.
If the `piston' is treated as a dynamical object, then its reduced dynamics is called `quantal Brownian motion' (QBM). According to our (restricted) definition, `dissipation' means systematic irreversible growth of the bath-energy. In case of an un-driven Brownian particle, the `dissipation' is balanced eventually by `noise' leading to `thermalization'. In the QM case the issue of `irreversibility' is more complicated because we may have `recurrences'. The relevant time scale for these recurrences is the Heisenberg time for the combined BrownianParticle-GasParticle system. This latter time scale may be extremely large if the Brownian particle has a large mass. In this paper $x$ is not a dynamical degree of freedom, and therefore the `recurrences' that have been mentioned in the previous paragraph are not an issue. (It is as if we assume that the `piston' has an infinite mass, hence the frequent use of the term `moving walls'). For $V=0$ the Hamiltonian is time-independent, and we will have recurrences that are associated with the dynamics of the GasParticle (alone). The remnant of this latter type of recurrences is QM-adiabaticity, which we are going to discuss soon. Another type of `recurrences' is associated with {\em periodic} driving and is discussed in Sec.\ref{s8}. It should become clear from the above discussion, and subsection 1.6 below, that `recurrences' are not an important issue in this paper. \subsection{The classical theory of dissipation} The classical understanding of the dissipation-process is based mainly on the works of \cite{wall,koonin,ott,wilk1,jar} and followers. We are going to sketch briefly the main idea of the classical theory, and the associated derivation of the fluctuation-dissipation (${\cal F\!D}$) relation. In the {\em time-independent case} ($V=0$) the motion of $(Q(t),P(t))$ is irregular due to the chaotic nature of the dynamics. We shall denote the ergodic time by $t_{\tbox{erg}}$.
We can define a fluctuating quantity ${\cal F}(t) = -(\partial {\cal H} / \partial x)$ that has stochastic features. The intensity of these fluctuations will be denoted by $\nu$. In the classical case ${\cal F}(t)$ is essentially like noise whose correlation time $\tau_{\tbox{cl}}$ is smaller than or equal to $t_{\tbox{erg}}$. In the {\em time-dependent case} ($V\ne0$) energy is not a constant of the motion and consequently the energy distribution $\rho(E)$ becomes time dependent. It is argued that for $t\gg t_{\tbox{erg}}$ the energy distribution satisfies a diffusion equation. The energy-dependent diffusion coefficient will be denoted by $D_{\tbox{E}}$. It turns out that quite generally $D_{\tbox{E}} = \half \nu V^2$. Associated with this diffusion is a systematic growth of the average energy. This systematic growth of energy is due to the $E$-dependence of the diffusion process. The rate of energy growth will be denoted by $\dot{{\cal Q}}$. It can be written as $\dot{{\cal Q}}=\mu V^2$. The considerations above lead to the conclusion that in the classical case the dissipation is of ohmic nature ($\dot{{\cal Q}}\propto V^2$). The dissipation coefficient is denoted by $\mu$. It is implied that the fluctuating quantity ${\cal F}(t)$ has a non-zero average, namely $\langle {\cal F} \rangle = -\mu V$. In the `piston' example the latter represents the `friction' force that is experienced by the moving object. The considerations above also imply that {\em the analysis of dissipation is reduced to the study of energy spreading}. The difficult issue is to establish a stochastic energy spreading with a coefficient $D_{\tbox{E}} = \half \nu V^2$. Then, the ${\cal F\!D}$ relation between $\mu$ and the noise intensity $\nu$ follows as an immediate consequence. If $\rho(E)$ is a canonical distribution (which is not necessarily the case) then the ${\cal F\!D}$ relation reduces to the familiar form $\mu=\nu/(2k_{\tbox{B}}T)$ where $T$ is the temperature.
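The relation $D_{\tbox{E}}=\half\nu V^{2}$ can be illustrated with a toy simulation (not part of the analysis in this paper): model ${\cal F}(t)$ as an Ornstein--Uhlenbeck process with an assumed correlation time, integrate the first-order relation $dE/dt=-V{\cal F}(t)$ over many realizations, and compare the resulting spreading coefficient with $\half\nu V^{2}$, where $\nu=\int\langle{\cal F}(0){\cal F}(\tau)\rangle\, d\tau$:

```python
import numpy as np

# Toy check of D_E = (1/2) nu V^2: the force F(t) = -dH/dx is
# modeled as an Ornstein-Uhlenbeck process (an assumption -- any
# noise with a short correlation time tau_c would do).
sigma, tau_c = 1.0, 0.1        # rms and correlation time of F(t)
nu = 2 * sigma**2 * tau_c      # intensity: integral of <F(0)F(t)>
V, t_total, dt = 1.0, 50.0, 0.01
n_real, n_steps = 2000, int(t_total / dt)

rng = np.random.default_rng(0)
rho = np.exp(-dt / tau_c)
F = rng.normal(0, sigma, n_real)   # stationary initial condition
dE = np.zeros(n_real)
for _ in range(n_steps):
    # to first order in V, the energy changes at the rate -V F(t)
    dE += -V * F * dt
    F = rho * F + sigma * np.sqrt(1 - rho**2) * rng.normal(size=n_real)

D_E = dE.var() / (2 * t_total)
print(D_E, 0.5 * nu * V**2)    # the two numbers should be close
```

This captures only the spreading (second moment) for an $E$-independent $\nu$; the systematic drift $\dot{{\cal Q}}=\mu V^{2}$ requires the $E$-dependence of the diffusion, which the toy model deliberately omits.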
\begin{table} \begin{center} \leavevmode \setlength{\baselineskip}{0.5cm} \begin{tabular}{|l|} \hline \ \\ {\bf Generic classical parameters} $(\tau_{\tbox{cl}}, \nu)$ \\ $\tau_{\tbox{cl}} \ = \ $ classical correlation time. \\ $\nu \ = \ $ intensity of fluctuations \\ \ \\ {\bf Generic quantal parameters} $(\Delta,b,\sigma,\hbar)$ \\ $\Delta \ = \ $ mean level spacing of the eigen-energies $\{E_n\}$ \\ $b \ = \ $ Dimensionless bandwidth of the matrix $(\partial {\cal H} / \partial x)_{nm}$ \\ $\sigma \ = \ $ Root-mean-square of in-band matrix elements of $(\partial {\cal H} / \partial x)_{nm}$ \\ \ \\ {\bf Semiclassical relations} \\ $\tau_{\tbox{cl}} \ = \ 2\pi\hbar/(b\Delta)$ \\ $\nu = (2\pi\hbar/\Delta)\ \sigma^2$ \\ \ \\ {\bf Linear response theory} \\ $D_{\tbox{E}} \ = \ \half \nu V^2 \ = \ (\pi\hbar/\Delta)\ \sigma^2 V^2$ \\ \ \\ {\bf Ohmic dissipation} \\ $d\langle {\cal H} \rangle / dt \ = \ \mu V^2$ \\ $\mu \ = \ {\cal F\!D} [\nu]$ \\ \ \\ {\bf Primary dimensionless parameters} $(b,v_{\tbox{PR}})$ \\ \ \\ $v_{\tbox{PR}} \ = \ (1/\hbar) \ \sqrt{\nu\tau_{\tbox{cl}}^3 \ } \ V \ = \ b^{\tbox{-3/2}} \ (2\pi\hbar/\Delta)^2 \ (\sigma/\hbar) \ V$ \\ \ \\ \hline \end{tabular} \end{center} \caption{\protect\rm\footnotesize Overview of the common theory for dissipation. Two generic parameters should be specified for the classical theory, while four are required for the QM theory. Note that $V=\dot{x}$ always appears in the combination $\nu V^2$ or $\sigma V$, and therefore it should not be counted as an additional (independent) parameter. The two classical parameters can be expressed in terms of the QM parameters via semiclassical relations. In the absence of a well-defined classical limit (as in the case of RMT models) these relations can be regarded as definitions. The so-called Kubo-Greenwood result of linear response theory can be obtained using the FGR picture, and it coincides with the classical expression.
General considerations lead to a fluctuation-dissipation (${\cal F\!D}$) relation between $\mu$ and $\nu$. An important observation of this paper is that the validity of the linear-response approach is controlled by the dimensionless parameter $v_{\tbox{PR}}$. } \end{table} \subsection{The effective-bath approach to Quantum Dissipation} The most popular approach to `Quantum Dissipation' is the {\em effective-bath approach} \cite{FV,ZG,MS,CL}. When applied to `our' problem (as defined in the first subsection) it means that the chaotic $(Q,P)$ degrees-of-freedom are replaced by an effective bath that has the same {\em spectral properties}. This may be either a harmonic bath (with infinitely many oscillators) or a random-matrix-theory (RMT) bath \cite{rmt}. It turns out that quantal-classical correspondence (QCC) is a natural consequence of this procedure: The dissipation coefficient $\mu$ turns out to be the same classically and quantum-mechanically. In order to explain this point let us use the Caldeira-Leggett notation \cite{CL}. The distribution of the frequencies of the bath-oscillators is characterized by an ohmic spectral-function $J(\omega)=\eta\omega$. The classical analysis leads to a friction force with a coefficient $\mu=\eta$, and white noise whose intensity is $\nu=2\eta k_{\tbox{B}}T$. Using the Feynman-Vernon \cite{FV} formalism one obtains the same value $\mu=\eta$ in the QM case. The quantal noise is characterized by an $\hbar$-dependent power-spectrum, but the noise intensity $\nu$ is defined as the $\omega=0$ component, and it is still equal to $2\eta k_{\tbox{B}}T$. Hence the classical ${\cal F\!D}$ relation $\mu=\nu/(2k_{\tbox{B}}T)$ holds also in the QM case. The effective-bath approach will not be adopted in this paper since its applicability is a matter of {\em conjecture}. In this paper we want to have a direct understanding of quantum dissipation.
\subsection{The QM theory of Dissipation} Quantum-mechanics introduces additional energy scales, as well as additional parametric scales into the problem (See Table 2). Consequently there are a few $V$ regimes in the QM theory (See Table 3). The QM-adiabatic regime \cite{wilk1} is quite well understood. We shall discuss this regime only briefly since it is not related to the main concern of this paper. The further distinction between the QM-slow regime and the QM-fast regime is the main issue of this paper. Let us assume that initially the energy is concentrated in one particular level. For extremely slow velocities ($v_{\tbox{LZ}}\ll 1$) and relatively long times the energy will remain mainly concentrated in the initial level. This is the QM-adiabatic approximation. The term `QM-adiabaticity' is a bit confusing, because it actually does not correspond (in the $\hbar\rightarrow 0$ limit) to adiabaticity in the classical sense. Maybe a better term would be `perturbative localization'. In the QM-adiabatic regime Landau-Zener transitions between neighboring levels constitute the predominant mechanism for energy spreading. This mechanism does not correspond to the classical mechanism of energy-spreading. The QM-adiabatic regime is a genuine quantal regime. For higher velocities ($v_{\tbox{LZ}}\gg 1$) it is essential to take into account transitions between non-neighboring levels. The contribution of near-neighbor transitions to the energy spreading becomes negligible rather than predominant. An obvious approach for the study of energy spreading would be to adopt a Fermi-golden-rule (FGR) picture. FGR is one possible picture of {\em perturbation theory}. The same results for $D_{\tbox{E}}$ and $\mu$ can be derived by using other, equivalent formulations of perturbation theory. The most popular variation is known as `linear response theory' or as `Kubo-Greenwood formalism'.
Whatever version of perturbation theory is being used, the standard result is always the same (See Table 1). It should be realized that the standard result is in complete correspondence with the classical result, and becomes identical to it upon taking the formal limit $\hbar\rightarrow 0$. \begin{table} \begin{center} \leavevmode \setlength{\baselineskip}{0.5cm} \begin{tabular}{|lll|} \hline \ & \ & \\ \multicolumn{3}{|l|}{\bf Energy Scales:} \\ $\Delta$ &$\propto$& $\hbar^{d} \ = \ $ mean level spacing of the eigen-energies $\{E_n\}$. \\ $\Delta_b$ &=& $b\Delta \ = \ 2\pi\hbar/\tau_{\tbox{cl}} \ = \ $ bandwidth of the matrix $(\partial {\cal H} / \partial x)_{nm}$ \\ $\Delta_{\tbox{SC}}$ &$\propto$& $\hbar^{2/3} \ = \ $ semiclassical width of Wigner function. \\ \ & \ & \\ \multicolumn{3}{|l|}{\bf Parametric scales:} \\ $\delta x_c^{\tbox{cl}}$ &=& parametric correlation scale of the $x$-dependent Hamiltonian \\ $\delta x_c^{\tbox{qm}}$ &=& $(\Delta/\sigma) \ \propto \ \hbar^{(1{+}d)/2}\ = \ $ The $\delta x$ required to mix neighboring levels. \\ $\delta x_{\tbox{prt}}$ &=& $\sqrt{b}(\Delta/\sigma) \ \propto \ \hbar \ = \ $ The $\delta x$ to mix all the levels within the bandwidth. \\ $\delta x_{\tbox{SC}}$ &$\propto$& $\hbar^{2/3} \ = \ $ The $\delta x$ required to get detailed QCC. \\ \ & \ & \\ \multicolumn{3}{|l|}{\bf Temporal scales:} \\ $\tau_{\tbox{cl}}$ &=& Classical correlation time of ${\cal F}(t)$. \\ $t_{\tbox{erg}}$ &=& Ergodic time of the classical chaotic motion. \\ $t_{\tbox{frc}}$ &=& $\nu/(\mu V)^2 \ = \ $ Breaktime of the classical adiabatic approximation. \\ $\tau_c^{\tbox{qm}}$ &=& $\delta x_c^{\tbox{qm}}/V \ = \ $ The time it takes to mix neighboring levels. \\ $t_{\tbox{prt}}$ &=& Ultimate breaktime of the QM perturbation theory. \\ $t_{\tbox{sdn}}$ &=& Breaktime of the QM sudden approximation. \\ $t_{\tbox{H}}$ &=& $2\pi\hbar/\Delta \ = \ $ Time needed to resolve individual levels (Heisenberg time).
\\ \ & \ & \\ \hline \end{tabular} \end{center} \caption{\protect\rm\footnotesize Various scales in the theory of energy spreading. The generic $\hbar$ dependence is indicated in most cases. The parametric scales and the temporal scales are associated with the kernels $P(n|m)$ and $P_t(n|m)$ respectively. The determination of $t_{\tbox{prt}}$ and $t_{\tbox{sdn}}$ is an important issue of this paper. Their dependence on $V$ is illustrated in Fig.\ref{f_regimes}. It should be realized that $\tau_{\tbox{cl}}$ can be defined, from a purely QM point of view, as the time which is required in order to resolve the energy scale $\Delta_b$. Similarly $t_{\tbox{sdn}}$ is defined as the time which is required in order to resolve the spreading profile. The time $t_{\tbox{sdn}}$ can be either equal or shorter than $\tau_{\tbox{cl}}$. } \end{table} \subsection{Specific motivation for the preset study} Reading some of the early literature one gets the impression that quantum dissipation is conceptually well-understood. Specifically, it looks as if the perturbative methods are effective for the purpose of constructing a general theory. However, this is a wrong impression. {\em A general theory of energy spreading is still lacking, a-fortiori there is no general theory of quantum dissipation}. This point becomes most evident once we read the work by Wilkinson and Austin (W\&A) \cite{wilk2}. Their observations constitute the original motivation for the present study \cite{crs}. W\&A \cite{wilk2} have defined two important dimensionless parameters that are associated with the velocity $V$. These are, (using our notations), the scaled velocity $v_{\tbox{LZ}}$, and the scaled velocity $v_{\tbox{RMT}}$. The QM-adiabatic regime is distinguished by the condition $v_{\tbox{LZ}} \ll 1$, where we have the relatively simple picture of spreading due to Landau-Zener transitions. 
At higher velocities ($v_{\tbox{LZ}} \gg 1$) the QM-adiabatic nature of the dynamics is lost, and the Landau-Zener picture no longer applies. In order to extend the perturbative treatment to such higher velocities W\&A have suggested adopting an innocent-looking RMT assumption. As long as the velocity is sufficiently slow ($v_{\tbox{RMT}}\ll 1$, but still $v_{\tbox{LZ}} \gg 1$) a classical-like result for $D_{\tbox{E}}$ is obtained. On the other hand, once $v_{\tbox{RMT}}\gg 1$, the classical-like expression for $D_{\tbox{E}}$ no longer holds. It is modified in such a way that correspondence with the classical result is lost! Obviously, W\&A have realized that the above conclusion is inconceivable. We will have to understand what is wrong with their innocent-looking RMT assumption. We shall argue that $v_{\tbox{RMT}} \sim 1$ does not mark a crossover to a non-classical regime. Rather, we shall find out that there is a {\em different} dimensionless parameter ($v_{\tbox{PR}}$) that controls the route towards quantal-classical correspondence (QCC).
\begin{table} \begin{center} \leavevmode \setlength{\baselineskip}{0.5cm} \begin{tabular}{|ll|} \hline \ & \ \\ {\bf Classical slowness conditions:} & \ \\ $V\tau_{\tbox{cl}} \ \ \ll \ \ \delta x_c^{\tbox{cl}}$ & Trivial condition \\ $\ \tau_{\tbox{cl}} \ \ \ \ll \ \ t_{\tbox{frc}}$ & Non-trivial Condition \\ \ & \ \\ {\bf Quantal regimes:} & \ \\ $Vt_{\tbox{H}}\ \ \ll \ \ \delta x_c^{\tbox{qm}}$ & QM-adiabaticity (extremely slow velocities) \\ $V\tau_{\tbox{cl}}\ \ \ll \ \ \delta x_{\tbox{prt}}$ & QM-slow velocity (linear response regime)\\ $V\tau_{\tbox{cl}}\ \ \gg\ \ \delta x_{\tbox{SC}}$ & QM-fast velocity (semiclassical regime) \\ \ & \ \\ {\bf Scaled velocities:} & \ \\ $v_{\tbox{SC}} \ \ \ \ = \ \ \sqrt{2D_{\tbox{E}} \ \tau_{\tbox{cl}} } \ / \ \Delta_{\tbox{SC}}$ & $= \ \ V \ / \ (\delta x_{\tbox{SC}}/\tau_{\tbox{cl}})$ \\ $v_{\tbox{PR}} \ \ \ \ = \ \ \sqrt{2D_{\tbox{E}} \ \tau_{\tbox{cl}}} \ / \ \Delta_b$ & $= \ \ V \ / \ (\delta x_{\tbox{prt}}/\tau_{\tbox{cl}})$ \\ $v_{\tbox{RMT}} \ \ = \ \ b^{\tbox{1/2}} \ v_{\tbox{PR}} \ \ \ = \ \ \ \tau_{\tbox{cl}} \ / \ \tau_c^{\tbox{qm}}$ & $= \ \ V \ / \ (\delta x_c^{\tbox{qm}}/\tau_{\tbox{cl}})$ \\ $v_{\tbox{LZ}} \ \ \ \ = \ \ b^{\tbox{3/2}} \ v_{\tbox{PR}} \ \ \ = \ \ \ t_{\tbox{H}} \ / \ \tau_c^{\tbox{qm}}$ & $= \ \ V \ / \ (\delta x_c^{\tbox{qm}}/t_{\tbox{H}})$ \\ \ & \ \\ \hline \end{tabular} \end{center} \caption{\protect\rm\footnotesize Definitions of the various $V$ regimes in the theory of energy spreading and dissipation. The classical slowness condition is always assumed to be satisfied. In the QM case we distinguish between the regimes of QM-adiabaticity (extremely slow velocities), QM-slow velocities, and QM-fast velocities. Dimensionless (scaled) velocities can be defined in order to distinguish between the various regimes. For reasonably small $\hbar$ we have $v_{\tbox{LZ}} \gg v_{\tbox{RMT}} \gg v_{\tbox{PR}} \gg v_{\tbox{SC}}$. In the classical limit all of them $\gg 1$. 
The condition $v_{\tbox{LZ}}\ll 1$ defines the QM-adiabatic regime. The condition $v_{\tbox{PR}}\ll 1$ defines the regime of QM-slow velocities. The condition $v_{\tbox{SC}}\gg 1$ defines the regime of QM-fast velocities. The parameter $v_{\tbox{RMT}}$ has been introduced in \cite{wilk2}, and we are going to explain that it determines the limitation of an over-simplified RMT approach.} \end{table} \subsection{Main claims of this paper} The purpose of this paper is to describe the time-evolution of the energy spreading in the various velocity regimes (see Table 3). The various `scenarios' are graphically illustrated in Fig.\ref{f_regimes}. The main claims of this paper are implied by this illustration. Disregarding the QM-adiabatic regime we are motivated by the following two questions: \begin{itemize} \setlength{\itemsep}{0cm} \item What is the regime where the FGR/RMT picture is valid? \item What is the regime where QCC considerations are valid? \end{itemize} In particular we would like to know whether there is a `clash' between FGR/RMT considerations on the one hand, and QCC considerations on the other hand. The main object of this paper is the transition probability kernel $P_t(n|m)$. The variable $m$ denotes the initial energy-state of the system, and $n$ stands for one of the instantaneous energy-states at a later time $t$. This kernel is well defined quantum-mechanically as well as classically. The energy distribution $\rho_t(E)$ can be obtained by operating with the kernel $P_t(n|m)$ on the initial microcanonical preparation $\rho_{t{=}0}(E)$, and making a simple change of variables $n \mapsto E$. An important distinction in this paper is between {\em restricted} QCC and {\em detailed} QCC. Detailed QCC implies that the quantal $P_t(n|m)$ is similar to the classical $P_t(n|m)$. We shall see that detailed QCC can be established in an intermediate time regime provided the velocity $V$ is large enough. In the absence of detailed QCC we still may have restricted QCC.
The latter implies that only the first and the second moments of the corresponding distributions (quantal versus classical) are similar. Restricted QCC is sufficient in order to guarantee QCC as far as the diffusion coefficient $D_{\tbox{E}}$ and the dissipation coefficient $\mu$ are concerned. Our main statements are: \begin{itemize} \setlength{\itemsep}{0cm} \item The FGR picture implies restricted rather than detailed QCC. \item The FGR picture is valid in the regime $v_{\tbox{PR}} \ll 1$. \item Detailed QCC considerations are valid in the regime $v_{\tbox{SC}} \gg 1$. \end{itemize} It should be realized that the FGR picture is not {\em valid} in the regime $v_{\tbox{PR}} \gg 1$. However, this does not necessarily imply that the standard Kubo-Greenwood result is not {\em correct} there. On the contrary: In the limit $\hbar\rightarrow 0$ we have detailed QCC ($v_{\tbox{SC}} \gg 1$), and at the same time the Kubo-Greenwood result simply coincides with the classical result. Thus we may say that for $v_{\tbox{SC}} \gg 1$, the standard Kubo-Greenwood result is not valid but correct. The distinction between `valid' and `correct' is crucial here: A correct result sometimes follows from using wrong assumptions. In the intermediate regime ($v_{\tbox{PR}} \gg 1$ but $v_{\tbox{SC}} \ll 1$) neither FGR nor QCC considerations apply, and we may have qualitatively different results for $\mu$. It is suspected \cite{wbr}, but not yet proved in the present context, that some artificial RMT models, which do not possess a well defined classical limit, may exhibit for $v_{\tbox{PR}} \gg 1$ a significantly different behavior compared with the expected FGR or classical result. This latter observation is in the spirit of \cite{wilk2}, but it is quite different as far as details are concerned.
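For concreteness, the regime classification of Table 3 can be sketched numerically. The following Python snippet is an illustrative sketch, not part of the original analysis: it assumes $\hbar=1$ units, uses the Table 3 relations $\Delta_b=b\Delta$, $v_{\tbox{RMT}}=b^{1/2}v_{\tbox{PR}}$ and $v_{\tbox{LZ}}=b^{3/2}v_{\tbox{PR}}$, and caricatures the asymptotic conditions ($\ll 1$, $\gg 1$) by sharp cutoffs at unity; all input numbers are arbitrary.

```python
import math

def scaled_velocities(D_E, tau_cl, Delta, b, Delta_SC):
    """Scaled velocities of Table 3 (hbar = 1; arbitrary units).
    spread = sqrt(2 D_E tau_cl) is the energy spread accumulated during tau_cl."""
    spread = math.sqrt(2.0 * D_E * tau_cl)
    Delta_b = b * Delta                 # bandwidth Delta_b = b * Delta
    v_PR = spread / Delta_b
    v_RMT = math.sqrt(b) * v_PR         # v_RMT = b^{1/2} v_PR
    v_LZ = b**1.5 * v_PR                # v_LZ  = b^{3/2} v_PR
    v_SC = spread / Delta_SC
    return v_PR, v_RMT, v_LZ, v_SC

def regime(v_PR, v_LZ, v_SC):
    """Crude classification, with sharp cutoffs standing in for << and >>."""
    if v_LZ < 1:
        return "QM-adiabatic (Landau-Zener)"
    if v_PR < 1:
        return "QM-slow (FGR picture valid)"
    if v_SC > 1:
        return "QM-fast (detailed QCC, semiclassical)"
    return "intermediate (neither FGR nor detailed QCC)"
```

For a reasonably small $\hbar$ (large $b$) the hierarchy $v_{\tbox{LZ}} \gg v_{\tbox{RMT}} \gg v_{\tbox{PR}} \gg v_{\tbox{SC}}$ of the table caption is reproduced automatically, since $b^{3/2} > b^{1/2} > 1$.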
It is important to understand the origin of the FGR-picture validity condition $v_{\tbox{PR}}\ll 1$, and why it is different from the condition $v_{\tbox{RMT}}\ll 1$ that has been suggested in \cite{wilk2}. Again we assume that initially the energy is concentrated in one particular level. We shall argue that in order to determine $D_{\tbox{E}}$ it is important to estimate how many levels are mixed non-perturbatively at the time $t\sim\tau_{\tbox{cl}}$. If the related parametric change $\delta x=Vt$ does not mix neighboring levels, then we are on the ``safe ground'' of standard first-order perturbation theory (FOPT), and we can completely trust the FGR picture. Such circumstances are guaranteed by the condition $v_{\tbox{RMT}}\ll 1$. If $v_{\tbox{RMT}} \gg 1$ we have a breakdown of the standard FOPT picture, but this does not imply that the FGR picture becomes non-valid. It turns out that the FGR result for the transition rate between levels is valid on ``large'' energy scales, even if there is non-perturbative mixing of levels on ``small'' energy scales. This is true as long as the ``small'' scale is much smaller than the bandwidth $\Delta_b$ of first-order transitions. Such circumstances are guaranteed by the condition $v_{\tbox{PR}}\ll 1$. \subsection{The `piston' example - The wall formula} We are using throughout this paper the `piston' model of Fig.\ref{f_pistons} as an illustrative example. It should be noticed that for simplicity of presentation we picture the `piston' as a small moving obstacle whose motion is constrained to be in one space direction. From a purely linguistic point of view, `piston' also implies hermetic closure along the margins. We do not assume such a closure. Application of the ${\cal F\!D}$ relation in order to get an expression for the dissipation coefficient $\mu$ leads to the `wall' formula.
This formula was originally derived using kinetic considerations \cite{wall}, and only later using other approaches \cite{koonin,jar}, including the ${\cal F\!D}$ approach that we are using here. In the {\em proper classical limit} (taking $\hbar{\rightarrow}0$, while all the other parameters are held fixed) the walls of the `piston' always become `soft', meaning that the de~Broglie wavelength becomes much smaller than the penetration distance. The hard-wall limit (meaning that the penetration distance is taken to be zero, while $\hbar$ is kept fixed) is non-generic. It is important to understand the consequences of taking this limit. In the hard-wall limit $(\partial {\cal H} / \partial x)_{nm}$ is not a banded matrix, and the (generic) problem of having a non-perturbative regime for $\hbar\rightarrow 0$ is avoided. The non-generic features of the hard-wall limit are possibly responsible for some prevailing misconceptions, and in particular for the {\em illusion} that perturbative techniques can be used in order to get a {\em general} theory for `quantum dissipation'. This is the reason for the inclusion of a quite detailed discussion of the `piston' example. The consequences of taking the hard-wall limit, as well as other non-generic features of the `piston' example, are further discussed in the concluding section and in \cite{wls}. \subsection{The `mesoscopic' example - Drude formula} Another physical example that can be treated by the general theory of dissipation is taken from the realm of mesoscopic physics. Consider the case where $x$ is the magnetic flux through a ring. The velocity $V=\dot{x}$ then has the meaning of an electro-motive-force. Let us assume that the ring contains one charged particle $(Q,P)$ that performs diffusive motion. Ohmic dissipation ($\dot{{\cal Q}}=\mu V^2$) means that the charged particle gains kinetic energy, where the dissipation coefficient $\mu$ is just the conductivity of the ring.
Equivalently, having $\langle {\cal F} \rangle = -\mu V$ just means that the drift velocity along the ring is proportional to the electro-motive-force. It is a trivial exercise to get the Drude formula from the general ${\cal F\!D}$ relation. The advantage of this procedure is that the derivation can easily be extended to the case where the motion of the charged particle is chaotic rather than diffusive. It should be noted that in actual circumstances the charged particle is an electron, and its (increasing) kinetic energy is eventually transferred to the vibrational modes (phonons) of the ring. The latter process (which leads to Joule heating) is `on top' of the generic dissipation problem that we are going to analyze. \subsection{Overview of the paper} This paper divides roughly into three parts. The appendixes (A-J) should be considered an integral part of the main text. The reason for transforming some of the sections into appendixes was the desire to maintain a simple logical flow. We turn now to give a brief description of the paper. The first part of this paper ({\bf sections 2-7}), at a superficial glance, looks like a review. However, in spite of its textbook style, it is not a review. It gives the necessary introduction for the later QM analysis, and in particular it contains a careful examination of the various assumptions that are involved in the common approaches to the theory of dissipation. (It turns out that a satisfactory presentation is lacking in the existing literature). The main items of the first part are: \begin{itemize} \setlength{\itemsep}{0cm} \item The crossover from ballistic to diffusive energy spreading. \item Precise formulation of the classical slowness conditions. \item Brief description of the derivation of the ${\cal F\!D}$ relation. \item Critical discussion of the QM linear-response theory. \item Critical discussion of the standard FGR picture.
\item The wall formula generalized to arbitrary dimensionality ($d=2,3...$). \end{itemize} The second part of this paper ({\bf sections 8-10}) contains the precise formulation of the theory and an overview of the general picture. The main items of the second part are: \begin{itemize} \setlength{\itemsep}{0cm} \item Definitions of the kernels $P(n|m)$ and $P_t(n|m)$. \item The stochastic description of the energy spreading process. \item Restricted QCC versus detailed QCC, and the classical approximation. \item Overview of the dynamical scenarios in the different $V$ regimes. \end{itemize} The third part of this paper ({\bf sections 11-20}) gives a detailed presentation of perturbation theory and RMT considerations. The main items are: \begin{itemize} \setlength{\itemsep}{0cm} \item The Schroedinger equation in the $x$-dependent basis. \item The QM-sudden approximation and parametric evolution. \item The over-simplified RMT picture. \item An improved version of perturbation theory. \item The core-tail structure of the spreading kernel. \end{itemize} The paper is concluded ({\bf section~21}) by pointing out some important questions that have been left open. Some future directions for research are indicated. \subsection{The need for a generic theory} Our main interest in this paper is to construct a {\em generic} theory for energy spreading and quantum dissipation. In particular we want to define the conditions for getting ohmic dissipation, to establish the associated ${\cal F\!D}$ relation and to explore the validity limits of the QCC principle. One may wonder what the practical gain is in achieving the above-mentioned goals. Is it just a matter of doing `mathematics' properly? A similar type of question is frequently asked with regard to the efforts to re-derive well-known RMT results using semiclassical methods.
The answer to such questions should be clear: It is not possible to analyze non-generic (or non-universal) features unless one possesses a thorough understanding of a generic theory along with its limitations. In the future, our intention is to analyze circumstances that go beyond these limitations, and to look for genuine QM effects \cite{rsp}. We are using the term `generic' frequently, and it is now appropriate to define what we mean by that. The answer is as follows: In order to understand a phenomenon (energy spreading and dissipation in the present case) it is a common practice to make the maximum simplifications possible. Then we get a theory with a minimal number of parameters. It turns out that the theory of energy spreading involves a minimum of {\em two} dimensionless parameters. We are going to consider any additional (dimensionless) parameter as non-generic. Finally, it should be clear that some of our predictions concerning energy spreading are completely non-trivial. The kernel $P_t(n|m)$, as well as its parametric version $P(n|m)$, are accessible to numerical studies \cite{prm,lds} as well as to real experiments. \newpage \section{Energy surfaces and eigenstates} \label{s_surfaces} We consider a system that is described by a Hamiltonian ${\cal H}(Q,P;x)$ where $(Q,P)$ are canonical variables and $x$ is a parameter. The phase space volume that corresponds to the energy surface ${\cal H}(Q,P;x)=E$ is \begin{eqnarray} \label{e_1} n \ = \ \Omega(E;x) \ = \ \int \frac{dQdP}{(2\pi\hbar)^d} \ \Theta(E-{\cal H}(Q,P;x)) \end{eqnarray} where $d$ is the number of degrees of freedom. Measuring phase-space volume in units of $(2\pi\hbar)^d$ is insignificant classically, but very convenient upon quantization. The density of phase space cells will be denoted by $g(E)=\partial_{\tbox{E}} \Omega(E;x)$. The energy surface that corresponds to a phase space volume $n$ will be denoted by $|n(x)\rangle$ and its energy will be denoted by $E_n(x)$.
We assume a simple phase space topology such that to given $n$ and $x$ there corresponds a unique energy surface. Thus \begin{eqnarray} \label{e_2} |n(x)\rangle \ = \ \{(Q,P) | \ \ {\cal H}(Q,P;x)=E_n(x) \ \} \end{eqnarray} The microcanonical distribution which is supported by $|n(x)\rangle$ is \begin{eqnarray} \label{e_3} \hspace*{-2cm} \rho_{n,x}(Q,P) \ = \ \frac{1}{g(E)}\delta({\cal H}(Q,P;x)-E_n(x)) \ = \ \delta(\Omega({\cal H}(Q,P;x)) - n) \end{eqnarray} In the QM case the energy becomes quantized, and the mean level density is related to the classical density of phase space cells. By Weyl's law we have: \begin{eqnarray} \label{e_4} \frac{1}{\Delta} \ \ \equiv \ \ \overline{\sum_n \delta(E-E_n)} \ \ = \ \ g(E) \end{eqnarray} Thus, upon quantization, the variable $n$ becomes a level-index, and $\rho_{n,x}(Q,P)$ should be interpreted as the Wigner function that corresponds to the eigenstate $|n(x)\rangle$. With these definitions we will be able to address the QM theory and the classical theory simultaneously. We shall use from now on an admixture of classical and quantum-mechanical jargon. This should not cause any confusion. The QM discussion, however, is postponed to later sections. The following discussion is purely classical. Let us consider a set of {\em parametrically related} energy surfaces $|n(x)\rangle$ that enclose the same phase space volume $n$. By differentiation of the expression $\Omega(E(x);x) = n$ with respect to the parameter $x$ one obtains: \begin{eqnarray} \label{e_5} \delta E = -F(x) \delta x \ , \ \ \ \ \ \ \ F(x) \ \equiv \ \left\langle - \frac{\partial {\cal H}}{\partial x} \right\rangle_{\tbox{E}} \end{eqnarray} The angular brackets denote microcanonical average over all the phase space points that satisfy ${\cal H}(Q,P;x)=E$. Later we shall see that the quantity $F(x)$ has the meaning of a generalized (conservative) force. Having $F(x)=0$ for any $x$ is equivalent to having $\Omega(E;x)$ independent of $x$.
Such is the case for a gas particle which is affected by collisions with a small rigid body that is being translated inside a large cavity. We shall refer to the latter example as the `piston' example. See Fig.\ref{f_pistons} and Sec.\ref{s_piston} for more details. In order to simplify notations we shall assume, with almost no loss of generality, that indeed $\Omega(E;x)$ is independent of $x$. It is also useful to define \begin{eqnarray} \label{e_6} {\cal F}(Q,P;x) \ \equiv \ \left( -\frac{\partial {\cal H}}{\partial x} \right) -\left\langle -\frac{\partial {\cal H}}{\partial x} \right\rangle_{\tbox{E}} \end{eqnarray} We can define a parametric correlation scale $\delta x_c^{\tbox{cl}}$ that is associated with the function ${\cal F}(Q,P;x)$. For the `piston' example (to be discussed in Sec.\ref{s_piston}) it is just the penetration distance into the `piston' upon collision (the effective `thickness' of the wall). If either $(Q,P)$ or $x$ becomes time-dependent, then ${\cal F}$ becomes a fluctuating quantity. The nature of these fluctuations is discussed in the next section. \section{Fluctuations} \label{s_flc} For a given $x=x(t)$ and initial conditions $(Q(0),P(0))$ we may find the time-history $(Q(t),P(t))$. We shall also use the notations \begin{eqnarray} \nonumber {\cal E}(t) \ & \equiv & \ {\cal H}(Q(t),P(t);x(t)) \\ \nonumber {\cal F}(t) \ & \equiv & \ {\cal F}(Q(t),P(t);x(t)) \end{eqnarray} The correlator of the fluctuating force is \begin{eqnarray} \label{e_7} C(t,\tau) \ \equiv \ \langle {\cal F}(t) {\cal F}(t+\tau) \rangle \end{eqnarray} The angular brackets denote microcanonical average over the initial ($t{=}0$) phase-space point $(Q(0),P(0))$. In the next two paragraphs we are going to discuss the statistical properties of the fluctuating force ${\cal F}(t)$. First we consider the time independent case ($x=\mbox{\it const}$), and then the time dependent case.
If $x=\mbox{\it const}$, then ${\cal E}(t) = E$ is a constant of the motion, and $C(t,\tau) \equiv C_{\tbox{E}}(\tau)$ is independent of $t$. It is assumed that the dynamics is {\em chaotic}, and consequently the stochastic force looks like white noise. The fluctuation spectrum $\tilde{C}_{\tbox{E}}(\omega)$ is defined as the Fourier transform of $C_{\tbox{E}}(\tau)$. The intensity of the fluctuations is characterized by the parameter \begin{eqnarray} \label{e_8} \nu_{\tbox{E}} \ \equiv \ \int_{-\infty}^{\infty} C_{\tbox{E}}(\tau) d\tau \ \equiv \ \tilde{C}_{\tbox{E}}(\omega{=}0) \end{eqnarray} The fluctuations are also characterized by a short correlation time $\tau_{\tbox{cl}} = \tilde{C}_{\tbox{E}}(0)/ C_{\tbox{E}}(0)$. For a generic Hamiltonian system it is natural to identify $\tau_{\tbox{cl}}$ with the ergodic time $t_{\tbox{erg}}$. However, in specific applications we may have $\tau_{\tbox{cl}}<t_{\tbox{erg}}$. For the `piston' example, which will be discussed in Sec.\ref{s_piston}, the correlation time $\tau_{\tbox{cl}}$ is equal to the duration of a collision with the wall, while $t_{\tbox{erg}}$ is determined by the ballistic time $\tau_{\tbox{bl}}$. If $x$ is time dependent rather than a constant, for example $x(t)=Vt$, then for any finite $V$ and $t>0$ the actual distribution of $(Q(t),P(t))$ is no longer microcanonical. The statistical properties of the fluctuating force ${\cal F}(t)$ are expected to be different from the $V{=}0$ case. The average $\langle {\cal F}(t) \rangle$ is no longer expected to be zero. Rather, we shall argue (see (\ref{e_23})) that $\langle {\cal F}(t) \rangle = -\mu V$, where $\mu$ is the dissipation coefficient. This implies that the correlator $C(t,\tau)$ acquires an offset $(\mu V)^2$. The offset term can be neglected for a limited time $t<t_{\tbox{frc}}$ \begin{eqnarray} \label{e_9} t_{\tbox{frc}} \ = \ \nu/(\mu V)^2 \hspace*{2cm} \mbox{[classical breaktime],} \end{eqnarray} provided $(\mu V)^2 \ll C_{\tbox{E}}(0)$.
The latter condition, which is equivalent to having \begin{eqnarray} \label{e_10} \tau_{\tbox{cl}} \ \ll \ t_{\tbox{frc}} \hspace*{2cm} \mbox{[non-trivial slowness condition],} \end{eqnarray} implies that the velocity $V$ should be small enough. There is another possible reason for the correlator $C(t,\tau)$ to be different from $C_{\tbox{E}}(\tau)$. Loss of correlation may be either due to the dynamics of $(Q(t),P(t))$ or else due to the parametric change of $x(t)$. The correlation time which is associated with the dynamics is $\tau_{\tbox{cl}}$. The correlation time which is associated with the parametric time dependence is $\tau_c^{\tbox{cl}}=\delta x_c^{\tbox{cl}}/V$. We always assume that \begin{eqnarray} \label{e_11} \tau_{\tbox{cl}} \ \ll \ \tau_c^{\tbox{cl}} \hspace*{2cm} \mbox{[trivial slowness condition],} \end{eqnarray} meaning that loss of correlations is predominantly determined by the chaotic nature of the dynamics rather than by the (slow) parametric change of the Hamiltonian. Since we always assume that the classical slowness conditions ((\ref{e_10}) and (\ref{e_11})) are satisfied, it follows that we can make the approximation \begin{eqnarray} \label{e_12} C(t,\tau) \ = \ C(\tau) \ \approx \ C_{\tbox{E}}(\tau) \hspace*{2cm} \mbox{for $t<t_{\tbox{frc}}$} \end{eqnarray} \section{Energy spreading and dissipation} \label{s_frc} \begin{figure} \begin{center} \leavevmode \epsfysize=1.7in \epsffile{evolving.eps} \end{center} \caption{\protect\footnotesize Schematic illustration of the dynamics. An initially localized distribution is launched in phase space. For a limited time (left plot) it travels upon the initial energy surface. But then it departs from it. After much longer times (right plot) the evolving distribution is concentrated across an instantaneous energy surface. } \label{f_evolving} \end{figure} For $x(t)=\mbox{\it const}$ the energy ${\cal E}(t)$ is a constant of the motion.
For time dependent $x(t)$ and any particular time-history we can write \begin{eqnarray} \nonumber \frac{d{\cal E}}{dt} \ = \ \frac{\partial {\cal H}}{\partial t} \ = \ - (F(x(t)) + {\cal F}(t)) \ \dot{x} \end{eqnarray} The first term implies reversible change of energy due to a conservative force that equals $F(x)$. In what follows we shall see that the second term is responsible for an irreversible dissipation process. Integrating over time and using (\ref{e_5}) we get \begin{eqnarray} \label{e_13} {\cal E}(t) \ = \ E(x(t)) \ - \ V \int_0^t {\cal F}(t')dt' \end{eqnarray} where $E(x)\equiv E_{m}(x)$ is the energy that corresponds to the initial phase-space volume $m$. The difference $E(x(t)){-}E(x(0))$ is due to the reversible work done by the generalized force $F(x)$. If we disregard the fluctuating term, then we come to the conclusion that the trajectory is approximately confined to the evolving energy surface $|n(x(t))\rangle$. Thus the phase-space volume $n=\Omega({\cal E}(t),x(t))$ is an approximate constant of the motion, the so-called `adiabatic invariant'. Using (\ref{e_13}) we can estimate the energy dispersion which is associated with the fluctuating force: \begin{eqnarray} \label{e_14} \langle ({\cal E}(t)- E(x(t)))^2 \rangle \ = \ V^2 \int_0^t dt' \int_{-t'}^{t'} C(t',\tau) d\tau \end{eqnarray} Hence we get a crossover from ballistic to diffusive behavior: \begin{eqnarray} \label{e_15} \langle ({\cal E}(t)-E(x(t)))^2 \rangle \ \approx \ C_{\tbox{E}}(0) \cdot (Vt)^2 \ \ \ \ \ \ \ \ \ \ & \mbox{for $t \ll \tau_{\tbox{cl}}$} \\ \label{e_16} \langle ({\cal E}(t)-E(x(t)))^2 \rangle \ \approx \ 2D_{\tbox{E}} \ t \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ & \mbox{for $\tau_{\tbox{cl}} \ll t \ll t_{\tbox{frc}}$} \end{eqnarray} The ballistic spreading on short time scales just reflects the parametric change of the energy surfaces.
This is the essence of the {\em sudden approximation}. The diffusive spreading on longer time scales reflects the deviation from the {\em adiabatic approximation}. Both are illustrated in Fig.\ref{f_evolving} and further explained in App.\ref{a_evolving}. The diffusion coefficient is \begin{eqnarray} \label{e_17} D_{\tbox{E}} \ \ = \ \ \frac{1}{2}V^2 \int_{-t}^t C_{\tbox{E}}(\tau)d\tau \ \ \rightarrow \ \ \frac{1}{2} \ \nu_{\tbox{E}} \ V^2 \end{eqnarray} If $C_{\tbox{E}}(\tau)$ is short range in nature, then $D_{\tbox{E}}$ will eventually tend to the well-defined constant value indicated on the right-hand side of (\ref{e_17}). Note that the adiabatic approximation becomes exact in the formal limit $V\rightarrow 0$, keeping $Vt$ constant. For {\em intermediate} times $(Q(t),P(t))$ are distributed ergodically across the evolving energy surface, within a shell of thickness $\sqrt{2D_{\tbox{E}} t}$. We shall argue later (Sec.\ref{s8}) that more generally, for any $t\gg t_{\tbox{erg}}$, the spreading profile $\rho(E)$ obeys the following diffusion equation \begin{eqnarray} \label{e_18} \frac{\partial \rho}{\partial t} \ = \ \frac{\partial}{\partial E} \left(g(E)D_{\tbox{E}} \frac{\partial}{\partial E} \left(\frac{1}{g(E)}\rho\right)\right) \end{eqnarray} For simplicity we assume here that there is no conservative work ($F(x){=}0$). The energy dependence of the diffusion process implies a systematic growth of the mean energy $\langle {\cal E}(t) \rangle = \int E\rho(E)dE $.
Namely, \begin{eqnarray} \label{e_19} \dot{{\cal Q}} \ \equiv \ \frac{d}{dt}\langle {\cal E} \rangle \ = \ - \int_0^{\infty} dE \ g(E) \ D_{\tbox{E}} \ \frac{\partial}{\partial E} \left(\frac{\rho(E)}{g(E)}\right) \end{eqnarray} Substituting $D_{\tbox{E}}=\half\nu_{\tbox{E}}V^2$ and integrating by parts one obtains $\dot{{\cal Q}}=\mu V^2$, along with the ${\cal F\!D}$ relation that can be written schematically as $\mu={\cal F\!D}[\nu]$. The result for $\mu$ depends on $\rho(E)$. If $\rho(E)$ is well concentrated around some energy $E$, one obtains \begin{eqnarray} \label{e_20} \mu_{\tbox{E}} \ = \ \frac{1}{2} \frac{1}{g(E)} \frac{\partial}{\partial E} (g(E) \nu_{\tbox{E}}) \ \ \ \ \ \mbox{[microcanonical version]} \end{eqnarray} Another, more familiar variation of the ${\cal F\!D}$ relation is obtained if one assumes a canonical distribution $\rho(E) \propto g(E)\exp(-E/(k_{\tbox{B}}T))$. By substitution into (\ref{e_19}), or simply by canonical averaging over (\ref{e_20}), one obtains: \begin{eqnarray} \label{e_22} \mu_{\tbox{T}} \ = \ \frac{1}{2k_{\tbox{B}} T} \nu_{\tbox{T}} \ \ \ \ \ \mbox{[canonical version]} \end{eqnarray} where $\nu_{\tbox{T}}$ is related to $C_{\tbox{T}}(\tau)$, and the latter is defined in the same way as $C_{\tbox{E}}(\tau)$, but with canonical rather than microcanonical averaging. Having energy dissipation implies that for $V\ne0$ the fluctuating quantity ${\cal F}(t)$ has a non-zero average: From (\ref{e_13}), and recalling that we assume $F(x)=0$, we get $\dot{{\cal Q}}=\langle {\cal F}(t) \rangle V$. Therefore $\dot{{\cal Q}}=\mu V^2$ implies \begin{eqnarray} \label{e_23} \langle {\cal F}(t) \rangle \ = \ -\mu V \end{eqnarray} In the case of the `piston' example $V$ is the velocity of the `piston' and $\langle {\cal F}(t) \rangle$ is the associated friction force.
In the case of the conductivity calculation (see Sec.~1.9), the parameter $x$ represents a (time-dependent) magnetic flux through a ring, $V$ is the electro-motive-force, and $\langle {\cal F}(t)\rangle$ is the drift velocity. \section{Quantal energy-spreading: Linear response theory} \label{s_elementary} At first sight it seems that the classical derivation in Sec.\ref{s_frc} applies also to the QM case provided ${\cal F}(t)$ is treated as an operator. This is essentially the so-called `linear response theory'. The only approximation involved is $C(t,\tau) \approx C_{\tbox{E}}(\tau)$. This approximation should be valid as long as the evolving state $\rho_t(Q,P)$ is similar to the initial {\em microcanonical preparation}. It is more difficult to satisfy this condition in the QM case. The similarity $\rho_t(E)\approx\rho_0(E)$ is not a sufficient condition: It is also required that off-diagonal elements of the probability-matrix can be ignored, meaning that a superposition can be treated as if it were an incoherent mixture of the corresponding energy-eigenstates. The classical considerations (Sec.\ref{s_flc}) lead to the time restriction $t\ll t_{\tbox{frc}}$. The QM considerations will lead to a stronger time restriction $t\ll t_{\tbox{prt}}$. Accordingly, the classical slowness condition (\ref{e_10}) is replaced by: \begin{eqnarray} \label{e_24} \tau_{\tbox{cl}} \ \ll \ t_{\tbox{prt}} \ \ \ \ \ \mbox{[quantal slowness condition].} \end{eqnarray} The determination of $t_{\tbox{prt}}$, which is the breaktime for QM perturbation theory, will be discussed in later sections. For non-slow velocities, (meaning throughout this paper that (\ref{e_10}) and (\ref{e_11}) are satisfied but (\ref{e_24}) is violated), the following elementary considerations do not apply. The quantal slowness condition (\ref{e_24}) should not be confused with the QM-adiabaticity condition (to be discussed later). QM-adiabaticity requires extremely slow velocities.
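As a point of reference for the QM analysis, the classical ballistic-to-diffusive crossover of (\ref{e_15})--(\ref{e_16}) can be checked numerically. The sketch below is illustrative only (not taken from the text): it assumes a Gaussian model correlator $C_{\tbox{E}}(\tau)$ with integral $\nu$, evaluates the double integral of (\ref{e_14}) by the trapezoid rule, and compares against the two limiting expressions; all parameter values are arbitrary.

```python
import numpy as np
from math import erf, sqrt, pi

# illustrative parameters (arbitrary units): correlation time, noise intensity, velocity
tau_cl, nu, V = 1.0, 2.0, 0.1
C0 = nu / (sqrt(2 * pi) * tau_cl)   # C(0) for a Gaussian correlator of integral nu

def dE2(t, n=20001):
    """Energy variance via (e_14): V^2 * int_0^t dt' int_{-t'}^{t'} C(tau) dtau.
    For a Gaussian C(tau) the inner integral is nu * erf(t'/(sqrt(2)*tau_cl))."""
    tp = np.linspace(0.0, t, n)
    inner = nu * np.array([erf(x) for x in tp / (sqrt(2.0) * tau_cl)])
    return V**2 * np.sum(0.5 * (inner[1:] + inner[:-1]) * np.diff(tp))

D_E = 0.5 * nu * V**2                        # the limiting value in (e_17)
r_ballistic = dE2(0.01) / (C0 * (V * 0.01)**2)   # t << tau_cl: ratio close to 1
r_diffusive = dE2(50.0) / (2 * D_E * 50.0)       # t >> tau_cl: ratio close to 1
```

The diffusive ratio approaches unity only up to a correction of order $\tau_{\tbox{cl}}/t$, which is the transient accumulated during the ballistic stage.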
The QM version of the derivation in Sec.\ref{s_frc} gives a classical look-alike result for the energy spreading. The only implicit modification is that the classical $C_{\tbox{E}}(\tau)$ should be replaced by the corresponding QM object. For the purpose of concise presentation, the formula for the energy spreading can be written as follows: \begin{eqnarray} \label{e_25} \hspace*{-2cm} \delta E^2 \ = \ V^2\int_0^t\int_0^t C_{\tbox{E}}(t_2{-}t_1)dt_1dt_2 \ = \ V^2 t \int_{-\infty}^{+\infty}\frac{d\omega}{2\pi} \tilde{C}_{\tbox{E}}(\omega) \ \tilde{F}_t(\omega) \end{eqnarray} where \begin{eqnarray} \label{e_26} \tilde{F}_t(\omega) \ = \ t{\cdot}(\mbox{sinc}(\omega t /2))^2 \end{eqnarray} Now $C_{\tbox{E}}(\tau)$ is a QM object, and its Fourier transform $\tilde{C}_{\tbox{E}}(\omega)$ can be expressed as \begin{eqnarray} \label{e_27} \tilde{C}_{\tbox{E}}(\omega) \ = \ \sum_n' \left|\left(\frac{\partial {\cal H}}{\partial x}\right)_{nm}\right|^2 \ 2\pi\delta\left(\omega-\frac{E_n{-}E_m}{\hbar}\right) \end{eqnarray} One observes that the power-spectrum of the QM fluctuations has a discrete nature, and consequently the correlation function $C_{\tbox{E}}(\tau)$ is characterized by the additional time scale $t_{\tbox{H}}$. We assume that $\hbar$ is reasonably small such that $\tau_{\tbox{cl}}\ll t_{\tbox{H}}$. Correspondence considerations imply that the quantal $C_{\tbox{E}}(\tau)$ is similar to the classical $C_{\tbox{E}}(\tau)$ as long as $t \ll t_{\tbox{H}}$. Equivalently, as long as $t \ll t_{\tbox{H}}$ the discrete nature of the quantal $\tilde{C}_{\tbox{E}}(\omega)$ can be ignored, and we can effectively use the classical $\tilde{C}_{\tbox{E}}(\omega)$. Recall that the power-spectrum of the classical fluctuations looks like that of white noise: It satisfies $\tilde{C}_{\tbox{E}}(\omega) \approx \nu_{\tbox{E}}$ for $|\omega|\ll 1/\tau_{\tbox{cl}}$ and decays rapidly to zero outside of this regime.
Thus, for $t \ll \tau_{\tbox{cl}}$ we can make the replacement $\tilde{F}_t(\omega)\rightarrow t$, and we obtain the ballistic result $\delta E^2 = C_{\tbox{E}}(0){\cdot}(Vt)^2$, while for $t \gg \tau_{\tbox{cl}}$ we can make the replacement $\tilde{F}_t(\omega)\rightarrow 2\pi\delta(\omega)$, and we then get the diffusive behavior $\delta E^2 = \nu_{\tbox{E}}V^2 t$. We can get a semiclassical estimate for the matrix elements in (\ref{e_27}) by exploiting the correspondence that has been mentioned above \cite{mario}. The function $\tilde{C}_{\tbox{E}}(\omega)$ is assumed to be vanishingly small for $\omega\gg 1/\tau_{\tbox{cl}}$, which implies that $({\partial {\cal H}}/{\partial x})_{nm}$ is a banded matrix. Energy levels are coupled by matrix elements provided $|E_n-E_m|<\Delta_b$ where \begin{eqnarray} \label{e_28} \Delta_b \ = \ \frac{2\pi\hbar}{\tau_{\tbox{cl}}} \ = \ \mbox{band width} \end{eqnarray} For $\omega \ll 1/\tau_{\tbox{cl}}$ the smoothed $\tilde{C}_{\tbox{E}}(\omega)$ should be equal to the classical noise intensity $\nu_{\tbox{E}}$. Consequently one obtains the following estimate for individual matrix elements within the band: \begin{eqnarray} \label{e_29} \sigma^2 \ = \ \left|\left(\frac{\partial {\cal H}}{\partial x}\right)_{nm}\right|^2 \ \approx \ \frac{\Delta}{2\pi\hbar} \ \nu_{\tbox{E}} \ \ \ \ \ \mbox{for} \ \ |E_n{-}E_m| < \Delta_b \end{eqnarray} It is important to specify the minimal number of (generic) parameters that are involved in the above analysis. In the classical problem there are just two generic parameters: Namely, $\tau_{\tbox{cl}}$ and $\nu_{\tbox{E}}$. Quantum-mechanics requires the specification of {\em two} additional parameters: Namely, the band width $\Delta_b$ and the mean level spacing $\Delta$.
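The ballistic-diffusive crossover implied by (\ref{e_25}) and (\ref{e_26}) is easy to verify numerically. The following is a minimal sketch, assuming (for illustration only) a Gaussian model for the smoothed power-spectrum, $\tilde{C}_{\tbox{E}}(\omega)=\nu_{\tbox{E}}\exp(-(\omega\tau_{\tbox{cl}})^2/2)$; all parameter values are arbitrary:

```python
import numpy as np

# Model power spectrum (an assumption, not derived from any specific system):
# Ctilde(w) = nu_E * exp(-(w*tau_cl)^2 / 2), so Ctilde ~ nu_E for |w| << 1/tau_cl.
tau_cl, nu_E, V = 1.0, 1.0, 1.0
w = np.linspace(-10.0, 10.0, 200001)
dw = w[1] - w[0]
Ctilde = nu_E * np.exp(-(w * tau_cl) ** 2 / 2)

def delta_E2(t):
    # Eq. (25): delta E^2 = V^2 t * integral dw/(2 pi) Ctilde(w) F_t(w)
    # with F_t(w) = t * sinc(w t / 2)^2 ; note np.sinc(x) = sin(pi x)/(pi x)
    Ft = t * np.sinc(w * t / (2 * np.pi)) ** 2
    return V**2 * t * np.sum(Ctilde * Ft) * dw / (2 * np.pi)

C0 = np.sum(Ctilde) * dw / (2 * np.pi)        # C_E(0), from the inverse transform

t = 0.01 * tau_cl                              # ballistic regime
print(delta_E2(t) / (C0 * (V * t) ** 2))       # -> close to 1
t = 1000 * tau_cl                              # diffusive regime
print(delta_E2(t) / (nu_E * V**2 * t))         # -> close to 1
```

For $t\ll\tau_{\tbox{cl}}$ the ratio to the ballistic expression $C_{\tbox{E}}(0)(Vt)^2$ tends to one, while for $t\gg\tau_{\tbox{cl}}$ the ratio to the diffusive expression $\nu_{\tbox{E}}V^2t$ tends to one, in agreement with the two replacements discussed above.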
The associated dimensionless parameters are: \begin{eqnarray} v_{\tbox{PR}} \ & = \ \mbox{\small scaled velocity} \ & = \ \sqrt{2D_{\tbox{E}} \ \tau_{\tbox{cl}}} \ / \ \Delta_b \\ b \ & = \ \mbox{\small scaled band width} \ & = \ {\Delta_b} \ / \ {\Delta} \end{eqnarray} The specification of $\Delta$ is not dynamically significant as long as $t\ll t_{\tbox{H}}$. Longer times are required in order to resolve individual energy levels. Thus we come to the conclusion that in the time regime $t\ll t_{\tbox{H}}$ there is a {\em single} generic dimensionless parameter, namely $v_{\tbox{PR}}$, that controls QCC. We shall see that the QM definition of slowness (\ref{e_24}) can be cast into the form $v_{\tbox{PR}} \ll 1$. \section{Quantal energy spreading: The conventional FGR picture} \label{s_further} Equation (\ref{e_25}) for the QM energy spreading can be derived using linear response theory, i.e. by following the same steps as in Sec.\ref{s_frc}. However, the simplicity of linear response theory is lost once we try to formulate a controlled version of it. It is difficult to derive, and to get a good understanding of, the breaktime scale $t_{\tbox{prt}}$. It is better to use the conventional version of time-dependent first-order perturbation theory (FOPT), and to view the energy spreading as arising from {\em transitions between energy levels}. The choice of basis for the representation of the dynamics is a crucial step in the analysis. The proper basis for the understanding of energy spreading is the $x$-dependent set of eigenstates $|n(x)\rangle$ of the Hamiltonian ${\cal H}(Q,P;x(t))$. This is the basis that we are going to use later in this paper. In a sense we are going to introduce an {\em improved} version of the FGR picture. However, for the sake of completeness, we would like to discuss in this section the capabilities and the limitations of the {\em conventional} FGR picture.
The conventional FGR picture uses a {\em fixed basis} that is determined by the unperturbed Hamiltonian ${\cal H}(Q,P;x(0))$. It should be clear that transitions between unperturbed energy levels reflect reduced-energy-changes rather than actual-energy-changes (see the corresponding classical definitions in App.\ref{a_evolving}). Therefore the description of the crossover from ballistic to diffusive energy spreading is out-of-reach for this version of perturbation theory. The {\em conventional} FGR picture can be used in order to determine the diffusion coefficient $D_{\tbox{E}}$, as well as the perturbative breaktime $\tau_{\tbox{prt}}$. A detailed derivation can be found in App.\ref{a_FGR}. We use the notation $\tau_{\tbox{prt}}$ rather than $t_{\tbox{prt}}$, because a different version of perturbation theory is involved here. The final result is $D_{\tbox{E}} = \half\nu_{\tbox{E}}^{\tbox{eff}}V^2$, with the effective noise intensity: \begin{eqnarray} \label{e_32} \nu_{\tbox{E}}^{\tbox{eff}} \ = \ \int_{-\infty}^{+\infty} C_{\tbox{E}}(\tau) \ F(\tau) \ S(\tau) \ d\tau \end{eqnarray} where $F(\tau)$ is the correlation function of the driving source, $S(\tau)=\exp(-(\Gamma/2)\tau)$ is the survival amplitude, and $\tau_{\tbox{prt}}=1/\Gamma$. The introduction of $S(\tau)$ is a common ad-hoc improvement of equation (\ref{e_FGRD}). This type of improvement is used in other contexts to get the Wigner-Weisskopf Lorentzian line shape. It approximates the effect of higher orders of time-dependent perturbation theory. It is also important to specify the validity condition of the FGR picture.
It is not difficult to be convinced that the requirement (\ref{e_FGRrq}) can be relaxed slightly, and the actual condition is: \begin{eqnarray} \label{FGRc} \mbox{\bf FGR-condition:} \ \ \ \ \mbox{Either} \ \tau_{\tbox{cl}} \ \mbox{or} \ \tau_c \ \ \ \ll \ \ \ \tau_{\tbox{prt}} \end{eqnarray} Here $\tau_c$ characterizes the correlation function $F(\tau)$, namely, it is the correlation time of the driving source. It should be realized that having $\tau_c\gg\tau_{\tbox{cl}}$ constitutes an obvious variation of the trivial slowness condition (\ref{e_11}). Furthermore, using (\ref{ea_c4}) one can easily conclude that a necessary condition for the applicability of FGR picture is $v_{\tbox{PR}}\ll 1$, which coincides with the quantal slowness condition (\ref{e_24}). Thus, it is {\em not possible in principle} to apply the FGR picture in the limit $\hbar\rightarrow 0$. The consistency of the FGR result (\ref{e_32}) with the linear response result (\ref{e_25}) is not obvious. It is true that (\ref{e_25}) gives a crossover from ballistic to diffusive behavior, where indeed $D_{\tbox{E}} = \half\nu_{\tbox{E}}^{\tbox{eff}}V^2$. But if one takes (\ref{e_25}) seriously for $t\gg t_{\tbox{H}}$, one will come to the conclusion that this diffusion will stop due to recurrences \cite{berry}. The FGR result (\ref{e_32}), due to the presence of $S(\tau)$, does not imply such a conclusion, provided $\tau_{\tbox{prt}} \ll t_{\tbox{H}}$. A vanishingly small result for $D_{\tbox{E}}$ is obtained only in the QM-adiabatic regime where we may have $\tau_{\tbox{prt}}\gg t_{\tbox{H}}$. In the QM-adiabatic regime Landau-Zener transitions between neighboring levels become the predominant mechanism for energy spreading \cite{wilk1}, and the FGR picture becomes of minor importance. \section{The `piston' example and the wall formula} \label{s_piston} The above picture and considerations become much more transparent once applied to cavities with moving walls. 
The Hamiltonian ${\cal H}=E(\mbf{p})+{\cal V}(\mbf{x})$ describes the free motion of a `gas particle', whose canonical coordinates are $Q=\mbf{x}$ and $P=\mbf{p}$, inside a $d$-dimensional space which is confined by some boundary. Unless otherwise specified $E(\mbf{p})=\mbf{p}^2/2m$ where $m$ is the mass of the gas particle. The corresponding velocity will be denoted by $v_{\tbox{E}}$. The boundary is composed of wall-elements, and may have several components. For example, it may consist of a `static' component that defines an interior space, and an additional `moving' component that defines the excluded space of an impenetrable `piston', as in Fig.\ref{f_pistons}. The displacement of the moving wall-elements is parameterized by $x$. The gas particle undergoes elastic collisions with the boundary. The ballistic time will be denoted by $\tau_{\tbox{bl}}$. It is determined by the collision rate with the walls. The derivation in App.\ref{a_Ft} gives the result: \begin{eqnarray} \label{e_39} \frac{1}{\tau_{\tbox{col}}} \ = \ \left\langle\sum_{\tbox{col}}\delta(t-t_{\tbox{col}})\right\rangle \ = \ \frac{1}{2}\frac{\mboxs{Area}}{\mboxs{Volume}} \langle|\cos\theta|\rangle \ v_{\tbox{E}} \end{eqnarray} The $d$-dependent geometrical factor $\langle|\cos\theta|\rangle$ is defined in App.\ref{a_spherical}. For the purpose of the `ballistic-time' definition we should take the total \mboxs{Area} of all the wall elements. For the purpose of calculating an effective collision rate the {\em effective} \mboxs{Area} should be defined as in (\ref{e_41}). For the `piston' example the total effective \mboxs{Area} of the moving-faces of the `piston' may be much smaller than the total \mboxs{Area} of the walls, and consequently the effective time between collisions will be much larger than the ballistic time. For the later QM considerations it is essential to consider `soft walls'. For concreteness we may assume that the wall is realized by a constant force field $f$.
If $z$ is a coordinate perpendicular to a wall element, then the potential barrier is ${\cal V}(z)=0$ for $z<0$ and ${\cal V}(z)=f{\cdot}z$ for $z>0$, up to some maximal value ${\cal V}_{\tbox{wall}}$ well inside the barrier. We assume that the energy $E$ of the particle is much lower than ${\cal V}_{\tbox{wall}}$, and therefore the latter energy scale should be of no significance. For strongly chaotic billiards successive collisions with the `piston' are uncorrelated, and therefore the classical correlation time $\tau_{\tbox{cl}}$ is equal to the collision time with the wall. Namely \begin{eqnarray} \label{e_40} \tau_{\tbox{cl}} \ = \ (2mv_{\tbox{E}})/f \end{eqnarray} Obviously, in the hard wall limit $f\rightarrow\infty$ we have $\tau_{\tbox{cl}}\rightarrow 0$. The displacement of the walls is parameterized by some parameter $x$. With each surface-element $ds$ we can associate a normal unit vector $\mbf{n}$, and a `propagation velocity' which will be denoted by $\hat{\mbf{V}}(\mbf{s})$. The latter is simply the derivative of the wall-element displacement with respect to the controlling parameter $x$. The effective moving-wall area is defined as follows: \begin{eqnarray} \label{e_41} \mboxs{Area} \ = \ \oint (\mbf{n}{\cdot}\hat{\mbf{V}})^2 ds \end{eqnarray} Due to ergodicity there are two different strategies that can be applied in order to calculate $\nu_{\tbox{E}}$. One possibility is to average ${\cal F}(t){\cal F}(t{+}\tau)$ over $(\mbf{x},\mbf{p})$, as implied by the definition (\ref{e_7}), and then to integrate over $\tau$. Schematically the calculation goes as follows: $\langle {\cal F}^2 \rangle$ is simply equal to $f^2$ multiplied by the ratio between the collision-volume ($\mboxs{Area}\times v_{\tbox{E}} \tau_{\tbox{cl}}$) and the total $\mboxs{Volume}$. Note that the latter ratio simply equals $\tau_{\tbox{cl}}/\tau_{\tbox{col}}$. The noise intensity is obtained by further multiplication by $\tau_{\tbox{cl}}$.
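Schematically, combining the above estimates with (\ref{e_40}), and ignoring the $d$-dependent angular factors, one obtains
\begin{eqnarray}
\nu_{\tbox{E}} \ \sim \ \langle {\cal F}^2 \rangle \ \tau_{\tbox{cl}} \ = \ f^2 \ \frac{\tau_{\tbox{cl}}}{\tau_{\tbox{col}}} \ \tau_{\tbox{cl}} \ = \ \frac{(2mv_{\tbox{E}})^2}{\tau_{\tbox{col}}}
\end{eqnarray}
which is manifestly independent of the wall stiffness $f$, in agreement with the impulse-based estimate below.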
The proper calculation should be done as in App.\ref{a_CE}. The other possibility for calculating $\nu_{\tbox{E}}$ is to write ${\cal F}(t)$ as a sum over short impulses and to perform the averaging in the time domain. The magnitude of the impulses is $2mv_{\tbox{E}}$. The noise intensity is simply equal to the square of the impulses multiplied by the collision rate. The exact calculation is done in App.\ref{a_Ft}. Both approaches obviously give the same result: \begin{eqnarray} \label{e_42} \nu_{\tbox{E}} \ = \ 2 \langle|\cos\theta|^3\rangle \ \frac{\mboxs{Area}}{\mboxs{Volume}} \ m^2 v_{\tbox{E}}^3 \end{eqnarray} This result is quite general, but it assumes that successive collisions with the `piston' are uncorrelated. More generally, correlations between successive collisions should be taken into account. The derivation can be done by following an essentially identical computation by Koonin \cite{koonin}, and the result can be cast into the form \begin{eqnarray} \label{e_43} \hspace*{-1cm} \nu_{\tbox{E}} \ = \ \oint\oint ds_2 ds_1 \ (\mbf{n}{\cdot}\hat{\mbf{V}}(\mbf{s}_2)) \ \nu(\mbf{s}_2,\mbf{s}_1) \ (\mbf{n}{\cdot}\hat{\mbf{V}}(\mbf{s}_1)) \end{eqnarray} If one ignores correlations between successive collisions, then one obtains $\nu(\mbf{s}_2,\mbf{s}_1) \ \propto \delta(\mbf{s}_2-\mbf{s}_1)$ and (\ref{e_43}) reduces to (\ref{e_42}). Using the ${\cal F\!D}$ relation one obtains the following generalized `wall formula': \begin{eqnarray} \label{e_44} \mu_{\tbox{E}} \ = \ 2 \langle|\cos\theta|\rangle \ \frac{\mboxs{Area}}{\mboxs{Volume}} \ mv_{\tbox{E}} \end{eqnarray} The familiar $d{=}3$ version of the wall formula is obtained by substituting $\langle|\cos(\theta)|\rangle=1/2$. As in the case of $\nu_{\tbox{E}}$ we can try to derive this result using a simple-minded time-domain approach. See App.\ref{a_Ft}. It turns out that only {\em half} of the correct result is obtained.
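The $d{=}3$ geometrical factors that enter (\ref{e_42}) and (\ref{e_44}) are easy to check by a Monte-Carlo average, assuming the average is taken over an isotropic velocity distribution (consistent with the quoted value $\langle|\cos\theta|\rangle=1/2$). A minimal sketch, with an arbitrarily chosen sample size and seed:

```python
import numpy as np

# Isotropic directions in d=3: normalize Gaussian vectors; cos(theta) is then
# the z-component of the unit vector, which is uniformly distributed on [-1, 1].
rng = np.random.default_rng(0)
v = rng.normal(size=(1_000_000, 3))
cos_theta = v[:, 2] / np.linalg.norm(v, axis=1)

print(np.mean(np.abs(cos_theta)))        # <|cos|>   -> 1/2  (wall-formula factor)
print(np.mean(np.abs(cos_theta) ** 3))   # <|cos|^3> -> 1/4  (noise-intensity factor)
```

Both averages follow analytically from the uniformity of $\cos\theta$ in three dimensions: $\int_0^1 z\,dz = 1/2$ and $\int_0^1 z^3\,dz = 1/4$.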
Alternatively, the correct result (\ref{e_44}) can be obtained by extending the standard `kinetic' derivation. See App.\ref{a_kinetic}. The kinetic derivation demonstrates that (\ref{e_44}) is more general than it seems at first sight. It applies to any velocity-momentum dispersion relation. The mass $m=(dv/dp)^{-1}$ may be energy dependent. The QM calculation of $\nu_{\tbox{E}}^{\tbox{eff}}$ requires the knowledge of the quantal $C_{\tbox{E}}(\tau)$, and we should also have a proper understanding of $F(\tau)$. For the time being let us assume that the effective $\tau_c$ is much larger than $\tau_{\tbox{cl}}$. This means that the transitions are resonance-limited, and detailed knowledge of $F(\tau)$ becomes irrelevant. If the De-Broglie wavelength $\lambda_{\tbox{E}}=2\pi\hbar/(mv_{\tbox{E}})$ is much smaller than the other (classical) scales, then QCC is expected, as discussed previously with respect to (\ref{e_27}). However, it is also possible to make a direct estimate of the matrix elements that appear in (\ref{e_27}). See App.\ref{a_matrix}. The results are in complete agreement with our semiclassical expectations. Indeed, the bandwidth is determined by the collision time with the (soft) walls (see App.\ref{a_softwall}), and the power-law decay of matrix elements outside the band can be associated with the discontinuity (see App.\ref{a_CE}) in the derivative of the classical $C_{\tbox{E}}(\tau)$ at $\tau=0$. The expression for the effective noise intensity can be cast into the form of (\ref{e_43}) with \begin{eqnarray} \label{e_45} \hspace*{-2cm} \nu(\mbf{s}_2-\mbf{s}_1) \ = \ \Omega_d \ \frac{\mboxs{Area}}{\mboxs{Volume}} \ m^2 v_{\tbox{E}}^3 \ \frac{1}{\lambda_{\tbox{E}}^{d{-}1}} \ \left(\mbox{Sinc}\left( \frac{2\pi}{\lambda_{\tbox{E}}} |\mbf{s}_2-\mbf{s}_1|\right)\right)^2 \end{eqnarray} The Sinc function, as well as other notations, are defined in App.\ref{a_spherical}.
The QM result coincides with the classical result if $\lambda_{\tbox{E}}$ is small compared with the classical length scales that describe the $\mbf{s}$-dependence of $\mbf{n}{\cdot}\mbf{V}(\mbf{s})$. It is also assumed that $\lambda_{\tbox{E}}$ is small compared with the surface radius-of-curvature, else further corrections are required \cite{koonin}. \newpage \section{The route to stochastic behavior} \label{s8} In this section we shall introduce a general phase-space formulation for the theory of energy-spreading. The main mathematical object of the study, namely the kernel $P_t(n|m)$, will be defined. In order to go smoothly from the classical theory to the QM theory it is essential to use proper notations. From now on we use the variable $n = \Omega(E)$ instead of $E$. See definitions in Sec.\ref{s_surfaces}. The transition probability kernel $P_t(n|m)$ is defined as the projection of an evolving state on the instantaneous set of energy-states. It is also possible to define a parametric kernel $P(n|m)$. The latter depends on the displacement $\delta x$ but not on the actual time that it takes to realize this displacement. The definitions are: \begin{eqnarray} \label{e_47} P_t(n|m) \ & = & \ \mbox{trace} ( \ \rho_{n,x(t)} \ {\cal U}(t) \ \rho_{m,x(0)} \ ) \\ P(n|m) \ \ & = & \ \mbox{trace} ( \ \rho_{n,x(t)} \ \rho_{m,x(0)} \ ) \end{eqnarray} In the above definitions the initial energy-surface is $|m(x(0))\rangle$, and the associated phase-space density is $\rho_{m,x(0)}(Q,P)$. In the QM-case $|m(x(0))\rangle$ is an energy-eigenstate and $\rho_{m,x(0)}(Q,P)$ is the associated Wigner function. The evolving surface/state is represented by ${\cal U}(t) \ \rho_{m,x(0)}$, where ${\cal U}(t)$ is either the classical Liouville propagator or its QM version. In the classical case it simply re-positions points in phase-space. In the QM case it propagates a Wigner function and it may have a more complicated structure. 
The trace operation is just a $dQdP$ integral over phase-space. In the QM-case the definitions of $P(n|m)$ and $P_t(n|m)$ can be cast into a much simpler form using Dirac's notations: See (\ref{e_68}) and (\ref{e_69}). \begin{figure} \begin{center} \leavevmode \epsfysize=6.0in \epsffile{overlaps.eps} \end{center} \caption{\protect\footnotesize {\em Upper Left:} Phase space illustration of the initial and of the instantaneous set of parametric energy surfaces; Plot of the associated $P(n|m)$, where the classical behavior is indicated by the black lines, and the QM behavior is represented by the grey filling. Detailed QCC is assumed. In the QM case classical sharp-cutoffs are being smeared. {\em Upper Right:} Illustration of a typical non-generic feature. In the QM case the classical delta-singularity is being smeared. {\em Lower Right:} The same non-generic feature manifests itself in the `piston' example. {\em Lower Left:} In the perturbative case there is no detailed QCC. The kernel is characterized by a core-tail structure. The tail is limited by the bandwidth of the coupling matrix-elements. If $\delta x$ is sufficiently small the core is just a Kronecker delta.} \label{f_surfaces} \end{figure} In the classical case the kernel $P(n|m)$ reflects the parametric correlations between two sets of energy surfaces (Fig.\ref{f_surfaces} upper left). Consequently non-Gaussian features may manifest themselves. An important special non-Gaussian feature is encountered in many specific examples where $x$ affects only a tiny portion of the energy surface (Fig.\ref{f_surfaces} upper right). In the `piston' example this is the case because $({\partial {\cal H}}/{\partial x}) = 0$ unless $Q$ is near the face of the piston. Consequently $P(n|m)$ will have a $\delta$-singularity for $n=m$. The classical scenario for $P_t(n|m)$ consists of three time regimes.
For short times we have the classical {\em sudden approximation}: \begin{eqnarray} \label{e_49} P_t(n|m) \ \approx \ P(n|m) \ \ \ \ \ \ \mbox{for $t \ll \tau_{\tbox{cl}}$} \end{eqnarray} See Fig.\ref{f_evolving} and App.\ref{a_evolving} for more details. For longer times we have the classical {\em adiabatic approximation}, or more precisely we have diffusive spreading: \begin{eqnarray} \label{e_50} P_t(n|m) \ \approx \ \mbox{Gaussian}(n{-}m) \ \ \ \ \ \ \mbox{for $t_{\tbox{erg}} \ll t \ll t_{\tbox{frc}}$} \end{eqnarray} For $t \gg t_{\tbox{frc}}$ the kernel $P_t(n|m)$ is no longer a narrow Gaussian that is centered around $n=m$. Using (\ref{e_9}) it is easily observed that $t>t_{\tbox{frc}}$ is equivalent to $\dot{{\cal Q}}t > \sqrt{2D_{\tbox{E}}t}$, meaning that the systematic energy change becomes larger than the width of the spreading. Thus $t_{\tbox{frc}}$ should be regarded as the {\em breaktime} for the classical adiabatic approximation. On time scales larger than $t_{\tbox{erg}}$ one may argue that the energy spreading is like a {\em stochastic process}, and consequently the diffusive behavior should persist beyond $t_{\tbox{frc}}$. A precise formulation of this point will now be presented. For $t\gg t_{\tbox{erg}}$ the evolving surface ${\cal U}(t) |m(x(0))\rangle$ becomes very convoluted due to `mixing'. As long as one does not insist on looking for fine structures, $\rho_t(Q,P)$ can be replaced by its smeared version. Any `tangential' non-homogeneity in phase space will be washed away due to the ergodic behavior, and therefore the smeared $\rho_t(Q,P)$ is fully characterized by the projected distribution $\rho_t(n)$. Obviously this statement is true only if $t_{\tbox{erg}}$ is much smaller than the time scale $t_{\tbox{frc}}$ that characterizes the `transverse' spreading.
Thus the dynamics acquires the following stochastic property: \begin{eqnarray} \label{e_51} \hspace*{-2cm} P_{t_1{+}t_2}(n|m) \approx \int P_{t_2}(n|n') \ dn' \ P_{t_1}(n'|m) \ \ \ \ \ \mbox{provided} \ \ \ t_1,t_2 \gg t_{\tbox{erg}} \end{eqnarray} Assuming that $t_{\tbox{erg}} \ll t_{\tbox{frc}}$, it is possible to define an intermediate time $t_1=t/N$, where $N$ is some large integer, such that $t_{\tbox{erg}} \ll t_1 \ll t_{\tbox{frc}}$. Using the stochastic property (\ref{e_51}) one can write $P_t(n|m)$ as a convolution of the $N$ kernels $P_{t_1}(n|m)$. Applying the same considerations as in the derivation of the central limit theorem, we come to the conclusion that $P_t(n|m)$ will become a spreading Gaussian that obeys the diffusion equation \begin{eqnarray} \label{e_18a} \frac{\partial \rho}{\partial t} \ = \ \frac{\partial}{\partial n} \left(D_n \frac{\partial}{\partial n} \rho \right) \end{eqnarray} which is equivalent to (\ref{e_18}). Note that $D_n=g(E)^2 D_{\tbox{E}}$. This description holds on time scales larger than $t_{\tbox{erg}}$, irrespective of the detailed structure of $P_{t_1}(n|m)$. Only the second moment of the latter is important for the determination of the diffusion coefficient. The argumentation in favor of long-time stochastic behavior is more subtle in the QM case. Using obvious notations the stochastic assumption (\ref{e_51}) is: \begin{eqnarray} \label{e_53} \hspace*{-2cm} \Big\langle \Big| \langle n | \mbf{U}_{t_2} \mbf{U}_{t_1} | m \rangle \Big|^2 \Big\rangle \ \approx \ \sum_{n'} \Big\langle \Big| \langle n | \mbf{U}_{t_2} | n' \rangle \Big|^2 \Big\rangle \Big\langle \Big| \langle n' | \mbf{U}_{t_1} | m \rangle \Big|^2 \Big\rangle \end{eqnarray} Later we shall argue that $\mbf{U}$ is a banded matrix. It is true in general that in the absence of correlations between successive unitary operations we will always have a stochastic diffusive behavior \cite{qkr}. 
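The central-limit argument above can be made concrete with a few lines of numerics. In the following sketch the single-step kernel is an arbitrary non-Gaussian choice (an illustration only, not derived from any specific Hamiltonian):

```python
import numpy as np

# Single-step spreading kernel on integer n: core-dominated and non-Gaussian.
n = np.arange(-5, 6)
P1 = np.exp(-np.abs(n))
P1 /= P1.sum()

def moments(P):
    x = np.arange(len(P)) - (len(P) - 1) / 2
    mu = np.sum(x * P)
    return mu, np.sum((x - mu) ** 2 * P)

_, var1 = moments(P1)
N = 200
P = P1.copy()
for _ in range(N - 1):
    P = np.convolve(P, P1)       # stochastic composition of N independent steps

mu, var = moments(P)
print(var / (N * var1))          # -> 1 : the second moment grows diffusively
x = np.arange(len(P)) - (len(P) - 1) / 2
gauss = np.exp(-x**2 / (2 * var)) / np.sqrt(2 * np.pi * var)
print(np.max(np.abs(P - gauss)))  # maximal deviation from a Gaussian: small
```

The variance is exactly additive under convolution, and after many steps the profile is Gaussian to high accuracy, which is the content of (\ref{e_51}) together with (\ref{e_18a}).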
Thus, in order to establish a stochastic behavior in the QM case we should look for a time scale $\tau_c$ that marks the loss of phase-correlation. For $t_1,t_2 \gg \tau_c$ we can argue that the off-diagonal `interference' terms in the matrix multiplication $\mbf{U}_{t_2}\mbf{U}_{t_1}$ will be averaged to zero. One way to establish the existence of a time scale $\tau_c$ is just to assume {\em irregular} driving. As an example let us assume that we have a `piston' that is pushed back and forth in an arbitrary aperiodic fashion, meaning that $\dot{x}(t)$ becomes uncorrelated on a time scale that will be denoted by $\tau_c^{\tbox{drv}}$. The irregular driving is like noise, and consequently the interference contribution is averaged to zero \cite{qkr}. For {\em periodic} driving, the diffusion may be limited by a `localization' effect, as in the quantum-kicked-rotator model \cite{qkr}. The study of this latter issue is beyond the scope of this paper. As in the classical case we will be able to establish a diffusive behavior on an {\em intermediate} time scale. QM considerations will be limited either by a semiclassical breaktime $t_{\tbox{scl}}$ or by a perturbative breaktime $t_{\tbox{prt}}$, which are analogous to the classical breaktime $t_{\tbox{frc}}$. If we do not have the separation of time scales ($\tau_c^{\tbox{drv}} \ll t_{\tbox{prt}}$ or $\tau_c^{\tbox{drv}} \ll t_{\tbox{scl}}$) then we should wonder whether there is an {\em intrinsic} $\tau_c$. An intrinsic $\tau_c$ is expected to be either equal to or larger than the classical time $t_{\tbox{erg}}$. Indeed, in the perturbative regime, where $t_{\tbox{erg}}\ll t_{\tbox{prt}}$, it will be argued that effectively $\tau_c \gg t_{\tbox{erg}}$. In the perturbative regime we are not able to give a general mathematical proof that there is an effective $\tau_c$ such that the separation-of-time-scales requirement ($\tau_c \ll t_{\tbox{prt}}$) is satisfied.
However, we are going to demonstrate that a crossover to diffusive growth of the second moment happens {\em before} the breaktime $t_{\tbox{prt}}$. The {\em assumption} that the diffusive behavior {\em persists} beyond $t_{\tbox{prt}}$ with the {\em same} diffusion coefficient is the cornerstone of the common FGR picture. We are not going to study in this paper the general conditions for having stochastic-like behavior. \begin{figure} \begin{center} \leavevmode \epsfysize=2.5in \epsffile{route.eps} \vspace*{5mm} \frame{\ \ \ \ \ \ \mpg{9}{ \ \\ $t_{\tbox{frc}} \ = \ \mbox{breakdown of classical perturbation theory}$ \\ $t_{\tbox{prt}} \ = \ \mbox{breakdown of quantal perturbation theory}$ \\ $t_{\tbox{scl}} \ = \ \mbox{breakdown of semiclassical approximation}$ \\ $\tau_{\tbox{cl}} < t_{\tbox{frc}}$ $ \ \ \ \ \ \ \leadsto \ \ \ \ \ \ $ \mbox{classical definition of slowness} \\ $\tau_{\tbox{cl}} < t_{\tbox{prt}}$ $ \ \ \ \ \ \ \leadsto \ \ \ \ \ \ $ \mbox{quantal definition of slowness} \\ $\tau_{\tbox{cl}} < t_{\tbox{scl}}$ $ \ \ \ \ \ \ \leadsto \ \ \ \ \ \ $ \mbox{not a restrictive condition} \\ $\tau_{\tbox{SC}} < \tau_{\tbox{cl}}$ $ \ \ \ \ \ \ \leadsto \ \ \ \ \ \ $ \mbox{quantal definition of fastness} \ \\ }} \end{center} \caption{\protect\footnotesize Illustration of the various time scales involved in constructing either a classical, a semiclassical or a perturbative theory of dissipation. We use the notation $\tau_{\tbox{SC}}=\delta x_{\tbox{SC}}/V$. The accompanying table summarizes the associated requirements for the applicability of each of those theories. Note that the quantum mechanical definitions of slowness and of fastness are not complementary. Note also that slowness in the classical sense is always assumed.} \label{f_route} \end{figure} The rest of this paper (Sections 9-20) is devoted to the study of the ballistic-diffusive crossover. This crossover is `captured' either by perturbation theory or by the semiclassical theory, provided the respective slowness or fastness conditions are satisfied. For simplicity we assume a generic system where $\tau_{\tbox{cl}}$ can be identified with $t_{\tbox{erg}}$. The persistence of the stochastic behavior for arbitrarily long times is assumed. The derivation of the classical ${\cal F\!D}$ relation consists of two steps: The {\em first step} establishes the local diffusive behavior for short ($t\ll t_{\tbox{frc}}$) time scales, and $D_{\tbox{E}}$ is determined; The {\em second step} establishes the global stochastic behavior on large ($t\gg t_{\tbox{erg}}$) time scales. The various time scales involved are illustrated in Fig.\ref{f_route}. The validity of the classical derivation depends on the slowness condition (\ref{e_10}). The validity of the analogous QM theory is further restricted by the quantal slowness condition (\ref{e_24}). However, an alternative derivation of the ${\cal F\!D}$ relation in the QM case can be based on semiclassical considerations. The limitations of the latter strategy are illustrated in Fig.\ref{f_route}, and further discussed in the next section. \section{The semiclassical picture and detailed QCC} The main objects of our discussion are the transition probability kernel $P_t(n|m)$ and the parametric kernel $P(n|m)$ which have been introduced in the previous section. Recall that we are measuring phase-space volume ($n{=}\Omega(E)$) in units of $(2\pi\hbar)^d$. This way we can obtain a `classical approximation' for the QM kernel, simply by making $n$ and $m$ integer variables. If the `classical approximation' is similar to the QM kernel, then we say that there is {\em detailed} QCC. If only the second moment is similar, then we say that there is {\em restricted} QCC.
In the present section we are going to discuss the conditions for having detailed QCC, using simple {\em semiclassical} considerations. In the next paragraph we discuss the conditions for having detailed QCC in the computation of the parametric kernel $P(n|m)$. Then we discuss the further restrictions on detailed QCC that are associated with the computation of the actual kernel $P_t(n|m)$. The Wigner function $\rho_{n,x}(Q,P)$, unlike its classical microcanonical analog, has a non-trivial transverse structure. For a curved energy surface the transverse profile looks like an Airy function, and it is characterized by the width \cite{airy} \begin{eqnarray} \label{e_56} \Delta_{\tbox{SC}} \ = \ \left( \varepsilon_{\tbox{cl}} \left( \frac{\hbar}{\tau_{\tbox{cl}}} \right)^2 \right)^{1/3} \end{eqnarray} where $\varepsilon_{\tbox{cl}}$ is a classical energy scale. For the `piston' example $\varepsilon_{\tbox{cl}}=E$ is the kinetic energy of the gas particle. The classical $P(n|m)$ has a dispersion \begin{eqnarray} \label{e_57_0} \delta E_{\tbox{cl}} \ = \ \sqrt{\left\langle \left(\frac{\partial {\cal H}}{\partial x}\right)^2 \right\rangle_{\tbox{E}}} \ \delta x \end{eqnarray} which characterizes the transverse distance between the intersecting energy-surfaces $|m(x)\rangle$ and $|n(x{+}\delta x)\rangle$. In the generic case, it should be legitimate to neglect the transverse profile of the Wigner function provided $\delta E_{\tbox{cl}} \gg \Delta_{\tbox{SC}}$. This condition can be cast into the form $\delta x \gg \delta x_{\tbox{SC}}$ where \begin{eqnarray} \label{e_57_1} \delta x_{\tbox{SC}} \ = \ \frac{\Delta_{\tbox{SC}}} {\sqrt{\nu_{\tbox{E}}/\tau_{\tbox{cl}}}} \ \propto \ \hbar^{2/3} \end{eqnarray} For the `piston' example see \cite{wls}. Another important parametric scale is defined in a similar fashion: We shall see that it is {\em not} legitimate to ignore the transverse profile of the Wigner function if $\delta E_{\tbox{cl}} < \Delta_b$.
This latter condition can be cast into the form $\delta x \ll \delta x_{\tbox{prt}}$ where \begin{eqnarray} \label{e_57_2} \delta x_{\tbox{prt}} \ = \ \frac{\Delta_b} {\sqrt{\nu_{\tbox{E}}/\tau_{\tbox{cl}}}} \ = \ \frac{2\pi\hbar} {\sqrt{\nu_{\tbox{E}} \tau_{\tbox{cl}}}} \end{eqnarray} Typically the two parametric scales are well separated ($\delta x_{\tbox{prt}} \ll \delta x_{\tbox{SC}}$). If we have $\delta x \ll \delta x_{\tbox{prt}}$ then the parametric kernel $P(n|m)$ is characterized by a perturbative core-tail structure which is illustrated in Fig.\ref{f_surfaces} and further discussed in the next section. If we have $\delta x \gg \delta x_{\tbox{SC}}$ then the transverse profile of Wigner function can be ignored, and we get detailed QCC. Obviously, `detailed QCC' does not mean complete similarity. The classical kernel is typically characterized by various non-Gaussian features, such as sharp cutoffs, delta-singularities and cusps. These features are expected to be smeared in the QM case. The discussion of the latter issue is beyond the scope of the present paper \cite{wls}. We turn now to discuss the actual transition probability kernel $P_t(n|m)$. Here we encounter a new restriction on QCC: The evolving surface ${\cal U}(t)|m\rangle$ becomes more and more convoluted as a function of time. This is because of the mixing behavior that characterizes chaotic dynamics. For $t\gg t_{\tbox{scl}}$ the intersections with a given instantaneous energy surface $|n\rangle$ become very dense, and associated QM features can no longer be ignored. The time scale $t_{\tbox{scl}}$ can be related to the failure of the stationary phase approximation \cite{heller}. The breaktime scale $t_{\tbox{scl}}$ of the semiclassical theory is analogous to the breaktime scale $t_{\tbox{prt}}$ of perturbation theory, as well as to the breaktime scale $t_{\tbox{frc}}$ of the classical theory. 
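The claimed separation of the two parametric scales can be verified directly from their $\hbar$-dependence: dividing (\ref{e_57_2}) by (\ref{e_57_1}) and using (\ref{e_56}) gives
\begin{eqnarray}
\frac{\delta x_{\tbox{prt}}}{\delta x_{\tbox{SC}}} \ = \ \frac{\Delta_b}{\Delta_{\tbox{SC}}} \ = \ \frac{2\pi\hbar/\tau_{\tbox{cl}}}{\left(\varepsilon_{\tbox{cl}}(\hbar/\tau_{\tbox{cl}})^2\right)^{1/3}} \ = \ 2\pi\left(\frac{\hbar}{\tau_{\tbox{cl}}\varepsilon_{\tbox{cl}}}\right)^{1/3} \ \propto \ \hbar^{1/3}
\end{eqnarray}
Hence in the semiclassical regime $\hbar/\tau_{\tbox{cl}} \ll \varepsilon_{\tbox{cl}}$ one indeed has $\delta x_{\tbox{prt}} \ll \delta x_{\tbox{SC}}$.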
In order to establish the crossover from ballistic to diffusive energy spreading using a semiclassical theory we should satisfy the condition $\tau _{\tbox{cl}} < t_{\tbox{scl}}$. This velocity-independent condition is not very restrictive. On the other hand we should also satisfy the condition $\delta x \gg \delta x_{\tbox{SC}}$, with $\delta x = V\tau _{\tbox{cl}}$. The latter condition implies that the applicability of the semiclassical theory is restricted to relatively high velocities. We can define: \begin{eqnarray} \label{e_58} v_{\tbox{SC}} \ = \ \sqrt{ D_{\tbox{E}} \ \tau_{\tbox{cl}} } \ / \ \Delta_{\tbox{SC}} \end{eqnarray} If $v_{\tbox{SC}}\gg 1$ then the above semiclassical analysis is applicable for the analysis of the crossover from ballistic to diffusive energy spreading. \section{The perturbative picture and restricted QCC} Detailed QCC between the quantal $P(n|m)$ and the classical $P(n|m)$ is not guaranteed if $\delta x < \delta x_{\tbox{SC}}$. {\em A fortiori}, this statement holds also for $P_t(n|m)$. For sufficiently small parametric changes $\delta x$, or for sufficiently short times $t$, perturbation theory becomes a useful tool for the analysis of these kernels. A detailed formulation of perturbation theory is postponed to later sections. In the present section we sketch the main observations. We are going to argue that for small enough $\delta x$ there is no {\em detailed} QCC between the quantal and the classical kernels, but there is still {\em restricted} QCC that pertains to the second moment of the distributions. Large enough $\delta x$ is a necessary condition for getting detailed QCC. The following paragraph discusses the parametric evolution of $P(n|m)$, and the rest of this section discusses the actual evolution of $P_t(n|m)$.
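The meaning of $v_{\tbox{SC}}$ as a dimensionless velocity becomes transparent if one uses the heuristic estimate $D_{\tbox{E}} \sim \nu_{\tbox{E}} V^2$ for the classical diffusion coefficient (ballistic spreading (\ref{e_57_0}) with correlation time $\tau_{\tbox{cl}}$); this identification is made here only for illustration:

```latex
v_{\tbox{SC}} \;=\; \frac{\sqrt{D_{\tbox{E}}\,\tau_{\tbox{cl}}}}{\Delta_{\tbox{SC}}}
\;\sim\; \frac{V\tau_{\tbox{cl}}\,\sqrt{\nu_{\tbox{E}}/\tau_{\tbox{cl}}}}{\Delta_{\tbox{SC}}}
\;=\; \frac{V\tau_{\tbox{cl}}}{\delta x_{\tbox{SC}}}
```

Thus $v_{\tbox{SC}}\gg 1$ is just a restatement of $\delta x = V\tau_{\tbox{cl}} \gg \delta x_{\tbox{SC}}$.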
For extremely small $\delta x$ the parametric kernel $P(n|m)$ has a standard `first-order' perturbative structure, namely: \begin{eqnarray} \label{e_59} P(n|m) \ \approx \ \delta_{nm} \ + \ \mbox{Tail}(n{-}m) \ \ \ \ \ \mbox{for $\delta x \ll \delta x_c^{\tbox{qm}}$} \end{eqnarray} where $\delta x_c^{\tbox{qm}}$ is defined as the parametric change that is needed in order to mix neighboring levels. For larger values of $\delta x$ neighboring levels are mixed non-perturbatively and consequently we have a more complicated spreading profile: \begin{eqnarray} \label{e_60} P(n|m) \ \approx \ \mbox{Core}(n{-}m) \ + \ \mbox{Tail}(n{-}m) \ \ \ \ \ \mbox{for $\delta x \ll \delta x_{\tbox{prt}}$} \end{eqnarray} In the perturbative case ($\delta x \ll \delta x_{\tbox{prt}}$) the second moment of $P(n|m)$ is generically determined by the `tail'. It turns out that the QM expression for the second moment is a classical look-alike, and consequently {\em restricted} QCC is satisfied. The {\em core} of the quantal $P(n|m)$ is of non-perturbative nature. The {\em core} is the component that is expected to become similar (eventually) to the classical $P(n|m)$. A large perturbation $\delta x \gg \delta x_{\tbox{prt}}$ makes the {\em core} spill over the perturbative {\em tail}. If we also have $\delta x \gg \delta x_{\tbox{SC}}$, then we can rely on detailed QCC in order to estimate $P(n|m)$. The parametric scales $\delta x_c^{\tbox{qm}}$ and $\delta x_{\tbox{prt}}$ are easily estimated in the case of the `piston' example. The displacement which is needed in order to mix levels is much smaller than the de Broglie wavelength, namely $\delta x_c^{\tbox{qm}}\approx (\lambda_{\tbox{E}}^{d{+}1}/\mbox{\small Area})^{\tbox{1/2}}$. The displacement which is needed in order to mix core and tail is much larger than the de Broglie wavelength, namely $\delta x_{\tbox{prt}}\approx (\tau_{\tbox{col}}/{\tau_{\tbox{cl}}})^{\tbox{1/2}} \lambda_{\tbox{E}}$.
For a more careful discussion of these parametric scales see \cite{wls,prm} and the concluding section. \begin{figure} \begin{center} \leavevmode \epsfysize=3.0in \epsffile{regimes.eps} \end{center} \caption{\protect\footnotesize The various crossovers in the time evolution of $P_t(n|m)$. The vertical axis is $x(t)=Vt$. The parametric scales $\delta x_c^{\tbox{qm}}$ and $\delta x_{\tbox{prt}}$ are indicated by horizontal lines. The horizontal axis is the velocity $V$. It is divided by vertical dashed lines into various velocity regimes. In each velocity regime there is a different dynamical route. The various crossovers are explained in the text and the various symbols are easily associated with having either a Gaussian or some non-Gaussian spreading profile. In particular the perturbative spreading profile is either with or without a non-trivial core, and its tail is either band-limited or resonance-limited. } \label{f_regimes} \end{figure} The dynamical evolution of $P_t(n|m)$ is related to the associated parametric evolution of $P(n|m)$. We can define a perturbative time scale $t_{\tbox{prt}}$ which is analogous to $\delta x_{\tbox{prt}}$. For $t\ll t_{\tbox{prt}}$ the kernel $P_t(n|m)$ is characterized by a core-tail structure that can be analyzed using perturbation theory. In particular we can determine the second moment of the energy distribution, and we can establish {\em restricted} QCC. If the second moment of the core-tail structure is proportional to $t^2$, we shall say that there is a ballistic-like behavior. If it is proportional to $t$, we shall say that there is a diffusive-like behavior. {\em In both cases the actual energy distribution is not classical-like}, and therefore the terms `ballistic' and `diffusive' should be used with care. We now give a brief overview of the various scenarios in the time evolution of $P_t(n|m)$. These are illustrated in Fig.\ref{f_regimes}. In later sections we give a detailed account of the theory.
For {\em slow velocities} such that $\tau_{\tbox{cl}}\ll t_{\tbox{prt}}$, there is a crossover from ballistic-like spreading to diffusive-like spreading at $t\sim \tau_{\tbox{cl}}$. In spite of the lack of detailed QCC there is still restricted QCC as far as this ballistic-diffusive crossover is concerned. If the breakdown of perturbation theory happens before the Heisenberg time ($t_{\tbox{prt}} \ll t_{\tbox{H}}$) it is implied that there is a second crossover at $t \sim t_{\tbox{prt}}$ from a diffusive-like spreading to a genuine diffusive behavior. Once a stochastic behavior is established, the time scale $t_{\tbox{H}}$ for recurrences becomes non-effective, and we expect a long-time classical-like behavior. {\em Extremely slow velocities} are defined by the inequality $t_{\tbox{H}} \ll t_{\tbox{prt}}$. This inequality implies that there are QM recurrences {\em before} the expected crossover from diffusive-like spreading to genuine diffusion. This is the QM adiabatic regime. In the $t\rightarrow\infty$ limit Landau-Zener transitions will dominate the energy spreading, and consequently neither detailed nor restricted QCC is a-priori expected \cite{wilk1}. For {\em fast velocities} we have $t_{\tbox{prt}} \ll \tau_{\tbox{cl}}$. There is a crossover at $t\sim t_{\tbox{prt}}$ from ballistic-like spreading to a genuine ballistic behavior, and at $t \sim \tau_{\tbox{cl}}$ there is a second crossover from genuine-ballistic to genuine-diffusive spreading. The description of this classical-type crossover is out of reach for perturbation theory, but we can use the semiclassical picture instead. Note that the semiclassical definition of `fastness' and the perturbative definition of `slowness' imply that there is a `gap' between the corresponding regimes. However, the interpolation is smooth, and therefore for simple systems surprises are not expected.
\newpage \section{Actual versus Parametric Evolution} The QM time evolution is governed by the time dependent Schroedinger equation with the time dependent Hamiltonian ${\cal H}(x(t))$. In practice it is quite unnatural to use a fixed basis. In the case of the `piston' example, for instance, one may propose to use the fixed basis that consists of the eigenfunctions of the empty cavity. However, the matrix elements of the `piston' may be very large and even infinite if we assume impenetrable walls. Thus, it is much more natural to use the so-called adiabatic basis, though the time evolution is not necessarily of adiabatic nature. The evolving state-vector is expanded as follows: \begin{eqnarray} \label{e_64} |\psi(t)\rangle \ = \ \sum_{n} a_n(t) \ |n(x(t))\rangle \end{eqnarray} Using standard manipulations we obtain the Schroedinger-like equation \begin{eqnarray} \label{e_65} \frac{da_n}{dt} \ = \ -\frac{i}{\hbar}E_n \ a_n -\frac{i}{\hbar}\sum_m \mbf{W}_{nm}(x(t)) \ a_m \end{eqnarray} where the off-diagonal elements of ${\mathbf W}_{nm}$ are \begin{eqnarray} \label{e_66} \mbf{W}_{nm} \ = \ \frac{\hbar}{i} \Big\langle n \Big| \frac{d}{dt} m \Big\rangle \ = \ i\frac{\hbar\dot{x}}{E_n{-}E_m} \Big\langle n \Big| \frac{\partial{\cal H}}{\partial x} \Big| m \Big\rangle \end{eqnarray} and we use the `gauge' convention ${\mathbf W}_{nm}{=}0$ for $n{=}m$. (Only one parameter is being changed and therefore Berry's phase is not an issue.) Equation (\ref{e_65}) will now be our starting point. It is defined in terms of $\mbf{W}_{nm}(x)$ and in terms of a set of numbers $\{E_n\}$. Note that as long as $t \ll t_{\tbox{H}}$ we can {\em ignore} the dependence of $E_n$ on the changing parameter $x$.
The formal solution of (\ref{e_65}) will be written as follows: \begin{eqnarray} \label{e_67} a_n(t) \ = \ \sum_{m} \mbf{U}_{nm}(t) \ a_m(0) \end{eqnarray} If all the $E_n$ are set equal to the same constant (or, without loss of generality, to zero), then (\ref{e_65}) describes the time evolution of a frozen wavefunction. In other words, (\ref{e_65}) without the $\{E_n\}$ is equivalent to the trivial equation $d\psi/dt=0$. In this special case the formal solution (\ref{e_67}) will be written with $\mbf{T}_{nm}(x(t))$ instead of $\mbf{U}_{nm}(t)$. Note that if the $\{E_n\}$ are taken away from (\ref{e_65}), then $\dot{x}$ can be scaled out and therefore the $x$ dependence rather than the $t$ dependence becomes significant. The transition probability kernel and the parametric kernel can be written as: \begin{eqnarray} \label{e_68} P_t(n|m) \ = \ |{\mathbf U}_{nm}(t)|^2 \ = \ \left|\langle n(x(t))| {\mathbf U}(t) | m(x(0)) \rangle \right|^2 \\ \label{e_69} P(n|m) \ = \ |{\mathbf T}_{nm}(x)|^2 \ = \ \left|\langle n(x)| m(x(0)) \rangle \right|^2 \end{eqnarray} From now on we shall refer to the $t$-dependent evolution which is represented by $\mbf{U}_{nm}(t)$ as the {\em actual evolution} (AE). To the $t$-dependent evolution which is represented by $\mbf{T}_{nm}(x(t))$ we shall refer as {\em parametric evolution} (PE). For PE the velocity $\dot{x}=V$ plays no role, and it can be scaled out from the above equation. Consequently, for PE, parametric scales and temporal scales are trivially related via the scaling transformation $\delta x = V \tau$. It is important to realize that in a certain sense, defined below, the AE coincides with the PE for short times $t \ll t_{\tbox{sdn}}$. This is the QM-{\em sudden} approximation. The detailed picture is as follows: We start with some initial state $|m\rangle$. After time $t$ there will be some non-vanishing probability to find the system in a certain energy range $\delta E(t)$ around $E_m$.
As long as $\delta E(t) \ll \hbar/t$ the corresponding energy levels $E_n$ within $\delta E$ are not resolved. The latter condition defines a time interval $t \ll t_{\tbox{sdn}}$. By definition, for $t \ll t_{\tbox{sdn}}$ it is as if the energy levels were degenerate. Therefore we can say that the AE coincides with the PE, implying that the evolving state (\ref{e_64}) remains approximately unchanged. The QM sudden approximation will be further discussed in Sec.\ref{s_sdn}. \section{Application of perturbation theory} We can use Equation (\ref{e_65}) as a starting point for a standard first-order perturbation theory (FOPT). For short times, such that $P_t(m|m) \sim 1$, the transition probability from level $m$ to level $n$ is determined by the coupling strength $|\mbf{W}_{nm}|^2$, by the energy difference $(E_n{-}E_m)$ and by the correlation function $F(\tau)$. The latter describes the loss of correlation between $\mbf{W}_{nm}(x(0))$ and $\mbf{W}_{nm}(x(t))$. It is defined via \begin{eqnarray} \label{e_xcorr} \left\langle \ \mbf{W}_{nm}^{\star}(t{+}\tau) \ \mbf{W}_{nm}(t) \ \right\rangle \ = \ \ |W_{nm}|^2 \ F(\tau) \end{eqnarray} with the convention $F(0)=1$. Using FOPT one obtains the following result: \begin{eqnarray} \nonumber P_t(n|m) \ &=& \ \left|\int_{0}^{t} \frac{\mbf{W}_{nm}(t')}{\hbar} \ \mbox{e}^{i\frac{E_n{-}E_m}{\hbar}t'} \ dt' \right|^2 \\ \nonumber \ &=& \ \left( \frac{W_{nm}}{\hbar} \right)^2 \int_0^t\int_0^t dt_2 dt_1 \ F(t_2{-}t_1) \ \mbox{e}^{i\frac{E_n{-}E_m}{\hbar}(t_2{-}t_1)} \\ \ &=& \ \label{e_Pnm} t\tilde{F}_t\left(\frac{E_n{-}E_m}{\hbar}\right) \times \left( \frac{W_{nm}}{\hbar} \right)^2 \ \ \ \ \ \ \mbox{for $n\ne m$} \end{eqnarray} The function $\tilde{F}_t(\omega)$ describes the spectral content of the perturbation. For a {\em constant} perturbation ($F(\tau)=1$) it is just given by equation (\ref{e_26}).
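As a sanity check on the first equality in (\ref{e_Pnm}), the constant-perturbation case ($F(\tau)=1$, frozen $\mbf{W}_{nm}$) can be evaluated numerically. The sketch below uses arbitrary illustrative values of $W_{nm}/\hbar$ and $\omega=(E_n{-}E_m)/\hbar$ (not taken from the text) and confirms that the time integral reproduces $t\tilde{F}_t(\omega)=t^2\,\mathrm{sinc}^2(\omega t/2)$, with $\mathrm{sinc}(x)=\sin(x)/x$:

```python
import numpy as np

def fopt_probability(w_over_hbar, omega, t, steps=20001):
    """FOPT transition probability |∫_0^t (W/ħ) e^{iωt'} dt'|^2,
    evaluated by direct numerical integration (trapezoid rule)."""
    tp = np.linspace(0.0, t, steps)
    f = w_over_hbar * np.exp(1j * omega * tp)
    dt = t / (steps - 1)
    amplitude = dt * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])
    return abs(amplitude) ** 2

def fopt_closed_form(w_over_hbar, omega, t):
    """Closed form (W/ħ)^2 * t^2 * sinc^2(ωt/2), with sinc(x) = sin(x)/x."""
    x = omega * t / 2.0
    sinc = np.sin(x) / x if x != 0.0 else 1.0
    return (w_over_hbar ** 2) * (t ** 2) * sinc ** 2

# illustrative dimensionless values
w, omega, t = 0.01, 3.0, 5.0
p_num = fopt_probability(w, omega, t)
p_cf = fopt_closed_form(w, omega, t)
print(p_num, p_cf)  # the two agree to numerical accuracy
```

The same integral, with $F(t_2{-}t_1)$ reinserted, yields the noisy-perturbation modification discussed next.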
For a {\em noisy} perturbation $F(\tau)$ is characterized by some finite correlation time $\tau_c$, and therefore the definition of $\tilde{F}_t(\omega)$ is modified as follows: \begin{eqnarray} \tilde{F}_t(\omega) \ = \ \left\{ \matrix{ t{\cdot}(\mbox{sinc}(\omega t /2))^2 & \mbox{for} & t<\tau_c \cr \tilde{F}(\omega) & \mbox{for} & t>\tau_c} \right. \end{eqnarray} where $\tilde{F}(\omega)$ is the Fourier transform of the correlation function $F(\tau)$. Now it is a simple matter to calculate the {\em second moment} of the spreading: \begin{eqnarray} \label{e_43a} \hspace*{-1cm} \delta E^2 \ = \ \sum_n (E_n{-}E_m)^2 \ P_t(n|m) \ = \ V^2 t \int_{-\infty}^{+\infty}\frac{d\omega}{2\pi} \tilde{C}_{\tbox{E}}(\omega) \ \tilde{F}_t(\omega) \end{eqnarray} This result coincides with the linear-response result (\ref{e_25}) only if the coupling matrix elements can be treated as {\em constant} in time, meaning $F(\tau){=}1$ and accordingly $\tau_c=\infty$. For $t>\tau_c$ it becomes {\em formally equivalent} to the FGR result (\ref{e_32}). However, a {\em practical equivalence} seems unlikely because $F(\tau)$ of (\ref{e_xcorr}) is not necessarily determined by the correlations of the external driving source. The critical discussion of this point is going to be the main issue of the subsequent sections. In order to use (\ref{e_43a}) we should determine what $F(\tau)$ looks like, and in particular we should determine the correlation time $\tau_c$. We postpone this discussion, and assume that $F(\tau)$ and hence $\tau_c$ are known from some calculation. The total transition probability is $p(t)=\sum_n'P(n|m)$, where the prime indicates omission of the term $n=m$. FOPT is valid as long as $p(t)\ll 1$. This defines a breaktime $t_{\tbox{prt}}'$ for the {\em standard} FOPT treatment. The above derivation implies that we can trust (\ref{e_43a}) only during the short time $t\ll t_{\tbox{prt}}'$.
However, later we shall argue that with a proper (modified) definition of $F(\tau)$ we can trust (\ref{e_43a}) during a longer time $t\ll t_{\tbox{prt}}$. The breaktime $t_{\tbox{prt}}$ will be determined by using an {\em improved} perturbation theory (IMPT). It is now possible to formulate the conditions for having {\em restricted} QCC. By `restricted' QCC we mean that only the {\em second moment} of the spreading (\ref{e_43a}) is being considered. It is essential to distinguish between two different possible scenarios: \begin{eqnarray} \mbox{Resonance-limited transitions:} \ \ \ \ \ & \tau_c \gg \tau_{\tbox{cl}} \\ \mbox{Band-limited transitions:} \ \ \ \ \ & \tau_c \ll \tau_{\tbox{cl}} \end{eqnarray} For resonance-limited transitions, a finite $\tau_c$ has no consequence as far as $\delta E^2$ is concerned. The crossover to diffusive behavior $\delta E^2 \propto t$ will happen at $t\sim\tau_{\tbox{cl}}$. This diffusive behavior will persist for $t>\tau_c$ with the same diffusion coefficient. On the other hand, for band-limited transitions we will have at $t\sim\tau_c$ a premature crossover from ballistic to diffusive behavior. Consequently the classical result will be suppressed by a factor $(\tau_c/\tau_{\tbox{cl}}) \ll 1$. This is due to the fact that the transitions between levels are limited not by the resonance width (embodied~by~$\tilde{F}(\omega)$), but rather by the band-width of the coupling matrix elements (embodied~by~$\tilde{C}_{\tbox{E}}(\omega)$). We have realized that the perturbative result (\ref{e_43a}) can be used in order to establish a {\em diffusive} growth of the {\em second moment}.
Obviously the applicability of this picture requires a separation of time scales: \begin{eqnarray} \label{e_FGRc} \mbox{\bf FGR-condition:} \ \ \ \ \mbox{Either} \ \tau_{\tbox{cl}} \ \mbox{or} \ \tau_c \ \ \ \ll \ \ \ t_{\tbox{prt}} \end{eqnarray} The long time {\em stochastic} behavior of the spreading is determined by the short-time dynamics, as explained in Section \ref{s8}. The FGR condition guarantees that the {\em diffusive} growth of the {\em second moment} is established {\em before} the breakdown of the short-time analysis. Therefore, the correct determination of the breaktime $t_{\tbox{prt}}$ is extremely important, and it is going to be the main issue of the subsequent sections. \section{The applicability regime of the standard FOPT treatment} In order to have practical estimates for the applicability regime of the standard FOPT treatment, we should look at the matrix ${\mathbf W}_{nm}$. This matrix is banded, and its elements satisfy: \begin{eqnarray} \left\langle \left|\frac{{\mathbf W}_{nm}}{\hbar}\right|^2 \right\rangle \ \approx \ \left(\frac{V}{\delta x_c^{\tbox{qm}}}\right)^2 \ \frac{1}{(n{-}m)^2} \ \ \ \ \ \mbox{for} \ \ |n{-}m|<b/2 \end{eqnarray} where $\delta x_c^{\tbox{qm}}=\Delta/\sigma$. From the above expression, once used in (\ref{e_Pnm}) for the calculation of the kernel $P(n|m)$, it follows that $\delta x_c^{\tbox{qm}}$ is the parametric change which is required in order to mix neighboring levels. Similarly, in the calculation of $P_t(n|m)$ the related $\tau_c^{\tbox{qm}}=\delta x_c/V$ is the time which is required in order to mix neighboring levels. Given two distant levels $n$ and $m$, and taking the mixing on ``small'' scales into account, one realizes that $\delta x_c^{\tbox{qm}}$ also determines the correlation time $\tau_c=\tau_c^{\tbox{qm}}$ of the matrix element ${\mathbf W}_{nm}(x(t))$, as defined in (\ref{e_xcorr}).
These observations can be summarized as follows: \begin{eqnarray} \label{e_simple} t_{\tbox{prt}}' \ = \ \tau_c \ = \ \tau_c^{\tbox{qm}} \ \ \ \ \ \mbox{for the standard FOPT.} \end{eqnarray} The standard perturbative structure (\ref{e_59}) of either $P(n|m)$ or $P_t(n|m)$ is maintained as long as neighboring levels are not being mixed. This structure obviously does not correspond to the classical structure since it is characterized by the non-classical energy scale $\Delta_b$. Still, there is {\em restricted} QCC which is implied by (\ref{e_43a}). A sufficient condition for the applicability of the {\em standard} FOPT treatment is $v_{\tbox{RMT}} \ll 1$. The argument goes as follows: By definition $v_{\tbox{RMT}} \ll 1$ implies $\tau_{\tbox{cl}} \ll \tau_c^{\tbox{qm}}$. Using (\ref{e_simple}) we observe that it is equivalent to $\tau_{\tbox{cl}} \ll t_{\tbox{prt}}'$. By definition $t_{\tbox{prt}}$ is either equal to or larger than $t_{\tbox{prt}}'$. Therefore the FGR condition (\ref{e_FGRc}) is satisfied. The converse however is not true. Having $v_{\tbox{RMT}} \gg 1$ does not imply that the FGR condition cannot be satisfied. Therefore, for $v_{\tbox{RMT}} \gg 1$, we cannot tell on the basis of the {\em standard} FOPT treatment whether or when there is a crossover to a diffusive behavior. We shall try to overcome this difficulty in the next sections. \section{The over-simplified RMT (ORMT) picture} \label{s_RMT} Recall that the matrix $\mbf{W}_{nm}$ is banded. The bandwidth $b{=}1,3,...$ corresponds to a diagonal matrix, a tridiagonal matrix, and so on. In the spirit of RMT we can think of $\mbf{W}_{nm}$ as a particular realization which is taken from some large ensemble of (banded) random matrices. In order to go beyond standard FOPT it is essential to further specify the {\em cross-correlations between matrix elements}.
Following \cite{wilk2} the simplest statistical assumption is {\em absence} of cross-correlations, namely, \begin{eqnarray} \label{e_crosscorr} \left\langle \ \mbf{W}_{n'm'}^{\star}(t{+}\tau) \ \mbf{W}_{nm}(t) \ \right\rangle \ = \ 0 \ \ \ \ \ \ \mbox{if} \ \ \ \{n',m'\} \ne \{n,m\} \end{eqnarray} Equation (\ref{e_65}) with the statistical assumptions (\ref{e_crosscorr}) and (\ref{e_xcorr}), where $\tau_c=\tau_c^{\tbox{qm}}$, is a well defined RMT model. We shall refer to it as the over-simplified RMT (ORMT) picture. The main observation of \cite{wilk2} can be summarized as follows: The FOPT result (\ref{e_43a}), assuming an ORMT picture, can be trusted for classically long times. Namely, \begin{eqnarray} \label{e_rmtpic} t_{\tbox{prt}}' = t_{\tbox{frc}} \ \ \ \ \ \ \mbox{and} \ \ \ \ \ \ \tau_c \ = \ \tau_c^{\tbox{qm}} \ \ \ \ \ \mbox{for the ORMT picture.} \end{eqnarray} The ORMT picture reduces to FOPT and implies restricted QCC provided $v_{\tbox{RMT}}\ll 1$. However, this condition is {\em not} satisfied in the classical limit ($\hbar\rightarrow 0$). In the classical limit $v_{\tbox{RMT}}\gg 1$, and consequently transitions are band-limited. In particular it follows from (\ref{e_43a}), that the classical diffusion ($D_{\tbox{E}}^{\tbox{cl}}$) is suppressed by a factor $\tau_c/\tau_{\tbox{cl}}$, leading to \begin{eqnarray} \label{e_79} D_{\tbox{E}}^{\tbox{ORMT}} \ \approx \ \frac{1}{v_{\tbox{RMT}}} D_{\tbox{E}}^{\tbox{cl}} \hspace*{2cm} \mbox{for \ $v_{\tbox{RMT}} \gg 1$} \end{eqnarray} This result, which is the main result of \cite{wilk2}, is obviously inconceivable, because it is implied that the classical limit does not coincide with the classical result, and that the correspondence principle is actually violated. The ORMT picture predicts (for $v_{\tbox{RMT}}\gg 1$) a premature crossover from ballistic to diffusive behavior once $\delta x$ becomes larger than $\delta x_c^{\tbox{qm}}$. 
It is important to realize that (\ref{e_79}), if it were true, would reflect a property of PE. This statement becomes more transparent if we write: \begin{eqnarray} \label{e_79a} \delta E^2\Big|_{\tbox{ORMT}} \ \ = \ \ 2D_{\tbox{E}}^{\tbox{ORMT}} \times t \ \ \approx \ \ \left\langle \left(\frac{\partial {\cal H}}{\partial x}\right)^2 \right\rangle_{\tbox{E}} \delta x_c^{\tbox{qm}} \times \delta x \end{eqnarray} Exactly the same result would be obtained if we started with (\ref{e_65}) without the first term on the right-hand side. The value of $V$ has no significance in the above analysis. \section{The core-tail structure} There is no detailed QCC between the classical and the quantal $P_t(n|m)$ for short times. We are going to explain that for a limited time $P_t(n|m)$ consists of a {\em core} whose width will be denoted by $b(t)$, and a {\em tail} whose main component is contained within the bandwidth $b$. In order to analyze this core-tail structure we are going to use an improved perturbation theory (IMPT). The IMPT treatment assumes that out-of-band transitions ($b/2<|n{-}m|$) can be neglected. The IMPT is useful as long as the {\em second moment} of the energy distribution is dominated by the tail component. This determines the breaktime $t_{\tbox{prt}}$. The breakdown of IMPT at $t \sim t_{\tbox{prt}}$ happens once the core spills over the tail region, and the FOPT-like structure of $P_t(n|m)$ is completely washed away. It is important to have a proper intuitive understanding of how the {\em core} is being formed. For \mbox{$t \ll \tau_c^{\tbox{qm}}$} we have $P_t(m|m)\sim 1$ and $P_t(n|m) \ll 1$ for $n \ne m$. It means that the core width is $b(t)=1$. For \mbox{$t \sim \tau_c^{\tbox{qm}}$} a few levels are expected to be mixed by the perturbation, meaning that $b(t)>1$. We may have the tendency to associate this mixing with an avoided crossing. However, this aspect should not be over-emphasized.
The mixing of neighboring levels is {\em not} conditioned by having an exceptionally small energy difference (having $(E_{m{+}1}{-}E_m)\ll\Delta$ is not required). Moreover, one should not over-emphasize the importance of near-neighbor transitions, unless $v_{\tbox{LZ}}\ll 1$. (See further discussion of the QM-adiabatic regime in Sec.\ref{s_LZ}). If near-neighbor transitions were the dominant mechanism for energy spreading, it would imply that $b(t) \approx (t/\tau_c^{\tbox{qm}})$. Rather, we shall argue that the core develops much more rapidly, namely $b(t) \approx (t/\tau_c^{\tbox{qm}})^2$. It is also important to have a proper intuitive understanding of how the {\em tail} is being formed. Let us assume for simplicity that only three levels $m=1$, $m=2$ and $n=100$ are actually coupled by matrix elements. Let us start at $t=0$ with all the probability concentrated in $m=1$. As we go at $t=t_1$ via an avoided crossing of $m=1$ and $m=2$, the matrix element $\mbf{W}_{\! n,\tbox{1}}$ may change sign. This change of sign may be viewed as a loss of correlations. However, at the same time, assuming a diabatic transition, almost all the probability is transferred to $m=2$ in such a way that $\mbf{W}_{\! n,\tbox{2}}a_{\tbox{2}}$ at $t>t_1$ is strongly correlated with $\mbf{W}_{\! n,\tbox{1}}a_{\tbox{1}}$ at $t<t_1$. It is implied that the effective correlation time $\tau_c$ for $m\rightarrow n$ transitions is larger than the time between avoided crossings. This is not captured by the statistical assumption (\ref{e_crosscorr}) of the previous section. A proper transformation, that effectively removes the in-core transitions between $m$-states, can be used for the purpose of tail-formation analysis. The associated perturbative treatment is characterized by an effective $\tau_c \gg \tau_c^{\tbox{qm}}$ as well as by $t_{\tbox{prt}}' \gg \tau_c^{\tbox{qm}}$.
\begin{figure} \begin{center} \leavevmode \epsfysize=1.5in \epsffile{coretail.eps} \end{center} \caption{\protect\footnotesize Schematic illustration of a generic core-tail spreading profile. The core width $b(t)$ is defined by the participation ratio. The second moment should satisfy $b(t) \ll s(t) \ll b$, where $b$ is the bandwidth. In the case of $P_t(n|m)$ the tail becomes (for $t>\tau_{\tbox{cl}}$) resonance-limited rather than band-limited. In the resonance-limited case the bandwidth $b$ in the above figure should be replaced by $(\hbar/t)/\Delta$, and accordingly the requirement is $b(t) \ll s(t) \ll (\hbar/t)/\Delta$. } \label{f_coretail} \end{figure} For $t>\tau_c^{\tbox{qm}}$ neighboring levels are being mixed and consequently the transition kernel acquires a non-trivial core-tail structure which is illustrated in Fig.\ref{f_coretail}. The expression for $P_t(n|m)$ can be written schematically as follows: \begin{eqnarray} \label{e_80} P_t(n|m) \ \approx \ \mbox{Core}(n{-}m) \ + \ \mbox{Tail}(n{-}m) \ \ \ \ \ \mbox{for $t \ll t_{\tbox{prt}}$} \end{eqnarray} The kernel is characterized now by two scales: \begin{eqnarray} \label{e_81} b(t) \ = \ \mbox{\small core width} \ &= \ \left( \sum_n (P_t(n|m))^2 \right)^{-1} \\ \label{e_82} s(t) \ = \ \mbox{\small spreading} \ &= \ \left( \sum_n (n{-}m)^2 \ P_t(n|m) \right)^{1/2} \end{eqnarray} such that $b(t) \ll s(t) \ll b$. For $t<\tau_c^{\tbox{qm}}$ we have a trivial core with $b(t) \approx 1$, whereas for $t \gg \tau_c^{\tbox{qm}}$ we have a non-trivial core with $b(t) \gg 1$. The matrix elements satisfy $\langle |{\mathbf W}_{nm}|^2 \rangle \ \propto \ 1/(n{-}m)^2$. We shall see that in the `band-limited tail' case we have $P_t(n|m) \sim \mbox{const}/(n{-}m)^2$ up to the cutoff $b$, while in the `resonance-limited tail' case we have $P_t(n|m) \sim \mbox{const}/(n{-}m)^2$ up to the cutoff $(\hbar/t)/\Delta$.
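The statement that the $1/(n{-}m)^2$ tail controls $s(t)$ but not $b(t)$ can be illustrated numerically. The toy profile below (a Lorentzian-like core of width $b_0$ with a $1/r^2$ tail truncated at a cutoff $N$) is an ad-hoc stand-in for the actual kernel; it shows that the participation ratio (\ref{e_81}) is insensitive to the cutoff, while the second moment grows linearly with it:

```python
import numpy as np

def core_tail_profile(b0, cutoff):
    """Toy spreading profile P(r) ∝ 1/(b0^2 + r^2) for |r| <= cutoff,
    normalized to unit total probability."""
    r = np.arange(-cutoff, cutoff + 1)
    p = 1.0 / (b0 ** 2 + r.astype(float) ** 2)
    p /= p.sum()
    return r, p

def participation_ratio(p):
    # core width b(t), Eq. (e_81)
    return 1.0 / np.sum(p ** 2)

def second_moment(r, p):
    # s(t)^2, the square of the spread of Eq. (e_82)
    return np.sum(r.astype(float) ** 2 * p)

b0 = 10
r1, p1 = core_tail_profile(b0, 1000)
r2, p2 = core_tail_profile(b0, 10000)
pr1, pr2 = participation_ratio(p1), participation_ratio(p2)
s2_1, s2_2 = second_moment(r1, p1), second_moment(r2, p2)
print(pr1, pr2)     # both close to 2*pi*b0 ~ 63: cutoff-independent core width
print(s2_2 / s2_1)  # ~10: the second moment is set by the tail's cutoff
```

Increasing the cutoff tenfold leaves the participation ratio essentially unchanged while the second moment grows by roughly the same factor of ten, in line with the statement in the text.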
One should realize that the power-law behavior of the tail is `fast' enough in order to guarantee that $b(t)$ is independent of the tail's cutoff. The cutoff does not have any effect on the evolving core. On the other hand, the second moment $s(t)$, unlike $b(t)$, is predominantly determined by the tail's cutoff, and it is independent of the core structure. \section{An improved perturbation theory (IMPT)} \label{s16} In order to extend perturbation theory beyond $\tau_c^{\tbox{qm}}$ it is essential to eliminate the non-perturbative transitions within the core. This can be done by making a transformation to an appropriate basis as follows: \begin{eqnarray} \label{e_83} a_n(t) \ = \ \sum_m \tilde{\mathbf{T}}_{nm} \ c_m(t) \\ \label{e_84} \tilde{\mathbf{T}}_{nm} = {\mathbf{T}}_{nm} \ \ \ \mbox{if $|n{-}m|<b'/2$, else zero.} \end{eqnarray} The amplitudes $c_n(t)$ satisfy the same Schroedinger equation as the $a_n(t)$, with a transformed matrix $\tilde{\mathbf{W}}$. The general expression for $\tilde{\mathbf{W}}$ is \begin{eqnarray} \label{e_85} \tilde{\mbf{W}}= \tilde{\mbf{T}}^{\dagger}\mbf{W}\tilde{\mbf{T}} -i\hbar\tilde{\mbf{T}}^{\dagger}(d\mbf{T}/dt)+ \tilde{\mbf{T}}^{\dagger}\mbf{E}\tilde{\mbf{T}} \end{eqnarray} where $\mbf{E}$ is a diagonal matrix of the energies. This is a quite complicated expression. However, we are interested only in the core-to-tail transitions for which \begin{eqnarray} \label{e_86} (\tilde{\mathbf{W}})_{nm}= (\tilde{\mathbf{T}}^{\dagger}\mathbf{W}\tilde{\mathbf{T}})_{nm} \ \ \ \mbox{for $|n{-}m|>b'$} \end{eqnarray} (no approximation is involved). Once this transformation is performed the `new' Schroedinger equation will be characterized by a new correlation time $\tau_c$ and by a new perturbative time $t_{\tbox{prt}}'$. Both $\tau_c$ and $t_{\tbox{prt}}'$ will depend on the free parameter $b'$. Our choice of the coarse-graining parameter $b'$ is not completely arbitrary.
The restrictions are: \\ \ \\ $\hspace*{2mm} \bullet \ \ $ Unitarity is approximately preserved: \ \ \ \ \ \ \ \ \ $b(t) \ \ll \ b'$. \\ $\hspace*{2mm} \bullet \ \ $ Core-to-tail transitions are preserved: \ \ \ \ \ \ \ \ \ $b' \ \ll \ b$. \\ $\hspace*{2mm} \bullet \ \ $ A long effective correlation time is attained: \ \ \ \ $\tau_{\tbox{cl}} \ \ll \ \tau_c$ \\ \ \\ The feasibility of the last requirement deserves further discussion. Only matrix elements with $|n{-}m|\gg b'$ are of interest, and therefore in the multiplication $(\tilde{\mathbf{T}}^{\dagger})_{n'n} (\mathbf{W})_{nm}(\tilde{\mathbf{T}})_{mm'}$ we can substitute (\ref{e_66}) with $(E_{n}{-}E_{m})$ replaced by $(E_{n'}{-}E_{m'})$. As $b'$ becomes closer to $b$, the matrix elements $(\tilde{\mathbf{W}})_{n'm'}$ become correlated on a time scale of the order of $\tau_c^{\tbox{cl}}$. This is because $b'= b$ implies a transformation to an $x$-independent basis. The time scale $\tau_c^{\tbox{cl}}$ has been defined in Sec.\ref{s_flc}. As we change $b'$ from $b$ back to smaller values, we expect $\tau_c$ to become smaller. By continuity, we expect no difficulty in satisfying the conditions $b'\ll b$ and $\tau_c\gg\tau_{\tbox{cl}}$ simultaneously. The validity of the improved perturbative treatment is further discussed in the next section. The usefulness of the above transformation stems from the fact that, due to the elimination of non-perturbative transitions within the core, $t_{\tbox{prt}}'$ becomes much longer than $\tau_c^{\tbox{qm}}$. At the same time the information which is required in order to determine the second moment $s(t)$ is not lost.
We have $|\tilde{\mathbf{W}}_{nm}| \approx |{\mathbf{W}}_{nm}|$ for core-to-tail transitions, and a practical approximation for the `renormalized' spreading profile is \begin{eqnarray} \label{e_87} \hspace*{-2cm} P_t(n|m) \ \sim \ \delta_{nm} \ + \ t \tilde{F}_t\left( \frac{E_n{-}E_m}{\hbar}\right) \times\left(\frac{1}{\tau_c^{\tbox{qm}}}\right)^2 \ \frac{1}{(b')^2+(n{-}m)^2} \end{eqnarray} Breakdown of the improved perturbative treatment happens once the total transition probability becomes non-negligible (of order 1). Thus \begin{eqnarray} \label{e_88} t_{\tbox{prt}}' \ = \ (b')^{\tbox{1/2}} \times \tau_c^{\tbox{qm}} \end{eqnarray} The behavior for $|n{-}m| \le b'$ is an artifact of the transformation and contains false information. However, for the calculation of the second moment only the tail is significant. The tail is not affected by our transformation and therefore we will obtain the {\em same} result (\ref{e_43a}) for $\delta E^2$ with one important modification: a different effective value for $\tau_c$. Moreover, since $b'$ is chosen such that $\tau_c\gg\tau_{\tbox{cl}}$, it follows that the transitions are resonance-limited and consequently QCC is established also in the domain $v_{\tbox{RMT}}>1$. Obviously, at the same time we should satisfy the condition $\tau_{\tbox{cl}}\ll t_{\tbox{prt}}'$. It is easily verified that the latter condition cannot be satisfied if $v_{\tbox{PR}}>1$. This is not just a technical limitation of the IMPT strategy, but reflects a real difference between two distinct routes towards QCC. This point is further illuminated in the next section. \section{Consequences of the IMPT treatment} \label{s17} The IMPT is capable of giving information about the tail, and hence about the second moment. Given $t$, one wonders how much $b'$ can be `pushed down' without violating the validity conditions of our procedure. It is quite clear that $b'\gg b(t)$ is a necessary condition for {\em not} having a breakdown of perturbation theory.
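The estimate (\ref{e_88}) quoted above follows, up to numerical prefactors, from summing the tail of (\ref{e_87}); in the parametric case ($\tilde{F}_t\mapsto t$) the total transition probability is

```latex
p(t) \;\approx\;
\left(\frac{t}{\tau_c^{\tbox{qm}}}\right)^{2}
\sum_{|n-m|>b'} \frac{1}{(b')^2+(n{-}m)^2}
\;\sim\;
\left(\frac{t}{\tau_c^{\tbox{qm}}}\right)^{2} \frac{1}{b'}
```

and the requirement $p(t)\sim 1$ gives $t_{\tbox{prt}}' \sim \sqrt{b'}\,\tau_c^{\tbox{qm}}$. Equivalently, the condition $b'\gg b(t)$ stated above translates into $t\ll t_{\tbox{prt}}'$.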
If we {\em assume} that the energy-spreading-profile is characterized just by the single parameter $b(t)$, then the condition $b'\gg b(t)$ should be equivalent to $t\ll t_{\tbox{prt}}'$. Hence the following estimate is suggested: \begin{eqnarray} \label{e_89} b(t) \ = \ (t/\tau_c^{\tbox{qm}})^2 \end{eqnarray} We turn now to determine the $\delta x_{\tbox{prt}}$ of the parametric evolution (PE), and then the $t_{\tbox{prt}}$ of the actual evolution (AE). Recall that PE is obtained formally by ignoring the differences $(E_n{-}E_m)$, which implies that we can make in (\ref{e_87}) the replacement $\tilde{F}_t \mapsto t$. Thus the tail of $P(n|m)$ is band-limited and consequently the second moment is \begin{eqnarray} \label{e_90} s(t)^2 \ = \ b \times (1/\tau_c^{\tbox{qm}})^2 \ t^2 \hspace*{2cm} \mbox{[band-limited tail]} \end{eqnarray} in agreement with the classical ballistic result (\ref{e_15}). Our procedure for analyzing the core-tail structure of $P(n|m)$ is meaningful as long as we have $b(t) \ll s(t) \ll b$. This defines an upper time limitation $t_{\tbox{prt}}$ which is related via $\delta x = Vt$ to the following parametric scale: \begin{eqnarray} \label{e_91} \delta x_{\tbox{prt}} \ = \ b^{\tbox{1/2}} \ \delta x_c^{\tbox{qm}} \ = \ \frac{\hbar}{\sqrt{\nu_{\tbox{E}} \tau_{\tbox{cl}}}} \end{eqnarray} At $t=t_{\tbox{prt}}$ we have $b(t) \sim s(t) \sim b$, and we expect a crossover from a ballistic-like spreading to a genuine ballistic spreading. In the perturbative regime the AE departs from the PE once the energy scale $\Delta_b$ is resolved. This happens when $t\sim\tau_{\tbox{cl}}$. The perturbative approach is applicable for the analysis of the crossover at $t\sim\tau_{\tbox{cl}}$ provided $V\tau_{\tbox{cl}}\ll\delta x_{\tbox{prt}}$. This is precisely the condition $v_{\tbox{PR}}\ll 1$. 
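The band-limited ballistic scaling (\ref{e_90}) is easy to illustrate numerically. The sketch below sums the tail of (\ref{e_87}) with the replacement $\tilde{F}_t \mapsto t$ and a cutoff at the bandwidth $|n{-}m| \le b$; all parameter values are arbitrary illustration choices, not taken from the text:

```python
import numpy as np

# Second moment of the band-limited tail of Eq. (87), with F_t -> t
# (parametric evolution):
#   P_t(n|m) ~ (t/tau_c)^2 / ((b')^2 + (n-m)^2),   |n-m| <= b.
# All parameter values are arbitrary illustration choices.
def second_moment(t, tau_c=1.0, b_prime=5, b=200):
    r = np.arange(1, b + 1)                       # |n - m|, within the band
    tail = (t / tau_c) ** 2 / (b_prime ** 2 + r ** 2)
    return 2.0 * np.sum(r ** 2 * tail)            # both tails, n > m and n < m

# Doubling t quadruples s(t)^2: the ballistic-like growth of Eq. (90).
print(second_moment(0.02) / second_moment(0.01))  # -> 4.0
```

One can also check that the result grows approximately linearly with the cutoff $b$ (for $b \gg b'$), as in (\ref{e_90}).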
For $t\gg\tau_{\tbox{cl}}$ the tail becomes resonance-limited ($|n{-}m|<(\hbar/t)/\Delta$) rather than band-limited ($|n{-}m|<b$) and we obtain: \begin{eqnarray} \label{e_92} s(t)^2 \ = \ (1/\tau_c^{\tbox{qm}})^2 \ t_{\tbox{H}} \ t \hspace*{2cm} \mbox{[resonance-limited tail]} \end{eqnarray} in agreement with the classical diffusive result (\ref{e_16}). Our procedure for analyzing the core-tail structure of $P_t(n|m)$ is meaningful as long as we have $b(t) \ll s(t) \ll b$. This defines a {\em modified} upper time limitation \begin{eqnarray} \label{e_93} t_{\tbox{prt}} \ = \ (\tau_c^{\tbox{qm}})^{\tbox{2/3}} \ t_{\tbox{H}}^{\tbox{1/3}} \ = \ \left( \frac{\hbar^2} {\nu_{\tbox{E}}V^2}\right)^{1/3} \ \ \ \ \ \mbox{[applies to $v_{\tbox{PR}}\ll 1$]} \end{eqnarray} At $t=t_{\tbox{prt}}$ we have $b(t) \sim s(t) \ll b$, and we expect a crossover from a diffusive-like spreading to a genuine diffusive spreading. \section{Validity of the IMPT picture} It is important to have a clear understanding of the difference between the IMPT picture and the ORMT picture. In both cases we can argue that $P(n|m)$ has, for short times, a core-tail structure. The fundamental difference is the assumption concerning the effective $\tau_c$ for core-to-tail transitions. The ORMT picture assumes that effectively $\tau_c=\tau_c^{\tbox{qm}}$, and equivalently that the tails grow like $\delta x$. This is the reason for the premature crossover to diffusive growth (\ref{e_79}) of the second moment. The IMPT picture assumes that the effective $\tau_c$ is scale-dependent, and that the tails grow predominantly like $\delta x^2$. In other words, the tails grow as if we were still in the regime of FOPT. Therefore, from a practical point of view, all we have to do in order to establish the validity of the IMPT picture is to verify that the tails indeed grow in a ballistic-like fashion ($\propto \delta x^2$), and not in a diffusive-like fashion ($\propto \delta x$).
A related observation is that the core-width $b(t)$ also grows like $\delta x^2$. The argumentation (Sec. \ref{s16}) in favor of the IMPT picture is not mathematically rigorous. It is therefore important to study specific examples. The obvious example to begin with was defined by Wigner forty years ago \cite{wigner,casati,flamb}. Let ${\cal H} = \mbf{E} + x\mbf{B}$ where $\mbf{E}$ is a diagonal matrix and $\mbf{B}$ is a banded random matrix. The IMPT picture should apply to the analysis of the PE of this Wigner model. Indeed, it is well known that $P(n|m)$ for Wigner's model is a Lorentzian, and we may view this Lorentzian as a special case of a core-tail structure. The width of Wigner's Lorentzian, in energy units, is $\Gamma=2\pi ((x{\cdot}\sigma)/\Delta)^2\times\Delta$. Thus we have indeed $\propto \delta x^2$ behavior for both the core and the tails. If the ORMT picture were true, we would expect to get a $\propto \delta x$ dependence. All the results of the previous sections are consistent with the established results of Wigner. Having established the IMPT picture for PE, and observing that going from PE to AE involves no additional assumptions, it follows that we can safely proceed with the analysis as in Sec.\ref{s17}. The validity of the IMPT has also been verified numerically for the PE of the `piston' example \cite{prm}, and for the PE of a 2D nonlinear oscillator \cite{lds}. It has been verified that the tails indeed grow in a ballistic-like fashion ($\propto \delta x^2$), and not in a diffusive-like fashion ($\propto \delta x$). Obviously, the assumption of having a structure-less {\em core} does not universally apply, and having $b(t)\propto\delta x^2$ is also quite a fragile result. If we want to have a better idea about the core structure we should apply, in any specific example, dedicated (non-perturbative) considerations.
In the case of the `piston' example we can use semiclassical considerations \cite{wls} in order to argue that the core has a Lorentzian shape whose width is $\hbar/\tau_{\tbox{col}}$. This semiclassical Lorentzian has nothing to do with Wigner's Lorentzian. The semiclassical Lorentzian is a purely non-perturbative structure. This structure is exposed provided $(\hbar/\tau_{\tbox{col}})/\Delta \ll b(t)$, leading to the condition $\delta x \gg \lambda_{\tbox{E}}$. Otherwise we have a structure-less core whose width is characterized by the single parameter $b(t)$ of (\ref{e_89}). \section{The quantum mechanical sudden approximation} \label{s_sdn} \begin{table}[h] \begin{center} \leavevmode \fbox{\mpg{12}{ \begin{eqnarray} \mbox{perturbative route ($v_{\tbox{PR}} \ll 1$):} \hspace*{-2cm} & \ \nonumber\\ t_{\tbox{sdn}} = \tau_{\tbox{cl}} \ll t_{\tbox{prt}} & \ \nonumber\\ \mbox{At} \ \ t = \tau_{\tbox{cl}} & b(t) \ll s(t) \ll b \sim (\hbar/t)/\Delta \nonumber\\ \mbox{At} \ \ t = t_{\tbox{prt}} & b(t) \sim s(t) \sim (\hbar/t)/\Delta \ll b \nonumber\\ \ & \ & \ \nonumber\\ \mbox{Non-perturbative route ($v_{\tbox{PR}} \gg 1$):} \hspace*{-2cm} & \ \nonumber\\ t_{\tbox{prt}} \ll t_{\tbox{sdn}} \ll \tau_{\tbox{cl}} & \ \nonumber\\ \mbox{At} \ \ t = t_{\tbox{prt}} & b(t) \sim s(t) \sim b \ll (\hbar/t)/\Delta \nonumber\\ \mbox{At} \ \ t = t_{\tbox{sdn}} & b \ll s(t) \sim (\hbar/t)/\Delta \nonumber\\ \mbox{At} \ \ t = \tau_{\tbox{cl}} & b \sim (\hbar/t)/\Delta \ll s(t) \nonumber \end{eqnarray} \vspace*{0.1mm} }} \end{center} \caption{\protect\footnotesize Various time scales in the route to stochastic behavior.} \end{table} It is now appropriate to discuss the QM sudden approximation. For the perturbative scenario ($v_{\tbox{PR}}\ll 1$) we have already mentioned that the AE departs from the PE at $t_{\tbox{sdn}}=\tau_{\tbox{cl}}$, which is the time to resolve the energy scale $\Delta_b$.
In the case of the non-perturbative scenario ($v_{\tbox{PR}}\gg 1$) there will be an {\em earlier breakdown} of the QM sudden approximation. This is because we have $\tau_{\tbox{cl}} \gg t_{\tbox{prt}}$ and consequently at $t=\tau_{\tbox{cl}}$ we already have $s(t)\gg b$. Therefore $t_{\tbox{sdn}}$ should be defined as the time to resolve the energy scale which is associated with $s(t)$. This leads to \begin{eqnarray} \label{e_94} t_{\tbox{sdn}} \ = \ b^{\tbox{1/4}} (\tau_c^{\tbox{qm}} \tau_{\tbox{cl}})^{\tbox{1/2}} \ = \ \left( \frac{\hbar^2 \tau_{\tbox{cl}}} {\nu_{\tbox{E}}V^2}\right)^{1/4} \hspace*{1cm} \mbox{for $v_{\tbox{PR}}\gg 1$} \end{eqnarray} The various time scales are summarized in Table 4. The non-perturbative crossover from genuine-ballistic to genuine-diffusive behavior is not trivial. If $v_{\tbox{SC}} \gg 1$ we can rely on semiclassical considerations in order to establish the existence of this crossover. More generally, for $v_{\tbox{PR}} \gg 1$, we would like to have an appropriate effective RMT model. This effective RMT model should support genuine-ballistic motion with an elastic scattering time~$\tau_{\tbox{cl}}$. \section{The quantum mechanical adiabatic approximation} \label{s_LZ} The previous analysis has emphasized the role of core-to-tail transitions in energy spreading. An implicit assumption was that these transitions are not suppressed by recurrences. This is not true in the QM adiabatic regime ($v_{\tbox{LZ}}\ll 1$). Following \cite{wilk1} it is argued that energy spreading in the latter regime is dominated (eventually) by Landau-Zener transitions between near-neighbor levels. For completeness, the present section is devoted to the clarification of this observation. As a preliminary exercise it is interesting to estimate the contribution of transitions between near-neighbor levels. The time scale that characterizes these transitions is $\tau_c^{\tbox{qm}}$, and the `step' size is $\Delta$.
Disregarding all other transitions, we have a random-walk process with diffusion coefficient $(\Delta)^2/\tau_c^{\tbox{qm}}$, leading to \begin{eqnarray} \label{e_95} D_{\tbox{E}}^{\tbox{NN}} \ \sim \ \frac{1}{v_{\tbox{LZ}}} D_{\tbox{E}}^{\tbox{cl}} \hspace*{2cm} \mbox{[not applicable]} \end{eqnarray} Thus for $v_{\tbox{LZ}} \gg 1$ the contribution of near-neighbor (NN) transitions is indeed negligible, as expected. In the QM adiabatic regime ($v_{\tbox{LZ}}\ll 1$) the above result should be modified as follows \cite{wilk1}: \begin{eqnarray} \label{e_96} D_{\tbox{E}}^{\tbox{LZ}} \ \approx \ \left(\frac{1}{v_{\tbox{LZ}}}\right)^{1{-}(\beta/2)} D_{\tbox{E}}^{\tbox{cl}} \hspace*{2cm} \mbox{for \ $v_{\tbox{LZ}}\ll 1$} \end{eqnarray} This result takes into account the non-trivial nature of Landau-Zener transitions and the statistics of the avoided crossings. One should use $\beta=1$ for the Gaussian unitary ensemble (GUE) and $\beta=2$ for the Gaussian orthogonal ensemble (GOE). Recalling the stochastic considerations that led to (\ref{e_95}), one deduces that the perturbative breaktime is \begin{eqnarray} \hspace*{-1cm} t_{\tbox{prt}} \ = \ \left(\frac{1}{v_{\tbox{LZ}}}\right)^{\beta/2} \tau_c^{\tbox{qm}} \ \propto \ V^{-(1+(\beta/2))} \hspace*{1cm} \mbox{[applies to \ $v_{\tbox{LZ}}\ll 1$]} \end{eqnarray} In the QM adiabatic regime energy spreading is dominated by near-neighbor level transitions for two distinct reasons. The first reason applies to the $\beta=1$ case, namely $D_{\tbox{E}}^{\tbox{LZ}} \gg D_{\tbox{E}}^{\tbox{cl}}$. The other reason is that $D_{\tbox{E}}^{\tbox{cl}} \gg D_{\tbox{E}}^{\tbox{FGR}}$. In the latter inequality, $D_{\tbox{E}}^{\tbox{FGR}}$ is based on the FGR result (\ref{e_32}). The FGR result becomes very small, compared with the classical result, once $\tilde{F}(\omega)$ becomes much narrower than the average level spacing.
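The stochastic estimate underlying (\ref{e_95}) is simply the diffusion coefficient of a random walk with step $\Delta$, taken once per correlation time $\tau_c^{\tbox{qm}}$. A minimal numerical illustration (all parameter values are arbitrary choices):

```python
import numpy as np

# Near-neighbor random walk: step size Delta, one step per waiting time
# tau_c.  The spread obeys <E^2> = (Delta^2/tau_c) * t, i.e. a diffusion
# coefficient of order Delta^2/tau_c, as in Eq. (95).
# Parameter values are arbitrary illustration choices.
rng = np.random.default_rng(0)
Delta, tau_c = 1.0, 1.0
n_steps, n_walkers = 400, 20000

steps = rng.choice([-Delta, Delta], size=(n_walkers, n_steps))
E = np.sum(steps, axis=1)           # energy displacement after n_steps steps
t = n_steps * tau_c
print(np.mean(E ** 2) / t)          # close to Delta**2 / tau_c = 1.0
```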
The QM-adiabaticity condition $t_{\tbox{H}} \ll \tau_c^{\tbox{qm}}$ means that individual energy levels are being resolved before the breakdown of first-order perturbation theory. Having no `systematic' transitions to `other' levels implies that the energy distribution remains localized in the initial level for a very long time. The above argumentation implies that $D_{\tbox{E}}^{\tbox{LZ}} \gg D_{\tbox{E}}^{\tbox{FGR}}$, meaning that for extremely slow velocities the energy spreading, and the eventual breakdown of the QM adiabatic approximation, is predominantly due to the Landau-Zener mechanism. \section{Open questions and future directions} The purpose of this paper was to make the first steps towards a theory of energy spreading and quantum dissipation. In particular we wanted to demonstrate that perturbation theory and semiclassical theory have different regimes of validity. There are still many open questions to be answered. An important issue is the specification of the general conditions for having genuine stochastic behavior in the QM case. For fast velocities it is suggested (but not proved) that the stochastic behavior persists beyond the semiclassical breaktime. For slow velocities, it is suggested (but again not proved) that the stochastic behavior persists beyond the breaktime of perturbation theory. The latter suggestion is indirectly supported by common wisdom and by various numerical experiments with banded matrices \cite{kottos,wilk2,wbr}. For the generic RMT picture, which is still lacking, it is implied that both the breakdown of perturbation theory and the resolving of the bandwidth of first-order transitions are necessary conditions for having genuine stochastic behavior. In any case, stochasticity {\em can be established} if we assume irregular aperiodic driving with an appropriate correlation scale. For periodic driving, further considerations are required in order to analyze the possible manifestation of localization effects.
A better understanding of the core-tail structure is required. Only in the case of Wigner's model \cite{wigner,flamb,casati} do we have an established result, namely that the core-tail structure is simply a Lorentzian. For real systems the core-tail structure is not necessarily a Lorentzian \cite{wls}. The determination of the border between the core and the tail may be problematic. One cannot exclude the existence of a distinct tail component, in the vicinity of the core, that does not grow like $\delta x^2$. A closely related issue is to get an analytical understanding of the $b'$ dependence of the effective (`renormalized') correlation time $\tau_c$. The `piston' example is non-generic in many respects. There are three classical length scales: the penetration distance upon collision with the piston; the mean path-length between collisions with the piston; and the ballistic length scale that characterizes the volume of the cavity. Quantum mechanics adds two additional length scales: one is related to the Airy structure in the vicinity of the turning points, and the other is the de Broglie wavelength. Having all these scales has some non-universal consequences \cite{wls} that we have not considered in this paper. The application of the general theory of sections 8-20 to the `piston' example is quite straightforward, but these non-universal features should be taken into account. In the generic theory there are only two parametric scales: the displacement $\delta x_c^{\tbox{qm}}$ that is needed in order to mix neighboring levels, and the displacement $\delta x_{\tbox{prt}}$ that is needed in order to mix the core with the tail. The former is much smaller than the de Broglie wavelength, and the latter is much larger than the de Broglie wavelength. It turns out that in the `piston' example there is a third, non-universal parametric scale $\delta x_{\tbox{NU}}$ that is roughly equal to the de Broglie wavelength \cite{wls}.
Consequently, the perturbative (slow-velocity) regime is further divided into a universal slow-velocity regime and a non-universal slow-velocity regime. Of particular importance is the understanding of the {\em hard-wall limit}. In the generic theory $\tau_{\tbox{cl}}$ determines the bandwidth $\Delta_b$ of the matrix $\mbf{W}_{nm}$. Having a {\em finite bandwidth} is essential in order to understand that there is a crossover to a {\em non-perturbative} regime in the $\hbar\rightarrow 0$ limit. We cannot treat $\mbf{W}_{nm}$ as a banded matrix if we take first the limit $\tau_{\tbox{cl}}\rightarrow 0$. The walls of the `piston' should be regarded as `hard' once $\Delta_b$ becomes equal to or larger than $E$. This is equivalent to having a (classical) penetration distance smaller than the de Broglie wavelength. The consequence of taking the hard-wall limit is that the non-perturbative regime ($v_{\tbox{PR}}>1$) disappears. This state of affairs is possibly responsible for the {\em illusion} that a {\em general} theory for quantum dissipation can be based on a perturbative approach. At first sight it looks strange that hard walls are `better' for perturbation theory. It looks even more strange that for hard walls we cannot apply the semiclassical theory. In order to make the latter observation less strange, recall that solving the one-dimensional Schroedinger equation near a sharp step, and then taking $\hbar\rightarrow 0$, never corresponds to the WKB approximation. The problem of quantum dissipation, in the sense of this paper, is a preliminary stage in the construction of a theory \cite{vrn} of quantal Brownian motion (QBM). In the classical case it is known \cite{jar} that the motion of a `heavy' particle that is coupled to {\em chaotic} degrees of freedom is quite generally described by the classical Langevin equation. The effect of the environment is represented by a friction force plus a noise term.
The friction leads to dissipation of energy, and the noise term is essential for having diffusion. Furthermore, the friction coefficient is related to the noise intensity via the universal ${\cal F\!D}$ relation. The fact that there is no general theory for quantum dissipation, and a fortiori no general theory of QBM, has not been universally recognized in the literature. It is true that there is a vast literature that comes under those headings, but this literature is commonly based on an {\em effective-bath approach}. In previous studies \cite{dld,qbm,dph} the common effective-bath strategy has been applied in order to develop a universal description of QBM and dephasing. Another possibility is to use an effective RMT bath \cite{rmt}. The results of the latter study agree with \cite{dld,qbm}. A future theory of QBM should clarify whether effective-bath methods universally apply. \ack{ I thank {\em Eric Heller} for useful and stimulating discussions. I also thank {\em Shmuel Fishman} for fruitful interaction in early stages, for his comments on an earlier version of this paper, and for his generous hospitality while visiting the Technion. The ITAMP in the Harvard-Smithsonian Center is acknowledged for support, and the MPI f\"ur Komplexer Systeme in Dresden is acknowledged for support and for the kind hospitality during the workshop and conference {\it Dynamics of Complex Systems}. I thank {\em Felix Izrailev} for making me aware of the fact that Wigner's Lorentzian is the obvious and simplest example of a core-tail structure, and for drawing my attention to the strong relation between the QCC considerations in this paper and the classical approximation of \cite{felix2}.} \newpage
0804.4168
\section{Acknowledgements} We thank the staff of the Collider-Accelerator and Physics Departments at BNL for their vital contributions. We acknowledge support from the Department of Energy and NSF (USA), MEXT and JSPS (Japan), CNPq and FAPESP (Brazil), NSFC (China), MSMT (Czech Republic), IN2P3/CNRS, and CEA (France), BMBF, DAAD, and AvH (Germany), OTKA (Hungary), DAE (India), ISF (Israel), NRF (Korea), MES, RAS, and FAAE (Russia), VR and KAW (Sweden), U.S. CRDF for the FSU, US-Hungarian NSF-OTKA-MTA, and US-Israel BSF. \def{Int. J. Mod. Phys.}~{\bf A}{{Int. J. Mod. Phys.}~{\bf A}} \def{J. Phys}~{\bf G}{{J. Phys}~{\bf G}} \defNuovo Cimento{Nuovo Cimento} \defNucl. Instrum. Methods{Nucl. Instrum. Methods} \def{Nucl. Instrum. Methods}~{\bf A}{{Nucl. Instrum. Methods}~{\bf A}} \def{Nucl. Phys.}~{\bf A}{{Nucl. Phys.}~{\bf A}} \def{Nucl. Phys.}~{\bf B}{{Nucl. Phys.}~{\bf B}} \defPhys. Lett. B{Phys. Lett. B} \defPhys. Repts.\ {Phys. Repts.\ } \defPhys. Rev. Lett.\ {Phys. Rev. Lett.\ } \defPhys. Rev. D{Phys. Rev. D} \defPhys. Rev. C{Phys. Rev. C} \def{Z. Phys.}~{\bf C}{{Z. Phys.}~{\bf C}} \def{\it et al.}{{\it et al.}}
1203.2682
\section{Deriving Stokes' Law} To guide the students' derivation of Stokes' Law, we followed a classroom model called Modeling Discourse Management (MDM)~\cite{Desbien2002}, and incorporated the method Think-Pair-Share \cite{Lyman1981}. Students worked in groups of four, using a small whiteboard to record their work and share ideas with the class. MDM prescribes that if the class is missing a critical or interesting idea in their small group discussions, teachers should attempt to ``seed'' discussions by asking simpler, related questions that could serve as stepping stones to the main result. We introduced the concept of viscosity by asking students to describe what happens to a thick fluid between two parallel plates separated by distance $\Delta y$ when one plate moves at constant speed $\Delta v$ relative to the other, fixed plate. An example of a question we used to seed discussion in one small group was, ``What is the fluid right next to the plate doing?'' Students correctly intuited a flow pattern with a position-dependent velocity that increases smoothly from zero at the stationary plate to $\Delta v$ at the moving plate, a pattern commonly referred to as \emph{Couette flow}~\cite{Bat67}, and that there must be forces exerted both between neighboring layers of fluid and at the plate-fluid interface. Additionally, students guessed that processes at the molecular level must be responsible for this ``sticking'' or ``dragging''. We took the standard approach of lumping all of the microscopic details into one macroscopic quantity, the viscosity $\eta$. We asked students to identify the parameters on which the force exerted on the top plate depends, and to articulate how the force depends on those factors. Students identified four relevant dependencies, namely: $F$ increases with increasing $\eta$, $\Delta v$, and plate area $A$, and $F$ decreases with increasing $\Delta y$. 
We then asked students to construct a quantitative expression for the drag force on one fluid layer due to a neighboring layer using dimensional analysis, a technique that has been used to study other aspects of fluid flow~\cite{Guerra2011}. Given that viscosity has units of Pa$\cdot$s, students successfully constructed the only simple expression that satisfies the above dependencies: \begin{equation} \label{eq:simpleDrag} F = \eta A \Delta v/ \Delta y, \end{equation} which reduces to Newton's law of viscosity in the limit $\Delta v/\Delta y\rightarrow dv/dy$~\cite{Bat67}. Next, we asked the students to use (\ref{eq:simpleDrag}) to make a prediction about the drag force on a ball of radius $R$ moving at velocity $v$ through an otherwise stationary fluid that is infinite in extent. Students made the following approximations (assisted by seed questions as necessary): the area over which the force is applied is approximately the surface area of the ball, so $A\approx 4\pi R^2$; because the fluid far from the ball is stationary whereas the fluid near the ball moves with speed $v$, the change in velocity is $\Delta v = v$; and, since $R$ is the only available length scale, the characteristic distance over which the change in velocity occurs is $\Delta y \approx R$. These approximations yield \begin{equation} \label{simpleStokes} F \approx 4 \pi R \eta v, \end{equation} consistent with Stokes' Law. Because determination of $C$ in (\ref{eq:stokes}) was beyond the scope of our goals, we focused on the scaling of $F$ with $v$ rather than the absolute magnitude of the force. 
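The scaling content of (\ref{simpleStokes}) can be made concrete in a few lines of code. The radius and viscosity below are illustrative assumptions (order-of-magnitude values for a ping-pong ball in corn syrup), not measured quantities:

```python
import math

def stokes_drag(R, eta, v):
    """Dimensional-analysis drag estimate F ~ 4*pi*R*eta*v (SI units -> N)."""
    return 4 * math.pi * R * eta * v

R = 0.02      # ball radius in m (assumed)
eta = 3.0     # viscosity in Pa*s (assumed, corn-syrup order of magnitude)

# The estimate is linear in v: doubling the speed doubles the drag force.
print(stokes_drag(R, eta, 0.10) / stokes_drag(R, eta, 0.05))  # -> 2.0
```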
\begin{figure}\center \includegraphics[width=\columnwidth]{fig1.PDF} \caption{\label{fig:apparatus} Schematic of apparatus used to measure the velocity dependence of fluid drag forces.} \end{figure} \begin{figure}\center \includegraphics[width=\columnwidth]{fig2.pdf} \caption{\label{fig:students} Student experimentalists measuring drag forces in oobleck.} \end{figure} \section{Apparatus and procedure} The apparatus, shown in Fig.~\ref{fig:apparatus}, consists of a ping-pong ball submerged in a fluid-filled, acrylic trough (60~cm long, 8~cm wide, and 10~cm tall), with a wire that connects the ball to a counterweight of mass $m$ via a crossbeam and pulley. When the weight is released from rest, the ball accelerates to a constant (terminal) velocity $v$, at which point the counterweight exactly balances the drag force experienced by the ball: $F=mg$, where $g=9.8$~m/s$^2$ is the standard gravity on Earth. Because the ball and counterweight are attached by a taut wire, they travel at the same speed. Thus $F$ and $v$ can be determined by measuring the mass and speed of the counterweight. The counterweight was a 3~g cup filled with up to 60~mL water, which allowed for an easily scalable system. Three holes were drilled in the ball: a small hole of diameter 3~mm and two larger ones of 6~mm. The purpose of the holes was twofold: first, to facilitate attachment of the ball to the wire; and second, to allow the ball to fill with fluid in order to prevent it from floating or sinking. The ball was attached to the wire by threading the wire through the small hole and tying it to a plastic bead 5~mm in diameter. Though only one large hole was needed to thread the bead, the second hole greatly reduced the time needed to fill the ball with the ambient fluid. Although we, the teachers, designed and built the apparatus, students developed their own measurement procedure. The class reached consensus on the procedure, and all groups followed the same steps.
To measure $v$, students marked 4 successive 10-cm intervals on the wire with flags of tape (Fig.~\ref{fig:students}), and timed them with a stopwatch as they passed the bottom edge of the trough. For a given counterweight mass, the experiment was repeated 4 times. In all their experiments, students allowed the ball to travel an initial distance of 10~cm before measuring its speed to ensure that it had reached terminal velocity. This distance was determined empirically. \begin{figure}[t]\center \includegraphics[width=\columnwidth]{fig3.pdf} \caption{\label{fig:data} Weighted class average of student measurements of ball speed $v$ as a function of drag force $F$ for oobleck (blue circles) and corn syrup (red triangles). The function $v(F)=\alpha F$ was fit to each data set. Fitted curves are plotted as dashed blue and solid red lines for oobleck ($\alpha = 0.2$~s/kg) and syrup ($\alpha=0.04$~s/kg), respectively. The nonlinear dependence of $v$ on $F$ in oobleck indicates that it is a non-Newtonian fluid.} \end{figure} \begin{figure}[t]\center \includegraphics[width=\columnwidth]{fig4.pdf} \caption{\label{fig:microscope} Microscopic image of oobleck. The best images were obtained by looking near the edges of a droplet or through a very thin sample.} \end{figure} \section{Results and discussion} The students fit the $v(F) = \alpha F$ model to their data using Microsoft Excel, and the corresponding correlation coefficient $R^2$ was used to determine whether or not a fluid was Newtonian: $R^2 \approx 1$ for Newtonian fluids, and $R^2\not\approx 1$ indicates non-Newtonian behavior. For Newtonian fluids, the viscosity could in principle be determined from the slope $\alpha$ using Stokes' Law. To do so, the constant $C$ appearing in (\ref{eq:stokes}) must be determined for the particular apparatus~\cite{Dolz2005,Ambari1985}. 
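The analysis step can be sketched as follows. This is not the students' spreadsheet; it is a minimal Python stand-in with synthetic data, using a least-squares fit of $v(F)=\alpha F$ through the origin and the usual $R^2$ statistic (which, for a through-origin fit, can even be negative for strongly nonlinear data, as in the oobleck result reported below):

```python
import numpy as np

# Fit v(F) = alpha*F through the origin and report R^2.
# The data below are synthetic illustrations, not class measurements.
def fit_through_origin(F, v):
    alpha = np.sum(F * v) / np.sum(F ** 2)      # least-squares slope
    ss_res = np.sum((v - alpha * F) ** 2)
    ss_tot = np.sum((v - np.mean(v)) ** 2)
    return alpha, 1.0 - ss_res / ss_tot

F = np.linspace(0.05, 0.6, 8)                   # drag force, N
v_newtonian = 0.04 * F                          # linear response (syrup-like)
v_shear_thickening = 0.5 * F ** 3               # nonlinear (oobleck-like)

_, r2_lin = fit_through_origin(F, v_newtonian)
_, r2_nonlin = fit_through_origin(F, v_shear_thickening)
print(r2_lin)      # -> 1.0 for exactly linear synthetic data
print(r2_nonlin)   # noticeably below 1
```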
However, the goal of our experiment was to test the prediction of a linear relationship between $v$ and $F$, thereby discriminating between Newtonian and non-Newtonian fluids. For this purpose, it is sufficient to determine whether the data is linear. The results of the students' experiments, shown in Fig.~\ref{fig:data}, indicate that syrup is Newtonian ($R^2=0.99$) whereas oobleck ($R^2=-1.56$) is not. In a subsequent class, we encouraged students to think about microscopic differences between oobleck and corn syrup that could result in their macroscopically different behavior. After forming hypotheses of what they expected to see, students examined a thin sample of oobleck under a microscope. They observed that corn starch crystals were not dissolved but suspended in the water, as can be seen in Fig.~\ref{fig:microscope}. Packing and jamming of the crystals, two mechanisms for the shear-thickening nature of suspensions~\cite{Cheng2011}, were observed by agitating the sample. In conclusion, students developed a conceptual and formal understanding of fluid viscosity, measured viscous forces in various fluids, and explored the microscopic origins of oobleck's non-Newtonian behavior. In addition to identifying non-Newtonian fluids, the apparatus we designed has other potential uses, \emph{e.g.}, determination of the velocity dependence of turbulent drag forces or viscometry. The pedagogical and experimental methods described here are appropriate for advanced high school and undergraduate students. \acknowledgments The authors acknowledge helpful discussions with B.~Albanna, J.~Corbo, D.~Edelberg, A.~Little, and G.~Quan. This work was supported by the Departments of Physics, Astronomy, and Earth and Planetary Sciences at UC Berkeley, the Center for Integrative Planetary Science, and private donations. DRDF, JL, and AMZ were supported by the National Science Foundation under grants PHY-1068875, DGE-1106400, and EEC-0832819, respectively.
NR was supported by the Department of Energy Office of Science Graduate Fellowship Program, made possible by the American Recovery and Reinvestment Act of 2009, administered by ORISE-ORAU under contract DE-AC05-06OR23100.
1203.2972
\section{Introduction} \label{s:intro} \setcounter{equation}{0} The contact process with parameter $\lambda > 0$ on a graph $G = (V, E)$ is a continuous-time Markov process $(\xi_t)_{t \geq 0}$ with state space $\{0,1\}^V$ and generator \begin{equation} \label{eq1gen} \Omega f(\xi) = \sum_{x \in V} \left(f(\phi_x\xi) - f(\xi) \right) + \lambda \cdot \sum_{e\in E} \left(f(\phi_e\xi) - f(\xi) \right), \end{equation} where $f$ is any local function on $\{0,1\}^V$ and, given $x \in V$ and $\{y, z\} \in E,$ we define $\phi_x\xi,\; \phi_{\{y,z\}}\xi \in \{0,1\}^V$ by $$\phi_x\xi(w) = \left|\begin{array}{ll}0 &\text{if } w = x;\\\xi(w)&\text{otherwise;} \end{array}\right.\qquad \phi_{\{y,z\}}\xi(w) = \left|\begin{array}{ll}\max(\xi(y),\xi(z))&\text{if } w \in \{y,z\};\\\xi(w) &\text{otherwise.} \end{array} \right.$$ Given $A \subseteq V$, we write $(\xi^A_t)_{t \ge 0}$ to denote the contact process started from the initial configuration that is equal to 1 at vertices of $A$ and 0 at other vertices. When we write $(\xi_t)$, with no superscript, the initial configuration will either be clear from the context or unimportant. We often abuse notation and associate configurations $\xi \in \{0,1\}^V$ with the corresponding sets $\{x\in V: \xi(x) = 1\}$. The contact process is a model for the spread of an infection in a population. Vertices of the graph (sometimes referred to as \textit{sites}) represent individuals. In a configuration $\xi \in \{0,1\}^V$, individuals in state 1 are said to be \textit{infected}, and individuals in state 0 are \textit{healthy}. Pairs of individuals that are connected by edges in the graph are in proximity to each other in the population. The generator (\ref{eq1gen}) gives two types of transition for the dynamics. First, infected individuals \textit{heal} with rate 1. 
Second, given two individuals in proximity so that one is infected and the other is not, with rate $\lambda$ there occurs a \textit{transmission}, as a consequence of which both individuals end up infected. The configuration $\underline{0} \in\{0,1\}^V$ that is equal to zero at all vertices is a trap for $(\xi_t)$. For certain choices of the underlying graph $G$ and the parameter $\lambda$, it may be the case that the probability of the event $\{\underline{0} \text{ is never reached}\}$ is positive even if the process starts from finitely many infected sites. In fact, whether or not this probability is positive does not depend on the set of initially infected sites, as long as this set is nonempty and finite. We say that the process \textit{survives} if this probability is positive; otherwise we say that the process \textit{dies out}. In order to be able to motivate and state our results, we will now list some of the properties of the contact process for certain choices of the graph $G$, namely: the lattice $\mathbb{Z}^d$, $d$-regular infinite trees and the finite counterparts of these graphs. For proofs of these properties and a detailed treatment of the topic, we refer the reader to \cite{lig85,lig99}. Let us start with ${\mathbb Z}^d$, the $d$-dimensional integer lattice endowed with edge set $\{\{x,y\}: \|x-y\| = 1\}$, where $\|\cdot\|$ denotes Euclidean distance. In this case, there exists a number $\lambda_c = \lambda_c({\mathbb Z}^d)$ such that, depending on whether $\lambda < \lambda_c,\;\lambda = \lambda_c$ or $\lambda > \lambda_c$, the process exhibits different behavior; these three regimes are respectively called \textit{subcritical}, \textit{critical} and \textit{supercritical}. It is known that the process survives if and only if it is supercritical. 
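To make the dynamics concrete, the two types of transition can be simulated directly. The sketch below is an illustrative continuous-time simulation on a path graph, not a construction used in this paper; the rates follow the description above (healing at rate 1, transmission at rate $\lambda$ across each infected--healthy edge):

```python
import random

# Continuous-time simulation of the contact process on the path
# {0, ..., n-1}: each infected site heals at rate 1, and each
# infected--healthy edge transmits at rate lam.  Illustrative sketch only.
def contact_process(n=50, lam=2.0, t_max=5.0, seed=1):
    rng = random.Random(seed)
    infected = set(range(n))               # start from full occupancy
    t = 0.0
    while infected and t < t_max:
        active = [(x, y) for x in infected
                  for y in (x - 1, x + 1)
                  if 0 <= y < n and y not in infected]
        total_rate = len(infected) + lam * len(active)
        t += rng.expovariate(total_rate)   # waiting time to the next event
        if rng.random() * total_rate < len(infected):
            infected.discard(rng.choice(sorted(infected)))   # healing
        else:
            infected.add(rng.choice(active)[1])              # transmission
    return t, len(infected)

# With lam = 0 there are no transmissions, so extinction is certain:
print(contact_process(n=10, lam=0.0, t_max=float("inf")))
```

Started from full occupancy with $\lambda = 0$ the process reaches the trap $\underline{0}$ quickly, while for large $\lambda$ it typically remains infected until `t_max`.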
In this case, the process is known to \textit{survive strongly}, meaning that (for any nonempty initial configuration) with positive probability, each site becomes infected at arbitrarily large times: $$\lambda > \lambda_c \Longrightarrow P\left[\forall x, \forall t, \exists t'> t: \xi_{t'}(x) = 1\right] > 0.$$ The interest in the contact process on trees was prompted by the discovery in \cite{P} that dying out and strong survival are not the only possibilities in this case. For $d \geq 2$, let $T_d$ denote the infinite $(d+1)$-regular tree with a distinguished vertex $o$ called the root. The different phases of the process are captured by two constants $\lambda_1(T_d) < \lambda_2(T_d)$. If $\lambda \leq \lambda_1$, $(\xi^{\{o\}}_t)$ dies out, and if $\lambda > \lambda_2$, it survives strongly. If $\lambda \in (\lambda_1,\;\lambda_2]$, then the process \textit{survives weakly}, meaning that it survives but does not survive strongly. In other words, even though the infection has positive probability of always being present on the graph, each individual site eventually becomes permanently healthy. If $G$ is a finite graph, the contact process on $G$ dies out. Given $A\subseteq V$, define $\uptau_G^A = \inf\{t: \xi^A_t = \underline{0}\}$, the \textit{extinction time} for the process started from occupancy in $A$. We may omit the subscript $G$ when the context is clear enough, and we simply write $\uptau$ when the contact process is started from full occupancy, that is, $\uptau = \uptau^{\underline{1}}$. The distribution of $\uptau$ and the behavior of the process up to this time are of considerable interest. Consider the graph $\{0,\ldots,n\}^d$ (viewed as a subgraph of ${\mathbb Z}^d$) and the distribution of $\uptau$ for this graph, as $n$ goes to infinity. The three regimes of the infinite-volume process manifest themselves in the following way. If $\lambda < \lambda_c({\mathbb Z}^d)$, then $\uptau/\log n$ converges in probability to a constant \cite{durliu}.
If $\lambda = \lambda_c$, then $\uptau/n \to \infty$ and $\uptau/n^4 \to 0$ in probability \cite{dursctan}. If $\lambda > \lambda_c$, then $\lim_{n\to\infty} \log E[\uptau]/n^d$ exists and $\uptau/E[\uptau]$ converges in distribution to the unit exponential distribution \cite{dursc,tommeta,tomexp}. In the latter case, the process is said to exhibit \textit{metastability}, meaning that it persists for a long time in a state that resembles an equilibrium and then quickly moves to its true equilibrium ($\underline{0}$ in this case). Metastability for the contact process in this setting was also studied in \cite{eulalia} and \cite{schonmeta}. For the case of finite trees, the picture is less complete, and the available results concerning the extinction time are contained in \cite{St}. Fix $d \geq 2$, let $T_d^h$ be the finite subgraph of $T_d$ defined by considering up to $h$ generations from the root and again take the contact process started from full occupancy on this graph, with associated extinction time $\uptau$. If $\lambda < \lambda_2$, then there exist constants $c, C>0$ such that $P(ch \leq \uptau \leq Ch) \to 1$ as $h \to \infty$. If $\lambda > \lambda_2$, then for any $\sigma < 1$ there exist $c_1, c_2 >0$ such that $$P\left[\uptau > c_1 e^{c_2(\sigma d)^h}\right] \to 1 \text{ as } h \to \infty.$$ Notice that the above implies that $\uptau$ is at least as large as a stretched exponential function of the number of vertices, $(d+1)^h$. As far as we know, no results are available concerning finite graphs that are not regular. \medskip For $n \in {\mathbb N}$ and $d>0$, let $\Lambda(n,d)$ be the set of all trees with $n$ vertices and degree bounded by $d$, and let ${\mathcal G}(n,d)$ be the set of graphs having a spanning tree in $\Lambda(n,d)$. In this paper, we prove the following theorems. 
\begin{theorem} \label{thm1main1} For any $d \geq 2$ and $\lambda > \lambda_c({\mathbb Z})$, there exists $c>0$ such that, for any~$n$ large enough, $$\inf_{T \in \Lambda(n,d)} \frac{\log E[\uptau_T]}{n} \geq c.$$ \end{theorem} \begin{theorem} \label{thm2main2} Let $d \geq 2$, $\lambda > \lambda_c({\mathbb Z})$, and $G_n \in {\mathcal G}(n,d)$. The distribution of $\uptau_{G_n}/E[\uptau_{G_n}]$ converges to the unit exponential distribution as $n$ tends to infinity. \end{theorem} \begin{theorem} \label{thm3main3} Let $d \geq 2$ and $\lambda> \lambda_c({\mathbb Z})$. There exists $c >0$ such that $$\inf_{T \in \Lambda(n,d)} P\left[\uptau_T \ge e^{cn}\right] \to 1 \text{ as } n \to \infty.$$ \end{theorem} By attractiveness, one can replace $\Lambda(n,d)$ by the set of all graphs having a subgraph in $\Lambda(n,d)$ in Theorems~\ref{thm1main1} and \ref{thm3main3}; in particular, one can replace $\Lambda(n,d)$ by ${\mathcal G}(n,d)$. For instance, the above results cover the case of any sequence of increasingly large connected subsets of ${\mathbb Z}^d$. At the cost of requiring $\lambda > \lambda_c({\mathbb Z})$, we thus recover and extend the previously mentioned results, without any strong assumption on the regularity of the graph. For these values of $\lambda$, this shows in particular that on regular trees of finite depth, the extinction time is not only larger than a stretched exponential function of the number of vertices, but actually an exponential function. To further illustrate the usefulness of our results, we then consider the contact process on Newman-Strogatz-Watts (NSW) random graphs, as considered in \cite{NSW} and \cite{CD}. Let us define them. For any $n \in {\mathbb N}$, we construct a graph $G^n$ on $n$ vertices. The vertex set is simply $\{1, \ldots, n\}$.
The random set of edges will be constructed from a probability $p$ on $\{3,4,\ldots\}$ with the property that, for some $a>1$, $c_0 = \lim_{m\to\infty} m^a\, p(m)$ exists and is in $(0, \infty)$. We let $d_1, \ldots, d_n$ be independent random variables distributed according to $p$, conditioned on the event that $d_1 + \cdots + d_n$ is even. Next, from each vertex $i \in \{1, \ldots, n\}$ we place $d_i$ \textit{half-edges}; when two half-edges are connected, an edge is formed. We pair up the $d_1+\cdots+d_n$ half-edges in a random way that is uniformly chosen among all possibilities. Note that this can produce multiple edges between two vertices and also loops (edges that start and finish at the same vertex). We then take the contact process with parameter $\lambda > 0$ on this random graph. Notice that the generator given by (\ref{eq1gen}) does not exclude the case of multiple edges or loops: the latter have no effect on the dynamics and the former increase the rate of transmission between vertices. Let us write $\mathbb{P}$ to denote a probability measure under which both the random graph and the contact process on this graph are defined. In \cite{CD}, it is shown that, for any $\lambda > 0$ and any $\delta > 0$, we have $\mathbb{P}[\uptau_{G^n} \ge e^{n^{1-\delta}}] \to 1$ as $n \to \infty$. We improve this and show the following. \begin{theorem} \label{thm1cd1} For any $\lambda > 0$, there exists $c > 0$ such that $$\mathbb{P}\left[\uptau_{G^n} \ge e^{cn} \right] \to 1 \text{ as } n \to \infty.$$ \end{theorem} Although it would be simple to deduce Theorem~\ref{thm1cd1} from Theorem~\ref{thm3main3} assuming $\lambda > \lambda_c({\mathbb Z})$, we stress that here we cover any non-zero infection parameter.
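As an aside, the half-edge construction above is straightforward to simulate. The following sketch (illustrative only, not used in our proofs; the truncation $m_{\max}$ of the degree law and all function names are ours) samples such a multigraph:

```python
import random

def sample_nsw_graph(n, a=3.5, m_max=200, rng=None):
    """Sample an NSW-type multigraph on the vertex set {0, ..., n-1}.

    Degrees are i.i.d. from p(m) proportional to m^{-a} on {3, ..., m_max}
    (a truncation of the power law in the text), conditioned on an even
    total; half-edges are then paired uniformly at random, which may
    create loops and multiple edges, as allowed in the text.
    """
    rng = rng or random.Random(0)
    support = list(range(3, m_max + 1))
    weights = [m ** (-a) for m in support]
    while True:  # condition on an even number of half-edges by resampling
        degs = rng.choices(support, weights=weights, k=n)
        if sum(degs) % 2 == 0:
            break
    half_edges = [v for v, d in enumerate(degs) for _ in range(d)]
    rng.shuffle(half_edges)  # a uniform shuffle gives a uniform pairing
    edges = [(half_edges[2 * i], half_edges[2 * i + 1])
             for i in range(len(half_edges) // 2)]
    return degs, edges
```

Loops and multiple edges are kept, in agreement with the remark on the generator (\ref{eq1gen}).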
Theorem \ref{thm1cd1} is true for all $a>2$, but we only give the proof for $a > 3$, which is the harder case (when we increase $a$, the degrees of the vertices become stochastically smaller, so the graph is less connected and the contact process survives for a shorter time). Nevertheless in the case $a>3$ we have the advantage that the law $p$ has finite second moment. \medskip Let us briefly explain the proofs of our results and how they are organized in the paper. Section~\ref{s:remind} is a brief reminder on some properties of the contact process that will be useful for our purposes. In Section \ref{s:metastab}, we show a weaker version of Theorem~\ref{thm1main1}, which states that the expectation of the extinction time is larger than $e^{cn^{\alpha}}$ for some $\alpha > 0$. In order to do this, we consider two cases: either the tree contains a large segment, or it contains a large number of disjoint smaller segments. In the first case, the result follows from the known behavior of the extinction time on finite intervals of ${\mathbb Z}$. In the second case, we adapt an argument of \cite{CD} and show that, even if the segments are not too large, the time scale of extinction in individual segments is large enough for the infection to spread to other, possibly inactive, segments, so that the segments can jointly sustain activity for the desired amount of time. At this point, using a general metastability argument from \cite{tommeta}, we prove Theorem \ref{thm2main2}. Given a tree $T \in \Lambda(n,d)$, we decompose it into two subtrees $T_1, T_2$ by removing an edge; we argue that this can be done so that $T_1$ and $T_2$ both contain a non-vanishing proportion of the vertices of $T$. In Section \ref{s:comparison2}, we bound the contact process $(\xi_t)_{t \ge 0}$ on $T$ from below by a pair of processes $(\zeta_{T_1,t})_{t \ge 0}$ on $T_1$ and $(\zeta_{T_2,t})_{t \ge 0}$ on $T_2$. The process $\zeta_{T_1}$ evolves as a contact process on $T_1$ until extinction. 
However, once extinct, the process stays extinct for some time, and then, like a phoenix, it rises from the ashes. This rebirth of the process reflects the fact that, as long as the true process $\xi$ has not died out, the tree $T_1$ constantly receives new infections that can restore its activity. The process $\zeta_{T_2}$ evolves independently, following the same rules. We show that the true process $\xi$ dominates $\zeta_{T_1} \cup \zeta_{T_2}$ up to the extinction of $\xi$, with probability close to $1$. With this comparison at hand, we argue that, modulo a factor that is polynomial in the number of vertices, the expected extinction time for $T$ is larger than the product of the expected extinction times for $T_1$ and $T_2$. This, together with the lower bound $e^{cn^\alpha}$ mentioned in the previous paragraph, is then used to prove Theorem~\ref{thm1main1}, from which Theorem~\ref{thm3main3} follows. In Section \ref{s:discrete}, we re-state some of the results explained above for a discrete-time version of the contact process. In Section \ref{s:nsw}, we turn to the NSW random graph $G^n$. We present an algorithm that finds, with high probability, a certain subgraph $G'$ of $G^n$ containing a large number of vertices with degree above a certain threshold $M$ ($M$ depends on $\lambda$ but not on $n$). The algorithm also guarantees that most of these vertices are not isolated from other vertices with degree above $M$. Next, a tree $T$ and a mapping $\theta$ from the vertices of $T$ to those of $G'$ are given. The map $\theta$ has the properties that, for any $x$, $\theta(x)$ has degree larger than $M$ and, if $x, y$ are neighbors in $T$, then $\theta(x)$ and $\theta(y)$ are not far from each other in $G'$. By considering $(\xi_t \cap G')$ only at values of $t$ that are integer multiples of a large constant $R$, we then define a discrete-time version of the contact process on $T$, denoted $(\eta_k)_{k \geq 1}$.
The construction is such that, if a vertex $\theta(x)$ of $G'$ has many infected neighbors in the configuration $\xi_{k\cdot R}\cap G'$, we have $\eta_k(x) = 1$. The key idea is that, on $G'$, around vertices of degree above $M$, the infection has high probability of persisting for more than $R$ units of time, and during this period, of propagating far enough that other vertices of high degree are reached; this is then interpreted as a transmission in the process $(\eta_k)$. Even if the parameter $\lambda$ is very small, we can construct $T$ and $\theta$ so that, if $n$ is large enough, $(\eta_k)$ has parameter $\lambda'$ larger than the critical parameter for the one-dimensional contact process. We then apply our results to conclude that $(\eta_k)$, and consequently $(\xi_t)$, survive for a long time. \medskip \noindent \textbf{Notations.} For $x \in {\mathbb R}$, we write $\lfloor x \rfloor$ for the integer part of $x$. If $A$ is a set, $|A|$ denotes its cardinality. When talking about the size of a graph, we always mean its number of vertices. \section{A reminder on the contact process} \label{s:remind} \setcounter{equation}{0} We start this section by presenting the graphical construction of the contact process and its self-duality property. Fix a graph $G = (V,E)$ and $\lambda > 0$. We take the following family of independent Poisson point processes on $[0,\infty)$: $$\begin{array}{ll} (D^x): x \in V &\text{with rate } 1;\\ (N^e): e \in E &\text{with rate } \lambda.\end{array}$$ Let $H$ denote a realization of all these processes. 
Given $x,y\in V,\; s \leq t$, we say that $x$ and $y$ are connected by an \textit{infection path in $H$} (and write $(x,s)\leftrightarrow (y,t)$ in $H$) if there exist times $t_0 = s < t_1 < \cdots < t_k = t$ and vertices $x_0 = x, x_1, \ldots, x_{k-1} = y$ such that \begin{itemize} \item[$\bullet$] $D^{x_i} \cap (t_i,\; t_{i+1}) = \varnothing$ for $i = 0, \ldots, k - 1$; \item[$\bullet$] $\{x_i,x_{i+1}\}\in E$ for $i = 0, \ldots, k-2$; \item[$\bullet$] $t_i \in N^{x_{i-1}, x_i}$ for $i = 1, \ldots, k-1$. \end{itemize} Points of the processes $(D^x)$ are called \textit{death marks} and points of $(N^e)$ are \textit{links}; infection paths are thus paths that traverse links and do not touch death marks. $H$ is called a \textit{Harris system}; we often omit dependence on $H$. For $A, B \subseteq V$, we write $A\times\{s\} \leftrightarrow B \times \{t\}$ if $(x,s)\leftrightarrow (y,t)$ for some $x \in A$, $y \in B$. We also write $A\times \{s\} \leftrightarrow (y,t)$ and $(x,s) \leftrightarrow B\times\{t\}$. Finally, given another set $C \subseteq V$, we write $A \times \{s\} \leftrightarrow B \times \{t\}$ \textit{inside} $C$ if there is an infection path from a point in $A\times \{s\}$ to a point in $B\times\{t\}$ and the vertices of this path are entirely contained in $C$. Given $A \subseteq V$, put \begin{equation}\label{eq1harris} \xi^A_t(x) = \mathds{1}_{\{A \times \{0\} \leftrightarrow (x,t)\}} \text{ for } x \in V,\; t \geq 0\end{equation} (here and in the rest of the paper, $\mathds{1}$ denotes the indicator function). It is well-known that the process $(\xi^A_t)_{t\geq 0} = (\xi^A_t(H))_{t\geq 0}$ thus obtained has the same distribution as that defined by the infinitesimal generator (\ref{eq1gen}). The advantage of (\ref{eq1harris}) is that it allows us to construct in the same probability space versions of the contact processes with all possible initial distributions. 
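As an illustration (not part of the arguments of the paper; the finite time horizon and all names below are ours), the graphical construction can be simulated directly: one samples the death marks and links on a time window $[0,T]$ and then runs any initial configuration through the same realization $H$:

```python
import random

def harris_system(V, E, lam, T, rng):
    """Sample death marks (rate 1 per vertex) and links (rate lam per
    edge) on [0, T], returned as a single time-ordered event list."""
    events = []
    for x in V:
        t = rng.expovariate(1.0)
        while t < T:
            events.append((t, 'death', x))
            t += rng.expovariate(1.0)
    for e in E:
        t = rng.expovariate(lam)
        while t < T:
            events.append((t, 'link', e))
            t += rng.expovariate(lam)
    events.sort(key=lambda ev: ev[0])
    return events

def run_contact(events, A):
    """Run the contact process built from a fixed Harris system,
    started from the set A of initially infected vertices."""
    infected = set(A)
    for _, kind, obj in events:
        if kind == 'death':
            infected.discard(obj)
        else:  # a link infects both endpoints if either one is infected
            x, y = obj
            if x in infected or y in infected:
                infected.update((x, y))
    return infected
```

Running nested initial sets through the same event list reproduces the joint construction of all initial configurations mentioned above.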
From this joint construction, we also obtain the \textit{attractiveness} property of the contact process: if $A \subseteq B \subseteq V$, then $\xi^A_t(H) \subseteq \xi^B_t(H)$ for all $t$. From now on, we always assume that the contact process is constructed from a Harris system, and will write $P_{G,\lambda}$ to refer to a probability measure under which such a system (on graph $G$ and with rate $\lambda$) is defined; we usually omit $G,\lambda$. Now fix $A \subseteq V,\; t > 0$ and a Harris system $H$. Let us define the \textit{dual process} $(\hat \xi^{A, t}_s)_{0 \leq s \leq t}$ by $$\hat \xi^{A,t}_s(y) = \mathds{1}_{\{(y,t-s)\leftrightarrow A \times \{t\} \text{ in } H\}}.$$ If $A = \{x\}$, we write $(\hat \xi^{x,t}_s)$. This process satisfies two important properties. First, its distribution (from time 0 to $t$) is the same as that of a contact process with the same initial configuration. Second, it satisfies the \textit{duality equation} \begin{equation}\xi^A_t \cap B \neq \varnothing \text{ if and only if } A \cap \hat \xi^{B,t}_t \neq \varnothing. \end{equation} In particular, \begin{equation}\xi^{\underline 1}_t(x) = 1 \text{ if and only if } \hat \xi^{x,t}_t \neq \varnothing, \end{equation} where $(\xi^{\underline 1}_t)$ is the process started from full occupancy. \medskip We now recall classical results about the contact process on an interval. \begin{proposition} \label{cpinterval} For $n \in {\mathbb Z}_+$, $A \subseteq {\mathbb Z}_+$, let $$ \sigma_n^A = \inf \left\{t \ge 0 : \xi_t^A(n) = 1\right\}, $$ where $(\xi_t^A)_{t \ge 0}$ denotes the contact process on ${\mathbb Z}_+$ with initial configuration $A$. For any $\lambda > \lambda_c({\mathbb Z})$, there exist $\overline{c}_1 > 0$ and $n_0$ such that the following results hold. \begin{enumerate} \item For any $n$, $$ P\left[\sigma_n^{\{0\}} < \frac{n}{\overline{c}_1}\right] > \overline{c}_1.
$$ \item For any $A \subseteq \{0,\ldots,n\}$ and any $n \ge n_0$, $$ P\left[ \sigma_0^A + \sigma_n^A \ge \frac{n}{\overline{c}_1}, \ \xi^A_{n/\overline{c}_1} \neq \underline{0} \right] \le e^{-n}. $$ \item If $(\xi_{n,t}^{\underline{1}})_{t \ge 0}$ denotes the contact process on $\{0,\ldots,n\}$ started with full occupancy, then for any $n \ge n_0$ and any $t \ge 0$, we have $$ P\left[ \xi_{n,t}^{\underline{1}} = \underline{0} \right] \le t e^{-\overline{c}_1 n}. $$ \end{enumerate} \end{proposition} This follows from the classical renormalization argument that compares the contact process with supercritical oriented percolation; see for instance the proof of \cite[Corollary~VI.3.22]{lig85}. \section{Metastability} \label{s:metastab} \setcounter{equation}{0} We begin with the following basic graph-theoretic observation. \begin{lemma} \label{lem1} For a tree $T \in \Lambda(n,d)$, there exists an edge whose removal separates $T$ into two subtrees $T_1$ and $T_2$, both of size at least $\lfloor n/d \rfloor$. \end{lemma} \begin{proof} Associate to each edge the value given by the smaller of the cardinalities of the two subtrees resulting from the edge's removal. Let $\{x,y\}$ be an edge having maximal value. Suppose, aiming at a contradiction, that the subtree $T_y$ containing vertex $y$ is the smaller of the two and that its cardinality is at most $\lfloor n/d \rfloor - 1$. Let the remaining edges at vertex $x$ be $\{x, x_1\}, \{x, x_2\}, \ldots, \{x, x_r\}$, where $r \leq d-1$. Let $T_j$ be the subtree containing $x_j$ obtained by removing the edge $\{x, x_j\}$, and let $n_j$ be its cardinality. Since the component of $T$ containing $x$ after the removal of $\{x,x_j\}$ contains both $x$ and $T_y$, it has more than $|T_y|$ vertices; by the maximality of $\{x,y\}$, the value of $\{x,x_j\}$ is therefore $n_j$, and all the $n_j$ must be at most $\lfloor n/d \rfloor - 1$. But equally, $$ |T_y| = \left| T \setminus \left(\{x\} \cup T_1 \cup \cdots \cup T_r\right) \right| = n-(1 + n_1 + n_2 + \cdots +n_r) \leq \lfloor n/d \rfloor - 1. $$ That is, $n \le 1 + (d-1) (\lfloor n/d \rfloor - 1) + (\lfloor n/d \rfloor - 1) \le n -(d-1)$, a contradiction for $d \geq 2$ (the case $d = 1$ being trivial).
\end{proof} \begin{proposition} \label{p:expalpha} For any $\lambda > \lambda_c({\mathbb Z})$, there exist $\alpha > 0$ and $\overline{c}_2 > 0$ such that the following holds. \begin{enumerate} \item For any $n$ large enough, any $T \in \Lambda(n,d)$, any non-empty $A \subseteq T$, one has $$ P\left[ \uptau^A \ge e^{\overline{c}_2 n^\alpha} \right] \ge \overline{c}_2. $$ In particular, $E[\uptau^A] \ge \overline{c}_2 e^{\overline{c}_2 n^\alpha}$. \item Moreover, $$ P\left[ \uptau \ge e^{\overline{c}_2 n^{\alpha/2}} \right] \ge 1-e^{-\overline{c}_2 n^{\alpha/2}}, $$ where we recall that we write $\uptau$ as a shorthand for $\uptau^{\underline{1}}$. \item For $n$ large enough and any $G \in {\mathcal G}(n,d)$, if the contact process on $G$ started with an arbitrary non-empty configuration survives up to time $n^2$, then the chance that, at this time, it is equal to the contact process started from full occupancy is at least $1- e^{-n^{\alpha/2}}$. \end{enumerate} \end{proposition} From now on, $d$ is fixed and we consider a tree $T$ of maximal degree $d$ and size $n \to \infty$. Let $\beta > 0$ be a constant, not depending on $n$, whose value will be determined later. Applying Lemma~\ref{lem1} repeatedly $\beta \log n$ times, we obtain $L_n = 2^{\beta \log n}$ disjoint subtrees, each of size at least $\frac{n}{{(2d)}^{\beta \log n}} \geq \sqrt{n}$, provided $\beta \le 1/(2\log(2d))$ (for clarity, we simply assume that $L_n$ is an integer, without writing that the integer part should be taken). We write $T_1,\ldots, T_{L_n}$ for the trees thus obtained. Since the tree $T$ has maximal degree bounded by $d$, so do the subtrees $(T_j)$. Now, the size of a tree with maximal degree $d$ is at most $$ 1+d+\ldots+d^\textsf{diam} = \frac{d^{\textsf{diam}+1}-1}{d-1}, $$ where $\textsf{diam}$ denotes its diameter. As a consequence, for $n$ large enough, each $T_j$ must have a diameter of at least $\frac{\log n}{4 \log d}$, and thus contain a path of $\frac{\log n}{4 \log d}$ distinct vertices.
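The repeated splitting just performed relies only on Lemma~\ref{lem1}, whose proof is constructive; as an illustration (ours, not part of the proof), a single balanced edge split can be computed from subtree sizes as follows:

```python
def balanced_edge_split(adj):
    """Given a tree as an adjacency dict, find an edge whose removal
    leaves two components that are as balanced as possible, in the
    spirit of Lemma lem1: when the maximum degree is d, both sides
    then have at least floor(n/d) vertices."""
    n = len(adj)
    # Root the tree at an arbitrary vertex and record a traversal order.
    root = next(iter(adj))
    parent = {root: None}
    order = [root]
    for v in order:  # the list grows as we discover vertices (BFS)
        for w in adj[v]:
            if w != parent[v]:
                parent[w] = v
                order.append(w)
    # Accumulate subtree sizes bottom-up.
    size = {v: 1 for v in adj}
    for v in reversed(order):
        if parent[v] is not None:
            size[parent[v]] += size[v]
    # Each non-root v corresponds to the edge {v, parent[v]}; removing
    # it leaves components of sizes size[v] and n - size[v].
    best = max((v for v in adj if parent[v] is not None),
               key=lambda v: min(size[v], n - size[v]))
    return (best, parent[best]), min(size[best], n - size[best])
```

Applying the function recursively to the two components yields the subtrees $T_1, \ldots, T_{L_n}$.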
We write $I_j$ to denote such a path, which we identify with an interval of length $\frac{\log n}{4 \log d}$. \medskip In what follows, we will distinguish between the two possibilities: \begin{enumerate} \item[(A)] the diameter of $T$ is at least $n^{\alpha}$, \item[(B)] the diameter of $T$ is less than $n^\alpha$, \end{enumerate} where $\alpha > 0$ is a fixed number whose value will be specified in the course of the proof. It is worth keeping in mind that $\alpha$ will be chosen much smaller than $\beta$, itself chosen as small as necessary. \begin{proof}[Proof of parts (1-2) of Proposition~\ref{p:expalpha}] Assume that the tree $T$ satisfies (A). For part (1), by attractiveness, it suffices to consider initial configurations with a single occupied site $z$. Condition (A) ensures that one can find an interval of length at least $n^\alpha$. We write $[x,y]$ to denote such an interval, with $x$ and $y$ its endpoints. Consider the event that within time $2n/\overline{c}_1$, the contact process has infected site $x$, and thereafter the contact process begun at this time restricted to $[x,y]$ and with only $x$ occupied has infected $y$. This event has probability at least $\overline{c}_1^2$ by part (1) of Proposition~\ref{cpinterval}. If this event occurs, then at time $2n/\overline{c}_1$, the contact process on $T$ dominates the contact process on $[x,y]$ begun with full occupancy. The desired bound now follows from bounds on survival times for supercritical contact processes on an interval, see part (3) of Proposition~\ref{cpinterval}. Part (2) also follows using the interval $[x,y]$ and part (3) of Proposition~\ref{cpinterval}. We now consider that the graph satisfies (B), and adapt an approach due to \cite{CD}. 
For any $A \subseteq I_i$, we write $(\xi^{A}_{i,t})_{t \ge 0}$ for the contact process on $I_i$ with initial configuration $A$, and define $$ p_i(A) = P\left[ \xi^{A}_{i,Kn^\alpha} = \xi^{\underline{1}}_{i,Kn^\alpha} \neq \underline{0} \right], $$ where $K = 2/\overline{c}_1$. For any $i \le L_n$, we say that the interval $I_i$ is \emph{good at time} $t$ if $p_i(\xi_t) \ge 1 -n^{-2\beta}$, where for simplicity we write $p_i(\xi_t)$ instead of $p_i\left({\xi_t} \cap I_i\right)$. For $k \in {\mathbb N}$, we let $X_k \in \{0,\ldots L_n\}$ be the number of good intervals at time $kKn^{\alpha}$. For $i \le L_n$ and $k \ge 0$, let us write ${\mathcal E}_{i,k}$ for the event that the interval $I_i$ is good at time $kK n^\alpha$. By definition, $$ P[{\mathcal E}_{i,k+1} \ | \ {\mathcal E}_{i,k}] = P[p_i(\xi_{(k+1)Kn^\alpha}) \ge 1 - n^{-2\beta} \ | \ {\mathcal E}_{i,k}]. $$ By attractiveness, the latter is larger than \begin{equation*} \begin{split} & P\left[p_i\left( \xi^{\xi_{kKn^\alpha}}_{i,Kn^\alpha} \right)\ge 1 - n^{-2\beta} \ | \ {\mathcal E}_{i,k}\right] \\ & \qquad \ge P\left[p_i\left( \xi^{\underline{1}}_{i,Kn^\alpha} \right) \ge 1 - n^{-2\beta}, \xi^{\xi_{kKn^\alpha}}_{i,Kn^\alpha} = \xi^{\underline{1}}_{i,Kn^\alpha} \ | \ {\mathcal E}_{i,k}\right] \\ & \qquad \ge 1 - P\left[p_i\left( \xi^{\underline{1}}_{i,Kn^\alpha} \right) < 1 - n^{-2\beta}\right] - \underbrace{P\left[\xi^{\xi_{kKn^\alpha}}_{i,Kn^\alpha} \neq \xi^{\underline{1}}_{i,Kn^\alpha} \ | \ {\mathcal E}_{i,k}\right]}_{\le n^{-2\beta}}. \end{split} \end{equation*} We now argue that for $n$ large enough, \begin{equation} \label{e:expalpha} P\left[p_i\left( \xi^{\underline{1}}_{i,Kn^\alpha} \right) < 1 - n^{-2\beta}\right] \le n^{-2\beta}. 
\end{equation} Letting $(\xi^{A,t}_s)_{s \ge t}$ be the contact process started at time $t$ with $A$ occupied, one can rewrite the probability on the l.h.s.\ of \eqref{e:expalpha} as \begin{multline*} P\left[ P\left[\xi^{\xi^{\underline{1}}_{i,Kn^\alpha},Kn^\alpha}_{i,2Kn^\alpha} \neq \xi^{\underline{1},Kn^\alpha}_{i,2Kn^\alpha} \text{ or } \xi^{\underline{1},Kn^\alpha}_{i,2Kn^\alpha} = \underline{0} \ | \ \xi^{\underline{1}}_{i,Kn^\alpha} \right] > n^{-2\beta} \right] \\ \le n^{2\beta} P\left[ \xi^{\underline{1}}_{i,2Kn^\alpha} \neq \xi^{\underline{1},Kn^\alpha}_{i,2Kn^\alpha} \text{ or } \xi^{\underline{1}}_{i,2Kn^\alpha} = \underline{0}\right]. \end{multline*} By part (3) of Proposition~\ref{cpinterval}, the contact process on $I_i$ started with full occupancy survives up to time $2Kn^\alpha$ with probability larger than $$ 1-2Kn^\alpha \exp\left(-\overline{c}_1 |I_i|\right) = 1-2Kn^{\alpha-\overline{c}_1/4\log d }. $$ On this event, the probability that it gets coupled with the contact process started from full occupancy at time $Kn^\alpha$ within time $Kn^\alpha$ is larger than $1-e^{-|I_i|} = 1-n^{-\overline{c}_1/4\log d}$ by part (2) of Proposition~\ref{cpinterval}. Hence, the l.h.s.\ of \eqref{e:expalpha} is bounded by $$ n^{2\beta} \left( 2Kn^{\alpha-\overline{c}_1/4\log d } + n^{-\overline{c}_1/4\log d} \right), $$ which can be made smaller than $n^{-2\beta}$ if $0 < \alpha \ll \beta \ll 1$ are suitably chosen. To sum up, we have shown that for all $n$ large enough, $$ P[{\mathcal E}_{i,k+1} \ | \ {\mathcal E}_{i,k}] \ge 1-2 n^{-2\beta}. $$ Moreover, an examination of the above proof shows that this estimate still holds if we condition also on the state of the intervals $(I_j)_{j \neq i}$. 
In other words, we have shown that for any $x \ge 0$, \begin{equation} \label{driftgauche} P\left[X_{k+1} \leq X_{k} -x \ | \ X_{k}\right] \leq P\left[\mathsf{Bin}(L_n, 2 n^{-2\beta}) \geq x\right], \end{equation} where $\mathsf{Bin}(n,p)$ denotes a binomial random variable of parameters $n$ and $p$. Note also that with probability tending to $1$, all the intervals that are good at time $kKn^\alpha$ remain so at time $(k+1)Kn^\alpha$. We now show that if $l < L_n$, then \begin{equation} \label{driftdroite} P\left[X_{k+1}-X_k \ge 1 \ | \ \xi_{kKn^\alpha} \neq \underline{0}, X_k = l \right] \ge \frac{\overline{c}_1^2}{2}. \end{equation} (Obviously, if $X_k = l \neq 0$, then it must be that $\xi_{kKn^\alpha} \neq \underline{0}$.) By the Markov property, it suffices to show \eqref{driftdroite} for $k = 0$. We thus consider a non-empty initial configuration $A$ with $l < L_n$ good intervals. Let $I_i = [x,y]$ be an interval that is not good at time $0$. With probability tending to $1$, all good intervals remain good at time $Kn^\alpha$, so we only need to study the probability that $I_i$ becomes good. Since $p_i$ is non-decreasing by attractiveness, the probability of the complementary event satisfies $$ P\left[ p_i\left(\xi^A_{Kn^\alpha}\right) < 1-n^{-2\beta} \right] \le P\left[\xi^A_{Kn^\alpha} \not\geq \xi_{i,Kn^\alpha}^{\underline{1}} \right] + P\left[ p_i\left(\xi_{i,Kn^\alpha}^{\underline{1}}\right) < 1-n^{-2\beta} \right] . $$ Inequality \eqref{e:expalpha} ensures that the last probability becomes arbitrarily small for $n$ large enough. It thus suffices to show that \begin{equation} \label{e:expal} P\left[\xi^A_{Kn^\alpha} \not\geq \xi_{i,Kn^\alpha}^{\underline{1}} \right] \le 1-\overline{c}_1^2. \end{equation} Let $z \in A$. We consider the event ${\mathcal E}_1$ that within time $Kn^\alpha = 2n^\alpha/\overline{c}_1$, the contact process has infected $x$, and thereafter the contact process restricted to $[x,y]$ and with only $x$ occupied has reached $y$.
Note that the diameter of $T$ is less than $n^\alpha$ (so that there exists a path of length less than $n^\alpha$ linking $z$ to $x$), while the length of $I_i$ is $\frac{\log n}{4 \log d} \le n^\alpha$. As a consequence, part~(1) of Proposition~\ref{cpinterval} ensures that the event ${\mathcal E}_1$ has probability at least $\overline{c}_1^2$. Since on the event ${\mathcal E}_1$, we have $\xi^A_{Kn^\alpha} \ge \xi_{i,Kn^\alpha}^{\underline{1}}$, this justifies \eqref{e:expal}, and thus also \eqref{driftdroite}. The conclusion will now follow from \eqref{driftgauche} and \eqref{driftdroite} by a comparison with a random walk on ${\mathbb Z} \cap (-\infty,L_n]$ with a drift to the right. The necessary information on this drifted walk is contained in the following lemma. \begin{lemma} \label{l:rw} Let $(Z_l)_{l \in {\mathbb N}}$ be the random walk on ${\mathbb Z} \cap (-\infty,L_n]$ with transition probabilities $$ P[Z_{l+1} = x + k \ | \ Z_l = x < L_n] = \left|\begin{array}{ll} 0 & \text{if } k > 1, \\ \overline{c}_1^2/{2} & \text{if } k = 1, \\ e^{-n^{-\beta}} \ n^{-|k|\beta}/{|k|!} & \text{if } k \le -1. \end{array} \right. $$ Let also $H_0$ be the hitting time of ${\mathbb Z}_- = {\mathbb Z} \cap(-\infty,0]$, and $H_L$ be the hitting time of $L_n$. For any $n$ large enough and any $x \le L_n$, we have $$ P\left[ H_0 < H_L \ | \ Z_0 = x \right] \le n^{-x \beta/2}. $$ \end{lemma} Let us postpone the proof of this lemma, and see how it enables us to conclude. From \eqref{driftdroite}, we learn that whatever the initial non-empty configuration, we have $X_1 \ge 1$ with probability bounded away from $0$. On this event, we want to couple $(X_k)$ with the random walk of the lemma, so that $X_{k-1} \ge Z_k$ for every $k \ge 0$. In the r.h.s.\ of \eqref{driftgauche}, a binomial random variable appears, while jumps to the left in the lemma follow a Poisson random variable. 
Since a Bernoulli random variable of parameter $p$ is stochastically dominated by a Poisson random variable of parameter $-\log(1-p)$, it follows that $\mathsf{Bin}(L_n, 2 n^{-2\beta})$ is stochastically dominated by a Poisson random variable of parameter $$ -L_n \log (1-2 n^{-2\beta}) = -n^{\beta \log 2}\log (1-2 n^{-2\beta}) \le n^{-\beta}. $$ This and \eqref{driftdroite} guarantee the existence of the coupling. With probability at least $1-n^{-\beta/2} \ge 1/2$, the random walk hits $L_n$ before entering ${\mathbb Z}_-$. The proof of part (1) will be complete if we can argue that starting from $L_n$, with probability close to $1$, the walk needs to exit $L_n$ at least $e^{n^{\alpha}}$ times before reaching ${\mathbb Z}_-$. Let us consider a sequence of $e^{n^\alpha}$ excursions from $L_n$, and show that with high probability, none of them visits ${\mathbb Z}_-$. The first jump out of $L_n$ is distributed according to a Poisson random variable of parameter $n^{-\beta}$, which (for convenience) may be dominated by an exponential random variable of parameter $1$. With probability tending to $1$, the maximum over $e^{n^{\alpha}}$ such random variables does not exceed $n^{2 \alpha} \le L_n/4$. In view of the lemma, given an excursion whose first step has size smaller than $L_n/4$, the excursion will visit ${\mathbb Z}_-$ with probability smaller than $n^{-3L_n\beta/4} \le e^{-2n^\alpha}$, and this finishes the proof of part (1). As for part (2), the argument is similar, except that in this case $X_0 = L_n$. Consider $e^{n^{\alpha/2}}$ excursions from $L_n$. With probability at least $1-e^{-n^{\alpha/2}}$, none of these excursions has size larger than $n^{2\alpha} \le L_n/4$. As noted above, given an excursion from $L_n$ whose first step has size smaller than $L_n/4$, the excursion will visit ${\mathbb Z}_-$ with probability smaller than $n^{-3L_n\beta/4} \le e^{-2n^\alpha}$, thus finishing the proof of part (2). 
\end{proof} \begin{proof}[Proof of Lemma~\ref{l:rw}] Let $h(x) = P\left[ H_0 < H_L \ | \ Z_0 = x \right]$, $\tilde{h}(x) = n^{-x \beta/2}$, and let ${\mathcal L}$ be the generator of the random walk: $$ {\mathcal L} f(x) = \frac{\overline{c}_1^2}{2}(f(x+1)-f(x)) + e^{-n^{-\beta}} \sum_{k = 1}^{+\infty} \frac{n^{-k\beta}}{k!} (f(x-k) - f(x))\qquad (x < L_n). $$ For $x \in {\mathbb Z} \cap (0,L_n)$, we have ${\mathcal L} h(x) = 0$. On the other hand, for such $x$, we have \begin{eqnarray*} {\mathcal L} \tilde{h}(x) & = & \frac{\overline{c}_1^2}{2}\left(n^{-\beta/2}-1\right)\tilde{h}(x) + e^{-n^{-\beta}}\sum_{k = 1}^{+\infty} \frac{n^{-k\beta}}{k!} (n^{k\beta/2}-1)\tilde{h}(x) \\ & \le & \frac{\overline{c}_1^2}{2}\left(n^{-\beta/2}-1\right)\tilde{h}(x) + \sum_{k = 1}^{+\infty} \frac{n^{-k\beta}}{k!} n^{k\beta/2}\tilde{h}(x) \\ & \le & \left[\frac{\overline{c}_1^2}{2}\left(n^{-\beta/2}-1\right) + e^{n^{-\beta/2}}-1 \right] \tilde{h}(x), \end{eqnarray*} so ${\mathcal L} \tilde{h}(x)\le 0$ as soon as $n$ is large enough. As a consequence, ${\mathcal L} (h-\tilde{h}) \ge 0$ on ${\mathbb Z} \cap (0,L_n)$. By the maximum principle, $$ \max_{{\mathbb Z} \cap (0,L_n)} (h-\tilde{h}) \le \max_{{\mathbb Z}_- \cup \{L_n\}} (h-\tilde{h}) = 0, $$ and the lemma is proved. \end{proof} The following observation will be useful in the proof of part (3) of Proposition~\ref{p:expalpha}. \begin{rem} \label{remduality} Let a Harris system for the contact process on some graph $G = (V,E)$ be given (and fixed). We identify $G$ with its set of vertices, and assume that $\xi^A_t = \xi^{\underline{1}}_t$ for some $A \subseteq V$ and $t > 0$. This implies that in the Harris system, any infection path from $V\times \{0\}$ to $V\times \{t\}$ intersects the offspring of elements of $A$. Let $(\hat{\xi}^{B,t}_s)_{0 \le s \le t}$ be the dual contact process for time $t$, started with configuration $B$. 
If, furthermore, $\hat{\xi}^{B,t}$ survives up to time $t$, then there must exist an infection path from $A \times \{0\}$ to $B \times \{t\}$. \end{rem} \begin{proof}[Proof of part (3) of Proposition~\ref{p:expalpha}] We continue with case (B), but considering that $T$ is the spanning tree of some graph $G = (V,E)$. For an arbitrary $z \in V$, we wish to bound $$ P\left[ \xi^z_{n^2} \neq \xi^{\underline{1}}_{n^2}, \ \xi^z_{n^2} \neq \underline{0} \right]. $$ The probability above is equal to $P[\exists y : \xi^z_{n^2}(y) \neq \xi^{\underline{1}}_{n^2}(y), \ \xi^z_{n^2} \neq \underline{0}]$. For any fixed $y$, we will thus bound \begin{equation} \label{e:debut} P\left[\xi^z_{n^2}(y) \neq \xi^{\underline{1}}_{n^2}(y), \ \xi^z_{n^2} \neq \underline{0}\right]. \end{equation} Letting $(\hat{\xi}^{y,n^2}_t)_{0 \le t \le n^2}$ be the dual contact process for time $n^2$ started with configuration $\{y\}$, we can rewrite this probability as $$ P\left[\xi^z_{n^2}(y) = 0,\ \hat{\xi}^{y,n^2}_{n^2} \neq \underline{0}, \ \xi^z_{n^2} \neq \underline{0}\right]. $$ As in the proof of part (1), we consider $X_k$, the number of good intervals at time $kKn^\alpha$. By attractiveness, if an interval is good for the contact process on $T$, then it must be good for the contact process on $G$. Note that, for $H_L$ as in Lemma~\ref{l:rw}, a classical large deviation estimate on sums of i.i.d.\ random variables with an exponential moment gives us that $$ P\left[ H_L > n \right] \le e^{-\sqrt{n}}, $$ and as a consequence, \begin{equation} \label{step1} P\left[L_n \notin \{X_k, \ k \le n\}, \ \xi^z_{nKn^\alpha} \neq \underline{0} \right] \le e^{-\sqrt{n}}. \end{equation} Let ${\mathcal E}_{3/4}$ be the event that starting from $z$ occupied, at least $3/4$ of all the intervals~$(I_i)_{i \le L_n}$ are good at time $n^2/2$ (which, for simplicity, is assumed to be a multiple of $Kn^\alpha$).
As the proof of part (2) reveals, once $X_k$ has reached $L_n$, the probability that it makes an excursion below $3L_n/4$ before time $n^2$ is smaller than $e^{-n^{\alpha}}$. Combining this with \eqref{step1}, we obtain $$ P\left[ \xi^z_{n^2} \neq \underline{0},\ {\mathcal E}_{3/4}^c \right] \le 2 e^{-n^{\alpha}}, $$ where ${\mathcal E}_{3/4}^c$ denotes the complement of ${\mathcal E}_{3/4}$. Similarly, if we let $\hat{{\mathcal E}}_{3/4}$ denote the event that for the dual process $\hat{\xi}^{y,n^2}$, at least $3/4$ of the intervals are good at time $n^2/2 - Kn^\alpha$, then $$ P\left[ \hat{\xi}^{y,n^2}_{n^2} \neq \underline{0},\ \hat{{\mathcal E}}_{3/4}^c \right] \le 2 e^{-n^{\alpha}}. $$ Consider the event $\tilde{{\mathcal E}}_i$ defined by: \begin{equation*} \begin{array}{l} \text{during the time interval } [n^2/2,n^2/2+Kn^\alpha]\text{, the direct contact process}\\ \text{restricted to } I_i \text{ becomes identical with the contact process started with full} \\ \text{occupancy (on } I_i \text{), while the dual contact process restricted to } I_i \text{ survives}. \end{array} \end{equation*} Let also ${\mathcal I}$ be the set of indices $i$ such that $I_i$ is good both for the contact process and its dual. We have \begin{equation*} P\left[\bigcap_{i \le L_n} (\tilde{{\mathcal E}}_i)^c, {\mathcal E}_{3/4},\hat{{\mathcal E}}_{3/4} \right] \le P\left[\bigcap_{i \in {\mathcal I}} (\tilde{{\mathcal E}}_i)^c, {\mathcal E}_{3/4},\hat{{\mathcal E}}_{3/4} \right]. \end{equation*} Given that ${{\mathcal E}}_{3/4}$ and $\hat{{\mathcal E}}_{3/4}$ both happen, at least $1/2$ of the intervals are good both for the contact process and its dual, or in other words, $|{\mathcal I}| \ge L_n/2$. Moreover, the events ${\mathcal E}_{3/4}$ and $\hat{{\mathcal E}}_{3/4}$, and the set ${\mathcal I}$, are independent of the state of the Harris system in the time layer $T \times [n^2/2,n^2/2+Kn^\alpha]$. 
By the definition of being good, we have $P[(\tilde{{\mathcal E}}_i)^c \ | \ i \in {\mathcal I}] \le 2 n^{-2\beta}$. Note also that the events $(\tilde{{\mathcal E}}_i)$ are independent. Hence \begin{equation*} P\left[\bigcap_{i \le L_n} (\tilde{{\mathcal E}}_i)^c, {\mathcal E}_{3/4},\hat{{\mathcal E}}_{3/4} \right] \le (2n^{-2\beta})^{L_n/2}. \end{equation*} Finally, note that when one of the $\tilde{{\mathcal E}}_i$ happens, it must be that $\xi^z_{n^2}(y) = 1$, by Remark~\ref{remduality}. We have thus proved that \begin{eqnarray*} P\left[\xi^z_{n^2}(y) = 0,\ \hat{\xi}^{y,n^2}_{n^2} \neq \underline{0}, \ \xi^z_{n^2} \neq \underline{0}\right] & \le & P\left[\xi^z_{n^2}(y) = 0,\ {\mathcal E}_{3/4}, \hat{{\mathcal E}}_{3/4} \right] + 4 e^{-n^{\alpha}}\\ & \le & \left( 2n^{-2\beta} \right)^{L_n/2} + 4 e^{-n^{\alpha}} \\ & \le & 5 e^{-n^{\alpha}}. \end{eqnarray*} Recalling that the probability on the l.h.s.\ above is that appearing in \eqref{e:debut}, we have thus shown that $$ P\left[ \xi^z_{n^2} \neq \xi^{\underline{1}}_{n^2}, \ \xi^z_{n^2} \neq \underline{0} \right] \le 5n e^{-n^{\alpha}}. $$ Now for a general $A \subseteq V$, we have $$ P\left[ \xi^A_{n^2} \neq \xi^{\underline{1}}_{n^2}, \ \xi^A_{n^2} \neq \underline{0} \right] \le \sum_{z \in T } P\left[ \xi^z_{n^2} \neq \xi^{\underline{1}}_{n^2}, \ \xi^z_{n^2} \neq \underline{0} \right] \le 5n^2 e^{-n^{\alpha}}. $$ In view of part (1) of Proposition~\ref{p:expalpha}, we thus have, for $A \neq \varnothing$, $$ P\left[ \xi^A_{n^2} \neq \xi^{\underline{1}}_{n^2} \ | \ \xi^A_{n^2} \neq \underline{0} \right] \le \frac{5n^2}{\overline{c}_2} e^{-n^{\alpha}}, $$ which proves the desired result. \medskip For case (A), the reasoning is similar, only simpler. Let $I$ be an interval of length $n^\alpha$ contained in $T$. 
For any $A \subseteq I$, we write $(\xi^{A}_{I,t})_{t \ge 0}$ for the contact process on $I$ with initial configuration $A$, and define $$ p(A) = P\left[ \xi^{A}_{I,K n} = \xi^{\underline{1}}_{I,K n} \neq \underline{0} \right]. $$ We say that $I$ is \emph{good at time} $t$ if $p(\xi_t) \ge 1-e^{-n^{3\alpha/4}}$, and for $k \in {\mathbb N}$, we let $X_k$ be the indicator function that $I$ is good at time $kK n$. In view of the proof of part (1) of Proposition~\ref{p:expalpha}, we have \begin{equation} \label{driftright} P\left[ X_{k+1} = 1 \ | \ \xi_{kKn} \neq \underline{0} \right] \ge \overline{c}_1^2, \end{equation} while the same reasoning as in case (B) leads to \begin{equation} \label{driftleft} P\left[X_{k+1} = 1 \ | \ X_k = 1 \right] \ge 1-2e^{-n^{3\alpha/4}}. \end{equation} From \eqref{driftright} and \eqref{driftleft}, one can see that, for any $z \in V$, $$ P\left[ \xi_{n^2}^z \neq \underline{0},\ I \text{ not good at time } n^{3/2} \text{ for } \xi^z \right] \le 2 e^{-n^{5\alpha/8}}, $$ where for simplicity we assume that $n^{3/2}$ is a multiple of $Kn$. Similarly, for any $z \in V$, one has $$ P\left[ \hat{\xi}^{y,n^2}_{n^2} \neq \underline{0},\ I \text{ not good at time } n^{3/2}-Kn \text{ for } \hat{\xi}^{y,n^2} \right] \le 2 e^{-n^{5\alpha/8}}, $$ and we conclude as in case (B). \end{proof} \begin{proof}[Proof of Theorem~\ref{thm2main2}] The result follows from \cite[Proposition~2.1]{tommeta}, using parts (2-3) of Proposition~\ref{p:expalpha}. \end{proof} \vspace{0.4cm} \section{Comparison with Phoenix contact processes} \label{s:comparison2} \setcounter{equation}{0} The aim of this section is to prove Theorems~\ref{thm1main1} and \ref{thm3main3}. To this end, we manufacture a ``Phoenix contact process''. This process evolves as a contact process up to extinction, but has then the ability to recover activity, making it a positive recurrent Markov process. 
Separating a tree $T$ into $T_1$ and $T_2$ as in Lemma~\ref{lem1}, we then show that with high probability, the true contact process $\xi$ dominates the union of the two Phoenix contact processes running independently on $T_1$ and $T_2$, and this enables us to conclude. \medskip Let $T \in \Lambda(n,d)$. Given a Harris system for the contact process on $T$, for any $x \in T$ and $t \ge 0$, we write $(\xi^{x,t}_s)_{s \ge t}$ for the contact process starting at time $t$ with $x$ the only occupied site. We say that the Harris system is \emph{trustworthy} on the time interval $[0,n^4]$ if for any $(x,s) \in T \times [0,n^4/2]$, the following two conditions hold: \begin{enumerate} \item[(C$_1$)] if $\xi^{x,s}$ survives up to time $n^4$, then $\xi^{x,s}_{n^4} = \xi^{\underline{1}}_{n^4}$, \item[(C$_2$)] if $\xi^{x,s}$ survives up to time $s+2 n^2$, then it survives up to time $n^4$. \end{enumerate} We say that the Harris system $H$ is trustworthy on the time interval $[t,t+n^4]$ if $\Theta_t H$ is trustworthy on the time interval $[0,n^4]$, where $\Theta_t H$ is the Harris system obtained by a time translation of $t$. For a given Harris system and for $(Y_t)_{t \in {\mathbb R}_+}$ a family of independent auxiliary random variables following a Bernoulli distribution of parameter $1/2$, independent of the Harris system, we define the \emph{Phoenix contact process} $(\zeta_{T,t})_{t \ge 0} = (\zeta_t)_{t \ge 0}$ on $\{0,1\}^T$ as follows. \noindent \emph{Step 0.} Set $\zeta_0 = \underline{1}$, and go to Step 1. \noindent \emph{Step 1.} Evolve as a contact process according to the Harris system, up to reaching the state $\underline{0}$, and go to Step 2. \noindent \emph{Step 2.} Let $t$ be the time when Step 2 is reached. 
Stay at $\underline{0}$ up to time $t+n^4$ and \begin{itemize} \item if the Harris system is trustworthy on $[t,t+n^4]$ and $Y_{t} = 1$, then set $\zeta_{t+n^4} = \xi^{\underline{1},t}_{t+n^4}$ (where $\xi^{\underline{1},t}$ is the contact process started with full occupancy at time $t$ and governed by the Harris system), and go to Step 1; \item else, go to Step 2. \end{itemize} We say that the process is \emph{active} when it is running Step 1, and \emph{quiescent} when it is running Step 2. Note that after initialization, the process alternates between active and quiescent phases. If it happens that, during Step 2, the Harris system is trustworthy on $[t,t+n^4]$ and $Y_t = 1$, but $\xi^{\underline{1},t}_{t+n^4} = \underline{0}$, we consider that the process is active at time $t+n^4$, and becomes quiescent again immediately afterwards. \begin{rem} \label{r:markov} Note that since the time the process spends in state $\underline{0}$ is not exponential, $(\zeta_t)$ is not Markovian. It would however be easy to make the process Markovian, by enlarging its state space into $\left(\{0,1\}^T \setminus \{\underline{0}\}\right) \cup \left(\{\underline{0}\} \times [0,n^4)\right)$, so that when arriving in Step 2, the process is in the state $(\underline{0},0)$, and subsequently the second coordinate increases at unit speed. \end{rem} \begin{rem} \label{r:randomization} The auxiliary randomization of $\zeta$ provided by the family $(Y_t)$ is a technical convenience, which guarantees that if $\zeta$ is quiescent at some time $t$, then with probability at least $1/2$ it remains so at least up to time $t + n^4$. \end{rem} \begin{rem} \label{r:defnu} Each time the process becomes active again, its distribution at this time is that of $\xi^{\underline{1}}_{n^4}$ conditioned on the event that the Harris system is trustworthy on the time interval $[0,n^4]$. We write $\nu$ to denote this distribution. \end{rem} \begin{lemma} \label{l:trust} Let $T \in \Lambda(n,d)$.
For any $n$ large enough and any $t$, the probability that the Harris system on $T$ is trustworthy on $[t,t+n^4]$ is larger than $1/2$. \end{lemma} \begin{proof} It suffices to show the lemma for $t = 0$. We first consider condition (C$_1$). By part (3) of Proposition~\ref{p:expalpha}, the probability that \begin{equation} \label{eventC1} \forall z \in T, \ \xi^{z,n^4/2}_{n^4} \neq \underline{0} \Rightarrow \xi^{z,n^4/2}_{n^4} = \xi^{\underline{1},n^4/2}_{n^4} \end{equation} goes to $1$ as $n$ tends to infinity. Let $(x,s) \in T \times [0,n^4/2]$, and assume that $\xi^{x,s}$ survives up to time $n^4$, that is, $$ (x,s) \leftrightarrow T \times \{n^4\}. $$ Then there must exist $z \in T$ such that $$ (x,s) \leftrightarrow (z,n^4/2) \leftrightarrow T \times \{n^4\}. $$ On the event \eqref{eventC1}, we thus have $\xi^{x,s}_{n^4} \ge \xi^{\underline{1},n^4/2}_{n^4}$. The converse comparison being clearly satisfied, we have in fact $\xi^{x,s}_{n^4} = \xi^{\underline{1},n^4/2}_{n^4}$. In order to show that condition (C$_1$) is satisfied for any $(x,s) \in T \times [0,n^4/2]$ with probability tending to $1$, it thus suffices to show that \begin{equation} \label{C1bis} P\left[ \xi^{\underline{1}}_{n^4} = \xi^{\underline{1},n^4/2}_{n^4} \right] \to 1 \text{ as } n \to \infty. \end{equation} In view of part (2) of Proposition~\ref{p:expalpha}, with probability tending to one, we have $\xi^{\underline{1}}_{n^4} \neq \underline{0}$. On this event, by part (3) of Proposition~\ref{p:expalpha}, we also have $\xi^{\underline{1},n^4/2}_{n^4} = \xi^{\underline{1}}_{n^4}$ with probability tending to $1$, and thus \eqref{C1bis} is proved. We now turn to condition (C$_2$). Note that the event $\xi^{x,s}_{s+2n^2} \neq \underline{0}$ can be rewritten as $$ (x,s) \leftrightarrow T \times \{s+2n^2\}, $$ in which case there must exist $z \in T$ such that $$ (x,s) \leftrightarrow (z, \lceil s/n^2 \rceil n^2) \leftrightarrow T \times \{s+2n^2\}.
$$ It is thus sufficient to show that \begin{equation} \label{C2a} P\left[\exists z \in T, k \in \{0,\ldots, \lceil n^2/4 \rceil\} : \ \xi^{z,kn^2}_{(k+1)n^2} \neq \underline{0} \text{ but } \xi^{z,kn^2}_{n^4} = \underline{0}\right] \to 0 \text{ as } n \to \infty. \end{equation} For a fixed $z \in T$ and integer $k$, we have by part (3) of Proposition~\ref{p:expalpha} that $$ P\left[ \xi^{z,kn^2}_{(k+1)n^2} \neq \underline{0} \text{ but } \xi^{z,kn^2}_{(k+1)n^2} \neq \xi^{\underline{1},kn^2}_{(k+1)n^2} \right] \le e^{-n^{\alpha/2}}, $$ so the probability of the event \begin{equation} \label{C2b} \forall z \in T, k \in \{0,\ldots, \lceil n^2/4 \rceil\} : \ \xi^{z,kn^2}_{(k+1)n^2} = \underline{0} \text{ or } \xi^{z,kn^2}_{(k+1)n^2} = \xi^{\underline{1},kn^2}_{(k+1)n^2} \end{equation} tends to $1$ as $n$ tends to infinity. On the other hand, with probability tending to $1$, $\xi^{\underline{1}}$ survives up to time $n^4$; moreover, $\xi^{\underline{1}}_{(k+1)n^2}$ is clearly dominated by $\xi^{\underline{1},kn^2}_{(k+1)n^2}$, for any $k \le \lceil n^2/4 \rceil$. On the conjunction of this event and the one described in \eqref{C2b}, we thus have $$ \forall z \in T, k \in \{0,\ldots, \lceil n^2/4 \rceil\} : \ \xi^{z,kn^2}_{(k+1)n^2} = \underline{0} \text{ or } \xi^{z,kn^2}_{n^4} \ge \xi^{\underline{1}}_{n^4} \neq \underline{0}, $$ and this proves \eqref{C2a}. \end{proof} \begin{lemma} \label{l:attract} For any $s > 0$, one has $$ P\left[\uptau \le s \right] \le \frac{s}{E[\uptau]}, $$ where we recall that $\uptau$ is the extinction time of the contact process started with full occupancy. Moreover, there exists a constant $C$ such that for any $T \in \Lambda(n,d)$, $E[\uptau] \le e^{Cn}$. \end{lemma} \begin{proof} Attractiveness of the contact process implies that for any $r \in {\mathbb N}$, \begin{equation} \label{e:prob00} P\left[ \uptau \ge r s \right] \le \left(P\left[ \uptau \ge s \right]\right)^r. \end{equation} Since \begin{equation} \label{e:prob1} E[\uptau] \le s \sum_{r = 0}^{+\infty} P\left[ \uptau \ge r s \right] \le \frac{s}{1-P\left[ \uptau \ge s \right]}, \end{equation} it follows that $$ P\left[ \uptau \ge s \right] \ge 1 - \frac{s}{E[\uptau]}, $$ which proves the first part. For the second part, note that one can find $C$ such that \begin{equation} \label{e:prob01} P\left[\uptau \ge 1\right] \le 1-e^{-Cn} \end{equation} uniformly over $T \in \Lambda(n,d)$. The conclusion thus follows from \eqref{e:prob1}. \end{proof} \begin{lemma} \label{l:prob0} For any $n$ large enough, any $T \in \Lambda(n,d)$ and any $t \ge 0$, one has \begin{equation} \label{e:prob0} P\left[ \zeta_t = \underline{0} \right] \le \frac{6 n^6}{E[\uptau]}. \end{equation} \end{lemma} \begin{proof} Using Lemma~\ref{l:attract} with $s = n^6$, it is clear that \eqref{e:prob0} holds for any $n$ and any $t \le n^6$. Note moreover that, writing $\uptau^\nu$ for the extinction time of the contact process started from the distribution $\nu$ defined in Remark~\ref{r:defnu}, we have \begin{equation} \label{e:prob2} P\left[ \uptau^\nu \le n^6 -n^4 \right] = P\left[ \uptau \le n^6 \ | \ \text{Harris sys. trustworthy on } [0,n^4] \right] \le \frac{2n^6}{E[\uptau]}, \end{equation} where we used Lemma~\ref{l:trust} in the last step. Suppose now that $t > n^6$, and consider the event ${\mathcal E}$ defined by $$ \exists s \in (t - n^6/2, t - n^6/4] \mbox{ such that } \zeta_s = \underline 0. $$ We write $\tilde{\uptau}$ for the first $s \ge t-n^6/2$ such that $\zeta_s = \underline{0}$. On the event ${\mathcal E}$, we have $\tilde{\uptau} \le t - n^6/4$. The event ${\mathcal E}'$ defined by \begin{multline*} \forall k \in {\mathbb N}, k < \lfloor n^2/4 \rfloor , \\ \text{Harris sys.
not trustworthy on } [\tilde{\uptau} + k n^4, \tilde{\uptau} + (k+1) n^4] \text{ or } Y_{\tilde{\uptau} + k n^4} \neq 1 \end{multline*} has probability smaller than $(3/4)^{\lfloor n^{2}/4 \rfloor}$ by Lemma~\ref{l:trust}. When ${\mathcal E}$ and $({\mathcal E}')^c$ both hold, the process $\zeta$ becomes active at some time $t_A \in [t-n^6/2, t]$, and is distributed according to $\nu$ at this time. Hence, \begin{eqnarray*} P\left[\zeta_t = \underline{0}, {\mathcal E} \right] & \le & P\left[\zeta_t = \underline{0}, {\mathcal E}, ({\mathcal E}')^c\right] + P\left[{\mathcal E}'\right] \\ & \le & P\left[\uptau^\nu \le n^6/2\right] + P\left[{\mathcal E}'\right]. \end{eqnarray*} Since $P[{\mathcal E}'] \ll 1/E[\uptau]$ and in view of \eqref{e:prob2}, we have indeed \begin{equation} \label{e:prob3} P\left[\zeta_t = \underline{0}, {\mathcal E} \right] \le \frac{3 n^6}{E[\uptau]} \end{equation} for any large enough $n$. It thus remains to bound \begin{equation} \label{e:prob4} P\left[\zeta_t = \underline{0}, {\mathcal E}^c \right]. \end{equation} Let $k$ be the first positive integer such that $Y_{t-n^6/2+kn^4} = 1$ and the Harris system is trustworthy on $$ [a_k,b_k] \stackrel{\text{(def)}}{=} [t - n^6/2 + k n^4, t - n^6/2 + (k+1) n^4]. $$ For the same reason as above, we may assume that $[a_k,b_k] \subseteq [t-n^6/2,t-n^6/4]$. Since on the event ${\mathcal E}^c$, the process $\zeta$ remains active on the time interval $[a_k,b_k]$, and considering the definition of trustworthiness and of the Phoenix process, we know that $\zeta_{b_k} = \xi^{\underline{1},a_k}_{b_k}$, and moreover, the latter random variable is distributed according to $\nu$.
Hence, up to a negligible event, the probability in \eqref{e:prob4} is bounded by $$ P\left[ \uptau^\nu \le n^6/2 \right], $$ and thus, using \eqref{e:prob2} again, \begin{equation} \label{e:prob5} P\left[\zeta_t = \underline{0}, {\mathcal E}^c \right] \le \frac{3 n^6}{E[\uptau]}. \end{equation} The conclusion now follows, combining \eqref{e:prob3} and \eqref{e:prob5}. \end{proof} \begin{lemma} \label{transmission} Let $T \in \Lambda(n,d)$ and $x \in T$. Define recursively $\gamma_0 = 0$ and, for any $i \in {\mathbb N}$, $$ \gamma_{i+1} = \inf\{t \ge \gamma_i + 2 n^2 : \xi_t(x) = 1\} \quad (+ \infty \text{ if empty}). $$ For $n$ large enough, we have $$ P\left[ \gamma_{n^2/8} > n^4/2 \ | \ \xi_{n^4/2} \neq \underline{0}\right] \le e^{-n^{2}}. $$ \end{lemma} \begin{proof} In view of part (1) of Proposition~\ref{cpinterval}, for any non-empty $A \subseteq T$, we have $$ P\left[ \exists s \le \frac{n}{\overline{c}_1} : \xi^A_s(x) = 1 \right] \ge \overline{c}_1. $$ Let ${\mathcal F}_i$ be the $\sigma$-field generated by $\{\xi_t, t \le \gamma_i\}$. By induction and the Markov property, we can thus show that for any $k \in {\mathbb N}$, $$ P\left[\gamma_{i+1}-(\gamma_i+2 n^2) \ge \frac{kn}{\overline{c}_1},\ \xi_{\gamma_i+2n^2+(k-1)n/\overline{c}_1} \neq \underline{0} \ | \ {\mathcal F}_i \right] \le (1-\overline{c}_1)^{k}. $$ Hence, \begin{eqnarray*} P\left[ \gamma_{n^2/8} > n^4/2, \ \xi_{n^4/2} \neq \underline{0}\right] & = & P\left[ \sum_{i = 0}^{n^2/8-1}\gamma_{i+1}-(\gamma_i+2 n^2) > n^4/4 , \ \xi_{n^4/2} \neq \underline{0}\right] \\ & \le & P\left[\sum_{i = 0}^{n^2/8-1} B_i n /\overline{c}_1 > n^4/4 \right], \end{eqnarray*} where $(B_i)$ are independent geometric random variables of parameter $1-\overline{c}_1$. 
For $\lambda > 0$ small enough, we have $$ e^{\phi(\lambda)} \stackrel{\text{(def)}}{=} E[e^{\lambda B_i}] < +\infty, $$ and we thus obtain $$ P\left[\sum_{i = 0}^{n^2/8-1} B_i > \overline{c}_1n^3/4 \right] \le \exp\left(\phi(\lambda) n^2/8 - \lambda \overline{c}_1n^3/4\right), $$ which, together with part (1) of Proposition~\ref{p:expalpha}, proves the claim. \end{proof} \begin{proposition} \label{p:coupling} For $n$ large enough, let $T \in \Lambda(n,d)$ be split into two subtrees $T_1,T_2$ as described by Lemma~\ref{lem1}. Define the process $(\tilde{\zeta}_t)_{t \ge 0}$ by $$ \tilde{\zeta}_t = \zeta_{T_1,t} \cup \zeta_{T_2,t} \qquad (t \ge 0), $$ where $\zeta_{T_1}$ and $\zeta_{T_2}$ are Phoenix processes defined on $T_1$ and $T_2$ respectively, using the Harris system on $T$ together with two independent families of auxiliary random variables, independent of the Harris system. One has $$ P\left[ \forall t \le \uptau, \ \xi_t \ge \tilde{\zeta}_t \right] \ge 1-e^{-n^{3/2}}. $$ \end{proposition} \begin{proof} Let $(\sigma_i)_{i \ge 1}$ be the sequence of (stopping) times when the process $\zeta_{T_1}$ becomes quiescent. We start by showing that, for any $i$, \begin{equation} \label{activation} P\left[\xi_{\sigma_i+n^4} < \zeta_{T_1,\sigma_i+n^4}, \ \xi_{\sigma_i+n^4} \neq \underline{0}\right] \le e^{-n^{7/4}}. \end{equation} For some arbitrary $x \in T_1$, consider the stopping times introduced in Lemma~\ref{transmission}, but started with $\gamma_0 = \sigma_i$, and let $N$ be the largest index satisfying $\gamma_N \le \sigma_i + n^4/2$. By Lemma~\ref{transmission}, we have \begin{equation} \label{e:p:coupling0} P\left[N < n^2/8, \ \xi_{\sigma_i+n^4} \neq \underline{0}\right] \le e^{-n^2}.
\end{equation} Moreover, part (1) of Proposition~\ref{p:expalpha} ensures that, for any $j$, \begin{equation} \label{e:p:coupling} P\left[\xi^{x,\gamma_j}_{T_1} \text{ survives up to time } \gamma_j + 2 n^2 \ | \ \gamma_j < +\infty \right] \ge \overline{c}_2, \end{equation} where $\xi^{x,\gamma_j}_{T_1}$ denotes the contact process restricted to $T_1$ started with $x$ occupied at time $\gamma_j$. We introduce the stopping times $\tilde{\gamma}_j$ to deal with the fact that $\gamma_j$ may be infinite. Let $\td{\jmath}$ be the largest index such that $\gamma_{\td{\jmath}} \le \sigma_i + n^4/2$. We let $\tilde{\gamma}_j = \gamma_j$ if $j\le \td{\jmath}$, $\tilde{\gamma}_{\td{\jmath}+1} = \sigma_i + n^4/2 + 2 n^2$, and then recursively, $\tilde{\gamma}_{j+1}-\tilde{\gamma}_j = 2 n^2$. We have \begin{multline} \label{e:p:coup} P\left[ \forall j \le N, \ \xi^{x,\gamma_j}_{T_1, \gamma_j + 2 n^2} = \underline{0}, \ \xi_{\sigma_i+n^4} \neq \underline{0}\right] \\ \le P\left[N < n^2/8,\ \xi_{\sigma_i+n^4} \neq \underline{0}\right] + P\left[ \forall j \le n^2/8, \ \xi^{x,\tilde{\gamma}_j}_{T_1, \tilde{\gamma}_j + 2 n^2} = \underline{0}\right]. \end{multline} Since for any $j$, we have $\tilde{\gamma}_{j+1} \ge \tilde{\gamma}_j + 2 n^2$, the events indexed by $j$ appearing in the second probability on the r.h.s.\ of \eqref{e:p:coup} are independent. Using also \eqref{e:p:coupling0} and \eqref{e:p:coupling} (with $\gamma_j$ replaced by $\tilde{\gamma}_j$), we thus arrive at \begin{equation} \label{e:p:coupling1} P\left[ \forall j \le N, \ \xi^{x,\gamma_j}_{T_1, \gamma_j + 2 n^2} = \underline{0}, \ \xi_{\sigma_i+n^4} \neq \underline{0}\right] \le e^{-n^2} + (1-\overline{c}_2)^{n^2/8}. \end{equation} We now show that \begin{equation} \label{e:p:coupling2} \exists j \le N, \ \xi^{x,\gamma_j}_{T_1, \gamma_j + 2 n^2} \neq \underline{0} \ \Rightarrow \ \xi_{\sigma_i+n^4} \ge \zeta_{T_1,\sigma_i+n^4}.
\end{equation} Indeed, in order for $\zeta_{T_1, \sigma_i+n^4}$ to be different from $\underline{0}$, it must be that the Harris system restricted to $T_1$ is trustworthy on $[\sigma_i,\sigma_i+n^4]$. In this case, by the definition of trustworthiness, if there exists some $j \le N$ such that $\xi^{x,\gamma_j}_{T_1, \gamma_j + 2 n^2} \neq \underline{0}$, then it must be that $$ \xi^{x,\gamma_j}_{T_1,\sigma_i+n^4} = \xi^{\underline{1},\sigma_i}_{T_1,\sigma_i + n^4} \ge \zeta_{T_1,\sigma_i+n^4} $$ (the last two being equal when $Y_{\sigma_i} = 1$, otherwise $\zeta_{T_1,\sigma_i+n^4} = \underline{0}$). Since $\xi_{\gamma_j}(x) = 1$, it is clear that $\xi_{\sigma_i+n^4} \ge \xi^{x,\gamma_j}_{T_1,\sigma_i+n^4}$, thus justifying \eqref{e:p:coupling2}. This and \eqref{e:p:coupling1} prove \eqref{activation}. In order to conclude, we first show that $\uptau$ cannot be too large. It follows from \eqref{e:prob00} and \eqref{e:prob01} that \begin{equation} \label{e:p:coupling3} P\left[\uptau \ge n^4 e^{C n}\right] \le e^{-n^{2}}, \end{equation} where $C$ can be chosen uniformly over $T \in \Lambda(n,d)$. If $\zeta_{T_1}$ is active at time $t$ and $\xi$ dominates $\zeta_{T_1}$ at this time, then the domination is preserved during the whole phase of activity, since $\zeta_{T_1}$ is driven by a subset of the Harris system driving the evolution of $\xi$. When $\zeta_{T_1}$ becomes quiescent, the domination is obviously preserved. As a consequence, if the domination of $\zeta_{T_1}$ by $\xi$ is broken at some time, it must be when $\zeta_{T_1}$ turns from quiescent to active. We thus have $$ P\left[ \exists t \le \uptau, \ \xi_t < \zeta_{T_1,t} \right] = P\left[ \exists i : \xi_{\sigma_i+n^4} < \zeta_{T_1,\sigma_i+n^4}\text{ and } \xi_{\sigma_i+n^4} \neq \underline{0} \right]. $$ Since $\sigma_{i+1} - \sigma_i \ge n^4$, on the event $\uptau \le n^4 e^{C n}$, there are at most $e^{C n}$ times when $\zeta_{T_1}$ turns from quiescent to active.
Using \eqref{activation}, we thus obtain $$ P\left[ \exists t \le \uptau, \ \xi_t < \zeta_{T_1,t} \right] \le P\left[\uptau \ge n^4 e^{C n}\right] + e^{C n} e^{-n^{7/4}}. $$ The proposition is now proved, using \eqref{e:p:coupling3} together with the fact that $$ P\left[ \exists t \le \uptau, \ \xi_t < \tilde{\zeta}_{t} \right] \le P\left[ \exists t \le \uptau, \ \xi_t < \zeta_{T_1,t} \right] + P\left[ \exists t \le \uptau, \ \xi_t < \zeta_{T_2,t} \right]. $$ \end{proof} \begin{corollary} \label{c:coupling} For $n$ large enough, let $T \in \Lambda(n,d)$ be split into two subtrees $T_1,T_2$ as described by Lemma~\ref{lem1}. We have $$ E[\uptau_T] \ge n^{-9} \ E\left[\uptau_{T_1}\right] E\left[\uptau_{T_2}\right]. $$ \end{corollary} \begin{proof} Let $\tilde{\sigma}$ be the first time when $\zeta_{T_1}$ and $\zeta_{T_2}$ are simultaneously quiescent. By Proposition~\ref{p:coupling}, for any $t \ge 0$, we have \begin{equation} \label{e:c0} P[\uptau \le t] \le P[\tilde{\sigma} \le t] + e^{-n^{3/2}}. \end{equation} In view of Remark~\ref{r:randomization}, at time $\tilde{\sigma}$, both $\zeta_{T_1}$ and $\zeta_{T_2}$ remain quiescent for a time $n^4$ with probability at least $1/2$ (one of them just becomes quiescent at time $\tilde{\sigma}$, while the other stays quiescent for a time $n^4$ with probability at least $1/2$). As a consequence, for any $t \ge 0$, $$ P\left[\tilde{\sigma} \le t\right] \le \frac{2}{n^4} \int_0^{t+n^4} P\left[\tilde{\zeta}_s = \underline{0}\right] \ \mathrm d s. $$ Since $\zeta_{T_1}$ and $\zeta_{T_2}$ are independent, and using Lemma~\ref{l:prob0}, we thus obtain \begin{equation} \label{e:c1} P[\tilde{\sigma} \le t] \le \frac{2}{n^4} (t+n^4) \frac{(6n^6)^2}{E[\uptau_{T_1}]E[\uptau_{T_2}]} = \frac{72 n^8(t+n^4)}{E[\uptau_{T_1}]E[\uptau_{T_2}]}. \end{equation} Let us now fix $$ \tilde{t} = 2 \frac{E[\uptau_{T_1}]E[\uptau_{T_2}]}{n^9}.
$$ Since we know from part (1) of Proposition~\ref{p:expalpha} that $\tilde{t}$ grows faster than any power of $n$, \eqref{e:c1} gives us that for $n$ large enough, $$ P\left[\tilde{\sigma} \le \tilde{t}\right] \le 1/4. $$ In view of \eqref{e:c0}, we thus obtain $$ P\left[\uptau \le \tilde{t}\right] \le 1/4 + e^{-n^{3/2}} \le 1/2, $$ which implies that $E[\uptau] \ge \tilde{t}/2$, and thus the corollary. \end{proof} \begin{proof}[Proof of Theorem \ref{thm1main1}] Let $\rho = 1+1/d$, and consider, for any $r \in {\mathbb N}$, the quantity $$ V_{r}= \inf_{n \in (\rho^{r-1}/d, \rho^r]} \ \inf_{T \in \Lambda(n,d)} \frac{\log E[\uptau(T)]}{|T|} $$ Theorem~\ref{thm1main1} will be proved if we can show that $\liminf_{r \to \infty} V_r > 0$. Let $r$ be a positive integer, and $T$ be a tree of degree bounded by $d$ and whose size lies in $\left(\rho^{r},\rho^{r+1} \right]$. Since $1-\rho^{-1} = 1/(d+1) < 1/d$ and in view of Lemma~\ref{lem1}, for $r$ large enough, we can split up $T$ into two subtrees $T_1$, $T_2$ such that $$ |T_{1}|, |T_{2}| \ge |T|(1-\rho^{-1}). $$ As a consequence, $$ |T_1|, |T_2| \ge \rho^{r-1}/d, $$ and also $$ | T_{1}| \leq |T| - |T_2| \leq |T|\left( 1-(1-\rho^{-1}) \right) \le \rho^r, $$ with the same inequality for $T_2$. Corollary~\ref{c:coupling} tells us that for $r$ large enough, $$ E[\uptau(T)] \ge \frac{1}{|T|^{9}} E[\uptau(T_1)] \ E[\uptau(T_2)], $$ that is to say, $$ \log E[\uptau(T)] \ge \log E[\uptau(T_1)] + \log E[\uptau(T_2)] - \log |T|^{9}. $$ Observing that $$ \log E[\uptau(T_1)] + \log E[\uptau(T_2)] \ge V_r (|T_1| + |T_2|) = V_r |T|, $$ we arrive at \begin{equation} \label{superadd} \frac{\log E[\uptau(T)]}{|T|} \ge V_r - \frac{\log |T|^{9}}{|T|}. \end{equation} Part (1) of Proposition~\ref{p:expalpha} ensures that for $r$ large enough, one has \begin{equation} \label{positivvr} V_r \ge \frac{c}{\rho^{r(1-\alpha)}} \end{equation} for some constant $c > 0$. 
Recalling that $|T| \le \rho^{r+1}$, we thus have $$ \frac{\log |T|^{9}}{|T|} \le \frac{V_r}{\rho^{r\alpha/2}}, $$ and \eqref{superadd} turns into $$ \frac{\log E[\uptau(T)]}{|T|} \ge V_r\left(1-\frac{1}{\rho^{r \alpha/2}}\right), $$ for any large enough $r$ and any tree whose size lies in $(\rho^r,\rho^{r+1}]$. If the size of the tree lies in $(\rho^r/d,\rho^r]$, then the inequality $$ \frac{\log E[\uptau(T)]}{|T|} \ge V_r $$ is obvious, so we arrive at $$ V_{r+1} \ge V_r\left(1-\frac{1}{\rho^{r \alpha/2}}\right). $$ Since $V_r > 0$ for any $r$ large enough by \eqref{positivvr}, and $$ \prod_{r} \left(1-\frac{1}{\rho^{r \alpha/2}}\right) > 0, $$ we have shown that $\liminf_{r \to \infty} V_r > 0$, and this finishes the proof. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm3main3}] Let $c > 0$ be given by Theorem~\ref{thm1main1}, and $T \in \Lambda(n,d)$. We learn from Lemma~\ref{l:attract} that $$ P\left[ \uptau \le e^{cn/2} \right] \le \frac{e^{cn/2}}{E[\uptau]}, $$ which, by our choice of $c$, is smaller than $e^{-cn/4}$ for $n$ large enough, uniformly over $T \in \Lambda(n,d)$. \end{proof} \section{Discrete time growth process} \label{s:discrete} \setcounter{equation}{0} For comparison purposes, it is sometimes useful to consider a discrete time analogue of the contact process; we will need to consider such a process in the next section. Though many different definitions may be proposed, we have decided on the following. Fix $p \in (0,1)$ and let $\{I^r_{x,y}: r \in \{1, 2, \ldots\},\;x,y \in {\mathbb Z},\; |x-y|\leq 1\}$ be a family of independent Bernoulli($p$) random variables. Fix $\eta_0 \in \{0,1\}^{\mathbb Z}$ and, for $r\geq 0$, let $$\eta_{r+1}(x) = \mathds{1}\{\exists y: |x-y| \leq 1,\; \eta_r(y) = 1,\; I^r_{y,x} = 1\}.$$ The following is standard. 
\begin{proposition} The above process is attractive and there exists $p_{c}^{(1)} < 1$ so that for $p > p^{(1)}_{c}$ the process survives in the sense that, for any $\eta_0 \neq \underline{0}$, $$P\left[\eta_r \neq \underline{0} \hspace{0.3cm} \forall r\right] > 0$$ and, if $\eta_0 = \underline{1}$, then $\eta_{r}$ decreases stochastically to a nonzero limit. \end{proposition} This process generalizes to locally finite graphs, just as does the contact process. In particular it will have the self-duality property, and we can easily follow through the arguments of the preceding sections to arrive at \begin{proposition} \label{discrete} \noindent Let $d \geq 2$ and $p > p_c^{(1)}$. There exists $c>0$ such that $$\inf_{T\in \Lambda(n,d)} \; P\left[\uptau_T \geq e^{cn}\right] \longrightarrow 1 \text{ as } n \to \infty.$$ \end{proposition} \noindent (again, $\uptau_T$ is the extinction time for the process on $T$ started from full occupancy). \vspace{0.4cm} \section{Extinction time on Newman-Strogatz-Watts random graphs} \label{s:nsw} Let us briefly recall the definition of the NSW random graph on $n$ vertices, $G^n = (V^n, E^n)$. We take $V^n = \{1, 2, \ldots, n\}$ and suppose given a probability $p(\cdot)$ on the positive integers greater than or equal to $3$ with the property that, for some $a > 2$ and $c_0 > 0$, $p(m) \sim \frac{c_0}{m^a}$. The NSW graph $G^n$ is then generated by choosing the degrees $d_1, d_2, \ldots, d_n$ of the $n$ vertices according to i.i.d.\ random variables of law $p(\cdot)$ conditioned on $\sum_{x=1}^n d_x$ being even. Given this realization, we choose the edges by first giving each vertex $x$ a number $d_x$ of half-edges, and then matching up the half-edges uniformly among all possible matchings, so that, say, a half-edge for vertex $x$ matched with a half-edge of vertex $y$ becomes an edge between $x$ and $y$.
Of course, loops and parallel edges may occur (though as noted in \cite{CD}, if $a > 3 $ the probability of nonexistence of both is bounded away from zero). In this section we consider the contact process with small parameter $\lambda > 0$ on NSW random graphs and prove Theorem \ref{thm1cd1}. As mentioned in the Introduction, we will assume that $a > 3$. Instead of choosing a matching for all half-edges at once, we can also match them in a sequence of steps, so that, in each step, we are free to choose one of the half-edges involved in the matching, and the other is chosen at random. To be more precise, let us introduce some terminology. A \textit{semi-graph} $g = (V^n, \mathcal{H},\mathcal{E})$ is a triple consisting of the set of vertices $V^n$, a set of half-edges $\mathcal{H}$ and a set of edges $\mathcal{E}$ (of course, if $\mathcal{H} = \varnothing$, then $g$ is a graph). The degree of a vertex in a semi-graph is the number of its half-edges plus the number of edges that are incident to it. Given two half-edges $h, h' \in \mathcal{H}$, we will denote by $h+h'$ a new edge produced by ``attaching'' $h$ and $h'$. We will now inductively define a finite sequence of semi-graphs $g_0, g_1, \ldots, g_k$ so that $g_k$ has the distribution of a NSW graph. $g_0 = (V^n, \mathcal{H}_0, \mathcal{E}_0)$ is defined with $\mathcal{E}_0 = \varnothing$ and so that each vertex $x$ has $d_x$ half-edges, where $(d_1,\ldots,d_n)$ is chosen at random as described in the previous paragraph. Assume $g_i = (V^n, \mathcal{H}_i, \mathcal{E}_i)$ is defined and has half-edges. Fix an arbitrary half-edge $h \in \mathcal{H}_i$ and randomly choose another half-edge $h'$ uniformly in $\mathcal{H}_i -\{h\}$. Then put $g_{i+1} = (V^n, \mathcal{H}_{i+1}, \mathcal{E}_{i+1})$, where $\mathcal{H}_{i+1} = \mathcal{H}_i - \{h, h'\}$ and $\mathcal{E}_{i+1} = \mathcal{E}_i \cup \{h+h'\}$. When no half-edges are left, we are done, and the graph thus obtained is a NSW random graph. 
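The uniform matching of half-edges just described is the standard configuration-model construction, and it has the same distribution as pairing off a uniform random permutation of the half-edges. As an illustrative sketch (not part of the paper's argument; the exponent, the minimum degree $3$ and the degree cutoff are arbitrary choices):

```python
import random

def sample_degrees(n, a=3.5, dmin=3, dmax=200, rng=None):
    """I.i.d. degrees with P[d = m] proportional to m^(-a) for dmin <= m <= dmax,
    resampled until the total degree is even (dmax is an illustrative cutoff)."""
    rng = rng or random.Random(0)
    support = range(dmin, dmax + 1)
    weights = [m ** (-a) for m in support]
    while True:
        degs = rng.choices(support, weights=weights, k=n)
        if sum(degs) % 2 == 0:
            return degs

def match_half_edges(degs, rng=None):
    """Uniform matching: give vertex x exactly degs[x] half-edges, shuffle the
    list of half-edges uniformly and pair off consecutive entries (loops and
    parallel edges are allowed, as in the text)."""
    rng = rng or random.Random(1)
    half_edges = [x for x, d in enumerate(degs) for _ in range(d)]
    rng.shuffle(half_edges)
    return [(half_edges[i], half_edges[i + 1]) for i in range(0, len(half_edges), 2)]
```

Shuffling and pairing consecutive entries is equivalent to the sequential procedure of the text, since at each step the partner of the next unmatched half-edge is uniform among the remaining ones.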
Often, instead of updating the sets each time, say from $\mathcal{H}_i, \mathcal{E}_i$ to $\mathcal{H}_{i+1}, \mathcal{E}_{i+1}$ as above, we will keep the notation $g = (V^n, \mathcal{H}, \mathcal{E})$ and say (for example) that $h, h'$ are deleted from $\mathcal{H}$ and $h+h'$ is added to $\mathcal{E}$. We will be particularly interested in vertices with degree in $\left[ S,\; 2S\right]$, where $S = \frac{M}{\lambda^2} \log^{2} \left(\frac{1}{\lambda}\right)$ and $M$ is a large universal constant to be chosen later. We designate by $I$ the set of vertices of $V^n$ whose degree lies in this interval. In order to prove Theorem \ref{thm1cd1}, by attractiveness of the contact process, it is sufficient to show that, with high probability as $n$ tends to infinity, $G^n$ contains a subgraph on which the contact process survives for the desired amount of time. The plan is to show that for some $\delta > 0$ (depending on $\lambda$), with high probability as $n \rightarrow \infty$, we can find a subgraph $G^{n \prime} = (V^{n\prime}, E^{n\prime})$ of $G^n$ that is a tree with certain good properties and whose vertex set contains $\delta n$ vertices of $I$. Let $I'$ be the set of vertices of $V^{n\prime}$ of degree (with respect to $E^{n\prime}$) in the set $[S, 2S]$. Let us say that, for $x, y \in I'$, $x \stackrel{*}{\sim} y$ if $x$ and $y$ are connected to each other in $G^{n \prime}$ by a path which, apart from $x$ and $y$, contains no elements of $I'$ and which is of length less than $20a \log\left(\frac{1}{\lambda }\right)$. We wish to compare the contact process on $G^{n \prime }$ to a discrete time growth process (as in Section 4) on a tree $T$ with vertex set $I'$ and edge set $\{\{x,y\}: x \stackrel{*}{\sim} y\}$.
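The relation $\stackrel{*}{\sim}$ can be computed by a breadth-first search from each vertex of $I'$ that is not allowed to pass through other vertices of $I'$. A minimal sketch (illustrative only; for a general graph the resulting pair set need not form a tree, whereas the algorithm below is designed so that it does):

```python
from collections import deque

def star_pairs(adj, special, max_len):
    """Pairs {x, y} of `special` vertices joined by a path of length < max_len
    whose interior vertices all lie outside `special` (BFS from each x)."""
    pairs = set()
    for x in special:
        dist = {x: 0}
        queue = deque([x])
        while queue:
            v = queue.popleft()
            if dist[v] + 1 >= max_len:   # any further vertex would be too far
                continue
            for w in adj[v]:
                if w in dist:
                    continue
                dist[w] = dist[v] + 1
                if w in special:
                    pairs.add(frozenset((x, w)))   # record, but do not search through w
                else:
                    queue.append(w)
    return {tuple(sorted(p)) for p in pairs}
```

On a path $0-1-2-3-4$ with `special = {0, 2, 4}`, the routine returns the pairs $(0,2)$ and $(2,4)$ but not $(0,4)$, since any path joining $0$ and $4$ passes through $2$.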
We wish to have \vskip 3mm \noindent \textit{Property A:} $T$ is a tree of degree bounded by 4.\vskip 3mm \noindent \textit{Property B:} every element of $I'$ has at least $\frac{S}{4}$ neighbors of degree 1.\vskip 3mm Property B ensures that, around each site of $I'$, the infection persists for a long time. This guarantees that our discrete time growth process has infection rate as large as desired. Together with Property A, this allows us to apply Proposition \ref{discrete} to the growth process and conclude that its extinction time is very large. We then conclude that the extinction time of the contact process on $G^{n\prime}$ is also very large. We will find the subgraph $G^{n\prime}$ with the aid of an algorithm whose starting point will be the semi-graph $g_0$ defined above. Before we present the algorithm, let us make some remarks about the random degree sequence $d_1, \ldots, d_n$. Let $\mu = \sum_{m=1}^\infty m\cdot p(m)$. If the degrees are given by $d_1, \ldots, d_n$ and we choose a half-edge uniformly at random in $g_0$, then the probability that the corresponding vertex has degree $m$ is $$\frac{m \cdot |\{x:d_x = m\}|}{\sum_x d_x} \to \frac{m \cdot p(m)}{\mu}\text{ as } n \to \infty.$$ The probability $q(m) = m\cdot p(m)/\mu$ is called the \textit{size biased distribution}. By our assumption that $p(m) \sim \frac{c_0}{m^a}$, it follows that $q(m) \sim \frac{c_1}{m^{a-1}}$, where $c_1 = \frac{c_0}{\mu}$. If $x$ is large enough, it can be easily verified by comparison with an integral that $$\frac{c_1}{2(a-2)}\cdot x^{-(a-2)} < q([x, 2x]) < \frac{2 c_1}{a-2}\cdot x^{-(a-2)}.$$ We will also need the following facts, whose proofs are omitted.
\begin{lemma} \label{lem5degs} For any small enough $\lambda > 0$, there exists $\epsilon > 0$ such that, with probability tending to 1 as $n \to \infty$, for any $A \subseteq V^n$ with $|A| \leq \epsilon n$ we have\medskip\\ $(i.)\;\displaystyle{\frac{c_0}{2(a-1)S^{a-1}} < \frac{|I\cap A^c|}{n} < \frac{2c_0}{(a-1)S^{a-1}} };$\medskip\\ $(ii.)\;\displaystyle{\frac{\sum_{x \in A}\;d_x}{\sum_{x \in V^n}\; d_x} < \frac{1}{8}};$\medskip\\ $(iii.)\;\displaystyle{\frac{c_1}{2(a-2)S^{a-2}}<\frac{\sum_{x\in I \cap A^c}\; d_x}{\sum_{x \in V^n}\; d_x} < \frac{2c_1}{(a-2)S^{a-2}};}$\medskip\\ $(iv.)\;\displaystyle{\frac{\mu}{2}<\frac{\sum_{x \in A^c} \;d_x}{n} < 2\mu}.$ \end{lemma} The hypothesis that $\lambda$ is small is not problematic, since it is clearly sufficient to prove Theorem \ref{thm1cd1} for $\lambda$ small enough. In what follows, $\lambda$ is fixed and $\epsilon$ is taken corresponding to $\lambda$ as in the lemma. We will often assume that $\lambda$ is small enough, and also that $n$ is large enough, for other desired properties to hold. We will say that a degree sequence $d_1, \ldots, d_n$ is \textit{robust} if it satisfies $(i.), (ii.), (iii.)$ and $(iv.)$. Our algorithm proceeds by matching half-edges, as described in the beginning of this section. We will thus have to deal with the set of half-edges after some matchings have been made, and that is why the robustness property comes into play. Besides matching half-edges, the algorithm also writes labels on edges and vertices. Edges are labeled either $\mathsf{marked}$ or $\mathsf{unmarked}$; the former are included in $G^{n\prime}$ and the latter are not. Vertex labels serve to guide the order of the matchings. The possible vertex labels are: $\mathsf{unidentified}$, $\mathsf{preactive}$, $\mathsf{active}$ and $\mathsf{read}$. An $\mathsf{unidentified}$ vertex is one that has not yet been ``seen'' by the algorithm, that is, none of its half-edges has been matched yet.
If a vertex has any label different from $\mathsf{unidentified}$, then it is said to be identified. The labels $\mathsf{preactive}$ and $\mathsf{active}$ can only be associated to vertices in $I$, and at most one vertex will be $\mathsf{active}$ at a given time. The algorithm repeatedly follows a subroutine called a \textit{pass}. Between two passes, there will be no $\mathsf{active}$ vertices. When a new pass starts, it typically takes a $\mathsf{preactive}$ vertex $\bar x$, turns it into $\mathsf{active}$ and successively explores the graph around $\bar x$ (by performing matchings) until certain conditions are satisfied; then, it labels every vertex that was touched as $\mathsf{read}$ except for the vertices of $I$ that were found; these are labeled $\mathsf{preactive}$ and are activated by future passes. A labeled semi-graph $g=(V^n, \mathcal{H}, \mathcal{E}, \{\ell_x\}_{x\in V^n},\{\ell_e\}_{e\in \mathcal{E}}, \prec)$ is a semi-graph with a label $\ell_x$ attached to each vertex $x$, a label $\ell_e$ associated to each edge $e$ and a total order $\prec$ on the set of $\mathsf{preactive}$ vertices. It is worth remarking that since a pass only does matchings and relabeling, it does not change the degree of any vertex. In particular, the definition of the set $I$ does not change. Let us now define the pass. Obviously, whenever there is an instruction to give a vertex a label, this label replaces the former label of that vertex. \vskip 2mm \noindent\rule[0.5ex]{\linewidth}{1pt} \textbf{The pass\\ Input:} $g = (V^n, \mathcal{H}, \mathcal{E}, \{\ell_x\},\{\ell_e\}, \prec)$ with at least one vertex of $I$ $\mathsf{preactive}$ or $\mathsf{unidentified}$. 
\smallskip \\ \begin{tabular}{p{0.45cm} p{11.6cm}} \textbf{(S1)}& Let $\bar x$ be the $\mathsf{preactive}$ vertex of highest order; if there are no $\mathsf{preactive}$ vertices, let $\bar x$ be an arbitrary $\mathsf{unidentified}$ site of $I$.\\ &$\bullet\;$ If $\bar x$ has less than $\frac{S}{2}$ half-edges (which can only happen if it is $\mathsf{preactive}$), label it $\mathsf{read}$; the pass is then stopped in status $\mathsf{B}_1$.\\&$\bullet\;$ Otherwise, label $\bar x$ $\mathsf{active}$ and proceed to (S2).\end{tabular}\smallskip \\ \begin{tabular}{p{0.45cm} p{11.6cm}} \textbf{(S2)}& Define the set $\mathcal{H}^*$ of \textit{relevant half-edges} of the pass as the set of half-edges attached to the $\mathsf{active}$ vertex. Endow $\mathcal{H}^*$ with a total order $\prec^*$ chosen arbitrarily. Also let $\bar C = 0$; this will be a counting variable whose value will be progressively incremented. Proceed to (S3).\end{tabular}\smallskip\\ \begin{tabular}{p{0.45cm} p{11.6cm}} \textbf{(S3)} &Let $h$ be the half-edge of highest order in $\mathcal{H}^*$. Choose another half-edge $h'$ uniformly at random in $\mathcal{H} - \{h\}$ and let $v'$ be the vertex of $h'$. Delete $h, h'$ from all sets that contain them ($h$ from $\mathcal{H}$ and $\mathcal{H}^*$, $h'$ from $\mathcal{H}$ and possibly $\mathcal{H}^*$) and add $h+h'$ to $\mathcal{E}$; its label is given as follows:\\ &$\bullet\;$ If $v'$ is identified, label $h+h'$ $\mathsf{unmarked}$.\\ &$\bullet\;$ If $v'$ is $\mathsf{unidentified}$ and not in $I$, label $h+h'$ $\mathsf{marked}$. Also label $v'$ $\mathsf{read}$ and add its half-edges to $\mathcal{H}^*$ (note that at this point $h'$ is no longer a half-edge of $v'$) so that they have arbitrary order among themselves but lower order than all half-edges previously in $\mathcal{H}^*$. 
\\ &$\bullet\;$ If $v'$ is $\mathsf{unidentified}$ and in $I$, and if $\bar C < 3$, label $h+h'$ $\mathsf{marked}$, label $v'$ $\mathsf{preactive}$, assign it the lowest order in the set of $\mathsf{preactive}$ vertices and add 1 to $\bar C$.\\ &$\bullet\;$ If $v'$ is $\mathsf{unidentified}$ and in $I$, and if $\bar C \geq 3$, label $h+h'$ $\mathsf{unmarked}$ and label $v'$ $\mathsf{read}$.\\ &Proceed to (S4).\end{tabular}\smallskip\\ \begin{tabular}{p{0.45cm} p{11.6cm}} \textbf{(S4)} &$\bullet\;$ If $\bar x$ still has half-edges, go to (S3).\\&$\bullet\;$ If the last half-edge of $\bar x$ has been deleted in the previous step and now there are less than $\frac{S}{4}$ $\mathsf{marked}$ edges incident to $\bar x$, label $\bar x$ and all vertices that have been identified in the pass (including the $\mathsf{preactive}$ ones) $\mathsf{read}$. The pass is stopped in status $\mathsf{B}_2$.\\&$\bullet\;$ Otherwise go to (S5).\end{tabular} \smallskip\\ \begin{tabular}{p{0.45cm} p{11.6cm}} \textbf{(S5)} &$\bullet\;$ If $\bar C \geq 3$, label $\bar x$ $\mathsf{read}$ and end the pass in status $\mathsf{G}$.\\&$\bullet\;$ Otherwise go to (S6). \end{tabular}\smallskip\\ \begin{tabular}{p{0.45cm} p{11.6cm}} \textbf{(S6)} &$\bullet\;$ If (a) more than $\left(\frac{1}{\lambda} \right)^{2a-3}$ vertices have been identified in the pass, or (b) a path of length $20a\log\left(\frac{1}{\lambda}\right)$ may be formed with $\mathsf{marked}$ edges constructed in the pass, or (c) $\mathcal{H}^*$ is empty, then label $\bar x$ $\mathsf{read}$ and end the pass in status $\mathsf{B}_3$. \\&$\bullet\;$ Otherwise go to (S3).\end{tabular} \smallskip\\ \textbf{Output:} updated labeled semi-graph, status.\\ \noindent\rule[0.5ex]{\linewidth}{1pt} Let us explain in words what happens when a pass ends in status $\mathsf{G}$. 
It first activates the $\mathsf{preactive}$ vertex $\bar x$ of highest order, then starts identifying the neighbors of $\bar x$; when they are all identified, it starts identifying the vertices at distance 2 from $\bar x$, and so on, until it has found three new vertices of $I$, at which point it stops. The ``bad'' outcomes $\mathsf{B}_1$, $\mathsf{B}_2$ and $\mathsf{B}_3$ are included to guarantee that $G^{n\prime}$ has the desired properties mentioned earlier and that the algorithm can successfully continue. $\mathsf{B}_1$ and $\mathsf{B}_2$ are necessary to ensure that the vertices of $G^{n\prime}$ that will be the focal points for the comparison growth process all have large degree. $\mathsf{B}_3$ is necessary to ensure that the focal points are not very far from each other and also that the pass does not delete too many half-edges, thus exploring too much of the graph. We wish the pass to return the status $\mathsf{G}$; the following lemma addresses this. \begin{lemma} \label{lem5goodpass} Assume that the degree sequence is robust, $g$ has less than $\frac{\epsilon}{2} n$ identified vertices before the pass starts and, when the pass defines $\bar x$, this vertex has more than $\frac{S}{2}$ half-edges. Then, the pass ends in status $\mathsf{G}$ with probability larger than $\frac{9}{10}$. \end{lemma} \begin{proof} Start by noticing that the pass identifies at most $(1/\lambda)^{2a-3}$ vertices, and this is much less than $\frac{\epsilon}{2}n$ if $n$ is large. So, at any moment immediately before the pass chooses a half-edge at random, there are less than $\epsilon n$ identified vertices; let $A$ in the definition of robustness be this set of identified vertices.
The chosen half-edge then has probability:\\ (1) larger than $\frac{7}{8}$ of belonging to an $\mathsf{unidentified}$ vertex;\\ (2) larger than $\frac{c_1}{2(a-2)S^{a-2}}$ of belonging to an $\mathsf{unidentified}$ vertex that is in $I$;\\ (3) larger than $\frac{3}{4}$ of belonging to an $\mathsf{unidentified}$ vertex that is not in $I$. By hypothesis, the pass does not end in status $\mathsf{B}_1$. For it to end in status $\mathsf{B}_2$, at least half of the more than $\frac{S}{2}$ half-edges initially present in $\mathcal{H}^*$ must be matched to half-edges of previously identified vertices. By (1) this has probability less than $P[\mathsf{Bin}(S/2,1/8)>S/4]$, which is less than $\frac{1}{40}$ if $\lambda$ is small (and hence $S$ is large). Likewise, we can show using (2) that the probability of the pass ending because of case (a) in (S6) is less than $\frac{1}{40}$. Let us now show that the same holds for (b) in (S6). For $k \geq 1$, let $s_k$ be the set of vertices at distance $k$ from $\bar x$ that are not in $I$ and that the pass identifies; also let $s_0 = \{\bar x\}$. Since every vertex has degree 3 or more, there will be at least $2|s_k|$ half-edges of vertices of $s_k$ for the pass to match (unless it halts before). 
Define the event $$A_k =\left\{\begin{array}{c} \text{the pass deletes all half-edges of vertices of $s_k$; of these, }\\\text{a proportion less than $\frac{5}{8}$ is matched to half-edges of vertices not in $I$}\\\text{ that were (at the time of matching) $\mathsf{unidentified}$} \end{array}\right\},\quad k \geq 0.$$ We have ${\mathbb P}[A_0] \leq P\left[\mathsf{Bin}(S/2, 3/4) < (S/2) \cdot (5/8)\right].$ Given that $A_1, \ldots, A_k$ have not occurred and the pass reaches distance $k+1$ from $\bar x$, the probability of $A_{k+1}$ is less than $$P\left[\mathsf{Bin}\left(\frac{S}{2}\left(\frac{5}{8}\right)^{k+1} 2^{k},\; \frac{3}{4}\right) < \left(\frac{S}{2}\left(\frac{5}{8}\right)^{k+1} 2^{k} \right) \cdot \frac{5}{8}\right].$$ Letting $K = \frac{2a}{\log(5/4)}\log\left(\frac{1}{\lambda}\right) < 20a\log\left(\frac{1}{\lambda}\right)$, the above estimates show that $\displaystyle{{\mathbb P}\left[\cup_{k=0}^K\; A_k\right]}$ vanishes as $\lambda \to 0$. Now, assume that $A_0, \ldots, A_K$ have not occurred and the pass reaches level $K+1$. The probability that less than 3 $\mathsf{unidentified}$ vertices of $I$ are discovered in the matching of half-edges from $s_{K+1}$ is then less than \begin{equation}P\left[\mathsf{Bin}\left(\frac{S}{2}\left(\frac{5}{8}\right)^{K+1} 2^{K},\; \frac{c_1}{2(a-2)S^{a-2}} \right) < 3\right].\label{eqn5Bin3}\end{equation} Note that $$\frac{S}{2}\left(\frac{5}{8}\right)^{K+1} 2^{K} \geq \frac{S}{4} \left(\frac{1}{\lambda}\right)^{\frac{2a}{\log(5/4)}\cdot \log(5/4)} = \frac{S}{4}\left(\frac{1}{\lambda}\right)^{2a}$$ and $$\frac{c_1}{2(a-2)S^{a-2}} = \frac{c_1 \lambda^{2(a-2)}}{M^{a-2}\log^{2(a-2)}(1/\lambda)},$$ so the probability in (\ref{eqn5Bin3}) is very small if $\lambda$ is small. The probability of (c) in (S6) occurring is similarly shown to be less than $\frac{1}{40}$. Putting these facts together, we get the desired result. \end{proof} From now on, we will assume that the degree sequence is robust.
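The binomial tail bounds invoked in this proof can be checked numerically for a concrete value of $S$ (the choice $S=400$ below is illustrative, not taken from the argument):

```python
from math import comb

def binom_tail_ge(n, p, k):
    """Exact P[Bin(n, p) >= k]."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

def binom_tail_lt(n, p, k):
    """Exact P[Bin(n, p) < k]."""
    return 1.0 - binom_tail_ge(n, p, k)

S = 400  # illustrative; in the proof S = (M / lambda^2) log^2(1/lambda) is large
# P[Bin(S/2, 1/8) > S/4]: half of the initial relevant half-edges hitting
# already-identified vertices (the status B_2 estimate)
p_b2 = binom_tail_ge(S // 2, 1 / 8, S // 4 + 1)
# P[Bin(S/2, 3/4) < (S/2)(5/8)]: the estimate for the event A_0
p_a0 = binom_tail_lt(S // 2, 3 / 4, int((S // 2) * 5 / 8))
```

Both probabilities are already far below the $\frac{1}{40}$ threshold at this modest value of $S$, and they decay exponentially as $S$ grows.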
With the definition of the pass at hand, we are now ready to explain the full algorithm. From the degree sequence $d_1, \ldots, d_n$, we construct our initial labeled semi-graph $g$ containing no edges and so that each vertex $x$ is $\mathsf{unidentified}$ and has $d_x$ half-edges. We then run $\epsilon'n$ successive passes, where $\epsilon'=\frac{\epsilon \lambda^{2a-3}}{2}$. Since each pass identifies at most $\lambda^{-(2a-3)}$ vertices, we see that at the beginning of each pass, less than $\frac{\epsilon}{2}n$ vertices will be identified, so the hypotheses of Lemma \ref{lem5goodpass} will hold. Also let $\delta' = \frac{\epsilon'}{2}$. For $1 \leq i < \epsilon' n$, define $$\begin{aligned} &W_i = \text{number of $\mathsf{preactive}$ vertices before pass $i$},\\ &X_i = W_{i+1} - W_{i},\\ &Y_i = \mathds{1}_{\{\text{pass $i$ ends in status $\mathsf{B}_1$}\}}. \end{aligned}$$ The possible values for $X_i$ are $-1, 0, 1, 2$. If $Y_i = 1$, then $X_i = -1$. By the previous lemma, for any $x_1, \ldots, x_{i-1}, y_1, \ldots, y_{i-1}$ we have \begin{equation}\label{eqn5Xaflem}{\mathbb P}\left[\;X_i = 2 \;|\;\{X_j\}_{j=1}^{i-1} = \{x_j\}_{j=1}^{i-1},\; \{Y_j\}_{j=1}^{i-1}=\{y_j\}_{j=1}^{i-1},\;Y_i = 0\;\right] > 9/10.\end{equation} Let us now exclude the possibility that many passes end in status $\mathsf{B}_1$. \begin{lemma} \label{lem5manyB1} $$\mathbb{P}\left[\sum_{i=1}^{\lfloor \epsilon'n\rfloor} Y_i > \frac{1}{10}\lfloor \epsilon'n \rfloor\right] \xrightarrow{n \to \infty} 0.$$ \end{lemma} \begin{proof}We start by remarking that, for $\{Y_i = 1\}$ to occur, there must exist a vertex $x \in I$ such that\\ $\bullet\;$ $x$ is identified before pass $i$;\\ $\bullet\;$ from the moment $x$ is identified to the beginning of pass $i$, more than $S/2$ half-edges of $x$ are chosen for matchings;\\ $\bullet\;$ $x$ is the $\mathsf{preactive}$ vertex of highest order when pass $i$ starts. Let $h_1, \ldots, h_N$ be the sequence of half-edges chosen at random by the algorithm.
As explained above, we have $N \leq \epsilon n$. By $(iii.)$ of Lemma \ref{lem5degs}, regardless of what happened before $h_j$ is chosen, the probability that $h_j$ belongs to a vertex of $I$ is less than $\frac{2c_1}{(a-2)S^{a-2}}$. On the other hand, for $\{\sum Y_i > (1/10)\lfloor \epsilon'n \rfloor\}$ to occur, more than $\frac{1}{10} \lfloor \epsilon'n\rfloor \frac{S}{2}$ half-edges of vertices of $I$ must be chosen. The probability of this is less than $$P\left[\mathsf{Bin}\left(\lfloor \epsilon n \rfloor,\;\frac{2c_1}{(a-2)S^{a-2}} \right) > \frac{1}{10} \lfloor\epsilon'n\rfloor \frac{S}{2}\right].$$ By Markov's inequality, this is less than $$\begin{aligned} \frac{\lfloor \epsilon n \rfloor \cdot \frac{2c_1}{(a-2)S^{a-2}}}{\frac{1}{10} \lfloor\epsilon'n\rfloor \frac{S}{2}} &= C\frac{\epsilon}{\epsilon'} \frac{1}{S^{a-1}} = C \frac{1}{\lambda^{2a-3}} \frac{\lambda^{2(a-1)}}{M^{a-1}\log^{2(a-1)}(1/\lambda)} = C' \frac{\lambda}{\log^{2(a-1)}(1/\lambda)}, \end{aligned}$$ where $C, C'$ are constants that do not depend on $\lambda$ or $n$. The above can be made as small as desired by taking $\lambda$ small. \end{proof} \begin{proposition}$\displaystyle{{\mathbb P}\left[W_{\lfloor \epsilon'n \rfloor} > \delta' n\right] \xrightarrow{n\to\infty} 1}.$ \end{proposition} \begin{proof} We start by giving a random mapping representation of the random variables $X_1, \ldots, X_{\lfloor\epsilon'n\rfloor}, Y_1, \ldots, Y_{\lfloor \epsilon'n \rfloor}$.
Given sequences $\{x_j\}_{j=1}^{i-1},\; \{y_j\}_{j=1}^{i}$ and $s \in (0,1)$, let $$\begin{aligned} &\upphi\left(s,\{x_j\}_{j=1}^{i-1},\{y_j\}_{j=1}^i\right) = m \text{ if }\\ &{\mathbb P}\left[X_i \leq m-1\;|\;\{X_j\}_{j=1}^{i-1} = \{x_j\}_{j=1}^{i-1},\;\{Y_j\}_{j=1}^{i} = \{y_j\}_{j=1}^{i}\right] \\&\qquad \qquad< s \leq {\mathbb P}\left[X_i \leq m\;|\;\{X_j\}_{j=1}^{i-1} = \{x_j\}_{j=1}^{i-1},\;\{Y_j\}_{j=1}^{i} = \{y_j\}_{j=1}^{i}\right] \end{aligned}$$ Likewise, let $$\begin{aligned} &\uppsi\left(s,\{x_j\}_{j=1}^{i-1},\{y_j\}_{j=1}^{i-1}\right)\\&\qquad\qquad=\left|\begin{array}{ll} 0 &\text{ if } s \leq {\mathbb P}\left[Y_i = 0\;|\;\{X_j\}_{j=1}^{i-1} = \{x_j\}_{j=1}^{i-1},\;\{Y_j\}_{j=1}^{i-1} = \{y_j\}_{j=1}^{i-1}\right]\\1&\text{otherwise.}\end{array}\right. \end{aligned}$$ (when we write only $\upphi(s),\; \uppsi(s)$, we mean the functions above for $X_1$ and $Y_1$, with no conditioning in the probabilities that define them). Let $U_1, U_2, \ldots,\;V_1, V_2, \ldots$ be independent random variables with the uniform distribution on $(0,1)$. Set $X_1' =\upphi(U_1),\; Y_1' = \uppsi(V_1)$ and recursively define, for $1 < i < \epsilon'n$, $$\begin{aligned} &Y_{i+1}' = \uppsi\left(V_{i+1}, \{X_j'\}_{j=1}^{i}, \{Y_j'\}_{j=1}^i\right);\\ &X_{i+1}' = \upphi\left(U_{i+1},\{X_j'\}_{j=1}^{i}, \{Y_j'\}_{j=1}^{i+1}\right). \end{aligned}$$ Now, clearly $\{X_i', Y_i'\}_{i=1}^{\lfloor \epsilon'n\rfloor}$ has the same distribution as $\{X_i, Y_i\}_{i=1}^{\lfloor \epsilon'n\rfloor}$. By (\ref{eqn5Xaflem}), we have $\{Y_i' = 0,\;X_i' \neq 2 \} \subseteq \{U_i \leq \frac{1}{10}\}$. 
We can now estimate $$\begin{aligned} {\mathbb P}\left[W_{\lfloor \epsilon'n\rfloor} < \frac{\epsilon'}{2}n\right] &\leq {\mathbb P}\left[\sum_i Y_i > \frac{1}{10}\epsilon'n\right] + P\left[\left|\{i: Y_i =0, X_i \neq 2\} \right| > \frac{1}{5}\epsilon'n\right]\\ &\leq {\mathbb P}\left[\sum_i Y_i > \frac{1}{10}\epsilon'n\right] + P\left[\left|\left\{i \leq \epsilon'n: U_i \leq \frac{1}{10}\right\}\right| > \frac{1}{5}\epsilon'n\right]. \end{aligned}$$ The first of these probabilities vanishes by Lemma \ref{lem5manyB1}, and the second by the Law of Large Numbers. \end{proof} We will define our subgraph $G^{n \prime}$ only on the event $\{W_{\lfloor \epsilon'n \rfloor} > \delta'n\}$. Let $$\begin{aligned} &i_0 = \sup\{i: W_i = 0\};\\ &V^{n\prime} = \text{vertices that have been identified in passes $i_0, \ldots, \lfloor \epsilon'n\rfloor$};\\ &E^{n\prime} = \text{edges that have been constructed by passes $i_0, \ldots, \lfloor \epsilon'n\rfloor$ and are }\mathsf{marked};\\ &G^{n\prime} = (V^{n\prime}, E^{n\prime});\\ &\deg'(x) = |\{e \in E^{n\prime}: x \in e\}|;\\ &I' = \{x \in V^{n\prime}: x \text{ has been activated by a pass after $i_0$ and } \deg'(x) \geq S/4\}. \end{aligned}$$ Let $\delta = \delta'/2$ and note that $|I'| > \delta n$. This follows from the fact that, in the sequence $X_{i_0},\ldots, X_{\lfloor \epsilon'n\rfloor}$, for every $i$ such that $X_i \neq -1$, the vertex that was activated in pass $i$ must be in $I'$. Since there are at least $\frac{\delta'}{2}n$ such $i$'s, we get $|I'| > \delta n$. As we have already mentioned, for $x, y \in I'$ we put $x \stackrel{*}{\sim} y$ if $x$ and $y$ are connected by a path (in $G^{n\prime}$) that contains no other elements of $I'$. If it exists, this path is necessarily unique and has length less than $20a\log(1/\lambda)$. We then define $T$ as the tree with vertex set $I'$ and edge set $\{\{x,y\}: x,y\in I',\;x\stackrel{*}{\sim}y\}$.
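The random mapping representation in the proof above is the usual quantile (inverse-CDF) coupling: each conditional law is realized by plugging an independent uniform variable into its generalized inverse distribution function. A minimal sketch, with an invented law satisfying $P[X=2] > 9/10$ as in \eqref{eqn5Xaflem} (the probabilities below are purely illustrative):

```python
import random

def quantile(s, law):
    """Generalized inverse CDF: `law` lists (value, cumulative probability) in
    increasing order; return the smallest listed value whose cumulative
    probability is at least s."""
    for value, cum in law:
        if s <= cum:
            return value
    return law[-1][0]

# Illustrative conditional law of X_i given Y_i = 0, with P[X = 2] = 0.93 > 9/10:
LAW = [(-1, 0.00), (0, 0.03), (1, 0.07), (2, 1.00)]

rng = random.Random(0)
draws = [quantile(rng.random(), LAW) for _ in range(20_000)]
# Whenever the uniform exceeds 1/10 the outcome is forced to be 2; this is the
# inclusion {X' != 2} contained in {U <= 1/10} exploited in the proof.
```

The point of the construction is that the same uniform variables $U_i$ serve for every conditional law simultaneously, which reduces the estimate on $W_{\lfloor \epsilon' n \rfloor}$ to a Law of Large Numbers for the $U_i$.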
Given a vertex $x \in G^{n\prime}$, we will denote by $S(x)$ the set containing $x$ and its neighbors (in $G^{n\prime}$). If $x, y \in I',\; x\stackrel{*}{\sim} y$, let $b(x,y)$ be the set of vertices of $G^{n\prime}$ in the unique path from $x$ to $y$. Our goal now is to use Proposition \ref{discrete} to show Theorem \ref{thm1cd1}. To this end, we will couple the contact process on $G^{n \prime}$ (starting from full occupancy) and a growth process on $T$ (again starting from full occupancy). This comes down to a coupling between the Harris system on $G^{n \prime}$ and the Bernoulli random variables used to define the growth process. We suppose given the Harris system on $G^n$, which we will regard as a Harris system on $G^{n\prime}$ by ignoring the irrelevant Poisson processes. We will consider the process on time intervals of size $\kappa = e^{30a\log(1/\lambda)}$; this scale is chosen because it is large enough for an infection from a site $x \in I'$ to reach $y \in I'$ with $x \stackrel{*}{\sim} y$, but smaller than the extinction time for the process restricted to $S(x)$. The following lemma and proposition will make this precise. Given a set of vertices $U$ in a graph $\Gamma$ and $\xi \in \{0,1\}^U$, we will say that $U$ is \textit{infested} in $\xi$ if $|\{x\in U: \xi(x) = 1\}| \geq \frac{\lambda}{20}|U|$. In \cite{MVY} the following is proved. \begin{lemma} \label{lem6infested} Given $\lambda > 0$, there exist $\bar c_6$ and $N_0$ such that the following holds. Let $\Gamma$ be a star graph consisting of one vertex $x$ of degree $\frac{N}{\lambda^2}$, where $N \geq N_0$, and all other vertices of degree 1.
Then, for the contact process with parameter $\lambda$ on $\Gamma$,\medskip\\ $(i.)\; \displaystyle{P_{\Gamma, \lambda}\left[\Gamma \text{ is infested in } \xi_1 \left| \xi_0 = \{x\}\right.\right] > 1/2}$;\medskip\\ $(ii.)\; \displaystyle{P_{\Gamma, \lambda}\left[\Gamma \text{ is infested in } \xi_{e^{\bar c_6 N}} \left| \Gamma \text{ is infested in } \xi_0\right.\right] > 1 - e^{-\bar c_6 N}}$. \end{lemma} In the case of the star graph given by a site $x \in I'$ and its neighbors in $G^{n\prime}$, the $N$ of the above lemma is equal to $\lambda^2 \deg'(x) = M\log^2\left(\frac{1}{\lambda}\right)$. The extinction time for the contact process restricted to $S(x)$ and started from full occupancy will then be with high probability larger than $e^{\bar c_6 M \log^2(1/\lambda)} = \left( \frac{1}{\lambda}\right)^{\bar c_6 M\log(1/\lambda)} > \left(\frac{1}{\lambda}\right)^{21a\log(1/\lambda)}$ as long as $M > \frac{21a}{\bar c_6}$. Now, if $x, y \in I'$ and $x\stackrel{*}{\sim} y$, the probability that an infection in $S(x)$ is transmitted along $b(x, y)$, reaches $y$ within time $20a\log\left(\frac{1}{\lambda}\right)$ and then infests $S(y)$ within time 1 is larger than $\frac{1}{2}\left(\lambda/(1+\lambda)\right)^{20a\log(1/\lambda)}$. If $S(x)$ holds the infection for $(1/\lambda)^{21a\log(1/\lambda)}$ units of time, there will be $\displaystyle{\frac{(1/\lambda)^{21a\log(1/\lambda)}}{20a\log(1/\lambda)+1}}$ chances for such a transmission to occur. Comparing the number of chances with the probability of a transmission, we see that a transmission will occur with very high probability. These considerations lead to \begin{proposition} For any $\sigma > 0$, $M$ can be chosen large enough so that the following holds. Assume that $x, y \in I',\; x \stackrel{*}{\sim} y$ and, in $\xi_0,\;S(x)$ is infested. Let $(\xi_t')$ denote the process restricted to $S(x) \cup S(y)\;\cup b(x,y)$. 
Then, with probability larger than $1-\sigma$, both $S(x)$ and $S(y)$ are infested in $\xi_\kappa'$.\end{proposition} Let $r \in {\mathbb N},\;x, y \in I'$ with $x \stackrel{*}{\sim} y$ and let $(\xi_t)$ be the contact process on $G^{n\prime}$ started from full occupancy. Put $I^r_{x,y} = 1$ if one of the following holds:\\ $\bullet\;$ $S(x)$ is infested in $\xi_{\kappa r}$ and $$\left|\left\{\begin{array}{c} z \in S(y): \exists w\in S(x): \xi_{\kappa r}(w) = 1,\\(w,\kappa r) \leftrightarrow (z,\kappa(r+1)) \text{ inside } b(x,y) \end{array}\right\}\right| > \frac{\lambda}{20}|S(y)|;$$ $\bullet\;$ $S(x)$ is not infested in $\xi_{\kappa r}$.\\ Otherwise put $I^r_{x,y} = 0$. The second condition above is present only to guarantee that $I^r_{x,y} = 1$ with high probability regardless of $\xi_{\kappa r}$. As will soon be seen, this artificial assignment will not be problematic. Put $I^r_{x,x} = 1$ if one of the following holds:\\ $\bullet\;$ $S(x)$ is infested in $\xi_{\kappa r}$ and $$\left|\left\{\begin{array}{c}z \in S(x): \exists w \in S(x): \xi_{\kappa r}(w) = 1,\\ (w, \kappa r) \leftrightarrow (z, \kappa(r+1)) \text{ inside }S(x)\end{array}\right\}\right| > \frac{\lambda}{20}|S(x)|;$$ $\bullet\;$ $S(x)$ is not infested in $\xi_{\kappa r}$.\\ Otherwise put $I^r_{x,x} = 0$. Let $\eta_0 \equiv 1$ and, for $r \geq 0$, $$\eta_{r+1}(x) = \mathds{1}\left\{\begin{array}{c}\eta_r(x) = 1 \text { and } I^r_{x,x} = 1 \text{ or, for some }\\\text{$y$ with $x\stackrel{*}{\sim}y,\;\eta_r(y) = 1$ and $I^r_{y,x} = 1$.} \end{array}\right\}.$$ Notice that, if a sequence $x_1, x_2, \ldots, x_R$ in $I'$ is such that, for each $r$, either $x_r = x_{r+1}$ and $I^r_{x_r, x_r} = 1$ or $x_r \stackrel{*}{\sim} x_{r+1}$ and $I^r_{x_r, x_{r+1}} = 1$, then we will have $\eta_R(x_R) = 1$ and $S(x_R)$ will be infested in $\xi_{\kappa R}$.
Now, using a result of Liggett, Schonmann and Stacey \cite{LSchS} (see also Theorem B26 in \cite{lig99}), given $p \in (0,1)$ we can choose $M$ large enough that the measure of the field $\{\{I^r_{x,x}\},\{I^r_{x,y}\}\}$ stochastically dominates i.i.d. Bernoulli($p$) random variables. We then have \begin{corollary} For any $p > p_c^{(1)}$, if $M$ is large enough, then $\{\eta_r\}$ dominates a growth process on $T$ defined from i.i.d. Bernoulli($p$) random variables. \end{corollary} \noindent This, together with the fact that $|I'| \geq \delta n$ and Proposition \ref{discrete}, gives Theorem \ref{thm1cd1}.
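The discrete-time growth process of Section~\ref{s:discrete} that underlies this comparison is straightforward to simulate on any locally finite graph: site $x$ is occupied at time $r+1$ iff some occupied $y$ equal or adjacent to $x$ transmits, each transmission being an independent Bernoulli($p$). A sketch (graph and parameters are illustrative):

```python
import random

def growth_step(occupied, adj, p, rng):
    """One step of the discrete-time growth process: x is occupied at time r+1
    iff some y occupied at time r, with y = x or y adjacent to x, transmits to x
    (independent Bernoulli(p) coins playing the role of I^r_{y,x})."""
    nxt = set()
    for y in occupied:
        for x in list(adj[y]) + [y]:
            # once x is occupied, further coins aimed at x cannot change anything
            if x not in nxt and rng.random() < p:
                nxt.add(x)
    return nxt

def extinction_time(adj, p, max_steps, seed=0):
    """Number of steps until extinction, starting from full occupancy
    (capped at max_steps)."""
    rng = random.Random(seed)
    occupied = set(adj)
    for r in range(1, max_steps + 1):
        occupied = growth_step(occupied, adj, p, rng)
        if not occupied:
            return r
    return max_steps
```

Skipping the coin once a site is already in `nxt` does not change the law of the configuration, since occupation only requires at least one successful transmission.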
1203.2700
\section{Introduction} The dynamics of many polymeric fluids are described by two-scale micro-macro models. The systems usually consist of a macroscopic momentum equation and a microscopic Fokker-Planck type equation. The fluid is described by the macroscopic equation (sometimes the incompressible Navier-Stokes equations), with an induced elastic stress. The stress is the micro-macro interaction. The particles in the system are represented by a probability distribution $\psi(t,x,R)$ or $\psi(t,x,m)$ that depends on time $t$, macroscopic physical location $x$ and particle configuration $R$ or $m$. The Lagrangian transport of the particles is modeled using a Taylor expansion of the velocity field, which accounts for a drift term that depends on the spatial gradient of velocity. The system attempts to describe the behavior of this complex mixture of polymers and fluids. For more on the physical and mechanical background, see \cite{BCA, DE}. The FENE (Finite Extensible Nonlinear Elastic) dumbbell model is one of the most typical and extensively studied micro-macro models. In this model a polymer is idealized as an ``elastic dumbbell'' consisting of two ``beads'' joined by a spring, which is modeled by a vector $R$. Mathematically, this system reads \begin{equation}\label{1.1} \left\{ \begin{array}{l} \displaystyle\partial_t u + (u\cdot\nabla)u + \nabla p = \nu \Delta u + {\rm div}~\tau,\quad x \in \mathbb{R}^2,\\[3mm] {\rm div}~u =0,\quad x \in \mathbb{R}^2,\\[3mm] \partial_t \psi + (u\cdot \nabla)\psi = {\rm div}_R \big[-W(u) \cdot R\psi + \beta \nabla_R\psi\\ \quad\ \ \ \ \ \ \ \ \ \ \ + \nabla_R\mathcal{U} \psi\big],\quad (x, R) \in \mathbb{R}^2 \times B(0,R_0),\\[3mm] (\nabla_R \mathcal{U}\psi + \beta \nabla_{R} \psi)\cdot \textbf{n} = 0,\quad \mbox{on}\ \partial B(0,R_0),\\[3mm] t=0: u(t, x) = u_0(x),\ \ \ \psi(t, x, R) = \psi_0(x,R). \end{array}\right.
\end{equation} In the above system, $u=u(t,x)$ denotes the velocity field of the fluid, $p=p(t,x)$ denotes the pressure, $\psi(t,x,R)$ is the distribution function for the internal configuration, $\nu>0$ is the viscosity of the fluid and $\beta$ is related to the temperature of the system. Moreover, the spring potential $\mathcal{U}$ and the induced elastic stress $\tau$ are given by \begin{equation}\label{1.1-1} \mathcal{U}(R) = -k\ln(1-|R|^2/|R_0|^2),\quad \tau_{ij} = \int_{B(0,R_0)} (R_i \otimes \nabla_{R_j} \mathcal{U})\psi(t,x, R) dR. \end{equation} Here $k > 0$ is a constant. The boundary condition ensures the conservation of the polymer density. Assume that $W(u)= \frac{\nabla u - (\nabla u)^t}{2}$, which corresponds to the co-rotational case. For simplicity, assume that $\beta=1$ and $R_0 =1$ and denote $B(0,R_0)$ by $B$. In what follows, unless otherwise stated, $\nabla$ denotes $\nabla_x$, and ${\rm div}$ denotes ${\rm div}_x$. The Smoluchowski equation coupled with the incompressible Navier-Stokes equations is another extensively studied micro-macro model. The Smoluchowski equation describes the temporal evolution of the probability distribution function $\psi$ for directions of rod-like particles in a suspension. Mathematically, the system reads \begin{equation}\label{1.2} \left\{ \begin{array}{l} \displaystyle\partial_t u + (u\cdot\nabla)u + \nabla p = \nu \Delta u + {\rm div}~\tau, \quad x \in \mathbb{R}^2,\\[3mm] {\rm div}~u =0, \quad x \in \mathbb{R}^2,\\[3mm] \partial_t \psi + (u\cdot \nabla)\psi + {\rm div}_g(G(u, \psi)\psi)- \Delta_g \psi=0,\quad (x, m) \in \mathbb{R}^2\times M,\\[2mm] t=0: u(t, x) = u_0(x),\ \ \ \psi(t, x, m) = \psi_0(x,m). \end{array}\right.
\end{equation} Here $M$ is a $d$-dimensional smooth compact Riemannian manifold without boundary, and $dm$ is the Riemannian volume element of $M$; $u=u(t,x)$ and $p=p(t,x)$ denote the velocity field and the pressure of the fluid respectively, $\psi=\psi(t,x,m)$ is the distribution function, and $G(u, \psi)= \nabla_g \mathcal{U} + W$, where $\mathcal{U}$ is a mean-field potential resulting from the excluded volume effects due to steric forces between molecules and $W= c_\alpha^{ij} (m)\partial_j u_i$. Moreover, the added stress tensor $\tau$ and the potential $\mathcal{U}$ are given by \begin{equation}\label{1.2-1} \left\{ \begin{array}{l} \mathcal{U}(t,x,m) = \displaystyle \int_M K(m,q) \psi(t,x, q)dq,\\[3mm] \tau_{ij}(t,x) =\displaystyle \int_M \gamma_{ij}^{(1)}(m) \psi(t,x,m)dm\\[3mm] \quad\ \ \ \ \displaystyle+ \int_M\int_M \gamma_{ij}^{(2)}(m_1, m_2)\psi(t,x,m_1)\psi(t,x,m_2) dm_1dm_2. \end{array}\right. \end{equation} Here the kernel $K$ is a smooth symmetric function defined on $M\times M$, and $\gamma_{ij}^{(1)}$ and $\gamma_{ij}^{(2)}$ are smooth, time-independent and $x$-independent. To date there have been extensive and systematic studies of the existence and regularity theory for these 2D micro-macro models of polymeric fluids \cite{BC1994, C, CM, CFTZ, CMa, LLZ, Ma}. For example, the first global well-posedness result for FENE \eqref{1.1} was derived by Lin, Zhang and Zhang \cite{LZZ} for $k>6$. Masmoudi \cite{Ma} extended it to the case of $k>0$ by a crucial observation on the linear operator. Very recently the global existence of weak solutions to the FENE dumbbell model of polymeric flows for a very general class of potentials was also obtained by Masmoudi \cite{Masmoudi2}. The global well-posedness of the nonlinear Fokker-Planck system coupled with the Navier-Stokes equations \eqref{1.2} in 2D has been proven by Constantin and Masmoudi in \cite{CMa}.
When the nonlinear Fokker-Planck equation is driven by a time averaged Navier-Stokes system in 2D, global well-posedness has been obtained by Constantin-Fefferman-Titi-Zarnescu \cite{CFTZ}. Most proofs of the above global well-posedness theorems are based on an important analytic technique known as losing a priori estimates, in the spirit of Bahouri-Chemin \cite{BC1994} and Chemin-Masmoudi \cite{CM}. In \cite{LMZ}, we studied the blow-up criteria of a macroscopic viscoelastic Oldroyd-B system while avoiding the use of losing a priori estimates. The main purpose of this paper is to extend the method in \cite{LMZ} to micro-macro models and provide a new proof of global well-posedness for the co-rotational FENE dumbbell model \eqref{1.1} and the coupled Smoluchowski and incompressible Navier-Stokes equations (\ref{1.2}). Compared to the proofs of the theorems in \cite{Ma} and \cite{CMa}, which are based on the technique of losing a priori estimates, ours are direct and much simpler. For the co-rotational FENE model (\ref{1.1}), we have \begin{thm}\label{thm 1.1} Assume that $u_0 \in H^s(\mathbb{R}^2)$ $(s>2)$ and $\psi_0 \in H^s(\mathbb{R}^2; \mathcal{L}^r \cap \mathcal{L}^2)$ for some $r\geq 2$ such that $(r-1)k>1$, with $\int_{B} \psi_0 dR =1$ a.e. in $x$. Then there exists a unique global solution $(u, \psi)$ of the FENE model (\ref{1.1}) in $C([0, \infty); H^s) \times C([0, \infty); H^s(\mathbb{R}^2; \mathcal{L}^2)).$ Moreover, $u\in L_{loc}^2 (0, \infty; H^{s+1})$ and $\psi \in L^2_{loc}(0, \infty; H^s(\mathbb{R}^2; \mathcal{L}^{1,r})).$ \end{thm} For the definitions of $\mathcal{L}^r$ and $\mathcal{L}^{1,r}$, refer to Section 2. Similarly, for the coupled Smoluchowski and Navier-Stokes system (\ref{1.2}), we have \begin{thm}\label{thm 1.2} Take $u_0\in W^{1+\epsilon,r}(\mathbb{R}^2)\cap L^2(\mathbb{R}^2)$ and $\psi_0\in W^{1,r}(\mathbb{R}^2, H^{-s}(M)),$ for some $r> 2$, $\epsilon >0$, $s>\frac{d}{2} + 1$ and $\psi_0\geq 0$.
Assume further that $\int_M \psi_0 dm\in L^1(\mathbb{R}^2) \cap L^\infty(\mathbb{R}^2)$. Then (\ref{1.2}) has a global solution with $u\in L_{loc}^{\infty}(0,\infty; W^{1,r})\cap L_{loc}^2(0, \infty; W^{2,r})$ and $\psi \in L_{loc}^\infty(0,\infty; W^{1,r}(\mathbb{R}^2; H^{-s}(M)))$. Moreover, for $T>T_0>0$, we have $u\in L^\infty((T_0, T); W^{2-\epsilon, r}).$ \end{thm} We end this introduction by mentioning some other results on micro-macro models. Global existence of weak solutions can be found in \cite{BSS, LM} and local existence of strong solutions is studied in \cite{ELZ, JLL, ZZ}. For macroscopic models, we refer the reader to \cite{Lei1, Lei2, LZ, LLZ1, LLZh, CZ}. The paper is organized as follows. In section 2, we give some preliminaries. Then we give some a priori estimates for the FENE model \eqref{1.1} in section 3 and for the coupled Smoluchowski and Navier-Stokes equations \eqref{1.2} in section 4. The a priori estimates obtained in sections 3 and 4 are enough to obtain the global existence for systems \eqref{1.1} and \eqref{1.2} and to prove Theorems \ref{thm 1.1} and \ref{thm 1.2} \cite{Ma, CMa}. \section{Definitions and Useful Lemmas} We will use the Littlewood-Paley decomposition in the following sections.
Define $\mathcal{C}$ to be the ring $$\mathcal{C} = \{\xi\in \mathbb{R}^2: \frac{3}{4} \leq |\xi| \leq \frac{8}{3}\},$$ and define $\mathcal{D}$ to be the ball $$\mathcal{D} = \{\xi\in \mathbb{R}^2: |\xi|\leq \frac43\}.$$ Let $\chi$ and $\varphi$ be two smooth nonnegative radial functions supported respectively in $\mathcal{D}$ and $\mathcal{C}$, such that $$\chi(\xi) + \sum_{q\geq 0} \varphi(2^{-q} \xi) =1\ \ \mbox{for}\ \xi\in \mathbb{R}^2 ,\ \ \mbox{and}\ \sum_{q\in \mathbb{Z}} \varphi(2^{-q}\xi)=1\ \ \mbox{for}\ \xi\in \mathbb{R}^2\setminus \{0\}.$$ Let us denote by $\mathcal{F}$ the Fourier transform on $\mathbb{R}^2$ and denote $$h= \mathcal{F}^{-1}\varphi,\ \ \ \ \ \ \tilde{h} = \mathcal{F}^{-1}\chi.$$ The frequency localization operator is defined by $$\Delta_q u = \mathcal{F}^{-1}\left[\varphi(2^{-q}\xi)\mathcal{F}(u)\right] = 2^{2q} \int_{\mathbb{R}^2} h(2^q y) u(x-y) dy, $$ and $$S_q u = \mathcal{F}^{-1}\left[\chi(2^{-q}\xi)\mathcal{F}(u)\right] = 2^{2q} \int_{\mathbb{R}^2} \tilde{h}(2^q y) u(x-y) dy. $$ Then, for $s<2/p$, or $s=2/p$ and $r=1$, the homogeneous Besov space $\dot{B}_{p, r}^s$ is defined as the closure of compactly supported smooth functions under the norm $\|\cdot\|_{\dot{B}_{p,r}^s},$ $$\|u\|_{\dot{B}_{p,r}^s} = \left\| (2^{qs} \|\Delta_q u\|_{L^p})_{q\in \mathbb{Z}}\right\|_{l^r(\mathbb{Z})}.$$ When $p=r=\infty$ and $s=k+\alpha$ with $k\in \mathbb{N}$ and $\alpha\in (0,1)$, $\dot{B}_{p,r}^s$ coincides with the homogeneous H\"{o}lder space $\dot{C}^{k+\alpha}$.
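A small standard observation, left implicit in the para-product estimates used later, is the almost orthogonality of the dyadic blocks: the symbols $\varphi(2^{-q}\cdot)$ and $\varphi(2^{-q'}\cdot)$ have disjoint supports once $|q-q'|\geq 2$, since for $q'\geq q+2$

```latex
\[
\operatorname{supp}\varphi(2^{-q}\cdot)\subset
  \Big\{ \tfrac{3}{4}\,2^{q} \leq |\xi| \leq \tfrac{8}{3}\,2^{q} \Big\},
\qquad
\tfrac{3}{4}\,2^{q'} \;\geq\; \tfrac{3}{4}\,2^{q+2} \;=\; 3\cdot 2^{q} \;>\; \tfrac{8}{3}\,2^{q}.
\]
```

Consequently $\Delta_q \Delta_{q'} = 0$ whenever $|q-q'|\geq 2$; this is what reduces expansions such as Bony's decomposition below to the finitely many interactions $|p-q|\leq 5$ and $p\geq q-3$.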
Another kind of space to be used is $\tilde{L}^p(t_1, t_2; \dot{C}^r)$, which is the space of distributions $u$ such that $$\|u\|_{\tilde{L}^p(t_1, t_2; \dot{C}^r)} \triangleq \sup_{q\in \mathbb{Z}} 2^{qr} \|\Delta_q u\|_{L^p(t_1, t_2; L^\infty)} < \infty.$$ For the FENE model, let $$\psi_\infty (R) = \frac{e^{-\mathcal{U}(R)}}{\int_B e^{-\mathcal{U}(R)} dR} = c (1- |R|^2)^k,$$ where the constant $c$ is given such that $\int_B \psi_\infty dR =1.$ In fact, $(u,\psi)=(0, \psi_\infty)$ defines a stationary solution of (\ref{1.1}). For $r\geq 1$, denote $\mathcal{L}^r$ and $\mathcal{L}^{1,r}$ the weighted spaces $$\mathcal{L}^r =\left\{\psi: \|\psi\|_{\mathcal{L}^r}^r = \int_B \psi_\infty \left|\frac{\psi}{\psi_\infty}\right|^rdR < \infty\right\}$$$$\mathcal{L}^{1, r} =\left\{\psi\in \mathcal{L}^r: |\psi|_{\dot{\mathcal{L}}^{1,r}}^r = \int_B \psi_\infty \left|\nabla_R\left(\frac{\psi}{\psi_\infty}\right)\right|^rdR < \infty\right\}.$$ We will need to use the following well-known inequalities. \begin{lem}[{\rm Bernstein Inequalities \cite{Ch}}] \label{lemma 2.1} For $s\in \mathbb{R}$, $1\leq p\leq r\leq \infty$ and $q\in \mathbb{Z}$, one has $$\|\Delta_q u\|_{L^r(\mathbb{R}^d)}\leq C\cdot 2^{d(\frac1p-\frac1r)q}\|\Delta_q u\|_{L^p(\mathbb{R}^d)},$$ $$c 2^{qs} \|\Delta_q u\|_{L^p} \leq \|\nabla^s \Delta_q u\|_{L^p} \leq C 2^{qs} \|\Delta_q u\|_{L^p},$$ $$\||\nabla|^s S_q u\|_{L^p} \leq C 2^{qs}\|u\|_{L^p},$$ $$c e^{-C 2^{2q} t } \|\Delta_q u\|_{L^\infty} \leq \|e^{t\Delta} \Delta_q u\|_{L^\infty} \leq C e^{-c 2^{2q} t}\|\Delta_q u\|_{L^\infty}.$$ Here $C$ and $c$ are positive constants independent of $s, p$ and $q$. \end{lem} We will also need the following lemma, whose proof can be found in \cite{LMZ}. \begin{lem} \label{lemma 2.2} Assume that $\beta >0$. Then there exists a positive constant $C>0$ such that $$\begin{array}{l}\displaystyle \int_t^T \|\nabla g(s, \cdot)\|_{L^\infty}ds \leq C\left( 1+ \int_t^T \|g(s,\cdot)\|_{L^2}ds\right. 
\\[3mm]\ \ \ \ \ \ \ \ \ \ \ \displaystyle+\left.\sup_q \int_t^T \|\Delta_q \nabla g(s,\cdot)\|_{L^\infty}ds \ln (e+ \int_t^T \|\nabla g(s,\cdot)\|_{\dot{C}^\beta}ds)\right).\end{array}$$ \end{lem} \section{Proof of Theorem 1.1} Since local existence of smooth solutions has been derived by N. Masmoudi \cite{Ma}, here we only focus on the a priori estimates which are sufficient for proving Theorem \ref{thm 1.1}. As explained in \cite{LZZ} or \cite{Ma}, to get the global existence we just need to control the $L^\infty$-norm of $\nabla u$, i.e., $\|\nabla u\|_{L^\infty}$. Define the flow associated with $u$ by $\Phi(t,x)$, which means $\Phi$ satisfies the ODEs, \begin{equation}\left\{ \begin{array}{l} \partial_t \Phi (t, x) = u (t, \Phi(t,x)),\\ \Phi(t=0, x) = x.\end{array} \right. \end{equation}\\ \textbf{Step I}~~\textbf{Uniform estimates for $\psi$ and $\tau$ with respect to $t$.} Due to the special structure of the equation for $\psi(t, x, R)$ (the antisymmetry of $W(u)$ implies that $W(u)$ is trace-free and $W(u)R\cdot R = 0$, so the drift term drops out of the estimate below), we have the following uniform bounds for $\psi$. For $r>1$, multiplying the third equation of (\ref{1.1}) by $r\left|\frac{\psi}{\psi_\infty}\right|^{r-2}\frac{\psi}{\psi_\infty},$ and integrating over $B$, we get $$\displaystyle \partial_t \int_{B} \psi_\infty \left|\frac{\psi}{\psi_\infty}\right|^rdR + u\cdot \nabla \int_{B} \psi_\infty \left|\frac{\psi}{\psi_\infty}\right|^rdR = - \frac{4(r-1)}{r} \int_{B} \psi_\infty \left|\nabla_R \left( \frac{\psi}{\psi_\infty}\right)^{\frac{r}{2}}\right|^2 dR .$$ Therefore, $$\displaystyle \int_{B}\psi_\infty \left|\frac{\psi}{\psi_\infty}\right|^r (t,\Phi (t, x), R) dR \leq \int_{B} \psi_\infty \left|\frac{\psi_0}{\psi_\infty}\right|^r(x, R) dR.$$ Since the flow is incompressible, we obtain \begin{equation}\label{3-1}\|\psi\|_{L^\infty_{t,x} (\mathcal{L}^r)}\leq \|\psi_0\|_{L^\infty_x (\mathcal{L}^r)},\ \ \ \|\psi\|_{L^\infty_t (L^2_x (\mathcal{L}^r))} \leq \|\psi_0\|_{L^2_x(\mathcal{L}^r)}.\end{equation} To estimate $\tau$, we need a lemma.
\vspace{2mm}\begin{lem} \label{lemma 3.1}For any $p$ such that $pk>1$, it holds that $$\int_{B} \frac{|\psi|}{1-|R|} dR \leq C\left( \int_{B} \frac{|\psi|^{p+1}}{\psi_\infty^p}dR\right)^{\frac{1}{p+1}}= C\|\psi\|_{\mathcal{L}^{p+1}}.$$ \end{lem} \begin{proof} By H\"{o}lder's inequality, $$\begin{array}{ll}\displaystyle\int_{B} \frac{|\psi|}{1-|R|} dR & \leq \displaystyle C \int_{B} \frac{1}{(1-|R|)^{1-kp/(p+1)}} \frac{|\psi|}{\psi_\infty^{p/(p+1)}}dR\\[4mm] & \leq \displaystyle C\left(\int_{B} \frac{dR}{(1-|R|)^{1+ 1/p -k}}\right)^{\frac{p}{p+1}} \left(\int_{B} \frac{|\psi|^{p+1}}{\psi_\infty^p}dR\right)^{\frac{1}{p+1}}. \end{array}$$ Since $pk>1$, the exponent satisfies $1+\frac1p -k <1$, so the first integral on the right-hand side is finite and the result follows. \end{proof} \vspace{2mm}Noting that $$\tau_{ij}(t,x) = \int_{B} (R_i \otimes \nabla_{R_j} \mathcal{U} )\psi(t,x,R) dR,$$ one has $$\begin{array}{ll} |\tau(t,x)|& \leq \displaystyle C\int_{B} |\nabla_{R} \mathcal{U}|\cdot |\psi(t,x,R) |dR\\ [3mm] & \leq Ck\displaystyle \int_{B} \frac{|R|}{1-|R|^2}\cdot|\psi(t,x,R)|dR\\ [3mm]& \leq C \displaystyle \int_{B} \frac{|\psi(t,x,R)|}{1-|R|}dR \\[3mm]& \leq C\|\psi(t,x, \cdot)\|_{\mathcal{L}^r}, \end{array}$$ where in the last step we used Lemma \ref{lemma 3.1} with $p=r-1$, which is admissible since $(r-1)k>1$. Hence, we have \begin{equation}\label{3-2} \|\tau\|_{L^\infty(0, T; L^2)}\leq C\|\psi_0\|_{L^2_x(\mathcal{L}^r)},\ \ \ \ \|\tau\|_{L^\infty(0,T; L^\infty)}\leq C\|\psi_0\|_{L^\infty_x(\mathcal{L}^r)}. \end{equation} \\\textbf{Step II}~~\textbf{A priori estimates for $u$.} We need a useful lemma whose proof was established by Chemin and Masmoudi \cite{CM} (see also \cite{LMZ}).
\vspace{2mm}\begin{lem}[{\rm Chemin-Masmoudi}] \label{lemma 3.2}Let $v$ be a solution of the Navier-Stokes equations with initial data $v_0\in L^2(\mathbb{R}^2)$, and an external force $f\in \tilde{L}^1(0,T; \dot{C}^{-1})\cap L^2(0,T; \dot{H}^{-1})$: \begin{equation}\label{3-3} \left\{ \begin{array}{l} \partial_t v -\Delta v + (v\cdot \nabla)v + \nabla p = f,\ \ \ \mbox{in}\ \mathbb{R}^2\times (0,T),\\ {\rm div}~v=0,\ \ \ \mbox{in}\ \mathbb{R}^2\times (0,T),\\ v(t=0, x) = v_0(x),\ \ \ \mbox{in}\ \mathbb{R}^2. \end{array} \right. \end{equation} Then we have the following a priori estimates, $$\|v\|_{L^\infty(0,T; L^2)}^2 + 2\|\nabla v\|_{L^2(0,T; L^2)}^2 \leq \|v_0\|_{L^2}^2 + \|f\|_{L^2(0,T; \dot{H}^{-1})}^2,$$ and $$\begin{array}{l}\|v\|_{\tilde{L}^1(0,T; \dot{C}^1)} \leq \displaystyle C \left( \sup_q \|\Delta_q v_0\|_{L^2}(1- \exp\{-c 2^{2q}T\})\right. \\ \ \ \ \ \ \ + \displaystyle \left.(\|v_0\|_{L^2}+ \|f\|_{L^2(0,T; \dot{H}^{-1})}) \|\nabla v\|_{L^2(0, T; L^2)}^2 + \sup_q \int_0^T \|2^{-q}\Delta_q f(s)\|_{L^\infty} ds\right).\end{array}$$ Furthermore, if $f\in L^1(0,T; \dot{C}^{-1})$, then $\forall \epsilon>0,$ there exists $t_0(\epsilon)\in (0,T)$ such that $$\|v\|_{\tilde{L}^1(t_0, T; \dot{C}^1)}\leq \epsilon.$$ \end{lem} \vspace{2mm}In particular, for our problem, since we have shown that $\tau \in L^\infty(0,T; L^2)\cap L^\infty(0,T; L^\infty)$, applying Lemma \ref{lemma 3.2}, we know that $$u\in L^\infty(0,T; L^2)\cap L^2(0,T; \dot{H}^1)\cap \tilde{L}^1(0,T; \dot{C}^1)$$ and \begin{equation}\label{3-add}\forall \epsilon >0, \exists\ t_0(\epsilon)\in (0,T), \ \ \mbox{such that} \ \ \|u\|_{\tilde{L}^1(t_0, T; \dot{C}^1)}\leq \epsilon.\end{equation} \vspace{2mm}\textbf{Step III~~H\"{o}lder estimates for $u$.} For $0\leq t < T,$ choose some $\alpha$ satisfying $0<\alpha < \min{\{s-2, 1\}}$, define $$N_q^r(t,x) = \int_{B} \psi_\infty \left|\frac{\Delta_q \psi(t,x,R)}{\psi_\infty}\right|^r dR = \|\Delta_q \psi\|_{\mathcal{L}^r}^r(t,x),$$$$A(t) = \sup_{0\leq s <t
}\|u(s,\cdot)\|_{\dot{C}^{1+\alpha}},\ \ \ B(t) = \sup_{0\leq s <t}\|\tau(s,\cdot)\|_{\dot{C}^\alpha},$$ $$D(t) = \sup_{0\leq s < t}\sup_{q\in \mathbb{Z}}2^{\alpha q}\|N_q(s, \cdot)\|_{L^\infty}.$$ Here $\Delta_q$ is the frequency localization operator with respect to $x$. Before the detailed estimates, we introduce an inequality for later use, which can be considered as an extension of the H\"{o}lder inequality. \vspace{2mm}\begin{lem}\label{lemma 3.3}For any $u\in L^4(\mathbb{R}^2) \cap \dot{C}^{1+\alpha}(\mathbb{R}^2)$, there holds that $$\|u\otimes u\|_{\dot{C}^{\frac12+\alpha}} \leq C\|u\|_{L^4}\cdot \|u\|_{\dot{C}^{1+\alpha}},$$ with some constant $C$ independent of $u$. \end{lem} \begin{proof} For any $q\in \mathbb{Z}$, using Bony's para-product decomposition \cite{B}, we have $$\begin{array}{ll} \|\Delta_q (u\otimes u)\|_{L^\infty} \cdot 2^{(\frac12 + \alpha) q} & \leq \displaystyle 2\sum_{|p-q|\leq 5} \|\Delta_q (S_{p-1}u \otimes \Delta_p u)\|_{L^\infty}\cdot 2^{(\frac12 + \alpha)q}\\[3mm] &\ \ \ + \displaystyle\sum_{p\geq q-3}\sum_{|p-r|\leq 1} \|\Delta_q (\Delta_p u \otimes \Delta_r u)\|_{L^\infty}\cdot 2^{(\frac12+\alpha)q}\\[3mm] & \triangleq 2I_1 + I_2. \end{array}$$ $I_1$ can be estimated as $$\begin{array}{ll} I_1 & \leq \displaystyle C\sum_{|p-q|\leq 5} \|S_{p-1}u \otimes \Delta_p u\|_{L^4}\cdot 2^{(1+\alpha)q}\\[5mm] & \leq \displaystyle C\sum_{|p-q|\leq 5}\|S_{p-1} u\|_{L^4} \cdot \|\Delta_p u\|_{L^\infty}\cdot 2^{(1+\alpha)q}\\[5mm] & \leq \displaystyle C \sum_{|p-q|\leq 5} \|u\|_{L^4} \cdot 2^{(1+\alpha)p}\|\Delta_p u\|_{L^\infty} \cdot 2^{(1+\alpha)(q-p)}\\[5mm] & \leq \displaystyle C\|u\|_{L^4} \cdot \|u\|_{\dot{C}^{1+\alpha}},\end{array}$$ where the first inequality is due to Lemma \ref{lemma 2.1}.
$I_2$ can be estimated as $$\begin{array}{ll} I_2 & = \displaystyle \sum_{p\geq q-3}\sum_{|p-r|\leq 1}\|\Delta_q (\Delta_p u\otimes \Delta_r u)\|_{L^\infty}\cdot 2^{(\frac12 +\alpha) q}\\[5mm] & \leq C\displaystyle \sum_{p\geq q-3}\sum_{|p-r|\leq 1}\|\Delta_p u\|_{L^\infty} \cdot \|\Delta_r u\|_{L^4}\cdot 2^{(\frac12+\alpha)q}\cdot 2^{\frac12 r}\\[5mm] & \leq C\displaystyle \sum_{p\geq q-3} 2^{(1+\alpha)p}\|\Delta_p u\|_{L^\infty} \cdot \|u\|_{L^4}\cdot 2^{(\frac12+\alpha)(q-p)}\\[5mm] & \leq C\|u\|_{\dot{C}^{1+\alpha}} \cdot\|u\|_{L^4}, \end{array}$$ where in the second inequality we also used Lemma \ref{lemma 2.1}. The above estimates complete the proof of Lemma \ref{lemma 3.3}. \end{proof} \vspace{2mm}First, applying $\Delta_q$ to the first equation of the FENE system, we obtain $$\partial_t \Delta_q u- \nu \Delta \Delta_q u + \nabla \Delta_q p = \nabla\cdot \Delta_q (\tau - u\otimes u),$$ hence $$\displaystyle \Delta_q u = e^{\nu t\Delta}\Delta_q u_0 + \int_0^t e^{\nu (t-s)\Delta} \mathbb{P}(\Delta_q \nabla\cdot (\tau - u\otimes u))ds,$$ where $\mathbb{P}$ is the Helmholtz-Weyl projection operator.
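Passing from this Duhamel formula to the frequency-localized estimate below uses, implicitly, that the projection $\mathbb{P}$ is harmless on each dyadic block; a short justification of this standard fact, not spelled out in the original:

```latex
% \mathbb{P} = I - \nabla\Delta^{-1}{\rm div} has matrix symbol
%   m(\xi) = I - \xi\otimes\xi/|\xi|^2,
% smooth away from the origin and homogeneous of degree zero.  With a
% fattened cutoff \tilde\varphi equal to 1 on supp(\varphi), one can write
\[
\mathbb{P}\,\Delta_q f
  = \mathcal{F}^{-1}\!\big[\, m(\xi)\,\tilde{\varphi}(2^{-q}\xi)\,\big] * \Delta_q f,
\qquad
\big\|\mathcal{F}^{-1}\big[\, m\,\tilde{\varphi}(2^{-q}\cdot)\,\big]\big\|_{L^1}\leq C
\ \ \text{uniformly in } q,
\]
% the kernel bound following by scaling, so that
\[
\|\mathbb{P}\,\Delta_q f\|_{L^\infty} \;\leq\; C\,\|\Delta_q f\|_{L^\infty}.
\]
```

This uniform bound is what allows $\mathbb{P}$ to be absorbed into the constants of the estimates that follow.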
Hence \begin{equation}\label{3-addadd}\begin{array}{l} \ \ \ 2^{q(1+\alpha)}\|\Delta_q u\|_{L^\infty}(t)\\[2mm]\leq C\displaystyle e^{-c\nu2^{2q}t}\|\Delta_q u_0\|_{L^\infty} 2^{q(1+\alpha)}+ C\int_0^t e^{-c\nu2^{2q} (t-s)} 2^{q(1+\alpha)}\|\Delta_q \nabla\cdot (\tau - u\otimes u)\|_{L^\infty}ds\\[3mm] \leq \displaystyle C\|u_0\|_{\dot{C}^{1+\alpha}} + C\int_0^t e^{-c 2^{2q} (t-s)} \cdot 2^{q(1+\alpha)}\left(\|\nabla\cdot \Delta_q \tau\|_{L^\infty}+ \|\nabla\cdot \Delta_q( u\otimes u)\|_{L^\infty}\right)ds. \end{array}\end{equation} Applying Lemma \ref{lemma 2.1}, we obtain $$\begin{array}{ll}& \displaystyle\int_0^t e^{-c 2^{2q}(t-s)}2^{q(1+\alpha)} \|\nabla\cdot\Delta_q \tau\|_{L^\infty}ds\\[3mm] \leq & C\displaystyle\int_0^t e^{-c 2^{2q}(t-s)} 2^{q(1+\alpha)} 2^q \|\Delta_q \tau\|_{L^\infty}ds\\[3mm] \leq & C\displaystyle\int_0^t e^{-c 2^{2q} (t-s)}2^{2q} \cdot 2^{\alpha q} \|\Delta_q \tau\|_{L^\infty}ds \\[3mm] \leq & CB(t). \end{array}$$ On the other hand, applying Lemma \ref{lemma 2.1} again, we obtain $$\begin{array}{ll} & 2^{q(1+\alpha)} \displaystyle \int_0^t e^{-c 2^{2q}(t-s)} \|\nabla\cdot \Delta_q (u\otimes u)\|_{L^\infty} ds \\[3mm] \leq & C\displaystyle \int_0^t e^{-c 2^{2q}(t-s)}2^{q(2+\alpha)}\|\Delta_q (u\otimes u)\|_{L^\infty}ds\\[3mm] \leq & C\displaystyle\int_0^t e^{-c 2^{2q}(t-s)}2^{\frac32 q}\|u\otimes u\|_{\dot{C}^{\frac12 +\alpha}}ds\\[3mm] \leq & C \displaystyle\int_0^t e^{-c 2^{2q}(t-s)} 2^{\frac32 q} \|u\|_{L^4}\cdot \|u\|_{\dot{C}^{1+\alpha}} ds\\[3mm] \leq &\displaystyle C\left(\int_0^t \|u\|_{L^4}^4 \|u\|_{\dot{C}^{1+\alpha}}^4 ds\right)^{\frac14} \cdot \left(\int_0^t e^{-c2^{2q}(t-s)}2^{2q} ds\right)^{\frac34}\\[3mm] \leq &C \displaystyle \left(\int_0^t \|u\|_{L^2}^2 \|\nabla u\|_{L^2}^2 \|u\|_{\dot{C}^{1+\alpha}}^4 ds\right)^{\frac14}, \end{array}$$ where we used Lemma \ref{lemma 3.3} in the third inequality and the interpolation inequality $\|u\|_{L^4}^2\leq C\|u\|_{L^2}\|\nabla u\|_{L^2}$ in the last one.
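The uniform-in-$q$ bounds above repeatedly rely on an elementary estimate for the heat-kernel time convolution, left implicit in the text; for completeness:

```latex
\[
\int_0^t e^{-c\,2^{2q}(t-s)}\, 2^{2q}\, ds
  \;=\; \frac{1}{c}\Big(1 - e^{-c\,2^{2q} t}\Big)
  \;\leq\; \frac{1}{c},
\qquad \text{uniformly in } q\in\mathbb{Z} \text{ and } t>0.
\]
```

This bounds both the $\tau$-term, where the integrand carries the factor $2^{2q}\cdot 2^{\alpha q}\|\Delta_q \tau\|_{L^\infty}\leq 2^{2q}B(t)$, and the H\"{o}lder factor $\left(\int_0^t e^{-c2^{2q}(t-s)}2^{2q} ds\right)^{3/4}$, by constants independent of $q$.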
Taking the supremum of both sides of (\ref{3-addadd}) with respect to $q$, one gets $$\displaystyle \|u(t)\|_{\dot{C}^{1+\alpha}}^4 \leq C(\|u_0\|_{\dot{C}^{1+\alpha}} + B(t))^4 + C\left( \int_0^t \|u(s)\|_{L^2}^2 \|\nabla u(s)\|_{L^2}^2 \|u(s)\|_{\dot{C}^{1+\alpha}}^4 ds\right).$$ By Gronwall's inequality and the estimates in Step II (which ensure that $\int_0^T \|u\|_{L^2}^2 \|\nabla u\|_{L^2}^2 ds<\infty$), we have \begin{equation}\label{3-4} A(t) \leq C(\|u_0\|_{\dot{C}^{1+\alpha}} + B(t))\leq C(1+B(t)). \end{equation} As a result of the relationship between $\tau$ and $\psi$ and Lemma \ref{lemma 3.1}, $$\begin{array}{ll} B(t) & =\displaystyle \sup_{0\leq s <t} \sup_q 2^{\alpha q}\|\Delta_q \tau(s, \cdot) \|_{L^\infty}\\[2mm] & \leq \displaystyle \sup_{0\leq s <t}\sup_q 2^{\alpha q} \left\| \int_B |\Delta_q\psi| \cdot |\nabla_R \mathcal{U}| dR \right\|_{L^\infty} \\[3mm] & \displaystyle \leq C \sup_{0\leq s <t}\sup_q 2^{\alpha q} \|\Delta_q \psi\|_{L_x^\infty (\mathcal{L}^r)}(s) \\[2mm] & = C D(t). \end{array} $$ Combining the above inequality and (\ref{3-4}), \begin{equation}\label{3-5} A(t) \leq C(1+ D(t)). \end{equation} \vspace{2mm}\textbf{Step IV~~H\"{o}lder Estimates of $\psi$.} The remaining part is to estimate the $\dot{C}^\alpha$-norm of $\psi$.
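All the commutator bounds in this step follow one pattern, which may be worth recording once (a standard computation; the specific instances below are the cases $a = W(u)$ and $a = \partial_j u_i$, possibly composed with blocks of $u$ inside Bony's decomposition):

```latex
% Writing \Delta_q b = 2^{2q} h(2^q \cdot) * b and changing variables,
\[
\Delta_q(ab)(x) - a(x)\,\Delta_q b(x)
  = \int_{\mathbb{R}^2} h(y)\,
    \big[\,a(x-2^{-q}y) - a(x)\,\big]\, b(x-2^{-q}y)\, dy ,
\]
% so the Holder seminorm of a yields a gain of 2^{-\alpha q}:
\[
\big\| \Delta_q(ab) - a\,\Delta_q b \big\|_{L^\infty}
  \;\leq\; C\, 2^{-\alpha q}\, \|a\|_{\dot{C}^{\alpha}}\, \|b\|_{L^\infty},
\qquad
C = \int_{\mathbb{R}^2} |h(y)|\,|y|^{\alpha}\, dy .
\]
```

The same estimate holds with $\|b\|_{L^\infty}$ replaced by $\|b\|_{L^\infty_x(\mathcal{L}^r)}$, since the $R$-variable only rides along in the convolution; this is exactly how the factor $2^{-\alpha q}\|\nabla u\|_{\dot{C}^\alpha}$ appears below.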
Applying $\Delta_q$ to the third equation, we obtain \begin{equation}\label{3-6}\begin{array}{ll}\partial_t \Delta_q \psi + u\cdot \nabla \Delta_q \psi & ={\rm div}_R(-W(u) \cdot R \Delta_q \psi) + {\rm div}_R\left(\psi_\infty \nabla_R \left(\frac{\Delta_q \psi}{\psi_\infty}\right)\right)\\[3mm]& \ \ \ + {\rm div}_R~([\Delta_q, -W(u)]\cdot R\psi) + [u\cdot\nabla, \Delta_q]\psi\end{array}\end{equation} Multiplying (\ref{3-6}) by $r\left|\frac{\Delta_q \psi}{\psi_\infty}\right|^{r-2} \frac{\Delta_q \psi}{\psi_\infty}$ and integrating over $B$, we get $$\begin{array}{ll} & \ \ \ \displaystyle\partial_t N_q^r + u\cdot \nabla N_q^r + \frac{4(r-1)}{r}\int_{B}\psi_\infty \left|\nabla_R \left(\frac{\Delta_q \psi}{\psi_\infty}\right)^{r/2}\right|^2 dR\\[4mm] & \leq \displaystyle \int_B {\rm div}_R\left( [ \Delta_q, -W(u)] \cdot R \psi\right) \cdot \left|\frac{\Delta_q \psi}{\psi_\infty}\right|^{r-2}\frac{\Delta_q \psi}{\psi_\infty}rdR\\[4mm] & \ \ \ + \displaystyle\int_B |[u\cdot \nabla, \Delta_q]\psi|\cdot \left|\frac{\Delta_q \psi}{\psi_\infty}\right|^{r-1}r dR\\ [4mm]& \triangleq J_1 + J_2.
\end{array}$$ By Young's and H\"{o}lder's inequalities, $$\begin{array}{ll} |J_1| & \displaystyle\leq C \|[\Delta_q, W(u)]\cdot R\psi\|_{\mathcal{L}^r}^2 \cdot N_q^{r-2} + \frac{r-1}{r}\int_B \psi_\infty \left|\nabla_R\left(\frac{\Delta_q \psi}{\psi_\infty}\right)^{r/2}\right|^2dR, \end{array}$$ $$J_2 \leq C\|[u\cdot \nabla, \Delta_q]\psi\|_{\mathcal{L}^r}\cdot N_q^{r-1}.$$ Hence $$\partial_t N_q^r + u\cdot\nabla N_q^r \leq C\|[\Delta_q, W(u)]\cdot R\psi\|_{\mathcal{L}^r}^2\cdot N_q^{r-2} + C \|[u\cdot \nabla, \Delta_q]\psi\|_{\mathcal{L}^r}\cdot N_q^{r-1},$$ which implies that \begin{equation}\label{3-7}\begin{array}{ll}2^{\alpha q r} \|N_q\|_{L^\infty}^r(t) \leq & \displaystyle C 2^{\alpha q r} \int_0^t \|[\Delta_q, W(u)]\cdot R\psi\|_{L^\infty_x(\mathcal{L}^r)}^2 \cdot \|N_q\|_{L^\infty}^{r-2} (s) ds\\ [3mm] & \displaystyle + C 2^{\alpha qr} \int_0^t \|[u\cdot\nabla, \Delta_q]\psi\|_{L^\infty_x(\mathcal{L}^r)} \cdot \|N_q\|_{L^\infty}^{r-1}(s)ds.\end{array}\end{equation} Note that $$\begin{array}{ll} \left|[\Delta_q, W(u)]\cdot R\psi\right|& =\displaystyle \left|\int_{\mathbb{R}^2} h(y) [W(u)(x) - W(u)(x- 2^{-q}y) ]\cdot (R\psi(x- 2^{-q} y))dy\right|\\ [3mm] & \displaystyle \leq C 2^{-\alpha q} \|W(u)\|_{\dot{C}^\alpha} \int_{\mathbb{R}^2} |h(y)|\cdot|R\psi(x-2^{-q}y)|dy. \end{array}$$ Then we get $$\|[\Delta_q, W(u)]\cdot R\psi\|_{L^\infty_x(\mathcal{L}^r)} \leq C\|\nabla u\|_{\dot{C}^\alpha}\|\psi\|_{L^\infty_x(\mathcal{L}^r)} \cdot 2^{-\alpha q}\leq C\|\nabla u\|_{\dot{C}^\alpha}\cdot 2^{-\alpha q}.$$ By Bony's para-product formula, $$\begin{array}{ll} |[u\cdot \nabla, \Delta_q]\psi| & \leq \displaystyle \sum_{p\geq q-3}\sum_{|p-q^{\prime}|\leq 1}\left| [\Delta_p u\cdot \nabla , \Delta_q ]\Delta_{q^{\prime}} \psi\right| \\[5mm] & \displaystyle+ \sum_{|q- q^{\prime}|\leq 5}\left|[S_{q^{\prime}-1} u\cdot \nabla, \Delta_q]\Delta_{q^{\prime}}\psi\right| + \displaystyle\sum_{|q- q^{\prime}|\leq 5}\left| [\Delta_{q^{\prime}}u\cdot\nabla,
\Delta_q]S_{q^{\prime}-1}\psi\right| \end{array}$$ with each term having the following estimate: Since $$\begin{array}{ll}&[\Delta_p u\cdot\nabla, \Delta_q]\Delta_{q^{\prime}} \psi(s, x, R)\\[3mm] =& \displaystyle\int_{\mathbb{R}^2}h(y) [\Delta_p u(s,x) - \Delta_pu(s, x-2^{-q}y)]\cdot \nabla \Delta_{q^{\prime}} \psi(s, x-2^{-q}y, R) dy,\end{array}$$ and $$\nabla \Delta_{q^{\prime}}\psi(x) = \displaystyle \int_{\mathbb{R}^2} 2^{2q^{\prime}}h(2^{q^{\prime}}y)\nabla\psi(x-y) dy = -\int_{\mathbb{R}^2}2^{3q^{\prime}}\nabla h(2^{q^{\prime}}y)\psi(x-y) dy$$ then $$\begin{array}{ll} & \displaystyle\sum_{p\geq q-3}\sum_{|p-q^{\prime}|\leq 1} 2^{\alpha q}\| [\Delta_p u\cdot \nabla , \Delta_q ]\Delta_{q^{\prime}} \psi\|_{L^\infty_x(\mathcal{L}^r)} (s) \\[5mm] \leq & C\displaystyle \sum_{p\geq q-3} \sum_{|p-q^{\prime}|\leq 1}2^{\alpha q}2^{-q} \|\nabla \Delta_p u\|_{L^\infty}(s) \cdot \|\nabla\Delta_{q^{\prime}}\psi\|_{L^\infty_x(\mathcal{L}^r)}(s)\\[5mm]\leq &C \displaystyle\sum_{p\geq q-3}\sum_{|p-q^{\prime}|\leq 1} 2^{\alpha q} 2^{-q}\|\nabla \Delta_p u\|_{L^\infty}(s)\cdot 2^{q^{\prime}}\| \psi\|_{L^\infty_x(\mathcal{L}^r)}(s)\\ [5mm]\leq & C\displaystyle \sum_{p\geq q-3}2^{\alpha p} \|\nabla \Delta_p u\|_{L^\infty}(s)\cdot 2^{\alpha(q-p)}\|\psi\|_{L_x^\infty(\mathcal{L}^r)}(s) \\[4mm] \leq & CA(s) \cdot \|\psi\|_{L^\infty_x(\mathcal{L}^r)}(s)\leq CA(s). \end{array}$$ Similarly, $$\begin{array}{ll}& \displaystyle\sum_{|q-q^{\prime}|\leq 5}2^{\alpha q} \|[S_{q^{\prime}-1} u\cdot\nabla, \Delta_q]\Delta_{q^{\prime}} \psi\|_{L^\infty_x(\mathcal{L}^r)}(s) \\[5mm]\leq & \displaystyle C\sum_{|q-q^{\prime}|\leq 5}2^{\alpha q-q} \|\nabla S_{q^{\prime}-1} u\|_{L^\infty}(s)\|\nabla \Delta_{q^{\prime}} \psi\|_{L^\infty_x(\mathcal{L}^r)}(s)\\[5mm] \leq & \displaystyle C\sum_{|q-q^{\prime}|\leq 5}\|\nabla u\|_{L^\infty}(s) \cdot 2^{\alpha q^{\prime}} \|\Delta_{q^{\prime}}\psi\|_{L^\infty_x(\mathcal{L}^r)}(s)\\[5mm] \leq & C\|\nabla u\|_{L^\infty}(s)\cdot D(s). 
\end{array}$$ $$\begin{array}{ll}& \displaystyle\sum_{|q^{\prime}-q|\leq 5}2^{\alpha q} \|[\Delta_{q^{\prime}}u\cdot\nabla, \Delta_q]S_{q^{\prime} -1}\psi\|_{L^\infty_x(\mathcal{L}^r)}(s)\\[5mm] \leq & C \displaystyle \sum_{|q^{\prime}-q|\leq 5}2^{\alpha q - q}\|\nabla \Delta_{q^{\prime}}u\|_{L^\infty}(s)\cdot\|S_{q^{\prime}-1}\nabla \psi\|_{L^\infty_x(\mathcal{L}^r)}(s)\\[5mm] \leq & C A(s)\| \psi\|_{L^\infty_x(\mathcal{L}^r)}(s)\leq CA(s). \end{array}$$ Therefore, taking the supremum of (\ref{3-7}) with respect to $q$, by Young's inequality and (\ref{3-5}), $$\begin{array}{ll} D(t)^r & \leq \displaystyle\int_0^t C\left[A(s)^r + D(s)^r\right] ds + C\int_0^t\|\nabla u\|_{L^\infty}(s) D(s)^r ds\\ & \leq \displaystyle \int_0^t C\left[1+D(s)^r\right]+ C\|\nabla u\|_{L^\infty}D(s)^r ds. \end{array}$$ Then Gronwall's inequality implies that $$D(t)\leq C(D(0)+1) e^{C\int_0^t (\|\nabla u\|_{L^\infty} + 1)ds}.$$ Hence, applying Lemma \ref{lemma 2.2} on $[t_*, t]$ with $t_* = t_0(\epsilon)$ given by (\ref{3-add}), $$\begin{array}{ll} e + D(t) & \leq C(D(0) + 1)C_*\exp\{C \int_{t_*}^t( \|\nabla u\|_{L^\infty}+1)ds\}\\[3mm] & \leq CC_* (D(0)+1) \exp\{C(1+ \int_{t_*}^t \|u\|_{L^2}ds+ \epsilon \ln(e+ (t-t_*)A(t)) )\}\\[3mm] & \leq CC_*(D(0)+1)\exp\{C[1+ \epsilon \ln(e+D(t))]\}\\[3mm] &\leq CC_* (D(0) +1)(e+ D(t))^{C\epsilon},\end{array}$$ where $C_*$ is some positive constant depending on the solution $u$ on $[0, t_*]$. Choosing $\epsilon = \frac{1}{2C}$ and absorbing the resulting factor $(e+D(t))^{\frac12}$ into the left-hand side, $$D(T) \leq [CC_*(D(0)+1)]^2.$$ Then by (\ref{3-5}), $A(T)$ is bounded, which implies that $\|\nabla u\|_{L^\infty}$ is bounded on $[0,T]$ since $$\|\nabla u\|_{L^\infty}\leq C\left(\|u\|_{L^2}+ \|u\|_{\dot{C}^{1+\alpha}}\right).$$ \section{Proof of Theorem 1.2} The proof of Theorem \ref{thm 1.2} is similar to that of Theorem \ref{thm 1.1}, so we only give a sketch. As above, to get the global existence we only need to control $\|\nabla u\|_{L^\infty},$ see \cite{CMa}.
\vspace{2mm}\textbf{Step I~~Uniform estimates for $\psi$ and $\tau$} Integrating the third equation of (\ref{1.2}) over $M$, we get $$\partial_t \int_M\psi(t,x,m)dm + u\cdot \nabla\int_M \psi(t,x,m) dm =0.$$ By the maximum principle for evolution equations, $\psi$ is always nonnegative. Hence \begin{equation}\label{4.1}\|\psi\|_{L_x^1\cap L_x^\infty(L^1(M))}(t) = \|\psi_0\|_{L_x^1\cap L_x^\infty(L^1(M))},\end{equation} \begin{equation}\label{4.2}\|\tau\|_{L^\infty(0,T; L^2)}\leq C\left(\|\psi_0\|_{L_x^4(L^1(M))}^2 + \|\psi_0\|_{L_x^2(L^1(M))}\right),\end{equation} \begin{equation}\label{4.3} \|\tau\|_{L^\infty(0,T; L^\infty)}\leq C(\|\psi_0\|_{L_x^\infty(L^1(M))}^2 + 1).\end{equation} Here (\ref{4.2}) and (\ref{4.3}) follow from (\ref{1.2-1}) and (\ref{4.1}), using the smoothness of $\gamma_{ij}^{(1)}$ and $\gamma_{ij}^{(2)}$. Since $s>\frac{d}{2} +1$ and $M$ is a smooth compact manifold without boundary, \begin{equation}\label{4.4} \|\psi\|_{L_{t,x}^\infty(H^{-s}(M))}\leq C\|\psi_0\|_{L_x^\infty(L^1(M))}.\end{equation} \vspace{2mm}\textbf{Step II~~A priori estimates for $u$} Applying Lemma \ref{lemma 3.2} and the estimates (\ref{4.2}) and (\ref{4.3}), we get that $$u\in L^\infty(0,T; L^2)\cap L^2(0,T; \dot{H}^1)\cap \tilde{L}^1(0,T; \dot{C}^1)$$ and $\forall\ \epsilon >0$, there exists $t_0(\epsilon)\in (0,T),$ such that $\|u\|_{\tilde{L}^1(t_0, T; \dot{C}^1)}\leq \epsilon.$ \vspace{2mm}\textbf{Step III~~H\"{o}lder estimates for $u$} Denote $$H= (-\Delta_g + I)^{-\frac{s}{2}}, \ \ \ N_q^2(t,x) = \int_M |H\Delta_q \psi(t,x,m)|^2 dm,$$ $$A(t) = \sup_{0\leq s <t}\|u(s,\cdot)\|_{\dot{C}^{1+\alpha}},\ \ \ B(t)= \sup_{0\leq s<t }\|\tau(s, \cdot)\|_{\dot{C}^\alpha},$$ $$D(t) = \sup_{0\leq s<t }\sup_{q\in\mathbb{Z}}2^{\alpha q}\|N_q(s,\cdot)\|_{L^\infty}.$$ As in section 3, we have the estimates $$A(t)\leq C \left(\|u_0\|_{\dot{C}^{1+\alpha}} + B(t)\right)\leq C(1+B(t)),$$ and by (\ref{4.4}), $\forall q\in \mathbb{Z}$, $$\|N_q(t,\cdot)\|_{L^\infty}\leq C\|H\psi(t, \cdot, \cdot)\|_{L_x^\infty(L^2(M))}\leq C\|\psi_0\|_{L_x^\infty(L^1(M))},$$ $$\begin{array}{l}\ \ \ \ 2^{\alpha q}\|\Delta_q \tau_{ij}(s,x)\|_{L^\infty} \\[3mm]
\leq 2^{\alpha q}\left\|\int_M\int_M H_{m_1}^{-1}H_{m_2}^{-1} \gamma_{ij}^{(2)}\Delta_q (H_{m_1}\psi(s,x,m_1) H_{m_2}\psi(s,x,m_2))dm_1 dm_2\right\|_{L^\infty}\\[3mm] \ \ +2^{\alpha q}\left\|\int_M H^{-1}\gamma_{ij}^{(1)}(m) H\Delta_q\psi(s,x,m)dm\right\|_{L^\infty} \\[3mm] \leq \displaystyle 2^{\alpha q}\sum_{|p-q|\leq 5}\left\|\int_M\int_M H_{m_1}^{-1}H_{m_2}^{-1} \gamma_{ij}^{(2)}(m_1, m_2) S_{p-1}H\psi(m_1) \Delta_p H\psi(m_2) dm_1 dm_2 \right\|_{L^\infty}\\[4mm] \ \ +\displaystyle 2^{\alpha q}\sum_{|p-q|\leq 5}\left\|\int_M\int_M H_{m_1}^{-1}H_{m_2}^{-1} \gamma_{ij}^{(2)}(m_1,m_2) \Delta_p H\psi(m_1)S_{p-1}H\psi(m_2) dm_1dm_2\right\|_{L^\infty}\\[3mm] \ \ +\displaystyle 2^{\alpha q}\sum_{p\geq q-3}\sum_{|p-r|\leq 1} \left\| \int_M\int_M H_{m_1}^{-1}H_{m_2}^{-1}\gamma_{ij}^{(2)} \Delta_pH\psi(m_1)\Delta_rH\psi(m_2) dm_1dm_2\right\|_{L^\infty}\\[4mm]\ \ + C 2^{\alpha q}\|N_q(s,\cdot)\|_{L^\infty}\\[2mm] \leq CD(s) \|H\psi\|_{L_x^\infty(L^2(M))}(s)+ CD(s)\leq CD(s) \end{array}$$ which implies that $$B(t)\leq CD(t).$$ Therefore, \begin{equation}\label{4.5} A(t)\leq C(1+D(t)). 
\end{equation} \vspace{2mm}\textbf{Step IV~~H\"{o}lder estimates for $\psi$} Applying the operators $H$ and $\Delta_q$ to the third equation, multiplying by $\Delta_q H\psi$ and integrating over $M$, we get $$\begin{array}{l}\ \ \ \displaystyle \frac{1}{2}\partial_t\int_{M}|\Delta_q H\psi|^2 dm + \frac12 u\cdot \nabla \int_{M}|\Delta_q H\psi|^2dm + \int_{M}|\nabla_g \Delta_q H\psi|^2dm \\[3mm]=\displaystyle \int_M [u\cdot \nabla, \Delta_q]H\psi\cdot \Delta_q H\psi dm -\int_M \Delta_q H {\rm div}_g~(G(u,\psi)\psi)\cdot \Delta_q H\psi dm\\[3mm] = \displaystyle \int_M [u\cdot \nabla, \Delta_q]H\psi\cdot \Delta_q H\psi dm -\partial_j u_i\int_M H{\rm div}_g~(c_\alpha^{ij}\Delta_q \psi)\cdot H\Delta_q \psi dm \\[4mm] \ \ \ \ \displaystyle +\int_M [\partial_j u_i, \Delta_q ]H(c_\alpha^{ij}\psi)\cdot \nabla_g\Delta_q H\psi dm +\int_M \Delta_q (\nabla_g \mathcal{U} H\psi) \cdot \nabla_g \Delta_q H\psi dm\\[4mm]\ \ \ \ +\displaystyle \int_M \Delta_q [H\nabla_g \mathcal{U}, H^{-1}]H\psi\cdot \nabla_g \Delta_q H\psi dm \end{array}$$ By Young's inequality, we have \begin{equation}\label{4-add}\begin{array}{l}\ \ \ \ \ \ \frac12\|\Delta_q H\psi\|_{L_x^\infty(L^2(M))}^2(t) \\[3mm]\leq \displaystyle 2\int_0^t \left\| \int_M [u\cdot\nabla, \Delta_q] H\psi\cdot \Delta_q H\psi dm \right\|_{L^\infty}ds \\[3mm]\ +\displaystyle \int_0^t \left\|\partial_j u_i\int_M H{\rm div}_g~(c_\alpha^{ij} \Delta_q \psi)\Delta_q H\psi dm \right\|_{L^\infty}ds\\[3mm]\ + \displaystyle\frac34\int_0^t \left\|[\partial_j u_i, \Delta_q] H(c_\alpha^{ij}\psi)\right\|_{L^\infty_x(L^2(M))}^2ds\\[3mm]\displaystyle\ +\frac34 \int_0^t\left\| \Delta_q (\nabla_g\mathcal{U}H\psi)\right\|_{L^\infty_x(L^2(M))}^2ds +\frac34 \int_0^t\left\|\Delta_q ([H\nabla_g \mathcal{U}, H^{-1}]H\psi)\right\|_{L^\infty_x(L^2(M))}^2ds\\[4mm] \triangleq\displaystyle\int_0^t (J_1 + J_2 + J_3 + J_4 +J_5) ds.
\end{array}\end{equation} As in Section 3, applying (\ref{4.5}), for every $q\in \mathbb{Z}$, $$\begin{array}{ll}2^{2\alpha q}J_1(s) &\leq CA(s)\|H\psi\|_{L_x^\infty(L^2(M))}\cdot D(s) + C\|\nabla u\|_{L^\infty}\cdot D(s)^2 \\[2mm]&\leq C(D(s)^2+1)(\|\nabla u\|_{L^\infty}(s) + 1).\end{array}$$ $J_2$, $J_4$ and $J_5$ are estimated as in \cite{CFTZ}, $$\begin{array}{ll}&\displaystyle \left|\int_M H {\rm div}_g (c_{\alpha}^{ij}\Delta_q \psi ) \cdot \Delta_q H\psi dm\right|\\[3mm] \leq & \displaystyle\left|\int_M {\rm div}_g~(c_\alpha^{ij}H\Delta_q \psi)\cdot \Delta_q H\psi dm\right|+ \left|\int_M \left([H{\rm div}_g c_\alpha^{ij}, H^{-1}]H\Delta_q \psi\right)\cdot \Delta_q H\psi dm\right|\\[4mm]\leq & \displaystyle \left| \int_M \frac{1}{2}({\rm div}_g c_\alpha^{ij})|\Delta_q H\psi|^2 dm\right|+ \left|\int_M \left([H{\rm div}_g c_\alpha^{ij}, H^{-1}]H\Delta_q \psi\right)\cdot \Delta_q H\psi dm\right|\\[4mm] \leq & C\|\Delta_q H\psi\|_{L^2(M)}^2,\end{array}$$which implies that $$2^{2\alpha q}J_2(s)\leq C 2^{2\alpha q}\|\nabla u\|_{L^\infty}(s)\cdot N_q^2(s) \leq C\|\nabla u\|_{L^\infty}(s)\cdot D(s)^2.$$ Using Bony's decomposition, $$\begin{array}{l}\ \ \ \ 2^{\alpha q}\|\Delta_q (\nabla_g \mathcal{U} H\psi)\|_{L^\infty_x(L^2(M))} \\[2mm]\leq 2^{\alpha q}\displaystyle \sum_{|p-q|\leq 5}\left\|S_{p-1}\nabla_g \mathcal{U} \cdot \Delta_p H\psi\right\|_{L_x^\infty(L^2(M))} \\[4mm]\ \ \ \ \displaystyle + \sum_{|p-q|\leq 5}\left\|\Delta_p \nabla_g \mathcal{U}\cdot S_{p-1}H\psi\right\|_{L_x^\infty(L^2(M))}\\[4mm] \displaystyle\ \ \ \ + \sum_{p\geq q-3}\sum_{|p-r|\leq 1}\left\|\Delta_p \nabla_g \mathcal{U} \cdot \Delta_r H\psi\right\|_{L_x^\infty(L^2(M))}\\[3mm] \displaystyle\leq C\|\nabla_g \mathcal{U}\|_{L_x^\infty(L^2(M))}\cdot \sup_p 2^{\alpha p}\|\Delta_p H\psi\|_{L_x^\infty(L^2(M))}\\[3mm]\ \ \ \ +\displaystyle C\sup_p 2^{\alpha p} \|\Delta_p \nabla_g \mathcal{U}\|_{L_x^\infty(L^2(M))}\cdot \|H\psi\|_{L^\infty_x(L^2(M))}.\end{array}$$ Combining the relationship between
$\mathcal{U}$ and $\psi$, $$2^{2\alpha q} J_4(s) \leq C2^{2\alpha q}\|H\psi\|_{L_x^\infty(L^2(M))}^2(s) \cdot N_q^2(s)\leq CD(s)^2.$$ Similarly, $$2^{2\alpha q} J_5(s)\leq C\|H\psi\|_{L_x^\infty(L^2(M))}^2(s) \cdot \sup_p 2^{2\alpha p}N_p^2(s)\leq CD(s)^2.$$ Since $$[\partial_j u_i, \Delta_q]H(c_\alpha^{ij}\psi)= \int_{\mathbb{R}^2} h(y)[\partial_j u_i(x) - \partial_j u_i(x-2^{-q}y)]H(c_\alpha^{ij}\psi)(x-2^{-q}y, m)dy,$$ $$\begin{array}{ll}2^{2\alpha q}J_3(s) &\leq C2^{2\alpha q}\cdot 2^{-2\alpha q}\|\nabla u\|_{\dot{C}^\alpha}^2(s)\|H(c_\alpha^{ij}\psi)\|_{L^\infty_x(L^2(M))}^2(s)\\[3mm]& \leq CA(s)^2 \left[\|c_\alpha^{ij} H\psi\|_{L^\infty_x(L^2(M))}^2 + \|H[c_\alpha^{ij}, H^{-1}]H\psi\|_{L^\infty_x(L^2(M))}^2\right]\\[3mm]&\leq CA(s)^2 \|H\psi\|_{L^\infty_x(L^2(M))}^2\\[2mm] &\leq CA(s)^2. \end{array} $$ Therefore, taking the supremum of $(\ref{4-add})\times 2^{2\alpha q}$ with respect to $t$ and $q$, $$D(t)^2 \leq C\int_0^t (\|\nabla u\|_{L^\infty}+1)(1+ D(s)^2) ds.$$ The remainder of the proof is the same as in Section 3; we omit the details. \section*{Acknowledgement} The work was in part supported by NSFC (grants No. 10801029 and 10911120384), FANEDD, Shanghai Rising Star Program (10QA1400300) and SGST 09DZ2272900. Part of the work was done when Zhen Lei was visiting the Institute of Mathematical Sciences of CUHK. Zhen Lei would like to thank Professor Zhouping Xin and the Institute for their hospitality. \vspace{2mm}
\section{Introduction} \label{sec:intro} \ps{Scaling challenges: Economics} CMOS technology scaling enables us to build chips with an ever-increasing transistor density. The main advantage of transitioning to a more advanced technology node is that it allows us to pack more transistors and hence more performance into chips of the same size. The downside of this transition is the increased complexity of the physical design, verification, firmware, and mask sets. As a result, the non-recurring cost almost doubles whenever we transition to a more advanced technology node \cite{design-cost-exp}. Another challenge of manufacturing chips in advanced technology nodes is the high defect rate which diminishes the yield and increases the recurring cost. Due to these trends, making the design and fabrication of chips in bleeding-edge technology nodes economically viable has become a real challenge. \ps{Chiplet motivation: Solve scaling challenges} A promising solution to this challenge is the disaggregation of monolithic chips into \gls{mcms}. Current trends show that the only chips that keep up with Moore's law \cite{mooreslaw} are \gls{mcms}~\cite{lookingglass2022}. One strategy to create multi-chip modules is 2.5D integration in which the chip is disaggregated into multiple chiplets which are connected through an organic packaging substrate or silicon interposer. 2.5D integration has various economical advantages: \begin{itemize} \item \textbf{Heterogeneity}: Different chiplets can be implemented in different technology nodes. Here, subcircuits that cannot take advantage of transistor scaling, e.g., I/O drivers, are fabricated in more mature technology nodes with lower non-recurring cost and higher yield.
\item \textbf{Reuse}: A given chiplet can be used in multiple designs. For example, we do not need to redesign the aforementioned I/O chiplet when the rest of the chip is transitioned to a more advanced technology node. As demonstrated by AMD \cite{amd-chiplets}, we can use the same compute-chiplet in multiple products with varying core-counts. Reuse avoids redesigning components, further reducing the non-recurring cost. \item \textbf{Improved Yield}: A single fabrication defect can render a whole die useless, whether it is a chiplet or a monolithic chip. Since chiplets are smaller than monolithic chips, 2.5D integration reduces the area loss due to fabrication defects, hence improving the yield. \item \textbf{Binning}: Power- and frequency-binning are important strategies to deal with parametric variation. In binning, chips are grouped into different bins (e.g., based on power consumption or maximum clock frequency) which are then priced differently. In 2.5D integration, binning is done on a per-chiplet scale, increasing the total revenue. \end{itemize} \ps{Chiplet challenges: Interconnect performance} While 2.5D integration has many economical benefits, it also comes with technological challenges. One such challenge is the fact that a \gls{d2d} link requires a \gls{phy} interface in both the sending and the receiving chiplet. As a consequence, the total silicon area and power consumption of all chiplets combined exceed the area and power of a monolithic chip with the same functionality. However, the additional cost due to the \gls{phy}'s area and power overhead is often compensated by the other economical benefits of 2.5D integration. A more important challenge is creating a high-bandwidth and low-latency \gls{ici}. To connect chiplets to the package substrate, \gls{c4} bumps are used and to connect them to a silicon interposer, one uses micro-bumps. 
The minimum pitch of these bumps limits the number of bumps per mm$^2$ of chiplet area which limits the number and bandwidth of \gls{d2d} links. As a consequence, \gls{d2d} links are the bottleneck of the \gls{ici}. \ps{Why shapes and arrangements matter} Since the \gls{d2d} links limit the \gls{ici} data width, we want to operate them at the highest frequency possible to maximize their throughput. To run such links at high frequencies without introducing unacceptable bit error rates, we must limit their length to a minimum \cite{bow, ucie}. The length of \gls{d2d} links is minimized if we only connect adjacent chiplets. However, with such restricted connections, the shape and arrangement of chiplets has a significant impact on the performance of the \gls{ici}. \ps{Our contributions} In this paper, we analyze how to shape and arrange chiplets to maximize the \gls {ici} performance. We make the following contributions: \begin{itemize} \item \textbf{Problem Statement}: We formulate a detailed problem statement including economics- and technology-driven constraints for the shape of chiplets and proxies for the \gls{ici} performance (Section \ref{sec:problem}). \item \textbf{HexaMesh}: We address the above problem by proposing the HexaMesh arrangement. HexaMesh asymptotically reduces the network diameter by $42\%$ and improves the bisection bandwidth by $130\%$ compared to a grid arrangement (Section \ref{sec:proposal}). \item \textbf{\gls{d2d} link model}: Instead of only relying on network diameter and bisection bandwidth as performance proxies, we want to consider implementation details of the \gls{ici}. To do so, we introduce our model to estimate the bandwidth of \gls{d2d} links (Section \ref{sec:model}). \item \textbf{Evaluation}: We combine link bandwidth estimates using our model and cycle-level simulations using BookSim2 \cite{booksim} to compare HexaMesh to a 2D grid. 
On average, HexaMesh reduces the latency by $19\%$ and improves the throughput by $34\%$ (Section \ref{sec:evaluation}). \end{itemize} \section{Background on 2.5D Integration} \label{sec:back} \ps{Argue why we focus on (passive) silicon interposer and organic package substrate} In 2.5D integration, multiple chiplets are enclosed in a single package. The two most prominent techniques to provide connectivity between chiplets are organic package substrates (see Figure \ref{fig:back-integration-substrate}) and silicon interposers (see Figure \ref{fig:back-integration-interposer}). Besides these established 2.5D integration schemes, there are more complex techniques, e.g., active interposers \cite{intact}. Active interposers do not only contain wires but also transistors which allows constructing buffered wires or offloading some power management circuits from the chiplets to the interposer. However, active interposers come with additional challenges, e.g., reduced yield or thermal problems. In this work, we focus on passive silicon interposers and package substrates as they are more established. 
\begin{figure}[h] \centering \captionsetup{justification=centering} \begin{subfigure}{0.98 \columnwidth} \centering \includegraphics[width=1.0\columnwidth]{img/background/legend.drawio.pdf} \end{subfigure} \begin{subfigure}{0.32 \columnwidth} \centering \includegraphics[width=1.0\columnwidth]{img/background/monolithic.drawio.pdf} \caption{Monolithic chip.} \label{fig:back-integration-monolithic} \end{subfigure} \begin{subfigure}{0.32 \columnwidth} \centering \includegraphics[width=1.0\columnwidth]{img/background/substrate.drawio.pdf} \caption{Package substrate.} \label{fig:back-integration-substrate} \end{subfigure} \begin{subfigure}{0.32 \columnwidth} \centering \includegraphics[width=1.0\columnwidth]{img/background/interposer.drawio.pdf} \caption{Silicon interposer.} \label{fig:back-integration-interposer} \end{subfigure} \caption{Comparison of a monolithic chip and 2.5D stacked chips using a package substrate or silicon interposer (side view).} \label{fig:back-integration} \vspace{-1em} \end{figure} \ps{Explain organic package substrate} The \textbf{organic package substrate} provides connectivity between different chiplets and between chiplets and the \gls{pcb}. \gls{c4} bumps with a pitch of $150$-$200\mu$m are used to connect chiplets to the package substrate. Connections between the package substrate and the \gls{pcb} are built using solder bumps with a pitch of $500$-$1000\mu$m. The small pitch of \gls{c4} bumps enables the construction of \gls{d2d} links that offer up to $44\times$ more bandwidth than off-chip links. This shows that the bandwidth between multiple chiplets in a 2.5D stacked chip is substantially higher than the bandwidth between multiple \gls{scps} on the same \gls{pcb}. \ps{Explain pros and cons of silicon interposer} A \textbf{silicon interposer} can be added between the chiplets and the package substrate. Micro-bumps with a pitch of $30$-$60\mu$m are used to connect chiplets to the interposer. 
Regular \gls{c4} bumps with a pitch of $150$-$200\mu$m are used to connect the interposer to the package substrate. The reduced pitch of micro-bumps further enhances the throughput of \gls{d2d} links. Besides increased design and manufacturing cost, silicon interposers also come with higher signal loss compared to package substrates \cite{usr-links}. As a consequence, \gls{d2d} links in silicon interposers need to be even shorter ($\leq 2$mm \cite{ucie}) to provide low bit error rates when operated at high frequencies. \ps{Explain PHYs} \textbf{\gls{d2d} links} often use different protocols, voltage levels, and clock frequencies than the intra-chiplet interconnect. The conversion between protocols, voltage levels, and clock frequencies is performed by a \gls{phy} which is added at the start and end of each \gls{d2d} link. \gls{phy}s reside inside the chiplets and they introduce a certain area and power overhead compared to monolithic chips that do not require them. \gls{phy}s and \gls{ici} protocols have been standardized \cite{bow, ucie} to achieve interoperability between chiplets from different manufacturers. \section{The Problem of Chiplet Shape \& Arrangement} \label{sec:problem} \ps{Section intro} In this section, we formalize the problem of finding chiplet shapes and arrangements. To do so, we define technology- and economics-driven constraints for the shape of chiplets. We also introduce proxies for the performance of the \gls{ici} to be able to assess a given arrangement without making any assumptions on implementation details. \subsection{Assumptions and Scope} \label{ssec:problem-assumptions} \ps{We search arrangements for N identical chiplets} We assume that our chip consists of several identical compute-chiplets and additional chiplets for I/O drivers or other functions. We limit our scope to the search for shape and arrangement of the identical compute-chiplets.
Whenever we propose a shape and arrangement of compute-chiplets, we implicitly assume that the remaining chiplets are placed on the perimeter of our arrangement (see Figure \ref{fig:problem-assumptions}). Placing the I/O drivers close to the border of the chip is favorable because usually only solder balls at the border of the package are used for signals. As it is hard to route \gls{pcb} lanes to solder balls at the center of the package, those solder balls are often used for the power supply. \begin{figure}[h] \centering \captionsetup{justification=centering} \includegraphics[width=0.70\columnwidth]{img/problem/assumptions.drawio.pdf} \caption{We place chiplets for I/O drivers or other functions on the perimeter of a proposed arrangement of compute-chiplets (top view).} \label{fig:problem-assumptions} \vspace{-1em} \end{figure} \subsection{Constraints for Chiplet Shapes} \label{ssec:problem-constraints} \ps{Chiplets must be identical and rectangular} To ensure that we only consider designs that are economical and easy to manufacture, we identify constraints for the shape of chiplets. \begin{itemize} \item \textbf{Uniform Chiplets}: All compute-chiplets in a given arrangement must have the same shape and size. Integrating the same functionality into multiple chiplets with different shapes is technologically feasible; however, designing multiple compute-chiplets for a single product generation would increase the non-recurring cost and diminish the economical advantages of 2.5D integration. \item \textbf{Rectangular Chiplets}: All chiplets must be rectangular. Dicing methods such as stealth dicing \cite{stealth-dicing} or plasma dicing \cite{plasma-dicing} enable the fabrication of non-rectangular chiplets. However, the most common dicing method is blade dicing which can only produce rectangular chiplets. By limiting our search to rectangular chiplets, we ensure that we only consider designs with a wide range of applicability.
\end{itemize} \subsection{Proxies for Inter-Chiplet Interconnect Performance} \label{ssec:problem-proxies} \ps{Why performance proxies?} How a given arrangement of chiplets translates into performance (latency and throughput) of the \gls{ici} is not obvious. We could run simulations, but this would force us to make many assumptions, e.g., on the bump pitch, chiplet area, or \gls{ici} communication protocol details. To be able to predict the performance of an arrangement of chiplets without making any assumptions, we introduce performance proxies. \ps{Introduce graph representation of 2.5D stacked chipls} As discussed in Section \ref{sec:intro}, we want to minimize the length of \gls{d2d} links which implies that only adjacent chiplets can be connected. To be more precise, we define that only chiplets sharing a common edge can be connected. We do not allow links between chiplets that only share a common corner as this would increase the link length. Based on this definition, we represent our 2.5D stacked chip as a planar graph \cite{graphs} where vertices correspond to chiplets and edges correspond to links. Two vertices are connected by an edge whenever the corresponding chiplets are adjacent (see Figure \ref{fig:problem-example}). 
\begin{figure}[h] \centering \captionsetup{justification=centering} \begin{subfigure}{0.6 \columnwidth} \centering \includegraphics[width=1.0\columnwidth]{img/problem/example-chiplets.drawio.pdf} \caption{Arrangement of chiplets (top view).} \label{fig:probelm-example-chiplets} \end{subfigure} \begin{subfigure}{0.38 \columnwidth} \centering \includegraphics[width=0.9\columnwidth]{img/problem/example-graph.drawio.pdf} \caption{Graph representation.} \label{fig:probelm-example-graph} \end{subfigure} \caption{We represent arrangements of chiplets as graphs.} \label{fig:problem-example} \end{figure} \ps{Use network diameter and bisection bandwidth as proxies for latency and throughput} We use a 2.5D stacked chip's graph representation to obtain proxies for the latency and global throughput of the \gls{ici}. Whenever a flit\footnote{Flow control unit: Atomic amount of data transported across the network.} transitions from a chiplet to the interposer or vice-versa, it needs to be processed by a PHY which adds a certain latency. Based on this observation, we use the diameter of a chip's graph representation as a proxy for its latency and we use the bisection bandwidth of the graph representation as a proxy for the global throughput. \subsection{Problem Statement} \ps{Maximize proxies while fulfilling constraints} In this work, we want to solve the following problem: \begin{center} \textit{ Find a shape and arrangement of chiplets that maximizes the proxies for inter-chiplet interconnect performance as defined in Section \ref{ssec:problem-proxies} while satisfying all constraints from Section \ref{ssec:problem-constraints}.} \end{center} \section{Enhancing Shape and Arrangement of Chiplets} \label{sec:proposal} \ps{Section intro} We now illustrate a novel arrangement of chiplets, the \gls{cor}, which enhances the performance of the \gls{ici} while maintaining ease of manufacturing. 
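Both proxies are directly computable on the graph representation. The sketch below (a minimal illustration with hypothetical helper names, not code from the paper) builds the adjacency graph of a rectangular grid arrangement, where only edge-adjacent chiplets are connected, and computes the network diameter with breadth-first search:

```python
from collections import deque
from itertools import product

def grid_graph(rows, cols):
    """Adjacency list of a rows x cols grid of chiplets.
    Vertices are (r, c); edges connect edge-adjacent chiplets only."""
    adj = {(r, c): [] for r, c in product(range(rows), range(cols))}
    for (r, c) in adj:
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = (r + dr, c + dc)
            if n in adj:
                adj[(r, c)].append(n)
    return adj

def eccentricity(adj, src):
    """Longest shortest-path distance from src (BFS, unweighted)."""
    dist = {src: 0}
    queue = deque([src])
    while queue:
        v = queue.popleft()
        for n in adj[v]:
            if n not in dist:
                dist[n] = dist[v] + 1
                queue.append(n)
    return max(dist.values())

def diameter(adj):
    return max(eccentricity(adj, v) for v in adj)

print(diameter(grid_graph(3, 3)))  # 4: corner to opposite corner
```

The bisection bandwidth can be obtained analogously by enumerating (or, for larger graphs, approximating) balanced cuts; Section \ref{ssec:proposal-proxies} gives closed forms for the regular arrangements.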
For this, we first observe that the most straightforward way to build a 2.5D stacked chip is arranging chiplets in a 2D \gls{g}, which we use as the main baseline. We then improve the \gls{g} step by step until we arrive at the \gls{cor}. For each arrangement, we derive a shape of chiplets and a placement of \gls{c4} bumps or micro-bumps that minimizes the length of \gls{d2d} links. Finally, we discuss how to apply arrangements to arbitrary chiplet counts and we compare multiple arrangements in terms of their network diameter and bisection bandwidth (performance proxies). \subsection{Optimizing the Arrangement of Chiplets} \label{ssec:proposal-arrangement} \ps{Introduce grid, give intuition that a higher average number of neighbors is desirable} \paragraph{Grid (\texttt{G})} We show a 2D grid in Figure \ref{fig:proposal-arrangements-grid}. We observe that each non-border chiplet is connected to four other chiplets. Mathematically speaking, the average number of neighbors per chiplet goes to four as the number of chiplets goes to infinity. Intuitively, increasing the average number of neighbors per chiplet should reduce the network diameter and increase the bisection bandwidth. \ps{Argue why we discuss hexagonal chiplets} To explore arrangements that maximize the average number of neighbors per chiplet, we drop the constraint that chiplets need to be rectangular for the next paragraph. In the subsequent paragraph, we show how to fix this violation of constraints. \ps{Introduce honeycomb, show that they asymptotically maximize the avg \#neighbors} \paragraph{Honeycomb (\texttt{HC})} If we manufacture hexagonal chiplets and arrange them in a honeycomb pattern, then each non-border chiplet is connected to six other chiplets (see Figure \ref{fig:proposal-arrangements-hex}). The average number of neighbors per chiplet approaches six as the number of chiplets goes to infinity.
As we have seen in Section \ref{ssec:problem-proxies}, we can represent each arrangement of chiplets as a planar graph (a graph that can be drawn such that no edges cross each other). A fundamental theorem of graph theory states that for planar graphs with $v \geq 3$ vertices and $e$ edges, $e \leq 3v-6$ holds. We use this inequality to derive an upper bound for the average vertex degree $d_\text{avg}$ in planar graphs, which corresponds to the average number of neighbors per chiplet: \begin{equation*} \label{eq:proposal-bound-neighbors} d_\text{avg} = \frac{2e}{v} \leq \frac{2 (3v - 6)}{v} = 6 - \frac{12}{v}. \end{equation*} Asymptotically speaking, the \gls{hc} maximizes the average number of neighbors per chiplet. However, it does violate our constraints since it uses non-rectangular chiplets. \ps{Introduce brickwall, show that it has the same advantages as honeycomb but without violating any constraints.} \paragraph{Brickwall (\texttt{BW})} Arranging rectangular chiplets in a brickwall pattern (see Figure \ref{fig:proposal-arrangements-brick}) results in the same graph structure as the \gls{hc}. This enables an asymptotically optimal average number of neighbors per chiplet without violating any constraints on the shape of chiplets. \ps{Introduce circular, motivation: increasing the minimum number of neighbors, reducing the diameter.} \paragraph{HexaMesh (\texttt{HM})} We want to further optimize our arrangement of chiplets. One issue in the \gls{bw} is that there are two chiplets with only two neighbors. By arranging chiplets in a circle around one central chiplet (see Figure \ref{fig:proposal-arrangements-circle}), we can increase the minimum number of neighbors per chiplet from $2$ to $3$. An additional advantage of this arrangement is that it asymptotically reduces the network diameter by $33\%$ compared to the \gls{bw} (see Section \ref{ssec:proposal-proxies} for details).
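These neighbor counts can be cross-checked numerically. The sketch below (the axial-coordinate `hex_disk` helper is an illustrative assumption, not part of the paper) builds a hexagonal disk with $r$ rings, verifies the regular chiplet count $1 + 3r(r+1)$, and confirms that the average number of neighbors stays below the planar-graph bound $6 - 12/v$ while approaching six:

```python
def hex_disk(radius):
    """Hexagonal disk with `radius` rings around a center cell,
    in axial coordinates: cells (q, r) with |q|, |r|, |q+r| <= radius."""
    return {(q, r)
            for q in range(-radius, radius + 1)
            for r in range(-radius, radius + 1)
            if abs(q + r) <= radius}

# The six edge-adjacent neighbors of a hexagonal cell.
DIRS = ((1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1))

def avg_neighbors(cells):
    degree_sum = sum(1 for (q, r) in cells for dq, dr in DIRS
                     if (q + dq, r + dr) in cells)
    return degree_sum / len(cells)

for rings in (1, 2, 5, 20):
    cells = hex_disk(rings)
    v = len(cells)
    assert v == 1 + 3 * rings * (rings + 1)      # regular chiplet count
    assert avg_neighbors(cells) <= 6 - 12 / v    # planar-graph bound
    print(rings, v, round(avg_neighbors(cells), 2))
```

For one ring (7 chiplets) the average is $24/7 \approx 3.43$; with more rings it climbs toward the ceiling of six.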
\ps{Argue why we do not consider the honeycomb arrangement for the remainder of the paper} As the \gls{hc} violates our constraints and the \gls{bw} results in the same graph structure, we only consider the \gls{g}, \gls{bw}, and \gls{cor} from now on. \subsection{Optimizing the Shape of Chiplets} \label{ssec:proposal-shape} \ps{Subsection Intro} For each arrangement that we discussed above, we find a shape of chiplets that maximizes the performance of the \gls{ici}. \ps{Divide chiplet area into sectors based on bump usage} Recall that the \gls{ici} is built using \gls{d2d} links which are attached to chiplets using \gls{c4} bumps or micro-bumps. The bandwidth of a link is larger if the link has more bumps at its disposal. The maximum number of bumps per chiplet is proportional to the chiplet area $A_C$. A fraction $p_p \in [0,1]$ of these bumps is used for the chiplet's power supply and the remaining bumps are used for \gls{d2d} links. We divide the area of a chiplet into different sectors. Each sector contains bumps used for either the power supply or for one of the \gls{d2d} links (see Figure \ref{fig:proposal-bumps}). To make sure that all links have the same bandwidth, all sectors for bumps of \gls{d2d} links must have the same area $A_B$. \ps{Minimize distance between bumps and chiplet edge} The second shape-related factor that influences the performance of the \gls{ici} is the length of \gls{d2d} links. To minimize the link length, we minimize the maximum distance $D_B$ between a bump and the edge of the chiplet (see Figure \ref{fig:proposal-bumps}). To minimize $D_B$, we place the sector containing power bumps in the center of the chiplet and we place the sectors for bumps of \gls{d2d} links at the chiplet edges.
To make sure that all \gls{d2d} links have the same performance, we enforce that the distance $D_B$ is identical for all sectors containing link bumps. \begin{figure}[h] \vspace{-1em} \centering \captionsetup{justification=centering} \begin{subfigure}{0.43 \columnwidth} \centering \includegraphics[width=1.0\columnwidth]{img/proposal/bumps-grid.drawio.pdf} \caption{Grid (\texttt{G}).~~~~} \label{fig:proposal-bumps-grid} \end{subfigure} \begin{subfigure}{0.55 \columnwidth} \centering \includegraphics[width=0.83\columnwidth]{img/proposal/bumps-brick.drawio.pdf} \vspace{-1.0em} \caption{Brickwall (\texttt{BW}) and HexaMesh (\texttt{HM}).} \label{fig:proposal-bumps-brick} \end{subfigure} \caption{Assignment of \gls{c4} bumps or micro-bumps in chiplets.} \label{fig:proposal-bumps} \vspace{-1em} \end{figure} \ps{Bump assignment and chiplet shape for grid} \paragraph{Grid (\texttt{G})} Figure \ref{fig:proposal-bumps-grid} displays how we arrange \gls{c4} bumps or micro-bumps in chiplets of the \gls{g}. The measurements annotated in said figure guarantee that the maximum distance $D_B$ between a bump and the edge of the chiplet is identical for all links. To guarantee that all sectors for link bumps have the same area $A_B$, we require that the chiplets are square ($W_C = H_C = \sqrt{A_C}$) which implies that the sector for power-bumps is square ($W_P = H_P = \sqrt{p_p \cdot A_C}$).
The area of one sector for bumps of a \gls{d2d} link is $A_B = (1/4) (1-p_p) A_C$ and the maximum distance between a bump and the edge of the chiplet is $D_B = (W_C - W_P) / 2 = (H_C - H_P) / 2$. \ps{Bump assignment and chiplet shape for brickwall and hexamesh} \paragraph{Brickwall (\texttt{BW}) and HexaMesh (\texttt{HM})} For the \gls{bw} and \gls{cor}, we arrange the \gls{c4} bumps or micro-bumps as displayed in Figure \ref{fig:proposal-bumps-brick}. Similarly to the \gls{g}, the measurements annotated in said figure guarantee that both the area of each sector for link bumps $A_B$ and the maximum distance between a link bump and the edge of the chiplet $D_B$ are identical for all \gls{d2d} links. The area available for bumps of a given link is $A_B = (1/6) (1-p_p) A_C$. Computing the maximum distance $D_B$ between a link bump and the edge of the chiplet as well as the resulting chiplet dimensions is a bit more involved. Based on Figure \ref{fig:proposal-bumps-brick}, we set up the following system of equations: \begin{equation} \label{eq:bumps-1} H_C = 2D_B + L_B \end{equation} \begin{equation} \label{eq:bumps-2} W_C = 2L_B \end{equation} \begin{equation} \label{eq:bumps-3} W_P = W_C - 2 D_B \end{equation} \begin{equation} \label{eq:bumps-4} H_C \cdot W_C = A_C \end{equation} \begin{equation} \label{eq:bumps-5} W_P \cdot L_B = A_C \cdot p_p \end{equation} By solving this system of equations, we get the chiplet dimensions $W_C$ and $H_C$ as well as the maximum distance $D_B$ between a bump and the edge of the chiplet: \begin{equation*} \label{eq:bumps-11} W_C = \sqrt{\frac{A_C ( 2 + 4 p_p)}{3}} \quad H_C = \frac{A_C}{W_C} \quad D_B = \frac{(1-p_p)A_C}{\sqrt{A_C(6 + 12p_p)}} \end{equation*} \ps{Example for equations above} Consider an example design with a chiplet area of $A_C = 16$ mm$^2$ where a fraction $p_p = 0.4$ of all bumps are needed for the power supply.
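The closed-form solution can be verified numerically. In the sketch below (`bw_chiplet_dims` is an illustrative name, not the paper's code), the assertions re-check the closed form against the original system of equations before evaluating the example design:

```python
import math

def bw_chiplet_dims(a_c, p_p):
    """Closed-form chiplet dimensions for the brickwall/HexaMesh
    bump layout: width W_C, height H_C, max bump-to-edge distance D_B."""
    w_c = math.sqrt(a_c * (2 + 4 * p_p) / 3)
    h_c = a_c / w_c
    d_b = (1 - p_p) * a_c / math.sqrt(a_c * (6 + 12 * p_p))
    return w_c, h_c, d_b

a_c, p_p = 16.0, 0.4            # example design from the text
w_c, h_c, d_b = bw_chiplet_dims(a_c, p_p)
l_b = w_c / 2                   # from W_C = 2 L_B
w_p = w_c - 2 * d_b             # from W_P = W_C - 2 D_B
# The closed form satisfies the remaining equations of the system:
assert abs(h_c - (2 * d_b + l_b)) < 1e-9    # H_C = 2 D_B + L_B
assert abs(h_c * w_c - a_c) < 1e-9          # H_C * W_C = A_C
assert abs(w_p * l_b - a_c * p_p) < 1e-9    # W_P * L_B = A_C * p_p
print(round(w_c, 2), round(h_c, 2), round(d_b, 2))  # 4.38 3.65 0.73
```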
Our equations yield the chiplet dimensions $W_C = 4.38$ mm and $H_C = 3.65$ mm and a maximum distance of $D_B = 0.73$ mm between a bump used for \gls{d2d} links and the chiplet edge. \subsection{Applicability of Arrangements} \ps{Arrangements that only apply to certain chiplets counts are a problem} To apply the \gls{g} or \gls{bw} as depicted in Figure \ref{fig:proposal-arrangements}, the number of chiplets $N$ needs to be a square number and for the \gls{cor}, we need to have $N = 1 + 3 r (r+1)$ for some $r \in \mathbb{N}$ (if there are $r$ rings around the central chiplet where the $i$-th ring contains $6i$ chiplets, then, we have $1 + \sum_{i=1}^r 6i = 1 + 3 r (r+1)$ chiplets in total). We call such an arrangement \textit{regular}. For the \gls{g} and \gls{bw}, we could also use $R$ rows and $C$ columns of chiplets such that $RC = N$, but $R \neq C$ which results in a rectangular, non-square shape. We call this a \textit{semi-regular} arrangement. Semi-regular arrangements only make sense if $R$ and $C$ are similar; otherwise, both diameter and bisection bandwidth deteriorate. We conclude that for many chiplet-counts, there is no regular or no reasonable semi-regular arrangement. This is a problem as we want to set the number of chiplets based on technological and economical factors, not their desired arrangement. \ps{Solve this problem by using irregular arrangements} To solve this problem, we introduce \textit{irregular} arrangements. Starting from the closest smaller regular arrangement, we incrementally add more chiplets until the desired chiplet-count is reached. In the case of the \gls{g} and \gls{bw}, these additional chiplets form incomplete rows or columns, and in the case of the \gls{cor}, they form an incomplete circle.
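For the HexaMesh, this construction amounts to finding the largest regular arrangement that fits the target count and filling an incomplete outer ring with the remainder. A minimal sketch (the helper name is illustrative, not the paper's implementation):

```python
def hexamesh_base(n):
    """For a target chiplet count n >= 1, return (rings, regular_count,
    leftover): the largest regular HexaMesh with 1 + 3r(r+1) <= n
    chiplets, plus the chiplets left to place on an incomplete ring."""
    r = 0
    while 1 + 3 * (r + 1) * (r + 2) <= n:
        r += 1
    regular = 1 + 3 * r * (r + 1)
    return r, regular, n - regular

print(hexamesh_base(19))  # (2, 19, 0): regular arrangement
print(hexamesh_base(24))  # (2, 19, 5): 5 chiplets on an incomplete ring
```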
\ps{Disadvantages of irregular arrangements} For regular and semi-regular \gls{g} and \gls{bw} of $N \geq 4$ chiplets, the minimum number of neighbors per chiplet is $2$, and for regular \gls{cor} of $N \geq 7$ chiplets, the minimum number of neighbors per chiplet is $3$. Introducing irregular arrangements reduces the minimum number of neighbors per chiplet to $1$ for some \gls{g} and to $2$ for some \gls{cor}. This suggests that irregular \gls{g} and \gls{cor} might have slightly lower performance compared to their regular peers. Our analysis of performance proxies in Section \ref{ssec:proposal-proxies} will confirm this speculation (see Figure \ref{fig:proposal-theory-results}). \subsection{Analysis of Performance Proxies} \label{ssec:proposal-proxies} \ps{Formulas and asymptotic analysis for diameter of regular arrangements} \paragraph{Diameter} The diameter for a regular \gls{g}, \gls{bw}, or \gls{cor} with $N$ chiplets can be computed as follows: \begin{equation*} \label{eq:diam-grid} D_\text{G}(N) = 2\sqrt{N}-2 \end{equation*} \begin{equation*} \label{eq:diam-brickwall} D_\text{BW}(N) = 2\sqrt{N}-2-\left \lfloor \sfrac{(\sqrt{N}-1)}{2} \right \rfloor \end{equation*} \begin{equation*} \label{eq:diam-circular} D_\text{HM}(N) = \sfrac{1}{3} \sqrt{12N-3} - \sfrac{1}{2} \end{equation*} In Figure \ref{fig:proposal-diameter}, we compare the diameter of all three arrangements for chiplet counts from $1$ to $100$. The \gls{bw} has a significantly lower diameter than the \gls{g}, and the \gls{cor} further reduces the diameter. We observe that regular and semi-regular \gls{g} and \gls{cor} provide the highest chiplet-count for a given diameter. For the \gls{bw}, regular and semi-regular arrangements do not seem to have advantages over their irregular peers.
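The diameter formulas are straightforward to evaluate. The sketch below uses illustrative function names; note that $D_\text{HM}(N)$ evaluates to half-integers for regular chiplet counts and is used here as given:

```python
import math

def diam_g(n):   # grid: 2*sqrt(N) - 2
    return 2 * math.sqrt(n) - 2

def diam_bw(n):  # brickwall: 2*sqrt(N) - 2 - floor((sqrt(N) - 1) / 2)
    return 2 * math.sqrt(n) - 2 - (int(math.sqrt(n)) - 1) // 2

def diam_hm(n):  # HexaMesh: sqrt(12N - 3) / 3 - 1/2
    return math.sqrt(12 * n - 3) / 3 - 0.5

print(diam_g(9), diam_bw(9))    # 4.0 3.0 for a regular 3x3 arrangement
print(diam_hm(7), diam_hm(19))  # 2.5 4.5 for 1 and 2 rings
```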
To analyze the asymptotic behavior of the diameter of regular arrangements, we compute $\lim \limits_{N\to \infty}\frac{D_\text{BW}(N)}{D_\text{G}(N)} = \sfrac{3}{4}$ and $\lim \limits_{N\to \infty}\frac{D_\text{HM}(N)}{D_\text{G}(N)} = \sfrac{1}{\sqrt{3}}$. We conclude that asymptotically, the \gls{bw} reduces the diameter by $25\%$ and the \gls{cor} reduces the diameter by $42\%$ compared to the \gls{g}. \begin{figure}[h] \centering \captionsetup{justification=centering} \begin{subfigure}{0.98 \columnwidth} \centering \includegraphics[width=0.95\columnwidth]{img/proposal/legend.pdf} \end{subfigure} \begin{subfigure}{0.49 \columnwidth} \centering \includegraphics[width=1.0\columnwidth]{img/proposal/diameter.pdf} \caption{Network Diameter.} \label{fig:proposal-diameter} \end{subfigure} \begin{subfigure}{0.49 \columnwidth} \centering \includegraphics[width=1.0\columnwidth]{img/proposal/estimated_bisection_bandwidth.pdf} \caption{Bisection Bandwidth.} \label{fig:proposal-bandwidth} \end{subfigure} \caption{Performance proxies of chiplet arrangements.} \label{fig:proposal-theory-results} \vspace{-0.5em} \end{figure} \ps{Formulas and asymptotic analysis for bisection bandwidth of regular arrangements} \paragraph{Bisection Bandwidth} The bisection bandwidth of a regular \gls{g}, \gls{bw}, or \gls{cor} with $N$ chiplets is computed as follows: \begin{equation*} \label{eq:bw-grid} B_\text{G}(N) = \sqrt{N} \end{equation*} \begin{equation*} \label{eq:bw-brickwall} B_\text{BW}(N) = 2\sqrt{N}-1 \end{equation*} \begin{equation*} \label{eq:bw-circular} B_\text{HM}(N) = \sfrac{2}{3} \sqrt{12N-3} \end{equation*} Figure \ref{fig:proposal-bandwidth} compares the bisection bandwidth of all three arrangements for chiplet counts from $1$ to $100$. The bisection bandwidth of regular arrangements is computed using the formulas above; that of semi-regular and irregular arrangements is estimated using METIS \cite{metis}.
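As a quick numerical sanity check, the closed-form proxies for regular arrangements can be evaluated at large $N$ to recover the asymptotic ratios; a small sketch (assuming the formulas as stated, with HM denoting the hexagonal arrangement):

```python
import math

# Closed-form diameter and bisection-bandwidth proxies for regular
# arrangements with N chiplets (G = grid, BW = brick wall, HM = hexagonal).
def diam_g(N):  return 2 * math.sqrt(N) - 2
def diam_bw(N): return 2 * math.sqrt(N) - 2 - math.floor((math.sqrt(N) - 1) / 2)
def diam_hm(N): return math.sqrt(12 * N - 3) / 3 - 0.5

def bis_g(N):   return math.sqrt(N)
def bis_bw(N):  return 2 * math.sqrt(N) - 1
def bis_hm(N):  return 2 * math.sqrt(12 * N - 3) / 3

# For large N, the ratios approach the asymptotic values quoted in the text.
N = 10**8
print(round(diam_bw(N) / diam_g(N), 3))  # ~0.75  (25% lower diameter)
print(round(diam_hm(N) / diam_g(N), 3))  # ~0.577 (1/sqrt(3): 42% lower)
print(round(bis_bw(N) / bis_g(N), 3))    # ~2.0   (100% more bandwidth)
print(round(bis_hm(N) / bis_g(N), 3))    # ~2.309 (4/sqrt(3): 130% more)
```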
The \gls{bw} comes with a significantly higher bisection bandwidth compared to the \gls{g} and the \gls{cor} further improves upon the \gls{bw}. To analyze the asymptotic behavior of the bisection bandwidth of regular arrangements, we compute $\lim \limits_{N\to \infty}\frac{B_\text{BW}(N)}{B_\text{G}(N)} = 2$ and $\lim \limits_{N\to \infty}\frac{B_\text{HM}(N)}{B_\text{G}(N)} = \frac{4}{\sqrt{3}}$. Asymptotically, the \gls{bw} improves the bisection bandwidth by $100 \%$ and the \gls{cor} improves it by $130\%$ compared to the \gls{g}. \section{A Model for \gls{d2d} Links} \label{sec:model} \ps{Section Intro} The bisection bandwidth is an incomplete proxy for the global throughput, as it only considers the number of links, but not their bandwidth. Since the \gls{bw} and \gls{cor} have more \gls{d2d} links per chiplet than the \gls{g}, the number of \gls{c4} bumps or micro-bumps per link and hence the per-link bandwidth is lower for them. To estimate the link bandwidth for a given arrangement, we introduce our \gls{d2d} link model. \subsection{Model Inputs} \label{ssec:model-links} \ps{Describe model inputs} Table \ref{tab:model-inputs} lists the architectural parameters that our model needs as inputs to estimate the bandwidth of \gls{d2d} links. 
\begin{table}[h] \centering \captionsetup{justification=centering} \caption{Architectural Parameters Needed as Model Inputs.} \begin{tabular}{ll} \rowcolor{lightgray} \hline \textbf{Symbol} & \textbf{Description} \\ \hline $A_{B}$ & \makecell[l]{Area (in mm$^2$) available for C4 bumps/micro-bumps \\of one \gls{d2d} link}\\ \hline $P_{B}$ & Pitch (in mm) of a C4 bump/micro-bump\\ \hline $N_\text{ndw}$ & \makecell[l]{Number of non-data wires needed for a \gls{d2d} link \\(e.g., wires for handshake, clock, etc.)}\\ \hline $f$ & Frequency at which the \gls{d2d} links are operated\\ \hline \end{tabular} \label{tab:model-inputs} \vspace{-1em} \end{table} \subsection{Link Bandwidth Estimation} \label{ssec:model-link} \ps{Explain Model for \gls{d2d} links} We start by estimating the number of wires $N_w$ that can be built between the two chiplets. To compute $N_w$, we divide the area available for C4 bumps/micro-bumps by the squared pitch of said bumps. This estimate assumes a regular layout of bumps; a staggered layout would result in a slightly larger number of wires. To get the number of data wires $N_\text{dw}$, we subtract the number of non-data wires $N_\text{ndw}$ from the number of wires $N_w$. To estimate the link bandwidth $B$, we multiply the number of data wires $N_\text{dw}$ by the link frequency $f$. \begin{equation*} \label{eq:model-Nw} N_{w} = \frac{A_{B}}{(P_{B})^2} \qquad \qquad N_\text{dw} = N_{w} - N_\text{ndw} \qquad \qquad B = N_\text{dw} \cdot f \end{equation*} In practice, the maximum operating frequency of \gls{d2d} links depends on the length of said links. In this work, we only consider \gls{d2d} links between adjacent chiplets, whose lengths are relatively short (below $4$ mm in general, and for $N\geq10$ chiplets even below $2$ mm). Therefore, we make the operating frequency an input parameter rather than computing it based on the link's physical characteristics. \section{Evaluation} \label{sec:evaluation} \ps{Section Intro} We leverage our model for \gls{d2d} links and network simulations in BookSim2 \cite{booksim} to compare the \gls{ici} performance of different arrangements of chiplets. Many parameters used in this section are based on the UCIe protocol specifications \cite{ucie}. \subsection{Cycle-Accurate Simulations using BookSim2} \ps{Explain BookSim input parameters} \label{ssec:evaluation-booksim} We use the established, cycle-accurate BookSim2 \cite{booksim} network-on-chip simulator to estimate the latency and throughput of different chiplet arrangements. The graph representation (see Section \ref{ssec:problem-proxies}) of a given arrangement is used as an input to BookSim2. We assume that each chiplet contains two endpoints and one local router. This router can route packets between the chiplet's PHYs or between a PHY and an endpoint. We configure a link latency of $27$ cycles, which models the combined latency of the outgoing PHY, the \gls{d2d} link, and the incoming PHY (UCIe \cite{ucie} states a \gls{phy} latency of $12$-$16$ UI). Each router has a latency of $3$ cycles, $8$ virtual channels, and $8$ flit buffers.
\ps{How to extract BookSim results} BookSim2 reports the average packet latency and the saturation throughput as a percentage of the full global bandwidth. The full global bandwidth is the maximum theoretical cumulative throughput when all endpoints inject packets into the network at full rate; in our setting, it is the product of the chiplet count, the number of endpoints per chiplet, and the per-link bandwidth, which we estimate using our model for \gls{d2d} links (see next paragraph). We multiply the reported relative throughput by the full global bandwidth of the corresponding arrangement to get the saturation throughput in Tb/s. \subsection{Link Bandwidth Estimation using our Model} \label{ssec:evaluation-model} \ps{Explain \gls{d2d} link input parameters} We use our model for \gls{d2d} links to estimate the per-link bandwidth in different arrangements of chiplets. To do this, we need to specify a set of architectural parameters. We assume that the combined area of all chiplets is $A_\text{all} = 800$ mm$^2$, which is slightly below the lithographic reticle limit. For an arrangement of $N$ chiplets, we compute the chiplet area as $A_C = A_\text{all} / N$. We assume that any chiplet needs a fraction $p_p = 0.4$ of all \gls{c4} bumps for its power supply and that \gls{c4} bumps have a pitch of $P_B = 0.15$ mm. Furthermore, we assume that $N_\text{ndw} = 12$ wires per link are needed for handshake and clock (UCIe \cite{ucie} uses $2$ clock-, $1$ valid-, and $1$ track-wire per direction plus $4$ wires for the side band). Finally, we assume that \gls{d2d} links are operated at $16$ GHz (UCIe \cite{ucie} can be operated at $16$ GHz to support its maximum data rate of $32$ GT/s). The area $A_B$ available for bumps of a given \gls{d2d} link is computed using the equations derived in Section \ref{ssec:proposal-shape} (except for arrangements with $N \le 7$ chiplets, which are hand-optimized).
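Under these parameters, the link model reduces to a few lines of arithmetic. The sketch below uses a made-up bump area $A_B$ purely for illustration; in the paper, $A_B$ follows from the chiplet geometry of each arrangement:

```python
# Sketch of the D2D link-bandwidth model with the evaluation parameters:
# C4 bump pitch P_B = 0.15 mm, N_ndw = 12 non-data wires, f = 16 GHz.
def link_bandwidth_gbps(a_b_mm2, p_b_mm=0.15, n_ndw=12, f_ghz=16.0):
    n_w = int(a_b_mm2 / p_b_mm ** 2)  # total bumps/wires (regular layout)
    n_dw = max(n_w - n_ndw, 0)        # data wires left for the payload
    return n_dw * f_ghz               # one bit per wire per cycle -> Gb/s

# Hypothetical 2 mm^2 bump field for one link:
print(link_bandwidth_gbps(2.0))  # 1216.0 (i.e., ~1.2 Tb/s per link)
```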
\subsection{Discussion of Results} \label{ssec:evaluation-results} \ps{Discuss results (Figure \ref{fig:evaluation-results})} Figures \ref{fig:evaluation-results-latency-abs} and \ref{fig:evaluation-results-throughput-abs} show the latency and throughput of the \gls{g}, \gls{bw}, and \gls{cor}, for chiplet counts from $2$ to $100$. Figures \ref{fig:evaluation-results-latency-rel} and \ref{fig:evaluation-results-throughput-rel} show the latency and throughput of the \gls{bw} and \gls{cor} relative to the \gls{g} (baseline). For $N \geq 10$ chiplets, both the \gls{bw} and the \gls{cor} consistently reduce the latency by almost $20\%$ compared to the \gls{g}. On average, the throughput is increased by $12\%$ if the \gls{bw} is used and by $34\%$ if the \gls{cor} is used. We observe that the throughput relative to the \gls{g} exhibits high fluctuations---this is mainly due to the inconsistent throughput of the \gls{g} (baseline). Another observation is that in practice (throughput), the \gls{bw} and \gls{cor} do not outperform the \gls{g} by as much as their theoretical superiority (bisection bandwidth) suggests. The cause of this discrepancy is the fact that the \gls{bw} and \gls{cor} have more links per chiplet than the \gls{g} which results in fewer \gls{c4} bumps/micro-bumps per link and hence a lower per-link bandwidth compared to the \gls{g}. This difference in bandwidth is accounted for in the simulations yielding the throughput but not in the theoretical analysis yielding the bisection bandwidth. Nevertheless, we see that by using the \gls{cor}, we can significantly reduce the latency and significantly improve the throughput without adding any additional manufacturing complexity. \section{Related Work} \label{sec:related-work} \ps{Discuss related work} AMD \cite{amd-chiplets} shows how to use 2.5D integration to solve the economical challenges of technology scaling in production chips. 
Since they use no more than eight compute-chiplets per chip, they can hand-optimize their chiplet arrangement. Tesla's Dojo training tile \cite{dojo} contains $25$ chiplets, for which hand-optimizing the arrangement is most likely infeasible; it uses a 2D grid arrangement with a 2D mesh topology. The mesh only connects adjacent chiplets, which results in short links with high operating frequencies. Kite \cite{kite} is an \gls{ici} topology for 2D grid arrangements where non-adjacent chiplets are connected if the topological advantages of longer links outweigh their disadvantages due to a lower operating frequency. By introducing HexaMesh, we achieve a low network diameter and a high bisection bandwidth while only connecting adjacent chiplets. This means that we can get rid of the mesh's disadvantages (limited performance) while keeping its advantages (short, high-frequency links). Coskun et al. \cite{placement-opt} introduce a cross-layer co-optimization approach for \gls{ici} design and chiplet arrangement. For a set of existing topologies, they optimize the chiplet arrangement to maximize \gls{ici} performance and minimize manufacturing cost and operating temperature. In their approach, the chiplet arrangement depends on the topology, while in our approach the topology depends on the chiplet arrangement (we connect adjacent chiplets). The advantage of our approach is that by only connecting adjacent chiplets, we minimize the length of \gls{d2d} links, which maximizes their operating frequency. There are many works in the 2.5D integration landscape that provide contributions orthogonal to ours. Chiplet Actuary \cite{chiplet-actuary}, for example, provides a detailed cost model to analyze the economical benefits of disaggregation. This cost model could be applied together with our evaluation methodology to compare architectures both in terms of cost (Chiplet Actuary) and performance (our methodology). Dehlaghi et al.
\cite{usr-links} provide a detailed model to estimate the insertion loss and crosstalk of \gls{usr} \gls{d2d} links. Their work could be used to extend our model for \gls{d2d} links by adding predictions for the bit error rate in addition to our link bandwidth predictions. \section{Conclusion} \label{sec:conclusion} \ps{Why chiplet shapes and arrangements matter} 2.5D integration is believed to be the solution to the economical challenges of CMOS technology scaling, but it introduces a new challenge: Providing a high-performance inter-chiplet interconnect (ICI). The fact that \gls{d2d} links need to be short to run at high frequencies strongly limits the choice of \gls{ici} topologies. \ps{How we optimized chiplet shapes and arrangements} In this work, we propose HexaMesh, an arrangement of chiplets that reduces the \gls{ici}'s network diameter by $42\%$ while increasing its bisection bandwidth by $130\%$ compared to a grid arrangement. Furthermore, we introduce a model to estimate the bandwidth of \gls{d2d} links which is needed for a fair comparison of designs with varying numbers of links per chiplet. Our evaluations show that HexaMesh is not only superior to a grid arrangement in theory but also in practice, as it reduces the latency by $19\%$ on average and improves the throughput by $34\%$ on average. HexaMesh uses uniform and rectangular chiplets, which ensures that employing the HexaMesh arrangement does not increase the complexity of designing or manufacturing a chip. \section*{Acknowledgements} \label{sec:acknowledgements} This work was supported by the ETH Future Computing Laboratory (EFCL), financed by a donation from Huawei Technologies. It also received funding from the European Research Council \raisebox{-0.25em}{\includegraphics[height=1em]{erc.pdf}} (Project PSAP, No.~101002047) and from the European Union's HE research and innovation programme under the grant agreement No.~101070141 (Project GLACIATION). 
We thank Timo Schneider for help with computing infrastructure at SPCL.
\section{Introduction} We work throughout over the field of complex numbers. Our main result is the following explicit weak solution to the classical Schottky problem. \begin{mthm*} \label{main} For any $g\ge 4$, let $S_{34}$ denote the following degree $2^{3\cdot 2^{g-4}+1}$ polynomial in genus $g$ theta constants, evaluated at some period matrix $\tau$: $$ \prod_{\substack{a_\varepsilon,b_\varepsilon,c_\varepsilon=\pm 1\\ a_{0,\ldots,0}=1}}\ \sum_{\varepsilon \in({\mathbb{Z}}/2{\mathbb{Z}})^{g-4}} a_\varepsilon\left(\tc{E & 0 & 0 & 0&\varepsilon }{0 & 0 & 0 & 0&{\bf 0} }\tc{E & 0 & 0 & 0&\varepsilon }{ 1 & 1 & 1 & 1&{\bf 1} } \tc{E & 0 & 1 & 1&\varepsilon }{0 & 1 & 0 & 0&{\bf 0} }\tc{E & 0 & 1 & 1&\varepsilon }{1 & 0 & 1 & 1&{\bf 1} }\right.\cdot $$ \vskip-4mm $$ \begin{aligned} &\left.\cdot\,\tc{1+E & 1 & 0 & 0&\varepsilon }{0 & 0 & 0 & 1&{\bf 0} }\tc{1+E & 1 & 0 & 0&\varepsilon }{ 1 & 1 & 1 & 0&{\bf 1} } \tc{1+E & 1 & 1 & 1&\varepsilon }{0 & 1 & 0 & 1&{\bf 0} }\tc{1+E & 1 & 1 & 1&\varepsilon }{1 & 0 & 1 & 0&{\bf 1} }\right)^{1/2}\\ +&b_\varepsilon\left(\tc{1+E & 0 & 1 & 0&\varepsilon }{0 & 0 & 0 & 0&{\bf 0} }\tc{1+E & 0 & 1 & 0&\varepsilon }{1 & 1 & 1 & 1&{\bf 1} } \tc{1+E & 0 & 0 & 1&\varepsilon }{0 & 1 & 0 & 0&{\bf 0} }\tc{1+E & 0 & 0 & 1&\varepsilon }{1 & 0 & 1 & 1&{\bf 1} }\right.\cdot\\ &\ \ \left.\cdot\,\tc{E & 1 & 1 & 0&\varepsilon }{0 & 0 & 0 & 1&{\bf 0} }\tc{E & 1 & 1 & 0&\varepsilon }{1 & 1 & 1 & 0&{\bf 1} } \tc{E & 1 & 0 & 1&\varepsilon }{0 & 1 & 0 & 1&{\bf 0} }\tc{E & 1 & 0 &1&\varepsilon }{1 & 0 & 1 & 0&{\bf 1} } \right)^{1/2}\\ +&c_\varepsilon \left(\tc{E & 0 & 0 & 0&\varepsilon }{0 & 0 & 1 & 1&{\bf 0} }\tc{E & 0 & 0 & 0&\varepsilon }{1 & 1 & 0 & 0&{\bf 1} } \tc{E & 0 & 1 & 1&\varepsilon }{0 & 1 & 1 & 1&{\bf 0} }\tc{E & 0 & 1 & 1&\varepsilon }{1 & 0 & 0 & 0&{\bf 1} }\right.\cdot\\ &\ \ \left.\cdot\,\tc{1+E & 1 & 0 & 0&\varepsilon }{0 & 0 & 1 & 0&{\bf 0} }\tc{1+E & 1 & 0 & 0&\varepsilon }{1 & 1 & 0 & 1&{\bf 1} } \tc{1+E & 1 & 1 & 1&\varepsilon }{0 
& 1 & 1 & 0&{\bf 0} }\tc{1+E & 1 & 1 & 1&\varepsilon }{1 & 0 & 0 & 1&{\bf 1} }\right)^{1/2}, \end{aligned} $$ where for any $\varepsilon=(\varepsilon_1,\ldots,\varepsilon_k)\in({\mathbb{Z}}/2{\mathbb{Z}})^{k}$ we let $E:=\varepsilon_1+\ldots+\varepsilon_k\in{\mathbb{Z}}/2{\mathbb{Z}}$. For any $3\le j<k\le g$ let $S_{jk}$ be obtained from $S_{34}$ by swapping columns $3$ and $j$, and columns $4$ and $k$ of the characteristics of all theta constants appearing in the expression. Then the collection of equations $\lbrace S_{jk}=0\rbrace_{3\le j<k\le g}$ gives a weak solution to the Schottky problem, i.e.~the common zero locus of the modular forms $\lbrace S_{jk}\rbrace_{3\le j<k\le g}$ contains the Jacobian locus as an irreducible component. \end{mthm*} Here, and throughout the paper, we write $\bf 0$ and $\bf 1$ for strings of zeroes or ones of appropriate length. \medskip One can say that the theory we deal with here began with Riemann's papers~\cite{riemann1} and~\cite{riemann2}. In~\cite{riemann2} in particular it seems clear that Riemann was working towards understanding what we now think of as the Schottky problem, even though the main immediate application was a proof of the Jacobi inversion theorem. The field then blossomed, with a flurry of activity by A.~Krazer, W.~Wirtinger, M.~Noether, F.~Schottky, G.~Frobenius, H.~Baker, and many others --- see the many references in~\cite{rafabook}. In the middle of the 20th century the interest in the Schottky problem seems to have waned, probably due to the fact that the length of the identities increased exponentially, and not much new was discovered on the classical Schottky problem. The interest in the subject was then rekindled in the 1970s in particular by Rauch's rediscovery of~\cite{scju}, where what are now called the Schottky-Jung proportionalities were stated without proof. 
These were proven rigorously by the first author~\cite{fasc}, and their connection with the Schottky problem was discussed in~\cite{farascju}. A period of intense activity followed, including Mumford's development of the algebraic theory of the theta function, and integrable systems entering the picture. Various approaches to the Schottky problem were developed by the 1980s, and various geometric solutions to the problem were then obtained (see e.g.~the surveys~\cite{vgsurvey,grschottky} for more details). Igusa~\cite{igusagen4} and Freitag~\cite{freitaggen4} showed that Schottky's original equation~\cite{schottky} is indeed the solution to the Schottky problem in genus 4; Arbarello and De Concini~\cite{adc} showed that there exists a finite set of equations in theta constants and their derivatives that characterize Jacobians (however, making them explicit requires elimination of $3g$ complex numbers from a system of equations); Shiota~\cite{shiota} proved Novikov's conjecture characterizing Jacobians by their theta function satisfying the KP equation, and various other approaches were developed. In a spirit closest to the current paper, van Geemen~\cite{vgeemenscju} and Donagi~\cite{donagiscju} showed that the classical Schottky-Jung approach gave a weak solution to the Schottky problem --- however, their results do not lead to explicit equations, as we will explain shortly. More recently, Krichever~\cite{krichevertrisecant} proved the celebrated Welters'~\cite{welters} trisecant conjecture, characterizing Jacobians by their Kummer varieties having trisecant lines. We hope that our explicit solution of the weak Schottky problem, motivated by the viewpoint of~\cite{rafabook}, may lead to a further rejuvenation of interest in this classical subject. \medskip We now state the Schottky problem more precisely, and motivate our main theorem.
Denote by ${\mathcal M}_g$ the moduli space of curves of genus $g$ and by ${\mathcal A}_g$ the moduli space of complex principally polarized abelian varieties (ppav) of dimension $g$, so that we have the Torelli morphism $J:{\mathcal M}_g\to{\mathcal A}_g$. The {\em Schottky problem} is to characterize {\em the locus of Jacobians} ${\mathcal J}_g$, which is defined to be the closure of $J({\mathcal M}_g)$ in ${\mathcal A}_g$. The {\em classical Schottky problem}, the problem addressed by Riemann and Schottky, is to write down the defining modular forms for the Jacobian locus. More precisely, recall that theta constants with characteristics define an embedding $Th:{\mathcal A}_g(4,8)\hookrightarrow {\mathbb{P}}^{2^{g-1}(2^g+1)-1}$ of the level cover of ${\mathcal A}_g$ (see the next section for definitions and details), and the classical Schottky problem is to determine the defining ideal ${\mathcal I}_g^J$ of $Th({\mathcal J}_g(4,8))\subset Th({\mathcal A}_g(4,8))$. The {\em weak Schottky problem} is the problem of characterizing the locus of Jacobians up to extra irreducible components. Classically, this means finding an ideal $I_g$ of polynomials in theta constants, such that the zero locus of $I_g$ within $Th({\mathcal A}_g(4,8))$ contains $Th({\mathcal J}_g(4,8))$ as an irreducible component. The Schottky problem is non-trivial for $g\ge 4$ (indeed, $\dim{\mathcal J}_g=3g-3$ is strictly smaller than $\dim{\mathcal A}_g=g(g+1)/2$ precisely when $g\ge 4$), and Schottky's original equation solves the classical Schottky problem in genus 4, as discussed above. Despite many approaches to the Schottky problem having been developed, a solution to the classical Schottky problem and its weak version have remained elusive for any genus $g \ge 5$. \smallskip The Schottky-Jung proportionalities (reviewed in section 3) relate theta constants of a genus $g$ Jacobian to the theta constants of its Prym, which is a $(g-1)$-dimensional ppav.
More precisely, denote by ${\mathcal R}_g$ the moduli space of connected unramified double covers of smooth genus $g$ curves, thought of as pairs consisting of a curve $C\in{\mathcal M}_g$ and a non-zero two-torsion point $\eta$ on the Jacobian of $C$. The Prym construction is the map $Pr:{\mathcal R}_g\to{\mathcal A}_{g-1}$, and the Schottky-Jung proportionalities relate the theta constants of $Pr(C,\eta)$ and of $C$. Let ${\mathcal I}_g^A$ denote the defining ideal of $Th({\mathcal A}_g(4,8))\subset {\mathbb{P}}^{2^{g-1}(2^g+1)-1}$. Then given any element $P\in{\mathcal I}_{g-1}^A$, one applies the Schottky-Jung proportionalities to each theta constant appearing in $P$, and thus obtains, for any given $\eta$, an element $SJ^\eta(P)\in{\mathcal I}_g^J$, which we will call the corresponding Schottky-Jung identity. The {\em big Schottky} locus is defined to be the locus within $Th({\mathcal A}_g(4,8))$ defined by the equations $SJ^\eta(P)$ for all $P\in{\mathcal I}_{g-1}^A$, for {\em one fixed} $\eta$, while the {\em small Schottky} locus is the locus defined by such equations for {\em all possible} $\eta$. van Geemen~\cite{vgeemenscju} and Donagi~\cite{donagiscju} showed that respectively the small and the big Schottky loci give weak solutions to the Schottky problem, while in~\cite{donagiintermjac} Donagi showed that already in genus 5 the big Schottky locus contains an extra irreducible component, containing the locus of intermediate Jacobians of cubic threefolds. In the preprint~\cite{siegel} it is shown that in genus 5 the small Schottky locus is in fact equal to the Jacobian locus. Note, however, that for $g\ge 3$ the ideal ${\mathcal I}_g^A$ of relations among the theta constants of a general ppav is not known --- it is conjectured that it is generated by Riemann's quartic relations (see~\cite{fsm} for details), but no approaches to proving this are available. 
Thus for any $g\ge 5$ the results of van Geemen and Donagi cannot be made explicit, as one cannot write down a set of generators of ${\mathcal I}_{g-1}^A$. Our main theorem is thus the first known explicit weak solution to the classical Schottky problem. \smallskip Our equations $S_{jk}$ arise by applying the Schottky-Jung proportionalities to certain quartic identities in theta constants. We note, however, that while the usually applied case of the Schottky-Jung proportionalities is for the two-torsion point $\eta_0:=\chars{0&0&\ldots&0}{1&0&\ldots&0}$, it turns out (see remark~\ref{rem:etag}) that for our methods we need to use the two-torsion point $\eta_g:=\chars{0&\ldots&0}{1&\ldots&1}$. The polynomials $S_{jk}$ arise by applying the Schottky-Jung proportionalities for $\eta_g$ to the following quartic identity in theta constants (see remark~\ref{rem:noRiemann} for an explanation why a linear combination of Riemann's quartic relations must be used, instead of the relations themselves). \begin{prop}\label{prop:Rsum} For any $g\ge 3$, write ${\bf 0}$ for the string of $g-3$ zeroes, and let $R_{34}$ be the following quartic polynomial in theta constants, all evaluated at some period matrix $\tau$: \begin{equation}\label{eq:Rgeng} \begin{aligned} R_{34}:=\sum_{\varepsilon\in({\mathbb{Z}}/2{\mathbb{Z}})^{g-3}} &\tc{0 & 0 & 0&\varepsilon}{0 & 0 & 0&{\bf 0}}\tc{0 & 1 & 1&\varepsilon}{1 & 0 & 0&{\bf 0}} \tc{1 & 0 & 0&\varepsilon}{0 & 0 & 1&{\bf 0}}\tc{1 & 1 & 1&\varepsilon}{1 & 0 & 1&{\bf 0}}\\ -&\tc{0 & 1 & 0&\varepsilon}{0 & 0 & 0&{\bf 0}}\tc{0 & 0 & 1&\varepsilon}{1 & 0 & 0&{\bf 0}} \tc{1 & 1 & 0&\varepsilon}{0 & 0 & 1&{\bf 0}}\tc{1 & 0 & 1&\varepsilon}{1 & 0 & 1&{\bf 0}} \\ +&\tc{0 & 0 & 0&\varepsilon}{0 & 1 & 1&{\bf 0}}\tc{0 & 1 & 1&\varepsilon}{1 & 1 & 1&{\bf {\bf 0}}} \tc{1 & 0 & 0&\varepsilon}{0 & 1 & 0&{\bf 0}}\tc{1 & 1 & 1&\varepsilon}{1 & 1 & 0&{\bf 0}}. 
\end{aligned} \end{equation} Let $R_{jk}$ be obtained by permuting columns $2$ and $j-1$, and $3$ and $k-1$ in the expression of $R_{34}$. Then each $R_{jk}$ vanishes identically in $\tau$, i.e.~$R_{jk}\in{\mathcal I}_g^A$. \end{prop} This proposition is an immediate corollary of the ``doubling trick'' (given by proposition~\ref{prop:doubling}), applied to Riemann's quartic relation in genus 3; since it is a quartic polynomial in theta constants, from the results of~\cite{smrelations} it follows that each $R_{jk}$ is a linear combination of Riemann's quartic relations, as we will review and reprove below. In remark~\ref{rem:noRiemann} we will explain why this doubling trick is needed, and why simply applying the Schottky-Jung proportionalities to Riemann's quartic relations would not work. The Schottky-Jung proportionalities for~$\eta_g$ are given explicitly by~\eqref{eq:SJetag}, and since $\eta_g$ is invariant under any permutation of columns (and this is one reason for our choice of $\eta_g$ rather than $\eta_0$), we obtain lemma~\ref{lm:SR}: the statement that $S_{jk}=SJ^{\eta_g}(R_{jk})$ for any $3\le j<k\le g$. As a corollary, we thus obtain an explicit proof of the main result of Donagi's paper~\cite{donagiscju}, which implies the main result of van Geemen's paper~\cite{vgeemenscju}: \begin{cor} The big Schottky locus gives a weak solution to the Schottky problem, i.e.~the common zero locus of $SJ^{\eta_g}(R)$, for all $R\in{\mathcal I}_{g-1}^A$, contains ${\mathcal J}_g$ as an irreducible component. \end{cor} \noindent (Of course we have in fact proven that it is enough to take Riemann's quartic relations, as a subset of ${\mathcal I}_{g-1}^A$, and among those, to only take those that imply all $R_{jk}$). We prove the main theorem by expanding $S_{jk}$ near the locus of diagonal period matrices, and showing that the lowest degree terms of the expansions give a collection of what are called Poincar\'e relations. 
These are infinitesimal (i.e.~up to terms of higher order) relations satisfied by period matrices of Jacobians that are close to being diagonal. The only such relation in genus 4 was proven by Poincar\'e~\cite{poincare}. While Poincar\'e states in~\cite[pp.~298--299]{poincare} that his relation generalizes to arbitrary genus, to the best of our knowledge our work gives the first complete proof of the Poincar\'e relations for Riemann surfaces of arbitrary genus. Rauch in~\cite[p.~228]{rafabook} asks whether in general one can find Schottky(-Jung) identities that imply Poincar\'e relations, and in particular our work answers this question in the affirmative. By analyzing these lowest order terms of $S_{jk}$, we then show that in a neighborhood of a generic diagonal period matrix these equations are functionally independent, and thus that the common zero locus of all $S_{jk}$ is $(3g-3)$-dimensional, which implies that it contains the Jacobian locus as an irreducible component. \smallskip The structure of the text is as follows. In section 2 we fix the notation and review and extend the results of Fay~\cite{fayriemann} and the third author~\cite{smrelations} on the linear span of Riemann's quartic relations. The main technical result here is the doubling proposition~\ref{prop:doubling}, which allows one to pass from relations in genus $g$ to relations in genus $g+1$, and implies proposition~\ref{prop:Rsum}. In section 3 we recall the Schottky-Jung proportionalities, and give the explicit formula~\eqref{eq:SJetag} for them for the two-torsion point $\eta_g$. In section 4 we briefly recall the well-known expansion of theta constants near the locus of diagonal period matrices. In section 5 we recall the notion of Poincar\'e ``infinitesimal'' relations for periods of Riemann surfaces near diagonal matrices, and prove theorem~\ref{thm:Poincareholds}, showing that they are in fact valid infinitesimal relations for period matrices of Riemann surfaces.
The proof is by looking at the lowest order terms of the expansions of our $S_{jk}$. Finally, in section 6 we combine all of these ingredients to prove that the $S_{jk}$ are locally functionally independent near ${\mathcal D}_g$, which implies the main theorem. \subsection*{Acknowledgements} We are grateful to G.~Codogni and B.~van Geemen for interesting discussions and useful comments on the manuscript. \section{Riemann's quartic relations and their linear combinations} In this section we fix the notation for moduli spaces of curves, abelian varieties, and their covers. We then recall the theta constants, their properties, and relations among them; for details on all of this we refer to~\cite{igusabook} and~\cite{rafabook}. We denote by ${\mathcal H}_g:=\lbrace\tau\in\operatorname{Mat}_{g\times g}({\mathbb{C}}):\tau^t=\tau; \operatorname{Im} \tau>0\rbrace$ the Siegel space consisting of symmetric $g\times g$ complex matrices with positive definite imaginary part. The symplectic group $\op{Sp}(2g,{\mathbb{Z}})$ acts on ${\mathcal H}_g$, and the quotient ${\mathcal A}_g:={\mathcal H}_g/\op{Sp}(2g,{\mathbb{Z}})$ is the moduli space of complex principally polarized abelian varieties. For any even $\ell$ the level subgroup $\Gamma_g(\ell)\subset\op{Sp}(2g,{\mathbb{Z}})$ is defined to be the normal subgroup that is the kernel of the map to $\op{Sp}(2g,{\mathbb{Z}}/\ell{\mathbb{Z}})$. Furthermore, $\Gamma_g(\ell,2\ell)$ is the subgroup of $\Gamma_g(\ell)$ consisting of matrices such that the diagonals of $A^t B$ and $C^t D$ (where $\gamma\in\op{Sp}(2g,{\mathbb{Z}})$ is written in block form $\gamma=\left(\begin{smallmatrix}A&B\\ C&D\end{smallmatrix}\right)$) are congruent to zero modulo $2\ell$. The level covers of moduli of ppav are then the quotients ${\mathcal A}_g(\ell):={\mathcal H}_g/\Gamma_g(\ell)$ and ${\mathcal A}_g(\ell,2\ell):={\mathcal H}_g/\Gamma_g(\ell,2\ell)$.
Given $\varepsilon,\delta\in({\mathbb{Z}}/2{\mathbb{Z}})^g$, the theta constant with characteristics $ \chars\varepsilon\delta$ is defined as $$ \tc\varepsilon\delta(\tau):=\sum_{n\in{\mathbb{Z}}^g}\exp\left(\pi i (n+\varepsilon/2)^t\left(\tau (n+\varepsilon/2)+\delta\right)\right). $$ By an abuse of notation, we will write $\varepsilon,\delta$ as row vectors, but they will be treated as column vectors, as are all vectors used in calculations. We call $\varepsilon$ the top, and $\delta$ the bottom characteristic, and say $(g)$-characteristic when we want to emphasize the dimension in which we are working. When adding characteristics, we always mean addition in a vector space over ${\mathbb{Z}}/2{\mathbb{Z}}$. A characteristic is called even or odd depending on whether $\varepsilon^t\cdot\delta\in{\mathbb{Z}}/2{\mathbb{Z}}$ is equal to $0$ or $1$, respectively. All theta constants with odd characteristics vanish identically in $\tau$. \begin{ntn} For convenience, we denote by $K_g=({\mathbb{Z}}/2{\mathbb{Z}})^{2g}$ the set of characteristics, denote by $K_g^+,K_g^-\subset K_g$ the sets of even and odd characteristics, respectively, and let $k_g^\pm:= 2^{g-1}(2^g\pm 1)$ be the cardinalities of the sets $K_g^\pm$.
\end{ntn} Defining the action of $\op{Sp}(2g,{\mathbb{Z}})$ on characteristics via $$ \gamma\circ\chars\varepsilon\delta:=\left(\begin{smallmatrix} D& -C\\ -B& A\end{smallmatrix}\right)\chars\varepsilon\delta+\left[\begin{smallmatrix}\diag(CD^t)\\ \diag(AB^t)\end{smallmatrix}\right], $$ theta constants satisfy the following transformation formula (see~\cite{igusabook}): $$ \begin{aligned} \theta&\left[\gamma\circ\chars{\varepsilon}{\delta}\right](\gamma\circ\tau)= \kappa(\gamma)\sqrt{\det(C\tau+D)}\tc\varepsilon\delta(\tau)\\ &\cdot \exp\left(\frac{-\pi i}{4}\left( \varepsilon^t D^tB\varepsilon - 2 \delta^t C^t B\varepsilon +\delta^t C^t A\delta -2(D\varepsilon-C\delta)^t\diag(AB^t)\right)\right), \end{aligned} $$ where $\kappa$ is some eighth root of unity independent of the characteristic $\chars\varepsilon\delta$. It moreover turns out that $\kappa(\gamma)=1$ for any $\gamma\in\Gamma_g(4,8)$, and thus each theta constant is a modular form with respect to $\Gamma_g(4,8)$, which is to say that for any $\gamma\in\Gamma_g(4,8)$ $$ \tc\varepsilon\delta(\gamma\circ\tau)=\sqrt{\det(C\tau+D)}\tc\varepsilon\delta(\tau) $$ (where the square root can in fact be chosen globally). The map sending a ppav to the set of all even theta constants then defines an embedding $$ Th:{\mathcal A}_g(4,8)\hookrightarrow{\mathbb{P}}^{2^{g-1}(2^g+1)-1}. $$ The classical, and still unsolved, question of determining all relations among theta constants is that of determining the defining ideal ${\mathcal I}_g^A$ of $Th({\mathcal A}_g(4,8))\subset {\mathbb{P}}^{2^{g-1}(2^g+1)-1}$. The only known relations among theta constants are Riemann's quartic relations.
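As a purely numerical illustration of these definitions (not used in the sequel), one can evaluate genus~1 theta constants by truncating the defining series. The following Python sketch, with helper names of our own choosing, checks that the unique odd $(1)$-characteristic $\chars11$ gives an identically vanishing constant, and verifies at one sample point the simplest instance of Riemann's quartic relations, the classical genus~1 Jacobi identity recalled later in this section.

```python
import cmath

def theta_const(eps, delta, tau, N=30):
    # truncated series for the genus 1 theta constant theta[eps;delta](tau)
    return sum(cmath.exp(cmath.pi * 1j * (n + eps / 2) * (tau * (n + eps / 2) + delta))
               for n in range(-N, N + 1))

tau = 1j  # a sample point of the Siegel upper half-plane H_1
t00 = theta_const(0, 0, tau)
t01 = theta_const(0, 1, tau)
t10 = theta_const(1, 0, tau)
t11 = theta_const(1, 1, tau)

assert abs(t11) < 1e-12                             # the odd constant vanishes
assert abs(t00 ** 4 - t01 ** 4 - t10 ** 4) < 1e-12  # genus 1 Jacobi quartic
```

The truncation error is of order $e^{-\pi N^2}$ for $\tau=i$, so $N=30$ is far more than enough for double precision.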
To write them, recall the Weil pairing of two characteristics $m=\chars{\varepsilon_1}{\delta_1}$ and $n=\chars{\varepsilon_2}{\delta_2}$ defined by $$e(m,n):= (-1)^{ \varepsilon_1^t\cdot\delta_2 - \varepsilon_2^t\cdot\delta_1}.$$ Moreover, following~\cite{schottky}, for a triple of characteristics $m_1 =\chars{\varepsilon }{\delta },m_2=\chars{\alpha}{\beta}, m_3=\chars{\sigma}{\mu}$ we introduce the tricharacter $$(m_1, m_2, m_3):=(-1)^{\sum_{i=1}^g (\varepsilon_i\beta_i\mu_i+\delta_i\alpha_i\mu_i+\delta_i\beta_i\sigma_i)}.$$ Then the usual form of Riemann's quartic relation, for given characteristics $m_1,m_2,m_3$, is the identity \begin{equation}\label{eq:Riemann} 2^g\left( \chars{\varepsilon}{\delta} , \chars{\alpha}{\beta}, \chars{\sigma}{\mu} \right) \tc\varepsilon\delta\tc{\varepsilon+\alpha}{\delta+\beta}\tc{\varepsilon+\sigma}{\delta+\mu}\tc{\varepsilon+\alpha+\sigma}{\delta+\beta+\mu}= \end{equation} $$ \sum_{\chars{a}{b}\in K_g}\!\!(-1)^{(\delta+b)^t (\alpha+\sigma) }e\left(\chars{\varepsilon}{\delta} , \chars{a}{b}\right) \left(\chars{a}{b}, \chars{\alpha}{\beta}, \chars{\sigma}{\mu}\right) \tc{a}{b}\tc{a+\alpha}{b+\beta}\tc{a+\sigma}{b+\mu}\tc{a+\alpha+\sigma}{b+\beta+\mu}, $$ valid for theta constants evaluated at any $\tau\in{\mathcal H}_g$, and for any $\chars\varepsilon\delta,\chars\alpha\beta,\chars\sigma\mu$, see~\cite{igusach}. Our identities $R_{jk}$ are in fact linear combinations of these, and to see this we describe the linear span of Riemann's quartic relations, following mainly the exposition in~\cite{fayriemann} and~\cite{smrelations}. We write down the Weil pairings of every pair of characteristics $m, n \in K_g$ to obtain a square matrix $M(g)$ of size $2^{2g}$. We shall write this matrix as $$M(g):=\left(\begin{smallmatrix} M^+(g)&N(g)\\ N(g)^t &M^-(g)\end{smallmatrix}\right), $$ where the set of characteristics is ordered in such a way that the set $K_g^+$ of even characteristics appears first, followed by $K_g^-$.
Thus $M^\pm(g)$ are square matrices of size $k_g^\pm$, and $N(g)$ is a $k_g^+\times k_g^-$ matrix. We drop $(g)$ when it is understood, and recall from~\cite{fayriemann} the following \begin{prop}\label{nach} The matrix~$M$ has only two distinct eigenvalues, equal to~$\pm 2^g$, and the corresponding eigenspaces have dimensions~$k_g^\pm$, respectively. Each of the two matrices~$M^{\pm}$ has only two distinct eigenvalues, equal to~$\pm 2^g$ and~$\mp 2^{g-1}$, respectively. The corresponding eigenspaces have dimensions $\tfrac13(2^g \pm 1)(2^{g-1} \pm 1)$ and $\tfrac13(2^{2g}-1)$, respectively. Explicitly, these eigenspaces are characterized by the following equations. For $X\in {\mathbb{C}}^{k_g ^+}$ and $Y\in {\mathbb{C}}^{k_g ^-}$, we have $$M\left(\begin{smallmatrix} X\\Y\end{smallmatrix}\right)= 2^g\left(\begin{smallmatrix} X\\ Y\end{smallmatrix}\right) \iff M^- Y=2^{g-1}Y=N^t X$$ $$M\left(\begin{smallmatrix} X\\Y\end{smallmatrix}\right)= -2^g\left(\begin{smallmatrix} X\\ Y\end{smallmatrix}\right) \iff M^+ X=-2^{g-1}X=NY$$ $$M^+ X= 2^g X \iff N^t X=0,\quad M^- Y=-2^gY\iff NY=0;$$ $$M^+ X=-2^{g-1} X\mbox{ if }M^+ X+NY=0,\quad M^- Y=2^{g-1}Y\mbox{ if } N^t X-M^- Y=0.$$ \end{prop} The following result is stated in~\cite{fayriemann} without a proof. Since it is fundamental for our exposition on $R_{jk}$, we give a complete argument. \begin{lm} If $X=\lbrace x_{\chars\varepsilon\delta}\rbrace\in{\mathbb{C}}^{k_g^+}$ is an eigenvector for $M^+$ with eigenvalue $-2^{g-1}$, then the following identity holds \begin{equation}\label{eq:Riemann2} \sum_{\chars{\varepsilon}{\delta}\in K_g^+} x_{\chars{\varepsilon}{\delta}} (-1)^{\delta^t (\alpha+\sigma) } \tc\varepsilon\delta(\tau)\tc{\varepsilon+\alpha}{\delta+\beta}(\tau)\tc{\varepsilon+\sigma}{\delta+\mu}(\tau)\tc{\varepsilon+\alpha+\sigma}{\delta+\beta+\mu}(\tau)=0.
\end{equation} \end{lm} \begin{proof} Multiplying the left-hand side of Riemann's quartic relation~\eqref{eq:Riemann} by $x_{\chars{\varepsilon}{\delta}} (-1)^{\delta^t (\alpha+\sigma) }$ and then summing over all $\chars\varepsilon\delta\in K_g^+$, we obtain $$2^g \sum_{\chars{\varepsilon}{\delta}\in K_g^+}x_{\chars{\varepsilon}{\delta}} (-1)^{\delta^t (\alpha+\sigma) }\left( \chars{\varepsilon}{\delta} , \chars{\alpha}{\beta}, \chars{\sigma}{\mu} \right)\tc\varepsilon\delta\tc{\varepsilon+\alpha}{\delta+\beta}\tc{\varepsilon+\sigma}{\delta+\mu}\tc{\varepsilon+\alpha+\sigma}{\delta+\beta+\mu},$$ where from now on we consistently omit the variable $\tau\in{\mathcal H}_g$, for brevity. The corresponding sum of the right-hand sides of~\eqref{eq:Riemann} gives $$\sum_{\chars {a} {b}\in K_g}\sum_{\chars{\varepsilon}{\delta}\in K_g^+} x_{\chars{\varepsilon}{\delta}}\cdot (-1)^{(\alpha^t+\sigma^t)b}\cdot e\left(\chars{\varepsilon}{\delta}, \chars a b\right) \cdot\left( \chars{a}{b} , \chars{\alpha}{\beta}, \chars{\sigma}{\mu} \right)\cdot$$ \vskip-8mm $$\hskip6cm\cdot\tc{a}{b}\tc{a+\alpha}{b+\beta}\tc{a+\sigma}{b+\mu}\tc{a+\alpha+\sigma}{b+\beta+\mu}= $$ \vskip-3mm $$-2^{g-1} \sum_{\chars{a}{b}\in K_g^+} x_{\chars{a}{b}} (-1)^{(\alpha^t+\sigma^t)b} ( \chars{a}{b} , \chars{\alpha}{\beta}, \chars{\sigma}{\mu} ) \tc{a}{b}\tc{a+\alpha}{b+\beta}\tc{a+\sigma}{b+\mu}\tc{a+\alpha+\sigma}{b+\beta+\mu}, $$ where we have used the fact that $M^+X=-2^{g-1}X$. Thus the sum of the left-hand sides and the sum of the right-hand sides of~\eqref{eq:Riemann} are equal to $2^g$, respectively $-2^{g-1}$, times the same expression; since these two sums are equal to each other, that expression has to vanish. \end{proof} It was shown in~\cite{smrelations} that all quartic identities satisfied by theta constants arise this way --- that is, any degree four polynomial in theta constants that vanishes identically, for all $\tau\in{\mathcal H}_g$, is equal to the left-hand side of~\eqref{eq:Riemann2} for some eigenvector $X$ of $M^+$, with eigenvalue $-2^{g-1}$.
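The eigenvalue statements above can be checked by direct computation in small genus. The following Python sketch (all helper names are ours) builds $M(g)$ from the Weil pairing, verifies that $M(g)^2=4^g\cdot\operatorname{Id}$ --- which for the symmetric matrix $M$ is equivalent to the statement of proposition~\ref{nach} that its only eigenvalues are $\pm 2^g$ --- and checks in genus $1$ that the vector $(1,-1,-1)^t$, which is the single column of $N(1)$, is an eigenvector of $M^+(1)$ with eigenvalue $-2^{g-1}=-1$.

```python
from itertools import product

def parity(m):
    e, d = m
    return sum(a * b for a, b in zip(e, d)) % 2

def weil(m, n):
    # Weil pairing e(m, n) = (-1)^(eps1 . delta2 - eps2 . delta1)
    (e1, d1), (e2, d2) = m, n
    s = sum(a * b for a, b in zip(e1, d2)) + sum(a * b for a, b in zip(e2, d1))
    return (-1) ** (s % 2)

def characteristics(g):
    # all 2^(2g) characteristics, even ones first (stable sort keeps the order)
    K = [(e, d) for e in product((0, 1), repeat=g) for d in product((0, 1), repeat=g)]
    return sorted(K, key=parity)

def M(g):
    K = characteristics(g)
    return [[weil(m, n) for n in K] for m in K]

# M(g)^2 = 4^g * Id, i.e. the symmetric matrix M(g) has eigenvalues +-2^g only
for g in (1, 2):
    Mg, size = M(g), 4 ** g
    for i in range(size):
        for j in range(size):
            entry = sum(Mg[i][k] * Mg[k][j] for k in range(size))
            assert entry == (size if i == j else 0)

# genus 1: M^+(1) is the upper-left 3x3 block, N(1) the even part of the last
# column; X = (1, -1, -1) satisfies M^+ X = -X and coincides with that column
M1 = M(1)
X = [1, -1, -1]
assert [sum(M1[i][j] * X[j] for j in range(3)) for i in range(3)] == [-1, 1, 1]
assert [M1[i][3] for i in range(3)] == X
```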
Thus our next goal is to write down explicitly all such eigenvectors. A simple computation shows that any column of the matrix $N$ is in fact such an eigenvector, and that this eigenspace is spanned by the columns of $N$, see~\cite{ak} or~\cite{aps}. To eventually obtain the ``doubling'' proposition~\ref{prop:doubling}, we write out the last column of a $g$-characteristic separately, so that we write $ \chars{\varepsilon}{\delta}\in K_g$ as $ \chars{\varepsilon'&a}{\delta'&b}$. This induces a decomposition of the set $K_g^+$ as follows: $$K_g^{+}= \left(K_{g-1}^+\oplus K_1^+\right)\sqcup \left(K_{g-1}^-\oplus K_1^- \right),$$ where we note that if the last column is equal to $\chars11$, then the $(g-1)$-characteristic must be odd, and otherwise the $(g-1)$-characteristic must be even. We can then write down the matrix $M^+$ as follows: \begin{equation}\label{eq:M+} M^+ (g)=\left({\tiny{\begin{array}{rrrr} M^+(g-1)&\ M^+(g-1)&\ M^+(g-1)&\ N(g-1)\\ M^+(g-1)&\ M^+(g-1)&-M^+(g-1)&-N(g-1)\\ M^+(g-1)&-M^+(g-1)&\ M^+(g-1)&-N(g-1)\\ N^t(g-1)&-N^t(g-1)&-N^t(g-1)&\ M^-(g-1) \end{array}}}\right). \end{equation} We can then describe the $-2^{g-1}$ eigenvectors of $M^+(g)$ as follows \begin{lm}\label{lm:us} The eigenspace of $M^+(g)$ with eigenvalue $-2^{g-1}$ is equal to the direct sum of vector spaces $U_1\oplus U_2\oplus U_3\oplus U_4$, where \begin{itemize} \item $U_1$ and $U_2$ are spanned by the vectors of the form \begin{equation}\label{eq:doublingvectors} u_1:=(X,X,0,0)^t ,\quad \mbox {and}\quad u_2:=(X,0,X,0)^t, \end{equation} respectively, for $X$ being any eigenvector of $M^+(g-1)$ with eigenvalue $-2^{g-2}$; \item $U_3$ is spanned by the vectors of the form $u_3:=(X,0,0,Y)^t$, for $\left(\begin{smallmatrix}X\\ Y\end{smallmatrix}\right)$ being any eigenvector of $M(g-1)$ with eigenvalue $- 2^{g-1}$; and \item $U_4$ is spanned by the vectors of the form $u_4:=(X,-X,-X,0)^t$, for $X$ being any eigenvector of $M^+ (g-1)$ with eigenvalue $2^{g-1}$. 
\end{itemize} \end{lm} \begin{proof} Proposition~\ref{nach} gives the dimensions of all the eigenspaces, and we verify that $\sum\dim U_i$ is equal to the dimension of the $-2^{g-1}$ eigenspace of $M^+(g)$: $$ \tfrac{2}{3}(4^{g-1}-1)+\tfrac{1}{2}(4^{g-1}-2^{g-1})+ \tfrac{1}{3} (2^{g-1}+1) (2^{g-2}+1)= \tfrac{1}{3}(4^{g}-1).$$ From the block form of the matrix $M^+(g)$ given by~\eqref{eq:M+}, and using the formulas from Proposition~\ref{nach} characterizing the appropriate eigenspaces, it is immediate to see that each $U_i$ is a subspace of the $-2^{g-1}$ eigenspace of $M^+(g)$. Thus to prove the lemma it suffices to show that the sum of the $U_i$'s is in fact a direct sum. We see this by looking at the four blocks of entries of the vectors $u_i$. It is straightforward to verify that if for some $u_i\in U_i$ the sum $u_1+u_2+u_3+u_4$ is equal to zero, then each $u_i$ is equal to zero. \end{proof} Note that the construction of the vectors of the form $u_1$ and $u_2$ can then be applied again, to increase the genus further, while having more and more copies of the original $X$ in various places. We think of this part of the statement as a ``doubling principle''. To illustrate how this lemma allows one to explicitly write down the eigenvectors, and thus all quartic identities among theta constants, we work out the low genus examples explicitly. Consider the classical unique degree 4 relation in genus 1: $$\theta^4\chars00=\theta^4\chars01+\theta^4\chars10.$$ This relation corresponds to the eigenvector $X_1=(1,-1, -1)^t$ of $M^+(1)$, with eigenvalue $-1$. Then the corresponding vector of the form $u_2$, given by~\eqref{eq:doublingvectors}, is the genus two eigenvector $X_2=(1,-1,-1, 0, 0, 0, 1, -1, -1, 0)^t\in{\mathbb{C}}^{10}={\mathbb{C}}^{k_2^+}$ of $M^+(2)$. By lemma~\ref{lm:us}, the vector $X_2$ gives rise to quartic identities among genus 2 theta constants.
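As a concrete sanity check of the doubling construction, one can verify directly that this vector $X_2$, written in the block ordering of $K_2^+$ induced by the decomposition above, is indeed a $(-2)$-eigenvector of $M^+(2)$. The Python sketch below does this; the helper names and the explicit ordering conventions are ours.

```python
from itertools import product

def weil(m, n):
    # Weil pairing e(m, n) = (-1)^(eps1 . delta2 - eps2 . delta1)
    (e1, d1), (e2, d2) = m, n
    s = sum(a * b for a, b in zip(e1, d2)) + sum(a * b for a, b in zip(e2, d1))
    return (-1) ** (s % 2)

def join(m, c):
    # append the column c = (eps, delta) to the genus 1 characteristic m
    return (m[0] + (c[0],), m[1] + (c[1],))

# even genus 1 characteristics in the order [0;0], [0;1], [1;0]; odd: [1;1]
ev1 = [((0,), (0,)), ((0,), (1,)), ((1,), (0,))]
od1 = [((1,), (1,))]

# order K_2^+ by the blocks (K_1^+ with last column [0;0], [0;1], [1;0]),
# followed by (K_1^- with last column [1;1])
K2_plus = ([join(m, (0, 0)) for m in ev1] + [join(m, (0, 1)) for m in ev1]
           + [join(m, (1, 0)) for m in ev1] + [join(m, (1, 1)) for m in od1])

M_plus = [[weil(m, n) for n in K2_plus] for m in K2_plus]

# the doubled vector u_2 = (X, 0, X, 0) built from X = (1, -1, -1)
X2 = [1, -1, -1, 0, 0, 0, 1, -1, -1, 0]
MX2 = [sum(M_plus[i][j] * X2[j] for j in range(10)) for i in range(10)]
assert MX2 == [-2 * x for x in X2]
```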
If $\chars\alpha\beta=\chars\sigma\mu=\chars{00}{00}$, equation~\eqref{eq:Riemann2} gives $$\theta^4\chars{00}{00}+\theta^4\chars{01}{00}- \theta^4\chars{10}{00}-\theta^4\chars{11}{00}-\theta^4\chars{00}{10}-\theta^4\chars{01}{10}=0,$$ while for the same $X_2$, but for $\chars\alpha\beta=\chars\sigma\mu=\chars{01}{00}$, equation~\eqref{eq:Riemann2} gives $$ 2\left( \theta^2\chars{00}{00}\theta^2\chars{01}{00}- \theta^2\chars{10}{00}\theta^2\chars{11}{00}-\theta^2\chars{00}{10}\theta^2\chars{01}{10}\right)=0.$$ Perhaps the most interesting of Riemann's quartic relations are those where all characteristics involved are different, such as \begin{equation}\label{eq:Rgen3} \begin{aligned} 0&=\tc{0&0&0}{0&0&0}\tc{0&1&1}{1&0&0}\tc{1&0&0}{0&0&1}\tc{1&1&1}{1&0&1}\\ &-\tc{0&1&0}{0&0&0}\tc{0&0&1}{1&0&0}\tc{1&1&0}{0&0&1}\tc{1&0&1}{1&0&1}\\ &+\tc{0&0&0}{0&1&1}\tc{0&1&1}{1&1&1}\tc{1&0&0}{0&1&0}\tc{1&1&1}{1&1&0}. \end{aligned} \end{equation} In the notation of equation~\eqref{eq:Riemann2}, this identity arises from $\chars\alpha\beta=\chars{011}{100}, \chars\sigma\mu=\chars{100}{001}$, and the vector $X_3 \in {\mathbb{C}}^{k_3^+}={\mathbb{C}}^{36}$ that has 24 zero entries. A similar procedure applies in general, so that we obtain our main technical statement about quartic identities in theta constants.
\begin{prop}\label{prop:doubling} For any genus $g$, for a given vector $X=\lbrace x_{\chars\varepsilon\delta}\rbrace\in{\mathbb{C}}^{k_g^+}$, and for fixed $\chars\alpha\beta,\chars\sigma\mu\in K_g$, the quartic polynomial \begin{equation} \label{eq:anyquartic} \sum_{\chars\varepsilon \delta\in K_g^+} x_{\chars\varepsilon\delta} (-1)^{\delta^t (\alpha+\sigma)} \tc\varepsilon\delta\tc{\varepsilon+\alpha}{\delta+\beta}\tc{\varepsilon+\sigma}{\delta+\mu}\tc{\varepsilon+\alpha+\sigma}{\delta+\beta+\mu} \end{equation} lies in ${\mathcal I}_g^A$ if and only if the quartic polynomial \begin{equation} \label{eq:anyquarticg+1} \begin{aligned} \sum_{\chars\varepsilon\delta\in K_g^+} x_{\chars\varepsilon\delta} (-1)^{\delta^t (\alpha+\sigma)} &\left(\tc{\varepsilon&0}{\delta&0}\tc{\varepsilon+\alpha&0}{\delta+\beta&0}\tc{\varepsilon+\sigma&0}{\delta+\mu&0}\tc{\varepsilon+\alpha+\sigma&0}{\delta+\beta+\mu&0}\right.\\ &\!\!\!+\left.\tc{\varepsilon&1}{\delta&0}\tc{\varepsilon+\alpha&1}{\delta+\beta&0}\tc{\varepsilon+\sigma&1}{\delta+\mu&0}\tc{\varepsilon+\alpha+\sigma&1}{\delta+\beta+\mu&0}\right) \end{aligned} \end{equation} lies in ${\mathcal I}_{g+1}^A$. \end{prop} \begin{proof} By the results of~\cite{smrelations} we know that all quartic identities in theta constants are of the form~\eqref{eq:Riemann2} for some eigenvector $X$ of $M^+(g)$ with eigenvalue $-2^{g-1}$. Any quartic polynomial of course has the form~\eqref{eq:anyquartic} for some coefficients $x_{\chars\varepsilon\delta}$, and thus a quartic polynomial gives a quartic identity (that is, lies in the ideal ${\mathcal I}_g^A$) if and only if $X$ is an eigenvector of $M^+(g)$ with eigenvalue $-2^{g-1}$. For any such eigenvector $X$, by lemma~\ref{lm:us} $ u_2=(X,0,X,0)^t $ is then an eigenvector of $M^+(g+1)$ of eigenvalue $-2^{g}$, and thus~\eqref{eq:anyquarticg+1} gives then a quartic identity in ${\mathcal I}_{g+1}^A$. 
Vice versa, if~\eqref{eq:anyquarticg+1} gives a quartic identity in ${\mathcal I}_{g+1}^A$, then its coefficients must form an eigenvector $u_2$ of $M^+(g+1)$ with eigenvalue $-2^{g}$. The block structure of the eigenvector, i.e.~the fact that $ u_2=(X,0,X,0)^t $, implies that under the direct sum decomposition of the eigenspace of $M^+(g+1)$ as $U_1\oplus\ldots\oplus U_4$, this eigenvector must lie in $U_2$. But then by construction $X$ must be an eigenvector of $M^+(g)$ with eigenvalue $-2^{g-1}$, and thus the expression~\eqref{eq:anyquartic} must give an element of ${\mathcal I}_g^A$. \end{proof} \begin{rem} The special case of this doubling principle for the case $\chars\alpha\beta=\chars\sigma\mu=\chars00$ (and for the choices of the extra column being $\chars00$ and $\chars01$, instead of the $\chars00$ and $\chars10$ that we use) was obtained by van Geemen~\cite{vgeemenscju}. \end{rem} Applying this doubling principle repeatedly shows that the $R_{jk}$ are indeed quartic identities satisfied by theta constants: \begin{proof}[Proof of proposition~\ref{prop:Rsum}] To prove that $R_{34}$ lies in ${\mathcal I}_g^A$, we simply apply proposition~\ref{prop:doubling} to the genus 3 Riemann's quartic relation~\eqref{eq:Rgen3} $g-3$ times. In doing this, each of the three terms in~\eqref{eq:Rgen3} is replaced by $2^{g-3}$ terms, where each of the characteristics appearing is extended by an arbitrary top $(g-3)$-characteristic, and by the zero bottom $(g-3)$-characteristic. This gives precisely the expression for $R_{34}$ given by formula~\eqref{eq:Rgeng}, and thus $R_{34}$ lies in ${\mathcal I}_g^A$, since the genus 3 quartic lies in ${\mathcal I}_3^A$. Furthermore, each $R_{jk}$ is obtained from $R_{34}$ by permuting some columns of the characteristics.
This permutation can of course be obtained by the action of some element $\gamma$ of the symplectic group (or we could also apply proposition~\ref{prop:doubling} to add columns in arbitrary positions, and not just columns number $4,\ldots,g$), and since ${\mathcal I}_g^A$ is invariant under $\op{Sp}(2g,{\mathbb{Z}})$, it follows that also $R_{jk}=\gamma\circ R_{34}\in{\mathcal I}_g^A$. \end{proof} \section{The Schottky-Jung proportionalities} We denote by ${\mathcal M}_g(4,8)$ the fiber product of ${\mathcal M}_g$ and ${\mathcal A}_g(4,8)$ over ${\mathcal A}_g$ under the Torelli map and the forgetful morphism. The Torelli map thus naturally lifts to $J:{\mathcal M}_g(4,8)\to{\mathcal A}_g(4,8)$, for which we will keep the same notation. The Schottky-Jung proportionalities relate the theta constants of the Jacobian and of the Prym. They were discovered in~\cite{scju}, rigorously proven to hold in~\cite{fasc,farascju}, and recast algebraically by Mumford in~\cite{mumfordprym}. Denote by ${\mathcal R}_g$ the moduli space of pairs $(C,\eta)$, where $C\in{\mathcal M}_g$, and $\eta\in J(C)[2]\setminus\{0\}$ is a non-zero 2-torsion point on the Jacobian. Such a point $\eta$ defines an unramified connected double cover $\tilde C\to C$, and the Prym $Pr(C,\eta)$ is then defined to be the connected component containing the origin of the kernel of the map $J(\tilde C)\to J(C)$. The Prym turns out to have a natural principal polarization, so that the construction defines a morphism $Pr:{\mathcal R}_g\to{\mathcal A}_{g-1}$. For the two-torsion point \begin{equation}\label{eq:eta0} \eta_0:=\chars{0&0&\ldots& 0}{1&0&\ldots&0}, \end{equation} the {\em Schottky-Jung proportionalities} are the equalities \begin{equation}\label{eq:SJ} \theta^2\chars\varepsilon\delta(Pr(C,\eta_0))=c\,\tc{0&\varepsilon}{0&\delta}(J(C))\tc{0&\varepsilon}{1&\delta}(J(C)), \end{equation} which hold for some non-zero constant $c$ independent of $\chars\varepsilon\delta$.
The Schottky-Jung proportionalities were described algebraically by Mumford~\cite{mumfordprym}. We refer to the surveys~\cite{dosurvey},\cite{beprymsurvey}, and especially~\cite{vgsurvey} for the details of the Schottky-Jung approach from the algebraic viewpoint, and refer to~\cite{faprymsurvey} for a survey on Pryms. The Schottky-Jung proportionalities for an arbitrary two-torsion point $\eta$, as relations between modular forms, are described explicitly in~\cite{vgsurvey}. For our purposes, the combinatorics is such that we will need the explicit form of the Schottky-Jung proportionalities for the two-torsion point \begin{equation}\label{eq:etag} \eta_g:=\chars{0&0&\ldots& 0}{1&1&\ldots&1}. \end{equation} In~\cite{farkashscju} the case of the Schottky-Jung proportionalities for $\eta'=\chars{0&0&0&\ldots&0}{1&1&0&\ldots&0}$ was studied explicitly, and the Schottky-Jung proportionalities for that case are $$ \theta^2\chars{\varepsilon_1&\varepsilon}{\delta_1&\delta}(Pr(C,\eta'))=\pm c\,\tc{\varepsilon_1&\varepsilon_1&\varepsilon}{0&\delta_1&\delta}(J(C))\tc{\varepsilon_1&\varepsilon_1&\varepsilon}{1&1+\delta_1&\delta}(J(C)), $$ where $\chars{\varepsilon_1}{\delta_1}\in K_1$, $\chars{\varepsilon}{\delta}\in K_{g-1}$, and the sign depends on $\chars\varepsilon\delta$. While the main focus in~\cite{farkashscju} was on the appearance of additional signs depending on $\chars\varepsilon\delta$, we will not be able to track the signs in general anyway, and only the characteristics appearing in the proportionality will play a role. To deduce the Schottky-Jung proportionalities for arbitrary $\eta$, note that $\op{Sp}(2g,{\mathbb{Z}}/2{\mathbb{Z}})$ acts transitively on $J(C)[2]\setminus\lbrace 0\rbrace$, and thus by acting on the Schottky-Jung proportionalities~\eqref{eq:SJ} for $\eta_0$ by a suitable element of the symplectic group, one obtains the general case of the Schottky-Jung proportionalities.
Following Mumford's algebraic approach to Pryms~\cite{mumfordprym} (the details are discussed in~\cite{vgsurvey}), for a given $\eta$ one has an isomorphism $j:Pr(C,\eta)[2]\to V/\eta$, where $Pr(C,\eta)[2]$ is the set of two-torsion points of the Prym, and $V=\eta^\perp\subset K_g$ is the set of all $v$ such that the symplectic pairing $e(\eta,v)$ is trivial. Then for any $\chars\varepsilon\delta\in Pr(C,\eta)[2]$, the characteristics appearing in the right-hand side of the Schottky-Jung proportionalities would be $j\left(\chars\varepsilon\delta\right)$ and $\eta+j\left(\chars\varepsilon\delta\right)$. This is a priori confusing, as the proportionalities depend on the choice of $j$ --- and it seems that this was never discussed explicitly in the literature. To see how the choice of $j$ arises, recall that analytically the Schottky-Jung proportionalities arise from the vanishing, due to symmetry, of certain theta constants of $J(\tilde C)$. The period matrix of $J(\tilde C)$ can only be chosen up to $\op{Sp}(2(2g-1),{\mathbb{Z}})$, and the choice of $j$ corresponds to a given choice of a basis of $J(\tilde C)[2]$, which then gives the lifting (denoted $\pi^*$ by Mumford) $Pr(C,\eta)[2]\to J(\tilde C)[2]$, and $j$ is obtained by composing this with the projection to $J(C)[2]$. To summarize, any choice of $j$ can arise; the standard choice for $\eta_0$ is to embed $({\mathbb{Z}}/2{\mathbb{Z}})^{2g-2}$ into $K_g$ as the characteristics with first column equal to $\chars00$, while the choice implicitly made in~\cite{farkashscju} is to note that a characteristic $\chars\varepsilon\delta$ is orthogonal to $\eta'$ if and only if the first two entries of $\varepsilon$ are equal, and then to let $j\left(\chars{\varepsilon_1&\varepsilon}{\delta_1&\delta}\right):=\chars{\varepsilon_1&\varepsilon_1&\varepsilon}{0&\delta_1&\delta}$.
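The combinatorial compatibilities just described are easy to verify directly. The following Python sketch (helper names are ours) checks, for $g=4$, that both embeddings above --- the standard one for $\eta_0$ and the one of~\cite{farkashscju} for $\eta'$ --- land in the orthogonal complement of the respective two-torsion point and preserve the parity of characteristics.

```python
from itertools import product

def dot(a, b):
    return sum(x * y for x, y in zip(a, b)) % 2

def pairing_exp(m, n):
    # Weil pairing: e(m, n) = (-1)^pairing_exp(m, n)
    (e1, d1), (e2, d2) = m, n
    return (dot(e1, d2) + dot(e2, d1)) % 2

g = 4
eta0      = ((0,) * g, (1,) + (0,) * (g - 1))    # eta_0 of the text
eta_prime = ((0,) * g, (1, 1) + (0,) * (g - 2))  # eta' of Farkas' computation

def j_eta0(m):
    # standard choice for eta_0: prepend the column [0;0]
    e, d = m
    return ((0,) + e, (0,) + d)

def j_eta_prime(m):
    # choice for eta': duplicate the first top entry, prepend 0 below
    e, d = m
    return ((e[0],) + e, (0,) + d)

for e in product((0, 1), repeat=g - 1):
    for d in product((0, 1), repeat=g - 1):
        for eta, j in ((eta0, j_eta0), (eta_prime, j_eta_prime)):
            v = j((e, d))
            assert pairing_exp(eta, v) == 0       # j(m) lies in eta-perp
            assert dot(v[0], v[1]) == dot(e, d)   # parity is preserved
```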
For our choice of $\eta_g$, we will choose $j$ to be $$ j\left(\chars{\varepsilon_1&\ldots&\varepsilon_{g-1}}{\delta_1&\ldots&\delta_{g-1}}\right):=\chars{\varepsilon_1+\ldots+\varepsilon_{g-1}&\varepsilon_1&\ldots&\varepsilon_{g-1}}{0&\delta_1&\ldots&\delta_{g-1}}. $$ Then the Schottky-Jung proportionalities take the explicit form \begin{equation}\label{eq:SJetag} \!\! \theta^2\chars\varepsilon\delta(Pr(C,\eta_g))=\pm c\, \tc{\sum\varepsilon_i&\varepsilon_1&\ldots&\varepsilon_{g-1}}{0&\delta_1&\ldots&\delta_{g-1}}(J(C))\,\cdot\, \tc{\sum\varepsilon_i&\varepsilon_1&\ldots&\varepsilon_{g-1}}{1&1+\delta_1&\ldots&1+\delta_{g-1}}(J(C)), \end{equation} where the sign depends on $\chars\varepsilon\delta$. This is the form of the proportionalities that we will use from now on. \medskip We now recall how the Schottky-Jung proportionalities are used in the classical approach to the Schottky problem. One starts with any equation $P\in{\mathcal I}_{g-1}^A$. Then for any $C\in {\mathcal M}_g$ and any two-torsion point $\eta$, the equation $P$ is satisfied by the theta constants of $Pr(C,\eta)$. Replacing in the polynomial $P$ each theta constant of $Pr(C,\eta)$ by the square root of the product of the two corresponding theta constants of $J(C)$ given by the Schottky-Jung proportionalities gives a polynomial in the {\em square roots} of genus $g$ theta constants of $J(C)$. Since the square roots cannot be chosen globally, to make this a well-defined relation we take the product of the results of such substitutions over {\em all possible choices} of the values of the square roots of the monomials involved --- fixing one square root of one monomial (as otherwise one obtains each factor twice, once with a plus and once with a minus sign). We denote this product by $SJ^{\eta}(P)$, and call it the {\em Schottky-Jung identity} corresponding to $P$ and $\eta$.
It is then a polynomial in genus $g$ theta constants, of degree equal to the degree of $P$ times $2$ raised to the power one less than the number of monomials in $P$, and the Schottky-Jung proportionalities imply that $SJ^{\eta}(P)\in{\mathcal I}_g^J$. The simplest non-trivial case of this is for $g=4$: the genus 3 Riemann's quartic relation~\eqref{eq:Rgen3} involves three monomials and has the form $r_1-r_2+r_3=0$, where each $r_i$ is a product of four genus 3 theta constants. Evaluating this at $Pr(C,\eta)$ for some $(C,\eta)\in{\mathcal R}_4$ and replacing each theta constant of $Pr(C,\eta)$ by the square root of the product of the two theta constants of $J(C)$, given by the Schottky-Jung proportionalities, gives $\sqrt{R_1}-\sqrt{R_2}-\sqrt{R_3}=0$, where each $R_i$ is a product of eight theta constants evaluated at the period matrix $\tau$ of $J(C)$. To obtain a polynomial in theta constants, one needs to take the product $(\sqrt{R_1}+\sqrt{R_2}+\sqrt{R_3})(\sqrt{R_1}-\sqrt{R_2}+\sqrt{R_3}) (\sqrt{R_1}+\sqrt{R_2}-\sqrt{R_3})(\sqrt{R_1}-\sqrt{R_2}-\sqrt{R_3})$. Finally evaluating this product gives \begin{equation}\label{eq:Rconjugates} SJ^\eta(r_1-r_2+r_3)=R_1^2+R_2^2+R_3^2-2R_1R_2-2R_1R_3-2R_2R_3=0, \end{equation} which is a degree 16 {\em polynomial} in theta constants of the genus 4 Jacobian $J(C)$. This equation was discovered by Schottky, and was rigorously proven by Igusa~\cite{igusagen4} and Freitag~\cite{freitaggen4} to generate ${\mathcal I}_4^J$, i.e.~to be the unique defining equation for $Th({\mathcal J}_4)\subset Th({\mathcal A}_4)$. In our case we are interested in applying the Schottky-Jung proportionalities for $\eta_g$ to $R_{jk}$, and the result is \begin{lm}\label{lm:SR} For any $3\le j<k\le g$ the identity $S_{jk}=SJ^{\eta_g}(R_{jk})$ holds.
\end{lm} \begin{proof} Indeed, applying the Schottky-Jung proportionalities~\eqref{eq:SJetag} to $R_{34}$, one obtains an expression in square roots of theta constants that is one of the factors in $S_{34}$, and then taking the product over all possible choices of square roots gives exactly the product given by $S_{34}$, so that $S_{34}=SJ^{\eta_g}(R_{34})$. Since the columns number $3,\ldots,g$ of $\eta_g$ are all equal, permuting the columns $3$ and $j$, and the columns $4$ and $k$, gives the desired equality for all $j,k$. \end{proof} \section{Expansion of theta constants near the diagonal} The locus ${\mathcal J}_g$ contains the locus of ppav that are products of $g$ elliptic curves. Explicitly, we think of this locus as the image in ${\mathcal A}_g$ of the locus ${\mathcal D}_g\subset{\mathcal H}_g$ consisting of diagonal period matrices ${\mathcal D}_g:=\lbrace \tau=\diag(t_{1},t_{2},\ldots,t_{g})\rbrace$. The proof of the main theorem will consist of showing that the $S_{jk}$ are locally functionally independent near ${\mathcal D}_g$, by computing the lowest order terms of their expansions. We thus recall the expansion of theta constants near the diagonal, which was used by Rauch~\cite{rauchschpo} in his genus 4 computations. First recall that the theta constant of a diagonal period matrix decomposes as a product: $$ \tc\varepsilon\delta(\diag(t_{1},t_{2},\ldots,t_{g}))=\tc{\varepsilon_1}{\delta_1}(t_1)\cdot\ldots\cdot\tc{\varepsilon_g}{\delta_g}(t_g). $$ Recall further, for any $j<k$, the heat equation $$ \frac{\partial\tc\varepsilon\delta}{\partial\tau_{jk}}=\frac{1}{2\pi i}\frac{\partial^2\tc\varepsilon\delta}{\partial z_j\partial z_k}.
$$ We then evaluate for any $j<k$ the partial derivative $$\begin{aligned} \frac{\partial\tc\varepsilon\delta}{\partial\tau_{jk}}|_{\tau=\diag(t_{1},t_{2},\ldots,t_{g})}&=\frac{1}{2\pi i} \tc{\varepsilon_1}{\delta_1}(t_1)\cdot\ldots\cdot\tc{\varepsilon_{j-1}}{\delta_{j-1}}(t_{j-1})\\ &\!\!\!\!\!\!\cdot\frac{\partial\tc{\varepsilon_j}{\delta_j}(t_j,z)}{\partial z}|_{z=0}\cdot\tc{\varepsilon_{j+1}}{\delta_{j+1}}(t_{j+1})\cdot\ldots\cdot\tc{\varepsilon_{k-1}}{\delta_{k-1}}(t_{k-1})\\ &\!\!\!\!\!\!\cdot\frac{\partial\tc{\varepsilon_k}{\delta_k}(t_k,z)} {\partial z}|_{z=0}\cdot \tc{\varepsilon_{k+1}}{\delta_{k+1}}(t_{k+1})\cdot\ldots\cdot\tc{\varepsilon_g}{\delta_g}(t_g). \end{aligned} $$ Since all theta functions with $1$-characteristics are even unless the characteristic is $\chars11$, this partial derivative vanishes unless $\chars{\varepsilon_j}{\delta_j}=\chars{\varepsilon_k}{\delta_k}=\chars11$. Thus expanding theta constants in Taylor series in $\tau_{jk}$ near ${\mathcal D}_g$, {\em for $t_1,\ldots,t_g$ fixed}, the constant and linear terms are \begin{equation}\label{eq:expand}\begin{aligned} &\tc\varepsilon\delta(\tau)=\tc{\varepsilon_1}{\delta_1}(t_1)\cdot\ldots\cdot\tc{\varepsilon_g}{\delta_g}(t_g)\\ &+\frac{1}{2\pi i}\!\!\! \sum_{j<k,\chars{\varepsilon_j}{\delta_j}=\chars{\varepsilon_k}{\delta_k}=\chars11}\!\!\!\!\!\!\!\!\tau_{jk}\cdot \frac{\partial\tc11(t_j,z)} {\partial z}|_{z=0}\cdot \frac{\partial\tc11(t_k,z)} {\partial z}|_{z=0}\cdot\!\! \prod_{m\ne j,k}\!\! \tc{\varepsilon_m}{\delta_m}(t_m), \end{aligned} \end{equation} and the full Taylor series expansion includes further monomials in $\tau_{jk}$ that are of total degree 2 or higher. Note that if no column $\chars{\varepsilon_m}{\delta_m}$ is equal to $\chars11$, then the Taylor series have a non-zero constant term, and zero linear term. 
If precisely two different columns $\chars{\varepsilon_j}{\delta_j}$ and $\chars{\varepsilon_k}{\delta_k}$ are equal to $\chars11$, then the Taylor series has zero constant term, and the linear term is a multiple of $\tau_{jk}$. Finally, if more than two columns are equal to $\chars11$, then both the constant and linear terms of the Taylor series are zero, and in fact the lowest order term has degree equal to half the number of columns equal to $\chars11$. Furthermore, recalling Jacobi's triple product identity in genus 1: \begin{equation}\label{eq:tripleproduct} \frac{\partial\tc11(t,z)}{\partial z}|_{z=0}=-\pi\tc00(t)\tc01(t)\tc10(t), \end{equation} the linear term can be written in terms of theta constants of $t_j$ and $t_k$. \section{Poincar\'e relations} The Poincar\'e relation in genus 4 was discovered by Poincar\'e~\cite{poincare}; it is an infinitesimal relation for period matrices of Jacobians near ${\mathcal D}_g$ --- Poincar\'e calls it an ``approximate identity''. Rather than thinking of it as being the lowest order terms of a suitable power series expansion, Poincar\'e derived it from something he called a ``translation surface''. Poincar\'e then stated that his proof in genus 4 could be easily generalized to higher genus, but gave no details. Garabedian~\cite{garabedian} proved the Poincar\'e relations for some special Riemann surfaces, in arbitrary genus. Rauch in~\cite{rauchschpo} showed that expanding a suitable genus $4$ Schottky-Jung identity near ${\mathcal D}_4$ yields the original Poincar\'e relation in genus 4, thus reproving it. In~\cite[Appendix 2]{rafabook} Rauch also asked whether the Poincar\'e relations for any genus could be obtained in a similar way by expanding suitable Schottky-Jung identities. We will show that this is indeed the case for our $S_{jk}$, though see remark~\ref{rem:noRiemann} for a discussion of why using Riemann's quartic relations instead of $R_{jk}$ would not work.
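The elementary mechanism of multiplying out all conjugate sign choices, used above to obtain~\eqref{eq:Rconjugates} and used again for the Poincar\'e relations, can be illustrated numerically: for three monomials the product of the four conjugate factors is exactly the quadratic form $R_1^2+R_2^2+R_3^2-2R_1R_2-2R_1R_3-2R_2R_3$. A minimal Python sketch (with generic positive reals standing in for the monomials) is:

```python
import random
from itertools import product

def sign_product(values):
    # product over all sign choices (the first sign fixed to +) of
    # sqrt(v_1) + s_2 sqrt(v_2) + ... + s_m sqrt(v_m)
    roots = [v ** 0.5 for v in values]
    result = 1.0
    for signs in product((1, -1), repeat=len(values) - 1):
        result *= roots[0] + sum(s * r for s, r in zip(signs, roots[1:]))
    return result

random.seed(1)
R1, R2, R3 = (random.uniform(0.5, 2.0) for _ in range(3))
quadratic = (R1 ** 2 + R2 ** 2 + R3 ** 2
             - 2 * R1 * R2 - 2 * R1 * R3 - 2 * R2 * R3)
assert abs(sign_product([R1, R2, R3]) - quadratic) < 1e-9
```

In particular the product vanishes whenever some choice of signs of the square roots sums to zero, which is precisely how a relation in square roots becomes a polynomial relation.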
To the best of our knowledge, our derivation below is the first complete proof that Poincar\'e relations hold in arbitrary genus, for a general Jacobian of a curve, such that the period matrix is close to ${\mathcal D}_g$, i.e.~such that $0<|\tau_{ab}|<\varepsilon$ for all $a<b$. The Poincar\'e relations are the following equations for the off-diagonal elements of the period matrix, for all $i<j<k<l$: \begin{equation}\label{eq:Poin} (\tau_{ij}\tau_{jk}\tau_{kl}\tau_{li})^{1/2}\pm(\tau_{ik}\tau_{kl}\tau_{lj}\tau_{ji})^{1/2}\pm(\tau_{il}\tau_{lj}\tau_{jk}\tau_{ki})^{1/2} =O(\varepsilon^3). \end{equation} Note that the Poincar\'e relations do not depend on the diagonal entries $t_m$ of the period matrix, which is a priori surprising. Similarly to the case of applying Schottky-Jung proportionalities to Riemann's quartic relations, the signs of the square roots in the Poincar\'e relation may not be chosen globally, and to get a well-defined polynomial equation in the entries of the period matrix, one multiplies the relations~\eqref{eq:Poin} for all four possible choices of square roots --- this was explained by Igusa~\cite[p.167]{iguproblems}. As in the derivation of~\eqref{eq:Rconjugates}, if we denote the three terms in the Poincar\'e relation by $\sqrt{P_1},\sqrt{P_2},\sqrt{P_3}$, the resulting equation that is polynomial in the entries of the period matrix has the form \begin{equation}\label{eq:Pconjugates} P_1^2+P_2^2+P_3^2-2P_1P_2-2P_1P_3-2P_2P_3+O(\varepsilon^{9})=0 \end{equation} which is a degree 8 polynomial in $\tau_{ab}$. We give a proof of the Poincar\'e relations by expanding the factors of $S_{jk}$ near ${\mathcal D}_g$. \begin{thm}\label{thm:Poincareholds} Let $C$ be any curve in ${\mathcal M}_g$ sufficiently close to a union of $g$ elliptic curves. Then after an appropriate choice of $A$ and $B$ cycles, the period matrix $\tau$ of $J(C)$ satisfies all Poincar\'e relations~\eqref{eq:Poin}.
\end{thm} In genus 4 there is a unique Poincar\'e relation, for the quadruple $(ijkl)=(1234)$, and we first present a streamlined version of Rauch's computation in~\cite{rauchschpo} deriving it from a Schottky-Jung identity. \begin{proof}[Proof of theorem~\ref{thm:Poincareholds} in genus 4] Applying the Schottky-Jung proportionalities for $\eta_4$, given by~\eqref{eq:SJetag}, to relation~\eqref{eq:Rgen3} gives the identity \begin{equation}\label{eq:irr} \begin{aligned} 0&=s_{34}:=\\ &(\tc{0 & 0 & 0 & 0}{0 & 0 & 0 & 0}\tc{0 & 0 & 0 & 0}{ 1 & 1 & 1 & 1}\tc{0 & 0 & 1 & 1}{0 & 1 & 0 & 0}\tc{0 & 0 & 1 & 1}{1 & 0 & 1 & 1} \tc{1 & 1 & 0 & 0}{0 & 0 & 0 & 1}\tc{1 & 1 & 0 & 0}{ 1 & 1 & 1 & 0}\tc{1 & 1 & 1 & 1}{0 & 1 & 0 & 1}\tc{1 & 1 & 1 & 1}{1 & 0 & 1 & 0})^{1/2}\\ &\pm(\tc{1 & 0 & 1 & 0}{0 & 0 & 0 & 0}\tc{1 & 0 & 1 & 0}{1 & 1 & 1 & 1}\tc{1 & 0 & 0 & 1}{0 & 1 & 0 & 0}\tc{1 & 0 & 0 & 1}{1 & 0 & 1 & 1}\tc{0 & 1 & 1 & 0}{0 & 0 & 0 & 1}\tc{0 & 1 & 1 & 0}{1 & 1 & 1 & 0}\tc{0 & 1 & 0 & 1}{0 & 1 & 0 & 1}\tc{0 & 1 & 0 &1}{1 & 0 & 1 & 0} )^{1/2}\\ &\pm (\tc{0 & 0 & 0 & 0}{0 & 0 & 1 & 1}\tc{0 & 0 & 0 & 0}{1 & 1 & 0 & 0}\tc{0 & 0 & 1 & 1}{0 & 1 & 1 & 1}\tc{0 & 0 & 1 & 1}{1 & 0 & 0 & 0} \tc{1 & 1 & 0 & 0}{0 & 0 & 1 & 0}\tc{1 & 1 & 0 & 0}{1 & 1 & 0 & 1}\tc{1 & 1 & 1 & 1}{0 & 1 & 1 & 0}\tc{1 & 1 & 1 & 1}{1 & 0 & 0 & 1})^{1/2}. \end{aligned} \end{equation} We denote $RR_1,RR_2,RR_3$ the three degree 8 monomials in theta constants appearing in this expression. As discussed above, the signs of the square roots in this identity cannot be chosen consistently. We now use the expansion~\eqref{eq:expand} computed in the previous section to expand each theta constant involved in $s_{34}$ up to linear order terms in $\tau_{ab}$, for all $t_m$ fixed. Note that for each monomial $RR_i$ four of the theta characteristics involved have all columns being even $1$-characteristics, and the remaining four theta characteristics have precisely two columns equal to $\chars11$. 
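This combinatorial structure can be checked mechanically. The following sketch (illustrative only) hard-codes the eight theta characteristics of $RR_1$, transcribed from~\eqref{eq:irr}, and verifies both the count of columns equal to $\chars11$ and the distribution of column values within the product:

```python
# The eight theta characteristics of the monomial RR_1 in s_34, each given as
# (top row, bottom row); the columns are genus 1 characteristics.
RR1 = [("0000", "0000"), ("0000", "1111"), ("0011", "0100"), ("0011", "1011"),
       ("1100", "0001"), ("1100", "1110"), ("1111", "0101"), ("1111", "1010")]

def columns(char):
    top, bot = char
    return [(e, d) for e, d in zip(top, bot)]

# Per characteristic, count how many columns equal the odd characteristic [1;1]:
# four characteristics are fully even, four have exactly two [1;1] columns.
odd_counts = [sum(c == ("1", "1") for c in columns(ch)) for ch in RR1]
assert sorted(odd_counts) == [0, 0, 0, 0, 2, 2, 2, 2]

# In each column position, every 1-characteristic occurs exactly twice across RR_1.
for m in range(4):
    col_values = [columns(ch)[m] for ch in RR1]
    for v in [("0", "0"), ("0", "1"), ("1", "0"), ("1", "1")]:
        assert col_values.count(v) == 2

# The [1;1] columns sit at positions {1,2}, {1,3}, {2,4}, {3,4} (1-based), so the
# lowest order term of RR_1 carries the product tau_12 * tau_13 * tau_24 * tau_34.
positions = sorted(tuple(m + 1 for m, c in enumerate(columns(ch)) if c == ("1", "1"))
                   for ch in RR1 if sum(c == ("1", "1") for c in columns(ch)) == 2)
assert positions == [(1, 2), (1, 3), (2, 4), (3, 4)]
```

The analogous checks for $RR_2$ and $RR_3$ only require swapping in their characteristics.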
As discussed in the previous section, for those 4 theta constants where all columns are even, the lowest degree term of the expansion is the constant term, while for the remaining 4 theta constants the lowest degree term is linear, and equal to a multiple of $\tau_{ab}$, where the columns $\chars{\varepsilon_a}{\delta_a}=\chars{\varepsilon_b}{\delta_b}=\chars11$. For example for $RR_1$ in the 6th characteristic involved we get the term with $\tau_{12}$, in the 7th characteristic --- the term with $\tau_{24}$, in the 4th --- $\tau_{34}$, and in the 8th --- $\tau_{13}$. We thus compute the lowest degree term of the expansion of $RR_1$ near ${\mathcal D}_4$ to be the product of these four linear terms, and the four constant terms from the expansion of the other theta constants. By checking that the number of times in each product $RR_i$ that each column $\chars{\varepsilon_m}{\delta_m}$ is equal to each of the even $1$-characteristics is equal to two, this gives for the lowest degree term $$ \begin{aligned} RR_1=&(2\pi i)^{-4} \tau_{12}\tau_{24}\tau_{34}\tau_{13}\cdot \\ &\cdot\prod_{m=1}^4\Big(\tc00(t_m)\tc01(t_m)\tc10(t_m)\Big)^2\cdot\prod_{m=1}^4\left(\frac{\partial\tc{1}{1}(t_m,z)} {\partial z}|_{z=0}\right)^2. \end{aligned} $$ The lowest order terms of $RR_2$ and $RR_3$ are similar. They have exactly the same factor in theta constants and derivatives, while the entries of the period matrix that appear are $$ \tau_{13}\tau_{23}\tau_{24}\tau_{14} $$ from the 2nd, 6th, 7th, and 4th theta constants appearing in the product $RR_2$, and similarly $$ \tau_{14}\tau_{34}\tau_{12}\tau_{23} $$ for $RR_3$. Using Jacobi's triple product identity~\eqref{eq:tripleproduct}, we see that the overall theta factor in each of these is simply equal to the product $$ c:=\prod_{m=1}^4\prod_{\chars\varepsilon\delta\in K_1^+}\theta^4\chars\varepsilon\delta(t_m), $$ which is non-zero for any $t_1,t_2,t_3,t_4$ in the upper half-plane.
Up to this common factor, the square root of the lowest degree term of the expansion of each $RR_i$ is then equal to the corresponding summand in the Poincar\'e relation~\eqref{eq:Poin} for the quadruple $(1234)$. Thus altogether we have computed the lowest order term of the expansion: $$ s_{34}=c \left(\pm(\tau_{34}\tau_{12}\tau_{24}\tau_{13})^{1/2}\pm( \tau_{13}\tau_{14}\tau_{23}\tau_{24})^{1/2}\pm (\tau_{12}\tau_{14}\tau_{23}\tau_{34})^{1/2}\right)+O(\varepsilon^3). $$ Recall now that in genus 4 the identity $S_{34}$ is obtained as a product of four factors of the form $s_{34}$, with different choices of signs. The expansion of each such factor near the diagonal is as above, with suitable choices of signs, and thus the expansion of the product of the four factors gives exactly the Poincar\'e relation in its polynomial form~\eqref{eq:Pconjugates}. \end{proof} \begin{rem}\label{rem:etag} Many other choices of $3$-characteristics for the genus 3 Riemann's quartic relation, instead of~\eqref{eq:Rgen3}, and different choices of $\eta$ would also yield a Schottky-Jung identity that would work in the above proof. One just needs to make sure that in each resulting $RR_i$ precisely four theta characteristics have all columns even, and the remaining four characteristics have precisely two columns equal to $\chars11$. For generalizing to higher genus, to be able to deal with the small Schottky locus rather than the big Schottky, we need to use the same two-torsion point $\eta$ for all proportionalities, and thus in our approach the 3rd, 4th,\dots, $g$'th columns of $\eta$ should all be equal, so that permuting them would leave $\eta$ invariant. Thus perhaps the simplest choice of such a two-torsion point could be $\chars{000\ldots 0}{110\ldots 0}$, so that the 3rd, 4th,\dots,$g$'th columns are all equal to $\chars00$.
However, using a computer we checked, for all possible Riemann's quartic relations in genus 3, that applying the Schottky-Jung proportionalities for the two-torsion point $\chars{0000}{1100}$ to them cannot give in genus 4 a Schottky-Jung identity where the columns of $RR_1,RR_2,RR_3$ satisfy this necessary combinatorial property. Thus our choice of $\eta_g$ is the simplest possible. \end{rem} We now generalize this computation to arbitrary genus; note that this of course uses the specifics of our choice of quartic identities $R_{jk}$ and of the corresponding Schottky-Jung identities $S_{jk}$. \begin{proof}[Proof of theorem~\ref{thm:Poincareholds} for arbitrary genus] The proof is again by computing the lowest order terms of the appropriate expansions, and crucially noticing that in fact the cases when some of the $\varepsilon_5,\ldots,\varepsilon_g$ are equal to $1$ lead to higher order terms, as in some of the theta constants involved in $s_{34}$ there are then more columns equal to $\chars11$.
Recall that $S_{34}$ is the product of $2^{3\cdot 2^{g-4}-1}$ terms of the form $$ \begin{aligned} 0&=s_{34}:=\\ &\sum_{\varepsilon \in({\mathbb{Z}}/2{\mathbb{Z}})^{g-4}} a_\varepsilon\left(\tc{E & 0 & 0 & 0&\varepsilon }{0 & 0 & 0 & 0&{\bf 0} }\tc{E & 0 & 0 & 0&\varepsilon }{ 1 & 1 & 1 & 1&{\bf 1} } \tc{E & 0 & 1 & 1&\varepsilon }{0 & 1 & 0 & 0&{\bf 0} }\tc{E & 0 & 1 & 1&\varepsilon }{1 & 0 & 1 & 1&{\bf 1} }\right.\cdot\\ &\left.\cdot\,\tc{1+E & 1 & 0 & 0&\varepsilon }{0 & 0 & 0 & 1&{\bf 0} }\tc{1+E & 1 & 0 & 0&\varepsilon }{ 1 & 1 & 1 & 0&{\bf 1} } \tc{1+E & 1 & 1 & 1&\varepsilon }{0 & 1 & 0 & 1&{\bf 0} }\tc{1+E & 1 & 1 & 1&\varepsilon }{1 & 0 & 1 & 0&{\bf 1} }\right)^{1/2}\\ +&b_\varepsilon\left(\tc{1+E & 0 & 1 & 0&\varepsilon }{0 & 0 & 0 & 0&{\bf 0} }\tc{1+E & 0 & 1 & 0&\varepsilon }{1 & 1 & 1 & 1&{\bf 1} } \tc{1+E & 0 & 0 & 1&\varepsilon }{0 & 1 & 0 & 0&{\bf 0} }\tc{1+E & 0 & 0 & 1&\varepsilon }{1 & 0 & 1 & 1&{\bf 1} }\right.\cdot\\ &\ \ \left.\cdot\,\tc{E & 1 & 1 & 0&\varepsilon }{0 & 0 & 0 & 1&{\bf 0} }\tc{E & 1 & 1 & 0&\varepsilon }{1 & 1 & 1 & 0&{\bf 1} } \tc{E & 1 & 0 & 1&\varepsilon }{0 & 1 & 0 & 1&{\bf 0} }\tc{E & 1 & 0 &1&\varepsilon }{1 & 0 & 1 & 0&{\bf 1} } \right)^{1/2}\\ +&c_\varepsilon \left(\tc{E & 0 & 0 & 0&\varepsilon }{0 & 0 & 1 & 1&{\bf 0} }\tc{E & 0 & 0 & 0&\varepsilon }{1 & 1 & 0 & 0&{\bf 1} } \tc{E & 0 & 1 & 1&\varepsilon }{0 & 1 & 1 & 1&{\bf 0} }\tc{E & 0 & 1 & 1&\varepsilon }{1 & 0 & 0 & 0&{\bf 1} }\right.\cdot\\ &\ \ \left.\cdot\,\tc{1+E & 1 & 0 & 0&\varepsilon }{0 & 0 & 1 & 0&{\bf 0} }\tc{1+E & 1 & 0 & 0&\varepsilon }{1 & 1 & 0 & 1&{\bf 1} } \tc{1+E & 1 & 1 & 1&\varepsilon }{0 & 1 & 1 & 0&{\bf 0} }\tc{1+E & 1 & 1 & 1&\varepsilon }{1 & 0 & 0 & 1&{\bf 1} }\right)^{1/2}, \end{aligned}$$ for different choices of the signs $a_\varepsilon,b_\varepsilon,c_\varepsilon$. 
If $\varepsilon=\bf 0$, then for all theta constants involved in the three corresponding summands in $s_{34}$, in columns $5,\ldots,g$ only characteristics $\chars00$ and $\chars01$ appear. Thus the lowest order term for the expansion near ${\mathcal D}_g$ of each theta constant involved is simply equal to the lowest term of the expansion of the genus 4 theta constant with the first four columns as characteristics, times the suitable product of theta constants of $t_5,\ldots,t_g$. Since in each of the three summands each of the columns $5,\ldots,g$ takes each of the values $\chars00$ and $\chars01$ exactly 4 times, the lowest order term of the expansion near ${\mathcal D}_g$ of each of the three terms with $\varepsilon=\bf 0$ is equal to the expansion near ${\mathcal D}_4$ of the corresponding term in genus 4, times the factor of $\prod_{m=5}^g\theta^2\chars00(t_m)\theta^2\chars01(t_m)$ --- which is the same for these three terms. If any of the $\varepsilon_5,\ldots,\varepsilon_g$ are equal to 1, then in each of the three summands appearing in the expression for $s_{34}$ for such $\varepsilon$, four of the theta characteristics --- those that have ${\bf 1}$ on the bottom --- will have extra columns equal to $\chars11$. Thus the lowest order term for the expansion of such summand near ${\mathcal D}_g$ would be of higher order, as explained after formula~\eqref{eq:expand}. Hence in the expansion of $s_{34}$ near ${\mathcal D}_g$ the only terms of degree 4 in $\tau_{ab}^{1/2}$ arise from the case $\varepsilon_5=\ldots=\varepsilon_g=0$, and they are equal to the expansion of the expression for $s_{34}$ in genus $4$, times $\prod_{m=5}^g\theta^2\chars00(t_m)\theta^2\chars01(t_m)$. 
Thus the lowest order term of the expansion near ${\mathcal D}_g$ of $S_{34}$ --- which is a product of $2^{3\cdot 2^{g-4}-1}$ terms of the form $s_{34}$, with different choices of signs --- is equal to the lowest order term of the expansion of $S_{34}$ in genus 4, taken to power $2^{3\cdot 2^{g-4}-3}$ (corresponding to the choices of signs $a_\varepsilon,b_\varepsilon,c_\varepsilon$ for all $\varepsilon\ne{\bf 0}$, which do not change the lowest order term of $s_{34}$), times a power of $\prod_{m=5}^g\tc00(t_m)\tc01(t_m)$. Since this product of genus one theta constants is never zero, the vanishing of the lowest order term of the expansion near ${\mathcal D}_g$ of $S_{34}$ implies the vanishing of the power of the lowest order term that appears in the Poincar\'e relation for $(1234)$. Thus the Poincar\'e relation for the quadruple $(1234)$ holds, up to terms of higher order. By interchanging the columns $(1234)$ and $(ijkl)$ of the characteristics involved, we note that the correspondingly permuted Schottky-Jung identity implies the Poincar\'e identity for any given quadruple $(ijkl)$. \end{proof} \begin{rem}\label{rem:noRiemann} The proof above shows why the doubling trick is necessary, and why applying the Schottky-Jung proportionalities for $\eta_g$ to Riemann's quartic relations directly would not work. Indeed, if one were to take the genus 3 Riemann's quartic relation~\eqref{eq:Rgen3} and extend the genus 3 characteristics $\chars\alpha\beta=\chars{011}{100}, \chars\sigma\mu=\chars{100}{001}$ to genus $g-1$ simply by zero characteristics (or in fact in any other way), then instead of $R_{34}$, where only the sum over top genus $(g-4)$-characteristics $\varepsilon$ is taken, we would have a quartic identity where the sum over all $\chars\varepsilon\delta\in K_{g-4}$ is taken. 
Then the lowest degree terms of the expansion of $s_{34}$ near the diagonal would arise if no column $\chars{\varepsilon_m}{\delta_m}$ or $\chars{\varepsilon_m}{\delta_m+1}$ is equal to $\chars11$; thus the lowest degree terms would arise from the cases when all $\chars{\varepsilon_m}{\delta_m}$ are equal to $\chars00$ or $\chars01$. However, consider then the lowest degree term of the expansion of two monomials where all columns are the same, except say for $\chars{\varepsilon_g}{\delta_g}=\chars00$ in one monomial versus $\chars{\varepsilon_g}{\delta_g}=\chars01$ in the other. Then for these two monomials the lowest degree terms would give the same expression in $\tau_{ab}$, for $1\le a<b\le 4$, times the same product of theta constants in variables $t_5,\ldots,t_{g-1}$, and also times the same factor of $\theta^2\chars00(t_g)\theta^2\chars01(t_g)$ in both cases --- as in each case both of these characteristics appear four times under the square root. Thus altogether this lowest degree term is the same for these two monomials that appear in $s_{34}$. However, as the signs of the individual square roots of degree 8 monomials cannot be determined, it could be that the signs are such that these two lowest degree terms simply cancel. In that case the desired lowest degree term of the expansion of $s_{34}$ would cancel out, and the argument above would fail. The doubling method allows us to deal with this problem. Alternatively, one could take another explicit suitably chosen linear combination of the Riemann's quartic relations, to ensure that the terms with $\chars\varepsilon\delta=\chars{\bf 0}{\bf 0}$ cannot fully cancel out, so that the proof above would also work. \end{rem} \bigskip For arbitrary genus $g$, there are $\binom{g}{4}$ Poincar\'e relations. In particular, already for $g=5$ only 3 out of the 5 Poincar\'e relations can be locally independent for dimension reasons. 
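The dimension bookkeeping behind this observation is elementary arithmetic, and can be tabulated for any genus (a quick sketch):

```python
# There are binom(g, 4) Poincare relations in genus g, while the expected number of
# locally independent ones is (g-3)(g-2)/2 = g(g+1)/2 - (3g-3), the difference of
# the dimensions of A_g and M_g.
from math import comb

for g in range(4, 20):
    independent = (g - 3) * (g - 2) // 2
    assert independent == g * (g + 1) // 2 - (3 * g - 3)  # dim A_g - dim M_g
    assert independent == comb(g - 2, 2)                   # quadruples (1,2,j,k), 3 <= j < k <= g
    assert comb(g, 4) >= independent                       # a subset of all Poincare relations

# In genus 5: five relations in total, of which only three can be locally independent.
assert comb(5, 4) == 5 and (5 - 3) * (5 - 2) // 2 == 3
```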
Following Rauch's ideas~\cite[p.228]{rafabook} for $g=5$, one can exhibit for any $g\ge 4$ a collection of $$ \tfrac{(g-3)(g-2)}{2}=\tfrac{g(g+1)}2-(3g-3)=\dim{\mathcal A}_g-\dim{\mathcal J}_g $$ Poincar\'e relations that are locally functionally independent. \begin{prop}\label{lm:Pindep} For any $g\ge 4$ the $(g-3)(g-2)/2$ Poincar\'e relations \eqref{eq:Poin}, corresponding to the quadruples of the form $(12jk)$ for all $3\le j<k\le g$, are functionally independent in a neighborhood of a generic $\tau\in{\mathcal D}_g$. In particular, the codimension of the locus in ${\mathcal A}_g$ determined by these Poincar\'e relations is locally equal to $\tfrac{(g-3)(g-2)}2$ near such $\tau$, and the dimension at $\tau$ of the locus of period matrices satisfying these Poincar\'e relations is equal to $3g-3$. \end{prop} While it is possible to give a direct proof of this statement by showing that locally given $t_1,\ldots,t_g $, $\tau_{12},\tau_{13}, \tau_{23},\ldots,\tau_{1g},\tau_{2g}$, all the $\tau_{ab}$ can be determined from this set of Poincar\'e relations, this proposition follows from the proof of local functional independence of the corresponding $S_{jk}$, given in the next section. \section{Schottky-Jung and Poincar\'e: proof of the main theorem} In this section we will prove the main theorem, by showing that the lowest order terms of the expansion of $S_{jk}$ near ${\mathcal D}_g$ give a collection of functionally independent relations. \begin{proof}[Proof of the main theorem] We first consider the genus 4 case. In this case we only have one identity $S_{34}$, and lemma~\ref{lm:SR} yields $S_{34}=SJ^{\eta_4}(R_{34})$. The full Taylor series expansion of $S_{34}$ in the variables $\tau_{jk}$, near the point $\tau=\diag(t_1,t_2,t_3,t_4)\in{\mathcal D}_g$, for all $t_m$ fixed, is a series such that its lowest degree term has degree $8$ in the $\tau_{jk}$. 
By the computations in the previous section, this lowest degree term is equal to a non-zero multiple of the symmetrization~\eqref{eq:Pconjugates} of the Poincar\'e relation. Thus the whole series is not identically zero, and consequently its zero locus is of codimension one in $Th({\mathcal A}_g(4,8))$ --- hence of dimension 9. Since the zero locus of $S_{34}$ contains the irreducible 9-dimensional locus ${\mathcal J}_4$, it follows that ${\mathcal J}_4$ must be an irreducible component of this zero locus. We now consider the case of arbitrary genus $g$. By the proof of theorem~\ref{thm:Poincareholds} we know that the lowest order terms of the expansions of $S_{jk}$ near ${\mathcal D}_g$ are non-zero multiples of powers of the Poincar\'e relations for quadruples $(12jk)$. To prove that $S_{jk}$ are functionally independent near a generic point of ${\mathcal D}_g$, we can follow the idea of the argument given in~\cite[pp.~227ff]{rafabook} (which is possible now that we have found Schottky-Jung identities whose expansions give Poincar\'e relations, and now that we have handled the issue of signs, and ascertained suitable non-vanishing correctly). Consider the jacobian matrix of derivatives of the $S_{jk}$ with respect to the variables $\tau_{ab}$ with $3\leq a<b\leq g$, evaluated very close to a generic point of ${\mathcal D}_g$ --- i.e.~for $\tau_{ii}=t_i$ generic, for all $1\le i\le g$, and for $0<|\tau_{ab}|<\varepsilon\ll 1$, for all $1\leq a<b\leq g$. To compute $\partial S_{34}/\partial \tau_{ab}$ for $3\leq a<b\leq g$, note that the lowest degree term is always zero except for the case of $\partial S_{34}/\partial \tau_{34}$. Since each $S_{jk}$ is obtained from $S_{34}$ by permuting the columns, it follows that to lowest order the only non-zero partial derivative is $\partial S_{jk}/\partial \tau_{jk}$. Thus the jacobian matrix is diagonal, plus terms of higher order in the off-diagonal entries $\tau_{ab}$ of the period matrix.
Since we have assumed that all $|\tau_{ab}|<\varepsilon$, the determinant of this jacobian matrix is equal to the determinant of the lowest order diagonal matrix, plus some $O(\varepsilon)$. Since the determinant of the diagonal matrix is non-zero, for sufficiently small $\varepsilon$ it thus follows that the jacobian determinant $\det (\partial S_{jk}/\partial\tau_{ab})$ is non-zero, and thus that the equations $S_{jk}$ are functionally independent. \end{proof}
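As a closing sanity check, the elimination of the square-root sign ambiguity that leads to the polynomial form~\eqref{eq:Pconjugates} can be verified numerically, with generic positive numbers standing in for the monomials $P_i$ (a sketch, ignoring the $O(\varepsilon^9)$ error terms):

```python
# Multiplying sqrt(P1) + e2*sqrt(P2) + e3*sqrt(P3) over all four sign choices
# (e2, e3) in {+1, -1}^2 eliminates the square roots, yielding the polynomial
#   P1^2 + P2^2 + P3^2 - 2 P1 P2 - 2 P1 P3 - 2 P2 P3
# of (eq:Pconjugates). Check this on random generic positive values.
import random
random.seed(0)

for _ in range(100):
    P1, P2, P3 = (random.uniform(0.1, 10.0) for _ in range(3))
    s1, s2, s3 = P1 ** 0.5, P2 ** 0.5, P3 ** 0.5
    product = 1.0
    for e2 in (+1, -1):
        for e3 in (+1, -1):
            product *= s1 + e2 * s2 + e3 * s3
    poly = P1 ** 2 + P2 ** 2 + P3 ** 2 - 2 * P1 * P2 - 2 * P1 * P3 - 2 * P2 * P3
    assert abs(product - poly) < 1e-9 * max(1.0, abs(poly))
```

Only two of the three sign choices matter, since the overall sign of the relation is immaterial, which is why four factors suffice.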
\section*{Methods Section} {\em Fabrication:} Mylar (BoPET) films were purchased from McMaster--Carr (Mylar, 8567K96), and had a thickness of $h=0.127$ mm. To relieve any residual stress in the films, apparent from their natural curvature, we annealed the films in the oven at 85$^{\circ}$C under the weight of thick metal sheets for 2 hours, resulting in flat sheets. Vector patterns were drawn in Adobe Illustrator CS6, and cut with an Epilog Mini 24, 75W laser cutter in vector mode, at 80$\%$ speed and 10$\%$ power. Sheet widths of $w=40$mm, 60mm, 80mm, and 100mm were used, and sheet lengths of $L=20$mm to 200mm in linear increments of 20mm were used. For the single cut experiments, cut lengths ranging from $b=20$mm to 70mm were used. The cut mylar films were adhered to 3mm thick acrylic sheets (McMaster--Carr, acrylic, 8560K191) with cyanoacrylate glue (McMaster--Carr, Loctite 403, 74765A53), which served as the clamped boundary conditions for the films. {\em Mechanical Measurements:} Uniaxial tension tests were performed by clamping the mylar sheets to the Instron 5943 mechanical testing system, using a 500N load cell. Displacement-controlled tests were performed at a rate of 0.15mm/min to a maximum extension of 1.5mm. Since the mylar did not experience inelastic strains, actuation was reversible, and 3 tests were run for each sample. Actuator deformation was captured from the side with a microscopic lens (Navitar Zoom 6000) attached to a Nikon D610 camera, and from the front using a Nikon D610 camera with a Micro-NIKKOR 105mm f/2.8 Lens, a Nikon 55mm f/2.8 Lens, and a high contrast Rosco Color Filter (B\&H Photo Video, ROCEK1212). The critical buckling force was determined by identifying both the slope change in the force vs. displacement curve, and the out-of-plane deflection from the microscopic imaging of the crack profile.
{\em Finite Element Method (FEM):} FEM simulations were undertaken using COMSOL Multiphysics 5.2~\cite{COMSOL} along with the Structural Mechanics Module. Shell Mechanics and Plates were the environments within COMSOL in which all of our studies were performed. A geometry matching that used in the experiment was created in COMSOL's Design Module. Mesh refinement studies were undertaken to ensure convergence of the results. For the single cut geometry in figure~2, the sheet was modeled as an isotropic elastic thin sheet with thickness of $h = 0.127$mm, Young's Modulus $E = 3.5$GPa, and Poisson's ratio $\nu = 0.38$. The results shown in figure~2$c$ were obtained through linear buckling studies with varying thickness in the range $h\in\left[0.1\mathrm{mm},0.14\mathrm{mm}\right]$ and the values $b/w\in\left[0.3,0.8\right]$. The in-plane results shown in figure~2$a$ and post-buckling results in figures~2$d$, 3, and 4 were calculated from a stationary study with displacement ($\Delta$) controlled analysis. In order to induce out-of-plane symmetry breaking, we added random small imperfections (ten orders of magnitude smaller than the sheet thickness) to the initial surface. The parameters in figure~2$d$ were varied over the ranges $h\in\left[0.15\mathrm{mm},0.21\mathrm{mm}\right]$ and $b/w\in\left[0.1,0.9\right]$. For the results shown in figure~5, linear buckling studies were undertaken with the boundary at $x=0$ fixed in space while the boundary at $x=L$ had an imposed displacement of $\Delta$ in the $x$ direction. Neither of these boundaries was permitted to rotate. All other boundaries were free. Small imperfections in the form of the first eigenmodes were then added to the initially flat geometry through use of MeshPerturb 1.0 \cite{saha2014meshperturb}. These imperfect geometries were then used for the stationary studies with the same boundary conditions and the same mesh density as was used in the linear studies.
{\em Molecular Simulations:} We used the Sandia-developed open-source LAMMPS Molecular Dynamics Simulator to simulate graphene sheets~\cite{plimptonLAMMPS}. To describe the carbon-carbon interactions, we used the AIREBO potential~\cite{stuart2000reactive}, as has been done previously in atomistic studies of graphene kirigami~\cite{Qi2014}. The cutoffs for the Lennard-Jones and the REBO terms in the AIREBO potential were chosen to be 2~\AA~and 6.8~\AA, respectively. For MoS$_{2}$ actuators we used the Stillinger-Weber potential developed by Jiang~\cite{jiang-Nanotechnology-26-315706-2015}, which we have previously employed to study MoS$_{2}$ kirigami~\cite{hanakata-Nanoscale-8-458-2016}. Graphene with a single crack and the MoS$_{2}$ actuator were first relaxed for 50--200~ps at 4.2K within the NVT (fixed number of atoms $N$, volume $V$, and temperature $T$) ensemble. Non-periodic boundary conditions were applied in all three directions. After the relaxation, the strains were applied by displacing both ends at a uniform rate. \section*{Author contributions} M.A.D. and D.P.H. conceived the study and proposed the research; M.A.D. and D.R.K. performed the mathematical modeling and FEM in consultation with D.P.H.; M.P.M. and D.P.H. performed the experiments in consultation with M.A.D.; P.Z.H., D.K.C., and H.S.P. performed the Molecular Simulations studies; M.A.D. and D.P.H. wrote the main article; and all authors jointly edited the entire article. \section*{Acknowledgements} M.A.D. is grateful to M. Adda-Bedia, J. Bico, B. Roman, and K. Seffen for many insightful discussions. M.A.D. would also like to thank ENS-Lyon and ESPCI-Paris for hosting the author during part of the development of this manuscript and funding from the 4-VA program for collaborative research at JMU. D.P.H. is grateful to the National Science Foundation (CMMI CAREER--1454153) for financial support. M.A.D., D.R.K., and D.P.H. would like to thank I. Metta for the useful discussions at the start of this project. D.K.C.
acknowledges the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611, where part of this work was completed. D.R.K. acknowledges funding from the Academy of Finland. \section*{Conflict of interest} The authors declare no conflict of interest.
\section{INTRODUCTION} \label{sec:intro} Recent advances in deep learning have brought great improvements in solving many hard computer vision tasks. For many applications, databases with large amounts of data are available for training of deep neural networks (e.g. the KITTI Vision Benchmark Suite \cite{Geiger2012} for autonomous driving). For many industrial quality control applications, however, it is very hard to collect large amounts of data -- at least in early design stages before hardware installation. Images that are acquired in quality control vision applications typically depict very application-specific objects and often require some expert knowledge for interpretation. Labeling large amounts of data is costly and often fails due to the lack of domain experts. Additionally, specific defects may occur very rarely. The idea of using artificial training data has been addressed by several authors in the past. Using 3D renderings of street scenes based on a conventional gaming engine was proposed for autonomous driving \cite{johnson2016driving}. To perform 3D hand tracking based on 2D images, artificially generated data was used \cite{Mueller2018}. Heindl et al. \cite{heindlrobotpose} predict robot poses in photos from generated data. In this paper, we propose a novel structured approach for the use of artificial data to enable the application of machine learning methods for a specific quality control vision application: Monitoring of automated fiber placement (AFP) processes. The vision system involved is a laser triangulation sensor that is mounted on the lay-up machinery. This sensor delivers depth information immediately after placement of carbon fiber tows. We employ a probabilistic graphical model to describe the creation of depth maps. Expert knowledge and process parameters can easily be included in the model. Depth maps sampled from the model are used to train a deep neural network to perform the actual task of image segmentation.
A real data example is used to assess the segmentation performance of the deep neural network that was trained exclusively on artificial data. \section{RELATED WORK} \textbf{Data generation} The lack of annotated datasets for supervised machine learning has begun to impede the successful usage of such methods in industrial applications. To cope with this problem, a variety of methods have been proposed. Active learning methods \cite{druck2009active, settles2012active} better involve domain experts by presenting only those data samples of high value to the current training progress. Semi-supervised learning \cite{chapelle2009semi} transfers annotations from small labeled datasets to larger unlabeled ones by making task-specific smoothness assumptions. Weakly supervised learning \cite{zhou2017brief} attempts to infer precise labels from noisier, more global annotations, which are often easier to obtain. Transfer learning \cite{pan2010survey} limits the amount of required training data by adapting pre-trained models to specific tasks. In contrast, methods that make use of artificial data generation utilize a simulation engine that generates data along with ground truth labels. Such methods are gaining popularity \cite{johnson2016driving, peng2015learning, heindlrobotpose}, due to the availability of general purpose simulation engines. As in this work, the simulator is driven by samples from a probabilistic model that has to be designed specifically for the task at hand. \textbf{Segmentation} Image segmentation is the task of partitioning an image into regions of common characteristics. Early works include image thresholding \cite{davis1975region} and clustering \cite{jain2010data}. With the rise of Convolutional Neural Nets (CNNs), image segmentation \cite{chen2014semantic, milletari2016v} is understood as image-to-image conversion. U-Nets \cite{ronneberger2015u}, which this work builds upon, consist of contracting and expanding data paths.
This ensures that global context and local details are both exploited to calculate the final segmentation result. In contrast to the present work, quality inspection of automated fiber placement (AFP) is rarely formalized as a segmentation problem that is learned end-to-end. Frequently, pipelines consisting of handcrafted filters and feature detectors are proposed, which are tuned for specific measurement devices. Cemenska et al. \cite{cemenska2015automated} propose automatic detection of ply boundaries and tow ends based on laser profilometers. Juarez et al. \cite{juarez2016advances} study the usage of thermographic cameras for gap detection based on heat diffusion. Similarly, Denkena et al. \cite{denkena2016thermographic} use thermographic inspection to detect overlaps, gaps, twisted tows and bridges based on thresholding. \section{QUALITY CONTROL FOR AUTOMATED FIBER PLACEMENT} For production of carbon fiber reinforced plastics (CFRP) parts, typically layers of carbon fiber material are placed one layer after the other on some tooling. In automated fiber placement (AFP) this is done by large machines that are able to automatically lay up tows (``stripes'' of carbon fiber material). Typically, these machines are able to lay up 8, 16, or even 32 tows in parallel next to each other. In order to assure mechanical stability of the final parts, it must be verified that carbon fibers are placed correctly. There is a set of possible defects that relate either to incorrect placement of tows or to foreign objects. While our method generalizes to any measurement method, we propose the use of a laser triangulation sensor to perform inline quality control. The type of data acquired by such a system is a sequence of laser line profiles. Our system accumulates multiple such profiles into a depth map where individual pixels describe depth. We are not so much interested in absolute depth values.
Rather, we focus on small depth variations that reveal different defects on the surface. The specific surface defects that we address here are: \begin{itemize} \item{Gaps: Irregular larger spacings between neighboring tows.} \item{Overlaps: Irregular overlaps of tows that should actually be placed next to each other.} \item{Fuzzballs: Accumulations of carbon fibers that form small balls and fall onto the surface during lay-up.} \end{itemize} Besides the above defect types, we add regular tow surface to the list of possible class labels for the segmentation problem at hand. In order to perform quality control, it is necessary to assign one of these class labels to each pixel in the input depth map. Therefore, the problem that needs to be solved is classical image segmentation. \section{PROBABILISTIC MODEL} Bayesian networks allow modeling of complex joint probability distributions in terms of usually sparser conditional probabilities. The modeling process typically starts with setting up a list of relevant entities of the problem. In the next step, conditional probability distributions are defined to model the relationships between the different entities. For both steps, expert knowledge is exploited. \begin{figure} \begin{center} \begin{tabular}{c} \includegraphics[width=0.6\textwidth]{figures/probabilisticModel/ProbabilisticModel.pdf} \end{tabular} \end{center} \caption[probModel] { \label{fig:probModel}: Illustration of the probabilistic model. Random variables in different abstraction levels represent either high-level properties, labels (used as target output of the neural network), or actual images (used as input for the neural network). Pixel labels are visualized with different colors: gap (red), regular tow (green), overlap (blue), and fuzzball (yellow).} \end{figure} Figure \ref{fig:probModel} illustrates the structure of the Bayesian network. There are four layers that make up our probabilistic model. Each layer builds upon the previous one. 
The last layer directly outputs artificial depth maps. In the following, we outline details about each of the layers. \textbf{Global parameters:} At the top level, we model very basic properties of the image generation process: the size of the depth map $w \times h$ and the width of individual tows $t$. The tow width is specified in terms of pixels. We use fixed values for $w$, $h$, and $t$ for the experiments in this work. However, it would be possible to assign a probability distribution to the tow width in order to cover different sensor setups with varying field of view, resolution, or tow width. \begin{figure}[h] \begin{center} \begin{tabular}{c} \includegraphics[width=0.99\textwidth]{figures/trainingExamples/trainingExamples.png} \end{tabular} \end{center} \caption[trainingExamples] { \label{fig:trainingExamples}: Examples of artificial data: depth maps (gray scale) and labels (colored). Pixel labels are visualized with different colors: gap (red), regular tow (green), overlap (blue), and fuzzball (yellow). } \end{figure} \textbf{Tow geometry} Based on the global parameters, a rectilinear grid of control points is generated. This models the arrangement of individual tows in the simulated field of view. Each column of the grid corresponds to a single tow within the field of view. Initially, the spacing between control points is equal to the tow width $t$. In order to model small deviations of tows from a perfect rectilinear grid, we add random displacements to the control points. The displacement vector $\mathbf{D}$ (containing displacements along the x- and y-axes for all grid points) is assumed to be normally distributed with zero mean and a standard deviation of 3\% of the tow width $t$. Besides small deviations, we are interested in modeling gaps and overlaps as they might occur in a real production environment. Therefore, we randomly select a single column in the control grid and apply a larger horizontal shift $S$ to its points. 
We assume $S$ to be uniformly distributed in the range $[0.05t, 0.5t]$. We denote the final vector of control grid coordinates by $\mathbf{G}$. Besides gaps and overlaps, we intend to detect fuzzballs, which may occur practically at any location within the field of view. The fuzzball center $\mathbf{F}$ is assumed uniformly distributed across the depth map. We model the geometry of a fuzzball simply by a set of individual thin fibers at random locations near the center of the fuzzball. The first end-point of each fiber is chosen near the fuzzball center. The second end-point is calculated by adding a vector with orientation uniformly distributed on $[-\pi, \pi]$. The length $L$ of the vector is uniformly distributed on $[v, 2v]$, where we choose a fixed value of 30 pixels for $v$. \textbf{Plain depth map and labels} The next layer of the probabilistic model takes the geometric representation of tows and fuzzball and converts it into a depth map representation. To accomplish this, the individual tow control points are converted to polygonal contours of tows. The resulting polygons are filled with a specific depth value to make tows more elevated than the background. This is accomplished by a standard polygon filling algorithm. In the case of overlaps, the sequence in which the overlapping tows are added is important. In real data, an overlap has a sharp edge at the tow boundary of the top tow. At the edge of the lower tow, which is ``buried'' under the top tow, a smooth depth transition is typically observed. We account for this by explicitly applying a distance transform on one side of the overlap. We apply a sigmoid function across a fixed distance range to account for the smooth depth transition. To derive the depth map $\mathbf{Z}$, the fuzzball is added at its sampled location. This is done by simply accumulating the individual simulated fibers as thin lines with fixed thickness. In addition to $\mathbf{Z}$, a map of pixel labels $\mathbf{Y}$ is calculated. 
$\mathbf{Y}$ serves as ground truth for neural network training. \textbf{Further randomization} Real sensor data typically contains global geometry variations and exhibits some kind of texture. Our model accounts for both effects by additional randomized modifications of the depth map. The modifications are first calculated as separate depth maps. These are blended over the plain depth map $\mathbf{Z}$ to generate the final artificial depth map $\mathbf{X}$. In order to account for global (low frequency) surface variation, we calculate a linear ramp with a slope along the horizontal direction of the depth map. The ramp is zero at the horizontal center of the depth map and has a random slope $R$, which is assumed to be normally distributed with zero mean. It is difficult to create a probabilistic model that generates a rich set of textures similar to those observed in real data. We avoid explicit modeling of such textures. Instead, we take an image database of 232 photos of urban scenes, which we find to convey image frequencies similar to those of real data. We first convert these images to gray scale. Then, each depth map is blended with two randomly chosen gray-scale photos: the first image is blended over regions of the top layer of tows, the second image is blended over regions of the bottom layer. By adding the above modifications, we force the subsequent neural network training to focus on the relevant content (signal) and to ignore global depth variations, texture, or noise. Two examples of generated training samples (depth map and labels) are shown in figure \ref{fig:trainingExamples}. \section{INFERENCE VIA NEURAL NETWORK} It is easily possible to draw samples from the joint probability distribution $\mathrm{P}(S, \mathbf{F}, \ldots, \mathbf{X}, \mathbf{Y}, \mathbf{Z})$ of the probabilistic model outlined above. 
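Such ancestral sampling of the geometry layers can be sketched in a few lines (a minimal illustration, not our implementation; the grid size, fiber count, and start-point jitter are assumptions made for this sketch, while the 3\% displacement standard deviation, the range of $S$, and $v = 30$ pixels follow the text):

```python
import numpy as np

def sample_tow_grid(w=300, t=30, rng=None):
    """Sample tow-column x-positions: a rectilinear grid with small
    Gaussian displacements (std = 3% of tow width t) plus one larger
    shift S producing a gap or an overlap."""
    rng = np.random.default_rng() if rng is None else rng
    n_cols = w // t + 2                              # columns covering the field of view
    cols = np.arange(n_cols) * float(t)              # perfect rectilinear spacing
    cols += rng.normal(0.0, 0.03 * t, size=n_cols)   # small deviations D
    col = rng.integers(0, n_cols)                    # randomly selected column
    s = rng.uniform(0.05 * t, 0.5 * t)               # S ~ U[0.05t, 0.5t]
    cols[col] += s * rng.choice([-1.0, 1.0])         # shift right (gap) or left (overlap)
    return cols

def sample_fuzzball(w=300, h=200, n_fibers=40, v=30, rng=None):
    """Sample fuzzball fibers: center F uniform over the map, fiber
    orientations uniform on [-pi, pi], lengths L uniform on [v, 2v]."""
    rng = np.random.default_rng() if rng is None else rng
    center = rng.uniform([0.0, 0.0], [float(w), float(h)])
    starts = center + rng.normal(0.0, 0.1 * v, size=(n_fibers, 2))  # near the center
    theta = rng.uniform(-np.pi, np.pi, size=n_fibers)
    length = rng.uniform(v, 2.0 * v, size=n_fibers)
    ends = starts + length[:, None] * np.column_stack([np.cos(theta), np.sin(theta)])
    return starts, ends
```

Rasterizing the sampled columns as filled polygons and the fibers as thin lines would then yield a plain depth map $\mathbf{Z}$ as described above.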
For quality control, we are interested in the conditional probability (or maximum a-posteriori assignments) of the labels $\mathbf{Y}$ given the observed depth map $\mathbf{X}$: $\mathrm{P}(\mathbf{Y} \mid \mathbf{X})$. We use a neural network to learn a distribution $\mathrm{Q}(\mathbf{Y} \mid \mathbf{X})$ that seeks to approximate this term. We deploy a neural network with an architecture similar to U-Nets \cite{ronneberger2015u}. U-Nets are instances of a broader class of so-called pixel-to-pixel networks. In the context of segmentation, U-Nets are fed an input image that is transformed into an output image corresponding to the segmentation of the input. The input image undergoes a sequence of down-sampling and up-sampling steps. In addition, activations are forwarded from layers before down-sampling to layers after up-sampling. Down-sampling and up-sampling support the use of context information. Forwarding maintains spatial resolution. Our model is illustrated in figure \ref{fig:model}. The main building blocks are encoding blocks and decoding blocks. The internal structure of both is shown on the right in figure \ref{fig:model}. In the first encoding block E$^*$ the max pooling operation is skipped. The dimensions of tensors specified in the figure are only examples. In general, the width and height of activations in each layer are half of those of the layer above. The number of features in each layer is twice that of the layer above. Our model contains some modifications compared to the original U-Net architecture. Instead of transposed convolutions, we use simple spatial up-scaling in the decoding blocks. We avoid spatial shrinkage by introducing padding steps after convolution and after up-scaling. Therefore, we do not need to crop activations that are forwarded from encoding blocks to decoding blocks at the same level. The output of our model has the same width and height as the input image. 
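The encoder/decoder structure described above can be sketched in PyTorch (a simplified two-level variant for illustration only; the number of levels and feature widths are our assumptions and do not reproduce the exact architecture of figure \ref{fig:model}):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvBlock(nn.Module):
    """Two padded 3x3 convolutions; padding keeps the spatial size."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))
    def forward(self, x):
        return self.net(x)

class MiniUNet(nn.Module):
    """Two-level encoder/decoder with skip connections and plain up-scaling
    (instead of transposed convolutions), as described in the text."""
    def __init__(self, n_classes=4, base=16):
        super().__init__()
        self.enc0 = ConvBlock(1, base)                 # E*: no pooling before it
        self.enc1 = ConvBlock(base, 2 * base)          # features double per level
        self.bottom = ConvBlock(2 * base, 4 * base)
        self.dec1 = ConvBlock(4 * base + 2 * base, 2 * base)
        self.dec0 = ConvBlock(2 * base + base, base)
        self.head = nn.Conv2d(base, n_classes, 1)      # per-pixel class scores
    def forward(self, x):
        e0 = self.enc0(x)
        e1 = self.enc1(F.max_pool2d(e0, 2))
        b = self.bottom(F.max_pool2d(e1, 2))
        d1 = self.dec1(torch.cat([F.interpolate(b, scale_factor=2), e1], dim=1))
        d0 = self.dec0(torch.cat([F.interpolate(d1, scale_factor=2), e0], dim=1))
        return self.head(d0)                           # same height/width as the input
```

Because padded convolutions and plain up-scaling preserve spatial size, the forwarded activations need no cropping and the output matches the input resolution.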
We convert the per-pixel class scores to probability distributions over classes using the soft-max operator. During training, we minimize the pixel-wise cross-entropy between the true distribution $\mathrm{P}(Y_{i,j} \mid \mathbf{X})$ and its approximation $\mathrm{Q}(Y_{i,j}\mid \mathbf{X})$ given by \begin{equation} H(\mathrm{P},\mathrm{Q})=H(\mathrm{P})+D_{\mathrm{KL}}(\mathrm{P}||\mathrm{Q}) \end{equation} where $H$ denotes the (cross-)entropy and $D_\mathrm{KL}$ is the Kullback–Leibler (KL) divergence. Since $H(\mathrm{P})$ does not depend on the network parameters, training our neural network is equivalent to minimizing the KL divergence $D_{\mathrm{KL}}\left[\mathrm{P}(\mathbf{Y} \mid \mathbf{X})||\mathrm{Q}(\mathbf{Y} \mid \mathbf{X})\right]$, where the distribution $\mathrm{Q}$ factors into \begin{equation} \mathrm{Q}(\mathbf{Y} \mid \mathbf{X}) = \prod\limits_{i,j \in \Omega}\mathrm{Q}(Y_{i,j}\mid \mathbf{X}) \end{equation} where $\Omega$ is the image domain. \begin{figure} \begin{center} \begin{tabular}{c} \includegraphics[width=0.99\textwidth]{figures/network/networkModel.pdf} \end{tabular} \end{center} \caption[trainingExamples] { \label{fig:model}: Network architecture used for segmentation involves encoding (E) and decoding (D) blocks. In the first encoding block E$^*$ the max pooling operation is skipped. The right part of the figure shows the internal structure of encoding and decoding blocks. The sizes of activations (height $\times$ width $\times$ features) shown in the figure are examples. See text for details. } \end{figure} \section{RESULTS} We train a neural network on 5000 artificial examples. Each of these examples consists of an artificial depth map and corresponding pixel-wise labels (i.e., the ground truth segmentation). After training, the network is validated on a set of 1000 unseen artificial examples and a single real depth map. The input and output size of the artificial validation data is equal to that of the training data: 200x300 pixels. The real depth map has a size of 200x800 pixels. 
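A confusion matrix over pixel labels, expressed as percentages of the total pixel count as in the tables below, can be computed for example as follows (a small sketch; the function name and the integer class encoding 0--3 are our own):

```python
import numpy as np

def pixel_confusion(y_true, y_pred, n_classes=4):
    """Confusion matrix as percentages of the total pixel count.

    Rows index the predicted class, columns the ground-truth class,
    matching the layout of the tables. Labels are integers in [0, n_classes).
    """
    idx = n_classes * y_pred.ravel() + y_true.ravel()
    counts = np.bincount(idx, minlength=n_classes * n_classes)
    cm = counts.reshape(n_classes, n_classes).astype(float)
    return 100.0 * cm / cm.sum()
```

The overall pixel accuracy is then the sum of the diagonal elements, `np.trace(pixel_confusion(y_true, y_pred))`.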
Segmentation results are shown in figure \ref{fig:segResults} for artificial (left) and real (right) data. The top row shows the (normalized) depth maps which are used as input for the neural network. The center row shows ground truth labels. Ground truth for artificial data is directly derived from hidden variables of the probabilistic model. For the real depth map, ground truth comes from manual labeling. The bottom row shows the output of the neural network. Details of segmentation performance for validation are shown in tables \ref{tab:confusionArtificial} and \ref{tab:confusionReal}. These tables represent confusion matrices for ground truth and predicted labels as percentages of the total number of pixels. The fraction of correctly classified pixels (sum of diagonal elements in the confusion matrix) is on average 99.4\% for 1000 unseen artificial examples and 95.0\% for a depth map acquired by a real laser triangulation sensor. All experiments are conducted on a computer with two Intel Xeon E5-2650v4 12-core CPUs and an NVIDIA Tesla V100 SXM2 32 GB GPU. Total training takes 3 hours and 3 minutes, of which artificial data generation consumes 38 minutes. Averaged over 100 runs, a network forward pass takes 6.66 ms (standard deviation: 1.04 ms) for input depth maps of 200x300 pixels. For a real depth map of size 200x800 pixels, the forward pass takes 15.10 ms (standard deviation: 0.72 ms). \begin{table}[h] \caption{Confusion matrix for 1000 unseen artificial test examples. 
The numbers represent percentages of the total number of pixels.} \label{tab:confusionArtificial} \begin{center} \begin{tabular}[t]{ccc|c|c|c|c|} & & \multicolumn{4}{c}{\textbf{Ground truth}} \\ \cline{3-7} {\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{Prediction~~~~}}}} & & \multicolumn{1}{|c|}{\textbf{Gap}} & \textbf{Tow} & \textbf{Overlap} & \textbf{Fuzzball} & \textbf{\textbf{$\sum$}} \\ \cline{2-7} & \multicolumn{1}{|c|}{\textbf{Gap}} & 10.01 & 0.06 & 0.00 & 0.00 & 10.07 \\ \cline{2-7} & \multicolumn{1}{|c|}{\textbf{Tow}} & 0.29 & 78.04 & 0.12 & 0.02 & 78.48 \\ \cline{2-7} & \multicolumn{1}{|c|}{\textbf{Overlap}} & 0.00 & 0.05 & 6.44 & 0.00 & 6.49 \\ \cline{2-7} & \multicolumn{1}{|c|}{\textbf{Fuzzball}} & 0.00 & 0.02 & 0.00 & 4.93 & 4.96 \\ \cline{2-7} & \multicolumn{1}{|c|}{\textbf{$\sum$}} & 10.30 & 78.18 & 6.56 & 4.96 & 100.00 \\ \cline{2-7} \end{tabular} \end{center} \end{table} \begin{table}[h] \caption{Confusion matrix for real sensor data. The numbers represent percentages of the total number of pixels.} \label{tab:confusionReal} \begin{center} \begin{tabular}[t]{ccc|c|c|c|c|} & & \multicolumn{4}{c}{\textbf{Ground truth}} \\ \cline{3-7} {\multirow{5}{*}{\rotatebox[origin=c]{90}{\textbf{Prediction~~~~}}}} & & \multicolumn{1}{|c|}{\textbf{Gap}} & \textbf{Tow} & \textbf{Overlap} & \textbf{Fuzzball} & \textbf{\textbf{$\sum$}} \\ \cline{2-7} & \multicolumn{1}{|c|}{\textbf{Gap}} & 1.23 & 1.39 & 0.00 & 0.02 & 2.64 \\ \cline{2-7} & \multicolumn{1}{|c|}{\textbf{Tow}} & 0.60 & 91.12 & 0.76 & 0.13 & 92.60 \\ \cline{2-7} & \multicolumn{1}{|c|}{\textbf{Overlap}} & 0.00 & 0.64 & 0.62 & 0.01 & 1.27 \\ \cline{2-7} & \multicolumn{1}{|c|}{\textbf{Fuzzball}} & 0.01 & 1.41 & 0.00 & 2.07 & 3.49 \\ \cline{2-7} & \multicolumn{1}{|c|}{\textbf{$\sum$}} & 1.84 & 94.56 & 1.37 & 2.23 & 100.00 \\ \cline{2-7} \end{tabular} \end{center} \end{table} \begin{figure} \begin{center} \begin{tabular}{c} \includegraphics[width=0.99\textwidth]{figures/result/results.pdf} \end{tabular} 
\end{center} \caption[segResults] { \label{fig:segResults}: Results of segmentations of unseen artificial data (left) and real data (right). Neural network input (top), ground truth (center), and neural network output (bottom). Pixel labels are visualized with different colors: gap (red), regular tow (green), overlap (blue), and fuzzball (yellow). } \end{figure} \section{CONCLUSIONS AND FUTURE WORK} In this paper, we propose a probabilistic model that enables a structured approach to the creation of artificial training data. A deep neural network inspired by the U-Net architecture is used to infer pixel labels from observed depth maps. In general, this approach follows the concept of analysis by synthesis. The focus is put on synthesis, i.e., the artificial data-generating model. The related inference problem is tackled with a deep neural network. We consider this approach appealing because: (1) it enables the use of powerful machine learning techniques even if no real data is available; (2) no tedious manual labeling is required; (3) expert knowledge is directly exploited for the design of the probabilistic model. Results so far indicate that segmentation quality is lower on real data than on artificial data. At least to some extent, this can be explained by the fact that the probabilistic model does not exactly describe the real data-generating process. In future work, we plan to investigate in more detail how individual parts of the probabilistic model influence segmentation performance. This might help to better understand which aspects are most important when designing probabilistic models for similar applications. \acknowledgments Work presented in this paper has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 721362 (project “ZAero”) and by the European Union in cooperation with the State of Upper Austria within the project “Investition in Wachstum und Beschäftigung” (IWB).
\section{Introduction}\label{intro} For a finite field $\mathbb{F}$, it is a basic result of Galois theory that $\mathbb{F}(x)/\mathbb{F}$ is not a Galois extension.\footnote{While some authors consider only algebraic Galois extensions, here (in the transcendental case) we apply the more general definition used in \cite{hun80}, whereby an extension $K\subset F$ is Galois if the fixed field of $\text{Aut}(F/K)$ is $K$ itself.} In other words, if $G = \mbox{Aut}(\mathbb{F}(x)/\mathbb{F})$ and $E$ is the fixed field of $G$, then $\mathbb{F} \subsetneq E$ is a proper extension. This follows from the fact that $G$ is finite, which implies (by a well-known theorem due to Artin \cite[p.~252]{hun80}) that \begin{eqnarray}\label{artin} [\mathbb{F}(x):E] = |G|, \end{eqnarray} whereas $[\mathbb{F}(x):\mathbb{F}] = \infty$.\footnote{This fact is the answer to an exercise in Thomas Hungerford's \emph{Algebra}, which motivates the present work \cite[p.~256, ex.\:9b]{hun80}. Thanks are due to Professor Khalid Bou-Rabee, in whose class the exercise was encountered.} However, constructing $E$ for a general finite field is more difficult than establishing this fact.\\ In this expository paper, we construct the fixed field $E$ of $\mbox{Aut}(\mathbb{F}(x)/\mathbb{F})$ for all finite fields $\mathbb{F}$, demonstrating that it is an extension of the form $\mathbb{F}(f)$, where $f\in \mathbb{F}(x)$ is explicitly determined. The generator $f$ is shown to be easily computable in all cases using a simple formula.\\ A more concise elementary construction of $E$, due to Paul Rivoire, may be found in \cite{paul}, in which the fixed fields of the subgroups of $\mbox{Aut}(\mathbb{F}(x)/\mathbb{F})$ are also constructed. However, our exposition may be more accessible to the modern undergraduate student, and presents the generator $f$ in a simple summation formula not given in \cite{paul}. 
One can also construct $E$ using more sophisticated machinery from the theory of algebraic function fields; for insight into this approach, we refer the reader to \cite[p.~237, ex.\:6.9]{henning}.\\ The strategy will be to look for an element $f \in E$ such that \linebreak $[\mathbb{F}(x):\mathbb{F}(f)] = |G|$, implying that $\mathbb{F}(f) = E$ by equation \eqref{artin}. \section{Definitions and Notation}\label{defs} The following will hold throughout. \begin{enumerate} \item \label{defm}Let $q = p^n$, where $p$ is prime and $n \in \mathbb{N}$. \item The field $\mathbb{F}$ will always refer to $\mathbb{F}_q$, the unique (up to isomorphism) field containing $q$ elements. \item For a field $F$, $F(x)$ (resp. $F[x]$) denotes the field of rational functions (resp. ring of polynomials) in one indeterminate over $F$. The symbol `x' will only be used in this context. \item For fields $K\subset F$, $\mbox{Aut}(F/K)$ denotes the group of automorphisms of $F$ which fix $K$. Let $G = \mbox{Aut}(\mathbb{F}(x)/\mathbb{F})$. \item Let $E$ denote the fixed field of $G$. \item For $k\in \mathbb{N}$, let $f_k\in \mathbb{F}(x)$ be defined as follows: \[f_k := \sum_{\varphi \in G} \varphi(x)^k.\] \end{enumerate} \begin{example} \em For $\mathbb{F} = \mathbb{F}_2$, $$f_k = x^k + (x+1)^k + \frac{1}{x^k} + \frac{1}{(x+1)^k} + \left (\frac{x+1}{x}\right )^k + \left (\frac{x}{x+1}\right )^k$$ (see Fact \ref{aut} below for a characterization of the elements of $G$). \end{example} \section{Main Results} For any $\sigma \in G$, $k\in \mathbb{N}$ we have $$\sigma(f_k) = \sigma \left ( \sum_{\varphi \in G} \varphi(x)^k \right ) = \sum_{\varphi \in G} [\sigma\! \circ\! \varphi(x)]^k = \sum_{\varphi \in G} \varphi(x)^k = f_k$$ so that $f_k \in E$. It is thus reasonable to inquire as to whether $f_k$ generates $E$ for a suitable $k$. In the following result, we will show that this is indeed the case. \begin{theorem}\label{thm1} Let $m = q^2-1$. 
Then \[E = \mathbb{F}(f_m)\] (where $E$, $\mathbb{F}$, and $f_k$ are as defined in section \ref{defs}). \end{theorem} While this theorem provides a generator for $E$ over $\mathbb{F}$, it will soon be clear that the definition of $f_m$ in section \ref{defs} is not practical for computation (at least in all but the smallest finite fields). The other main result addresses this with a surprisingly simple formula for $f_m$: \begin{theorem}\label{formula} Let $$\theta_i = \begin{cases} 2 & \mbox{for } i = q,2q,\ldots ,q^2 \\ 1 & \mbox{otherwise}\end{cases}$$ and let \begin{eqnarray*} g &=& \sum_{i = 0}^{q(q+1)} \theta_i x^{i(q-1)}\\ h &=& \sum_{i = 1}^q x^{iq(q-1)}. \end{eqnarray*} Then $g,h$ are relatively prime and $f_m = g/h$. \end{theorem} Theorem \ref{formula} has the somewhat surprising consequence that $E\subset \mathbb{F}_p(x)$, since the coefficients of $f_m$ are shown to be either 1 or 2 (or just 1, in characteristic 2).\\ We postpone the proofs of these theorems in order to establish some preliminary facts and lemmas. Note that $m = q^2-1$ will henceforth be fixed. \section{Facts and Lemmas} We recall the following well-known facts. In most cases, complete proofs are omitted in favor of a brief justification. \begin{fact}\label{artinian} For any field extension $K \subset F$, if $H$ is a finite subgroup of $\mbox{Aut}(F/K)$ and $H'$ the fixed field of $H$, then $$[F:H'] = |H|.$$ \end{fact} This is a generalization of equation \eqref{artin} in section \ref{intro}, which follows from identical reasoning. \begin{fact}\label{div} For any $h\in E$, if $[\mathbb{F}(x):\mathbb{F}(h)] < \infty$, then $[\mathbb{F}(x):\mathbb{F}(h)]$ is divisible by $|G|$. \end{fact} This follows immediately from Fact \ref{artinian}, since \[[\mathbb{F}(x):\mathbb{F}(h)] = [\mathbb{F}(x):E][E:\mathbb{F}(h)] = |G|[E:\mathbb{F}(h)].\] \begin{fact}\label{frob} The map $\phi:\mathbb{F}(x) \rightarrow \mathbb{F}(x)$ given by $\phi(s) = s^p$ is an endomorphism of fields. 
\end{fact} It is clear that $\phi$ is an endomorphism of the multiplicative group of $\mathbb{F}(x)$, so it remains only to verify that $\phi$ is an endomorphism of the additive group. Observing that $\binom{p}{n} = 0$ for $n = 1,2,\ldots,p-1$, this follows immediately from an application of the binomial theorem to $(s+t)^p$. This is known as the Frobenius endomorphism. \begin{fact}\label{fermat}(Fermat's little theorem) For all $\alpha \in \mathbb{F}$, we have \[\alpha^q = \alpha\] If $\alpha\neq 0$, we also have \[\alpha^{q-1} = 1\] \end{fact} The second identity, of which the first is an immediate consequence, follows from the fact that the multiplicative group of a finite field is cyclic \cite[p.~279]{hun80}. Since this result implies that each $\alpha \in \mathbb{F}$ is a root of $x^q-x$, we obtain the following corollary: $$\prod_{\alpha \in \mathbb{F}} (x-\alpha) = x^q-x$$ \begin{fact}\label{sum} \[\sum_{\alpha \in \mathbb{F}} \alpha^k = \begin{cases} -1 &\mbox{ if } (q-1)\big | k \\ \;\;\:0 &\mbox{ otherwise}\end{cases}\] \end{fact} \begin{proof} Recalling that the multiplicative group of $\mathbb{F}$ is cyclic (see Fact \ref{fermat}), let $y$ be a generator for this group and let $(q-1)\! \not \big | \:k$. Then \[\sum_{\alpha \in \mathbb{F}} \alpha^k = \sum_{i=1}^{q-1} y^{ik} = \frac{y^{qk}-y^k}{y^k - 1}\] where the denominator on the right hand side is assured to be nonzero by our assumption that $q-1$ not divide $k$. It follows from Fact \ref{fermat} that the numerator on the right hand side is zero, proving the ``otherwise" case. 
If $(q-1)\big |k$, then by Fact \ref{fermat} we have \[\sum_{\alpha \in \mathbb{F}} \alpha^k = \sum_{\substack{\alpha \in \mathbb{F} \\ \alpha \neq 0}} 1 = q-1 = -1\] completing the proof.\\ \end{proof} \begin{fact} \label{degfact} If $F$ is any field, $g,h\in F[x]$ are relatively prime and $g/h \not \in F$, then \[[F(x):F(g/h)] = \mbox{Max}\{\mbox{deg }g, \mbox{deg }h\}\] \end{fact} \begin{proof} Let $\psi \in F(g/h)[t]$ be defined as \[\psi(t) = \frac{g(x)}{h(x)} h(t) - g(t)\] Then $\psi\neq 0$ (since $g/h\not \in F$) and $\psi(x) = 0$. We wish to show that $\psi$ is irreducible over $F(g/h)$.\\ Since $g/h$ is transcendental over $F$ (because $g/h\not \in F$ \cite[p.~233]{hun80}) we may replace $g/h$ with the indeterminate $z$. The resulting polynomial $zh(t)-g(t)$ is primitive over $F[z]$ (because $h,g$ are relatively prime) so by Gauss' lemma \cite[p.~163]{hun80} it suffices to show that $zh(t)-g(t)$ is irreducible over $F[z]$. Let us assume it is not. As this polynomial is linear in $z$, there must be a factorization of the form \[zh(t)-g(t) = [z a(t)+b(t)]c(t)\] with $a,b,c \in F[t]$, which implies that $c(t)$ divides both $g$ and $h$. But this contradicts our assumption that $g,h$ are relatively prime, so $\psi$ is irreducible over $F(g/h)$.\\ Since $x$ is a root of $\psi$, which we have just shown to be irreducible over $F(g/h)$, we have \[[F(x):F(g/h)] = [F(g/h)(x):F(g/h)] = \mbox{deg } \psi = \mbox{Max}\{\text{deg } g,\text{deg } h\}\] which completes the proof. \end{proof} \begin{fact}\label{aut} For any field $F$, the elements $\varphi$ of $\mbox{Aut}(F(x)/F)$ are determined by equations of the form \[\varphi(x) = \frac{ax+b}{cx+d}\] where $ad \neq bc$. \end{fact} \begin{proof} Let $\varphi \in \mbox{Aut}(F(x)/F)$, and let $g,h\in F[x]$ be relatively prime and such that $\varphi(x) = g/h$. 
Then $F(g/h) = \text{Im } \varphi = F(x)$, so that Fact \ref{degfact} implies that \[\text{Max}\{\text{deg } g,\text{deg }h\} = [F(x):F(g/h)] = 1.\] Thus, $g/h = \frac{ax+b}{cx+d}$ for some $a,b,c,d\in F$, with $ad \neq bc$ (this restriction is equivalent to the requirement that $ax+b$ and $cx+d$ be relatively prime and not both constant).\\ Conversely, given $a,b,c,d\in F$ where $ad\neq bc$, let $\varphi: F(x) \rightarrow F(x)$ be the homomorphism induced by $x\mapsto \frac{ax+b}{cx+d}$. Since $\frac{ax+b}{cx+d}\not \in F$, it follows that $\frac{ax+b}{cx+d}$ is transcendental over $F$, so $\varphi$ is injective. Moreover, $\text{Im }\varphi = F(\frac{ax+b}{cx+d})$, so by Fact \ref{degfact} $[F(x):\text{Im }\varphi] = 1$, implying that $\varphi$ is surjective and hence $\varphi \in \mbox{Aut}(F(x)/F)$. \end{proof} \begin{fact}\label{order} $|G| = (q+1)q(q-1)$ \end{fact} This can be demonstrated in a number of ways. For convenience, we first show that \begin{eqnarray}\label{PGL}G \cong \mbox{PGL}_2(\mathbb{F}), \end{eqnarray} from which the statement follows immediately using the well-known formula for $|\mbox{PGL}_2(\mathbb{F})|$. To show that \eqref{PGL} is true, let $\varphi_{abcd}$ be the element of $G$ which maps $x \mapsto \frac{ax+b}{cx+d}$. In view of Fact \ref{aut}, it is easily verified that the map $\mu: GL_2(\mathbb{F})\rightarrow G$, where $$\left (\begin{array}{cc} a & c \\ b & d \end{array}\right ) \overset{\mu}{\longmapsto} \varphi_{abcd},$$ is a surjective homomorphism. Since $\mbox{Ker}\,\mu$ consists of all multiples of the identity, \eqref{PGL} follows immediately.\\ \subsection{}\label{subsec} In light of Fact \ref{aut}, we may write $f_k$ as follows \begin{eqnarray}\label{kform} f_k = \sum_{\substack{a,b \in \mathbb{F}\\a \neq 0}} (ax + b)^k + \sum_{\substack{a,b,c \in \mathbb{F}\\ ac \neq b}} \left(\frac{ax+b}{x+c}\right)^k. 
\end{eqnarray} Thus, letting $h_k = \prod_{c\in \mathbb{F}}(x+c)^k = (x^q-x)^k$ (where the second equation follows from the corollary to Fact \ref{fermat}), we obtain $f_k = g_k/h_k$ for an appropriate $g_k \in \mathbb{F}[x]$. If the terms of degree $k$ in the first sum on the right hand side of \eqref{kform} do not sum to zero, the highest degree term of $g_k$ will come from $$h_k\sum_{\substack{a,b \in \mathbb{F}\\a \neq 0}} (ax + b)^k = (x^q-x)^k\sum_{\substack{a,b \in \mathbb{F}\\a \neq 0}} (ax + b)^k $$ so that $\mbox{deg }g_k = k(q+1)$. Thus, $[\mathbb{F}(x):\mathbb{F}(f_k)] \leq k(q+1)$ by Fact \ref{degfact}. Since we need $[\mathbb{F}(x):\mathbb{F}(f_k)] = |G| = (q+1)q(q-1)$, this means that, roughly speaking, we are looking for some $k \geq q(q-1)$ such that the high degree terms of $$\sum_{\substack{a,b \in \mathbb{F}\\a \neq 0}} (ax + b)^k$$ do not sum to zero. It turns out that most choices of $k$ do not have the desired property, because \begin{eqnarray}\label{degofg} \sum_{\substack{a,b \in \mathbb{F}\\a \neq 0}} (ax + b)^k = \sum_{\substack{a\in \mathbb{F}\\a \neq 0}} a^k\sum_{b \in \mathbb{F}} (x + b/a)^k = \left (\sum_{a\in \mathbb{F}} a^k\right )\sum_{b \in \mathbb{F}} (x + b)^k \end{eqnarray} (where the second equation holds because $b/a$ runs through every element of $\mathbb{F}$, and the terms in which $a=0$ are included on the right hand side as they contribute nothing to the sum) and by Fact \ref{sum} the right hand side is zero whenever $(q-1)\! \not \big | \:k$. Hence, $m = (q-1)(q+1)$ is a promising candidate. The following two lemmas formalize these notions, and will finally allow us to prove Theorem \ref{thm1}. \begin{lemma}\label{cond} If $g,h \in \mathbb{F}[x]$ are polynomials such that $g/h \in E$ and $\mbox{deg }h < \mbox{deg }g < 2|G|$, then $$E = \mathbb{F}(g/h).$$ \end{lemma} \begin{proof} Suppose that the above conditions hold, and let $g/h = g'/h'$ with $g',h'$ relatively prime. 
By Fact \ref{degfact}, \[[\mathbb{F}(x):\mathbb{F}(g/h)] = \mbox{Max}\{\mbox{deg }g', \mbox{deg }h'\} = \mbox{deg }g' \leq \mbox{deg }g < 2|G|.\] Since $\mbox{deg }h < \mbox{deg }g$, it follows that $g/h \in E\setminus \mathbb{F}$, so that $[\mathbb{F}(x):\mathbb{F}(g/h)] < \infty$. Thus, Fact \ref{div} implies that $$|G| \Big| [\mathbb{F}(x):\mathbb{F}(g/h)] < 2|G| $$ proving that $[\mathbb{F}(x):\mathbb{F}(g/h)] = |G| = [\mathbb{F}(x):E]$. Since $\mathbb{F}(g/h) \subset E$, the desired result follows. \end{proof} \begin{lemma}\label{degreelem} Let $k \in \mathbb{N}$ be a multiple of $q-1$. Then there exists a polynomial $g_k \in \mathbb{F}[x]$ such that $$f_k = g_k/(x^q-x)^k$$ and $\mbox{deg }g_k = k(q+1)$. \end{lemma} As this is essentially a restatement of the arguments presented at the beginning of the present section, the proof is omitted. \section{Proof of Main Results} \begin{proof1} By Lemma \ref{degreelem}, we have $$f_m = g_m/(x^q-x)^m$$ where $\mbox{deg }g_m = m(q+1) = |G| + m$ (by Fact \ref{order}). Since $|G| < |G|+m < 2|G|$, an application of Lemma \ref{cond} completes the proof. \end{proof1} \begin{example} \em Let $\mathbb{F} = \mathbb{F}_2 = \mathbb{Z}_2$, so that $|G| = 6$ and \[f_m = f_3 = x^3 + (x+1)^3 + \frac{1}{x^3} + \frac{1}{(x+1)^3} + \left (\frac{x+1}{x}\right )^3 + \left (\frac{x}{x+1}\right )^3 \] which, with a little work, simplifies to \[f_m = \frac{x^6+x^5+x^3+x+1}{x^4+x^2}.\] \end{example} It is clear that such computations are prohibitively complex for larger fields. In order to derive the simple formula of Theorem \ref{formula}, the following technical lemmas are required. \begin{lemma}\label{AB} For $A, B \in \mathbb{F}(x)$, $k\in \mathbb{N}$ we have \[(A-B)^{p^k-1} = A^{p^k-1}+A^{p^k-2}B + \ldots +A B^{p^k-2}+B^{{p^k-1}}\] \end{lemma} \begin{proof} If $A=B$, then $$A^{p^k-1}+A^{p^k-2}B+ \ldots + A B^{p^k-2}+ B^{{p^k-1}} = p^k A^{p^k-1} = 0 = (A-B)^{p^k-1}$$ and the statement holds. 
If $A \neq B$, then \begin{eqnarray*} (A-B)^{p^k-1} &=& \frac{(A-B)^{p^k}}{A - B} = \frac{A^{p^k}-B^{p^k}}{A - B} = A^{p^k-1}+A^{p^k-2}B+ \ldots + A B^{p^k-2}+ B^{{p^k-1}} \end{eqnarray*} (where the second equation follows from Fact \ref{frob}) completing the proof.\\ \end{proof} \begin{lemma} If $(q-1)\big |k$, then \begin{eqnarray}\label{tech1} f_k = 1 - \left (1+\sum_{b \in \mathbb{F}} (x - b)^k \right) \left ( 1 + \sum_{c \in \mathbb{F}} \frac{1}{(x - c)^k}\right ).\end{eqnarray} \end{lemma} \begin{proof} First, note that the right hand side of \eqref{tech1} is equal to \begin{eqnarray}\label{plus} 1 - \left (1+\sum_{b \in \mathbb{F}} (x + b)^k \right) \left ( 1 + \sum_{c \in \mathbb{F}} \frac{1}{(x + c)^k}\right ) \end{eqnarray} since $-b,-c$ run through all elements of $\mathbb{F}$. Starting with equation \eqref{kform} in section \ref{subsec}, let us separate the terms in the rightmost sum according to whether or not the numerator is constant, obtaining $$f_k = \sum_{\substack{a,b \in \mathbb{F}\\a \neq 0}} (ax + b)^k + \sum_{\substack{b,c \in \mathbb{F}\\ b \neq 0}} \left(\frac{b}{x+c}\right)^k + \sum_{\substack{a,b,c \in \mathbb{F}\\a \neq 0 \\ ac \neq b}} \left(\frac{ax+b}{x+c}\right)^k.$$ Applying the same reasoning as in equation \eqref{degofg} of section \ref{subsec}, the right hand side becomes \begin{eqnarray*}\left (\sum_{a\in \mathbb{F}} a^k \right ) \left ( \sum_{b \in \mathbb{F}} (x + b)^k + \sum_{c \in \mathbb{F}} \left(\frac{1}{x+c}\right)^k + \sum_{\substack{b,c \in \mathbb{F}\\b \neq c}} \left(\frac{x+b}{x+c}\right)^k\right ) \end{eqnarray*} and by our assumption we may apply Fact \ref{sum} to obtain \begin{eqnarray*} f_k = - \sum_{b \in \mathbb{F}} (x + b)^k - \sum_{c \in \mathbb{F}} \left(\frac{1}{x+c}\right)^k - \sum_{\substack{b,c \in \mathbb{F}\\b \neq c}} \left(\frac{x+b}{x+c}\right)^k. \end{eqnarray*} which is precisely the expression obtained by expanding the right hand side of \eqref{plus}. 
\end{proof} \begin{lemma}\label{gh} Let us define \begin{eqnarray*} g &=& (x^q-x)^{q(q-1)}f_m \notag\\ h &=& (x^q-x)^{q(q-1)} \end{eqnarray*} then \begin{enumerate} \item \label{poly} $g \in \mathbb{F}[x]$ \item \label{deg}$\mbox{deg }g = |G|$ \item \label{relprime} $g$ and $h$ are relatively prime \item \label{trivial}$f_m = g/h$. \end{enumerate} \end{lemma} \begin{proof} Statement \eqref{trivial} is trivial, and \eqref{relprime} is a consequence of \eqref{poly} and \eqref{deg}, because if $g$ and $h$ are polynomials which are not relatively prime then (by Fact \ref{degfact}) $$[\mathbb{F}(x):\mathbb{F}(g/h)] < |G|$$ contradicting Theorem \ref{thm1}. To prove \eqref{poly} and \eqref{deg}, let us consider the expression \begin{eqnarray*} 1+\sum_{b \in \mathbb{F}} (x-b)^m \end{eqnarray*} that is, the left factor in \eqref{tech1} with $k = m$. Applying Lemma \ref{AB} to expand the summands, we obtain \begin{eqnarray*}1 + \sum_{b \in \mathbb{F}} \sum_{i=0}^m x^{m-i}b^i = 1 + \sum_{i=0}^m x^{m-i} \sum_{b \in \mathbb{F}} b^i. \end{eqnarray*} By Fact \ref{sum}, the terms on the right hand side vanish except those in which $i$ is a multiple of $q-1$. Of those terms, all become $-x^{m-i}$ except for $i=0$, which also vanishes since $\sum_{b \in \mathbb{F}} b^0 = \sum_{b \in \mathbb{F}} 1 = 0$; moreover, the $i=m$ term, $-x^0 = -1$, cancels against the leading $1$. We thus obtain \[-(x^{q(q-1)} + x^{(q-1)(q-1)} + \ldots + x^{q-1})\] which is equal to \begin{eqnarray*} -(x^q-x)^{q-1} \end{eqnarray*} as may be seen by applying Lemma \ref{AB} to the latter expression.
Substituting this into \eqref{tech1} (with $k = m$), we obtain \begin{eqnarray}\label{form1}f_m = 1+(x^q-x)^{q-1}\left ( 1 + \sum_{c \in \mathbb{F}} \frac{1}{(x - c)^m}\right ) = 1 + (x^q-x)^{q-1} + \sum_{c \in \mathbb{F}} \frac{(x^q-x)^{q-1}}{(x-c)^m} \end{eqnarray} and since the denominators in the sum on the right are all factors of $$\prod_{c \in \mathbb{F}} (x-c)^m = (x^q-x)^m = (x^q - x)^{q(q-1)}(x^q - x)^{q-1}$$ it follows that $g = (x^q-x)^{q(q-1)}f_m \in \mathbb{F}[x]$ and $\mbox{deg }g = (q+1)q(q-1) = |G|$, proving \eqref{poly} and \eqref{deg}. \end{proof} \begin{lemma}\label{binom} \begin{eqnarray}\label{d} \sum_{i = j}^{m} \binom{i(q-1)}{j(q-1)} = \begin{cases} 0 &\mbox{for } j = 0,1,\ldots, q \\ 1 &\mbox{for } j = q\!+\!1 , q\!+\!2,\ldots, m\end{cases} \end{eqnarray} where $\binom{i(q-1)}{j(q-1)}$ denotes the binomial coefficient. \end{lemma} \begin{proof} We will make use of the following identity\footnote{This identity, and its derivation, are due to MathManiac on the math.stackexchange forum; we have generalized from $\alpha = 1$ to all $\alpha \in \mathbb{F}$ (see bibliography).}, which holds for any $\alpha \in \mathbb{F}$ \begin{eqnarray}\label{maniac} \sum_{i=0}^q (x+\alpha)^{i(q-1)} = \sum_{i=0}^q x^{i(q-1)} \end{eqnarray} and may be derived as follows $$\sum_{i=0}^q (x+\alpha)^{i(q-1)} = \frac{(x+\alpha)^{q^2-1} - 1}{(x+\alpha)^{q-1}-1} = \frac{\frac{x^{q^2}+\alpha}{x+\alpha} - 1}{\frac{x^q+\alpha}{x+\alpha} - 1} = \frac{x^{q^2}-x}{x^q-x} = \frac{x^{q^2-1}-1}{x^{q-1}-1} = \sum_{i=0}^q x^{i(q-1)} $$ where the second equation follows from Fact \ref{frob}.\\ The left hand side of \eqref{d} is equal to the coefficient of $x^{j(q-1)}$ in \begin{eqnarray*}(x+1)^{m(q-1)} + (x+1)^{(m-1)(q-1)} + \ldots + (x+1)^{q-1} + 1 \end{eqnarray*} which may be written as \begin{eqnarray*} \left (\sum_{i=0}^q (x+1)^{i(q-1)} \right ) \left ( \sum_{i=0}^{q-1} (x+1)^{iq(q-1)} \right ) - \sum_{i=1}^q (x+1)^{iq(q-1)} \end{eqnarray*} (where the rightmost sum corrects
for a $(x+1)^{q^2(q-1)}$ term as well as ``double counted" terms). Using \eqref{maniac} on the first and third sums (as well as Fact \ref{frob} to deal with the $q$ in the powers of the third sum), this becomes \begin{eqnarray}\label{cake} \left (\sum_{i=0}^q x^{i(q-1)} \right ) \left ( \sum_{i=0}^{q-1} (x+1)^{iq(q-1)} \right ) - \sum_{i=1}^q x^{iq(q-1)}. \end{eqnarray} Applying Fact \ref{frob} and \eqref{maniac} to the second sum yields \begin{eqnarray*}&&\sum_{i=0}^{q-1} (x+1)^{iq(q-1)} = \left ( \sum_{i=0}^{q-1} (x+1)^{i(q-1)} \right )^q = \left ( \sum_{i=0}^{q} x^{i(q-1)} - (x+1)^{q(q-1)} \right )^q \\ &=& \sum_{i=0}^{q} x^{iq(q-1)} - (x+1)^{q^2(q-1)} = \sum_{i=0}^{q} x^{iq(q-1)} - (x^{q^2}+1)^{q-1} \end{eqnarray*} whose rightmost term we may expand using Lemma \ref{AB} to obtain \begin{eqnarray*} &&\sum_{i=0}^{q} x^{iq(q-1)} - (x^{(q-1)q^2} - x^{(q-2)q^2} + \ldots + x^{2q^2} - x^{q^2} + 1) \\ &=& \sum_{i=1}^{q-1} x^{iq(q-1)} - (x^{(q-2)q^2} + \ldots + x^{2q^2} - x^{q^2}). \end{eqnarray*} Since none of the terms in parentheses have a power divisible by $q-1$, removing them from \eqref{cake} will not affect the coefficient of $x^{j(q-1)}$. Thus, the coefficient of $x^{j(q-1)}$ in \eqref{cake} is equal to the same coefficient in \begin{eqnarray*} \left (\sum_{i=0}^q x^{i(q-1)}\right ) \left ( \sum_{i=1}^{q-1} x^{iq(q-1)} \right ) - \sum_{i=1}^q x^{iq(q-1)} = x^{m(q-1)} + x^{(m-1)(q-1)}+ \ldots + x^{(q+1)(q-1)} \end{eqnarray*} from which the result follows. \end{proof} We are now in a position to prove Theorem \ref{formula}. \begin{proof2} Let $g,h$ be as in Lemma \ref{gh}. A straightforward application of Fact \ref{frob} and Lemma \ref{AB} shows that $h = \sum_{i = 1}^q x^{iq(q-1)} $, so it remains only to show that $$g = \sum_{i = 0}^{q(q+1)} \theta_i x^{i(q-1)}.$$ From the definition of $g$ and \eqref{form1} (in Lemma \ref{gh}) we have \begin{eqnarray}\label{g} g = (x^q-x)^{q(q-1)} + (x^q-x)^m + \sum_{c \in \mathbb{F}} \left (\frac{x^q-x}{x-c}\right )^m. 
\end{eqnarray} Observing that, for $c \neq 0$, $$\frac{x^q - x}{x-c} = x^{q-1} + c x^{q-2} + \ldots + c^{q-2}x = (x^{q-1} + c x^{q-2} + \ldots + c^{q-2}x + 1) - 1$$ an application of Lemma \ref{AB} to the right hand side (using $c^{q-1} = 1$) yields $\frac{x^q - x}{x-c} = (x-c)^{q-1} - 1$; for $c = 0$ this identity is immediate, since $(x^q-x)/x = x^{q-1}-1$. Thus \begin{eqnarray*} \sum_{c \in \mathbb{F}} \left (\frac{x^q-x}{x-c}\right )^m = \sum_{c \in \mathbb{F}} \left ((x-c)^{q-1} - 1 \right )^m = \sum_{c \in \mathbb{F}} \sum_{i = 0}^m (x-c)^{i(q-1)} = \sum_{c \in \mathbb{F}} \sum_{i = 0}^m (x + c)^{i(q-1)} \end{eqnarray*} where the second equation follows from another application of Lemma \ref{AB}, and the third holds because $c$ runs through all elements of $\mathbb{F}$. The coefficient of $x^j$ on the right hand side is equal to \begin{eqnarray*} \sum_{c \in \mathbb{F}} \sum_{\frac{j}{q-1} \leq i \leq m} \binom{i(q-1)}{j} c^{i(q-1) - j} = \sum_{\frac{j}{q-1} \leq i \leq m} \binom{i(q-1)}{j} \sum_{c \in \mathbb{F}} c^{i(q-1) - j} \end{eqnarray*} and by Fact \ref{sum} the terms on the right hand side are zero whenever $(q-1) \nmid j$ or $i(q-1) = j$ (in the latter case $\sum_{c \in \mathbb{F}} c^{0} = q = 0$), and $-\binom{i(q-1)}{j}$ otherwise. Thus (with an appropriate reindexing of $j$) the coefficient of $x^{j(q-1)}$ is \begin{eqnarray*}\label{coeff} -\sum_{i = j+1}^m \binom{i(q-1)}{j(q-1)} = 1 - \sum_{i = j}^m \binom{i(q-1)}{j(q-1)} \end{eqnarray*} with all other coefficients zero. Applying Lemma \ref{binom} to this expression thus produces all of the nonzero coefficients of $\sum_{c \in \mathbb{F}} \left (\frac{x^q-x}{x-c}\right )^m$, so that \[\sum_{c \in \mathbb{F}} \left (\frac{x^q-x}{x-c}\right )^m = 1 + x^{q-1} + x^{2(q-1)} + \ldots + x^{q(q-1)}.\] Substituting this into \eqref{g}, the desired result follows from a straightforward expansion. \end{proof2} \begin{example} \em Let $\mathbb{F} = \mathbb{F}_3$.
Then $|G| = 24$, $m = 8$ and by Theorem 2 we have that $E$ is generated over $\mathbb{F}$ by the element $$f_8 = \frac{x^{24}+x^{22}+x^{20}+2x^{18}+x^{16}+x^{14}+2x^{12}+x^{10}+x^8+2x^6+x^4+x^2+1}{x^{18}+x^{12}+x^6}.$$ \end{example} \begin{example} \em Let $\mathbb{F} = \mathbb{F}_4$. Then $|G| = 60$, $m = 15$ and $E$ is generated over $\mathbb{F}$ by $$\frac{x^{60}+x^{57}+x^{54}+x^{51}+x^{45}+\ldots +x^{15}+x^9+x^6+x^3+1}{x^{48}+x^{36}+x^{18}+x^{12}}$$ where the terms in the numerator with coefficient 2 (i.e. those appearing in the denominator) vanish since $\mbox{char}(\mathbb{F}_4) = 2$. \end{example}
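Computations of this kind are easy to check mechanically with a computer algebra system. The sketch below (an illustration, not part of the paper) verifies the first example above, the $q=2$ computation of $f_3$, by summing the six images of $x^3$ under $G$ over a common denominator and reducing over $\mathbb{F}_2$ with sympy:

```python
from sympy import symbols, Poly

x = symbols('x')

def P(e):
    # polynomial over GF(2)
    return Poly(e, x, modulus=2)

# the six terms of f_3 over F_2, as (numerator, denominator) pairs:
# x^3, (x+1)^3, 1/x^3, 1/(x+1)^3, ((x+1)/x)^3, (x/(x+1))^3
terms = [(x**3, 1), ((x + 1)**3, 1),
         (1, x**3), (1, (x + 1)**3),
         ((x + 1)**3, x**3), (x**3, (x + 1)**3)]

den = P(x**3 * (x + 1)**3)            # common denominator h_3 = (x^2 - x)^3
num = P(0)
for n, d in terms:
    num = num + P(n) * den.quo(P(d))  # add n * (den / d)

g = num.gcd(den)                      # cancel common factors
num, den = num.quo(g), den.quo(g)

print(num.as_expr())   # x**6 + x**5 + x**3 + x + 1
print(den.as_expr())   # x**4 + x**2
```

This reproduces $f_3 = (x^6+x^5+x^3+x+1)/(x^4+x^2)$; replacing the list of terms by the images of $x^8$ under the $q=3$ group reproduces the $\mathbb{F}_3$ example in the same way.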
\section{Introduction}\label{sec:intro} Last year the CERN Large Hadron Collider (LHC) started its second run of operation colliding protons at 13 TeV. While convincing hints of new physics have so far eluded the experiments, one of the main goals of Run II remains precision physics near the electro-weak scale. These analyses are performed with an increasing amount of data, making accurate theoretical predictions for differential distributions more relevant than ever. One of the most extensively studied distributions at hadron colliders is the transverse momentum ($Q_T$) spectrum of electro-weak bosons produced via the Drell-Yan (DY) mechanism~\cite{Affolder:1999jh,Abbott:1999yd,Abazov:2007ac,Abazov:2008ez,Abazov:2010kn,Abazov:2010mk,Aad:2011gj,Chatrchyan:2011wt,Aaij:2012mda,Aad:2014xaa,Aaij:2015vua,Khachatryan:2015oaa,Aaij:2015gna,Aad:2015auj,Khachatryan:2016nbe}. Studies of $Q_T$ spectra and related angular correlations of DY lepton pairs provide a useful testing ground for an even more interesting Higgs and new physics program. Remarkably, the accuracy of LHC measurements in the context of electro-weak boson distributions has now reached the percent level~\cite{Aaij:2015gna,Aad:2015auj,Khachatryan:2016nbe}. Consequently, the theory community has recently invested substantial effort in reducing the uncertainties affecting theoretical predictions. High-precision fixed-order calculations, which have recently been performed to next-to-next-to-leading order (NNLO) accuracy~\cite{Boughezal:2015aha,Boughezal:2015dra,Boughezal:2015dva,Boughezal:2013uia,Chen:2014gva,Ridder:2015dxa}, can be employed to describe the moderate-to-large region of the $Q_T$ spectrum. However, the small transverse momentum region is dominated by the emission of soft and collinear partons, and it is characterized by the presence of large logarithms of $Q_T/Q$, where $Q$ is the invariant mass of the final state, which need to be resummed.
Thus, reliable predictions across a vast range of $Q_T$ values can be obtained by matching fixed-order and resummed predictions. While providing an all-order prediction for the shape of the transverse momentum spectrum, $Q_T$ resummation carries very little information about its normalization, beyond the fixed order it is matched to. On the other hand, the determination of inclusive cross sections can be improved beyond fixed order by including threshold resummation. Partonic coefficient functions contain plus distributions which exhibit logarithmic enhancement in the variable $z=1-Q^2/\hat{s}$, where $\hat{s}=x_1 x_2 s$ is the partonic center-of-mass energy squared. Even though the collision energy of the protons is much larger than the electro-weak scale, these contributions can still be large because parton distribution functions (PDFs) at large $Q^2$ preferentially sample the region of low momentum fractions $x_1$ and $x_2$. Transverse momentum and threshold logarithms originate from the emission of soft gluons. Therefore, it is natural to look for a framework that allows for a consistent resummation of both. The general formalism to perform this joint resummation was derived some time ago~\cite{Li:1998is,Laenen:2000ij} and explicitly worked out to next-to-leading logarithmic (NLL) accuracy in both variables. It was then applied to a number of phenomenological studies including prompt photon~\cite{Laenen:2000ij}, Drell-Yan~\cite{Kulesza:2002rh}, Higgs~\cite{Kulesza:2003wn}, top pair~\cite{Banfi:2004xa} and electro-weak supersymmetric particle~\cite{Bozzi:2007tea,Fuks:2013vua} production. Moreover, the universality properties of transverse momentum and threshold resummation have been extensively discussed in Ref.~\cite{Catani:2013tia}. However, despite the fact that both threshold and $Q_T$ resummation are separately known to at least NNLL, joint resummation has not yet been extended to higher logarithmic accuracy.
We also note that recent progress has been made in describing joint resummation in the context of Soft-Collinear Effective Theory (SCET)~\cite{Li:2016axz,Lustermans:2016nvk}. In addition, threshold resummation has been included in a parton shower, which also allows a description of the small transverse momentum region~\cite{Nagy:2016pwq}. In this paper we concentrate on the case of vector boson production via the DY mechanism and we extend the study of Ref.~\cite{Kulesza:2002rh} to NNLL accuracy. This paper is organized as follows. We begin in Section~\ref{sec:recap} with a brief overview of threshold and $Q_T$ resummations. We then describe joint resummation in Section~\ref{sec:combination}, first reviewing the NLL case and then extending it to NNLL accuracy. Next, in Section~\ref{sec:numerics} we present our numerical results together with a study of the reliability of the approximations employed in our implementation of joint resummation, before concluding in Section~\ref{sec:conclusion}. More technical details are collected in the Appendices. \section{A recap of transverse momentum and threshold resummations}\label{sec:recap} In this section we provide a brief overview of threshold and transverse-momentum resummations. In addition, the all-order results will be written in a way that allows for a comparison between the two, which eases their generalization to joint resummation. \subsection{Threshold resummation}\label{sec:threshold} Threshold resummation was originally introduced at the end of the 1980s~\cite{Sterman:1986aj,Catani:1989ne}. It was subsequently extended to NNLL accuracy~\cite{Vogt:2000ci,Catani:2003zt}, and even to N$^3$LL accuracy, e.g.~\cite{Moch:2005ba, Moch:2005ky, Laenen:2005uz, Bonvini:2014joa,Catani:2014uta,Bonvini:2014tea,Schmidt:2015cea,Bonvini:2016frm} for electro-weak final states. Moreover, the resummation of large threshold logarithms has also been formulated using SCET, see e.g.~\cite{Becher:2007ty}.
In this study we concentrate on the production of a (neutral) vector boson $F$ and we are interested in resumming logarithms of $1-Q^2/\hat{s}$, where $\sqrt{\hat{s}}$ is the partonic center-of-mass energy and $Q$ is the vector boson invariant mass. Threshold resummation is usually performed in Mellin space, where the threshold limit corresponds to $N \to \infty$ and large logarithms of $1-Q^2/\hat{s}$ are mapped into logarithms of $N$. The cross section can be written as \begin{align} \sigma_F(s,Q^2)&=\sum_{a}\sigma^{(0)}_{a\bar{a}\to F}\int_{C_{T}}\frac{dN}{2\pi i} \left(\frac{Q^2}{s}\right)^{-N+1}\tilde{f}_{a/h_1}(N,\mu_{\sss\rm F}^2)\;\tilde{f}_{\bar{a}/h_2}(N,\mu_{\sss\rm F}^2)\nonumber\\ &\times G_{a\bar{a}}(N,\alpha_s(\mu_{\sss\rm R}),Q^2/\mu_{\sss\rm R}^2,Q^2/\mu_{\sss\rm F}^2), \end{align} where $\sigma_{a\bar{a}\to F}^{(0)}$ is the lowest order cross section for the partonic process $a\bar{a}\to F$ and the parton densities are indicated by $f_{a/h}(x,\mu^2)$. We have also introduced the renormalization scale $\mu_{\sss\rm R}$ and the factorization scale $\mu_{\sss\rm F}$, while $C_{T}$ indicates the contour for the inverse Mellin transform. At leading power, threshold resummation does not receive any contribution from initial-state off-diagonal flavor components, therefore the only partonic subprocess that contributes is $a\bar{a}$. 
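In practice, the inverse Mellin transform along $C_{T}$ is carried out numerically, integrating along a contour to the right of all singularities of the moments. A minimal standalone illustration (the moment function below is a toy choice for this sketch, not the resummed $G_{a\bar{a}}$): for $f(x) = (1-x)^2$ the Mellin moments are $\tilde f(N) = 2/[N(N+1)(N+2)]$, and the inversion recovers $f$:

```python
import numpy as np
from scipy.integrate import quad

def moments(N):
    # Mellin moments of the toy function f(x) = (1-x)^2: B(N,3) = 2/(N(N+1)(N+2))
    return 2.0 / (N * (N + 1) * (N + 2))

def inverse_mellin(x, c=1.0, tmax=200.0):
    """f(x) = (1/2 pi i) int_{c-i inf}^{c+i inf} x^{-N} f~(N) dN,
    evaluated on the vertical contour N = c + i t, c to the right of all poles."""
    def integrand(t):
        N = c + 1j * t
        return (x ** (-N) * moments(N)).real
    # the real part of the integrand is even in t: integrate t >= 0 and double
    val, _ = quad(integrand, 0.0, tmax, limit=400)
    return val / np.pi

print(inverse_mellin(0.3), (1 - 0.3) ** 2)   # both approximately 0.49
```

In realistic applications the contour is usually tilted to improve convergence of the oscillatory integrand; the straight vertical line suffices for this toy example.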
The function $G_{a\bar{a}}$ is given by \begin{equation} G_{a\bar{a}}(N,\alpha_s(\mu_{\sss\rm R}),Q^2/\mu_{\sss\rm R}^2,Q^2/\mu_{\sss\rm F}^2)=\mathcal{C}_{a\bar{a}}(\alpha_s(\mu_{\sss\rm R}),Q^2/\mu_{\sss\rm R}^2,Q^2/\mu_{\sss\rm F}^2)\;\exp\left[\mathcal{G}_\text{thr}(N,\alpha_s(\mu_{\sss\rm R}),Q^2/\mu_{\sss\rm R}^2,Q^2/\mu_{\sss\rm F}^2)\right], \end{equation} where we have introduced the Sudakov exponent~\cite{Catani:2003zt} \begin{align}\label{eq:sudakov-thr} \mathcal{G}_\text{thr}(N,\alpha_s(\mu_{\sss\rm R}),Q^2/\mu_{\sss\rm R}^2,Q^2/\mu_{\sss\rm F}^2)=&-\int_{N_0/N}^{1}\frac{dy}{y}\left[2\int^{y^2 Q^2}_{\mu_F^2}\frac{dq^2}{q^2}A_{a}(\alpha_s(q))+\tilde{D}_{a}(\alpha_s(yQ))\right]\nonumber \\ =&\int^{Q^2}_{Q^2/\bar{N}^2}\frac{dq^2}{q^2}\left[2A_{a}(\alpha_s(q))\log\left(\frac{\bar{N}q}{Q}\right)-\frac{1}{2}\tilde{D}_{a}(\alpha_s(q))\right] \nonumber \\ & -2\log\bar{N}\int_{\mu_{\sss\rm F}^2}^{Q^2}\frac{dq^2}{q^2}A_{a}(\alpha_s(q)), \end{align} with $\bar N=N/N_0=Ne^{\gamma_{\scriptscriptstyle E}}$. The functions $A$, $\tilde{D}$ and $\mathcal{C}$ admit a perturbative expansion in the strong coupling \begin{align} A_a(\alpha_s) = \sum_{n=1}^{\infty} \left(\frac{\alpha_s}{\pi}\right)^n A_a^{(n)}\,,\qquad& \tilde{D}_a(\alpha_s) = \sum_{n=2}^{\infty} \left(\frac{\alpha_s}{\pi}\right)^n \tilde{D}_a^{(n)}, \end{align} \begin{align}\label{CfuncTH} \mathcal{C}_{a\bar{a}}(\alpha_s) = 1+\sum_{n=1}^{\infty} \left(\frac{\alpha_s}{\pi}\right)^n \mathcal{C}_{a\bar{a}}^{(n)}\,. \end{align} The function $A_a$ is the cusp anomalous dimension, $\tilde{D}_a$ accounts for soft emissions at large angle, while $\mathcal{C}_{a\bar{a}}$ takes into account the virtual corrections. Explicit expressions are collected in Appendix~\ref{sec:coef}. The perturbative order at which the above coefficients are included determines the logarithmic accuracy of the result. Throughout this paper we adopt a logarithmic counting in the exponent $\mathcal{G}$. 
Therefore, N$^k$LL accuracy is achieved if $A_a$ is included up to (and including) $\mathcal{O} \left(\alpha_s^{k+1} \right)$, $\tilde{D}_a$ up to $\mathcal{O} \left(\alpha_s^{k} \right)$ and $\mathcal{C}_a$ up to $\mathcal{O} \left(\alpha_s^{k-1} \right)$. Moreover, the accuracy can be promoted to N$^k$LL$^\prime$ if the $\mathcal{O} \left(\alpha_s^{k} \right)$ contribution to $\mathcal{C}_a$ is also included. We keep the same convention for $Q_T$ and joint resummation. While in this paper we concentrate on NNLL accuracy, these coefficients are actually known to high enough orders to achieve N$^3$LL$^\prime$, with the exception of the four-loop contribution to the cusp anomalous dimension. They can be found in \cite{Catani:1989ne,Catani:1990rr,Catani:1998tm,Vogt:2000ci,Catani:2001ic,Catani:2003zt,Moch:2004pa,Vogt:2004mw,Moch:2005ba}. Moreover, in order to achieve the desired logarithmic accuracy, the integrals over the QCD running coupling $\alpha_s(q)$ must be performed with the $\beta$ function at the appropriate perturbative order. \subsection{$Q_T$ resummation} \label{sec:qt} Since the original paper on $Q_T$ resummation~\cite{Collins:1984kg}, a lot of effort has gone into further improving the accuracy of theoretical predictions in order to perform meaningful comparisons to experimental results. Resummed results at NNLL$^\prime$ matched to NLO have been available for quite some time, see e.g.~\cite{Bozzi:2003jy,Bozzi:2005wk,Bozzi:2007pn,Catani:2010pd,Gehrmann:2014yya,Monni:2016ktx}. The resummation of small $Q_T$ logarithms has also been formulated in the context of SCET~\cite{Gao:2005iu,Idilbi:2005er,Mantry:2009qz,Mantry:2010mk,Becher:2010tm,GarciaEchevarria:2011rb,Becher:2012yn,Chiu:2012ir,Neill:2015roa,Ebert:2016gcn}.
In addition, several computer codes have been developed for $Q_T$ resummation at this accuracy in the case of neutral boson production, e.g.~\cite{Ladinsky:1993zn,Landry:2002ix,Bozzi:2008bb,Bozzi:2010xn,Banfi:2012du,deFlorian:2011xf,deFlorian:2012mx}. Recently, the calculation of the NNLO corrections to the $Q_T$ distribution of neutral boson production processes has been completed~\cite{Boughezal:2015aha,Boughezal:2015dra,Boughezal:2015dva,Boughezal:2013uia,Chen:2014gva,Ridder:2015dxa}, and N$^3$LL precision is within reach~\cite{Li:2016ctv}. $Q_T$ resummation is usually performed in Fourier space with $b$ being the variable conjugate to the transverse momentum $Q_T$. In this conjugate space the small $Q_T$ limit corresponds to the large $b$ limit. The resummed transverse momentum distribution of an electroweak final state $F$ can be written as \begin{equation} \frac{d\sigma_{F}}{dQ_T^2} = \frac{Q^2}{s}\int_{0}^{\infty}db\frac{b}{2}J_0(bQ_T)W^F(b,Q,s). \end{equation} The goal of this study is to understand the similarities in the structure of $Q_T$ and threshold resummation in order to combine the two methods. It is easier to understand the overlap and differences between the two if we transform the expression for $Q_T$ resummation into Mellin space with respect to $z=Q^2/s$: \begin{align} \tilde{W}^F(N,b,Q) &= \int_{0}^{1}dz \; z^{N-1}\;W^F(b,Q,s)\nonumber\\ &=\; \sum_{a}\sigma_{a\bar{a}\to F}^{(0)}(\alpha_s(Q))H^F_a(\alpha_s(Q))S_a(Q,b)\nonumber\\ &\times \; \sum_{c,d}\tilde{C}_{ac}(N,\alpha_s(Q/{\bar b}))\tilde{C}_{\bar{a}d}(N,\alpha_s(Q/{\bar b}))\tilde{f}_{c/h_1}(N,Q^2/{\bar{b}}^2)\tilde{f}_{d/h_2}(N,Q^2/{\bar{b}}^2) \end{align} where $\bar b= Q b/b_0$, with $b_0=2 e^{-\gamma_{\scriptscriptstyle E}}$.
The function $S_a(Q,b)$ is the Sudakov form factor for particle $a$ and is given by:\footnote{Note that a resummation scale $\mu_Q$ is often introduced in the context of $Q_T$ resummation as a means to estimate the size of higher-order logarithmic corrections. On the other hand, this kind of variation is not usually considered for threshold resummation. Therefore, here we fix $\mu_Q=Q$, while we still allow for renormalization ($\mu_{\sss\rm R}$) and factorization ($\mu_{\sss\rm F}$) scale variations.} \begin{equation} S_a(Q,b)=\exp\left\{-\int_{Q^2/{\bar b}^2}^{Q^2}\frac{dq^2}{q^2}\left[A_a(\alpha_s(q))\log\frac{Q^2}{q^2}+B_a(\alpha_s(q))\right]\right\} \end{equation} The functions $A$, $B$, $C$ and $H^F$ are given as perturbative series in $\alpha_s$ and can be found in \cite{Kodaira:1981nh,Kodaira:1982az,Davies:1984hs,Vogt:2000ci,Berger:2002sv,Moch:2004pa,Vogt:2004mw,Catani:2009sm,Becher:2010tm} (see also Appendix~\ref{sec:coef}): \begin{align}\label{QTcoeff} A_a(\alpha_s) = \sum_{n=1}^{\infty} \left(\frac{\alpha_s}{\pi}\right)^n A_a^{(n)}\,,\qquad& B_a(\alpha_s) = \sum_{n=1}^{\infty} \left(\frac{\alpha_s}{\pi}\right)^n B_a^{(n)}\, , \nonumber \\ H^F_a(\alpha_s) = 1+\sum_{n=1}^{\infty} \left(\frac{\alpha_s}{\pi}\right)^n H_a^{F,\;(n)}\,,\qquad& \tilde{C}_{ac}(N,\alpha_s) = \delta_{ac}+\sum_{n=1}^{\infty} \left(\frac{\alpha_s}{\pi}\right)^n \tilde{C}_{ac}^{(n)}(N)\, \end{align} The coefficients $A_a^{(1)}$ and $A_a^{(2)}$ are the same as for threshold resummation and they correspond to the cusp anomalous dimension. However, starting from $A_a^{(3)}$ this contribution becomes observable-dependent, see e.g.~\cite{Becher:2010tm,Monni:2011gb}, and, in particular, it is different for $Q_T$ and threshold resummations. Henceforth, we will denote these coefficients with $A_{a,\;Q_T}^{(3)}$ and $A_{a,\;\mathrm{thr}}^{(3)}$, respectively.
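For orientation, the form factor above can be evaluated numerically at the lowest order. The sketch below (an illustration, not the implementation used in this paper) assumes the standard first-order quark coefficients in the $(\alpha_s/\pi)$ normalization, $A_q^{(1)} = C_F$ and $B_q^{(1)} = -3C_F/2$, together with a one-loop running coupling:

```python
import numpy as np
from scipy.integrate import quad

CF = 4.0 / 3.0
beta0 = (11 * 3.0 - 2 * 5) / 12.0   # one-loop beta coefficient, nf = 5, (alpha_s/pi) normalization

def alpha_s(q2, alpha_ref=0.118, mu2_ref=91.1876**2):
    # one-loop running coupling with reference value at the Z mass
    return alpha_ref / (1.0 + alpha_ref / np.pi * beta0 * np.log(q2 / mu2_ref))

def sudakov(Q, bbar):
    """S_q(Q,b) = exp{ -int_{Q^2/bbar^2}^{Q^2} dq^2/q^2 [A log(Q^2/q^2) + B] }
    with A and B truncated at first order: A = (alpha_s/pi) C_F,
    B = -(alpha_s/pi) 3 C_F / 2 (quark case, assumed coefficients)."""
    def integrand(logq2):                # integrate in log(q^2): dq^2/q^2 = d log(q^2)
        q2 = np.exp(logq2)
        a = alpha_s(q2) / np.pi
        return a * (CF * np.log(Q**2 / q2) - 1.5 * CF)
    expo, _ = quad(integrand, np.log(Q**2 / bbar**2), np.log(Q**2))
    return np.exp(-expo)

print(sudakov(91.2, 10.0))   # Sudakov suppression factor, between 0 and 1
```

Larger $\bar b$ (smaller $Q_T$) gives stronger suppression, the familiar Sudakov damping of the spectrum at small transverse momentum.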
Furthermore, in order to facilitate the comparison to threshold resummation we run the PDFs to the factorization scale $\mu_F$, we evaluate the coefficient functions $C_{ij}$ at the hard scale of the process $Q$ and we combine the hard functions, $H^F$, $C$ and $\sigma^{(0)}$, into one perturbative hard factor. Because of the non-diagonal nature in flavor space of the Dokshitzer-Gribov-Lipatov-Altarelli-Parisi (DGLAP) splitting functions and of the closely related $C_{ab}$ functions, $Q_T$ resummation requires dealing with path-ordered exponentials of the anomalous dimension matrix. In order to simplify our discussion, we consider here only one flavor-diagonal contribution, which is the one that is enhanced at threshold and we drop all flavor indices. With this simplification, we obtain \begin{align} \label{WqT} \tilde{W}^F(N,b,Q) &=\; \mathcal{H}^{F}(N,Q,\alpha_s(\mu_{\sss\rm R}),Q^2/\mu_{\sss\rm R}^2,Q^2/\mu_{\sss\rm F}^2) \;\tilde{f}_{a/h_1}(N,\mu_{\sss\rm F}^2)\;\tilde{f}_{\bar a/h_2}(N,\mu_{\sss\rm F}^2)\nonumber\\ &\times \;\exp\left\{\mathcal{G}_{Q_T}(N,\alpha_s(\mu_{\sss\rm R}),b,Q^2/\mu_{\sss\rm R}^2)\right\}. \end{align} where the hard-function is now defined by \begin{align}\label{HfuncQT} &\mathcal{H}^{F}(N,Q,\alpha_s(\mu_{\sss\rm R}),Q^2/\mu_{\sss\rm R}^2,Q^2/\mu_{\sss\rm F}^2) = \nonumber \\ &\sigma_{a\bar{a}\to F}^{(0)}(\alpha_s(Q)) H^F(\alpha_s(Q))\left(\tilde{C}(N,\alpha_s(Q))\right)^2 \exp\left\{\int_{\mu_{\sss\rm F}^2}^{Q^2}\frac{dq^2}{q^2}2\gamma(N,\alpha_s(q))\right\}, \end{align} and $\gamma(N,\alpha_s)$ is the DGLAP anomalous dimension. 
Finally the Sudakov exponent is given by \begin{equation}\label{eq:sudakov-qt} \mathcal{G}_{Q_T}(N,\alpha_s(\mu_{\sss\rm R}),b,Q^2/\mu_{\sss\rm R}^2)=-\int_{Q^2/{\bar b}^2}^{Q^2}\frac{dq^2}{q^2}\left[A(\alpha_s(q))\log\frac{Q^2}{q^2}+\tilde{B}(N,\alpha_s(q))\right], \end{equation} with \begin{equation}\label{eq:mod-B} \tilde{B}(N,\alpha_s)=B(\alpha_s)+2\beta(\alpha_s)\frac{d\log \tilde{C}(N,\alpha_s)}{d\log\alpha_s}+2\gamma(N,\alpha_s), \end{equation} where the running coupling renormalization group equation is defined as \begin{align} \beta(\alpha_s)=-\alpha_s \sum_{k=0}^\infty\left( \frac{\alpha_s}{\pi}\right)^{k+1} \beta_k. \end{align} We will be restoring the full flavor dependence for our numerical studies. The treatment of the exponentiation of the full flavor dependence is explained in detail in Appendix A of~\cite{Bozzi:2005wk}. This method is implemented in the computer code \texttt{DYqT}~\cite{Bozzi:2008bb,Bozzi:2010xn}, which we employ in our numerical studies. \section{Combining transverse momentum and threshold resummations}\label{sec:combination} We write the transverse-momentum distribution in joint resummation as~\cite{Kulesza:2002rh} \begin{equation}\label{hadron-level-joint} \frac{d\sigma_{F}^{\mathrm{(res)}}}{dQ_T^2} = \int_{0}^{\infty}db\frac{b}{2}J_0(bQ_T)\int_{C_{T}}\frac{dN}{2\pi i} \left(\frac{Q^2}{s}\right)^{-N+1}\tilde{W}^F_\text{joint}(b,N,Q), \end{equation} where $\tilde{W}^F_\text{joint}(N,b,Q)$ has been defined in analogy with the function $\tilde{W}^F(N,b,Q)$ that appears in $Q_T$ resummation Eq.~(\ref{WqT}) \begin{align}\label{eq:WF-joint} \tilde{W}^F_\text{joint}(N,b,Q) &=\; \mathcal{H}^{F}_\text{joint}(N,Q,\alpha_s(\mu_{\sss\rm R}),Q^2/\mu_{\sss\rm R}^2,Q^2/\mu_{\sss\rm F}^2) \;\tilde{f}_{a/h_1}(N,\mu_{\sss\rm F}^2)\;\tilde{f}_{\bar a/h_2}(N,\mu_{\sss\rm F}^2)\nonumber\\ &\times \;\exp\left\{\mathcal{G}_\text{joint}(N,\alpha_s(\mu_{\sss\rm R}),b,Q^2/\mu_{\sss\rm R}^2)\right\}. 
\end{align} The aim of this section is to review in some detail the calculation of the Sudakov exponent $\mathcal{G}_\text{joint}$ and hard factor $\mathcal{H}^{F}_\text{joint}$ so that Eq.~(\ref{hadron-level-joint}) is valid at NLL accuracy. The extension to NNLL will instead be discussed in Section~\ref{sec:nnll}. In order to provide a better understanding of the origin of the $Q_T$ and threshold logarithms, and their overlap, we study the $\mathcal{O}(\alpha_s)$ correction to the DY process in the eikonal approximation. We consider the emission of a soft (real) gluon off a quark-antiquark dipole. Because we are seeking an all-order result, we consider a two-dimensional Fourier transform with respect to the soft gluon transverse momentum, as well as a Laplace transform with respect to the gluon energy\footnote{Laplace moments with respect to $2k_0\propto(1-z)$ are equivalent at leading power to the more familiar Mellin moments with respect to $z$, see e.g. Ref.~\cite{Laenen:2000ij}.}. We work in $d=4-2\epsilon$ dimensions and, employing the $\overline{\text{MS}}$ scheme, we obtain: \begin{align} \mathcal{G}_\text{joint}^\text{NLO}\left(b,N\right) &= 8\pi \alpha_s C_{F}\left(\frac{\mu^2e^{\gamma_{\scriptscriptstyle E}}}{4\pi}\right)^\epsilon\int\frac{d^{4-2\epsilon}k}{\left(2\pi\right)^{3-2\epsilon}}\theta\left(k_{0}\right)\delta\left(k^{2}\right) e^{-\frac{2k_{0}}{Q}N-i\mathbf{b}\cdot\mathbf{k_T}}\,\frac{p_1\cdot p_2}{(p_1\cdot k)(p_2\cdot k)} \nonumber\\ &= 8\pi \alpha_s C_{F}\left(\frac{\mu^2e^{\gamma_{\scriptscriptstyle E}}}{4\pi}\right)^\epsilon\int\frac{d^{4-2\epsilon}k}{\left(2\pi\right)^{3-2\epsilon}}\theta\left(k_{0}\right)\delta\left(k^{2}\right) e^{-\frac{k_{+}+k_{-}}{Q}N-i\mathbf{b}\cdot\mathbf{k_T}}\, \frac{2}{k_{T}^{2}}, \end{align} where in the second line we have made use of light-cone coordinates and set $|\mathbf{k_T}|^2=k_T^2$.
The integral over the light-cone components $k_-$ and $k_+$ can be easily performed, leading to \begin{equation}\label{eq:NLO-eik} \mathcal{G}_\text{joint}^\text{NLO}\left(b,N\right) = 8\pi \alpha_s C_{F}\left(\frac{\mu^2e^{\gamma_{\scriptscriptstyle E}}}{4\pi}\right)^\epsilon\int\frac{d^{2-2\epsilon}k_T}{\left(2\pi\right)^{3-2\epsilon}} e^{-i\mathbf{b}\cdot\mathbf{k_T}}\frac{2}{k_{T}^{2}}\left[K_0\left(\frac{2Nk_T}{Q}\right)+\mathcal{O} \left(\frac{k_T^2}{Q^2} \right)\right], \end{equation} where $K_0$ is the modified Bessel function of the second kind of order zero. In order to remove the infrared divergences from this real emission contribution, the virtual corrections and the contribution from the PDFs also need to be included. We obtain \begin{equation} \mathcal{G}_\text{joint}^\text{NLO}\left(b,N\right) = 8\pi \alpha_s C_{F}\left(\frac{\mu^2e^{\gamma_{\scriptscriptstyle E}}}{4\pi}\right)^\epsilon\int\frac{d^{2-2\epsilon}k_T}{\left(2\pi\right)^{3-2\epsilon}} \frac{2}{k_{T}^{2}}\left[e^{-i\mathbf{b}\cdot\mathbf{k_T}}K_0\left(\frac{2Nk_T}{Q}\right)-\log\left(\frac{Q}{k_T}\right)+\log\bar N\right], \end{equation} where the second term is the result of the virtual contribution and can be identified as the integral of $1/(2k_+)$ over $k_+$. The final term is the result of the PDF contribution with $\bar N = N e^{\gamma_{\scriptscriptstyle E}}$. Combining the logarithms and integrating over the azimuthal angle results in: \begin{equation}\label{eq:NLO-joint-epsi} \mathcal{G}_\text{joint}^\text{NLO}\left(b,N\right) = 2\frac{\alpha_s}{\pi}C_{F}\left(\mu^2e^{\gamma_{\scriptscriptstyle E}}\right)^\epsilon\int_0^{Q^2}\frac{dk_T^2}{k_T^{2+2\epsilon}} \left[\left(\frac{bk_T}{2}\right)^{\epsilon}J_{-\epsilon}\left(bk_T\right)K_0\left(\frac{2Nk_T}{Q}\right)+\frac{1}{\Gamma(1-\epsilon)}\log\left(\frac{\bar N k_T}{Q}\right)\right], \end{equation} where $J_a$ is the Bessel function of the first kind of order $a$.
The logarithm and the $K_0$ Bessel function cancel one another in the limit $k_T\to 0$, because $K_0\left(x\right)\sim -\log\left(xe^{\gamma_{\scriptscriptstyle E}}/2\right)$ and $J_0\sim1$. Due to this cancellation, the integral is infrared finite and we can evaluate it in the $\epsilon\to0$ limit: \begin{equation}\label{eq:NLO-joint} \mathcal{G}_\text{joint}^\text{NLO}\left(b,N\right) = 2\frac{\alpha_s}{\pi}C_{F}\int_0^{Q^2}\frac{dk_T^2}{k_T^{2}} \left[J_{0}\left(bk_T\right)K_0\left(\frac{2Nk_T}{Q}\right)+\log\left(\frac{\bar N k_T}{Q}\right)\right]. \end{equation} This calculation can be extended to all orders and it leads to the simultaneous resummation of logarithms of $N$ and $b$, as discussed in detail in Ref.~\cite{Laenen:2000ij}. The resummed exponent at NLL can be written in a rather compact form: \begin{align} \mathcal{G}_\text{joint}^\text{NLL}\left(b,N,Q,\mu_F\right) =& 2 \int_0^{Q^2}\frac{dq^2}{q^2}\left\{ A(\alpha_s(q)) \left[J_0\left(bq\right)K_0\left(\frac{2Nq}{Q}\right)+\log\left(\frac{\bar N q}{Q}\right)\right]\right\}\nonumber\\ & -2\log\bar N \int_{\mu_{\sss\rm F}^2}^{Q^2}\frac{dq^2}{q^2} A(\alpha_s(q)). \end{align} Note that the first term is essentially the running-coupling generalization of the one-loop computation in Eq.~(\ref{eq:NLO-joint}), while the second term takes into account the difference between the factorization scale $\mu_F$ and the hard scale of the process $Q$. The NLL result above can be further manipulated and rewritten in a way that is similar to both $Q_T$ and threshold resummation and is suitable for phenomenological applications~\cite{Kulesza:2002rh}. In particular, we use the fact that up to NNLL accuracy we can replace the Bessel function $J_0$ with a step function (see Appendix~\ref{sec:J0} for details): $J_{0}\left(bq\right)\to 1-\Theta\left(q-Q/\bar{b}\right)=\Theta\left(Q/\bar{b}-q\right)$.
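The cancellation just described is easy to see numerically: the bracket in the integrand of Eq.~(\ref{eq:NLO-joint}) tends to zero as $k_T \to 0$, so the $k_T$ integral is infrared finite. A standalone check (the values of $b$, $N$ and $Q$ are arbitrary illustrative choices):

```python
import numpy as np
from scipy.special import j0, k0

def bracket(kT, b=5.0, N=2.0, Q=10.0):
    """Integrand bracket of the NLO joint exponent:
    J0(b kT) K0(2 N kT / Q) + log(Nbar kT / Q), with Nbar = N exp(gamma_E)."""
    Nbar = N * np.exp(np.euler_gamma)
    return j0(b * kT) * k0(2.0 * N * kT / Q) + np.log(Nbar * kT / Q)

# K0(x) ~ -log(x e^{gamma_E}/2) and J0 ~ 1 at small argument,
# so the two terms cancel and the bracket vanishes as kT -> 0:
for kT in (1e-2, 1e-4, 1e-6):
    print(kT, bracket(kT))
```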
Thus we obtain \begin{align} \mathcal{G}_\text{joint}^\text{NLL}\left(b,N,Q,\mu_F\right) = & 2\int_{0}^{Q^{2}/\bar{b}^{2}}\frac{dq^{2}}{q^{2}}A\left(\alpha_{\mathrm{s}}\left(q\right)\right)K_{0}\left(\frac{2Nq}{Q}\right)\nonumber\\&+ 2\int_{0}^{Q^{2}}\frac{dq^{2}}{q^{2}} A\left(\alpha_{\mathrm{s}}\left(q\right)\right)\log\left(\frac{\bar{N}q}{Q}\right) -2\log\bar{N}\int_{\mu_{F}^{2}}^{Q^{2}}\frac{dq^{2}}{q^{2}}A\left(\alpha_{\mathrm{s}}\left(q\right)\right). \end{align} Following Refs.~\cite{Laenen:2000ij,Kulesza:2002rh}, we note that the desired logarithmic behavior is captured if the Bessel function $K_{0}\left(x\right)$ is expanded at small values of its argument, provided that the upper bound of the integration is changed from $Q^{2}/\bar{b}^{2}$ to $Q^{2}/\chi^{2}$: \begin{align}\label{eq:Eeik-joint} \mathcal{G}_\text{joint}^\text{NLL}\left(\chi,N,Q,\mu_F\right) = &- 2\int_{0}^{Q^{2}/\chi^2}\frac{dq^{2}}{q^{2}}A\left(\alpha_{\mathrm{s}}\left(q\right)\right)\log\left(\frac{\bar{N}q}{Q}\right)\nonumber\\&+ 2\int_{0}^{Q^{2}}\frac{dq^{2}}{q^{2}} A\left(\alpha_{\mathrm{s}}\left(q\right)\right)\log\left(\frac{\bar{N}q}{Q}\right) -2\log\bar{N}\int_{\mu_{F}^{2}}^{Q^{2}}\frac{dq^{2}}{q^{2}}A\left(\alpha_{\mathrm{s}}\left(q\right)\right) \nonumber\\ &= 2\int_{Q^2/\chi^2}^{Q^2}\frac{dq^2}{q^2}A(\alpha_s(q)) \log\left(\frac{\bar{N}q}{Q}\right) -2\log\bar N \int_{\mu_{\sss\rm F}^2}^{Q^2}\frac{dq^2}{q^2}A(\alpha_s(q)). \end{align} The function $\chi(\bar{N},\bar{b})$ is defined so that it behaves as $\bar{b}$ in the large $\bar{b}$ limit and as $\bar{N}$ in the large $\bar{N}$ limit. Furthermore, if we require $\chi(\bar{N},0)=\bar{N}$, then the integral over $Q_T$ results in the inclusive threshold-resummed cross section. An example of such a function is $\chi=\bar{b}+\bar{N}$. For $\bar b=0$, $\chi= \bar N$ and Eq.~(\ref{eq:Eeik-joint}) reduces to the threshold Sudakov exponential up to NLL accuracy.
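The infrared finiteness that underlies these manipulations, i.e. the cancellation between the logarithm and the $K_0$ Bessel function in the integrand of Eq.~(\ref{eq:NLO-joint}), can also be checked numerically. The sketch below uses illustrative values $Q=91.2$~GeV, $N=10$ and $b=0.1$~GeV$^{-1}$, chosen purely for demonstration:

```python
import numpy as np
from scipy.special import j0, k0

# Illustrative kinematics (demonstration only): Q in GeV, b in 1/GeV
Q, N, b = 91.2, 10.0, 0.1
Nbar = N * np.exp(np.euler_gamma)

def bracket(kT):
    # integrand of dk_T^2/k_T^2 in the NLO joint exponent, Eq. (NLO-joint)
    return j0(b * kT) * k0(2.0 * N * kT / Q) + np.log(Nbar * kT / Q)

# K0(x) ~ -log(x e^gamma_E / 2) and J0 ~ 1 as kT -> 0, so the two
# logarithms cancel and the bracket vanishes, rendering the integral finite
for kT in (1e-2, 1e-4, 1e-6):
    assert abs(bracket(kT)) < kT
```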
We can also re-arrange the contributions in a different way, so that the result resembles more closely the Sudakov exponent that appears in $Q_T$ resummation: \begin{align}\label{eq:Sudakov-joint} \mathcal{G}_\text{joint}^\text{NLL}\left(\chi,N,Q,\mu_F\right) =& -\int_{Q^2/\chi^2}^{Q^2}\frac{dq^2}{q^2}\left[A(\alpha_s(q)) \log\left(\frac{Q^2}{q^2}\right)+B(\alpha_s(q))\right]\nonumber\\ & +\int_{\mu_{\sss\rm F}^2}^{Q^2/\chi^2}\frac{dq^2}{q^2}\left[-2\log\bar N A(\alpha_s(q))-B(\alpha_s(q))\right]. \end{align} The first term can be recognized as the exponential for $Q_T$ resummation Eq.~(\ref{eq:sudakov-qt}), with the replacement $\bar b \to \chi$, and the second term is the large $\bar N$ limit of the DGLAP evolution of the PDFs from a scale $Q/\chi$ to $\mu_F$. However, because the identification of $B^{(i)}$ with the constant part of the DGLAP anomalous dimension (i.e. the $\delta$-function contribution to the splitting function) only holds for $B^{(1)}$, this way of rewriting the joint Sudakov exponent only holds up to NLL accuracy. The extension to NNLL accuracy will be discussed in the next section. It is worth pointing out a difference in the logarithmic counting between joint and transverse momentum resummation. DGLAP contributions affect the $Q_T$ spectrum with single logarithms of $Q_T$. However, the flavor-diagonal anomalous dimensions carry an additional $A(\alpha_s) \log \bar{N}$ contribution. Therefore, when computing joint resummation at N$^k$LL order, parton evolution, or at least its large-$N$ behavior, has to be included up to N$^k$LO order, while N$^{k-1}$LO suffices for $Q_T$. At NLL level the treatment of the hard factor is relatively straightforward because the one-loop coefficient functions $\tilde{C}$ do not contain logarithms of $N$, i.e. $D^{(1)}=0$. However, we have to make sure that the threshold-enhanced part of the $\mu_{\sss\rm F}$-dependent contribution is exponentiated. 
We have \begin{align} \mathcal{G}_\text{joint}^\text{NLL}(\alpha_s(\mu_{\sss\rm R}),b,N,Q^2/\mu_{\sss\rm R}^2)=&-\int_{Q^2/\chi^2}^{Q^2}\frac{dq^2}{q^2}\left[A(\alpha_s(q))\log\frac{Q^2}{q^2}+\tilde{B}(N,\alpha_s(q))\right] +2\int_{\mu_{\sss\rm F}^2}^{Q^2}\frac{dq^2}{q^2}\gamma(N,\alpha_s(q)), \end{align} which is equivalent to Eq.~(\ref{eq:Sudakov-joint}) in the large $N$ limit, and the hard factor is simply \begin{equation} \mathcal{H}^{F,\text{NLL}}_\text{joint}(N,Q,\alpha_s(\mu_{\sss\rm R}),Q^2/\mu_{\sss\rm R}^2,Q^2/\mu_{\sss\rm F}^2) =\;\sigma_{a\bar{a}\to F}^{(0)} \left[1+ \frac{\alpha_s(Q)}{\pi}\left(H^{F,(1)}+ 2 C^{(1)}(N)\right) \right]. \end{equation} We will see in the next section that in order to achieve NNLL accuracy, the way we treat $\mathcal{H}^{F}$ must be refined. \section{Joint resummation at NNLL} \label{sec:nnll} After recalling the main ingredients of joint resummation, we are ready to implement it to NNLL accuracy. We discuss first the resummed exponent, followed by an analysis of the hard factor. \subsection{Sudakov exponent at NNLL} A few issues must be addressed in order to ensure NNLL accuracy in both $N$ and $b$. Firstly, the full PDF evolution now needs to be taken into account at NLO accuracy, together with the large $N$ limit of the NNLO anomalous dimension. At the central scale the latter is computed as: \begin{equation} \Delta \mathcal{G}_\text{joint}^{\mathrm{DGLAP}}=-2 A^{(3)}_\text{thr}\log\bar N \int_{\mu_{\sss\rm F}^2}^{Q^2/\chi^2}\frac{dq^2}{q^2} \left(\frac{\alpha_s(q)}{\pi}\right)^3. \end{equation} A second term that starts to contribute at NNLL accuracy is the soft wide-angle contribution of threshold resummation, $\alpha_s^2\tilde{D}^{(2)}\log \bar N$. 
This contribution is not exponentiated in $Q_T$ resummation but is instead present in $C^{(2)}_{aa}$, Eq.~(\ref{QTcoeff}), or equivalently in $\mathcal{H}^{(2)}$, Eq.~(\ref{HfuncQT}), while it contributes to the Sudakov exponent of threshold resummation, Eq.~(\ref{eq:sudakov-thr}): \begin{equation} \Delta \mathcal{G}_\text{joint}^\mathrm{wide-angle}=-\frac{1}{2}\int^{Q^2}_{Q^2/\bar{N}^2}\frac{dq^2}{q^2}\tilde{D}(\alpha_s(q)). \end{equation} Thus, this term contributes to the resummed exponent in joint resummation and it has to be subtracted from $\mathcal{H}^{(2)}$ in order to prevent double counting: \begin{equation} \mathcal{H}^{(2)}\to\mathcal{H}^{(2)}+\tilde{D}^{(2)}\log\bar{N}. \end{equation} A similar procedure was followed for the joint resummation of heavy-quark production, where the soft wide-angle contribution enters at NLL~\cite{Banfi:2004xa}. The last contribution to the exponent that we need to take into account is the aforementioned difference between $A_{Q_T}^{(3)}$ and $A_{\mathrm{thr}}^{(3)}$: \begin{equation}\label{eq:col-anom} \Delta \mathcal{G}_\text{joint}^{\mathrm{cusp}}=-\int_{Q^2/\chi^2}^{Q^2/\bar{N}^2}\frac{dq^2}{q^2}\left(A^{(3)}_{Q_{T}}-A^{(3)}_{\mathrm{thr}}\right)\left(\frac{\alpha_s(q)}{\pi}\right)^{3}\log\frac{Q^2}{q^2}, \end{equation} where $A^{(3)}_{Q_{T}}-A^{(3)}_{\mathrm{thr}}=-\beta_0 \tilde{D}^{(2)}$. In the language of SCET this contribution is known as the collinear anomaly~\cite{Becher:2010tm}. It essentially arises because one evaluates both soft and collinear contributions at the same scale~\cite{Monni:2011gb}. Note that we have some freedom in the choice of the integration boundaries in Eq.~(\ref{eq:col-anom}). We demand that the lower limit approaches $Q^2/ \bar b^2$ in the large $\bar b$ limit, in order to reproduce the $Q_T$ case. Moreover, with the above choice this contribution vanishes in the inclusive case $\chi(\bar N, 0)=\bar N$. \subsection{Treatment of the hard factor} Thus far we have concentrated on discussing the Sudakov exponent.
However, the pre-factors for $Q_T$ and threshold resummation, $\mathcal{C}$ and $\mathcal{H}$, respectively in Eq.~(\ref{CfuncTH}) and Eq.~(\ref{HfuncQT}), actually differ already at the one-loop level (see Ref.~\cite{Catani:2013tia} for an all-order discussion). In order to better understand this difference, it is useful to go back to the one-loop calculation of Section~\ref{sec:combination}. In particular, performing the transverse momentum integral at fixed coupling in Eq.~(\ref{eq:NLO-joint-epsi}), keeping the upper limit of the integration for the virtual and for the collinear counterterm at $Q$, and taking $\epsilon \to 0$, we find (see also~\cite{Li:2016axz}) \begin{equation} \mathcal{G}_\text{joint}^\text{NLO}\left(b,N\right)=\frac{\alpha_s}{\pi} C_{F}\left[2\log^{2}\bar{N}+{\rm Li}_{2}\left(-\frac{\bar{b}^{2}}{\bar{N}^{2}}\right)+\zeta_{2}\right]. \end{equation} If this is approximated in the threshold limit, i.e. at large $N$, it approaches \begin{equation} \lim_{N\to\infty}\mathcal{G}_\text{joint}^\text{NLO}\left(b,N\right)=\frac{\alpha_s}{\pi}C_{F}\left[\zeta_{2}+2\log^{2}\bar{N}\right]. \end{equation} If instead it is approximated in the $Q_T$-resummation limit, i.e. at large $b$, we obtain \begin{equation} \lim_{b\to\infty}\mathcal{G}_\text{joint}^\text{NLO}\left(b,N\right)=\frac{\alpha_s}{\pi}C_{F}\left[-2\log^{2}\bar{b}+4\log\bar{b}\log\bar{N}\right]. \end{equation} This reproduces the logarithmic structure of both threshold and $Q_T$ resummation; however, the constant terms differ by $\zeta_2=\pi^2/6$, which is indeed the one-loop difference between $\mathcal{H}$ and $\mathcal{C}$~\cite{Kulesza:2002rh,Catani:2013tia}.
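Both limits can be verified numerically from the exact one-loop expression, using the dilogarithm available in scipy as $\mathrm{Li}_2(z)=\mathrm{spence}(1-z)$; the sketch below works in units of $\alpha_s C_F/\pi$, with arbitrary illustrative values of $\bar b$ and $\bar N$:

```python
import numpy as np
from scipy.special import spence

zeta2 = np.pi**2 / 6.0

def li2(z):
    # dilogarithm: Li2(z) = spence(1 - z) in scipy's convention
    return spence(1.0 - z)

def g_nlo(bbar, nbar):
    # exact one-loop joint exponent, in units of alpha_s*C_F/pi
    return 2.0 * np.log(nbar)**2 + li2(-bbar**2 / nbar**2) + zeta2

# threshold (large-N) limit: the Li2 term drops out
nbar = 1e4
assert abs(g_nlo(1.0, nbar) - (zeta2 + 2.0 * np.log(nbar)**2)) < 1e-6

# Q_T (large-b) limit: Li2(-x) ~ -zeta2 - log^2(x)/2, the constant cancels
bbar = 1e6
approx = -2.0 * np.log(bbar)**2 + 4.0 * np.log(bbar) * np.log(2.0)
assert abs(g_nlo(bbar, 2.0) - approx) < 1e-3
```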
In order to account for this difference, we add the NLO computation minus the expansion of the logarithmic exponential at NLO: \begin{align}\label{eq:DeltaH} \Delta{\cal H}^{\left(1\right)} = &\; A^{(1)}\left[2\log^{2}\bar{N}+{\rm Li}_{2}\left(-\frac{\bar{b}^{2}}{\bar{N}^{2}}\right)+\zeta_{2}+2\log^{2}\chi-4\log\chi\log\bar{N}\right]\nonumber \\ = &\; A^{(1)}\left[\zeta_{2}+{\rm Li}_{2}\left(-\frac{\bar{b}^{2}}{\bar{N}^{2}}\right)+2\log^{2}\left(\chi/\bar{N}\right)\right] \simeq A^{(1)}\left[\zeta_{2}-{\rm Li}_{2}\left(\frac{\bar{b}^{2}}{\chi^2}\right)\right], \end{align} where the last step is valid up to power-suppressed terms. There is also an analogous term in $\mathcal{H}^{(2)}$, the correct treatment of which would be necessary in order to achieve NNLL$^\prime$ accuracy in both threshold and $Q_T$. However, in this work we only consider NNLL accuracy for joint resummation and therefore we do not have to worry about it. For our numerical studies we take this contribution from transverse momentum resummation and, therefore, we do reach NNLL$^\prime$ for $Q_T$ but not for threshold resummation. Note that the modification of $\mathcal{H}^{(1)}$ not only influences the hard coefficient, but also the exponential. The $N$-dependent contribution that we find is naturally a part of $C_{aa}$ and therefore it should be computed with the strong coupling $\alpha_s$ at the scale $Q/\chi$. If we then express the pre-factor at the hard scale, we induce a new term in the resummed exponent, see Eq.~(\ref{eq:mod-B}), which effectively amounts to a modification of the coefficient $\tilde{B}^{(2)}$: \begin{equation} \label{newB2} \tilde{B}^{(2)} \to\tilde{B}^{(2)}-\beta_0 \Delta{\cal H}^{\left(1\right)}.
\end{equation} \subsection{NNLL joint cross section}\label{NNLL-joint} We are now ready to put together all the contributions discussed in the previous sections and finally arrive at an expression for the DY transverse momentum spectrum that simultaneously resums threshold and $Q_T$ logarithms to NNLL. We start with the resummed exponent, which reads \begin{align}\label{eq:joint-exponent} \mathcal{G}_\text{joint}^{\mathrm{NNLL}}(\alpha_s(\mu_{\sss\rm R}),b,N,Q^2/\mu_{\sss\rm R}^2)=&-\int_{Q^2/\chi^2}^{Q^2}\frac{dq^2}{q^2}\left[A(\alpha_s(q))\log\frac{Q^2}{q^2}+\tilde{B}(N,\alpha_s(q))\right]\nonumber\\ &\hspace{-5em}+\int_{\mu_{\sss\rm F}^2}^{Q^2}\frac{dq^2}{q^2}2\gamma(N,\alpha_s(q))-2A_\text{thr}^{(3)}\log\bar N\int_{\mu_{\sss\rm F}^2}^{Q^2/\chi^2}\frac{dq^2}{q^2} \left(\frac{\alpha_s(q)}{\pi}\right)^3\nonumber\\ &\hspace{-5em}-\frac{1}{2}\int^{Q^2}_{Q^2/\bar{N}^2}\frac{dq^2}{q^2}\tilde{D}_{a}(\alpha_s(q))+\beta_0A^{(1)}\left[\zeta_{2}-{\rm Li}_{2}\left(\frac{\bar{b}^{2}}{\chi^2}\right)\right]\int_{Q^2/\chi^2}^{Q^2}\frac{dq^2}{q^2}\left(\frac{\alpha_s(q)}{\pi}\right)^2\nonumber\\ &\hspace{-5em}-\int_{Q^2/\chi^2}^{Q^2/\bar{N}^2}\frac{dq^2}{q^2}\left(A^{(3)}_{Q_{T}}-A^{(3)}_{\mathrm{thr}}\right)\left(\frac{\alpha_s(q)}{\pi}\right)^{3}\log\frac{Q^2}{q^2}. \end{align} Note that the second line of this expression contains the contribution from DGLAP evolution: the anomalous dimension $\gamma$ is taken up to NLO, while its NNLO contribution is considered only in the soft limit.
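The last step of Eq.~(\ref{eq:DeltaH}) can be probed numerically for the simple choice $\chi=\bar b+\bar N$, for which both sides depend only on the ratio $r=\bar b/\bar N$. The following sketch (with purely illustrative values of $r$) confirms agreement in the threshold ($r\ll1$) and $Q_T$ ($r\gg1$) limits, with a genuine power-suppressed difference at intermediate $r$:

```python
import numpy as np
from scipy.special import spence

def li2(z):
    # dilogarithm: Li2(z) = spence(1 - z) in scipy's convention
    return spence(1.0 - z)

# Both sides of the last step of Delta H^(1) with zeta_2 subtracted,
# for chi = bbar + nbar and r = bbar / nbar:
def exact(r):
    return li2(-r**2) + 2.0 * np.log(1.0 + r)**2

def approx(r):
    return -li2(r**2 / (1.0 + r)**2)

assert abs(exact(1e-3) - approx(1e-3)) < 1e-5   # threshold limit, r -> 0
assert abs(exact(1e3) - approx(1e3)) < 0.05     # Q_T limit, r -> infinity
assert abs(exact(1.0) - approx(1.0)) > 0.1      # finite difference at r ~ 1
```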
The above result can be brought to a rather compact form (details are given in Appendix~\ref{sec:coef}): \begin{align}\label{eq:joint-exponent-final} \mathcal{G}_\text{joint}^{\mathrm{NNLL}}(\alpha_s(\mu_{\sss\rm R}),b,N,Q^2/\mu_{\sss\rm R}^2)=& -\int_{Q^2/\chi^2}^{Q^2}\frac{dq^2}{q^2}\Bigg[A(\alpha_s(q))\log\frac{Q^2}{q^2}+\tilde{B}(N,b,\alpha_s(q))+\frac{1}{2}\tilde{D}(\alpha_s(q))\Bigg]\nonumber \\&- \frac{1}{2}\log \left(\frac{\bar N^2}{\chi^2}\right) \tilde{D}\left(\alpha_s\left(\frac{Q}{\chi}\right) \right)+2\int_{\mu_{\sss\rm F}^2}^{Q^2}\frac{dq^2}{q^2}\gamma_\text{soft}(N,\alpha_s(q)), \end{align} where now $A$ is always the cusp anomalous dimension and $\tilde{B}(N,b,\alpha_s)$ has been put into a form that closely resembles the analogous coefficient $\tilde{B}(N,\alpha_s)$ appearing in $Q_T$ resummation, Eq.~(\ref{eq:mod-B}): \begin{equation} \tilde{B}(N,b,\alpha_s)=B(\alpha_s) +2 \beta(\alpha_s)\frac{d\log \tilde{C}(N,b,\alpha_s)}{d\log \alpha_s}+2\gamma(N,\alpha_s). \end{equation} Note, however, that the coefficient function $\tilde{C}$ differs in two ways from the one entering standard $Q_T$ resummation: first, threshold-enhanced terms are subtracted off and, second, it contains the contribution \begin{align} F(N,b,\alpha_s)= \frac{\alpha_s}{\pi} A^{(1)} \left[\zeta_{2}-{\rm Li}_{2}\left(\frac{\bar{b}^{2}}{\chi^2}\right)\right] + \mathcal{O}\left(\alpha_s^2\right), \end{align} resulting in \begin{equation} \tilde{C}(N,b,\alpha_s)=\tilde{C}(N,\alpha_s)+\frac{F(N,b,\alpha_s)}{2}+\frac{1}{2}\left(\frac{\alpha_s}{\pi}\right)^2\tilde{D}^{(2)}\log\bar{N}, \end{equation} where $\tilde{C}(N,\alpha_s)$ is the same as for $Q_T$ resummation (see Appendix~\ref{sec:coef} for explicit expressions for the coefficients). In order to achieve NNLL in both variables, the DGLAP anomalous dimension $\gamma$ needs to be evaluated at $n$NLO accuracy, which is defined as NLO accuracy plus the $\log \bar N$ contributions from NNLO.
On the other hand, $\gamma_\text{soft}$ in Eq.~(\ref{eq:joint-exponent-final}) only contains the threshold-enhanced contributions, while the residual $\mu_{\sss\rm F}$ dependence is included at fixed order. We note that Eq.~(\ref{eq:joint-exponent-final}) can easily be reduced to the threshold exponent by setting $\bar b=0$, i.e. $\chi=\bar N$. In order to recover the $Q_T$-resummation exponent in the limit $\chi \to \bar b$, a few algebraic steps are necessary, as detailed in Appendix~\ref{sec:coef}. Finally, the hard factor is given by \begin{equation}\label{eq:Hard-function} \mathcal{H}^{F,\;\mathrm{NNLL}}_\text{joint}=1+\frac{\alpha_s}{\pi}\left\{\mathcal{H}^{F,(1)}+A^{(1)}\left[\zeta_{2}-{\rm Li}_{2}\left(\frac{\bar{b}^{2}}{\chi^2}\right)\right]\right\}+\left(\frac{\alpha_s}{\pi}\right)^2\left\{\mathcal{H}^{F,(2)}+\tilde{D}^{(2)}\log\bar{N}\right\}. \end{equation} Thus far, only the flavor-diagonal contributions have been discussed in the context of joint resummation. However, the treatment of the full flavor dependence can be recovered by using the same method as for $Q_T$ resummation. The details of this method are described in Appendix A of~\cite{Bozzi:2005wk}. These off-diagonal contributions are suppressed in the threshold limit; therefore, their inclusion in joint resummation comes with some freedom: we can either include them or treat them only in $Q_T$ resummation, thus providing two results in joint resummation that differ by power-suppressed contributions in the threshold limit. \section{Phenomenological studies}\label{sec:numerics} Having obtained a joint resummed cross section at NNLL accuracy in both $Q_T$ and threshold, we can explore numerical results. In order to analyze the numerical effect of the joint resummation formalism, we make use of a modified version of the \texttt{DYqT}~code~\cite{Bozzi:2010xn,Bozzi:2008bb}. We also use \texttt{DYqT}~to produce results for $Q_T$ resummation only.
We choose the CT14~\cite{Dulat:2015mca} set of parton distributions, which are used at NLO accuracy for the LO and LO+NLL$^\prime$ distributions and at NNLO accuracy for the NLO and NLO+NNLL ones. As in Ref.~\cite{Kulesza:2002rh}, we choose $\chi=\bar{b}+\bar{N}/(1+\eta\;\bar{b}/\bar{N})$ with $\eta=1/4$. A more detailed discussion of the impact of different choices of $\chi$ or $\eta$ can be found in Appendix~\ref{sec:chi}. We first explore the expansion of the resummation and compare it to the fixed-order computation. This allows us to comment on the validity of the approximation. Next, we present the fully resummed result at LO+NLL$^\prime$ and NLO+NNLL accuracies. \subsection{Expansion} We start our study by considering the production of a $Z$ boson at the Tevatron, which is the same setup used in the previous study~\cite{Kulesza:2002rh}. In Fig.~\ref{fig:expansion-Teva-NLL} we show the comparison of the LO $Q_T$ distribution with the expansion of the joint and $Q_T$ resummation differential cross sections using different approximations. The curve labelled ``Joint~NLL$|_{\mathrm{LO}}$'' corresponds to the expansion of the NLL result of Ref.~\cite{Kulesza:2002rh}, which does not include the modification of the hard coefficient. If the additional contribution to the hard coefficient is included, the lines indicated by~``Joint~NLL$^\prime|_{\mathrm{LO}}$'' are obtained. Our default result corresponds to performing joint resummation also in the flavor off-diagonal contributions, which are usually not included in threshold resummation because they are power-suppressed (for recent progress on the all-order understanding of power-suppressed contributions see Ref.~\cite{Bonocore:2016awd} and references therein). This correctly captures the next-to-leading power corrections at $\mathcal{O}(\alpha_s)$, but provides only partial information beyond that.
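The interpolating function adopted here can be checked to satisfy the limits required of $\chi(\bar N,\bar b)$; a minimal sketch (the numerical values are arbitrary illustrative choices):

```python
eta = 0.25  # the choice eta = 1/4 from the text

def chi(nbar, bbar):
    # interpolating function chi = bbar + nbar / (1 + eta*bbar/nbar)
    return bbar + nbar / (1.0 + eta * bbar / nbar)

nbar = 5.0
assert chi(nbar, 0.0) == nbar                    # bbar = 0: chi = Nbar (inclusive/threshold case)
assert abs(chi(nbar, 1e8) / 1e8 - 1.0) < 1e-5    # large bbar: chi -> bbar (Q_T limit)
assert abs(chi(1e8, 3.0) / 1e8 - 1.0) < 1e-5     # large Nbar: chi -> Nbar (threshold limit)
```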
Alternatively, one can exclude these contributions from joint resummation, so that the integral over $Q_T$ precisely reproduces the inclusive cross section obtained with threshold resummation, without additional power corrections. We implement this second resummation scheme by separating the contributions to the $\tilde{B}$ term in Eq.~(\ref{eq:joint-exponent-final}) into two classes: those that do not vanish at large $N$ are treated in joint resummation, while power corrections in the threshold limit are only integrated over the range $[Q^2/\bar{b}^2,Q^2]$, as in $Q_T$ resummation. The results of this second implementation are labeled ``Joint~(diag)~NLL$^\prime|_{\mathrm{LO}}$'', because they include only contributions from the $q\bar q$ initial state. We stress again that these two implementations of joint resummation are the same up to power corrections in the threshold limit. \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{Plots/DYqT-NLL-Teva-exp} & \includegraphics[width=0.45\textwidth]{Plots/DYqT-NLL-Teva-exp-qq-full-ratio} \tabularnewline \hspace{1.7em} (a) &\hspace{1.7em} (b) \tabularnewline \end{tabular} \caption{$Z$ boson transverse momentum distribution at the Tevatron with $\sqrt{S}=1.8$ TeV collision energy. The different approximations are obtained by expanding the NLL resummation to first order and they are compared to the LO result.
In panel (a) the ratio to the fixed order, $1-\frac{d\sigma_X}{dQ_T}/\frac{d\sigma_{\mathrm{LO}}}{dQ_T}$, is plotted, while in panel (b) the fraction in the $q\bar{q}$ channel, $f_q=\frac{d\sigma_q}{dQ_T}/\frac{d\sigma}{dQ_T}$, is shown.} \label{fig:expansion-Teva-NLL} \end{figure} \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{Plots/DYqT-NNLL-Teva-exp} & \includegraphics[width=0.45\textwidth]{Plots/DYqT-NNLL-Teva-exp-qq-full-ratio} \tabularnewline \hspace{1.7em} (a) &\hspace{1.7em} (b) \tabularnewline \end{tabular} \caption{The same as Fig.~\ref{fig:expansion-Teva-NLL}, but comparing the expansion of the NNLL resummation to NLO accuracy.} \label{fig:expansion-Teva-NNLL} \end{figure} In Fig.~\ref{fig:expansion-Teva-NLL}(a) we show $1-\frac{d\sigma_X}{dQ_T}/\frac{d\sigma_{\mathrm{LO}}}{dQ_T}$, where $X$ stands for the different approximations reported in the legend. From this plot we can see that the inclusion of the additional threshold contribution to the hard coefficient produces better agreement with the fixed-order computation up to scales of at least 50 GeV. Moreover, while the ``Joint~(diag)'' result does not perform as well as our default implementation, it does better than just the expansion of $Q_T$ resummation. In Fig.~\ref{fig:expansion-Teva-NLL}(b) we concentrate on the partonic subprocess that we have under theoretical control, namely $q \bar q$. We plot the fraction of the cross section that can be attributed to the $q\bar{q}$ initial-state channel: $f_q=\frac{d\sigma_q}{dQ_T}/\frac{d\sigma}{dQ_T}$. As we move to larger values of $Q_T$, the contributions from the other partonic channels become more significant and, since these terms are not correctly approximated in the ``Joint~(diag)'' method, this results in a deviation from the total fixed-order differential cross section.
Here it can also be seen that the $Q_T$ expansion is worse in this individual channel; however, a cancellation makes it work somewhat better for the sum of all channels. On the other hand, our default implementation for joint resummation does include power-suppressed contributions both in the $q \bar q $ and in the off-diagonal channels, which renders this type of cancellation more moderate. In Fig.~\ref{fig:expansion-Teva-NNLL} a similar comparison can be seen at NLO accuracy in the $Q_T$ distribution. The conclusions are the same as for LO accuracy, and joint resummation works just as well with the extension to NNLL accuracy as at NLL$^\prime$ accuracy. \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{Plots/DYqT-NLL-LHC-exp} & \includegraphics[width=0.45\textwidth]{Plots/DYqT-NLL-LHC-exp-qq-full-ratio} \tabularnewline \hspace{1.7em} (a) &\hspace{1.7em} (b) \tabularnewline \end{tabular} \caption{$Z$ boson transverse momentum distribution at the LHC with $\sqrt{S}=13$ TeV collision energy. The different approximations are obtained by expanding the NLL resummation to first order and they are compared to the LO result. In panel (a) the ratio to the fixed order, $1-\frac{d\sigma_X}{dQ_T}/\frac{d\sigma_{\mathrm{LO}}}{dQ_T}$, is plotted, while in panel (b) the fraction in the $q\bar{q}$ channel, $f_q=\frac{d\sigma_q}{dQ_T}/\frac{d\sigma}{dQ_T}$, is shown.} \label{fig:expansion-LHC-NLL} \end{figure} \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{Plots/DYqT-NNLL-LHC-exp} & \includegraphics[width=0.45\textwidth]{Plots/DYqT-NNLL-LHC-exp-qq-full-ratio} \tabularnewline \hspace{1.7em} (a) &\hspace{1.7em} (b) \tabularnewline \end{tabular} \caption{The same as Fig.~\ref{fig:expansion-LHC-NLL}, but comparing the expansion of the NNLL resummation to NLO accuracy.} \label{fig:expansion-LHC-NNLL} \end{figure} We continue our study by considering $Z$ production at the LHC at 13 TeV center-of-mass energy.
Unfortunately, in this setup our findings are on less solid ground than what was obtained at Tevatron energies. The results plotted in Fig.~\ref{fig:expansion-LHC-NLL}(a) prevent us from claiming that the expansion of joint resummation provides an improved approximation of the fixed order over $Q_T$ resummation alone. This is perhaps surprising because, if we look only at the $q \bar q$ channel, as in Fig.~\ref{fig:expansion-LHC-NLL}(b), we are drawn to the opposite, rather positive, conclusion. However, the same plot shows us that the importance of the $q \bar q$ channel relative to the other partonic subprocesses is reduced. Moreover, power corrections to the threshold expansion are rather important, as indicated by the spread of our default and flavor-diagonal results. This should not come as a surprise as we move further away from the threshold for $Z$ production. Fig.~\ref{fig:expansion-LHC-NNLL} shows that the conclusions remain similar at the next perturbative order. In order to analyze a process closer to threshold, we study $Z'$ production. A mass $M_{Z'}=3$ TeV is used and the other parameters are kept the same as for $Z$-boson production. In order to improve the fit of the PDFs in Mellin space at these scales, the same implementation as in the code {\sc Resummino}~\cite{Fuks:2013vua} was used. As can be seen from Fig.~\ref{fig:expansion-LHC-Zp-NLL}(a), all three expansions provide a good approximation of the fixed order. The size of threshold effects which are not already captured by the $Q_T$ formalism is rather small at the central scale. In addition, it can be noted that the two different methods of joint resummation now agree. The reason for this can be seen in Fig.~\ref{fig:expansion-LHC-Zp-NLL}(b). For $Z'$ production with a high enough mass the dominant channel is $q\bar{q}$, and therefore the difference between the two methods of joint resummation will be small.
Finally, the $Z'$ $Q_T$-distribution at NLO accuracy can be seen in Fig.~\ref{fig:expansion-LHC-Zp-NNLL}. Fig.~\ref{fig:expansion-LHC-Zp-NNLL}(a) shows that the expansion is slightly worse for joint resummation when compared to $Q_T$ resummation; however, this difference is around the 1\% level. The $q\bar{q}$ fraction, as seen in Fig.~\ref{fig:expansion-LHC-Zp-NNLL}(b), is slightly better for joint resummation. \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{Plots/DYqT-NLL-LHC-Zp3TeV-exp} & \includegraphics[width=0.45\textwidth]{Plots/DYqT-NLL-LHC-Zp3TeV-exp-qq-full-ratio} \tabularnewline \hspace{1.7em} (a) &\hspace{1.7em} (b) \tabularnewline \end{tabular} \caption{$Z^\prime$ boson transverse momentum distribution ($M_{Z^\prime}=3$ TeV) at the LHC with $\sqrt{S}=13$ TeV collision energy. The different approximations are obtained by expanding the NLL resummation to first order and they are compared to the LO result. In panel (a) the ratio to the fixed order, $1-\frac{d\sigma_X}{dQ_T}/\frac{d\sigma_{\mathrm{LO}}}{dQ_T}$, is plotted, while in panel (b) the fraction in the $q\bar{q}$ channel, $f_q=\frac{d\sigma_q}{dQ_T}/\frac{d\sigma}{dQ_T}$, is shown.} \label{fig:expansion-LHC-Zp-NLL} \end{figure} \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{Plots/DYqT-NNLL-LHC-Zp3TeV-exp} & \includegraphics[width=0.45\textwidth]{Plots/DYqT-NNLL-LHC-Zp3TeV-exp-qq-full-ratio} \tabularnewline \hspace{1.7em} (a) &\hspace{1.7em} (b) \tabularnewline \end{tabular} \caption{The same as Fig.~\ref{fig:expansion-LHC-Zp-NLL}, but comparing the expansion of the NNLL resummation to NLO accuracy.} \label{fig:expansion-LHC-Zp-NNLL} \end{figure} \subsection{Resummation} Having explored the regime of validity of the expansion, we now focus our attention on the effect of joint resummation on the transverse momentum distribution and its theoretical uncertainties. We begin by showing resummed results in the Tevatron setup.
In Fig.~\ref{fig:resummation-Teva} the transverse momentum distribution is shown in fixed-order perturbation theory and resummed perturbation theory. In particular, the plot in Fig.~\ref{fig:resummation-Teva}(a) shows LO (dotted) and LO+NLL$^\prime$ for joint (solid), joint diagonal (dotted-dashed) and $Q_T$ resummation. Uncertainty bands are provided for $Q_T$ resummation and joint resummation and they are determined by independently varying the factorization and renormalization scales by a factor of 2 using the 7-point method. The bottom panel shows the ratio of the joint-resummation result to standard $Q_T$ resummation at the central scale. Joint resummation causes a small increase in the peak region, followed by a decrease for larger values of $Q_T$ and finally an increase in the tail. In the low $Q_T$ region the two methods of joint resummation agree, and for increasing $Q_T$ values the difference becomes larger. This shows that it is important to have a correct representation of the power corrections in order to have good control at larger values of $Q_T$. We note that, with the exception of the region roughly between 15 and 30 GeV, joint resummation does reduce the uncertainty. We believe that scale variation in $Q_T$ resummation underestimates the uncertainty because the curves for different scales have a pinch point in this region, while the pinch for joint resummation is less pronounced and it appears at lower transverse momentum, in the $Q_T \sim 5$~GeV region where $Q_T$ resummation is dominant. Next we consider the $Q_T$ spectrum one order higher in perturbation theory. In Fig.~\ref{fig:resummation-Teva}(b) we plot NLO (dotted), NLO+NNLL for joint (solid) and joint diagonal (dotted-dashed), and NLO+NNLL$^\prime$ for $Q_T$ resummation. As expected, joint resummation further reduces the scale uncertainty. In addition, the difference between the two methods of joint resummation is smaller at this accuracy, albeit outside the uncertainty band.
The behavior now also changes in comparison to LO+NLL$^\prime$ accuracy. In the low $Q_T$ region, joint resummation agrees with $Q_T$ resummation, while there is still an increase in the tail region. We believe this to be an indication that $Q_T$ resummation alone, if considered at high-enough orders, does capture most of the threshold effects. \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{Plots/DYqT-NLL-Teva-joint} & \includegraphics[width=0.45\textwidth]{Plots/DYqT-NNLL-Teva-joint} \tabularnewline \hspace{1.7em} (a) &\hspace{1.7em} (b) \tabularnewline \end{tabular} \caption{The $Z$-boson transverse momentum distribution at $\sqrt{S}=1.8$ TeV Tevatron collision energy. Fixed-order results are compared to resummed and matched results at different perturbative accuracies. The uncertainty bands are computed by independently varying the factorization and renormalization scales with the 7-point method. The lower panel shows the ratio with respect to the central value of $Q_T$ resummation.} \label{fig:resummation-Teva} \end{figure} We have already seen that the expansion of joint resummation does not approximate the fixed order any better than $Q_T$ resummation in the case of $Z$ production at the LHC. However, it is still interesting to look at the behavior the resummed cross section would have. This result is presented in Fig.~\ref{fig:resummation-LHC}. We note that the behavior of joint resummation at LO+NLL$^\prime$ with respect to $Q_T$ resummation is comparable to the lower-energy (Tevatron) case. However, we do notice a significant difference between the two methods of joint resummation, which indicates a strong dependence on the power corrections. This difference becomes smaller at NLO+NNLL accuracy, but it is still significant.
\begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{Plots/DYqT-NLL-LHC-joint} & \includegraphics[width=0.45\textwidth]{Plots/DYqT-NNLL-LHC-joint} \tabularnewline \hspace{1.7em} (a) &\hspace{1.7em} (b) \tabularnewline \end{tabular} \caption{The same as Figure~\ref{fig:resummation-Teva}, for the LHC at $\sqrt{S}=13$ TeV.} \label{fig:resummation-LHC} \end{figure} Finally, in Fig.~\ref{fig:resummation-LHC-Zp}, we show resummed results for $Z'$ production. At LO+NLL$^\prime$ accuracy, shown in (a), an increase can be seen for the low values of $Q_T$. The two different methods for joint resummation do agree with each other. In addition, a significant reduction of the scale dependence can be observed. At NLO+NNLL accuracy, shown in (b), joint resummation and $Q_T$ resummation provide very similar results, but we do notice a further decrease in the scale uncertainty. \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{Plots/DYqT-NLL-LHC-Zp3TeV-joint} & \includegraphics[width=0.45\textwidth]{Plots/DYqT-NNLL-LHC-Zp3TeV-joint} \tabularnewline \hspace{1.7em} (a) &\hspace{1.7em} (b) \tabularnewline \end{tabular} \caption{The same as Figure~\ref{fig:resummation-LHC}, but for $Z'$ production with $M_{Z'}=3$ TeV.} \label{fig:resummation-LHC-Zp} \end{figure} \section{Conclusions}\label{sec:conclusion} In this paper we have considered the transverse momentum distribution of an electroweak vector boson in joint resummation. This formalism, which was first developed in Refs.~\cite{Li:1998is,Laenen:2000ij}, allows for the simultaneous resummation of logarithmic contributions that are enhanced at small $Q_T$ and those that are enhanced at threshold. While phenomenological applications of this formalism existed at NLL~\cite{Kulesza:2002rh,Kulesza:2003wn,Banfi:2004xa}, to our knowledge, no analysis was performed beyond this logarithmic accuracy. In this paper, we have derived and implemented joint resummation at NNLL.
In particular, we have considered the production of a $Z$ boson via the DY mechanism at the Tevatron and at the LHC, as well as of a heavier $Z^\prime$ at the LHC. By comparing fixed-order results with their approximations obtained by expanding the joint-resummed result in powers of the strong coupling, we have performed a detailed study of the regime of validity of our implementation. We have found that its use is fully justified for $Z$ production at the Tevatron, while at the LHC the situation is much less clear because there are significant contributions from power corrections to the threshold limit. For instance, the $q\bar{q}$ channel is not the dominant channel away from the small-$Q_T$ limit. On the other hand, the formalism works well for the production of heavier particles, such as a hypothetical $Z'$ with a mass of 3~TeV. When looking at all-order results, we have found that joint resummation at NLL$^\prime$ gives noticeable corrections when compared to standard $Q_T$ resummation at the same accuracy. However, differences between the two are much smaller when both resummations are upgraded to NNLL. Nevertheless, NNLL joint resummation leads to a further decrease of the scale dependence. We see several possible directions for future developments of this work. The first one, more theoretical, consists of revisiting the original derivation of Ref.~\cite{Laenen:2000ij} in order to better understand the role of power corrections to the threshold limit, with the aim of including them in phenomenological studies. This would put the joint resummation of Standard Model processes at LHC energies on firmer ground. Moreover, it would be interesting to quantitatively compare the approach presented in this work to the threshold resummation of the $Q_T$ spectrum, as done for instance in Refs.~\cite{Gonsalves:2005ng,Kidonakis:2014zva,deFlorian:2005fzc}.
A special, and particularly interesting, case is given by Higgs production in gluon-gluon fusion, in which power-suppressed contributions at threshold are known to play a less important role than in DY. In this context, we plan to explore the possibility of combining these results with other kinds of joint resummation, such as the simultaneous resummation of small- and large-$x$~\cite{Ball:2013bra} contributions, as well as the joint resummation of small-$x$ and $Q_T$ logarithms, recently proposed in Ref.~\cite{Marzani:2015oyb}. Furthermore, one could also concentrate on Beyond the Standard Model processes. For instance, one could imagine applying our results to the production of supersymmetric particles and thereby upgrading the accuracy of the computer code {\sc Resummino}~\cite{Fuks:2013vua} to NNLL. \begin{acknowledgments} We thank Giancarlo Ferrera, Anna Kulesza, and Eric Laenen for useful discussions and Claudio Muselli for a critical reading of the manuscript. This work was supported by the U.S.\ National Science Foundation, under grants PHY-0969510, the LHC Theory Initiative, PHY-1417317 and PHY-1619867. Support was provided by the Center for Computational Research at the University at Buffalo. \end{acknowledgments}
0705.0786
\section{Introduction} Electronic properties of carbon systems depend dramatically on their geometry and size. In particular, intensive research has been devoted to carbon nanotubes in the last decade. The large interest centers on their peculiar electronic properties inherent to quasi-one-dimensional systems. A similarly fascinating carbon system is a ribbon-like stripe of a graphite sheet, called a graphene nanoribbon or carbon nanoribbon\cite{Ezawa,Nakada}. Recent experimental developments\cite{Novoselov} enable us to isolate graphene, a monolayer of graphite. Nanoribbons have a richer variety than nanotubes because of the existence of edges. Though they are quite attractive materials, this rich variety has made it difficult to carry out a systematic analysis of nanoribbons. \begin{figure}[!tbh] \includegraphics[width=0.6\textwidth]{DecoColor} \caption{(a) A typical structure of nanoribbons. A black circle stands for a carbon atom with one $\protect\pi $ electron, while a blue circle stands for a different atom such as hydrogen.\ A closed area represents a unit cell. It is possible to regard the lattice made of black circles as a part of a honeycomb lattice.\ (b) A nanoribbon is constructed from a chain of $m$ connected carbon hexagons, as depicted in red, and by translating this chain by the translational vector $\mathbf{T}=\pm q\mathbf{a}+\mathbf{b}$ many times, as depicted in pink, where $q<m$. A nanoribbon is indexed by a set of two integers $\langle p,q\rangle $\ with $p=m-q$. Here we have taken $m=4$, $q=2$, $p=2$. } \end{figure} In this work, characterizing a wide class of nanoribbons by a set of two integers $\langle p,q\rangle $, we present a systematic analysis of their electronic properties in parallel to that of nanotubes. They are shown to exhibit a rich variety of band gaps, from metals to typical semiconductors.
We reveal that there exist sequences of metallic or almost metallic nanoribbons which look like streams in a valley made of semiconductors. They approach equi-width curves for wide nanoribbons. We also find a peculiar dependence of the electronic properties of nanoribbons on the width $w$. Furthermore, we point out that the variety of nanoribbons makes them promising candidates for electronic device applications. \section{Classification and Electronic Structures of Graphene Nanoribbons} The classification rule for nanotubes says that a tube is metallic when $n_{1}-n_{2}$ is an integer multiple of $3$, and semiconducting otherwise, where $\left( n_{1},n_{2}\right) $ is the chiral vector of the nanotube. Guided by this classification, as we illustrate in Fig.1, we consider a wide class of nanoribbons indexed by a set of two integers $\langle p,q\rangle $ representing edge shape and width. Nanoribbons with $q=0$ possess zigzag edges, nanoribbons with $q=1$ possess armchair edges, and the other nanoribbons possess chiral edges. \begin{figure}[!tbh] \includegraphics[width=0.7\textwidth]{Band.eps} \caption{{}The band structure of nanoribbons. The horizontal axis is the crystal momentum $k$, $-\pi < k <\pi$, while the vertical axis is the energy $\epsilon$, $-3|t| <\epsilon < 3|t|$ with $|t|=3.033$eV. The band structure depends strongly on the index $q$ for fixed $p$, but depends on the index $p$ only weakly for fixed $q$.} \end{figure} \begin{figure}[!tbh] \includegraphics[width=0.45\textwidth]{Valley} \caption{{}The band gap structure of nanoribbons. (a) The horizontal and vertical axes represent the indices $p$ and $q$, respectively. Magnitudes of band gaps are represented by colored squares. Reddish (blue) squares represent small (large) gap semiconductors. In particular, metallic states are represented by red squares. (b) A bird's eye view. The vertical axis is the energy gap $\Delta $ in units of $|t|=3.033$eV.
Nanoribbons form a valley structure with stream-like sequences of metallic points in the $pq$ plane. We clearly observe three emergence patterns of metallic points: (a) All zigzag nanoribbons are metallic points. (b) Armchair nanoribbons with $p=2,5,8,11,\cdots$ are metallic. (c) Several sequences of metallic points lie on ``streams'' in valleys. In particular, nanoribbons are metallic at the points $\langle p,q\rangle $ with $q=p(p-1)/2$. They constitute the principal sequence of metallic points. Several sequences of metallic or almost metallic points are found in the valley of semiconducting nanoribbons. These sequences correspond to equi-width curves indexed by $w$ as $p=-q+(w/2)\sqrt{3(2q+1)^2+9}$.} \end{figure} We carry out a systematic analysis of the electronic properties of nanoribbons. It is well known that the electronic properties of graphene and nanotubes are well described by the nearest-neighbor tight-binding model. The analysis of nanoribbons can be done in a similar way. The tight-binding Hamiltonian is defined by \begin{equation} H=\sum_{i}\varepsilon _{i}c_{i}^{\dagger }c_{i}+\sum_{\left\langle i,j\right\rangle }t_{ij}c_{i}^{\dagger }c_{j}, \label{HamilTB} \end{equation} where $\varepsilon _{i}$ is the site energy, $t_{ij}$ is the transfer energy, and $c_{i}^{\dagger }$ is the creation operator of a $\pi $ electron at site $i$. The summation is taken over nearest-neighbor sites $\left\langle i,j\right\rangle $. Carbon nanotubes are regarded as a periodic-boundary-condition problem, while graphene nanoribbons are regarded as a fixed-boundary-condition problem imposed on graphene. \begin{figure}[!tbh] \includegraphics[width=0.6\textwidth]{MosikizColor.eps} \caption{{}Illustration of metallic points, sequences and equi-width curves. Metallic points are denoted by red circles.
Solid curves represent sequences of metallic or almost metallic points, while dotted curves represent the points $\langle p,q\rangle $ possessing the same width. The width $w$ is defined as $w=2(p+q)/\sqrt{3(2q+1)^2+9}$. The $n$-th sequence is tangent to the equi-width curve with $w=n$ at $q=1$. These two curves become almost identical for sufficiently wide nanoribbons.} \end{figure} \begin{figure}[!tbh] \includegraphics[width=0.6\textwidth]{WidthColor.eps} \caption{{}The band gaps $\Delta$ in units of $|t|=3.033$eV as a function of the width $w$. They oscillate and take local minima at almost the same values of the width $w$ for any $q$. Their envelope decreases inversely with $w$.} \end{figure} The band gap of nanoribbons is calculated for each point $\langle p,q\rangle $, as in Fig.2. Collecting all these results, we display the band gap structure in Fig.3. Nanoribbons exhibit a variety of properties in electronic conduction, from metals to typical semiconductors. It is remarkable that they form a valley structure with stream-like sequences of metallic or almost metallic nanoribbons (Fig.3). It is observed that nanoribbons indexed by $\langle p,0\rangle $ are metallic for all $p$; these are in the polyacene series with zigzag edges. Nanoribbons indexed by $\langle p,1\rangle $ with $p=2,5,8,11,\cdots $ are also found to be metallic; these have armchair edges. This series has period $3$, reminiscent of the classification rule familiar for nanotubes. We point out a peculiar dependence of the electronic properties of nanoribbons on the width $w$, where \begin{equation} w=\frac{2\left( p+q\right) }{\sqrt{3\left( 2q+1\right) ^{2}+9}}. \label{WidthParam} \end{equation} We extract the stream curves out of Fig.3 and draw them in Fig.4. On this figure we also present equi-width curves. It is remarkable that these sequences and equi-width curves become almost identical for wide nanoribbons, as is clear in Fig.4.
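As a compact illustration, the width formula and the three metallicity patterns described above can be evaluated directly. The following sketch is a simplified reading of the stated rules, not a substitute for the tight-binding band-gap calculation; the function names are ours.

```python
import math

def width(p, q):
    # Effective width of the <p,q> nanoribbon, w = 2(p+q)/sqrt(3(2q+1)^2+9)
    return 2 * (p + q) / math.sqrt(3 * (2 * q + 1) ** 2 + 9)

def is_metallic(p, q):
    # Simplified reading of the three metallic patterns stated in the text.
    if q == 0:                      # (a) zigzag ribbons <p,0>
        return True
    if q == 1:                      # (b) armchair ribbons, p = 2, 5, 8, ...
        return p % 3 == 2
    return q == p * (p - 1) // 2    # (c) principal sequence q = p(p-1)/2

print(width(4, 2))        # width of the <4,2> ribbon of Fig. 1
print(is_metallic(5, 1))  # member of the period-3 armchair series
print(is_metallic(3, 3))  # principal sequence: 3*(3-1)/2 = 3
```

For wide ribbons, comparing `width(p, q)` along a metallic sequence with the equi-width curves reproduces the near-coincidence noted in Fig.4.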
We have depicted the band gap as a function of the width $w$ for each fixed $q$ in Fig.5. The band gaps oscillate and the envelope of the band gap decreases inversely with $w$. \section{Van Hove singularities} \begin{figure}[!tbh] \includegraphics[width=0.7\textwidth]{Peak.eps} \caption{(a) The density of states (DOS) of the $\langle 1,0\rangle$ nanoribbon. (b) Plot of van Hove singularities in the $w$-$\varepsilon$ plane. For a given $\langle p,q\rangle$ nanoribbon, we calculate the width $w$ and the energies $\varepsilon$ at which van Hove singularities develop. We have plotted the points ($w$,$\varepsilon$) for $q=0,1,2,3,4$ and for all $p$ in the region $w<3$. A stripe pattern is manifest.} \label{EdgeBandE} \end{figure} We calculate the energies $\varepsilon $ at which van Hove singularities develop due to the local band flatness at $k=0$ for various $\left\langle p,q\right\rangle $ nanoribbons [Fig.6(a)]. Note that the optical absorption is dominant at $k=0$ because of the dispersion relation $\varepsilon =ck$, with $c$ the light velocity. On the other hand, the width $w$ is determined by $p$ and $q$ as in (\ref{WidthParam}). We show the energy $\varepsilon $ of this peak as a function of $w$ in Fig.6. A peculiar stripe pattern is manifest there. In particular, the maximum and minimum values take almost the same values $\pm 3\left\vert t\right\vert $, reflecting the electronic properties of graphite. The fact that these points lie on smooth curves presents another justification for calling $w$ the width of a nanoribbon. \section{Devices} \begin{figure}[!tbh] \includegraphics[width=0.9\textwidth]{Nanodevice.eps} \caption{{}Several possible nanoribbon devices. (a) Metal-semiconductor heterojunction of nanoribbons with different indices (Schottky diode). (b) Heterojunction of nanoribbons in a transistor configuration (Schottky-gate field-effect transistor). (c) Point contact. (d) Quantum dot.
(e) Aharonov-Bohm ring.} \end{figure} We have revealed a rich variety of band gaps in graphene nanoribbons. They are either quasi-one-dimensional metals or semiconductors depending on their edge shape and width. Graphene nanoribbons could be promising candidates for molecular devices, similar to nanotubes\cite{Yao}, because of ballistic transport at room temperature. We mention the merits of nanoribbons in comparison to nanotubes. Two nanoribbon segments with different atomic and electronic structures can be seamlessly fused together to create intramolecular metal-metal, metal-semiconductor, or semiconductor-semiconductor junctions without introducing pentagons or heptagons into the hexagonal carbon lattice. Diodes or transistors could be made of nanoribbons, as illustrated in Fig.7. For instance, a metal-semiconductor junction forms a Schottky barrier and may behave like a Schottky diode. Similarly, Schottky-gate field-effect transistors, point contacts, quantum dots and Aharonov-Bohm rings may be realized. We might even design complex electronic circuits by etching a graphite monolayer in the future. It is an interesting problem to calculate the electronic characteristics of various junctions; this will be studied elsewhere.
0705.1299
\section{\label{s:intro}Introduction} Powerful experimental cooling techniques have been developed in the past decades that allow us to probe the micro- and nanokelvin regimes while controlling the internal and external degrees of freedom of atomic systems. As a result, dilute ultracold gases that qualify perfectly for the study of quantum phenomena on a macroscopic scale \cite{pethick01,dalfovo99,pitaevskii03} can nowadays be prepared almost routinely. Although these gases are dilute, interactions play an important role, and rich collective phenomena appear, reminiscent of, e.g., those in traditional condensed-matter physics. The attractiveness of Rydberg atoms arises from their extraordinary properties \cite{gallagher}. The large separation between the valence electron and the atomic core is responsible for the exaggerated response to external fields and, therewith, for their enormous polarizability. Rydberg atoms possess large dipole moments and, despite being electronically highly excited, they can possess lifetimes of the order of milliseconds or even more. Due to their susceptibility with respect to external fields and/or their long-range interaction, ensembles of Rydberg atoms represent intriguing many-body systems with rich excitations and decay channels. Starting from laser-cooled ground-state atoms, a laser typically excites a subensemble of the atoms to the desired Rydberg states. Since the ultraslow motion of the atoms can be ignored on short timescales, Rydberg-Rydberg interactions dominate the system and we encounter a so-called frozen Rydberg gas \cite{mourachko}. The strength of the interaction can be varied by tuning external fields and/or by selecting specific atomic states. An exciting objective is the study of the many-body effects to be unraveled in ultracold Rydberg gases (see Refs. \cite{pohl1,pohl2} and references therein). At a certain stage of the evolution ionization might take over, leading to a cold Rydberg plasma.
Beyond the above, there are a number of topical and promising research activities involving cold Rydberg states. One example is long-range molecular Rydberg states \cite{greene}, which show unusual properties when exposed to magnetic fields \cite{igor:jpb39}. Another is the strong dipole-dipole interaction of Rydberg atoms, which strongly inhibits excitation of their neighbors \cite{singer,tong}. The resulting local excitation blockade is state-dependent and can turn Rydberg atoms into possible candidates for quantum information processing schemes~\cite{ryabtsev,lukin}. A precondition for enabling the processing of Rydberg atoms is the availability of tools to control their quantum behavior and properties. An essential ingredient in this respect is the trapping of electronically highly excited atoms. The present work provides a major contribution on this score. Let us briefly address previous works on Rydberg atoms exposed to inhomogeneous static field configurations. First evidence for trapped Rydberg gases was found experimentally by Choi~et~al.~\cite{choi,choi:epjd}. The authors use strong bias fields to trap ``guiding center'' drift atoms for up to $200$~ms. Quantum mechanical studies of highly excited atoms in magnetic quadrupole fields demonstrated the existence of, e.g., intriguing spin-polarization patterns and magnetic-field-induced electric dipole moments \cite{igor:epl,igor:jpb}. These investigations were based on the assumption of an infinitely heavy nucleus. A description of the coupled center of mass (c.m.) and electronic dynamics has been presented in Refs.\cite{igor:prl,igor:pra72}: Trapping has been achieved for quantum states with sufficiently large total, i.e.\ electronic and c.m., angular momentum. Pictorially speaking, this addresses atoms that circle around the point of zero field at a sufficiently large distance.
Recently it has been demonstrated that trapping in a Ioffe-Pritchard configuration is possible without imposing the condition of large c.m. angular momenta \cite{hezel:prl}. The present investigation works out this setup in detail and provides comprehensive results for Rydberg atoms exposed to the Ioffe-Pritchard field configuration. In detail we proceed as follows. Sect.~\ref{s:h} contains a derivation of our working Hamiltonian for a highly excited atom in the inhomogeneous field including the coupling of the electronic and c.m.\ motion of the atom. In Sect.~\ref{s:aa} we introduce an adiabatic approximation in order to solve the corresponding stationary Schr\"odinger equation. In Sect.~\ref{s:ees} we analyze the obtained spectra and point out the capacity of the Ioffe bias field to regulate the distance between the surfaces, and with that the quality of the adiabatic approach. Intersections through the surfaces show their deformation when the field gradient is increased. Subsequently we characterize the electronic wave functions by discussing relevant expectation values. Sect.~\ref{s:cm} is dedicated to the c.m. dynamics in the uppermost adiabatic energy surface. We arrive at a confined quantized c.m. motion without the need to impose any restriction on its properties. Examining the fully quantized states we observe that the extension of the electronic cloud can exceed the extension of the c.m.\ wave function. \section{\label{s:h}Hamiltonian} \subsection{\label{s:h:tbh}Two-body Approach} The large distance of the highly excited valence electron (particle 1) from the remaining closed-shell ionic core of an alkali Rydberg atom (particle 2) renders it possible to model the mutual interaction by an effective potential which is assumed to depend only on the distance of the two particles. 
For alkali atoms, in particular, whose core possesses zero total angular momentum and zero total spin, the only essential difference to the Coulombic case is due to the finite size of the core. In any case, the effective potential $V(r)$ only noticeably differs from the pure Coulomb potential at small distances~$r$. States of high \emph{electronic} angular momenta $l$, on which we focus in the present investigation, almost exclusively probe the Coulombic tail of this potential. The coupling of the charged particles to the external magnetic field is introduced via the minimal coupling, $\bm{p}\rightarrow\bm{p}-q \bm{A}$, where $q$ is the charge of the particle and $\bm{A}$ is a vector potential belonging to the magnetic field $\bm{B}$. Including the coupling of the magnetic moments to the external field ($\bm{\mu}_1$ and $\bm{\mu}_2$ originate from the electronic and nuclear spin, respectively), our initial Hamiltonian reads (we use atomic units except when stated otherwise) \begin{eqnarray} \label{eq:Hinit} H_{init}&=&\frac{1}{2M_1}\left(\bm{p}_1-q_1\bm{A}(\bm{r}_1)\right)^2 +\frac{1}{2M_2}(\bm{p}_2-q_2\bm{A}(\bm{r}_2))^2 \nonumber\\ &&+V(\left|\bm{r}_1-\bm{r}_2\right|) -\bm{\mu}_1\cdot\bm{B}(\bm{r}_1)-\bm{\mu}_2\cdot\bm{B}(\bm{r}_2) \; . \end{eqnarray} We do not take into account spin-orbit coupling and relativistic mass changes. The difference in energy shift for adjacent large-angular-momentum states ($l$, $l\pm 1$) due to these relativistic corrections is $\Delta W_{FS}=\alpha^2/(2n^5)$ \cite{bethe}, where $\alpha$ is the fine-structure constant, and is therefore negligible for Rydberg states. At $n=30$ one obtains $\Delta W_{FS}=1.1\times 10^{-12}$ atomic units. To give an idea of the scope of this approximation we anticipate a result from Sec.~\ref{s:ees}: The energy gap between two adjacent high-$l$ electronic states is approximately $E_{dist}=B/2$ a.u.
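The quoted splitting can be reproduced with a two-line numerical check (a sanity check only; the CODATA value of the fine-structure constant is assumed):

```python
# Reproducing the fine-structure splitting Delta W_FS = alpha^2 / (2 n^5)
# for n = 30, in atomic units.
alpha = 1 / 137.035999   # fine-structure constant (assumed CODATA value)
n = 30
dW_FS = alpha**2 / (2 * n**5)
print(dW_FS)  # ~1.1e-12 atomic units, as quoted in the text
```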
Demanding $\Delta W_{FS} / E_{dist} \ll 1$ results in constraining the Ioffe field strength $B$ to be much larger than $5$~mG. Before we focus on the Ioffe-Pritchard configuration, let us first examine a general field $\bm B$ composed of a constant term $\bm B_c$, a linear term $\bm B_l$ and higher-order terms, $\bm{B}=\sum \bm{B}_i$. The vector potential shall satisfy the Coulomb gauge. The squared terms can then be simplified by taking advantage of the vanishing commutator $[\bm{A}(\bm{r}_1),\bm{p}_1]$ to obtain $(\bm{p}_1-q \bm{A}(\bm{r}_1))^2= \bm{p}_1^2-2 q \bm{A}(\bm{r}_1)\cdot\bm{p}_1+ q^2 \bm{A}(\bm{r}_1)^2$. In the so-called symmetric gauge the vector potential of a constant magnetic field is given by $\bm{A}_c(\bm{r}_1)=1/2\: \bm{B}_c\times\bm{r}_1$. The analogue for a linear field is $\bm{A}_l(\bm{r}_1)=1/3\: \bm{B}_l(\bm{r}_1)\times\bm{r}_1$. It can be proven that the vector potential of an arbitrary magnetic field can be expanded in a corresponding form \cite{igor:diss}, permitting a representation of the vector potential as a cross product $\bm{A}(\bm{r}_1) = \sum_i \bm{A}_i(\bm{r}_1) =\bm{\tilde{B}}(\bm{r}_1)\times\bm{r}_1$, where $\bm{\tilde{B}}(\bm{r}_1)=\sum g_i \bm{B}_i(\bm{r}_1)$ and $i\in \{c,l,\dots\}$ denotes the order of the corresponding terms of $\bm A$ and $\bm B$ with respect to spatial coordinates. The $g_i$ are the coefficients $\frac{1}{2}$, $\frac{1}{3}$, etc. The particular form of this potential and the vanishing divergence of magnetic fields admit the simplification \begin{equation} \label{eq:simpleAp} \bm{A}(\bm{r}_1)\cdot\bm{p}_1=(\bm{r}_1\times\bm{p}_1)\cdot\bm{\tilde{B}}(\bm{r}_1) =\bm{L}_1\cdot\bm{\tilde{B}}(\bm{r}_1) \quad , \end{equation} where we defined, as an example, the angular momentum of particle 1, $\bm{L}_1=\bm{r}_1\times\bm{p}_1$. Since the interaction potential depends only on the distance of the two particles, it is natural to introduce relative and c.m.
coordinates, $\bm r_1=\bm R+(M_2/M)\bm r$ and $\bm r_2=\bm R-(M_1/M)\bm r$ with the total mass $M=M_1+M_2$. If no external field was present, the new coordinates would decouple the internal degrees of freedom from the external c.m.\ ones. Yet even a homogeneous magnetic field couples the relative and the c.m.~motion \cite{dippel,peter:pla}. For neutral systems in static homogeneous magnetic fields, however, a so-called `pseudoseparation' can be performed providing us with an effective Hamiltonian for the relative motion, that depends on the c.m. motion only parametrically via the eigenvalues of the pseudomomentum \cite{avron,peter:cpl208,peter:ctqc,dippel} which is associated with the c.m.\ motion. Such a procedure is not available in the present case of a more general inhomogeneous field. In the new coordinate system the Hamiltonian (\ref{eq:Hinit}) becomes \begin{multline} H= H_0 + \bm{L}_1 \bm{\tilde{B}}(\bm{R}+\frac{M_2}{M}\bm{r}) - \bm{L}_2 \bm{\tilde{B}}(\bm{R}-\frac{M_1}{M}\bm{r}) \\ - \bm{\mu}_1\bm{B}(\bm R+\frac{M_2}{M}\bm r) - \bm{\mu}_2\bm{B}(\bm R-\frac{M_1}{M}\bm r) + \mathcal O(\bm A^2)\ , \end{multline} where the angular momenta of the particles read \begin{align} \bm{L}_{1} = ({M_{1}}/{M})\bm{L}_R + ({M_{2}}/{M})\bm{L}_r + \bm{R}\times \bm{p} + ({m}/{M}) \bm{r}\times\bm{P} \nonumber \\ \bm{L}_{2} = ({M_{2}}/{M})\bm{L}_R + ({M_{1}}/{M})\bm{L}_r - \bm{R}\times \bm{p} - ({m}/{M}) \bm{r}\times\bm{P} \nonumber \end{align} (see also Ref.\ \cite{igor:pra72}), and the terms that do not depend on the field are summarized to $H_0=\frac{\bm{p}^2}{2m}+\frac{\bm{P}^2}{2M}+V(\bm{r})$. Here, $\bm{L}_{\bm{r}}=\bm{r}\times\bm{p}$, $\bm{L}_{\bm{R}}=\bm{R}\times\bm{P}$, and the reduced mass $m=M_1 M_2/M$ have been introduced. 
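The decomposition of $\bm{L}_1$ above can be verified with arbitrary classical vectors, since the identity contains no products of non-commuting operators; it follows from $\bm{r}_1=\bm{R}+(M_2/M)\bm{r}$ together with the standard conjugate-momentum relation $\bm{p}_1=(M_1/M)\bm{P}+\bm{p}$. The masses and test vectors below are illustrative only.

```python
# Numerical check of L_1 = (M1/M) L_R + (M2/M) L_r + R x p + (m/M) r x P,
# using r1 = R + (M2/M) r and p1 = (M1/M) P + p.
M1, M2 = 1.0, 1836.0                  # illustrative electron/core masses (a.u.)
M, m = M1 + M2, M1 * M2 / (M1 + M2)   # total and reduced mass

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

R, P = [0.3, -1.2, 0.7], [2.0, 0.5, -0.1]   # arbitrary test vectors
r, p = [5.0, 1.5, -2.5], [-0.4, 0.9, 0.2]

r1 = [R[i] + M2 / M * r[i] for i in range(3)]
p1 = [M1 / M * P[i] + p[i] for i in range(3)]

L1 = cross(r1, p1)
L_R, L_r, Rxp, rxP = cross(R, P), cross(r, p), cross(R, p), cross(r, P)
rhs = [M1/M * L_R[i] + M2/M * L_r[i] + Rxp[i] + m/M * rxP[i] for i in range(3)]
print(max(abs(L1[i] - rhs[i]) for i in range(3)))  # ~0 up to rounding
```

The analogous expression for $\bm{L}_2$ follows by exchanging $M_1\leftrightarrow M_2$ and flipping the signs of the mixed terms, as in the text.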
To simplify the Hamiltonian we apply a unitary transformation that eliminates c.m.~momentum dependent coupling terms generated by the homogeneous field component \begin{equation} \label{eq:U} U=\exp\left\{\frac{i}{2}\,\bm{B}_c\times \bm{r}\cdot\bm{R}\right\} \; . \end{equation} $H_0$ transforms as follows \[ U^{\dagger}H_0 U= H_0+\frac{1}{2}\bm{B}_c \left(-\frac{1}{m}\bm{R}\times \bm{p}+\frac{1}{M}\bm{r}\times\bm{P}\right) +\mathcal{O}(\bm{B}_c^2).\] The transformation of the remaining terms generates exclusively additional terms, that are quadratic with respect to the magnetic field. Exploiting now the fact that the mass of the ionic core is much larger than the mass of the valence electron, we only keep magnetic field dependent terms of the order of the inverse light mass $1/M_1$ (which becomes $1$ in atomic units). We arrive at the Hamiltonian \begin{multline} \label{eq:UHU} U^\dagger H U = {\bm{p}^2}/{2} + U^{\dagger}V(\bm{r})U + {\bm{P}^2}/{2M} + {1}/{2}\;\bm{L}_{\bm r}\cdot\bm{B}_c \\ + \bm{A}_l(\bm{R}+\bm{r})\cdot\bm{p} +(\bm{L}_{\bm r} +\bm{R}\times\bm{p})\cdot\bm{\tilde{B}}_{n}(\bm R + \bm r) \\ - \bm{\mu}_1\cdot\bm B(\bm R + \bm r)-\bm{\mu}_2\cdot \bm{B}(\bm R) \; . \end{multline} The diamagnetic terms which are proportional to $\bm{A}^2$ (and herewith proportional to $\bm{B}^2$, see Eq.~(\ref{eq:simpleAp})) have been neglected. Due to the unitary transformation $U$, $\bm R$-dependent terms that are quadratic in the Ioffe field strength $B$ do not occur and only an electronic term $B^2 (x^2+y^2)/8$ remains whose typical energy contribution amounts to $B^2 n^4/8\approx 10^{5} B^2$ for $n=30$. Besides we obtain a term quadratic in the field gradient $G$. The term quadratic in the Ioffe field is negligible in comparison with the dominant shift due to the linear Zeeman term as long as $B$ is significantly smaller than $10^4$~Gauss which is guaranteed in our case. 
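The order-of-magnitude statements in this paragraph are easy to reproduce numerically (a sketch; the conversion between gauss and atomic units of field is an assumed value):

```python
# Scale of the residual electronic diamagnetic term B^2 (x^2+y^2)/8:
# with <x^2+y^2> ~ n^4 in atomic units, its scale is B^2 n^4 / 8.
n = 30
prefactor = n**4 / 8
print(prefactor)   # ~1e5, matching the quoted estimate B^2 n^4 / 8 ~ 1e5 B^2

# It is negligible against the linear Zeeman term B L_z / 2 ~ B n / 2
# as long as B << 4 / n^3 a.u.; converting with the assumed factor
# 1 a.u. of magnetic field ~ 2.35e9 G:
B_limit_gauss = (4 / n**3) * 2.35e9
print(B_limit_gauss)  # ~3.5e5 G, so B well below 1e4 G is amply safe
```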
Moreover, the c.~m.\ coordinate dependence of this diamagnetic term is much weaker than the c.~m.\ coordinate dependence of the terms linear in the field gradient. The term quadratic in the field gradient can be neglected in comparison with the corresponding linear term. Up to now we have not used the explicit form of the Ioffe-Pritchard field configuration. (In anticipation of the special field configuration we leave the term containing $\bm A_l$ in its original form.) \subsection{\label{ch:ip}Ioffe-Pritchard Field Configuration} Two widespread magnetic field configurations that exhibit a local field minimum and serve as key ingredients for the trapping of weak-field-seeking atoms are the 3D quadrupole and the Ioffe-Pritchard configuration. The Ioffe-Pritchard configuration resolves the problem of particle loss due to spin flips by means of an additional constant magnetic field. A macroscopic realization uses four parallel current-carrying Ioffe bars, which generate the quadrupole field. Encompassing Helmholtz coils create the additional constant field. There are many alternative layouts; the field of a clover-leaf trap, for example, features the same expansion around the origin \cite{cloverleaf}. On a microscopic scale the Ioffe-Pritchard trap has been implemented on atom chips by a $Z$-shaped wire \cite{folman}. The vector potential and the magnetic field read \begin{gather} \label{eq:overallpotential} \bm{A}=\underbrace{ \frac{B}{2} \left( \begin{array}{ccc} -y \\ x \\ 0 \end{array} \right)}_{=\bm{A}_c} +\underbrace{G\left( \begin{array}{ccc} 0 \\ 0 \\ x y \end{array} \right)}_{=\bm{A}_l} +\bm{A}_q \; , \\ \label{eq:overallfield} \bm{B}= \underbrace{B \left( \begin{array}{ccc} 0 \\ 0 \\ 1 \end{array} \right)}_{=\bm{B}_c} +\underbrace{G \left( \begin{array}{ccc} x \\ -y \\ 0 \end{array} \right)}_{=\bm{B}_l} +\bm{B}_q \; .
\end{gather} where $\bm{A}_q=\frac{Q}{4}(x^2+y^2-4 z^2) (-y \mathbf{e}_x + x \mathbf{e}_y)$ and $\bm{B}_q=Q (2xz\mathbf{e}_x + 2yz\mathbf{e}_y + (x^2+y^2-2 z^2)\mathbf{e}_z)$. $\bm{B}_c$ is the constant field created by the Helmholtz coils, with $B$ being the Ioffe field strength. $\bm{B}_l$ originates from the Ioffe bars and depends on the field gradient $G$. $\bm{B}_q$ designates the quadratic term generated by the Helmholtz coils, whose magnitude, compared to the first Helmholtz term, can be varied by changing the geometry of the trap, $Q= B\cdot \frac{3}{2}(R^2-4D^2)/(R^2+D^2)^{2}$, where $R$ is the radius of the Helmholtz coils and $2D$ is their distance from each other. If we now insert the special Ioffe-Pritchard field configuration, Eqs.~(\ref{eq:overallpotential},\ref{eq:overallfield}), into the transformed Hamiltonian (\ref{eq:UHU}), we obtain \begin{align} \label{eq:HIP} &H_{IP} = H_A +{\bm{P}^2}/{2M} \nonumber +{BL_z}/{2} +G(x+X)(y+Y)p_z \nonumber\\ &+Q/4\big[\big((x+X)p_y-(y+Y)p_x\big) \nonumber\\ &\cdot\big((x+X)^2+(y+Y-2(z+Z))(y+Y+2(z+Z))\big)\big]\nonumber\\ &- \bm{\mu}_1 \bm B(\bm R + \bm r)-\bm{\mu}_2\bm{B}(\bm R) \; , \end{align} where $H_A = {\bm p^2}/{2}-{1}/{r}$ is the operator for a field-free atom. The well-known Zeeman term ${BL_z}/{2}$ comes from the uniform Ioffe field generated by the Helmholtz coils. The following term, involving the field gradient $G$, arises from the linear field generated by the Ioffe bars and couples the relative and c.m.\ dynamics. The part in the square brackets originates from the quadratic term, again created by the coils. It is the only one that depends on the $Z$ coordinate; we will see below that its contribution is negligible under certain conditions. The last term couples the spin of particle two to the magnetic field. Since the electronic spins of closed shells combine to zero, the spin of particle two is the \emph{nuclear} spin only.
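As a consistency check, the full field of Eq.~(\ref{eq:overallfield}), including the quadratic part $\bm{B}_q$, is divergence-free, as any magnetic field must be. A minimal numerical sketch, with arbitrarily chosen field parameters:

```python
# Finite-difference check that the Ioffe-Pritchard field, including the
# quadratic part B_q, is divergence-free.  Parameter values are arbitrary;
# central differences are exact here for the polynomial components.
def B_field(x, y, z, B=1e-6, G=1e-4, Q=1e-8):
    Bx = G * x + 2 * Q * x * z
    By = -G * y + 2 * Q * y * z
    Bz = B + Q * (x**2 + y**2 - 2 * z**2)
    return (Bx, By, Bz)

def divergence(f, x, y, z, h=1e-5):
    dBx = (f(x + h, y, z)[0] - f(x - h, y, z)[0]) / (2 * h)
    dBy = (f(x, y + h, z)[1] - f(x, y - h, z)[1]) / (2 * h)
    dBz = (f(x, y, z + h)[2] - f(x, y, z - h)[2]) / (2 * h)
    return dBx + dBy + dBz

print(divergence(B_field, 0.3, -0.2, 0.5))  # ~0: the field is solenoidal
```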
Even though $\bm \mu_2\bm B$ scales with ${1}/{M_2}$, we will still keep the term. Being the only one containing the nuclear spin it is essential for a proper symmetry analysis. \subsection{\label{ch:SSS}Symmetries, Scaling and the Approximation of a Single $n$-Manifold} Our Hamiltonian is invariant under a number of symmetry transformations $U_S$ that are composed of the elementary operations listed in Tab.~\ref{t:sym}. The parity operations $P_j$, $j \in \{x,y,z\}$, are defined by their action on the spatial laboratory coordinates of the particles which translates one-to-one to c.m. and relative coordinates. In order to exchange the $x$ and $y$ components of the electronic spin we introduce the operator \[S_{xy}= \begin{pmatrix} -i&0\\ 0&1 \end{pmatrix} \; , \] where $S_{xy} S_{xy}^{*}=1$. $T$ represents the conventional time reversal operator for spinless particles which, in the spatial representation, corresponds to complex conjugation. Our unitary symmetries are \begin{subequations}\label{eq:sym} \begin{gather} P_x P_y \hat S_z \hat \Sigma_z \label{eq:symI} \\ P_y P_z I_{xy} S_{xy} \Sigma_{xy} \label{eq:symII} \\ P_x P_z I_{xy} S^{*}_{xy} \Sigma^{*}_{xy} \; . \label{eq:symIII} \end{gather} \end{subequations} The Hamiltonian is also left invariant under the antiunitary symmetry transformation \begin{equation} T P_y . \label{eq:symA} \\ \end{equation} By consecutively applying the latter operator and the unitary operators (\ref{eq:symI}), (\ref{eq:symII}) and (\ref{eq:symIII}) it is possible to create further antiunitary symmetries: \begin{subequations}\label{eq:symAall} \begin{gather} T P_x \hat S_z \hat \Sigma_z \label{eq:symAI} \\ T P_z I_{xy} S_{xy} \Sigma_{xy} \label{eq:symAII} \\ T P_x P_y P_z I_{xy} S^{*}_{xy} \Sigma^{*}_{xy} . 
\label{eq:symAIII} \end{gather} \end{subequations} Paying regard to the fact that $S_{xy}^2=-\hat S_z$ and $\Sigma_{xy}^2=-\hat \Sigma_z$ and that $T$ neither commutes with $\hat S_y$ nor with $S_{xy}$ and $\Sigma_{xy}$, one finds that the operators (\ref{eq:symI}-\ref{eq:symAIII}) form a symmetry group. \begin{table}[tb] \begin{center} \begin{tabular}{lll} \hline \\[-12pt] operator & & operation\\ \hline \hline \\[-9pt] $P_{x}$ & $x$ parity & $x\rightarrow -x$, $X\rightarrow -X$ \\ $\hat S_x$ & electronic spin $x$ op. & $S_y\rightarrow -S_y$, $S_z\rightarrow -S_z$ \\ $\hat \Sigma_x$ & nuclear spin $x$ op. & $\Sigma_y\rightarrow -\Sigma_y$, $\Sigma_z\rightarrow -\Sigma_z$ \\ $I_{xy}$ & coordinate exchange & $x\leftrightarrow y$, $X\leftrightarrow Y$ \\ $S_{xy}$ & el.\ spin component exc.& $S_x\rightarrow -S_y$, $S_y\rightarrow S_x$ \\ $\Sigma_{xy}$ & nuclear spin comp. exc.& $\Sigma_x\rightarrow -\Sigma_y$, $\Sigma_y\rightarrow \Sigma_x$ \\ $T$ & conventional time reversal & $A\rightarrow A^{*}$ \\ \hline \hline \\[-25pt] \end{tabular} \end{center} \caption[Symmetry operation nomenclature]{\label{t:sym} Symmetry operation nomenclature. $P_j$, $\hat S_j$, and $\hat \Sigma_j$ are exemplified by $j=x$, but hold of course also for $j=y,z$.} \end{table} If no Ioffe field is present ($B=0$), eight additional symmetries can be found leaving the Hamiltonian invariant. For an effective one particle approach (and the corresponding one particle symmetries) this was discussed in Ref.~\cite{igor:pra70:4}. As indicated before, the quadratic magnetic field term is small and can be tuned by changing the trap geometry. It can provide a longitudinal confinement which may be treated by perturbative methods. In the case of negligible quadratic field $\bm B_q$, which we assume in the following, the term in the squared brackets of the Hamiltonian (\ref{eq:HIP}) drops out and the $Z$ coordinate is cyclic. 
The corresponding conjugated momentum $P_z$ is consequently conserved and the longitudinal motion is integrated by simply employing plane waves $| k_Z\rangle = \exp\{i Z k_Z\}$. The constraints for this approximation to be valid can be obtained by comparing the above-mentioned term in squared brackets with the Zeeman term, $BL_z/2$. Estimating $\langle x\rangle \approx n^2$, $\langle xp_y\rangle \approx \langle yp_x\rangle \approx n$, and using $|Q|\lessapprox B/(D^2+R^2)$ we find \begin{align} D^2+R^2 &\gg n^4 \quad\quad \textrm{and} \label{eq:constraintA}\\ \sqrt[3]{n(D^2+R^2)} &\gg X,Y \; ,\label{eq:constraintB} \end{align} where $D$ and $R$ characterize the trap geometry. Eqs.~(\ref{eq:constraintA},\ref{eq:constraintB}) are easily fulfilled. We are therefore left with the Hamiltonian \begin{equation} \label{eq:Hnotscaled} H = H_A + (P_x^2+P_y^2)/{2} + H_e \; , \end{equation} where the electronic Hamiltonian reads \begin{equation} \label{eq:Henotscaled} H_{e} = {BL_z}/{2} +G(x+X)(y+Y)p_z - \bm{\mu}_1 \bm B(\bm R + \bm r) \; . \end{equation} For all laboratory fields one finds the magnetic field strength $B$ and the magnetic field gradient $G$ to be a lot smaller than 1. Our Hamiltonian (\ref{eq:HIP}) is thus dominated by $H_A$. The energies of the field free spectrum $E_A^n= -1/2 n^2$ are $n^2$-fold degenerate. We can assume the Ioffe-Pritchard field not to couple adjacent $n$-manifolds as long as $|{E_A^{n}-E_A^{n\pm 1}}|/{E_{Zee}} \gg 1$. The resulting constraints $B \ll n^{-4}$, $G\ll n^{-6}$ and $GR\ll n^{-4}$ yield $B \ll 2900$~G, $G\ll 6\cdot 10^{6}$~T/m for $n=30$ and $R\ll 2.9$~mm if we additionally assume the field gradient $G$ to be as large as $100$~T/m. In our parameter regime each $n$-manifold can therefore be considered separately. 
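The quoted laboratory values for these constraints can be recovered from the atomic-unit inequalities; the following numerical sketch (the unit-conversion constants are our assumed values, not taken from the text) reproduces them for $n=30$.

```python
# Sketch: convert the single-n-manifold validity constraints from atomic
# units to laboratory units for n = 30 (assumed conversion constants).
B_AU_TESLA = 2.3505e5              # 1 a.u. of magnetic field strength in T
A0_METER = 5.2918e-11              # Bohr radius in m
GRAD_AU = B_AU_TESLA / A0_METER    # 1 a.u. of field gradient in T/m

n = 30
B_max_gauss = n**-4 * B_AU_TESLA * 1e4   # B << n^-4   ->  about 2900 G
G_max = n**-6 * GRAD_AU                  # G << n^-6   ->  about 6e6 T/m
G = 100 / GRAD_AU                        # assume a gradient of 100 T/m, in a.u.
R_max_mm = (n**-4 / G) * A0_METER * 1e3  # G R << n^-4 ->  about 2.9 mm

print(B_max_gauss, G_max, R_max_mm)
```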
We thus project the full Hamiltonian on the hydrogenic eigenfunctions $| \alpha \rangle = | n, l, m_l, m_s \rangle$, $H_A | \alpha \rangle = E_A^n | \alpha \rangle $, with fixed principal quantum number $n$, that cover an entire n-manifold. $l$ denotes the orbital angular momentum quantum number, $m_l$ the one of its $z$ component $L_z$ and $m_s$ stands for the quantum number of the electronic spin. Working in a single $n$-manifold we can reformulate the term in the Hamiltonian (\ref{eq:Hnotscaled}) involving the field gradient $G$ into a more compact form. We first consider the commutator $[yz, H_A] = [yz, \bm p^2]/2 = i (yp_z + zp_y)$. This yields \begin{equation} \langle\alpha | yp_z |\alpha^{\prime}\rangle + \langle\alpha | zp_y |\alpha^{\prime}\rangle = -i \langle\alpha | [yz,H_A] |\alpha^{\prime}\rangle = 0 \; , \end{equation} since $|\alpha\rangle$ and $|\alpha^\prime\rangle$ are eigenkets to the same eigenvalue $E_n$. Establishing the relation to the orbital angular momentum operator via $yp_z= L_x + zp_y$ results in \begin{equation} (\langle\alpha | yp_z |\alpha^{\prime}\rangle) = \frac{1}{2} (\langle\alpha | L_x |\alpha^{\prime}\rangle) \; . \end{equation} The same procedure can be applied to $xp_z$ leading to \begin{equation} (\langle\alpha | xp_z |\alpha^{\prime}\rangle) = -\frac{1}{2} (\langle\alpha | L_y |\alpha^{\prime}\rangle) \; . \end{equation} Furthermore $\langle \alpha | XYp_z | \alpha' \rangle = 0$ since $p_z \sim [H_A,z]$, and eventually we can write \begin{equation} G(x+X)(y+Y)p_z = G(xyp_z+XL_x/2-YL_y/2)\; , \end{equation} where we omitted the bracketing alphas, but keep in mind that the above identity holds in a single $n$-manifold only. 
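The key fact used above, namely that matrix elements of any commutator $[A,H]$ vanish between degenerate eigenstates, can be illustrated with a generic Hermitian matrix (a schematic stand-in for $H_A$ and $yz$, not the actual hydrogen calculation):

```python
# Generic illustration: within a degenerate eigenspace of H, matrix
# elements of any commutator [A, H] vanish, the identity used to trade
# y*p_z for L_x/2 inside a single n-manifold.
import numpy as np

rng = np.random.default_rng(0)

# Hermitian H with a 3-fold degenerate eigenvalue (plays the role of H_A).
Q, _ = np.linalg.qr(rng.standard_normal((6, 6)))
E = np.diag([1.0, 1.0, 1.0, 2.0, 3.0, 4.0])
H = Q @ E @ Q.T

A = rng.standard_normal((6, 6))
A = A + A.T                    # arbitrary Hermitian operator ("yz")

C = A @ H - H @ A              # the commutator [A, H]
P = Q[:, :3]                   # eigenvectors of the degenerate eigenvalue
block = P.T @ C @ P            # <alpha|[A,H]|alpha'> within the manifold
assert np.allclose(block, 0.0)
```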
In order to remove the separate dependencies on the field parameters $B$, $G$, and on the mass $M$ from the coupling terms, we introduce scaled c.m.~coordinates, $\mathbf{R}\rightarrow \gamma^{-\frac{1}{3}} \mathbf{R}$, with $\gamma=G M$, and simultaneously we introduce the energy unit $\epsilon=\gamma^\frac{2}{3}/M$. Introducing the effective magnetic field \begin{equation} \bm{G}(X,Y) = \left( \begin{array}{ccc} X \\ -Y \\ \zeta \end{array} \right) \; , \quad \zeta =BM\gamma^{-\frac{2}{3}}\; , \end{equation} and omitting the constant energy offset $E_A^n$, the Hamiltonian can be given the advantageous form \begin{equation} \label{eq:Hwork} \mathcal H= \frac{P_x^2+P_y^2}{2} + \bm{\mu}\cdot\bm{G}(X,Y) + \gamma^{\frac{1}{3}} (xyp_z+x S_x-y S_y). \end{equation} The first term is the c.m.~kinetic energy. $\bm \mu$ is the $2n^2$-dimensional matrix representation of the total magnetic moment of the electron, $\frac{1}{2}(\bm L_{\bm r} +2\bm S)$, and the second term in (\ref{eq:Hwork}) describes its coupling to the effective magnetic field $\bm{G}$. The latter results from the original field~$\bm B_c+\bm B_l$ in Eq.~(\ref{eq:overallfield}) taking into account the corresponding coordinate and energy scaling factors. $S_i$ are the components of the electronic spin, $\bm S=-\bm\mu_1$. The nuclear spin term $-\bm{\mu}_2\cdot\bm{B}(\bm R)$ has been omitted since it is several orders of magnitude smaller than the electronic one. \section{\label{s:aa}Adiabatic Approach} The large difference of the particles' masses and velocities in our two body system makes it plausible to adiabatically separate the electronic and the c.m.~motion. The corresponding time scales differ substantially even for large principal quantum numbers $n$. 
However, due to the enormous level density in case of Rydberg atoms it is \emph{a priori} unclear whether isolated energy surfaces might exist or whether, as one might naturally assume, non-adiabatic couplings are ubiquitous and therefore an adiabatic approach might invalidate itself. The procedure is reminiscent of the Born-Oppenheimer ansatz in molecular systems and is based on the idea that the slow change of the heavy particle's position allows the electron to adapt instantaneously to the inhomogeneous field. The electronic energy of the system can thus be considered as a function of the position of the heavy particle. The adiabatic approximation is introduced by subtracting the transversal c.m.~kinetic energy, $\mathcal T={(P_x^2+P_y^2)}/{2}$, from the total Hamiltonian (\ref{eq:Hwork}). The remaining electronic Hamiltonian for fixed center of mass reads \begin{equation} \label{eq:BOHe} \mathcal H_e= \bm{\mu}\cdot\bm{G}(X,Y) + \gamma^{\frac{1}{3}} (xyp_z+x S_x-y S_y). \end{equation} The electronic wave function $\varphi_\kappa$ depends parametrically on $\bm R$ and the total atomic wavefunction can be written as \begin{equation} |\Psi(\bm r,\bm R)\rangle = |\varphi_\kappa(\bm r;\bm R)\rangle \otimes |\psi_\nu(\bm R)\rangle \; , \end{equation} where $|\psi_\nu(\bm R)\rangle$ is the center of mass wave function. The internal problem posed by the stationary, electronic Schr\"odinger equation \begin{equation} \label{eq:internal} \mathcal H_e \; |\varphi_\kappa(\bm r;\bm R)\rangle = E_\kappa(X,Y)\; |\varphi_\kappa(\bm r;\bm R)\rangle \end{equation} is solved for the adiabatic electronic potential energy surfaces $E_\kappa(X,Y)$, that serve as a potential for the c.~m.~dynamics. Within this approximation, the equation of motion for the center of mass wave function reads \begin{equation} \label{eq:BOcm} \left( \mathcal T + E_\kappa(X,Y) \right) \; |\psi_\nu(\bm R)\rangle = \epsilon_{\nu} \; |\psi_\nu(\bm R)\rangle \; . 
\end{equation} The spatially dependent transformation $\mathcal U(X,Y)$, that diagonalizes the matrix representation $\mathcal{H}_e$ of the electronic Hamiltonian, is composed of the vector representations of the electronic eigenfunctions, $\bm{\mathcal{U}}_\kappa = \left( \mathcal{U}_{\kappa\alpha} \right) = \left( \langle \alpha |\varphi_\kappa(\bm r;\bm R)\rangle \right)$. Since $\mathcal{U}$ depends on the c.m.~coordinates, the transformed kinetic energy involves non-adiabatic couplings \nolinebreak $\Delta \mathcal T$ \begin{equation} \mathcal{U}^\dagger \mathcal H \mathcal{U} = \mathcal{U}^\dagger \mathcal H_e \mathcal{U} + \mathcal{U}^\dagger \mathcal T \mathcal{U} = E_\kappa(X,Y) + \mathcal T + \Delta \mathcal T \end{equation} that have been neglected in the adiabatic approximation of Eq.~(\ref{eq:BOcm}), \begin{multline} \label{eq:DeltaT} \Delta \mathcal T = -1/2 \cdot\big( \mathcal{U}^\dagger (\partial^2_X \mathcal{U}) + \mathcal{U}^\dagger (\partial^2_Y \mathcal{U}) \\ + 2 \mathcal{U}^\dagger (\partial_X \mathcal{U}) \partial_X + 2 \mathcal{U}^\dagger (\partial_Y \mathcal{U})\partial_Y \big) \; . \end{multline} They can be calculated explicitly as soon as the electronic adiabatic eigenfunctions have been computed. 
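The structure of such non-adiabatic couplings can be made explicit with a two-level toy model (an assumption purely for illustration, not the Rydberg Hamiltonian): the coupling, written as $\langle\varphi'|\partial_X H|\varphi\rangle/(E'-E)$, peaks at an avoided crossing and scales with the inverse energy gap.

```python
# Toy model H(X) = X*sigma_z + g*sigma_x: the non-adiabatic coupling in
# Hellmann-Feynman form is largest at the avoided crossing at X = 0,
# where it equals 1/(2g), i.e. the inverse of the minimal gap.
import numpy as np

def coupling(X, g):
    """|<phi_2| dH/dX |phi_1>| / (E_2 - E_1) for H = X*sz + g*sx."""
    H = np.array([[X, g], [g, -X]])
    dH = np.array([[1.0, 0.0], [0.0, -1.0]])   # dH/dX = sigma_z
    E, V = np.linalg.eigh(H)
    return abs(V[:, 1] @ dH @ V[:, 0]) / (E[1] - E[0])

g = 0.05                                       # half the minimal gap
Xs = np.linspace(-1, 1, 201)
c = np.array([coupling(X, g) for X in Xs])

assert np.argmax(c) == 100                     # maximum at the crossing X = 0
assert abs(c[100] - 1 / (2 * g)) < 1e-8        # peak value = inverse gap
```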
Non-adiabatic contributions can be neglected if the conditions \begin{gather} \label{eq:neglectdeltaT1} \left\lvert \frac{\langle\varphi_{\kappa^\prime} | (\partial_X \mathcal H) |\varphi_\kappa\rangle }{E_{\kappa^\prime}-E_\kappa} \right\rvert \ll 1 \; , \quad \left\lvert \frac{\langle\varphi_{\kappa^\prime} | (\partial_Y \mathcal H) |\varphi_\kappa\rangle }{E_{\kappa^\prime}-E_\kappa} \right\rvert \ll 1 \; ,\\ \label{eq:neglectdeltaT2} \left\lvert \frac{\langle\varphi_{\kappa^\prime} | (\partial^2_X \mathcal H) |\varphi_\kappa\rangle }{E_{\kappa^\prime}-E_\kappa} \right\rvert \ll 1 \; , \quad \left\lvert \frac{\langle\varphi_{\kappa^\prime} | (\partial^2_Y \mathcal H) |\varphi_\kappa\rangle }{E_{\kappa^\prime}-E_\kappa} \right\rvert \ll 1 \phantom{\; ,} \end{gather} are fulfilled \cite{igor:pra72}. The energy denominator in (\ref{eq:neglectdeltaT1}) and (\ref{eq:neglectdeltaT2}) indicates that one can expect non-adiabatic couplings to become relevant between the adiabatic energy surfaces when they come very close in energy, i.e.\ in the vicinity of avoided crossings. Recalling the results of the symmetry analysis, it can be demonstrated that the energy surfaces $E_\kappa$ exhibit three mirror symmetries. Within the adiabatic approximation, $X$ and $Y$ are parameters in the electronic Schr\"odinger equation. Symmetry operations applied to the electronic Hamiltonian thereby act only on the electronic subspace. If we apply the corresponding restricted symmetry operation $U_{P} = P_x P_y \hat S_z \hat \Sigma_z$ (\ref{eq:symI}), which was already shown to leave the full Ioffe-Pritchard Hamiltonian (\ref{eq:HIP}) invariant, to the electronic Hamiltonian $H_e$ (\ref{eq:Henotscaled}), we find \begin{equation} U_{P}^\dagger H_e(\bm r; X,Y) U_{P} = H_e(\bm r; -X,-Y) \; . \end{equation} Since unitarily equivalent observables, $A$ and $U^\dagger A U$, possess the same eigenvalue spectrum, we find the energy surfaces to be inversion symmetric with respect to the origin in the $X$-$Y$ plane.
The symmetry operator $U_{Y}=T P_y$, and the operator that is composed of $U_{Y}$ and $U_{P}$, namely $U_{X}=T P_x \hat S_z \hat \Sigma_z$ (see (\ref{eq:symA}) and (\ref{eq:symAI})), mirror the energy surfaces at the axes, \begin{align} U_{Y}^\dagger H_e(\bm r; X,Y) U_{Y} &= H_e(\bm r; X,-Y) \; ,\\ U_{X}^\dagger H_e(\bm r; X,Y) U_{X} &= H_e(\bm r; -X,Y) \; . \end{align} The electronic problem (\ref{eq:internal}), with the core fixed at an arbitrary position, is three-dimensional. No symmetry arguments can be exploited to reduce the dimensionality of the problem. In order to solve it, we employ the variational method, which maps the stationary Schr\"odinger equation onto an ordinary algebraic eigenvalue problem. Since the matrix representation of the electronic Hamiltonian is sparse, an Arnoldi decomposition is used. Both this decomposition and the surfaces' mirror symmetries help to reduce the computational cost of solving the electronic Schr\"odinger equation. \section{\label{s:ees}Electronic potential energy surfaces} In this section the properties of the electronic adiabatic energy surfaces are analyzed for different regimes of Ioffe field strengths and field gradients. These two parameters can be used to shape the potential in which the center of mass dynamics takes place. To understand how this comes about, we inspect the electronic Hamiltonian to unravel the influence of the individual terms for different parameter regimes. The characteristic length scale of the center of mass dynamics is of the order of one in scaled atomic units. It is therefore adequate to compare the magnitudes of the different parts of the electronic Hamiltonian (\ref{eq:BOHe}) in order to estimate their impact on the center of mass motion, putting $X$ and $Y$ equal to one.
The first part, $\bm{\mu}\cdot\bm{G}(X,Y)$, consists of the coupling terms $X(\frac{1}{2}L_x+S_x) - Y(\frac{1}{2}L_y+S_y)$, that are then of the order of $\langle L_i \rangle \approx n$ for high angular momentum states, and of the Zeeman term $\zeta(\frac{1}{2}L_z+S_z)$, which can be as large as $\zeta n$. The second part, $\gamma^{{1}/{3}}(xyp_z+x S_x-y S_y)$, is quadratic in the relative coordinates which makes it particularly important for high principal quantum numbers $n$. If we consider the expectation values of the relative coordinates to be of the order of $n^2$, and $\langle yp_z \rangle \approx \langle L_x \rangle \approx n$, the overall magnitude can be estimated to $\gamma^{{1}/{3}} n^3$. In a nutshell, we have for the mentioned three terms the following relative orders of magnitude, \begin{equation} \label{eq:factors} 1 \; , \quad \zeta \quad \textrm{and} \quad \gamma^{\frac{1}{3}} n^2 \; . \end{equation} Due to the special form of the electronic Hamiltonian, changing the magnetic field parameters~$B$ and~$G$ while keeping their ratio~$\zeta/ \gamma^{{1}/{3}}=B/G$ (and $n$) constant results in a mere scaling of the c.m.~coordinates. We provide typical examples for values of the quantities (\ref{eq:factors}) in table \ref{t:parameters}. 
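Representative entries of Tab.~\ref{t:parameters} can be reproduced directly from the definitions $\gamma=GM$ and $\zeta=BM\gamma^{-2/3}$; a minimal sketch for $^{87}$Rb (the unit-conversion constants are our assumed values):

```python
# Sketch: compute the scaled quantities gamma^(1/3) n^2 and zeta for 87Rb
# in atomic units from laboratory field parameters (assumed constants).
B_AU_TESLA = 2.3505e5                 # 1 a.u. of magnetic field in T
GRAD_AU = B_AU_TESLA / 5.2918e-11     # 1 a.u. of field gradient in T/m
M_RB = 86.909 * 1822.888              # mass of 87Rb in electron masses

def gamma13_n2(G_Tm, n):
    gamma = (G_Tm / GRAD_AU) * M_RB   # gamma = G*M in a.u.
    return gamma**(1/3) * n**2

def zeta(B_gauss, G_Tm):
    B = B_gauss * 1e-4 / B_AU_TESLA   # field strength in a.u.
    gamma = (G_Tm / GRAD_AU) * M_RB
    return B * M_RB * gamma**(-2/3)

# e.g. G = 1 T/m, n = 30 and B = 1 G, G = 1 T/m (tabulated: 0.296 and 622.0)
print(gamma13_n2(1, 30), zeta(1, 1))
```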
\begin{table}[tbp] \centering \begin{small} \begin{tabular}{lr|ccccccc|} \\[-5pt] \multicolumn{3}{r}{G [T/m]\quad\; \textbf{0.01$\phantom{||}$}} & \textbf{0.1} & \textbf{1} & \textbf{10} & \textbf{100} & \textbf{1000} & \multicolumn{1}{l}{\textbf{10000}} \\ \hline\\[-12pt] \multicolumn{1}{r}{\textbf{}} & \multicolumn{1}{r}{n\, } & \multicolumn{1}{l}{\textbf{}} & \multicolumn{1}{l}{\textbf{}} & \multicolumn{1}{l}{\textbf{}} & \multicolumn{1}{l}{\textbf{}} & \multicolumn{1}{l}{\textbf{}} \\ \cline{3-9} \multicolumn{1}{r}{\textbf{}} & & \multicolumn{1}{l}{\textbf{}} & \multicolumn{1}{l}{\textbf{}} & \multicolumn{1}{l}{\textbf{}} & \multicolumn{1}{l}{\textbf{}} & \multicolumn{1}{l}{\textbf{}} & &\\ [-12pt] {\colorbox{mygrey}{$\gamma^{\frac{1}{3}}n^2$}} & \textbf{3} & 0.001 & 0.001 & 0.003 & 0.006 & 0.014 & 0.030 & 0.064 \\ & \textbf{10} & 0.007 & 0.015 & 0.033 & 0.071 & 0.153 & 0.329 & 0.709 \\ & \textbf{30} & 0.064 & 0.138 & 0.296 & 0.638 & 1.375 & 2.963 & 6.383 \\ & \textbf{50} & 0.177 & 0.382 & 0.823 & 1.773 & 3.820 & 8.229 & 17.729 \\ & \textbf{80} & 0.454 & 0.978 & 2.107 & 4.539 & 9.778 & 21.067 & 45.387 \\ \cline{3-9} \\[-10pt] \multicolumn{3}{c}{B [Gauss]\;\;} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \multicolumn{1}{l}{} \\ \cline{3-9} \multicolumn{1}{r}{\textbf{}} & & \multicolumn{1}{l}{\textbf{}} & \multicolumn{1}{l}{\textbf{}} & \multicolumn{1}{l}{\textbf{}} & \multicolumn{1}{l}{\textbf{}} & \multicolumn{1}{l}{\textbf{}} & &\\ [-12pt] {\colorbox{mygrey}{$\zeta$}} & \textbf{0.01} & 134.0 & 28.87 & 6.220 & 1.340 & 0.289 & 0.062 & 0.013 \\ & \textbf{0.1} & 1340 & 288.7 & 62.20 & 13.40 & 2.887 & 0.622 & 0.134 \\ & \textbf{1} & 13402 & 2887 & 622.0 & 134.0 & 28.87 & 6.220 & 1.340 \\ & \textbf{10} & 134015 & 28873 & 6220 & 1340 & 288.7 & 62.20 & 13.40 \\ \cline{3-9} \\[-8pt] \hline \end{tabular} \end{small} \caption[Parameters]{Explicit values for $\gamma^{{1}/{3}} n^2=(GM)^{{1}/{3}} n^2$ and 
$\zeta=BM^{{1}/{3}}G^{-{2}/{3}}$ for $^{87}$Rb in atomic units. The first block lists $\gamma^{{1}/{3}} n^2$ for different values of the field gradient $G$ and for different principal quantum numbers~$n$. The second block lists~$\zeta$ for different field gradients and for different field strengths~$B$. } \label{t:parameters} \end{table} \subsection{\label{ch:regulatingcapacity}Regulating Capacity of the Ioffe Field} To understand the impact of the Ioffe field strength $B$ on the adiabatic energy surfaces, we isolate its effect by suppressing other influences. This can be done by choosing a relatively low field gradient $G$ and/or a small principal quantum number $n$ (see Tab.~\ref{t:parameters}). The factor $\gamma^{\frac{1}{3}} n^2$ becomes small, and the last term in Eq.~(\ref{eq:BOHe}) will hardly provide any contribution. Within this regime, on which we focus in this subsection, approximate analytical expressions for the electronic adiabatic energy surfaces can be derived. We diagonalize the approximate electronic Hamiltonian \begin{equation} \label{eq:Hetilde} \tilde H_e= \frac{1}{2} \bm{G} \cdot (\bm L + 2\bm S) \end{equation} by applying the spatially dependent unitary transformation \begin{equation} \label{eq:diagtrafo} U_D(X,Y)= e^{i\phi(L_z+S_z)} e^{i\beta(L_y+S_y)} \; , \end{equation} with $\phi=\arctan\frac{Y}{X}$, $\cos\beta=\gamma^{-\frac{2}{3}}M B|\bm{G}(X,Y)|^{-1}$ and $\sin\beta=-\sqrt{X^2+Y^2}|\bm{G}(X,Y)|^{-1}$. This yields \begin{equation} U^\dagger_D \tilde H_e U_D = \frac{1}{2} ( L_z + 2 S_z) |\bm G(X,Y)| \; \end{equation} for the transformed approximate electronic Hamiltonian. The spatially dependent transformation~$U_D$ locally rotates the magnetic moment of the electron, which includes its spin and its angular momentum, such that it is parallel to the local direction of the magnetic field.
The operators~$L_z$ and~$S_z$ are not identical to the ones before the transformation~(\ref{eq:diagtrafo}) has been applied; rather, they refer to the local quantization axis defined by the local magnetic field direction~\cite{igor:prl}. The adiabatic potential surfaces evaluate to \begin{align} \label{eq:Ekappasmallgradient} E_\kappa(X,Y) &= \frac{1}{2} (m_l + 2m_s) |\bm G(X,Y)| \nonumber\\ &= \frac{1}{2} (m_l + 2m_s) \sqrt{X^2+Y^2+\zeta^2} \; . \end{align} The possible combinations of $m_l$ and $m_s$ yield $2n+1$ energy surfaces. The surfaces highest and lowest in energy correspond to circular states ($|m_l|=l_{max}=n-1$, $m_l+2m_s=\pm n$), and they are the only non-degenerate ones. For the other surfaces ($|m_l+2m_s| < n$), the multiplicity of $(m_l+2m_s)$, and with that the degree of degeneracy of the corresponding surfaces, is given by $2n - |m_l+2m_s+1| - |m_l+2m_s-1|$. Starting from the highest energy surface, the levels of degeneracy thus are 1, 2, 4, 6, \dots. The approximate surfaces $E_\kappa$ (\ref{eq:Ekappasmallgradient}) are rotationally symmetric around the $z$-axis. An expansion around this axis ($\rho = \sqrt{X^2+Y^2} \ll \zeta$) yields a harmonic potential, \begin{equation} \label{eq:Eksmallrho} E_\kappa(\rho) \approx (\zeta + \frac{1}{2\zeta} \rho^2)\cdot\frac{1}{2} (m_l + 2m_s) \; , \end{equation} while we find a linear behavior, \begin{equation} \label{eq:Ekbigrho} E_\kappa(\rho) \approx \frac{\rho}{2}\cdot(m_l + 2m_s) \; , \end{equation} when the center of mass is far from the $z$-axis ($\rho \gg \zeta$). \begin{figure}[tbh!] \centering \includegraphics[width=8cm]{sectionx_3_different_zetas.eps} \caption[Sections for increasing Ioffe field, $n=3$.]{Sections along the $X$-axis through the electronic adiabatic energy surfaces of an entire $n=3$ manifold. The field gradient is fixed at $G=1$ Tesla/m in order to suppress the influence of the last term in $H_e$ (\ref{eq:BOHe}).
From left to right, $\zeta=B M \gamma^{-{2}/{3}}$ increases due to an increasing Ioffe field.} \label{f:differentzetas} \end{figure} For the purpose of illustration we demonstrate the behavior of the adiabatic surfaces with increasing Ioffe field by means of a somewhat artificial example where other, previously neglected interactions might be more important. Fig.~\ref{f:differentzetas} shows sections through all the surfaces for $n=3$. This principal quantum number has been chosen in order to keep the sections simple while displaying the entire $n$-manifold. We employ $^{87}$Rb in this expository example although the electronic ground state of its outermost electron is $5$s. The sections have been calculated for the field gradient $G=1$~T/m and for different field strengths~$B$ using the total electronic Hamiltonian (\ref{eq:BOHe}). These parameters yield $\gamma^{{1}/{3}} n^2=0.003$, and values for~$\zeta$ ranging from $0.01$ to $1$. The surfaces in the different graphs of Fig.~\ref{f:differentzetas} indeed validate the approximate expression~(\ref{eq:Ekappasmallgradient}): we find $2n+1$ surfaces with the expected degeneracies, and the harmonic behavior for $|X| \ll \zeta$ gives way to a linear increase for $|X| \gg \zeta$. The energetic distances and lengths in the different graphs are comparable, since the scaling factor for the center of mass coordinates $\gamma=GM$ has not been changed. We can conclude that increasing the Ioffe field strength $B$ separates the surfaces from each other. \begin{figure}[tb!] \centering \includegraphics[width=7.5cm]{fig2sectionn30smallgamma.eps} \caption[Sections for increasing Ioffe field, $n=30$.]{Sections along the $X$-axis through the uppermost $21$ surfaces of the $n=30$ manifold of $^{87}$Rb for increasing ratios $B/(G n^2)$. The field gradient is fixed at $G=10$ T/m while the Ioffe field is increased from top left to bottom right. ($B=24$ mG, $B=48$ mG, $B=0.24$ G, $B=0.48$ G).
For small ratios $B/(G n^2)$ the influence of the second term in (\ref{eq:BOHe}) is not completely suppressed as can be seen from the lifted degeneracies in the upper subfigures.} \label{f:n30differentzetas} \end{figure} The data presented in Fig.~\ref{f:n30differentzetas} have been computed for the $n=30$ manifold. In order to keep the last term in (\ref{eq:BOHe}) small, the field gradient has been set to $G=10$~T/m ($\rightarrow \gamma^{{1}/{3}} n^2=0.64$). The uppermost $21$ energy surfaces are shown for different values of the magnetic field strength $B$. Similar to the $n=3$ case, one can see the harmonic behavior around the origin. The surfaces' minimal distance becomes larger for increasing $\zeta$. Since $\zeta$ and $\gamma^{{1}/{3}} n^2$ are of the same order of magnitude in subfigure (a), the contribution of the last term in (\ref{eq:BOHe}), which lifts the degeneracy of the curves, is visible. The energetic distance of the approximate surfaces described by Eq.~(\ref{eq:Ekappasmallgradient}) increases with larger distances from the $Z$-axis, $\rho$, and with larger $\zeta$. The minimum energetic gap between two adjacent surfaces is at the origin and reads \begin{equation} \label{eq:mindist} | E_\kappa(O) - E_{\kappa\pm 1}(O) | = \frac{B}{2} M\gamma^{-\frac{2}{3}} = \frac{\zeta}{2} \; . \end{equation} The parameter $\zeta$ (and hence the field strength $B$) is the tool to control the energetic distance between the adiabatic surfaces. Increasing $\zeta$, one can thus also minimize the non-adiabatic couplings $\Delta \mathcal T$ (\ref{eq:DeltaT}) discussed in Sect.~\ref{s:aa}, since they scale with the reciprocal energetic distance of the surfaces.
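The limiting behaviors (\ref{eq:Eksmallrho}) and (\ref{eq:Ekbigrho}) as well as the minimal gap (\ref{eq:mindist}) of the approximate surfaces (\ref{eq:Ekappasmallgradient}) are easily checked numerically; a sketch (the parameter values are chosen for illustration only):

```python
# Sketch: the approximate surfaces E_k(rho) = (k/2) sqrt(rho^2 + zeta^2),
# with k = m_l + 2 m_s, have a gap zeta/2 between adjacent k at the
# origin, a harmonic core for rho << zeta and a linear tail for rho >> zeta.
import math

zeta = 5.0
def E(rho, k):
    return 0.5 * k * math.sqrt(rho**2 + zeta**2)

# minimal gap between adjacent surfaces (k differing by 1) at the origin
assert abs(E(0.0, 2) - E(0.0, 1) - zeta / 2) < 1e-12

# harmonic regime rho << zeta
rho = 0.05
harmonic = (zeta + rho**2 / (2 * zeta)) * 0.5 * 1
assert abs(E(rho, 1) - harmonic) < 1e-6

# linear regime rho >> zeta
rho = 500.0
assert abs(E(rho, 1) - rho / 2) / (rho / 2) < 1e-4
```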
\begin{table}[bt] \centering \begin{tabular}{cccccc} $\textrm{B [G]}$&$\zeta$&$\textrm{G [T/m]}$&$\gamma^{1/3} n^2$&$\Delta E $&$\Delta \textrm{ [\%]}$\\ \hline \\[-10pt] $0.01$&$0.288$&$100$&$1.375$&$0$&\\ $0.1$&$2.89$&$100$&$1.375$&$1.291$&$15.193$\\ $1$&$28.87$&$100$&$1.375$&$14.421$&$1.476$\\ $0.01$&$1.340$&$10$&$0.638$&$0.600$&$11.101$\\ $0.1$&$13.40$&$10$&$0.638$&$6.694$&$0.103$\\ $1$&$134.0$&$10$&$0.638$&$67.006$&$0.002$\\ $10$&$1340$&$10$&$0.638$&$670.07$&$0.001$\\ $0.01$&$6.220$&$1$&$0.296$&$3.107$&$0.104$\\ $0.1$&$62.20$&$1$&$0.296$&$31.101$&$0.001$\\ $1$&$622.0$&$1$&$0.296$&$311.022$&$0.000$\\ \\[-10pt] \hline \end{tabular} \caption[Minimal Distance]{Minimal distance $\Delta E$ of the two uppermost surfaces of the $n=30$ manifold. $\Delta$ denotes the discrepancy between $\Delta E$ and the approximate predicted value for the distance, $\frac{\zeta}{2}$, according to Eq.~(\ref{eq:mindist}).} \label{t:mindist} \end{table} To check the range of validity of our approximation, the minimal energetic distance between the two uppermost adiabatic surfaces in the $n=30$ manifold has been calculated for different parameters by subtracting from each other the full 2D surfaces obtained with the electronic Hamiltonian (\ref{eq:BOHe}). One finds the minimal distance to be located at the origin, as expected. $\Delta$ in Tab.~\ref{t:mindist} denotes the relative deviation between the predicted (Eq.~(\ref{eq:mindist})) and the computed value in percent. It is small for large Ioffe field strengths $B$ and low field gradients $G$. In this case~$\zeta \gg \gamma^{{1}/{3}}n^2$ holds, the last term in the electronic Hamiltonian is negligible, and the approximation leading to~(\ref{eq:mindist}) is justified. \subsection{High Gradients} \label{ch:highgradients} A more complicated picture of the surface properties arises when the field gradients become larger.
The last term in the electronic Hamiltonian, that accounts for finite size effects of the atom, \begin{equation} \label{eq:finitesize} \gamma^{\frac{1}{3}} (xyp_z+x S_x-y S_y) \; , \end{equation} is no longer small compared to the others in equation (\ref{eq:BOHe}). This results in modulations of the adiabatic surfaces we already spotted in the previous section, even though the term does not feature any dependency on $X$ and $Y$. These modulations lift the degeneracy that was found in the limit of small gradients. Their dependency on the c.m.~coordinates is introduced by the transformation $\mathcal U(X,Y)$ that diagonalizes the electronic problem (cf.~Sec.~\ref{s:aa}). \begin{figure}[tbp] \centering \includegraphics[width=7.5cm]{fig3sectionn30smallgamma.eps} \caption[Sections for increasing field gradient, $n=30$.]{Sections ($Y=0$) through the adiabatic potential energy surfaces belonging to the $n=30$ manifold of $^{87}$Rb for decreasing ratios $B/(G n^2)=\zeta/(\gamma^{{1}/{3}} n^2)$. The influence of the Zeeman term in $\mathcal H_e$ (\ref{eq:BOHe}) is fixed ($\zeta=5$) while $\gamma^{{1}/{3}} n^2$ increases. (a) $B/(G n^2)=10 \leftrightarrow B=22.9$ mG, $G=4.81$ T/m; (b) $B/(G n^2)=5 \leftrightarrow B=91.6$ mG, $G=38.5$ T/m; (c) $B/(G n^2)=1 \leftrightarrow B=2.29$ G, $ G=4807$ T/m; (d) draws the indicated region in (c) to a larger scale.} \label{f:zeta1_gammas} \end{figure} In order to isolate the effect of the term (\ref{eq:finitesize}) on the adiabatic surfaces, we vary the scaling factor~$\gamma=GM$ by changing the field gradient~$G$, while keeping~$\zeta=BM^{{1}/{3}}G^{-{2}/{3}}$ constant. It is, for example, reasonable to demand $\zeta=5$ and to adjust the Ioffe field strength~$B$ to meet this condition. Fig.~\ref{f:zeta1_gammas} demonstrates the increasing influence of the interaction (\ref{eq:finitesize}) when $G$ is increased. The spectra are computed for the $n=30$ manifold of $^{87}$Rb, $\zeta=5$, while $G$ is varied from $4.8$ to $4800$~T/m. 
For small field gradients ((a), $B/(G n^2)=10$), the surfaces approach the shapes predicted in the limit addressed in the previous subsection (\ref{ch:regulatingcapacity}): The adiabatic surfaces with the same value of the magnetic moment $(m_l+2m_s)/2$ are approximately degenerate. The uppermost energy is the only non-degenerate one and to the corresponding eigenstate the quantum numbers $m_l=n-1$ and $m_s={1}/{2}$ can be assigned. An increasing field gradient lifts the degeneracy and groups of curves can be observed ((b), $B/(G n^2)=5$). The energetic distance between these groups stays tunable by the bias field strength, as we elucidated above (see Eq.~(\ref{eq:mindist})). For even higher field gradients, the different parts of the electronic Hamiltonian are of comparable size and finite size effects substantially alter the shape of the energy surfaces ((c), (d), $B/(G n^2)=1$). Avoided level crossings appear and non-adiabatic transitions are likely to occur. The uppermost energy surface, however, proves to be very robust when the field gradient is varied. It is energetically well-isolated from the other adiabatic surfaces. Its distance to the surface, that is formed by the second highest eigenvalue, only decreases significantly when the ratio $B/(G n^2)$ approaches one ((c), (d)). This holds true for the entire X-Y-plane. Inspecting the full uppermost surface one furthermore finds the azimuthal symmetry, that is found for large ratios $B/(G n^2)$ (see Sect.~\ref{ch:regulatingcapacity}), to be approximately conserved. \begin{figure}[bt!] \centering \includegraphics[width=8.5cm]{section001G20T.eps} \caption[Section $0.01$ G $20$ T/m, $n=30$.]{ Section through the $n=30$ manifold for a field strength of $0.01$ Gauss and a field gradient of $20$ T/m ($^{87}$Rb). A large number of avoided crossings can be observed. The uppermost curve, however, stays isolated from the other curves. The insets show the linear behavior of the surfaces far away from the $z$-axis. 
} \label{f:001G20T} \end{figure} Another example of the complicated structure of the adiabatic electronic energy surfaces is shown in Fig.~\ref{f:001G20T}. The data are calculated for a Ioffe field strength of $0.01$~G and a field gradient of $20$~T/m. For these parameters, the contributions of all terms in the electronic Hamiltonian are of the same order of magnitude around $X=1$. One immediately notices the large number of avoided crossings between the surfaces. The uppermost curve, however, remains isolated from the rest of the curves. Far away from the trap center, i.e.~for large $\rho=\sqrt{X^2+Y^2}$, the coupling term in (\ref{eq:BOHe}), $X(\frac{1}{2}L_x+S_x) - Y(\frac{1}{2}L_y+S_y)$, becomes dominant. A Zeeman-like splitting of the surfaces emerges, visible in the smaller graphs on the right. \subsection{Electronic Wave Functions} To characterize the electronic wave function $\varphi_\kappa(\bm r;\bm R)$, which corresponds to the energy eigenvalues constituting the uppermost adiabatic surface, we analyze its radial extension, angular momentum and spin. \begin{figure}[tbhp] \centering \includegraphics[width=8.5cm]{fig5praprint1rall.eps} \caption[Expectation value $\langle r \rangle_\varphi$]{ Expectation value $\langle r\rangle_\varphi$ of the wave functions that correspond to the uppermost electronic energy surface for $G=100$~T/m ($n=30$, $^{87}$Rb). $B$ is varied yielding different values for the ratio $\zeta/\gamma^{{1}/{3}}n^2=B/Gn^2$: (a) $0.01$~G $\rightarrow B/Gn^2=0.21 $~a.u., (b) $0.1$~G $\rightarrow B/Gn^2=2.1 $, (c) $1$~G $\rightarrow B/Gn^2=21 $. The depicted ranges of~$X$ and~$Y$ correspond to 30 characteristic lengths of the c.m.~motion in scaled units.} \label{f:rexpall} \end{figure} The electronic wave function depends parametrically on the c.m.~position and is, in general, distorted compared to the field free case by the external magnetic field.
This is reflected in the expectation value $\langle r \rangle_e (\bm R) = \langle \varphi_\kappa(\bm r;\bm R)| r |\varphi_\kappa(\bm r;\bm R)\rangle$, which is shown in Fig.~\ref{f:rexpall} for different ratios $B/(G n^2)$. The limits of the graphs with respect to $X$ and~$Y$ correspond to thirty characteristic lengths of the c.m.~motion. While $G=100$~T/m is kept fixed, $B$ increases from left to right across the plots. For the smallest ratio under consideration ((a), $B/Gn^2<1$), a pronounced maximum of the expectation value $\langle r \rangle_e$ can be observed at the trap center. This maximum breaks up into four maxima arranged along the diagonals when the ratio is increased ((b), $B/Gn^2>1$), while the amplitude of the spatial variation of $\langle r \rangle_e$ decreases. For an even higher value of $B$ ((c), $B/Gn^2\gg 1$), only a marginal deviation from the hydrogenic field free value for the highest possible angular momentum quantum number remains (for $n=30$ one finds \hbox{$\langle r \rangle_H(n=30,l=29)=915$}). In the region of local homogeneity, where the magnetic field does not vary significantly over the extension of the electronic cloud (i.e.~far from the $z$-axis), the expectation value approaches the field free value in all subfigures shown in Fig.~\ref{f:rexpall}. In accordance with the aforementioned scaling property of the electronic Hamiltonian~$\mathcal H_e$, changing the field parameters while keeping the ratio $B/Gn^2$ unaltered only modifies the scale of the c.m.~coordinates, whereas the shape of the bright regions and the energy range of the eigenvalues remain unchanged. \begin{figure}[tbhp] \centering \includegraphics[width=8.5cm]{30_0.1G_100T_30xoLi_proj_Lsquare.eps} \caption[]{ Expectation values $\langle L_x\rangle$, $\langle L_y\rangle$ and $\langle L_z\rangle$ (a,b,c, respectively) for a ratio $B/(G n^2)=2.1$ (Ioffe field $B=0.1$ G, gradient $G=100$ T/m, $^{87}$Rb, $n=30$).
In (d) the projection $\Pi$ of $\langle \bm {L_r}\rangle$ onto the local magnetic field direction $\bm G$ is displayed. It is close to the field free maximum value for the angular momentum projection, $m_{l,max}=n-1$. Subplot (e) shows the spatial behavior of $\langle \bm L^2\rangle$. The range of $X$ and $Y$ corresponds to 30 times the characteristic length of the c.m.\ motion. } \label{f:Lrsmallratio} \end{figure} Let us study the angular momentum and its orientation. It is to be expected that for a dominating Ioffe field, i.e.~for very large ratios $B/(G n^2)$, the expectation value of the angular momentum, $\langle\bm L_{\bm r}\rangle = (\langle L_{x}\rangle,\langle L_{y}\rangle,\langle L_{z}\rangle)$, is oriented in the Ioffe-field direction ($z$-axis). Since the Ioffe field in any case dominates around the origin, $\langle L_x \rangle$ and $\langle L_y \rangle$ are expected to vanish at $(X,Y)=(0,0)$ while $\langle L_z \rangle$ becomes maximal. This behavior can be observed in Fig.~\ref{f:Lrsmallratio}, where the $\langle L_i\rangle$ are displayed (a,b,c) for $B=0.1$ G and $G=100$ T/m. These parameters yield $B/(G n^2)=2.1$. The alignment of $\langle\bm L_{\bm r}\rangle$ and the local field direction~$\bm{G}(X,Y)$ is found to be very good in the entire $X$-$Y$-plane (the maximum angle between the two is smaller than $3.6^\circ$). In subplot (d) we provide the spatial behavior of the projection of $\langle\bm L_{\bm r}\rangle$ onto this local field axis, $\Pi = \langle\bm L_{\bm r}\rangle\cdot {\bm{G}(\bm R)}/{|\bm{G}(\bm R)|}$. In the local homogeneity limit, $\Pi$ approaches the maximal value for $\langle L_z\rangle$, namely $m_{l,max}=n-1$. In the same manner, the expectation value $\langle \bm L^2\rangle$, which is displayed in subplot (e), converges to the maximal value, $l_{max}(l_{max}+1)=n(n-1)$. Far from the $z$-axis, the uppermost surface hence corresponds to the circular state $|m_{l,max},l_{max}\rangle$.
The deviations of $\Pi$ and $\langle \bm L^2\rangle$ from the maximal values close to the $z$-axis reflect the admixture of states with lower quantum numbers $m_l$ and $l$ to the state of the uppermost surface. \begin{figure}[tbhp] \centering \includegraphics[width=5.5cm]{30_1Gonlyprojection.eps} \caption[]{ Spatial dependence of the projection $\Pi$ of $\langle \bm {L_r}\rangle$ onto the local field axis for $B=1$ G (all other parameters are the same as in Fig.~\ref{f:Lrsmallratio}). For this ratio, $B/(G n^2)=21$, the deviations from the maximal value $m_{l,max}=n-1$ are marginal. (Equally, $\langle \bm {L}^2\rangle\approx l_{max}(l_{max}+1)$, not shown.) } \label{f:Lrbigratio} \end{figure} Increasing the applied Ioffe field by a factor of $10$ ($\rightarrow B/(G n^2)=21$) decreases the angle between $\langle\bm L_{\bm r}\rangle$ and $\bm{G}(X,Y)$ by a factor of $10^2$, i.e.\ a quasi-perfect alignment is found. As can be seen in Fig.~\ref{f:Lrbigratio}, the projection $\Pi$ now deviates only marginally from $m_{l,max}$. Consequently, $\langle \bm L^2\rangle$ also exhibits only minor deviations from its maximum value in the whole $X$-$Y$-plane. For high ratios $B/(G n^2)$, the admixture is therefore marginal and one can, to a very good approximation, assume the electronic state in the uppermost surface to be the circular state $|m_{l,max},l_{max}\rangle$ for any c.m.\ position. Similar observations can be made for the respective spin expectation values. For the parameters in Fig.~\ref{f:Lrbigratio}, the projection of $\langle\bm S\rangle$ onto $\bm G$ differs by less than $10^{-4}$ from $1/2$. The expectation values of the examined electronic observables converge to the field free values for increasing ratios $B/(G n^2)$. Our findings indicate that the electronic structure of the atom is barely changed in the limit of large ratios $B/(G n^2)$. The radiative lifetimes can hence be expected to differ only slightly from the field free ones~\cite{igor:pra72}.
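The hydrogenic reference values used in this section, e.g.\ $\langle r \rangle_H(n{=}30,l{=}29)=915$, follow from the standard field free expectation values $\langle r\rangle = \frac{1}{2}\left[3n^2-l(l+1)\right]$ and $\langle r^2\rangle = \frac{n^2}{2}\left[5n^2+1-3l(l+1)\right]$ (in atomic units); for the circular state $l=n-1$ the variance reduces to $n^2(2n+1)/4$. A minimal check of these numbers (ours, for illustration only):

```python
def r_expect(n, l):
    """Hydrogenic <r> in atomic units (Bohr radii)."""
    return 0.5 * (3 * n**2 - l * (l + 1))

def r2_expect(n, l):
    """Hydrogenic <r^2> in atomic units."""
    return 0.5 * n**2 * (5 * n**2 + 1 - 3 * l * (l + 1))

n, l = 30, 29                       # circular state |m_l,max, l_max>
r_mean = r_expect(n, l)             # 915, the value quoted in the text
variance = r2_expect(n, l) - r_mean**2
# for l = n-1 the variance reduces to n^2 (2n+1)/4
assert variance == n**2 * (2 * n + 1) / 4
print(r_mean, variance)             # -> 915.0 13725.0
```

The same two formulas reproduce the near-constancy of $\langle r\rangle$ for the circular state noted later in the comparison with the c.m.\ extension.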
\section{\label{s:cm}Quantized center of mass motion} The energetically uppermost adiabatic electronic energy surface is the most appropriate to achieve confinement. It does not suffer a significant deformation when the field gradient is increased, and it stays well isolated from the lower surfaces for a wide range of parameters. Large energetic distances to adjacent surfaces suppress nonadiabatic couplings (Eqs.~(\ref{eq:neglectdeltaT1}) and (\ref{eq:neglectdeltaT2})). In order to obtain the quantized c.m.~states, we therefore solve the Schr\"odinger equation~(\ref{eq:BOcm}) for the c.m.~motion in the uppermost surface $E_{2n^2}$ by discretizing the Hamiltonian on a grid. The wave function of the fully quantized state is hence composed of the eigenfunction $|\varphi_\kappa(\bm r;\bm R)\rangle$ of the electronic Hamiltonian in equation (\ref{eq:internal}), the wave function for the center of mass motion in the $X$-$Y$ plane, $|\psi_\nu(\bm R)\rangle$, and the plane wave in the $Z$ direction, \begin{equation} |\Psi(\bm r,\bm R)\rangle = |\varphi_\kappa(\bm r;\bm R)\rangle \otimes |\psi_\nu(\bm R)\rangle \otimes | k_Z\rangle \; . \end{equation} \begin{figure}[tbhp] \centering \includegraphics[width=8.5cm]{cmnebeneinander.eps} \caption[Probability densities of c.m.~states]{ Probability densities of the ground state and the first and tenth excited states of the c.m.~motion in the uppermost adiabatic potential surface of the $n=30$ manifold of~$^{87}$Rb (from left to right). The Ioffe field strength is set to $B=0.1$ G and the field gradient is $G=10$ T/m.} \label{f:cm} \end{figure} In Fig.~\ref{f:cm} the probability densities of the ground state and two excited states of the c.m.~motion in the uppermost surface of the $n=30$ manifold of $^{87}$Rb are displayed. These densities reflect the spatial symmetries of the electronic Hamiltonian $\mathcal H_e$ (\ref{eq:BOHe}) and consequently those of the electronic energy surface.
They are computed for a Ioffe field strength $B=0.1$~G and a field gradient of $G=10$~T/m, which yields $\zeta=13.4$ and $\zeta/ \gamma^{\frac{1}{3}}n^2= B/Gn^2= 21$. According to the discussion in Sec.~\ref{ch:regulatingcapacity}, the electronic surface then exhibits a harmonic behavior around the origin, and the system resembles the two-dimensional isotropic harmonic oscillator in the potential $E_h(X,Y) = (\zeta + \rho^2/2\zeta)\cdot {n}/{4}$ (cf.~Eq.~(\ref{eq:Eksmallrho}), $m_l=n-1$). The first two probability densities (from left to right) in Fig.~\ref{f:cm} explicitly demonstrate the analogy to the harmonic oscillator. The nodal structure of the tenth excited state is not due to a Cartesian product of 1D harmonic oscillators but to a different combination of the harmonic oscillators in the corresponding degenerate subspace. The energies of the c.m.~wave functions in the approximate potential $E_h(X,Y)$ read \begin{equation} \epsilon_{h,\nu}= (N_1+N_2+1) \ \omega \; , \quad N_1,N_2=0,1,2\dots \; , \end{equation} where $\omega^2 = {n}/{2\zeta}$; they are in very good agreement with the exact results in the regime where the contribution (\ref{eq:finitesize}) to the electronic Hamiltonian is negligible. Within this approximation, the energy level spacing scales with the inverse square root of $\zeta$, $\Delta \epsilon_{h,\nu} = \omega \sim {1}/{\sqrt{\zeta}}$, whereas the energetic distance of adjacent surfaces scales linearly with~$\zeta$, see Eq.~(\ref{eq:mindist}). \begin{figure}[bth] \centering \includegraphics[width=7.5cm]{praprintrho.eps} \caption[Expectation value $\langle \rho \rangle$.]{ Double logarithmic plot of the expectation value $\langle \rho \rangle$ for the c.m.~ground state (circles,~$\circ$) in the uppermost adiabatic energy surface ($n=30$, $^{87}$Rb). The corresponding expectation values for the c.m.~wave function in a perfectly harmonic potential are depicted for comparison~({\tiny{+}}).
} \label{f:rho} \end{figure} To describe the properties of the compound quantized state, we analyze the extension of the center of mass motion, which can be measured by the expectation value \begin{equation} \langle\rho\rangle = \langle\psi_\nu(\bm R)| \ \sqrt{X^2+Y^2} \ |\psi_\nu(\bm R)\rangle \; , \end{equation} and the mean distance of the core and the electron, $\langle r\rangle$. Fig.~\ref{f:rho} presents the radial expectation value $\langle\rho\rangle$ in Bohr radii for the c.m.~ground state in the uppermost energy surface for different parameter sets of the magnetic field. For comparison, the expectation value of the c.m.~state in a perfectly harmonic potential, $\langle\rho\rangle_h = \frac{\sqrt{\pi}}{2} x_0 \sim \zeta^{{1}/{4}}$, is also depicted. The characteristic length of the c.m.~motion is $x_0 = 1/\sqrt{\omega} = \sqrt[4]{{2\zeta}/{n}}$. (Due to the rescaling of the c.m.~coordinates with $\gamma^{-{1}/{3}}$ in Sec.~\ref{ch:ip}, this is of the order of $1$ for a wide range of parameter sets \{$B$, $G$, $n$\} in scaled atomic units (cf.~Tab.~\ref{t:parameters}).) The expectation values for the real system, $\langle\rho\rangle$, deviate from the straight line formed by $\langle\rho\rangle_h$ as the ratio $B/G$ becomes very small. Hence, by choosing large gradients and appropriate bias fields, very tightly confining traps for highly excited atoms can be obtained ($B=0.1$~G and $G=100$~T/m, for instance, give rise to a trap frequency of approximately $1.4$~MHz). The mean distance of the Rydberg electron from the core, $\langle r \rangle$, is calculated by weighting that quantity for a fixed c.m.~position, $\langle r \rangle_e (X,Y)$, with the probability density of the c.m.~wave function: \begin{equation} \langle r \rangle = \langle\psi_\nu(\bm R)| \ \langle\varphi_\kappa(\bm r;\bm R)| \ r \ |\varphi_\kappa(\bm r;\bm R)\rangle \ |\psi_\nu(\bm R)\rangle \; .
\end{equation} It is depicted in Fig.~\ref{f:r_rho}, along with $\langle\rho\rangle$, versus the degree of excitation of the c.m.~motion~$\nu$. $\langle\rho\rangle$ and $\langle r \rangle$ are of comparable size due to the very tight confinement. For a Ioffe field strength of $B=0.1$ G and a field gradient of $G=100$ T/m, for instance, the ratio of $\langle \rho \rangle$ and~$\langle r \rangle$ for the ground state ($\nu=1$) is as small as ${\langle \rho \rangle}/{\langle r \rangle} = 0.4$. The extension of the c.m.~wave function is thus smaller than the extension of the electronic cloud. This strongly supports the proposition that our Rydberg atoms cannot be considered as point-like particles. \begin{figure}[tbhp] \centering \includegraphics[width=8.5cm]{praprintexp_r_rho_all.eps} \caption[Expectation values $\langle r \rangle$ and $\langle \rho \rangle$.]{ Comparison of the mean extension of the c.m.~wave function, $\langle\rho\rangle$, and the mean distance of the core and the electron, $\langle r \rangle$, for the $n=30$ manifold of $^{87}$Rb. } \label{f:r_rho} \end{figure} The expectation value $\langle r \rangle$ for the electron remains nearly constant as the degree of excitation increases, and it barely differs from the corresponding field free value (dashed line in Fig.~\ref{f:r_rho}). As indicated previously, we find the electron to be in the circular state with $m_l=n-1$, which features the smallest mean square deviation of the nucleus-electron separation, $\langle r^2\rangle - \langle r\rangle^2 = n^2(2n+1)/4$. It is therefore possible that the c.m.~and the electronic wave function do not even overlap. This is indicated in the inset of the upper right plot in Fig.~\ref{f:r_rho} for $\nu=1$. \section{\label{s:c}Conclusion} We have studied the quantum properties of ultracold Rydberg atoms in a Ioffe-Pritchard field configuration and find trapped c.m.~quantum states to be readily achievable. Our starting point is a two-body approach to the Rydberg atom.
Relativistic effects and deviations of the core potential from the Coulomb potential, as well as diamagnetic interactions, have not been taken into account, which is well justified a posteriori. Applying a spatially dependent unitary transformation and additionally exploiting the large mass difference of the electron and the core, we arrived at a two-particle Hamiltonian for highly excited atoms in an inhomogeneous field in which the coupling of the relative and c.m.~dynamics is substantially simplified. We then concentrated on the special case of a Ioffe-Pritchard trap. A symmetry analysis of the resulting Hamiltonian has been performed, revealing seven discrete unitary and anti-unitary symmetries. Comparing the energetic contributions of the different interactions, we find it legitimate to limit our considerations to a single $n$-manifold to solve the corresponding stationary Schr\"odinger equation. Consequently, an adiabatic approach was applied. In the ultracold regime the Rydberg electron is much faster than the c.m.\ motion of the atom. This justifies an adiabatic separation of the internal (relative) and the external (c.m.) dynamics. The corresponding adiabatic electronic potential surfaces have been obtained by diagonalizing the electronic Hamiltonian matrix. In the limit of large ratios of Ioffe field strength and field gradient, $B/(G n^2)$, an approximate analytical expression for the adiabatic surfaces has been provided. In this limit, the surfaces are arranged equidistantly and all but the uppermost surface are degenerate. The inter-surface distance is then proportional to the Ioffe field strength. The structure of the electronic surfaces becomes more complex when this ratio decreases. The shape of the uppermost surface and its energetic separation from the others, however, prove to be very robust with respect to changes of the field parameters. We hence consider it the most appropriate to achieve confinement.
Exploring the properties of the electronic wave functions, we find that the expectation values approach the field free values when the ratio of the field strength and the field gradient,~$B/(G n^2)$, is increased. This indicates that, despite the strong localization of the c.m., the electronic structure of the atom is barely changed compared to the field free case. Examining the compound quantized states, we have found a regime where the extension of the c.m.~wave function falls below the extension of the electronic cloud, i.e.~the c.m.~is more strongly localized than the valence electron. In this regime, Rydberg atoms in inhomogeneous magnetic fields can therefore not be considered as point-like particles. We conclude that the Ioffe-Pritchard trap provides a strong confinement for Rydberg atoms in two dimensions that permits their trapping on a microscopic scale. For such a one-dimensional guide, a relatively weak longitudinal confinement along the $z$-axis could additionally be provided, in a non-Helmholtz configuration, by the quadratic term. As a natural extension of the system, one could study many atoms in that guide. Challenging issues are to stabilize such a one-dimensional Rydberg gas or to answer the question whether it is feasible to use the strong Rydberg-Rydberg interaction to create a chain of trapped atoms \cite{mayle:1d} that could then serve as a tool for quantum information processing~\cite{lukin,sorensen,jaksch} making use of the state-dependent atom-atom interaction. \section{Acknowledgment} Financial support by the Deutsche Forschungsgemeinschaft is gratefully acknowledged.
\section{Introduction} A simple generic optical sensor may be envisaged as a porous material (labelled $a$, say) which is infiltrated by a fluid (labelled $b$, say). An agent to be sensed is contained within the fluid. It is assumed that the fluid and the agent to be sensed have quite different optical properties. Thus, the concentration of the agent within the fluid may be gauged from the optical properties of the infiltrated porous material \c{Stefano,Pinet,Sipe_OE}. The optical properties used to detect the presence of the agent may be the reflectances or transmittances of the infiltrated porous material. Alternatively, if one surface of the porous material were coated with a thin metallic film, measurements could be based on the excitation of surface-plasmon-polariton (SPP) waves at the interface of the porous material and the metal film \c{Homola2003,AZL2,Scarano}. For example, sculptured thin films (STFs) represent rather promising porous materials for such optical sensors \c{LMBR,Lgas,L_Optik}. These constitute parallel arrays of nanowires which are grown on substrates by physical vapour deposition \c{STF_Book,Messier}. By controlled manipulation of the substrate during the deposition process, a range of nanowire shapes can be achieved. Thereby, the multiscale porosity of such STFs can be tailored to order, to a considerable degree. Additionally, since STFs can be fabricated from a wide range of organic and inorganic materials, a wide range of optical properties for the porous material can be delivered \c{Polo,Horn}. Chiral STFs are especially interesting for optical sensing applications, as these support the circular Bragg phenomenon, courtesy of the helical nature of their nanowires \c{STF_Book}; furthermore, they also support more than one mode of SPP wave \c{Polo_PRSA,DPL,PML_JOSAB}, which may be usefully exploited for sensing \c{ML_IEEE_SJ}.
In the design of such an optical sensor, what values should one choose for the optical properties of the porous material and infiltrating fluid, in order to maximize sensitivity? What value should one choose for the porosity, and what shape should one choose for the pores, in order to maximize sensitivity? These are the questions that we address here. We do so by considering the simplest scenario wherein the porous material and infiltrating fluid are both made from lossless, homogeneous, isotropic dielectric materials, characterized by relative permittivities $\epsilon^a$ and $\epsilon^b$, respectively.\footnote{The entire analysis presented herein may also be applied to lossless, homogeneous, isotropic magnetic materials, by replacing relative permittivities by the corresponding relative permeabilities throughout.} The infiltrated porous material is regarded as a homogeneous composite material (HCM), which is a reasonable approximation provided that the linear dimensions of the pores are much smaller than the wavelength(s) involved. Thus, in the case of optical sensors operating at visible wavelengths, we have in mind pore linear dimensions $\lessapprox 38$ nm for the smallest values of $\epsilon^{a,b}$ considered and $\lessapprox 10$ nm for the largest values of $\epsilon^{a,b}$ considered. The infiltrated porous material may be either isotropic or anisotropic depending upon the shape of the pores. We use the well-established Bruggeman homogenization formalism to estimate the relative permittivity dyadic of the infiltrated porous material, namely $\=\epsilon^{Br}$ \c{Ward,M_JNP}. The Bruggeman formalism has recently been implemented to study the prospects of infiltrated STFs as optical sensors \c{ML_inverse_homog}, based on both changes in reflectance/transmittance \c{ML_IEEEPJ} and SPP wave excitation \c{ML_IEEE_SJ,ML_PNFA}. 
Regardless of whether the sensor is based on changes in reflectance/transmittance or the excitation of SPP waves, the sensitivity of the sensor depends crucially on how much the optical properties of the infiltrated porous material change in response to changes in the optical properties of the infiltrating fluid. Thus, the derivative $d \=\epsilon^{Br} / d \epsilon^b$ is a key indicator of sensitivity. In the following we explore how this derivative varies as a function of the porosity, the pore shape and the relative permittivities of the infiltrating fluid and the porous material. \section{Homogenization theory} Within our homogenization framework, the pores are all assumed to have the same shape, which is spheroidal in general. These spheroidal pores are randomly distributed but identically oriented. The surface of each spheroid relative to its centroid is prescribed by the vector \c{M_JNP} \begin{equation} \#r_{\,s} (\theta, \phi) = \eta \, \=U \. \hat{\#r} (\theta, \phi), \end{equation} with $ \hat{\#r} $ being the radial unit vector originating from the spheroid's centroid, specified by the spherical polar coordinates $\theta$ and $\phi$. The linear dimensions of the spheroid, as determined by the parameter $\eta$, are assumed to be small relative to the electromagnetic wavelength(s). The spheroidal shape is captured by the dyadic \begin{equation} \l{Ushape} \=U = U_\perp \=I + \left( U_\parallel - U_\perp \right) \, \hat{\#c} \, \hat{\#c}\,, \end{equation} where $\=I$ is the identity 3$\times$3 dyadic and the unit vector $\hat{\#c}$ is parallel to the spheroid's axis of rotational symmetry. The linear dimension parallel to $\hat{\#c}$, relative to the equatorial radius of the spheroid, is provided by the shape parameter $\rho = U_\parallel / U_\perp$. A schematic illustration of such a spheroidal pore is provided in Fig.~\ref{fig1}. 
The form of the relative permittivity dyadic $\=\epsilon^{Br} $, as estimated using the Bruggeman homogenization formalism, mirrors that of the shape dyadic $\=U$. That is, it has the uniaxial form \begin{equation} \=\epsilon^{Br} = \epsilon^{Br}_\perp \=I + \left( \epsilon^{Br}_\parallel - \epsilon^{Br}_\perp \right)\, \hat{\#c} \, \hat{\#c}. \l{eps_Br} \end{equation} It emerges as the solution of the dyadic Bruggeman equation \c{WLM} \begin{equation} f_a \, \=\alpha^{a} + f_b \, \=\alpha^{b} = \=0\,, \l{Br} \end{equation} where $\=0$ is the null 3$\times$3 dyadic. The scalars $f_a$ and $f_b = 1 - f_a$ denote the respective volume fractions of the porous material and infiltrating fluid. Thus, $f_b$ represents the porosity of the optical sensor. The dyadics \begin{equation} \=\alpha^{\ell } = \left( \epsilon^\ell \=I - \=\epsilon^{Br} \right) \.\left[ \, \=I + \=D \. \left( \epsilon^\ell \=I - \=\epsilon^{Br} \right) \,\right]^{-1}, \qquad (\ell = a,b), \l{polar} \end{equation} are the polarizability density dyadics of the spheroids in the HCM, while the depolarization dyadic $\=D$ in eqn.~\r{polar} is given by the double integral \c{M97,MW97} \begin{equation} \=D = \frac{1}{ 4 \pi} \, \int^{2 \pi}_0 \, d \phi \, \int^\pi_0 \, d \theta \, \sin \theta \, \left( \frac{1}{ \hat{\#r}\.\=U^{-1}\.\=\epsilon^{Br}\.\=U^{-1}\.\hat{\#r}} \right) \=U^{-1}\.\hat{\#r} \, \hat{\#r} \. \=U^{-1}\,. 
\l{depol} \end{equation} It may be expressed in the uniaxial form \begin{equation} \=D = D_\perp\, \=I + \left( D_\parallel - D_\perp \right) \hat{\#c} \,\hat{\#c}\,, \end{equation} with components \begin{eqnarray} D_\parallel &=& \frac{\gamma}{ \epsilon^{Br}_\parallel } \, \Gamma_\parallel ( \gamma ), \l{Dx}\\ D_\perp&=& \frac{1}{ \epsilon^{Br}_\perp} \, \Gamma_\perp (\gamma), \l{D} \end{eqnarray} wherein the terms \begin{eqnarray} \Gamma_\parallel (\gamma) &=& \frac{1}{4 \pi}\, \int^{2 \pi}_0 \, d \phi \, \int^\pi_0 \, d \theta \, \frac{\cos^2 \phi \sin^3 \theta}{\cos^2 \theta + \sin^2 \theta \left( \gamma \cos^2 \phi + \sin^2 \phi \right)}, \l{dx}\\ \Gamma_\perp(\gamma) &=& \frac{1}{4 \pi}\, \int^{2 \pi}_0 \, d \phi \, \int^\pi_0 \, d \theta \, \frac{\sin^2 \phi \sin^3 \theta}{\cos^2 \theta + \sin^2 \theta \left( \gamma \cos^2 \phi + \sin^2 \phi \right)} \l{d} \end{eqnarray} are functions of the scalar parameter \begin{equation} \gamma = \frac{U^2_\perp \epsilon^{Br}_\parallel}{U^2_\parallel \epsilon^{Br}_\perp}. 
\end{equation} The double integrals on the right sides of eqns.~\r{dx} and \r{d} may be evaluated as \begin{eqnarray} \Gamma_\parallel (\gamma )&=& \left\{ \begin{array}{lcr} \displaystyle{ \frac{\sinh^{-1} \sqrt{\frac{1 -\gamma}{\gamma} }}{\left( 1 - \gamma \right)^{\frac{3}{2}}} - \frac{1}{1-\gamma }} && \hspace{14mm} \mbox{for} \;\; 0 < \gamma < 1 \\ & & \\ \displaystyle{ \frac{1}{\gamma - 1} - \frac{\sec^{-1} \sqrt{\gamma} } {\left( \gamma - 1 \right)^{\frac{3}{2}}}}& & \mbox{for} \;\; \gamma > 1 \end{array} \right., \\ \Gamma_\perp( \gamma )&=& \left\{ \begin{array}{lcr} \displaystyle{ \frac{1}{2} \left( \frac{1}{1-\gamma }- \frac{ \gamma \sinh^{-1} \sqrt{\frac{1 -\gamma}{\gamma} }}{\left( 1 - \gamma \right)^{\frac{3}{2}}} \right) } && \mbox{for} \;\; 0 < \gamma < 1 \\ & & \\ \displaystyle{\frac{1}{2} \left( \frac{\gamma \sec^{-1} \sqrt{\gamma} } {\left( \gamma - 1 \right)^{\frac{3}{2}}} - \frac{1}{\gamma - 1} \right) }& & \mbox{for} \;\; \gamma > 1 \end{array} \right.. \end{eqnarray} Notice that the anomalous case $\gamma < 0$, which represents a hyperbolic HCM \c{MLD2005}, is excluded from our consideration. The dyadic Bruggeman equation \r{Br} yields the two nonlinear scalar equations \begin{eqnarray} && \frac{\epsilon^a - \epsilon^{Br}_\parallel }{1 + D_\parallel \left( \epsilon^a - \epsilon^{Br}_\parallel \right)} f_a + \frac{\epsilon^b - \epsilon^{Br}_\parallel }{1 + D_\parallel \left( \epsilon^b - \epsilon^{Br}_\parallel \right)} f_b = 0 \,, \l{Br_1} \\ && \frac{\epsilon^a - \epsilon^{Br}_\perp}{1 + D_\perp\left( \epsilon^a - \epsilon^{Br}_\perp\right)} f_a + \frac{\epsilon^b - \epsilon^{Br}_\perp}{1 + D_\perp\left( \epsilon^b - \epsilon^{Br}_\perp\right)} f_b = 0\,, \l{Br_2} \end{eqnarray} which are coupled via $D_{\perp, \parallel}$. Using standard numerical techniques, this pair can be solved for $\epsilon^{Br}_\parallel$ and $\epsilon^{Br}_\perp$.
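One convenient way to solve this coupled pair numerically is a fixed-point iteration (our implementation choice; the text does not prescribe a particular technique). Holding $D_\parallel$ and $D_\perp$ fixed at the current iterate, each of eqns.~\r{Br_1} and \r{Br_2} collapses, using $f_a + f_b = 1$, to the quadratic $D\,\epsilon^2 - \left[1 + D(\epsilon^a + \epsilon^b)\right]\epsilon + \left[f_a \epsilon^a + f_b \epsilon^b + D\,\epsilon^a \epsilon^b\right] = 0$, whose physical root lies between $\epsilon^a$ and $\epsilon^b$. A self-contained sketch:

```python
import math

def gamma_par(g):
    """Closed-form Gamma_parallel(gamma); the g -> 1 limit is 1/3 (sphere)."""
    if abs(g - 1.0) < 1e-9:
        return 1.0 / 3.0
    if g < 1.0:
        s = math.asinh(math.sqrt((1.0 - g) / g))
        return s / (1.0 - g) ** 1.5 - 1.0 / (1.0 - g)
    s = math.acos(1.0 / math.sqrt(g))          # arcsec(sqrt(g))
    return 1.0 / (g - 1.0) - s / (g - 1.0) ** 1.5

def gamma_perp(g):
    """Closed-form Gamma_perp(gamma); note g*gamma_par(g) + 2*gamma_perp(g) = 1."""
    if abs(g - 1.0) < 1e-9:
        return 1.0 / 3.0
    if g < 1.0:
        s = math.asinh(math.sqrt((1.0 - g) / g))
        return 0.5 * (1.0 / (1.0 - g) - g * s / (1.0 - g) ** 1.5)
    s = math.acos(1.0 / math.sqrt(g))
    return 0.5 * (g * s / (g - 1.0) ** 1.5 - 1.0 / (g - 1.0))

def scalar_update(eps_a, eps_b, f_b, D):
    """Root of D e^2 - [1 + D(ea+eb)] e + [fa ea + fb eb + D ea eb] = 0
       lying between eps_a and eps_b."""
    f_a = 1.0 - f_b
    A, B = D, -(1.0 + D * (eps_a + eps_b))
    C = f_a * eps_a + f_b * eps_b + D * eps_a * eps_b
    if abs(A) < 1e-14:
        return -C / B
    disc = math.sqrt(B * B - 4.0 * A * C)
    roots = ((-B - disc) / (2.0 * A), (-B + disc) / (2.0 * A))
    lo, hi = min(eps_a, eps_b), max(eps_a, eps_b)
    inside = [r for r in roots if lo - 1e-7 <= r <= hi + 1e-7]
    return inside[0] if inside else min(roots, key=lambda r: abs(r - 0.5 * (lo + hi)))

def bruggeman(eps_a, eps_b, f_b, rho, tol=1e-12, itmax=500):
    """Fixed-point estimate of (eps_par, eps_perp); rho = U_par/U_perp."""
    e_par = e_perp = (1.0 - f_b) * eps_a + f_b * eps_b   # volume-average start
    for _ in range(itmax):
        g = e_par / (rho ** 2 * e_perp)                  # the parameter gamma
        D_par = g * gamma_par(g) / e_par
        D_perp = gamma_perp(g) / e_perp
        new_par = scalar_update(eps_a, eps_b, f_b, D_par)
        new_perp = scalar_update(eps_a, eps_b, f_b, D_perp)
        if abs(new_par - e_par) < tol and abs(new_perp - e_perp) < tol:
            return new_par, new_perp
        e_par, e_perp = new_par, new_perp
    return e_par, e_perp
```

For spherical pores ($\rho = 1$) the iteration reproduces the isotropic Bruggeman result, and at $f_b = 0$ or $f_b = 1$ it returns $\epsilon^a$ or $\epsilon^b$, respectively.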
Let us turn to the dyadic derivative which provides a measure of the sensitivity of the porous optical sensor under consideration, namely \begin{equation} \frac{d \=\epsilon^{Br}}{d \epsilon^b} = \frac{d \epsilon^{Br}_\perp}{d \epsilon^b}\, \=I + \left( \frac{d \epsilon^{Br}_\parallel}{d \epsilon^b} - \frac{d \epsilon^{Br}_\perp}{d \epsilon^b} \right)\, \hat{\#c} \, \hat{\#c} \,. \l{deps_Br} \end{equation} Before proceeding further, we observe that the corresponding derivatives of the depolarization dyadic components may be expressed as \begin{eqnarray} \frac{d D_\parallel}{d \epsilon^b } &=& \nu_{11} \frac{d \epsilon^{Br}_\parallel}{d \epsilon^b} + \nu_{12} \frac{d \epsilon^{Br}_\perp}{d \epsilon^b}\,,\\ \frac{d D_\perp}{d \epsilon^b} &=& \nu_{21} \frac{d \epsilon^{Br}_\parallel}{d \epsilon^b} + \nu_{22} \frac{d \epsilon^{Br}_\perp}{d \epsilon^b}\,, \end{eqnarray} with the scalars \begin{eqnarray} \nu_{11} &=& \frac{U^2_\perp}{U^2_\parallel \epsilon^{Br}_\parallel \epsilon^{Br}_\perp} \left( \Gamma_\parallel + \gamma \frac{d\Gamma_\parallel}{d \gamma} \right) - \frac{\gamma \Gamma_\parallel}{\left( \epsilon^{Br}_\parallel \right)^2} \,,\\ \nu_{12} &=& - \frac{U^2_\perp }{U^2_\parallel \left( \epsilon^{Br}_\perp\right)^2} \left( \Gamma_\parallel + \gamma \frac{d\Gamma_\parallel}{d \gamma} \right)\,,\\ \nu_{21} &=& \left( \frac{U^2_\perp}{U^2_\parallel \left( \epsilon^{Br}_\perp\right)^2} \right) \, \frac{d\Gamma_\perp}{d \gamma} \,,\\ \nu_{22} &=& - \left( \frac{U^2_\perp \epsilon^{Br}_\parallel}{U^2_\parallel \left( \epsilon^{Br}_\perp\right)^3} \right)\, \frac{d\Gamma_\perp}{d \gamma} - \frac{ \Gamma_\perp}{\left( \epsilon^{Br}_\perp\right)^2} \,, \end{eqnarray} and derivatives \begin{eqnarray} \frac{d\Gamma_\parallel}{d \gamma} &=& \left\{ \begin{array}{lcr} \displaystyle{ \frac{1}{2} \left( \frac{3 \sinh^{-1} \sqrt{\frac{1 -\gamma}{\gamma} }}{\left( 1 - \gamma \right)^{\frac{5}{2}}} - \frac{1 + 2 \gamma}{ \left( 1-\gamma \right)^2 \gamma } \right) } && 
\hspace{10mm} \mbox{for} \;\; 0 < \gamma < 1 \\ & & \\ \displaystyle{\frac{1}{2} \left( - \frac{1 + 2 \gamma}{\left( \gamma - 1 \right)^2 \gamma } + \frac{3 \sec^{-1} \sqrt{\gamma} } {\left( \gamma - 1 \right)^{\frac{5}{2}}} \right) }& & \mbox{for} \;\; \gamma > 1 \end{array} \right., \\ \frac{d \Gamma_\perp}{d \gamma} &=& \left\{ \begin{array}{lcr} \displaystyle{ \frac{1}{4} \left( \frac{3}{ \left( 1-\gamma \right)^2 } - \frac{\left( 2 + \gamma \right) \sinh^{-1} \sqrt{\frac{1 -\gamma}{\gamma} }}{\left( 1 - \gamma \right)^{\frac{5}{2}}} \right) } && \mbox{for} \;\; 0 < \gamma < 1 \\ & & \\ \displaystyle{\frac{1}{4} \left( - \frac{ \left( 2+ \gamma \right) \sec^{-1} \sqrt{\gamma} } {\left( \gamma - 1 \right)^{\frac{5}{2}}} + \frac{3}{\left( \gamma - 1 \right)^2} \right) }& & \mbox{for} \;\; \gamma > 1 \end{array} \right.. \end{eqnarray} Next we exploit the scalar Bruggeman equations \r{Br_1} and \r{Br_2}. Their derivatives with respect to $\epsilon^b$ may be written as \begin{eqnarray} && \beta_{11} \frac{d \epsilon^{Br}_\parallel}{d \epsilon^b} + \beta_{12} \frac{d \epsilon^{Br}_\perp}{d \epsilon^b} + \beta_{13} = 0\,,\\ && \beta_{21} \frac{d \epsilon^{Br}_\parallel}{d \epsilon^b} + \beta_{22} \frac{d \epsilon^{Br}_\perp}{d \epsilon^b} + \beta_{23} = 0 \,, \end{eqnarray} with \begin{eqnarray} \beta_{11} &=& \nu_{11} \left( \epsilon^a - \epsilon^{Br}_\parallel \right) \left( \epsilon^b - \epsilon^{Br}_\parallel \right) + D_\parallel \left( 2 \epsilon^{Br}_\parallel - \epsilon^a - \epsilon^b \right) -1 \,, \\ \beta_{12} &=& \nu_{12} \left( \epsilon^a - \epsilon^{Br}_\parallel \right) \left( \epsilon^{b} - \epsilon^{Br}_\parallel \right) \,,\\ \beta_{13} &=& f_b + D_\parallel \left( \epsilon^a - \epsilon^{Br}_\parallel \right) \,, \\ \beta_{21} &=& \nu_{21} \left( \epsilon^a - \epsilon^{Br}_\perp\right) \left( \epsilon^{b} - \epsilon^{Br}_\perp\right) \,,\\ \beta_{22} &=& \nu_{22} \left( \epsilon^a - \epsilon^{Br}_\perp \right) \left( \epsilon^b - \epsilon^{Br}_\perp 
\right) + D_\perp \left( 2 \epsilon^{Br}_\perp - \epsilon^a - \epsilon^b \right) -1 \,, \\ \beta_{23} &=& f_b + D_\perp \left( \epsilon^a - \epsilon^{Br}_\perp \right) \,. \end{eqnarray} Thus, the sought-after derivatives of $\epsilon^{Br}_\perp$ and $\epsilon^{Br}_\parallel$ finally emerge as \begin{eqnarray} \frac{d \epsilon^{Br}_\parallel}{d \epsilon^b} &=& \frac{ \beta_{12} \beta_{23} - \beta_{22} \beta_{13}}{\beta_{11} \beta_{22} - \beta_{12} \beta_{21}} \,, \l{dexdw} \\ \frac{d \epsilon^{Br}_\perp}{d \epsilon^b} &=& \frac{ \beta_{21} \beta_{13} - \beta_{11} \beta_{23}}{\beta_{11} \beta_{22} - \beta_{12} \beta_{21}} \,. \l{dedw} \end{eqnarray} \section{Numerical investigations} \l{Num_inv} The consequences of the theory presented in the previous section are illustrated here by means of some numerical examples. We begin with the simplest case in \S\ref{section_sphere} wherein the pores are spherical and the infiltrated porous material is accordingly considered to be an isotropic HCM. Then the effects of anisotropy are considered in \S\ref{section_spheroid} wherein the pores are taken to be spheroidal in shape. For the purposes of these numerical calculations, our attention is restricted to relative permittivity values which, at optical frequencies, are attainable either using naturally-occurring materials or currently-available engineered materials. In \S\ref{closing} the results of implementing relative permittivity values which lie beyond the reach of present-day technology are commented upon. \subsection{Spherical pores} \l{section_sphere} If the pores are spherical (i.e., $\rho = 1$) then the relative permittivity dyadic characterizing the infiltrated porous material reduces to the scalar form $\=\epsilon^{Br} = \epsilon^{Br} \=I$ with $ \epsilon^{Br} = \epsilon^{Br}_\parallel \equiv \epsilon^{Br}_\perp$.
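For spherical pores the depolarization dyadic reduces to $\=D = \=I/(3\epsilon^{Br})$, and the Bruggeman equation collapses to the quadratic $-2(\epsilon^{Br})^2 + \left[f_a(2\epsilon^a - \epsilon^b) + f_b(2\epsilon^b - \epsilon^a)\right]\epsilon^{Br} + \epsilon^a\epsilon^b = 0$, whose positive root is $\epsilon^{Br}$. The sensitivity $d\epsilon^{Br}/d\epsilon^b$ can then be sketched by central finite differences (a check of ours; the names below are illustrative):

```python
import math

def bruggeman_sphere(eps_a, eps_b, f_b):
    """Positive root of the two-phase Bruggeman quadratic for spherical pores:
       -2 e^2 + [f_a (2 eps_a - eps_b) + f_b (2 eps_b - eps_a)] e + eps_a eps_b = 0."""
    f_a = 1.0 - f_b
    b = f_a * (2.0 * eps_a - eps_b) + f_b * (2.0 * eps_b - eps_a)
    return (b + math.sqrt(b * b + 8.0 * eps_a * eps_b)) / 4.0

def sensitivity(eps_a, eps_b, f_b, h=1e-6):
    """Central-difference estimate of d eps_Br / d eps_b."""
    return (bruggeman_sphere(eps_a, eps_b + h, f_b)
            - bruggeman_sphere(eps_a, eps_b - h, f_b)) / (2.0 * h)

# sanity checks: the correct limits are recovered at zero and full porosity
assert abs(bruggeman_sphere(1.5, 1.0, 0.0) - 1.5) < 1e-12
assert abs(bruggeman_sphere(1.5, 1.0, 1.0) - 1.0) < 1e-12

# for eps_a = 15 the sensitivity is strongly peaked at intermediate porosity
assert sensitivity(15.0, 1.0, 0.7) > sensitivity(15.0, 1.0, 0.3)
```

The last assertion mirrors the peak of $d\epsilon^{Br}/d\epsilon^b$ at intermediate porosity reported below for large $\epsilon^a$.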
For $\epsilon^a \in \left\{ 1.5, 5, 15 \right\}$, the Bruggeman estimate $\epsilon^{Br}$ and its derivative $d \epsilon^{Br} / d \epsilon^b$ are plotted versus $\epsilon^ b \in \left( 1, 3 \right)$ and $f_b \in \left( 0, 1\right)$ in Fig.~\ref{fig1}. We see that when $\epsilon^a = 1.5$, the Bruggeman estimate $\epsilon^{Br}$ varies approximately linearly with $\epsilon^b$ for all $f_b \in \left( 0, 1\right)$. However, the relationship between $\epsilon^{Br}$ and $\epsilon^b$ becomes increasingly nonlinear as $\epsilon^a$ increases. For $\epsilon^a = 1.5$, the derivative $d \epsilon^{Br} / d \epsilon^b$ increases in an approximately linear fashion as the porosity $f_b$ increases, regardless of the value of $\epsilon^b$. However, for $\epsilon^a = 5$ the trend is rather different: here the values of $d \epsilon^{Br} / d \epsilon^b$ peak at $f_b \approx 0.7$ and the height of this peak rises as $\epsilon^b$ decreases. This peak in the value of $d \epsilon^{Br} / d \epsilon^b$ becomes more pronounced as the value of $\epsilon^a$ increases. Indeed, at $\epsilon^a = 15$ this peak can be clearly observed even when $\epsilon^b = 3$. \subsection{Spheroidal pores} \l{section_spheroid} Let us now explore what happens when the pores are taken to be spheroidal. Accordingly, the HCM representing the infiltrated porous material is a uniaxial dielectric material. Following our findings in \S\ref{section_sphere}, we fix $\epsilon^a = 15$ in order that the effects of pore shape are more clearly appreciated. The Bruggeman estimates $\epsilon^{Br}_{\perp, \parallel}$ and their derivatives $d \epsilon^{Br}_{\perp, \parallel} / d \epsilon^b$ are plotted versus $\epsilon^ b \in \left( 1, 3 \right)$ and $f_b \in \left( 0, 1\right)$ in Fig.~\ref{fig2} for $\rho = 10$. 
Both $\epsilon^{Br}_\perp$ and $\epsilon^{Br}_\parallel$ vary relatively little as $\epsilon^b$ increases but both decrease~---~$\epsilon^{Br}_\parallel$ approximately linearly and $\epsilon^{Br}_\perp$ more nonlinearly~---~as $f_b$ increases. The derivative $d \epsilon^{Br}_\parallel / d \epsilon^b$ increases approximately uniformly as $f_b$ increases, for $\epsilon^b \gtrsim 1.5$. However, the values of $d \epsilon^{Br}_\parallel / d \epsilon^b$ for $\epsilon^b \lesssim 1.5$ are slightly peaked around $f_b \approx 0.9$. The plot of $d \epsilon^{Br}_\perp / d \epsilon^b$ is similarly peaked, but in this case the peak occurs at $f_b \approx 0.5$, it is larger in height than the $d \epsilon^{Br}_\parallel / d \epsilon^b$ peak, and it extends further into the $\epsilon^b \gtrsim 1.5$ region. Also, the height of this $d \epsilon^{Br}_\perp / d \epsilon^b$ peak is substantially larger than the corresponding peak in $d \epsilon^{Br} / d \epsilon^b$ observed in Fig.~\ref{fig1} for $\epsilon^a = 15$. The pores represented in Fig.~\ref{fig2} are prolate spheroids. The corresponding case of oblate spheroids is represented in Fig.~\ref{fig3}. The parameters for the plots in Fig.~\ref{fig3} are the same as those in Fig.~\ref{fig2} except that $\rho = 0.1$. The plot of $\epsilon^{Br}_\parallel$ versus $\epsilon^b$ and $f_b$ in Fig.~\ref{fig3} is very similar to the corresponding plot of $\epsilon^{Br}_\perp$ in Fig.~\ref{fig2}; and likewise for the plots of $\epsilon^{Br}_\perp$ in Fig.~\ref{fig3} and $\epsilon^{Br}_\parallel$ in Fig.~\ref{fig2}.
Also, the plots of the derivatives $d \epsilon^{Br}_\parallel / d \epsilon^b$ and $d \epsilon^{Br}_\perp / d \epsilon^b$ versus $\epsilon^b$ and $f_b$ in Fig.~\ref{fig3} are similar to the corresponding plots of $d \epsilon^{Br}_\perp / d \epsilon^b$ and $d \epsilon^{Br}_\parallel / d \epsilon^b$, respectively, in Fig.~\ref{fig2}, albeit there are qualitative differences in the positions and shapes of the peaks in the derivative plots. \section{Discussion and closing remarks} \l{closing} An analysis based on the Bruggeman homogenization formalism has provided insights into the sensitivity of a generic porous optical sensor. Specifically, for a porous material of relative permittivity $\epsilon^a$ infiltrated by a fluid of relative permittivity $\epsilon^b$, we found that: \begin{itemize} \item the sensitivity is maximized when there is a large contrast between $\epsilon^a$ and $\epsilon^b$; \item if the contrast between $\epsilon^a$ and $\epsilon^b$ is large, maximum sensitivity is achieved at mid-range values of porosity; \item higher sensitivities may be achieved for $ \epsilon^b $ close to unity when $\epsilon^a \gg 1$, for example; and \item higher sensitivities may be achieved by incorporating elongated pores. \end{itemize} In \S\ref{Num_inv} the relative permittivities of the porous material considered were $\epsilon^a \in \left\{ 1.5, 5, 15 \right\}$. These values correspond to many common dielectric materials at optical frequencies, with the largest value being close to the relative permittivity of silicon, for example. The relative permittivities of the infiltrating fluid were taken to be in the range $1 < \epsilon^b < 3$. This range is physically realizable, with the largest values corresponding to nanocomposite fluids developed for immersion lithography \c{Langmuir}, whereas values approaching unity may be attained using water vapour \c{Steam}.
We note that ongoing rapid developments in engineered materials are bringing relative permittivity parameter regimes, which were hitherto unattainable, into reach \c{Simovski,Lederer}. For example, relative permittivities in excess of 50 are now being reported for engineered materials in the terahertz frequency regime \c{Shin_PRL,Nature_high_n}, while engineered materials with positive-valued relative permittivities less than unity also appear to be attainable \c{Alu,Lovat,Cia_PRB}. Accordingly, it is of interest to consider how the sensitivities reported here would be affected if rather more exotic parameter regimes were incorporated. In further numerical studies (not presented in \S\ref{Num_inv}) it was observed that increasing $\epsilon^a$ beyond 15 results in a steady increase in sensitivity. Reducing the value of $\epsilon^b$ below unity results in a sharp increase in sensitivity. In this context let us make a couple of parenthetical remarks: First, since the entire analysis presented herein is isomorphic to the corresponding scenario for magnetic materials (with relative permittivities replaced by relative permeabilities throughout), we note that relative permeabilities less than unity can be achieved by using diamagnetic materials \c{Cook}. Second, the regime $\epsilon^b < 0$ with $\epsilon^a > 0$ (or vice versa) gives rise to Bruggeman estimates of the HCM relative permittivity dyadic which are not physically plausible \c{Ag} and therefore this regime is avoided here. The design parameters considered here were the relative permittivities of the porous material and the infiltrating fluid, along with the porosity and the shapes of the pores.
In the operation of such a generic optical sensor~---~at least in the most straightforward mode of operation~---~it may be envisaged that the relative permittivity of the infiltrating fluid is a variable quantity (which varies according to the concentration of the agent to be sensed) whereas the other design parameters remain fixed. Accordingly, the value of the relative permittivity of the fluid should be carefully chosen such that the sensitivity is maximized over the expected range of concentrations of the agent to be sensed. While our attention here has been confined to infiltrated porous materials represented as uniaxial dielectric HCMs, a straightforward extension of the presented analysis could accommodate biaxial dielectric HCMs which represent certain STFs as optical sensors \c{ML_IEEE_SJ,ML_IEEEPJ,ML_PNFA}. Finally, the study described herein provides a step towards a comprehensive study of porous platforms for optical sensing, which incorporates such matters as the absorption/desorption phenomena that dictate the response time and the reversibility of the sensors. \vspace{10mm}
\section{Introduction} Kazhdan's property (T) expresses a certain rigidity for unitary Hilbert space representations of a topological group $G$. Even though it was invented by Kazhdan for discrete or, more generally, locally compact groups, Kazhdan's property (T) has turned up in so many different aspects of mathematics that its study in any context seems of interest. The first examples of non-locally compact groups with this property were constructed by Shalom in \cite{ShalomBGKPT}. We find it interesting that holomorphic objects can also have it. Our main result, in particular showing that the holomorphic loop group of $\mbox{SL}_n({\mathbb C})$ has Kazhdan's property (T) for $n\ge 3$, is the following: \begin{theorem*} (see Theorem \ref{KPT}) Let $n\ge 3$ and $X$ be a Stein manifold having finitely many connected components and with the property that all holomorphic maps from $X$ to $\mbox{SL}_n({\mathbb C})$ are null-homotopic. Then $\mbox{SL}_n(\mathcal{O}(X))$ has Kazhdan's property (T). \end{theorem*} A generalization to the case when not all holomorphic maps from $X$ to $\mbox{SL}_n({\mathbb C})$ are null-homotopic is contained in the last section. Let us recall the definitions: By $\mathcal{O}(X)$ we denote the algebra of holomorphic functions on $X$ endowed with the compact-open topology. By Stein's original definition, simplified by later developments, a complex manifold $X$ is Stein (or, as Karl Stein put it, holomorphically complete) if it satisfies the following two conditions. \smallskip\noindent (1) Holomorphic functions on $X$ separate points, that is, if $x,y \in X, x\ne y$, then there is $f\in \mathcal{O}(X)$ such that $f(x) \ne f(y)$. \smallskip\noindent (2) $X$ is holomorphically convex, that is, if $K\subset X$ is compact, then its $\mathcal{O}(X)$-hull $\hat K$, consisting of all $x \in X$ with $ \vert f(x)\vert \le {\rm max}_K \vert f\vert$ for all $f \in \mathcal{O}(X)$, is also compact.
Equivalently, if $ E \subset X$ is not relatively compact, then there is $f \in \mathcal{O}(X)$ such that $f\vert_E$ is unbounded. A domain in ${\mathbb C}^n$ is Stein if and only if it is a domain of holomorphy. Every noncompact Riemann surface is Stein. By the Remmert embedding theorem (see also Remark \ref{Stein,numbers}) a connected complex manifold is Stein if and only if it is biholomorphic to a closed complex submanifold of ${\mathbb C}^N$ for some $N$. If $X$ is not smooth, i.e., a complex space, the notion of Stein space is defined by the same two conditions (1) and (2). A connected Stein space is biholomorphic to some analytic subspace of ${\mathbb C}^n$ if and only if it has an upper bound on the dimension of its tangent spaces. We refer to \cite{For3} for more information on Stein manifolds. Here is the definition of property (T); for a comprehensive introduction to Kazhdan's property (T) we refer to \cite{BachirDeLaHarpeValetteKPT}. \begin{Def} Let $G$ be a topological group, $K\subset G$ a subset, $\varepsilon > 0$, $H$ a Hilbert space, and $(\pi,H)$ a continuous unitary $G$-representation. A vector $v\in H$ is called $(K,\varepsilon)$-invariant if $\| \pi(g)v-v\|<\varepsilon \|v\|$ for all $g\in K$. \end{Def} \begin{Def} A topological group $G$ has Kazhdan's property (T) (or is a Kazhdan group) if there is a compact $K\subset G$ and $\varepsilon > 0$ such that every continuous unitary $G$-representation with a $(K,\varepsilon)$-invariant vector contains a non-zero $G$-invariant vector. We call $(K,\varepsilon)$ a {\it Kazhdan pair} for $G$ and $\varepsilon$ is called a {\it Kazhdan constant} for $K$ and $G$. \end{Def} We would like to thank Pierre de la Harpe for inspiring discussions on the subject and for bringing the paper of Cornulier \cite{dCKPSCF} to our attention.
\section{Kazhdan's property (T) and the special linear group of holomorphic functions} The present investigations rely on results by Shalom \cite{ShalomBGKPT} and the authors \cite{IvarssonKutzschebauchHFMSLG}, see also \cite{IvarssonKutzschebauchSGVP}. \begin{Def} Let $R$ be a commutative ring with unit. The group $\mbox{SL}_n(R)$ is said to be boundedly elementary generated if there is a $\nu<\infty$ such that every matrix in $\mbox{SL}_n(R)$ can be written as a product of at most $\nu=\nu_n(R)$ elementary matrices. \end{Def} Recall that an elementary matrix in $\mbox{SL}_n(R)$ is a matrix of the form $I+ rE_{i,j}$, $i \ne j$, {\it i.e.}, with ones on the diagonal and all off-diagonal entries zero except for one. The following is a result of Shalom from \cite{ShalomBGKPT}, see also \cite[Theorem 4.3.5]{BachirDeLaHarpeValetteKPT}: \begin{Thm} \label{shalom} Let $n\ge 3$ and $R$ be a topological commutative ring with unit. Assume that $\mbox{SL}_n(R)$ is boundedly elementary generated and that there is a finite set $\{\alpha_1,\dots,\alpha_m\}\subset R$ generating a dense subring of $R$. Then $\mbox{SL}_n(R)$ has Kazhdan's property (T). \end{Thm} \begin{Rem} \label{Kconst} More precisely, if there are $m$ elements in $R$ generating a dense subring then there is a compact $K$ in $\mbox{SL}_n(R)$ such that $\varepsilon=1/(22^{m+1}\nu_n(R))$ is a Kazhdan constant for $K$ and $\mbox{SL}_n(R)$, see \cite[Remark 4.3.6]{BachirDeLaHarpeValetteKPT}. \end{Rem} In this paper we consider the group $\mbox{SL}_n(\mathcal{O}(X))$ of holomorphic maps from a Stein space $X$ to $\mbox{SL}_n(\mathbb{C})$ endowed with the compact-open topology, the natural topology for holomorphic mappings.
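To make elementary generation concrete, consider constant matrices for $n=2$: a standard identity (recorded here purely for illustration; it is not part of the results quoted in this section) factorizes any matrix in $\mbox{SL}_2({\mathbb C})$ with lower-left entry $c \ne 0$ into three elementary matrices:
\begin{equation*}
\left( \begin{matrix} a & b \cr c & d \cr \end{matrix} \right)
= \left( \begin{matrix} 1 & (a-1)/c \cr 0 & 1 \cr \end{matrix} \right)
\left( \begin{matrix} 1 & 0 \cr c & 1 \cr \end{matrix} \right)
\left( \begin{matrix} 1 & (d-1)/c \cr 0 & 1 \cr \end{matrix} \right),
\qquad ad-bc=1, \; c \ne 0,
\end{equation*}
since the $(1,1)$ and $(2,2)$ entries of the right-hand side are $a$ and $d$ by construction, and the $(1,2)$ entry equals $a(d-1)/c + (a-1)/c = (ad-1)/c = b$. Theorem \ref{IK} below provides factorizations of this shape, with holomorphic entries, uniformly in the map being factorized.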
Using the above terminology the authors show in \cite{IvarssonKutzschebauchHFMSLG} that $\mbox{SL}_n(\mathcal{O}(X))$ is boundedly elementary generated when $X$ is a finite dimensional reduced Stein space with the property that all holomorphic mappings from $X$ to $\mbox{SL}_n({\mathbb C})$ are null-homotopic, that is, homotopic (through a family of continuous maps) to a constant map. This result is an application of the Oka-Grauert-Gromov $h$-principle in Complex Analysis. For more information on that important principle in Complex Analytic Geometry we refer the interested reader to the monograph of Forstneri\v c \cite{For3}. The precise statement of the theorem proved in \cite{IvarssonKutzschebauchHFMSLG} is the following: \begin{Thm} \label{IK} Let $X$ be a finite dimensional reduced Stein space and $f\colon X\to \mbox{SL}_n(\mathbb{C})$ be a holomorphic mapping that is null-homotopic. Then there exist a natural number $K$, depending only on the dimension of $X$ and $n$, and holomorphic mappings $G_1,\dots, G_{K}\colon X\to \mathbb{C}^{n(n-1)/2}$ such that $f$ can be written as a product of unipotent upper and lower triangular matrices \begin{equation*} f(x) = \left( \begin{matrix} 1 & 0 \cr G_1(x) & 1 \cr \end{matrix} \right) \left( \begin{matrix} 1 & G_2(x) \cr 0 & 1 \cr \end{matrix} \right) \ldots \left( \begin{matrix} 1 & G_K(x)\cr 0 & 1 \cr \end{matrix} \right). \end{equation*} \end{Thm} \begin{Rem} It is a consequence of the classical Oka-Grauert principle that the homotopy to the constant map can be chosen through a family of holomorphic maps. Therefore null-homotopic in the topological sense and in the holomorphic sense are equivalent in this setting. \end{Rem} Combining Shalom's and the authors' results we get the following new examples of non-locally compact Kazhdan groups. For similar results for the ring of continuous functions on a finite dimensional topological space see \cite{dCKPSCF}.
\begin{Thm} \label{KPT} Let $n\ge 3$ and $X$ be a Stein manifold with finitely many connected components and with the property that all holomorphic maps from $X$ to $\mbox{SL}_n({\mathbb C})$ are null-homotopic. Then $\mbox{SL}_n(\mathcal{O}(X))$ has Kazhdan's property (T). \end{Thm} \begin{proof} A finite set of functions that generates a dense subring of $\mathcal{O}(X)$ can be constructed as follows. Embed $X$ into ${\mathbb C}^N$ using Remmert's embedding theorem. By the Oka-Weil theorem ${\mathbb C}[z_1,\dots,z_N]|_X$ is dense in $\mathcal{O}(X)$. Finally we see that the set of functions $S=\{z_1,\dots,z_N,\sqrt{2},i\}$ generates a dense subring of ${\mathbb C}[z_1,\dots,z_N]$. Therefore $S$ also generates a dense subring of $\mathcal{O}(X)$. Every unipotent matrix can be written as a product of $n(n-1)/2$ elementary matrices, so $\mbox{SL}_n(\mathcal{O}(X))$ is boundedly elementary generated by Theorem \ref{IK}, and by Theorem \ref{shalom} the group $\mbox{SL}_n(\mathcal{O}(X))$ is a Kazhdan group. \end{proof} \begin{Rem}\label{Stein,numbers} Let $N$ be the smallest dimension such that $X$ embeds into ${\mathbb C}^N$. By work of Gromov, Eliashberg and Sch\"urmann, $N$ for a connected $X$ is bounded by $\lfloor 3(\dim X)/2\rfloor + 1$ if $\dim X \ge 2$, see \cite{EliashbergESMAS} and \cite{SchurmannESSMD}. We have that $\varepsilon=1/(22^{N+3}\nu_n(\mathcal{O}(X)))$ is a Kazhdan constant by Remark \ref{Kconst}. Therefore both the minimal embedding dimension and the number of elementary matrices needed to factorize a null-homotopic holomorphic map $f\colon X \to \mbox{SL}_n({\mathbb C})$ are of great interest. The study of the minimal embedding dimension is a classical, difficult subject in complex analysis.
The numbers $\nu_n(\mathcal{O}(X))$ as well as the corresponding numbers $\nu_n(C(T))$ (existing by work of Vaserstein \cite{VasersteinRMDPDFAO}) for the ring of continuous functions on a finite dimensional topological space $T$ are mostly unknown. A first study of these numbers in the holomorphic case can be found in \cite{IvarssonKutzschebauchNF}. \end{Rem} \begin{Rem} Theorem \ref{KPT} also holds for reduced Stein spaces that can be embedded in some ${\mathbb C}^N$. The proof is exactly the same once we have the embedding. For more on the embedding dimension for Stein spaces see \cite{SchurmannESSMD}. \end{Rem} \begin{Rem} By \cite[Example 1.7.4 (iv)]{BachirDeLaHarpeValetteKPT} $\mbox{SL}_2({\mathbb C})$ is not a Kazhdan group. Since the closure of the image of a Kazhdan group under a continuous homomorphism is again a Kazhdan group, see \cite[Theorem 1.3.4]{BachirDeLaHarpeValetteKPT}, it follows that $\mbox{SL}_2(\mathcal{O}(X))$ never has Kazhdan's property (T). \end{Rem} \begin{Cor} Let $n\ge 3$ and $X$ be a contractible Stein manifold. Then $\mbox{SL}_n(\mathcal{O}(X))$ has Kazhdan's property (T). \end{Cor} Since $\mbox{SL}_n({\mathbb C})$ is simply connected we get the following. \begin{Cor} For $n\ge 3$ the group $\mbox{SL}_n(\mathcal{O}({\mathbb C}^*))$, {\it i.e.} the holomorphic loop group of $\mbox{SL}_n({\mathbb C})$, is a Kazhdan group. \end{Cor} \section{A generalization} It is natural to ask what can be said when $X$ is a Stein manifold having holomorphic maps into $\mbox{SL}_n({\mathbb C})$ that are not null-homotopic. In this case Theorem \ref{IK} can still be applied to get some information. Let $\mbox{E}_n(\mathcal{O}(X))$ denote the group generated by the elementary matrices in $\mbox{SL}_n(\mathcal{O}(X))$. In general we get the following result as a consequence of Theorem \ref{IK}.
\begin{Thm} Let $n\ge 3$ and $X$ be a finite dimensional Stein space. Then $\mbox{E}_n(\mathcal{O}(X))$ has Kazhdan's property (T). \end{Thm} First note that all matrices in $\mbox{E}_n(\mathcal{O}(X))$ are null-homotopic. Now the proof is exactly the same as the proof of Theorem \ref{KPT}. The point where we use Theorem \ref{IK} is to conclude that $\mbox{E}_n(\mathcal{O}(X))$ is boundedly elementary generated. We now have the following. \begin{Thm}\label{steinkpt} Let $n\ge 3$ and $X$ be a finite dimensional Stein space with finite embedding dimension. Then $\mbox{SL}_n(\mathcal{O}(X))$ has Kazhdan's property (T) if and only if $$\mbox{SL}_n(\mathcal{O}(X))/\overline{\mbox{E}_n(\mathcal{O}(X))}$$ has Kazhdan's property (T). \end{Thm} \begin{proof} Let $H$ be a subgroup of a topological group $G$. It is easy to check from the definition that $\overline{H}$ is a Kazhdan group when $H$ is. The result now follows from \cite[Proposition 1.7.6 and Remark 1.7.9]{BachirDeLaHarpeValetteKPT}, which says that a Fr\'echet group $G$ has Kazhdan's property (T) if, for a closed normal subgroup $N$, both $G/N$ and $N$ have it. The other implication follows from \cite[Theorem 1.3.4]{BachirDeLaHarpeValetteKPT}, which says that Kazhdan's property (T) is inherited by quotients. \end{proof} \begin{Rem} We don't know if it always holds that $\mbox{E}_n(\mathcal{O}(X))= \overline{\mbox{E}_n(\mathcal{O}(X))}$ when we equip $\mbox{SL}_n(\mathcal{O}(X))$ with the compact-open topology. A Stein manifold is homotopy equivalent to a finite dimensional CW-complex, and when this complex is finite then $\mbox{E}_n(\mathcal{O}(X))= \overline{\mbox{E}_n(\mathcal{O}(X))}$. However, the complex can be infinite and we believe that there are examples where $\mbox{E}_n(\mathcal{O}(X)) \neq \overline{\mbox{E}_n(\mathcal{O}(X))}$. \end{Rem} \begin{Exa} As an application of Theorem \ref{steinkpt} we study the quadrics $Q_k=\{z\in {\mathbb C}^{k+1}; z_1^2 + \cdots + z_{k+1}^2 = 1\}$, $k\ge 1$.
We claim that $Q_k$ is homotopy equivalent to the $k$-dimensional sphere $S^k$. Indeed $Q_k$ is isomorphic to the homogeneous space $\mbox{SO}_{k+1}({\mathbb C})/\mbox{SO}_{k}({\mathbb C})$, that is, the quotient of the two reductive groups $\mbox{SO}_{k+1}({\mathbb C})$ and $\mbox{SO}_{k}({\mathbb C})$. By the Mostow decomposition theorem, see \cite{MostowCFKS,MostowSNDTSSG} or \cite[Section 3.1]{HeinznerGITSS}, a homogeneous space of complex reductive Lie groups $K^{{\mathbb C}}/L^{{\mathbb C}}$ admits a strong deformation retraction onto the quotient of their maximal compact subgroups $K/L$. In our case this quotient is $\mbox{SO}_{k+1}(\mathbb{R})/\mbox{SO}_{k}(\mathbb{R})$, which is isomorphic to $S^k$. We have $$\mbox{SL}_n(\mathcal{O}(Q_k))/\mbox{E}_n(\mathcal{O}(Q_k))=\pi_k(\mbox{SL}_n({\mathbb C})).$$ Let $n\ge 3$. The homotopy groups $\pi_k(\mbox{SL}_n({\mathbb C}))$ are abelian. Discrete groups that are Kazhdan groups have finite abelianization \cite[Corollary 1.3.6]{BachirDeLaHarpeValetteKPT} and finite groups are Kazhdan groups. Therefore $\mbox{SL}_n(\mathcal{O}(Q_k))$ is a Kazhdan group precisely when $\pi_k(\mbox{SL}_n({\mathbb C}))$ is finite. The homotopy groups of $\mbox{SL}_n({\mathbb C})$ are known to be infinite precisely when $k$ is odd and $3\le k \le 2n-1$, see for example \cite{MimuraTodaTLG}. Therefore $\mbox{SL}_n(\mathcal{O}(Q_k))$ is a Kazhdan group if and only if $k=1$, $k$ is even, or $k\ge 2n$. \end{Exa}
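The final classification admits a quick mechanical sanity check. The snippet below (added for illustration; it is not part of the paper) verifies that the stated closed form ($k=1$, $k$ even, or $k\ge 2n$) is exactly the complement of the finiteness criterion ($k$ odd and $3\le k\le 2n-1$):

```python
# Sanity check, not from the paper: the two characterizations of when
# SL_n(O(Q_k)) is a Kazhdan group should be complementary for n >= 3.

def pi_k_infinite(n, k):
    """Is pi_k(SL_n(C)) infinite?  (k odd and 3 <= k <= 2n - 1.)"""
    return k % 2 == 1 and 3 <= k <= 2 * n - 1

def is_kazhdan_quadric(n, k):
    """Closed form stated in the example: k = 1, k even, or k >= 2n."""
    return k == 1 or k % 2 == 0 or k >= 2 * n
```

Running both predicates over a range of $n\ge 3$ and $k\ge 1$ confirms that `is_kazhdan_quadric(n, k)` holds exactly when `pi_k_infinite(n, k)` fails.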
\section{The Architecture of Abella} \label{sec:arch} Abella is an interactive theorem prover for the logic ${\cal G}$\xspace. The structure of Abella is influenced considerably by a two-level logic approach to specifying and reasoning about computations. There is a logic---the intuitionistic theory of second-order hereditary Harrop formulas that we call $hH^2$\xspace here---that provides a convenient vehicle for formulating structural, rule-based characterizations of a variety of properties such as evaluation and type assignment. An especially useful feature of such encodings is that derivations within this ``specification'' logic reflect the structure of derivations in the object logic.\footnote{Since $hH^2$\xspace is a subset of $\lambda$Prolog \cite{nadathur88iclp}, it turns out that such specifications can also be compiled and executed effectively \cite{nadathur99cade}.} Now, the specification logic can be embedded into ${\cal G}$\xspace through the medium of definitions. When used in this manner, ${\cal G}$\xspace plays the role of a reasoning or meta logic: formulas in ${\cal G}$\xspace can be used to encapsulate properties of derivations in the specification logic and, hence, of computations in the object logic. By keeping the correspondences simple, reasoning within ${\cal G}$\xspace can be made to directly reflect the structure of informal arguments relative to the object logics. This two-level logic approach was first enunciated by McDowell and Miller in the context of the logic $FO\lambda^{\Delta\N}$ \cite{mcdowell02tocl}. Abella realizes this idea using a richer logic that is capable of conveniently encoding more properties of computations. As a theorem prover, Abella also builds in particular properties arising out of the encoding of the specification logic. We discuss these aspects in more detail below.
\lead{The specification logic} The formulas of $hH^2$\xspace are given by the following mutually recursive definitions: \begin{tabbing} \qquad $G \;=\; A \;|\; A \supset G \;|\; \forall_\tau x. G \;|\; G \land G$ \qquad\qquad $D \;=\; A \;|\; G \supset D \;|\; \forall_\tau x. D$ \end{tabbing} In these definitions, $A$ denotes an atomic formula and $\tau$ ranges over types of order 0 or 1 not containing $o$. The sequents for which proofs are constructed in $hH^2$\xspace are restricted to the form $\Delta\longrightarrow G$ where $\Delta$ is a set of $D$-formulas and $G$ is a $G$-formula. For such sequents, provability in intuitionistic logic is completely characterized by the more restricted notion of (cut-free) uniform proofs \cite{miller91apal}. In the case of $hH^2$\xspace, every sequent in a uniform proof of $\Delta\longrightarrow G$ is of the form $\Delta,{\cal L}\longrightarrow G'$ for some $G$-formula $G'$ and for some set of atoms $\cal L$. Thus, during the search for a proof of $\Delta\longrightarrow G$, the initial context $\Delta$ is {\em global}: changes occur only in the set of atoms on the left and the goal formula on the right. 
\begin{figure} \[ \infer{\Gamma \vdash x : a}{x : a \in \Gamma} \hspace{.5cm} \infer{\Gamma \vdash m\; n : b}{\Gamma \vdash m : (a \to b) & \Gamma \vdash n : a} \hspace{.5cm} \infer[\mbox{$x$ not in $\Gamma$}]{\Gamma \vdash (\tlam x a r) : (a \to b)}{\Gamma, x : a \vdash r : b} \] \vspace{-0.75cm} \caption{Rules for relating a $\lambda$-term to a simple type} \label{fig:typing} \begin{center} \begin{tabular}{c} $\forall m, n, a, b[\of m (\arr a b) \land \of n a \; \supset \; \of{(\app m n)} b]$\\ $\forall r, a, b[\forall x[\of x a \supset \of{(r \; x)}{b}] \supset \of{(\abs a r)}{(\arr a b)}]$ \end{tabular} \end{center} \vspace{-0.25cm} \caption{Second-order hereditary Harrop formulas ($hH^2$\xspace) encoding simple typing} \label{fig:hhtyping} \end{figure} We briefly illustrate the ease with which type assignment for the simply typed $\lambda$-calculus can be encoded in $hH^2$\xspace. There are two classes of objects in this domain: types and terms. For types we will consider a single base type called $i$ and the arrow constructor for forming function types. Terms can be variables $x$, applications $(m\; n)$ where $m$ and $n$ are terms, and typed abstractions $(\tlam x a r)$ where $r$ is a term and $a$ is the type of $x$. The standard rules for assigning types to terms are given in Figure \ref{fig:typing}. Object-level untyped $\lambda$-terms and simple types can be encoded in a simply typed (meta-level) $\lambda$-calculus as follows. The simple types are built from the two constructors $i$ and {\sl arr} and terms are built using the two constructors {\sl app} and {\sl abs}. Here, the constructor {\sl abs} takes two arguments: one for the type of the variable being abstracted and the other for the actual abstraction. Terms in the specification logic contain binding and so there is no need for an explicit constructor for variables.
Thus, the (object-level) term $(\tlam f {i\to i} (\tlam x i (f\; x)))$ can be encoded as the meta-level term $\abs {(\arr i i)} (\lambda f. \abs i (\lambda x. \app f x))$. Given this encoding of the untyped $\lambda$-calculus and simple types, the inference rules of Figure~\ref{fig:typing} can be specified by the $hH^2$\xspace formulas in Figure \ref{fig:hhtyping} involving the binary predicate {\sl of}. Note that this specification in $hH^2$\xspace does not maintain an explicit context for typing assumptions but uses hypothetical judgments instead. Also, the explicit side-condition in the rule for typing abstractions is not needed since it is captured by the usual proof theory of the universal quantifier in the $hH^2$\xspace logic. \lead{Encoding specification logic provability in ${\cal G}$\xspace} The definitional clauses in Figure~\ref{fig:seq} encode $hH^2$\xspace provability in ${\cal G}$\xspace. In these and other such clauses in this paper, we use the convention that capitalized variables are implicitly universally quantified at the head. This encoding of $hH^2$\xspace provability derives from McDowell and Miller \cite{mcdowell02tocl}. As described earlier, uniform proofs in $hH^2$\xspace contain sequents of the form $\Delta,{\cal L}\longrightarrow G$ where $\Delta$ is a fixed set of $D$-formulas and $\cal L$ is a varying set of atomic formulas. Our encoding uses the ${\cal G}$\xspace predicate {\sl prog} to represent the $D$-formulas in $\Delta$: the $D$ formula $\forall\bar x.[G_1\supset\cdots\supset G_n\supset A]$ is encoded as the clause $\forall \bar x. \prog A (G_1\land\cdots\land G_n) \triangleq \top$ and $\forall \bar{x}. A$ is encoded by the clause $\forall \bar x. \prog A tt \triangleq \top$. Sequents are encoded using the atomic formula $(\seq N L G)$ where $L$ is a list encoding the set of atomic formulas $\cal L$ and $G$ encodes the $G$-formula. 
The argument $N$, written as a subscript, encodes the height of the proof tree that is needed in inductive arguments. The constructor $\langle \cdot \rangle$ is used to inject the special type of atom into formulas. To simplify notation, we write $L \!\Vdash\! G$ for $\exists n . \nat n \land \seq n L G$. When $L$ is $nil$ we write simply $\,\!\Vdash\! G$. \begin{figure}[t] \begin{tabbing} \qquad\=\kill \> $\element N B (B::L) \triangleq \top$ \qquad\= $\element {(s\; N)} B (C::L) \triangleq \element N B L$\\[6pt] \> $\member B L \triangleq \exists n . \nat n \land \element n B L$ \\[6pt] \> $\seq N L \langle A \rangle \triangleq \member A L$ \\ \> $\seq {(s\; N)} L (B \land C) \triangleq \seq N L B \land \seq N L C$ \\ \> $\seq {(s\; N)} L (A \supset B) \triangleq \seq N {(A :: L)} B$ \\ \> $\seq {(s\; N)} L (\forall B) \triangleq \nabla x. \seq N L (B\; x)$ \\ \> $\seq {(s\; N)} L \langle A\rangle \triangleq \exists b. \prog A b \land \seq N L b$\\ \> $\seq {(s\; N)} L \langle A\rangle \triangleq \prog A tt$ \end{tabbing} \caption{Second-order hereditary Harrop logic in ${\cal G}$\xspace} \label{fig:seq} \end{figure} Proofs of universally quantified $G$ formulas in $hH^2$\xspace are generic in nature. A natural encoding of this (object-level) quantifier in the definition of {\sl seq} uses a (meta-level) $\nabla$-quantifier. In the case of proving an implication, the atomic assumption is maintained in a list (the second argument of {\sl seq}). The penultimate clause for {\sl seq} implements backchaining over a fixed $hH^2$\xspace specification (stored as {\sl prog} atomic formulas). The matching of atomic judgments to heads of clauses is handled by the treatment of definitions in the logic ${\cal G}$\xspace, thus the penultimate rule for {\sl seq} simply performs this matching and makes a recursive call on the corresponding clause body. 
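The operational reading of these clauses can be made concrete with a small sketch. The code below is illustrative only: it covers just the propositional fragment of the {\sl seq} clauses (the $\forall$/$\nabla$ clause and term-level unification are omitted, so {\sl prog} clauses are ground), and all names are invented for the sketch rather than taken from Abella's implementation.

```python
# Illustrative sketch: a propositional rendition of the height-indexed
# seq judgment.  prog maps each atom to a list of clause bodies, where a
# body is a G-formula or the constant TT.

TT = ("tt",)                               # the trivially true body

def atom(a):    return ("atom", a)         # <A>: injection of atoms
def conj(b, c): return ("and", b, c)       # B /\ C
def imp(a, b):  return ("imp", a, b)       # A => B, with A atomic

def seq(n, hyps, g, prog):
    """Is there a derivation of hyps |- g of height at most n?"""
    if g[0] == "atom":
        a = g[1]
        if a in hyps:                      # seq N L <A> from member A L
            return True
        if n == 0:
            return False
        # backchaining: seq (s N) L <A> from prog A b and seq N L b
        return any(body == TT or seq(n - 1, hyps, body, prog)
                   for body in prog.get(a, []))
    if n == 0:
        return False
    if g[0] == "and":                      # both conjuncts at smaller height
        return seq(n - 1, hyps, g[1], prog) and seq(n - 1, hyps, g[2], prog)
    if g[0] == "imp":                      # the assumption joins the context
        return seq(n - 1, hyps | {g[1]}, g[2], prog)
    raise ValueError("unknown formula: %r" % (g,))

def provable(hyps, g, prog, bound=50):
    """L ||- G: the existential height quantifier as a bounded search."""
    return any(seq(n, frozenset(hyps), g, prog) for n in range(bound + 1))
```

The `provable` wrapper mirrors the existential quantification over the natural-number argument in $L \Vdash G$ by searching over heights up to a bound.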
With this kind of encoding, we can now formulate and prove in ${\cal G}$\xspace statements about what is or is not provable in $hH^2$\xspace. Induction over the height of derivations may be needed in such arguments, and this can be realized via natural number induction on $n$ in $\seq n L P$. Furthermore, the $\hsl {def}\mathcal{L}$ rule encodes case analysis in the derivation of an atomic goal, leading eventually to a consideration of the different ways in which an atomic judgment may have been inferred in the specification logic. Abella is designed to hide many of the details of how the {\sl seq} and {\sl prog} specifications work and to reflect instead the aggregate structure described here. Since we have encoded the entire specification logic, we can prove general properties about it in ${\cal G}$\xspace that can then be used in reasoning about particular specifications. In Abella, various such specification logic properties can be invoked either automatically or through the use of tactics. For example, the following property, which is provable in ${\cal G}$\xspace, states that the judgment $\ell \!\Vdash\! g$ is not affected by permuting, contracting, or weakening the context of hypothetical assumptions $\ell$. \begin{tabbing}\qquad $\forall \ell_1, \ell_2, g. (\ell_1 \!\Vdash\! g) \land (\forall e . \member e \ell_1 \supset \member e \ell_2) \supset (\ell_2 \!\Vdash\! g)$ \end{tabbing} This property can be applied to any specification judgment that uses hypothetical assumptions. Using it with the encoding of typing judgments for the simply typed $\lambda$-calculus, for example, we easily obtain that permuting, contracting, or weakening the typing context of a typing judgment does not invalidate that judgment. Two additional properties of our specification logic which are useful and provable in ${\cal G}$\xspace are called the {\em instantiation} and {\em cut} properties.
The instantiation property recovers the notion of universal quantification from our representation of the specification logic $\forall$ using $\nabla$. The exact property is \begin{tabbing}\qquad $\forall \ell, g. (\nabla x. (\ell\; x) \!\Vdash\! (g\; x)) \supset \forall t. (\ell\; t) \!\Vdash\! (g\; t).$ \end{tabbing} Stated another way, although $\nabla$ quantification cannot be replaced by $\forall$ quantification in general, it can be replaced in this way when dealing with specification judgments. The cut property allows us to remove hypothetical judgments using a proof of such judgments. This property is stated as the formula \begin{tabbing} \qquad $\forall \ell_1, \ell_2, a, g. (\ell_1 \!\Vdash\! \langle a\rangle) \land (a :: \ell_2 \!\Vdash\! g) \supset (\ell_1, \ell_2 \!\Vdash\! g),$ \end{tabbing} which can be proved in ${\cal G}$\xspace: here, $\ell_1, \ell_2$ denotes the appending of two contexts. As a concrete example, we can again take our specification of simply typed $\lambda$-calculus and use the instantiation and cut properties to establish a type substitution property, {\em i.e.}, if $\Gamma_1, x:a \vdash m : b$ and $\Gamma_2 \vdash n : a$ then $\Gamma_1, \Gamma_2 \vdash m[x := n] : b$. \lead{Encoding properties of specifications in definitions} Definitions were used above to encode the specification logic and also particular specifications in ${\cal G}$\xspace. There is another role for definitions in Abella: they can be used also to capture implicit properties of a specification that are needed in a reasoning task. As an example, consider the encoding of type assignment. Here, the instances of $(\seq N L G)$ that arise all have $L$ bound to a list of entries of the form $(\of x t)$ where $x$ is a nominal constant that is, moreover, different from all other such constants appearing in $L$. Observing these properties is critical to proving the uniqueness of type assignment. 
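As an aside on the shape of the contexts that arise here: when the typing clause for abstractions is backchained upon, the object-level $\forall$ becomes a $\nabla$ and the implication pushes a typing assumption onto the context, so a derivation for a term with two nested abstractions passes through instances such as the following (the particular variable names are our own illustrative choices).

```latex
% x1 and x2 are distinct nominal constants introduced by successive
% uses of the seq clause for the object-level universal quantifier.
\begin{tabbing}
\qquad $\seq N {((\of {x_2} {a_2}) :: (\of {x_1} {a_1}) :: nil)}
               {\langle \of {(r\; x_1\; x_2)} b \rangle}$
\end{tabbing}
```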
Towards this end, we may define a predicate {\sl cntx} via the following clauses: \begin{tabbing} \qquad\=\kill \>$\cntx nil \triangleq \top$\qquad\qquad $(\nabla x. \cntx ( (\of x T)::L)) \triangleq \cntx L$ \end{tabbing} Reasoning within ${\cal G}$\xspace, it can now be shown that the list $L$ in every $(\seq N L G)$ atom arising in such a proof satisfies the property expressed by {\sl cntx} and, further, that if $L$ satisfies this property then the uniqueness of type assignment is guaranteed. \lead{Induction on definitions} The logic ${\cal G}$\xspace supports induction only over natural numbers. Thus the definitions of {\sl element} and {\sl seq} in Figure~\ref{fig:seq} both make use of a natural number argument to provide a target for induction. In Abella, such arguments are unnecessary since the system implicitly assigns such an additional argument to all definitions. Hence, when we refer to induction over a definition we mean induction on the implicit natural number argument of that definition.
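Returning to the type assignment example, the uniqueness property mentioned above might be stated as the following formula; this is our own rendering, and the formulation in the Abella distribution may differ in detail. Its proof can proceed by induction over the definition of {\sl seq} in the sense just described.

```latex
% Uniqueness of type assignment, assuming the context satisfies cntx.
\begin{tabbing}
\qquad $\forall L, M, A, B.\; \cntx L \land \ctxconc{L}{\of M A}
        \land \ctxconc{L}{\of M B} \supset A = B$
\end{tabbing}
```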
\section{Example: Normalizability in the Typed \texorpdfstring{$\lambda$-}{Lambda }Calculus} \label{sec:example} \begin{figure}[t] \begin{tabbing} \qquad\=\kill \> $\forall a, r [\val (\abs a r)]$\\[6pt] \> $\forall m, n, m' [ \step m m' \supset \step {(\app m n)} (\app {m'} n) ]$\\ \> $\forall m, n, n' [ \val m \land \step n n' \supset \step {(\app m n)} (\app m n') ]$\\ \> $\forall a, r, m [ \val m \supset \step {(\app {(\abs a r)} m)} (r\; m)]$\\[6pt] \> $\forall m [\steps m m]$ \qquad\qquad\= $\forall m, n, p [\step m p \land \steps p n \supset \steps m n]$\\[6pt] \> $\type i$ \> $\forall a, b [\type a \land \type b \supset \type (\arr a b)]$\\[6pt] \> $\forall a, b, m, n[\of m (\arr a b) \land \of n a \supset \of{(\app m n)} b]$\\ \> $\forall a, b, r[\type a \land \forall x[\of x a \supset \of{(r \; x)}{b}] \supset \of{(\abs a r)}{(\arr a b)}]$ \end{tabbing} \vspace{-0.25cm} \caption{Specification of simply-typed $\lambda$-calculus} \label{fig:spec} \end{figure} In order to illustrate the strengths and weaknesses of Abella, we detail in this section a proof of normalizability for the call-by-value, simply typed $\lambda$-calculus (sometimes also called ``weak normalizability''). We follow here the proof presented in \cite{pierce02book}. Stronger results are possible for the full, simply typed $\lambda$-calculus, but the one at hand suffices to expose the interesting reasoning techniques. The proof under consideration is based on Tait's logical relations argument \cite{tait67jsl} and makes use of simultaneous substitutions. Figure~\ref{fig:spec} contains the specification of call-by-value evaluation and of simple typing for the $\lambda$-calculus. Values are recognized by the predicate {\sl value}. Small-step evaluation is defined by {\sl step}, and a possibly zero length sequence of small steps is defined by {\sl steps}. The predicate {\sl type} recognizes well-formed types, and {\sl of} defines the typing rules of the calculus. 
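Before examining the {\sl of}\/ predicate, it may help to see the evaluation rules in action on a tiny example of our own choosing: the identity function applied to a value.

```latex
% The argument is a value by the first clause, so the beta rule (the
% fourth clause for step) applies; the meta-level application (r m)
% carries out the object-level substitution, a benefit of lambda-tree
% syntax.
\begin{tabbing}
\qquad $\step {(\app {(\abs a (\lambda x.\, x))} {(\abs b (\lambda y.\, y))})}
              {(\abs b (\lambda y.\, y))}$
\end{tabbing}
```

Here $(\lambda x.\, x)$ applied to $(\abs b (\lambda y.\, y))$ reduces at the meta level to $(\abs b (\lambda y.\, y))$, and a {\sl steps} derivation chains such individual steps using the reflexivity clause.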
A noteworthy aspect of the specification of the {\sl of} predicate is that it uses the {\sl type} predicate to ensure that types mentioned in abstraction terms are well-formed: a fact used in later arguments. \ignore{ This is necessary to do because later arguments will depend on the precise shape of this type component. One deviation of this specification from its standard presentation is that the {\sl of} predicate refers to the {\sl type} predicate to ensure that types mentioned in abstraction terms are well-formed. This is required since the object-logic is viewed as untyped from the perspective of our meta-logic, and since we will later require very precise information about the objects being manipulated. } The goal of this section is to prove weak normalizability, which we can now state formally in our meta-logic as follows: \begin{tabbing} \qquad $\forall M, A. \conc{\of M A} \supset \exists V . \conc{\steps M V} \land \conc{\val V}.$ \end{tabbing} The rest of this section describes definitions and lemmas necessary to prove this formula. In general, almost all results in this section have simple proofs based on induction, case analysis, applying lemmas, and building results from hypotheses. For such proofs, we will omit the details except to note the inductive argument and key lemmas used. The full details of this development are available in the software distribution of Abella. \lead{Evaluation and typing} Definitions can be used in Abella to introduce useful intervening concepts. One such concept is that of halting. We say that a term $M$ halts if it evaluates to a value in finitely many steps and we define a predicate capturing this notion as follows: \begin{tabbing} \qquad $\halts M \triangleq \exists V . \conc {\steps M V} \land \conc{\val V}.$ \end{tabbing} An important property of halting is that it is invariant under evaluation steps (both forward and backward).
Using the abbreviation $F \equiv G$ for $(F \supset G) \land (G \supset F)$, we can state this property formally as \begin{tabbing} \qquad $\forall M, N. \conc{\step M N} \supset (\halts M \equiv \halts N).$ \end{tabbing} This result is immediate in the backward direction, {\em i.e.}, $\halts N \supset \halts M$. In the forward direction it requires showing that one step of evaluation is deterministic: \begin{tabbing} \qquad $\forall M, N, P. \conc{\step M N} \land \conc{\step M P} \supset N = P.$ \end{tabbing} This formula is proved by induction on the height of the derivation of either one of the judgments involving the {\sl step} predicate. A standard result in the $\lambda$-calculus, which we will need later, is that one step of evaluation preserves typing. This is stated formally as \begin{tabbing} \qquad $\forall M, N, A. \conc{\step M N} \land \conc{\of M A} \supset \conc{\of N A}.$ \end{tabbing} The proof of this formula uses induction on the height of the derivation of the judgment involving the {\sl step} predicate. An interesting case in this proof is when $\step M N$ is $\step {(\app {(\abs B R)} P)} (R\; P)$ for some $B$, $R$, and $P$, {\em i.e.}, when $\beta$-reduction is performed. Deconstructing the typing judgment \begin{tabbing} \qquad $\conc{\of {(\app {(\abs B R)} P)} A}$ \end{tabbing} we can deduce that $\conc{\of P B} $ and $\ctxconc{(\of x B) :: nil}{\of {(R\; x)} A}$ where $x$ is a nominal constant. Here we use the instantiation property of our specification logic to replace $x$ with $P$ yielding $\ctxconc{(\of P B) :: nil}{\of {(R\; P)} A}$. Next we apply the cut property of our specification logic to deduce $\conc{\of {(R\; P)} A}$ which is our goal. Finally, we note that the contexts which are constructed during the proof of a typing judgment always have the form $(\of {x_1} {a_1}) :: \ldots :: (\of {x_n} {a_n}) :: nil$ where the $x_i$'s are distinct nominal constants and the $a_i$'s are valid types. 
We introduce the following formal definition of {\sl cntx} to exactly describe such contexts: \begin{tabbing} \qquad $\cntx nil \triangleq \top$ \qquad\qquad $(\nabla x. \cntx ((\of x A) :: L)) \triangleq \conc{\type A} \land \cntx L$ \end{tabbing} Note, $\nabla$ in the definition head ensures that the $x_i$'s are distinct nominal constants. \lead{The logical relation} The difficulty with proving weak normalizability directly is that the halting property is not closed under application, {\em i.e.}, $\halts M$ and $\halts N$ does not imply $\halts (\app M N)$. Instead, we must strengthen the halting property to one which includes a notion of closure under application. We define the logical relation {\sl reduce} by induction over the type of a term as follows: \begin{tabbing} \qquad\=$\reduce M (\arr A B)$ \= \kill \> $\reduce M i $ \> $\triangleq \conc{\of M i} \land \halts M$ \\ \> $\reduce M (\arr A B)$ \> $\triangleq$ \= $\conc{\of M (\arr A B)} \land \halts M \land {}$ \\ \>\>\> $\forall N. (\reduce N A \supset \reduce {(\app M N)} B)$ \end{tabbing} Note that {\sl reduce} is defined with a negative use of itself. Such a usage is permitted in ${\cal G}$\xspace only if there is a stratification condition that ensures that there are no logical cycles in the definition. In this case, the condition to use is obvious: the second argument to {\sl reduce} decreases in size in the recursive use. Like {\sl halts}, the {\sl reduce} relation is preserved by evaluation: \begin{tabbing} \qquad $\forall M, N, A. \conc{\step M N} \land \conc{\of M A} \supset (\reduce M A \equiv \reduce N A).$ \end{tabbing} This formula is proved by induction on the definition of {\sl reduce}, using the lemmas that {\sl halts} is preserved by evaluation and {\sl of}\/ is preserved by evaluation. Clearly {\sl reduce} is closed under application and it implies the halting property, thus we strengthen our desired weak normalizability result to the following: \begin{tabbing} \qquad $\forall M, A. 
\conc{\of M A} \supset \reduce M A.$ \end{tabbing} In order to prove this formula we will have to induct on the height of the proof of the judgment $\conc{\of M A}$. However, when we consider the case that $M$ is an abstraction, we will not be able to use the inductive hypothesis on the body of $M$ since {\sl reduce} is defined only on closed terms, {\em i.e.}, those typeable in the empty context. The standard way to deal with this issue is to generalize the desired formula to say that if $M$, a possibly open term, has type $A$ then each closed instantiation for all the free variables in $M$, say $N$, satisfies $\reduce N A$. This requires a formal description of simultaneous substitutions that can ``close'' a term. \lead{Arbitrary cascading substitutions and freshness} Given $\ctxconc{L}{\of M A}$, {\em i.e.}, an open term and its typing context, we define a process of substituting each free variable in $M$ with a value $V$ which satisfies the logical relation for the appropriate type. We define this {\sl subst} relation as follows: \begin{tabbing} \qquad\=$(\nabla x.$ \= $\subst {((\of x A) :: L)} {(R\; x)} M)$ \=\kill \> \> $\subst {nil} M M \triangleq \top$ \\ \> $(\nabla x.$ \> $\subst {((\of x A) :: L)} {(R\; x)} M) \triangleq$ \\ \> \hspace{3cm} $\exists V .~ \reduce V A \land \conc{\val V} \land \subst L {(R\; V)} M$ \end{tabbing} By employing $\nabla$ in the head of the second clause, we are able to use the notion of substitution in the meta-logic to directly and succinctly encode substitution in the object language. Also note that we are, in fact, defining a process of cascading substitutions rather than simultaneous substitutions. Since the substitutions we define (using closed terms) do not affect each other, these two notions of substitution are equivalent. We will have to prove some part of this formally, of course, which in turn requires proving results about the (non)occurrences of nominal constants in our judgments. 
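To make the {\sl subst} definition concrete, consider unfolding it for a singleton context; writing things out (in our own notation), the two clauses yield the following equivalence.

```latex
% One cascading substitution step followed by the base case, which
% forces M to be the instantiated body R applied to the chosen value.
\begin{tabbing}
\qquad $(\nabla x.\; \subst {((\of x A) :: nil)} {(R\; x)} M) \;\equiv\;
  \exists V.\; \reduce V A \land \conc{\val V} \land M = (R\; V)$
\end{tabbing}
```

That is, $M$ is the result of substituting a single reducibility candidate $V$ for the free variable represented by $x$.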
The results in this section are often assumed in informal proofs. One consequence of defining cascading substitutions via the notion of substitution in the meta-logic is that we do not get to specify where substitutions are applied in a term. In particular, given an abstraction $\abs A R$ we cannot preclude the possibility that a substitution for a nominal constant in this term will affect the type $A$. Instead, we must show that well-formed types cannot contain free variables which can be formalized as $\forall A. \nabla x. \conc{\type (A\; x)} \supset \exists B.~ A = \lambda y. B$. This formula essentially states that any well-formed type which possibly depends on a nominal constant $x$ must depend on it only in a vacuous way. The above result about types assumes that judgments concerning {\sl type} occur in an empty context. Now, such judgments actually enter the picture through uses of the specification logic rule for {\sl of}\/ that deals with the case of abstractions. This means that we have to consider judgments involving {\sl type} that have a context meant to be used in judgments involving the {\sl of} predicate. To use the result we have just established, we must show that these contexts can be ignored. We formalize this as $\forall L, A.~\cntx L \land \ctxconc{L}{\type A} \supset \conc{\type A}$, a formula that can be proved using induction on the proof of the judgment $\ctxconc{L}{\type A}$. In the base case we must establish $\forall L, A.~\cntx L \land \member {(\type A)} L \supset \bot$, which is proved by induction on the proof of {\sl member}. Another necessary result is that in any provable judgment of the form $\ctxconc{L}{\of M A}$, any nominal constant (denoting a free variable) in $M$ must also occur in $L$, {\em i.e.}, \begin{tabbing} \qquad $\forall L, R, A. \nabla x.~ \cntx L \land \ctxconc{L}{\of {(R\; x)} (A\; x)} \supset \exists M.~ R = \lambda y. 
M$ \end{tabbing} The proof is by induction on the height of the derivation of the judgment involving {\sl of}. In the base case, we need that an element of a list cannot contain any nominal constant which does not occur in the list, {\em i.e.}, $\forall L, E. \nabla x. ~\member {(E\; x)} L \supset \exists F.~ E = \lambda y. F$. This formula is proved by induction on {\sl member}. We next show that typing judgments produce well-formed types by proving \begin{tabbing} \qquad $\forall L, M, A.~ \cntx L \land \ctxconc{L}{\of M A} \supset \conc{\type A}.$ \end{tabbing} The induction here is on the height of the derivation of the judgment involving {\sl of} and the base case is $\forall L, M, A.~ \cntx L \land \member {(\of M A)} L \supset \conc{\type A}$, which is proved by a simple induction on {\sl member}. Given our repertoire of results about the occurrences of nominal constants in judgments, we can now prove fundamental properties of arbitrary cascading substitutions. The first property states that closed terms, those typeable in the empty context, are not affected by substitutions, {\em i.e.}, \begin{tabbing}\qquad $\forall L, M, N, A.~ \conc{\of M A} \land \subst L M N \supset M = N.$ \end{tabbing} The proof here is by induction on {\sl subst} which corresponds to induction on the length of the list $L$. The key step within the proof is using the lemma that any nominal constant in the judgment $\conc{\of M A}$ must also be contained in the context of that judgment. Since the context is empty in this case, there are no nominal constants in $M$ and thus the substitutions from $L$ do not affect it. We must show that our cascading substitutions act compositionally on terms in the object $\lambda$-calculus. This is stated formally for application as follows: \begin{tabbing} \qquad $\forall L, M, N, R . $ \= $\cntx L \land \subst L {(\app M N)} R \supset$ \\ \> $\exists M', N'.~ R = \app {M'} N' \land \subst L M M' \land \subst L N N'$. 
\end{tabbing} This is proved by induction on {\sl cntx}, which amounts to induction on the length of the list $L$. For abstractions we prove the following, also by induction on {\sl cntx}: \begin{tabbing} \qquad $\forall L, M, R, A .~ \cntx L \land \subst L {(\abs A M)} R \land \conc{\type A} \supset$ \\ \qquad\qquad\qquad $\exists M'.~ R = \abs A M' \land (\forall V.$ \= $\reduce V A \land \conc{\val V} \supset$ \\ \> $\nabla x.~ \subst {((\of x A) :: L)} {(M\; x)} (M'\; V))$. \end{tabbing} Here we have the additional hypothesis of $\conc{\type A}$ to ensure that the substitutions created from $L$ do not affect $A$. At one point in this proof we have to show that the order in which cascading substitutions are applied is irrelevant. The key to showing this is realizing that all substitutions are for closed terms. Since closed terms cannot contain any nominal constants, substitutions do not affect each other. Finally, we must show that cascading substitutions preserve typing. Moreover, after applying a full cascading substitution for all the free variables in a term, that term should now be typeable in the empty context: \begin{tabbing} \qquad $\forall L, M, N, A.~\cntx L \land \subst L M N \land \ctxconc{L}{\of M A} \supset \conc{\of N A}.$ \end{tabbing} This formula is proved by induction on {\sl cntx} and by using the instantiation and cut properties of our specification logic. \lead{The final result} Using cascading substitutions we can now formalize the generalization of weak normalizability that we described earlier: given a (possibly open) well-typed term, every closed instantiation for it satisfies the logical relation {\sl reduce}\/: \begin{tabbing} \qquad $\forall L, M, N, A .~ \cntx L \land \ctxconc{L}{\of M A} \land \subst L M N \supset \reduce N A.$ \end{tabbing} The proof of this formula is by induction on the height of the derivation of the typing judgment $\ctxconc{L}{\of M A}$. 
The inductive cases are fairly straightforward using the compositional properties of cascading substitutions and various results about invariance under evaluation. In the base case, we must prove \begin{tabbing} \qquad $\forall L, M, N, A.~ \cntx L \land \member {(\of M A)} L \land \subst L M N \supset \reduce N A,$ \end{tabbing} which is done by induction on {\sl cntx}. Weak normalizability is now a simple corollary where we take $L$ to be $nil$. Thus we have proved $\forall M, A. \conc{\of M A} \supset \halts M$. \section{The Logical Foundation} \label{sec:logic} The logic ${\cal G}$\xspace \cite{gacek08lics} which we use to formalize arguments about structural operational semantics is based on an intuitionistic and predicative subset of Church's Simple Theory of Types. Terms in ${\cal G}$\xspace are monomorphically typed and are constructed using abstraction and application from constants and (bound) variables. The provability relation concerns terms of the distinguished type $o$ that are also called formulas. Logic is introduced by including special constants representing the propositional connectives $\top$, $\bot$, $\land$, $\lor$, $\supset$ and, for every type $\tau$ that does not contain $o$, the constants $\forall_\tau$ and $\exists_\tau$ of type $(\tau \rightarrow o) \rightarrow o$. The binary propositional connectives are written as usual in infix form and the expression $\forall_\tau x. B$ ($\exists_\tau x. B$) abbreviates the formula $\forall_\tau \lambda x.B$ (respectively, $\exists_\tau \lambda x.B$). Type subscripts are typically omitted from quantified formulas when their identities do not aid the discussion. The standard treatment of the universal quantifier accords it an extensional interpretation. When treating $\lambda$-tree syntax it is often necessary to give importance to the form of the argument for a statement like ``$B(x)$ holds for all $x$'' rather than focusing on whether or not every instance of $B(x)$ is true. 
The $\nabla$ quantifier \cite{miller05tocl} is used to encode such generic judgments. Specifically, we include the constants $\nabla_\tau$ of type $(\tau \rightarrow o) \rightarrow o$ for each type $\tau$ (not containing $o$). As with the other quantifiers, $\nabla_\tau x. B$ abbreviates $\nabla_\tau \lambda x. B$. The $FO\lambda^{\Delta\nabla}$\xspace logic \cite{miller05tocl} incorporates $\nabla$ quantification into a sequent calculus presentation of intuitionistic proof by attaching a local signature to every formula occurrence in a sequent. We are interested here in considering also proofs that use induction. In this situation, we are led naturally to including certain structural rules pertaining to local signatures \cite{tiu06lfmtp}. Written at the level of formulas, these are the $\nabla${\em -exchange rule} $\nabla x\nabla y. F \equiv \nabla y\nabla x. F$ and the $\nabla${\em -strengthening rule} $\nabla x. F \equiv F$, provided $x$ is not free in $F$. If we adopt these rules, we can make all local signatures equal and hence representable by an (implicit) global binder. We shall refer to these globally $\nabla$-bound variables as {\it nominal constants}. Intuitively, one can think of nominal constants as denoting arbitrary, unique names. Notice that the exchange rule requires us to consider atomic judgments as being identical if they differ by only permutations of nominal constants. The logic ${\cal G}$\xspace uses the above treatment of the $\nabla$ quantifier that was first introduced in the $LG^\omega$\xspace system \cite{tiu06lfmtp}. Specifically, an infinite collection of nominal constants are assumed for each type. The set of all nominal constants is denoted by $\mathcal{C}$. These constants are distinct from the collection of usual, non-nominal constants denoted by $\mathcal{K}$. We define the {\it support} of a term (or formula) $t$, written ${\rm supp}(t)$, as the set of nominal constants appearing in it. 
A permutation of nominal constants is a type preserving bijection $\pi$ from $\mathcal{C}$ to $\mathcal{C}$ such that $\{ x\ |\ \pi(x) \neq x\}$ is finite. Permutations are extended to terms (and formulas), written $\pi . t$, as follows: \begin{tabbing} \qquad\=\kill \> $\pi.a = \pi(a)$, if $a \in \mathcal{C}$ \qquad\qquad\= $\pi.c = c$ if $c\notin \mathcal{C}$ is atomic \\ \> $\pi.(\lambda x.M) = \lambda x.(\pi.M)$ \> $\pi.(M\; N) = (\pi.M)\; (\pi.N)$ \end{tabbing} \begin{figure*}[t] \[ \infer[id_\pi]{\Sigma : \Gamma, B \vdash B'}{\pi.B = \pi'.B'} \quad \infer[\hbox{\sl cut}]{\Sigma : \Gamma, \Delta \vdash C} {\Sigma : \Gamma \vdash B & \Sigma : B, \Delta \vdash C} \] \[ \begin{array}{cc} \noalign{\smallskip} \infer[\forall\mathcal{L}]{\Sigma : \Gamma, \forall_\tau x.B \vdash C} {\Sigma, \mathcal{K}, \mathcal{C} \vdash t : \tau & \Sigma : \Gamma, B[t/x] \vdash C} & \infer[\forall\mathcal{R},h\notin\Sigma] {\Sigma : \Gamma \vdash \forall x.B} {\Sigma, h : \Gamma \vdash B[h\ \bar{c}/x]} \\ \noalign{\smallskip} \infer[\nabla\mathcal{L},a\notin {\rm supp}(B)] {\Sigma : \Gamma, \nabla x. B \vdash C} {\Sigma : \Gamma, B[a/x] \vdash C} & \infer[\nabla\mathcal{R},a\notin {\rm supp}(B)] {\Sigma : \Gamma \vdash \nabla x.B} {\Sigma : \Gamma \vdash B[a/x]} \\ \noalign{\smallskip} \infer[\exists\mathcal{L},h\notin\Sigma] {\Sigma : \Gamma, \exists x. B \vdash C} {\Sigma, h : \Gamma, B[h\; \bar{c}/x] \vdash C} & \infer[\exists\mathcal{R}]{\Sigma : \Gamma \vdash \exists_\tau x.B} {\Sigma, \mathcal{K}, \mathcal{C} \vdash t:\tau & \Sigma : \Gamma \vdash B[t/x]} \end{array} \] \vspace{-0.25cm} \caption{The core rules of ${\cal G}$\xspace: the introduction rules for the propositional connectives are not displayed.} \label{fig:core-rules} \end{figure*} Figure \ref{fig:core-rules} presents a subset of the core rules for ${\cal G}$\xspace; the standard rules for the propositional connectives have been omitted for brevity. 
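As a small example of the $id_\pi$ rule, let $a$ and $b$ be nominal constants, let $p$ be some predicate constant, and let $\pi$ be the permutation that swaps $a$ and $b$ (with $\pi'$ taken to be the identity); then we have the following.

```latex
% Judgments that differ only by a permutation of nominal constants
% are interchangeable.
\begin{tabbing}
\qquad $\pi.(p\; a\; b) = p\; b\; a$, \quad so \quad
$\Sigma : \Gamma, p\; a\; b \vdash p\; b\; a$ \ is provable by $id_\pi$.
\end{tabbing}
```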
Sequents in this logic have the form $\Sigma : \Gamma \vdash C$ where $\Gamma$ is a set and the signature $\Sigma$ contains all the free variables of $\Gamma$ and $C$. In the rules, $\Gamma, F$ denotes $\Gamma \cup \{F\}$. In the $\nabla\mathcal{L}$ and $\nabla\mathcal{R}$ rules, $a$ denotes a nominal constant of appropriate type. In the $\exists\mathcal{L}$ and $\forall\mathcal{R}$ rules, $\bar{c}$ is a listing of the variables in ${\rm supp}(B)$ and $h\; \bar{c}$ represents the application of $h$ to these constants; raising is used here to encode the dependency of the quantified variable on ${\rm supp}(B)$ \cite{miller92jsc}. The judgment $\Sigma, \mathcal{K}, \mathcal{C} \vdash t : \tau$ that appears in the $\forall\mathcal{L}$ and $\exists\mathcal{R}$ rules enforces the requirement that the expression $t$ instantiating the quantifier in the rule is a well-formed term of type $\tau$ constructed from the variables in $\Sigma$ and the constants in ${\cal K} \cup {\cal C}$. Atomic judgments in ${\cal G}$\xspace are defined recursively by a set of clauses of the form $\forall \bar{x} .(\nabla \bar{z} . H) \triangleq B$: here $H$ is an atomic formula all of whose free variables are contained in either $\bar{x}$ or in $\bar{z}$ and $B$ is an arbitrary formula all of whose free variables are also free in $\nabla \bar{z}. H$. The atom $H$ is the {\em head} of such a clause and $B$ is its {\em body}. No nominal constant is permitted to appear in either of these formulas. A clause of this form provides part of the definition of a relation named by $H$ using $B$. The $\nabla$ quantifiers over $H$ may be instantiated by distinct nominal constants. The variables $\bar{x}$ that are bound by the $\forall$ quantifiers may be instantiated by terms that depend on any nominal constant except those chosen for the variables in $\bar{z}$. Certain auxiliary notions are needed in formalizing the rules for definitions in ${\cal G}$\xspace. 
A {\it substitution} $\theta$ is a type-preserving mapping from variables to terms such that the set $\{x\ |\ x\theta \neq x\}$, the {\em domain} of $\theta$, is finite. \ignore{ Although a substitution is extended to a mapping from terms to terms, formulas to formulas, {\em etc}, when we refer to its {\it domain} and {\it range}, we mean these sets for this most basic function. } A substitution is extended to a function from terms to terms in the usual fashion and we write its application using a postfix notation. If $\Gamma$ is a set of formulas then $\Gamma\theta$ is the set $\{J\theta\ |\ J \in \Gamma\}$. If $\Sigma$ is a signature then $\Sigma\theta$ is the signature that results from removing from $\Sigma$ the variables in the domain of $\theta$ and adding the variables that are free in the range of $\theta$. Given a clause $\forall x_1,\ldots,x_n . (\nabla \bar{z} . H) \triangleq B$, we define a version of it raised over the nominal constants $\bar{a}$ and away from a signature $\Sigma$ as \begin{tabbing} \qquad\=\kill \>$\forall \bar{h} . (\nabla \bar{z} . H[h_1\; \bar{a}/x_1, \ldots, h_n\; \bar{a}/x_n]) \triangleq B[h_1\; \bar{a}/x_1, \ldots, h_n\; \bar{a}/x_n],$ \end{tabbing} where $h_1,\ldots,h_n$ are distinct variables of suitable type that do not appear in $\Sigma$. Finally, given the sequent $\Sigma : \Gamma \vdash C$ and the nominal constants $\bar{c}$ that do not appear in the support of $\Gamma$ or $C$, let $\sigma$ be any substitution of the form \begin{tabbing} \qquad\=\kill \>$\{h'\; \bar{c}/h\ |\ h \in \Sigma\ \mbox{and}\ h'\ \mbox{is a variable of suitable type that is not in}\ \Sigma\}$. \end{tabbing} Then we call the sequent $\Sigma\sigma : \Gamma\sigma \vdash C\sigma$ a version of $\Sigma : \Gamma \vdash C$ raised over $\bar{c}$. 
\begin{figure}[t] \[ \infer[\kern-1pt\hsl {def}\mathcal{L}]{\Sigma : A, \Gamma \vdash C}{\{\Sigma'\theta : (\pi.B')\theta, \Gamma'\theta \vdash C'\theta\}} \qquad \infer[\kern-1pt\hsl {def}\mathcal{R}]{\Sigma : \Gamma \vdash A}{\Sigma' : \Gamma' \vdash (\pi . B')\theta} \] \vspace{-0.5cm} \caption{Rules for definitions} \label{fig:def} \end{figure} The introduction rules for atomic judgments based on definitions are presented in Figure~\ref{fig:def}. The $\hsl {def}\mathcal{L}$ rule has a set of premises that is generated by considering each definitional clause of the form $\forall \bar{x}.(\nabla \bar{z}. H) \triangleq B$ in the following fashion. Let $\bar{c}$ be a list of distinct nominal constants equal in length to $\bar{z}$ such that none of these constants appear in the support of $\Gamma$, $A$ or $C$ and let $\Sigma' : A', \Gamma' \vdash C'$ denote a version of the lower sequent raised over $\bar{c}$. Further, let $H'$ and $B'$ be obtained by taking the head and body of a version of the clause being considered raised over $\bar{a} = {\rm supp}(A)$ and away from $\Sigma'$ and applying the substitution $[\bar{c}/\bar{z}]$ to them. Then the set of premises arising from this clause are obtained by considering all permutations $\pi$ of $\bar{a}\bar{c}$ and all substitutions $\theta$ such that $(\pi. H')\theta = A'\theta$, with the proviso that the range of $\theta$ may not contain any nominal constants. The $\hsl {def}\mathcal{R}$ rule, by contrast, has exactly one premise that is obtained by using any one definitional clause. $B'$ and $H'$ are generated from this clause as in the $\hsl {def}\mathcal{L}$ case, but $\pi$ is now taken to be any one permutation of $\bar{a}\bar{c}$ and $\theta$ is taken to be any one substitution such that $(\pi . H')\theta = A'$, again with the proviso that the range of $\theta$ may not contain any nominal constants. 
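As a simple illustration of how $\hsl {def}\mathcal{L}$ supports case analysis, consider a hypothesis $\member A (B :: nil)$ under the clauses of Figure~\ref{fig:seq}; the following consequence is then derivable (our own formulation of the aggregate behavior).

```latex
% Unfolding member exposes (element n A (B :: nil)). Applying defL to
% this atom yields a premise in which A is unified with B (first clause
% of element) and a premise containing (element n' A nil), which matches
% no clause head and is therefore closed by a further use of defL.
\begin{tabbing}
\qquad $\forall A, B.\; \member A {(B :: nil)} \supset A = B$
\end{tabbing}
```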
Some of the expressiveness arising from the quantificational structure permitted in definitions in ${\cal G}$\xspace is demonstrated by the following definitional clauses: \begin{tabbing} \qquad $(\nabla x. \name x) \triangleq \top$ \qquad\qquad $\forall E. (\nabla x. \fresh x E) \triangleq \top$ \end{tabbing} The $\nabla$ quantifier in the first clause ensures that {\sl name} holds only for nominal constants. Similarly, the relative scopes of $\forall$ and $\nabla$ in the second clause force {\sl fresh} to hold only between a nominal constant and a term not containing that constant. \clearpage When ${\cal G}$\xspace is used in applications, bound variables in syntactic objects will be represented either explicitly, by term-level $\lambda$-bound variables, or implicitly, by nominal constants. The equivariance principle for nominal constants realizes alpha convertibility in the latter situation. Encoding bound variables by $\lambda$-terms ensures that substitution is built-in and that dependencies of subterms on bindings are controlled; specific dependencies can be realized by using the device of raising. Definitions with $\nabla$ in the head allow for a similar control over dependencies pertaining to nominal constants and raising can be used to similar effect with these as well. \ignore{ As it is stated, the set of premises in the $\hsl {def}\mathcal{L}$ rule arising from any one definitional clause is potentially infinite because of the need to consider every unifying substitution. It is possible to restrict these substitutions instead to the members of a complete set of unifiers. In the situations where there is a single most general unifier, as is the case when we are dealing with the higher-order pattern fragment \cite{miller91jlc}, the number of premises arising from each definition clause is bounded by the number of permutations. In practice, this number can be quite small.
} The consistency of ${\cal G}$\xspace requires some kind of stratification condition to govern the possible negative uses of predicates in the body of definitions. There are several choices for such a condition. Rather than picking one in an {\it a priori} fashion, we will note relevant such conditions as needed. The final capability of interest is induction over natural numbers. These numbers are encoded in ${\cal G}$\xspace using the type $nt$ and the constructors $z : nt$ and $s : nt \to nt$. Use of induction is controlled by the distinguished predicate $\hbox{\sl nat} : nt \to o$ which is treated by specific introduction rules. In particular, the left introduction rule for {\sl nat} corresponds to natural number induction. \section{Assessment and Future Work} \label{sec:future-work} The Abella system has been tested with several prototypical examples; details are available with the system distribution. These experiments indicate considerable promise for the two-level logic based approach in reasoning about formal systems. However, the experiments have also revealed some issues with Abella at a practical level. We discuss these below and suggest work aimed at addressing them. \ignore{ We use this section to assess our style of reasoning within a two-level logical system, specifically focusing on the current weaknesses. We fold this into a corresponding discussion of future work to overcome these issues. } \lead{Base case lemmas} Every lemma whose proof uses induction on a specification logic judgment with a non-empty context requires another lemma to be proved for the base case where that judgment follows because it is in the context. This creates mundane overhead. The work in these base case lemmas consists of a simple induction over the length of the context. Support for richer tactics for induction on specification judgments might lead to more user friendly behavior in such cases. 
\lead{Types in specifications} The specification logic is embedded as an untyped logic in ${\cal G}$\xspace. This is usually not an issue: specification logic judgments themselves impose type restrictions on terms. For example, the typing judgment $\of M A$ holds only if $M$ is a $\lambda$-term. However, sometimes explicit type judgments---such as the judgment {\sl type} for recognizing well-formed simple types---are required in specifications. One possibility being considered for addressing this typing issue is to have an implementation such as Abella automatically generate recognizer predicates based on type information. These predicates could then be implicitly attached to all declarations of meta-level variables. \lead{Different specification logics} Currently, Abella has built into it exactly one specification language ($hH^2$\xspace) and exactly one proof system for it (uniform proofs). Certain application areas might benefit from having other proof systems for intuitionistic logic available as well as other specification logics. For example, linear logic specification languages \cite{hodas94ic,miller96tcs} can be used to provide declarative specifications of the operational semantics of programming languages that contain features such as references, exceptions, and concurrency. Thus, McDowell and Miller \cite{mcdowell02tocl} presented a {\sl seq}-like predicate for a subset of intuitionistic linear logic that they used to specify the operational semantics of a simple functional language extended with references and to then prove a subject-reduction theorem for that language. It would be natural to consider extending the specification logic in Abella to be all of intuitionistic linear logic (or, in fact, all of linear logic) since this would enhance that logic's expressiveness a great deal. 
Such an extension could be designed so that if a given specification did not employ the novel linear logic connectives, then the encoding of {\em seq} would modularly revert back to that of intuitionistic logic. \section{Introduction} \label{sec:intro} This paper concerns reasoning about the descriptions of systems that manipulate formal objects such as programs and their specifications. A common approach to modelling the dynamic and static semantics of these systems is to use a syntax-driven rule-based presentation. These presentations can be naturally encoded as theories within a simple, intuitionistic logic. If the intuitionistic logic supports $\lambda$-terms and the quantification of variables ranging over such terms, then it also provides a convenient means for capturing binding notions in the syntactic objects of interest; in particular, it facilitates the use of the $\lambda$-tree approach to abstract syntax. A further benefit to using such a logic to encode semantic specifications is that an immediate and effective animation of them is provided by logic programming systems such as $\lambda$Prolog \cite{nadathur88iclp} and Twelf \cite{pfenning99cade}. Given a logic-based specification of a formal system, establishing properties of the system reduces to answering questions about what is provable in the logic encoding the specification. Different approaches can be adopted for this task. At one end, the specification logic can be formalized and reasoned about within a general purpose theorem-proving framework such as that provided by Coq \cite{bertot04book} or Isabelle \cite {nipkow02book}. At the other end, one can develop another logic, often called a {\em meta-logic}, that is explicitly tuned to reasoning about the specification logic. It is the latter approach that we examine here. In particular, we expose its practical use within the context of a specific theorem-proving system called Abella \cite{gacek08ijcar}. 
The design of a logic that can act as a powerful and expressive meta-logic has been the subject of much recent research \cite{baelde07cade,gacek08lics,mcdowell02tocl,miller05tocl,tiu06lfmtp}. The logics emanating from these studies share a common theme: they all provide recursive definitions as a means for encoding specification logics and some form of generic reasoning for modelling binding notions at the meta level. We expose here an expressive and flexible logic called ${\cal G}$\xspace within this framework. Abella is based on ${\cal G}$\xspace but also provides special support for the ways in which ${\cal G}$\xspace is intended to be used in meta-reasoning tasks. Our presentation pays attention to the novel features of both ${\cal G}$\xspace and Abella from this perspective. Concreteness is provided by considering proofs of evaluation, typing, and normalization properties of the $\lambda$-calculus. This paper is organized as follows. The logic ${\cal G}$\xspace is summarized in Section~\ref{sec:logic} and its particular realization in Abella is discussed in Section~\ref{sec:arch}. Section~\ref{sec:example} illustrates the use of Abella in a significant theorem-proving task, that of formalizing a Tait-style proof of normalizability in the $\lambda$-calculus. Section~\ref{sec:future-work} points out limitations of the currently implemented system. Finally, in Section~\ref{sec:related} we compare Abella-style reasoning with some other approaches to the same kind of reasoning tasks. \section{Related Work}\label{sec:related} \lead{Nominal logic approach} The Nominal package for Isabelle/HOL automates a process of defining and proving standard results about $\alpha$-equivalence classes \cite{urban05cade}. This allows for formal reasoning over objects with binding which is close to informal reasoning. 
One drawback of the nominal approach is that it does not provide a notion of substitution, and thus users must define their own substitution function and prove various properties relating to it. A proof of weak normalizability for the simply typed $\lambda$-calculus has been conducted with the nominal package \cite{narboux08nominal}, and in this case a notion of simultaneous substitution is used. For the nominal approach, this extended notion of substitution can be defined directly since one works with $\alpha$-equivalence classes and not higher-order terms as in our case. Additionally, the cost of defining and reasoning about simultaneous substitution is not a significant step up from what is already required for standard substitution in the nominal approach. The specification language for the nominal package consists of functions and predicates over $\alpha$-equivalence classes. This language does not have a built-in notion of hypothetical judgments, which are typically useful for describing structural rules over objects with binding. For example, by encoding the simply typed $\lambda$-calculus in our specification language using hypothetical judgments for typing assumptions, we derive a type substitutivity property as a consequence of general instantiation and cut properties of the logic; see Section \ref{sec:arch}. In the nominal approach, such a proof must be conducted manually. \lead{Twelf} The Twelf system \cite{pfenning99cade} uses LF terms and types for a specification language \cite{harper93jacm} and the meta-logic ${\cal M}_2^{+}$ \cite{schurmann00phd} for reasoning. The primary difference between the Twelf approach and ours is that the ${\cal M}_2^{+}$ meta-logic is relatively weak in expressive power. For instance, it is restricted to $\Pi_2$ formulas ({\em i.e.}, $\forall\exists$ formulas) and lacks logical connectives such as conjunction, disjunction, and implication. 
Despite these restrictions, the meta-logic is expressive enough for most common reasoning tasks and has been very successful in practice. Another significant difference is that ${\cal M}_2^{+}$ is designed with an inherent notion of a global hypothetical context. Thus the meta-logic builds in some notion of which judgments can depend on assumptions of other judgments. This is less of a concern in our approach since each judgment has its own local context. Due to the $\Pi_2$ restriction of the meta-logic ${\cal M}_2^{+}$, it is not possible to encode a direct proof of weak normalizability for the simply typed $\lambda$-calculus using a logical relations argument. Recently, however, an indirect proof was completed using an intermediate {\em assertion logic} which has enough richness to encode the proper logical relation \cite{schurmann08lics}. This is a useful technique for extending the expressive power of the Twelf system, but it comes with the cost of moving from a two-level logic approach to a three-level logic approach. \lead{Locally nameless} The locally nameless representation for syntactic objects with binding is a first-order approach using de Bruijn indices for bound variables and names for free variables. This balance between two representational techniques has been used successfully in practice \cite{aydemir08popl}. Our approach to representation can be seen as a meta-level version of this balance where we use (meta-level) $\lambda$-terms to represent explicitly bound variables and (meta-level) nominal constants for implicitly bound variables ({\em i.e.}, free variables). With this understanding, the trade-off between the first-order and meta-level approaches to bound/free variable representation is that the former works with existing theorem provers while the latter has substitution and equivariance built-in.
\section{Introduction} The Galactic center diffuse X-rays (GCDX) exhibit many K-shell lines from highly ionized atoms, such as the K$\alpha$, K$\beta$ and K$\gamma$ lines of He-like (Fe\emissiontype{XXV}) and H-like (Fe\emissiontype{XXVI}) iron, and the K$\alpha$ line from He-like nickel (Ni\emissiontype{XXVII}) (e.g. \cite{Ko07d}). These K-shell lines carry key information about the plasma physics. The electron and ionization temperatures of a hot plasma are constrained by the line flux ratio of Fe\emissiontype{XXV}-K$\beta$ to Fe\emissiontype{XXV}-K$\alpha$ and that of Fe\emissiontype{XXVI}-K$\alpha$ to Fe\emissiontype{XXV}-K$\alpha$, respectively. \citet{Ko07d}, using these line ratios, reported that the 5--11.5~keV band spectrum of the GCDX is naturally explained by a 6.5~keV-temperature plasma in collisional ionization equilibrium (CIE) plus a power-law component with a photon index of $\mit\Gamma$=1.4. The former component has nearly the same flux as the latter (see figure 7 in \cite{Ko07d}). The origins of these components and of the highly ionized atomic lines are, however, open questions. In addition to the lines from highly ionized atoms, the K$\alpha$ and K$\beta$ lines from neutral iron (Fe\emissiontype{I}) and the K$\alpha$ line from neutral nickel (Ni\emissiontype{I}) have also been discovered. A likely origin of these neutral K-shell lines is fluorescence induced by external X-ray sources (e.g. \cite{Ko96,Mu00,Mu01,Ko07b,Ko08,No08}). However, alternative scenarios, such as bombardment by energetic electrons, have also been proposed (e.g. \cite{Pr03,Wan06,Yu07}). The key questions are: what are the origins of the 6.7~keV and 6.4~keV lines, and how large is the contribution of unresolved point sources? The 6.7~keV and 6.4~keV lines in the GCDX are spatially and spectrally entangled with each other. Spatially resolved spectroscopy could disentangle this complicated situation. 
Together with good energy resolution and low background near and above the $\sim$6~keV band, Suzaku (\cite{Mi07}) is the best satellite to date for this study. This paper focuses on the 5--11.5~keV band X-rays in the sub-degree region near the Galactic center (GC). The distance to GC is assumed to be 8~kpc (\cite{Re93}). In this paper, quoted errors are at the 90\% confidence level, unless otherwise mentioned. We use the Galactic coordinates; hence, the east means the positive Galactic longitude side and vice versa for the west. \section{Observations and Data Reduction} Two pointing observations (here, the east and west fields) towards GC were performed in September of 2005, with the X-ray Imaging Spectrometers (XIS; \cite{Ko07a}), at the focal planes of the X-Ray Telescopes (XRT; \cite{Se07}) onboard the Suzaku satellite (\cite{Mi07}). The data-selection criteria and subtraction method of the non X-ray background (NXBG) are the same as those given in \citet{Ko07d}. The charge transfer inefficiency (CTI) and fine gain-tuning of CCD-to-CCD and segment-to-segment levels (for the CTI and the CCD segment, see \cite{Ko07a}) were self-calibrated using the K$\alpha$ lines of Fe\emissiontype{XXV} (6.7~keV), Fe\emissiontype{I} (6.4~keV) and Helium-like sulfur (S\emissiontype{XV}) (2.46~keV). The absolute gain-tuning was made using the $^{55}$Fe calibration sources irradiating the CCD corners, and also using the Fe\emissiontype{XXVI}-K$\alpha$ line, which has a relatively simple structure. The over-all systematic error of our gain determination near the iron and nickel K-shell energies is estimated to be within $^{+3}_{-6}$ eV. The details concerning the calibration procedures and the results are given in \citet{Ko07d}. For a timing study of the 6.4~keV line flux, we used the Chandra GC data obtained by the Advanced CCD Imaging Spectrometer array (ACIS-I) with total exposure times of $\sim$500~ks. 
The observation logs are ObsIDs 2943, 2951, 2952, 2953, 2954, 3392, 3393, 3663 and 3665, all observed in 2002. The data were reprocessed using CIAO version 3.4 and the calibration database version 3.4.0. \begin{figure} \begin{center} \FigureFile(80mm,60mm){figure1a.eps} \FigureFile(80mm,60mm){figure1b.eps} \end{center} \caption{ Mosaic maps (2 pointings: the east and west fields) of the 6.7~keV line (a) and the 6.4~keV line (b) bands. The east and west fields are the left (east) and the right (west) sides, respectively. Solid polygons in figure (a) are 6.4~keV clumps (here, the west clump is source 1, and the east clump is source 2), while the dashed ellipse is the background region. The background data for the 6.4~keV clumps were obtained by excluding the data of sources 1 and 2. The grid in the west field shows 16 segments for the spatially resolved study (see text).} \label{fig:IMAGE} \end{figure} \section{Analyses and Results} \subsection{The Extended Emission near the GC (GCDX)} \begin{figure*}[p] \begin{center} \FigureFile(70mm,50mm){figure2a.eps} \FigureFile(70mm,50mm){figure2b.eps} \FigureFile(70mm,50mm){figure2c.eps} \FigureFile(60mm,50mm){figure2d.eps} \end{center} \caption{ Correlation plots between the physical parameters. The open and filled circles are data from the east and west fields, respectively. For comparisons, the data of the 6.4~keV clumps (sources 1 and 2) and background are also plotted, as shown by the open and filled squares, respectively (see section 3.2).\\ a) Plots of the 5--10~keV band flux ($L_{5-10}$, horizontal axis) vs. the Fe\emissiontype{I}-K$\alpha$ flux ($F_{6.4}$, vertical axis). The units of $F_{6.4}$ and $L_{5-10}$ are [photons cm$^{-2}$ s$^{-1}$ arcmin$^{-2}$] and [ergs~cm$^{-2}$ s$^{-1}$ arcmin$^{-2}$], respectively. 
The dashed line is an eye guide to the proportional relation of $L_{5-10}$~$\propto$~$F_{6.4}$.\\ b) Same as (a), but for Fe\emissiontype{XXV}-K$\alpha$ ($F_{6.7}$).\\ c) Same as (a), but for the sum of the line flux of Fe\emissiontype{XXV}-K$\alpha$ ($F_{6.7}$) and Fe\emissiontype{I}-K$\alpha$ ($F_{6.4}$). The solid line is the best-fit proportional line of $F_{6.7}$+0.49$\times$$F_{6.4}$ =1.26$\times10^{7}\times$$L_{5-10}$. \\ d) Same as (a), but for the relation of the equivalent width of the Fe\emissiontype{XXV}-K$\alpha$ line ($EW_{6.7}$) and that of the Fe\emissiontype{I}-K$\alpha$ line ($EW_{6.4}$). The solid line shows the best-fit relation of $EW_{6.7}$+0.50$\times$$EW_{6.4}$ =0.62~~[keV]. } \label{fig:cor} \end{figure*} In order to see the spatial distribution of the Fe K-shell lines, we made line images of the 6.7~keV (Fe\emissiontype{XXV}-K$\alpha$) and 6.4~keV (Fe\emissiontype{I}-K$\alpha$) lines with the respective energy bands of 6.62--6.78~keV and 6.32--6.48~keV. The images are shown in figure \ref{fig:IMAGE}, where they include both the relevant line flux and the underlying continuum flux. From figure \ref{fig:IMAGE}, we can see that the 6.7~keV line flux in the east field is systematically larger than that in the west field. This contrast is clearer in the 6.4~keV line band, showing clear clumps near $(l, b) = (0.03, -0.07)$ and $(0.12, -0.12)$ (sources 1 and 2; the polygons in figure 1a). To study this more quantitatively, we divided both the east and west fields into 16 segments each (32 segments in total), as given by the grid in figure~1b. 
Since the 4 segments in each field corner are partially contaminated by the calibration X-rays (Mn\emissiontype{I}-K$\alpha$ and K$\beta$ lines at 5.9 and 6.5 keV, respectively), we made the X-ray spectra for the remaining 24 segments in the 5--11.5~keV band, and fitted them with a phenomenological model as follows: \begin{eqnarray} &{\rm Abs} \times ({\rm PL}+ {\rm Abs} \times {\rm CXB} + {\rm Gaussians}) \nonumber&\\ && \hspace{-10em} \left[\rm{photons~keV}^{-1}~{\rm cm}^{-2}~{\rm s}^{-1}~{\rm str}^{-1}\right], \end{eqnarray} where PL is a power-law function, PL$=A \times E^{-\mit\Gamma}$. CXB is the cosmic X-ray background modeled as CXB=$8.75\times (E/1~{\rm keV})^{-1.486}$. Abs is the intra-Galactic absorption in the line of sight to the GC, and is given by $e^{-\sigma(E)N_{\rm H}}$, where $N_{\rm H}$ and $\sigma(E)$ are, respectively, the hydrogen column density and the absorption cross section with the solar abundances. The GCDX suffers Galactic absorption (Abs) only on the front side of the GC, while the CXB suffers absorption on both the front and back sides; hence, Abs is applied twice to the CXB, as is explicitly given in equation~1. Gaussians are given as $(F/(\sqrt{2 \pi}\,w))\,e^{-(E-E_{\rm C})^2/2w^2}$, where $w$ is the intrinsic line width ($1\sigma$). Following \citet{Ko07d}, we employed 10 K-shell lines (10 Gaussians) due to highly ionized and neutral atoms. The brightest 4 lines are K$\alpha$ from neutral (Fe\emissiontype{I}), He-like (Fe\emissiontype{XXV}) and H-like (Fe\emissiontype{XXVI}) irons and K$\beta$ of Fe\emissiontype{I}, at 6.4, 6.7, 6.97 and 7.06~keV, respectively. The other weak lines are K$\alpha$ from neutral (Ni\emissiontype{I}) and He-like (Ni\emissiontype{XXVII}) nickel, K$\beta$ and K$\gamma$ from Fe\emissiontype{XXV} and Fe\emissiontype{XXVI}, at 7.47, 7.81, 7.88, 8.25, 8.29, and 8.70~keV, respectively. 
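As a minimal numerical sketch of this phenomenological model, the following evaluates equation 1 at a given energy. All parameter values and the simplified power-law stand-in for the absorption cross section $\sigma(E)$ are hypothetical, for illustration only; real analyses use tabulated photoelectric cross sections.

```python
import math

def sigma_abs(E):
    # Hypothetical stand-in for the photoelectric cross section per
    # hydrogen atom (cm^2) at energy E in keV; not a real tabulation.
    return 2.0e-22 * E ** -3.0

def model(E, A=1.0, Gamma=1.9, NH=6e22, lines=()):
    """Equation 1: Abs x (PL + Abs x CXB + Gaussians)."""
    Abs = math.exp(-sigma_abs(E) * NH)
    PL = A * E ** -Gamma
    CXB = 8.75 * E ** -1.486
    gaussians = sum(
        F / (math.sqrt(2.0 * math.pi) * w)
        * math.exp(-((E - Ec) ** 2) / (2.0 * w ** 2))
        for (F, Ec, w) in lines
    )
    # Abs multiplies the whole bracket once and the CXB a second time,
    # since the CXB is absorbed on both sides of the GC
    return Abs * (PL + Abs * CXB + gaussians)

# Fe I and Fe XXV K-alpha lines with hypothetical fluxes and widths
fe_lines = [(3.5e-5, 6.4, 0.03), (3.9e-5, 6.7, 0.03)]
```

A fit would vary $A$, $\mit\Gamma$, $N_{\rm H}$ and the line fluxes against binned count spectra folded through the instrument response, which this sketch omits.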
First, we fitted the spectra from the full region of the east and west fields separately, with essentially the same fitting procedures as those of \citet{Ko07d}: the line center energy, width and normalization (flux) of Fe\emissiontype{I}-K$\beta$ are fixed to 1.103, 1.103 and 0.11 times those of Fe\emissiontype{I}-K$\alpha$ (see \cite{Ko07d}). The widths ($w$) for the 6 weak lines were fixed to be 1~eV (narrow line approximation). Then, using the best-fit line widths ($w$) and the line center energies ($E_{\rm C}$), we fitted the 24-segment spectra. The line flux ratio of Fe\emissiontype{I}-K$\beta$/Fe\emissiontype{I}-K$\alpha$ was fixed to the theoretical value of 0.11 (see \cite{Ko07d}). Therefore, the free parameters are normalizations of the power-law component ($A$) and those of the emission lines ($F$) except for Fe\emissiontype{I}-K$\beta$, $N_{\rm H}$ and the power-law index ($\mit\Gamma$). Using the best-fit photon indices ($\mit\Gamma$), the line fluxes of 6.4~keV ($F_{6.4}$) and 6.7~keV ($F_{6.7}$), the energy flux in the 5--10~keV band ($L_{5-10}$), and the equivalent widths of the 6.4~keV ($EW_{6.4}$) and the 6.7 keV ($EW_{6.7}$) lines, we made correlation plots (figure~\ref{fig:cor}). For a consistency check of the best-fit parameters between the east and west fields, we made two spectra from the small overlap region of the two fields and fitted them with the same model of equation 1. Since this small region includes the calibration Mn\emissiontype{I}-K$\alpha$ and K$\beta$ lines at 5.9 and 6.5 keV, the 6.4 keV flux ($F_{6.4}$) is contaminated by the 6.5 keV line. Therefore, we compared the best-fit flux of the 6.7 keV line ($F_{6.7}$) and the power-law in the 5--10~keV band ($L_{5-10}$). The best-fit values of $F_{6.7}$ are $3.88_{-0.16}^{+0.40}$ and $4.06_{-0.28}^{+0.21}$ (in unit of $10^{-5}$~photons cm$^{-2}$ s$^{-1}$) for the east and west fields, respectively. 
The best-fit values of $L_{5-10}$ are $3.57_{-0.09}^{+0.09}$ and $3.58_{-0.09}^{+0.08}$ (in unit of $10^{-12}$~ergs~cm$^{-2}$~s$^{-1}$) for the east and west fields, respectively. Thus, we confirmed that the relevant best-fit parameters obtained from the two fields are consistent with each other. Figure~\ref{fig:cor}a shows that the flux ratio of the Fe\emissiontype{I}-K$\alpha$ line ($F_{6.4}$) to the 5--10~keV band ($L_{5-10}$) is not constant, but $F_{6.4}$ shows an excess at larger fluxes. Figure~\ref{fig:cor}b shows the opposite trend for the Fe\emissiontype{XXV}-K$\alpha$ line ($F_{6.7}$) relative to the 5--10~keV band ($L_{5-10}$). These facts indicate that the 5--10~keV band flux ($L_{5-10}$) is not associated solely with either the Fe\emissiontype{I}-K$\alpha$ line or the Fe\emissiontype{XXV}-K$\alpha$ line. We therefore searched for a combination of the Fe\emissiontype{XXV}-K$\alpha$ and Fe\emissiontype{I}-K$\alpha$ fluxes that is proportional to the 5--10~keV band flux. Figure \ref{fig:cor}c shows the relation of the combined 6.7 keV and 6.4 keV flux ($F_{6.7}$ and $F_{6.4}$) vs. the 5--10 keV band flux ($L_{5-10}$). The best-fit relation is \begin{equation} F_{6.7}+0.49(_{-0.04}^{+0.03}) \times F_{6.4} = 1.26 \times 10^{7} \times L_{5-10}, \end{equation} where the data dispersion ($1 \sigma$) from the best-fit relation is 11\%. This relation is confirmed by correlation plots of the equivalent width of the Fe\emissiontype{XXV}-K$\alpha$ line ($EW_{6.7}$) and that of Fe\emissiontype{I}-K$\alpha$ ($EW_{6.4}$). The solid line in figure~\ref{fig:cor}d is the best-fit relation, given as \begin{equation} EW_{6.7} + 0.50(\pm0.06) \times EW_{6.4} = 0.62 (\pm0.07)~[{\rm keV}]. \end{equation} We note that \citet{War06} reported a similar (but rather qualitative) analysis using the emission lines obtained by XMM-Newton observations. 
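Finding the coefficients of a relation like equation 2 amounts to a two-parameter linear least-squares problem over the segment data. A sketch with synthetic segment fluxes (the generated values are hypothetical, not the measured ones; the noiseless relation is built in so the fit recovers the coefficients exactly):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-segment fluxes in the paper's units (hypothetical values)
L510 = rng.uniform(2e-12, 6e-12, size=24)   # 5--10 keV band flux
F64 = rng.uniform(1e-6, 8e-6, size=24)      # Fe I K-alpha line flux
k_true, c_true = 1.26e7, 0.49               # coefficients of equation 2
F67 = k_true * L510 - c_true * F64          # exact relation, no noise

# Fit F67 + c*F64 = k*L510, rearranged as F67 = k*L510 - c*F64
design = np.column_stack([L510, F64])
coef, *_ = np.linalg.lstsq(design, F67, rcond=None)
k_fit, c_fit = coef[0], -coef[1]
```

With real data one would also propagate the per-segment flux errors (weighted least squares) to obtain the quoted uncertainties on the coefficients.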
The correlations of equations 2 and 3 suggest that the power-law component (PL) can be divided into two parts, PL1 and PL2, which are the power-law continua associated with the K-shell lines from neutral and highly ionized atoms, respectively. We hence divide the 5--10~keV band flux $L_{5-10}$ into $L1_{5-10}$ and $L2_{5-10}$, which belong to PL1 and PL2, respectively. The phenomenological relations of equations 2 and 3 also imply that the flux ratio $L2_{5-10}$/$L1_{5-10}$ is proportional to $\sim$(1/0.5)$\times$ ($F_{6.7}$/$F_{6.4}$). In figure \ref{fig:gamma}, we plot the photon index ($\mit\Gamma$) as a function of the flux ratio of Fe\emissiontype{I}-K$\alpha$ to Fe\emissiontype{XXV}-K$\alpha$ ($F_{6.4}/F_{6.7}$). Although the values $F_{6.4}/F_{6.7}$ scatter widely from $\sim$0.2 to 4, $\mit\Gamma$ is almost constant at about 1.9; the continuum shape is the same regardless of the line ratio. Therefore, PL1 and PL2 have nearly the same photon indices ($\mit\Gamma$) of 1.9. \begin{figure} \begin{center} \FigureFile(80mm,50mm){figure3.eps} \end{center} \caption{Same as figure 2, but for the line flux ratio of Fe\emissiontype{I}-K$\alpha$ and Fe\emissiontype{XXV}-K$\alpha$ ($F_{6.4}/F_{6.7}$) (horizontal axis) vs. the photon index $\mit\Gamma$ (vertical axis). } \label{fig:gamma} \end{figure} \subsection{X-ray Image and Spectra of the 6.4~keV Clumps near the GC} In figure \ref{fig:IMAGE}, we can see strong enhancements of the 6.4~keV line near the Radio Arc (sources 1 and 2). To make reliable spectra of sources 1 and 2, a precise estimation of the Galactic center diffuse X-rays (GCDX) is particularly important, because the GCDX comprises the major background, and is variable from position to position (see section 3.1). To minimize any systematic error due to subtraction of the position-dependent GCDX, we selected the background region to be as near as possible to sources 1 and 2. 
The background region thus selected is shown by the dashed ellipse in figure \ref{fig:IMAGE}, where the data of sources 1 and 2 (polygons) are excluded. First, we obtained the source and background spectra, and fitted them with a phenomenological model of equation 1. The best-fit fluxes of the Fe\emissiontype{XXV}-K$\alpha$ lines for sources 1 and 2, and that of the background are 2.95($_{-0.24}^{+0.24})\times 10^{-6}$, 2.13($_{-0.18}^{+0.19})\times 10^{-6}$ and 2.25($_{-0.06}^{+0.06})\times 10^{-6}$ [photons~cm$^{-2}$~s$^{-1}$~arcmin$^{-2}$], respectively. Therefore, the GCDX level in source 1 would be larger than, while that in source 2 is smaller than, that in the background region. Thus, a key issue is how to properly subtract the GCDX. As is suggested in section 3.1, the power-law component (PL) of the GCDX is divided into PL1 and PL2, which are associated with the 6.4 keV (neutral iron) and 6.7 keV (He-like iron) lines, respectively. It is very likely that PL1 and PL2 are also associated with the K-shell lines from neutral and highly ionized atoms. In order to subtract PL2 and the associated K-shell lines from highly ionized atoms, we introduced a multiplicative factor, $\alpha$, which is the ratio of the 6.7~keV flux ($F_{6.7}$) of source 1 (or 2) to that of the background spectrum. The factors $\alpha$ are 1.31 for source 1 and 0.95 for source 2. We then reconstructed a background model consisting of the K-shell lines from highly ionized atoms and the relevant continuum component. As for the fluxes of the K-shell lines from highly ionized atoms, we multiplied the best-fit line fluxes of the background spectrum by the factor $\alpha$. For the continuum, on the other hand, we multiplied the best-fit PL of the background spectrum by the factor $\alpha \times F_{6.7}/(F_{6.7}+0.5 \times F_{6.4})$, where $F_{6.7}$ and $F_{6.4}$ are the line fluxes of the background regions. This is the same as $\alpha \times {\rm PL2}$, where PL2 is that from the background region. 
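The rescaling just described can be summarized in a few lines. Only the 6.7~keV fluxes below come from the text; the background 6.4~keV flux, the power-law normalization and the line name are made-up placeholders.

```python
def scaled_hot_background(F67_src, F67_bkg, F64_bkg, line_fluxes_bkg, PL_bkg):
    """Rescale the hot-plasma part (PL2 + ionized lines) of the background.

    alpha scales the K-shell lines from highly ionized atoms; the
    continuum keeps only the PL2 share, F67/(F67 + 0.5*F64), of the
    best-fit power law, also scaled by alpha.
    """
    alpha = F67_src / F67_bkg
    lines = {name: alpha * f for name, f in line_fluxes_bkg.items()}
    continuum = alpha * F67_bkg / (F67_bkg + 0.5 * F64_bkg) * PL_bkg
    return alpha, lines, continuum

# Source 1 vs. the background region: 6.7 keV fluxes 2.95e-6 and 2.25e-6
# (from the text); F64_bkg and PL_bkg are hypothetical placeholders
alpha1, lines1, cont1 = scaled_hot_background(
    2.95e-6, 2.25e-6, 4.0e-6, {"FeXXVI_Ka": 1.0e-6}, PL_bkg=1.0)
```

This reproduces the quoted $\alpha$ of 1.31 for source 1; the scaled lines and continuum are then added to the CXB to form the background model.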
Adding this model background and CXB, we fitted the spectra of sources 1 and 2 with a model of an absorbed power-law plus K$\alpha$ and K$\beta$ lines of neutral iron and nickel. The best-fit spectra are given in figure \ref{fig:SS} with the dashed lines together with the model background (dotted line). The best-fit source parameters are listed in table~1. Note that the spectral parameters of sources 1 and 2 include the PL1 components of the background region. \begin{figure} \begin{center} \FigureFile(80mm,50mm){figure4a.eps} \FigureFile(80mm,50mm){figure4b.eps} \end{center} \caption{ X-ray spectra of source 1 (a) and source 2 (b). The dashed lines are the best-fit model of the sources, while the dotted lines are the model GCDX spectra obtained by the method given in the text. The CXB spectra are out of the frame of these figures. \label{fig:SS}} \end{figure} \begin{table} \begin{center} \caption{Best-fit parameters of the background-subtracted sources.} \label{tab:SS} \begin{tabular}{lcc} \hline & source 1 & source 2 \\ \hline $Ab_{\rm Fe}$\footnotemark[$*$] &3.8$_{-0.5}^{+0.6}$ &3.9$_{-0.6}^{+0.4}$ \\ Photon index ($\mit\Gamma$) &1.83$_{-0.03}^{+0.03}$&1.86$_{-0.02}^{+0.03}$ \\ $L_{5-10}$\footnotemark[$\dagger$] &2.61$_{-0.16}^{+0.14}$&3.37$_{-0.12}^{+0.12}$ \\ $F_{6.4}$\footnotemark[$\ddagger$] &6.92$_{-0.30}^{+0.29}$&7.45$_{-0.23}^{+0.29}$ \\ $EW_{6.4}$\footnotemark[$\S$] &1.23$_{-0.14}^{+0.14}$ &1.03$_{-0.09}^{+0.08}$ \\ $F_{7.05}$\footnotemark[$\ddagger$] &0.99$_{-0.25}^{+0.27}$&1.02$_{-0.23}^{+0.19}$ \\ \hline \multicolumn{3}{@{}l@{}}{\hbox to 0pt{\parbox{70mm}{\footnotesize \par\noindent \footnotemark[$*$] Iron abundances determined by the iron K-edge depth with a fixed $N_{\rm H}$ of 6$\times$10$^{22}$cm$^{-2}$. \par\noindent \footnotemark[$\dagger$] Unabsorbed 5--10~keV band flux in unit of 10$^{-13}$~erg~s$^{-1}$~cm$^{-2}$ arcmin$^{-2}$. 
\par\noindent \footnotemark[$\ddagger$] Unabsorbed line fluxes of Fe\emissiontype{I}-K$\alpha$ ($F_{6.4}$) and K$\beta$ ($F_{7.05}$) in unit of 10$^{-6}$photons s$^{-1}$~cm$^{-2}$ arcmin$^{-2}$. \par\noindent \footnotemark[$\S$] Equivalent width of the Fe\emissiontype{I}-K$\alpha$ line in unit of~keV.}\hss}} \end{tabular} \end{center} \end{table} \subsection{Timing Analysis of the 6.4~keV Clumps near the GC} \citet{Mu07} reported a time variability of the sub-structures in source 1 in Chandra observations. We therefore extended the time-variability study to a longer time scale, from the Chandra (2002) to the Suzaku (2005) observations. Since the spatial resolution of Suzaku is insufficient to resolve the sub-structures in source 1, and since the image positions of source 1 in the Suzaku and Chandra observations are slightly shifted from each other, we extracted the X-ray spectra from a larger region than source 1, as is given by the solid ellipses in figure~\ref{fig:S_C}. We subtracted the NXBG from the Suzaku spectrum in the same way as previously described, and fitted the spectra with the model of equation 1. The best-fit Suzaku fluxes of the 6.4 and 6.7~keV lines are given in table~\ref{tab:S_C}. For the Chandra spectrum, we subtracted the off-plane CXB (including NXBG) data using the ``blank-sky'' data-sets. The Chandra spectrum was fitted with the same model as Suzaku (equation 1), fixing the line energies, the power-law index and iron K-edge absorption to the Suzaku best-fit values. Free parameters were normalizations (flux) of the power-law and Gaussian lines. For a reasonable fit, we fine-tuned the Chandra energy gain by $\sim$0.2\%. The best-fit Chandra fluxes of the 6.4 and 6.7~keV lines are listed in table~\ref{tab:S_C}. \begin{figure} \begin{center} \FigureFile(80mm,50mm){figure5a.eps} \FigureFile(80mm,50mm){figure5b.eps} \end{center} \caption{(a) X-ray image obtained with Chandra in the 6--7~keV band, and (b) the Suzaku image in the 6.4~keV line band (6.32--6.48 keV). 
The dashed polygon in (a) shows source 1 (see text). The spectra are taken from the solid ellipses in (a) and (b).} \label{fig:S_C} \end{figure} From table~\ref{tab:S_C}, we can see that the 6.7~keV line flux is constant within the 90\% statistical errors. This result is reasonable, because the 6.7 keV line is due to the largely extended GCDX, and hence should be invariant on the time scale of a few years. In other words, the constant 6.7~keV flux indicates that the overall systematic flux error of the 6.4~keV and 6.7~keV lines between the Chandra and the Suzaku observations, under the present procedure of data selection, screening and analysis, is smaller than the statistical 1.5$\sigma$ error. We note that even if the power-law index and iron K-edge absorption are left as free parameters in the fitting of the Chandra spectrum, the best-fit fluxes have almost the same values as listed in table \ref{tab:S_C}. Thus, from table \ref{tab:S_C}, the flux change of the 6.4~keV line from the Chandra (2002) to the Suzaku (2005) observations is significant at the 4.7$\sigma$ level (note that the errors in table 2 are at the 90\% level). \begin{table} \begin{center} \caption{Best-fit fluxes of the 6.4 and 6.7~keV lines of the Suzaku and Chandra observations.} \label{tab:S_C} \begin{tabular}{lcc} \hline & Chandra (2002)& Suzaku (2005)\\ \hline $Ab_{\rm Fe}$\footnotemark[$*$] &3.4\footnotemark[$\dagger$]& 3.4$_{-0.4}^{+0.5}$\\ Photon index ($\mit\Gamma$) &1.77\footnotemark[$\dagger$]& 1.77$_{-0.02}^{+0.02}$ \\ $F_{6.4}$\footnotemark[$\ddagger$] &7.83$_{-0.23}^{+0.23}$&6.89$_{-0.23}^{+0.20}$\\ $F_{6.7}$\footnotemark[$\ddagger$] &3.37$_{-0.20}^{+0.20}$ &3.61$_{-0.18}^{+0.19}$\\ \hline \multicolumn{3}{@{}l@{}}{\hbox to 0pt{\parbox{80mm}{\footnotesize \par\noindent \footnotemark[$*$] Iron abundances determined by the iron K-edge depth with a fixed $N_{\rm H}$ of 6$\times$10$^{22}$cm$^{-2}$. 
\par\noindent \footnotemark[$\dagger$] Fe abundance and $\mit\Gamma$ are fixed to the Suzaku best-fit values. \par\noindent \footnotemark[$\ddagger$] Unabsorbed line fluxes of Fe\emissiontype{I}-K$\alpha$ and Fe\emissiontype{XXV}-K$\alpha$ in units of 10$^{-5}$~photons~s$^{-1}$~cm$^{-2}$.}\hss}} \end{tabular} \end{center} \end{table} \section {Discussion} \subsection{Decomposition of the GC Emission} The GCDX should have at least three components: a high-temperature plasma (component-1), the 6.4~keV line with its Thomson-scattered or bremsstrahlung continuum (component-2), and the integrated emission of point sources plus other possible origins (component-3). K-shell lines from highly ionized atoms in component-1 constrain the plasma parameters. The most important lines are Fe\emissiontype{XXV}-K$\alpha$ (the 6.7~keV line), Fe\emissiontype{XXVI}-K$\alpha$ (the 6.97~keV line) and Fe\emissiontype{XXV}-K$\beta$ (the 7.88~keV line). The flux ratio of the 6.97~keV line to the 6.7~keV line gives the ionization temperature, while that of the 7.88~keV line to the 6.7~keV line gives the electron temperature. \citet{Ko07d} extensively studied the line fluxes (and ratios) from the GC (the east and west fields), and concluded that the GCDX contains a high-temperature plasma in collisional ionization equilibrium. The GC spectrum was, in fact, nicely fitted with a 6.5~keV plasma and a power-law with a photon index of ${\mit\Gamma}=1.4$, plus neutral K-shell lines (see figure 7 in \cite{Ko07d}). The 5--10~keV band continuum flux of the high-temperature plasma was $\sim$0.5$\times$$L_{5-10}$ of the GCDX (the east and west fields), and that of the power-law ($\mit\Gamma$=1.4) was $\sim$0.5$\times$$L_{5-10}$ (see figure 7 in \cite{Ko07d}). As we already proposed, the 5--10~keV band flux ($L_{5-10}$) can be decomposed into two components, $L1_{5-10}$ and $L2_{5-10}$, with the flux ratio $L2_{5-10}$/$L1_{5-10}$ proportional to (1/0.5)$\times$($F_{6.4}$/$F_{6.7}$).
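As a minimal numerical sketch of this decomposition (the factor 1/0.5 is taken from the equivalent-width argument of section 3.1; the variable names are ours), the resulting flux shares work out as follows:

```python
# Sketch of the decomposition L2_{5-10}/L1_{5-10} = (1/0.5) * (F6.4/F6.7).
# With nearly equal mean line fluxes (section 4.1), the ratio is 2,
# i.e. the two components carry about 2/3 and 1/3 of L_{5-10}.
F_64 = 1.0  # relative 6.4 keV line flux (east+west mean)
F_67 = 1.0  # relative 6.7 keV line flux (nearly equal to F_64)

ratio = (1 / 0.5) * (F_64 / F_67)   # L2/L1
L2_frac = ratio / (1 + ratio)       # share of L_{5-10} in L2
L1_frac = 1 / (1 + ratio)           # share of L_{5-10} in L1
print(ratio, round(L2_frac, 3), round(L1_frac, 3))  # 2.0 0.667 0.333
```

This reproduces the 2/3 versus 1/3 split of the mean 5--10~keV flux quoted in the next paragraph.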
\citet{Ko07d} found that the mean fluxes of the 6.7~keV line ($F_{6.7}$) and the 6.4~keV line ($F_{6.4}$) in the east and west fields are nearly equal to each other (see table~4 and figure~7 in \cite{Ko07d}). Therefore, for the mean 5--10~keV band flux of the east and west fields, about 2/3 (1/1.5) is attributable to the 6.7~keV line and the remaining 1/3 (0.5/1.5) to the 6.4~keV line. The photon indices ($\mit\Gamma$) of both components are the same, 1.9 (see section 3.1). The above two decompositions implicitly assumed that the 6.7~keV line is due to a plasma with a spatially uniform temperature. This assumption was verified by a spatial analysis of the flux ratio of Fe\emissiontype{XXV}-K$\alpha$ (the 6.7~keV line, $F_{6.7}$) to Fe\emissiontype{XXVI}-K$\alpha$ (the 6.97~keV line, $F_{6.97}$). In figure~\ref{fig:Temp}, we plot the correlation of $F_{6.97}$ and $F_{6.7}$. \begin{figure} \begin{center} \FigureFile(80mm,50mm){figure6.eps} \end{center} \caption{Same as figure 2, but for the line fluxes of Fe\emissiontype{XXVI}-K$\alpha$ ($F_{6.97}$, horizontal axis) and Fe\emissiontype{XXV}-K$\alpha$ ($F_{6.7}$, vertical axis).} \label{fig:Temp} \end{figure} From figure~\ref{fig:Temp}, we conclude that the plasma temperatures are approximately uniform. In detail, however, the flux ratios are systematically larger in the west field, indicating a temperature higher than that in the east field by $\sim$10\% (figure~\ref{fig:Temp}). \citet{Ko07d} reported that the GC has a lower-temperature plasma (the soft component) with lines from highly ionized atoms, such as silicon and sulfur. We therefore fitted the low-energy band spectra, and found the plasma temperature to be $\sim$1~keV. The contribution of this plasma to the 6.7~keV-line flux is 5--10\%, but those to the 6.97~keV line and the 5--10~keV flux are negligible. The flux shift of the 6.7~keV line is within the 1$\sigma$ dispersion of the line flux vs.
the continuum flux correlation (figure~\ref{fig:cor}c). Including these effects increases the temperature by $\sim$5\%. Using the K-edge structure, \citet{Ko07d} determined that the line-of-sight $N_{\rm Fe}$ to the GCDX is 9.7$\times10^{18}$~cm$^{-2}$. This corresponds to a 3.5 solar abundance of iron, assuming that the $N_{\rm H}$ to the GCDX is 6$\times10^{22}$~cm$^{-2}$. However, the assumed $N_{\rm H}$ of 6$\times10^{22}$~cm$^{-2}$ may be smaller than the typical value. In fact, the Suzaku observations of Sgr A East (SNR) and Arches (star cluster) revealed that $N_{\rm H}$ is 9--14$\times10^{22}$~cm$^{-2}$ \citep{Ko07c,Ts07}. Then, the iron abundance determined by the K-edge absorption is reduced to 2.3--1.5 solar. On the other hand, the iron abundance of the 6.5~keV plasma in the GCDX was determined to be $\sim$1 solar by \citet{Ko07d}. \citet{War06} also reported that the iron abundance in the GCDX plasma is one solar. The 1~keV plasma is associated with the 2.46~keV line. \citet{No08} and \citet{Mo08} analyzed some of the 2.46~keV clumps, and found that the abundances of iron and other heavy elements are consistent with solar. Thus, the iron abundance in the 1~keV plasma is likely to be one solar, and hence the $F_{6.7}$ values and the temperatures are not significantly changed. We can therefore ignore the 1~keV plasma in the discussion. Now, we schematically show the results of the two different decompositions of the 5--10~keV band flux ($L_{5-10}$) in figure~\ref{fig:koyama}. The right side shows nearly equal contributions of the 6.5~keV plasma (the value in parentheses is the phenomenological photon index) and the power-law with ${\mit\Gamma}=1.4$ \citep{Ko07d}. The 6.4~keV line was treated separately, and hence is not included in figure~\ref{fig:koyama} (right). The left side shows the phenomenological decomposition of this work, PL1 and PL2, with a flux ratio of 1:2. The 6.7~keV and 6.4~keV lines are mainly included in the white and grey regions, respectively.
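The K-edge abundance values quoted earlier in this paragraph can be reproduced with a back-of-the-envelope conversion (a hedged sketch: we adopt a solar Fe/H number ratio of $4.68\times10^{-5}$ from Anders \& Grevesse; the abundance table actually used in the original analysis may differ slightly):

```python
# Iron abundance implied by the K-edge column N_Fe = 9.7e18 cm^-2
# for several assumed hydrogen columns N_H.
N_Fe = 9.7e18         # cm^-2, from the iron K-edge depth
solar_Fe_H = 4.68e-5  # assumed solar Fe/H by number (Anders & Grevesse 1989)

for N_H in (6e22, 9e22, 14e22):  # cm^-2
    abundance = N_Fe / (N_H * solar_Fe_H)
    print(f"N_H = {N_H:.0e} cm^-2 -> {abundance:.1f} solar")
```

This recovers 3.5 solar for $N_{\rm H}=6\times10^{22}$~cm$^{-2}$ and the 2.3--1.5 solar range for 9--14$\times10^{22}$~cm$^{-2}$.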
There is an apparent discrepancy between the two decompositions. This may be resolved if we take the point-source contribution into account (see the next section). \begin{figure} \begin{center} \FigureFile(100mm,60mm){figure7.eps} \end{center} \caption{Schematic picture of the decomposition of the GCDX continuum flux ($L_{5-10}$). The right side shows nearly equal contributions of the 6.5~keV plasma (the value in parentheses is the phenomenological photon index) and the power-law with ${\mit\Gamma}=1.4$ (\cite{Ko07d}). The left side shows the phenomenological decomposition of this work (PL1 and PL2 with a ratio of 1:2). White and grey indicate the regions of the power-law components that mainly carry the 6.7~keV and 6.4~keV lines, respectively. \label{fig:koyama}} \end{figure} \subsection{Origin of the High Temperature Plasma} The spectral analysis of the GC (the east and west fields) indicates that about half of the GCDX is due to the 6.5~keV plasma, which emits the 6.7~keV line and other K-shell lines from highly ionized iron and nickel (\cite{Ko07d}). However, figure~\ref{fig:koyama} suggests that, additionally, at least 1/6 of the total GCDX should also be a 6.7~keV line emitter. We propose that this is due to integrated point sources (component-3), because \citet{Mu04} reported that a similar fraction of the GCDX comes from point sources with fluxes $\ge3\times 10^{-15}$~ergs~cm$^{-2}$~s$^{-1}$ (2--9~keV), whose composite spectrum has a strong 6.7~keV line and a rather weak 6.4~keV line (see figure 7 of \cite{Mu04}). Therefore, in the present analysis, PL2 (the left side of figure~\ref{fig:koyama}) may be contaminated by the integrated flux of point sources. On the other hand, in the analysis of \citet{Ko07d}, the 6.7~keV line from the point sources would be implicitly included in the 6.5~keV plasma (the right side), while the continuum component of the point sources is included in the power-law component of ${\mit\Gamma}=1.4$.
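The bookkeeping behind the ``at least 1/6'' estimate is just the difference between the two decompositions (a minimal sketch of the arithmetic in the text; the names are ours):

```python
from fractions import Fraction as F

f_67_total = F(2, 3)  # share of L_{5-10} tied to the 6.7 keV line (section 4.1)
f_plasma   = F(1, 2)  # share attributed to the 6.5 keV plasma (Koyama et al. 2007d)

f_point = f_67_total - f_plasma  # remainder ascribed to integrated point sources
print(f_point)  # 1/6
```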
The continuum shape of the 6.5~keV plasma is approximated by a power-law of ${\mit\Gamma}=2.4$, while that of the integrated point sources has ${\mit\Gamma}=0.9$ (\cite{Mu04}). Then, the flux-weighted mean value is ${\mit\Gamma} \sim 2.0$, consistent with ${\mit\Gamma}=1.9$ of PL2. The ${\mit\Gamma}=1.4$ power-law component is regarded as the sum of PL1 (${\mit\Gamma}=1.9$) and the point sources (${\mit\Gamma}=0.9$). Then, the weighted mean photon index of the power-law component becomes ${\mit\Gamma} \sim 1.6$, consistent with ${\mit\Gamma}=1.4$. From the above analysis, we infer that $\sim$1/6 of the GCDX is due to point sources that are already resolved by Chandra down to a flux of $3\times10^{-15}$~erg~cm$^{-2}$~s$^{-1}$. Since point sources fainter than this flux level must also be present in the GC, the 1/6 point-source contribution to the GCDX is a lower limit. \citet{Re06}, \citet{Re07a} and \citet{Re07b} proposed that the point-source population contributes largely to the Galactic ridge diffuse X-rays (GRDX). This scenario may also be applied to the GCDX. The point-source distribution should be symmetric with respect to Sgr A*. However, we found an east-west asymmetry of the 6.7~keV line flux ($F_{6.7}$), as demonstrated by the open (east) and filled (west) circles in figure 2b (see also figure 6 of \cite{Ko07d}). The mean flux in the east field is $\sim$1.5 times larger than that in the west field. As we already pointed out, the 1~keV plasma cannot account for such a large asymmetry (see section 4.1). Thus, the maximum possible contribution of the point sources to the GCDX is about 70\% for the east field and 100\% for the west field. In this case, however, the spectrum of the integrated point sources must be, approximately, a power-law with $\mit\Gamma$=1.9 (or a 17~keV temperature plasma) with sizeable 6.4~keV and 6.7~keV lines ($EW_{6.4}$= 150--700~eV, $EW_{6.7}$ = 200--800~eV) and a line ratio of $F_{6.97}$/$F_{6.7}$=0.3--0.4.
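The two flux-weighted indices quoted above follow from the shares in figure 7 (a hedged sketch: we take the plasma, resolved point-source and PL1 shares of $L_{5-10}$ to be 1/2, 1/6 and 1/3, with photon indices 2.4, 0.9 and 1.9 as in the text; the helper function is ours):

```python
def weighted_gamma(components):
    """Flux-weighted mean photon index; components = [(flux share, Gamma), ...]."""
    total = sum(share for share, _ in components)
    return sum(share * gamma for share, gamma in components) / total

# PL2 ~ 6.5 keV plasma continuum (Gamma ~ 2.4) + resolved point sources (Gamma ~ 0.9)
gamma_pl2 = weighted_gamma([(1/2, 2.4), (1/6, 0.9)])
# Gamma = 1.4 component ~ PL1 (Gamma ~ 1.9) + point sources (Gamma ~ 0.9)
gamma_14 = weighted_gamma([(1/3, 1.9), (1/6, 0.9)])
print(gamma_pl2, gamma_14)  # ~2.0 and ~1.6
```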
As far as we know, the most probable and popular point sources having such a hard spectrum with sizeable 6.4, 6.7 and 6.97~keV lines are intermediate polars (IPs). \citet{Ez99} compiled 14 ASCA data-sets of 12 IPs. The mean $EW_{6.4}$ and $EW_{6.7}$ are 140~eV and 220~eV, respectively (calculated from table 2 of \cite{Ez99}), which are systematically smaller than those in the GCDX (see figure 2d). Thus, it may be unlikely that a large fraction of the GCDX can be accounted for by such point sources. At this moment, however, we reserve any definite conclusion until more quantitative estimates and observations of both the point-source and the diffuse emission become available. \subsection{Origin of the 6.4~keV-line and the Clumps} The 6.4~keV emission originates from the inner-shell ionization of nearly neutral iron. A plausible cause of the inner-shell ionization is bombardment of the cloud gas by either high-energy electrons or high-energy X-rays. The former produces a relatively weak equivalent width of the 6.4~keV line ($EW_{6.4}$) of $\sim$0.3~keV (e.g. \cite{Ta03}), compared to the latter case of $EW_{6.4} \sim 1$~keV (for a solar abundance of iron) (e.g. \cite{Mu00}). In section 3.1, we showed that the K-shell lines from neutral atoms and the associated continuum (PL1) are prevalent in the GCDX. Substituting $EW_{6.7} = 0$ in equation 3, we obtain $EW_{6.4} = 1.4$~keV in PL1. Also, from the discussion in section 3.1, we can say that $\mit\Gamma$ for PL1 is $\sim$1.9. As noted in section 3.2, the spectral parameters of sources 1 and 2 (table 1) include the PL1 components in the background region. However, the best-fit $EW_{6.4} =$ 1.0--1.2~keV (table 1) and power-law index ${\mit\Gamma}=1.8$ are almost identical to those of PL1 in the background region. Hence, no essential change in the spectral parameters (table 1), other than a reduction of the absolute fluxes, is present.
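The abundance dependence of this argument can be sketched numerically (a rough, hedged scaling: we assume $EW_{6.4}$ grows roughly linearly with the iron abundance of the cloud, ignoring geometry and absorption; the reference values are those cited above):

```python
# Equivalent widths of the 6.4 keV line expected for solar iron abundance:
ew_electron = 0.3  # keV, electron bombardment (e.g. Tanaka et al.)
ew_xray     = 1.0  # keV, X-ray irradiation (e.g. Murakami et al.)
ew_observed = 1.2  # keV, upper end of the 1.0-1.2 keV range for sources 1 and 2

# Iron abundance (in solar units) each mechanism would need:
need_electron = ew_observed / ew_electron
need_xray = ew_observed / ew_xray
print(round(need_electron, 1), round(need_xray, 1))
```

With a roughly solar cloud abundance, only X-ray irradiation reproduces the observed equivalent width; electron bombardment would require about 4 times solar iron, as discussed next.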
The spectra of source 1 and source 2 were studied with Chandra and XMM-Newton (\cite{Yu02}, \cite{Pr03}, \cite{Yu07}). However, the best-fit $EW_{6.4}$ values scattered from observation to observation. This may be due to the GCDX subtraction, because the GCDX varies from position to position. In fact, the results of the off-plane background subtraction, where $F_{6.7}$, and hence $L2_{5-10}$, is smaller than in the source region, give systematically smaller $EW_{6.4}$ than those of the nearby-background subtraction. We argue that the present results of $EW_{6.4}\sim$1.0--1.2~keV are reliable, because the GCDX is taken from a nearby background, so that possible spatial variations of the GCDX are best estimated (see section 3.1). The $EW_{6.4}$ value of $\sim$1.0--1.2~keV is consistent with X-ray irradiation, unless the iron abundance in the east and west fields is $\sim$4--5 times solar. Although we have no conclusive data on the iron abundance in the cold cloud, the iron abundance in the GCDX is likely to be $\sim$1 solar (see section 4.1). Therefore, the observed equivalent width of the 6.4~keV line in sources 1 and 2 favors, though not conclusively, an X-ray irradiation origin rather than electron bombardment. The iron K-edge ($N_{\rm H}$) depth at 7.1~keV is another key parameter for judging the origin of the 6.4~keV clumps. If the $N_{\rm H}$ values are far larger than that of the Galactic absorption toward the GC, then the electron origin may not be favored (see \cite{Ta03}). The $N_{\rm H}$ value is, however, sensitive to the NXBG subtraction, because the K-edge measurement relies on the spectrum above $\sim$7~keV, where the NXBG becomes significant. Since Suzaku has a low and stable NXBG compared to those of Chandra and XMM-Newton (\cite{Ko07a}), we argue that the present result of $N_{\rm H} \sim 2 \times 10^{23}$~cm$^{-2}$ (assuming a 1 solar abundance of iron) is more reliable.
This value is somewhat larger than that toward the general GC region (see section 4.1), but is still not conclusive for judging whether the origin is X-rays or electrons. The most direct evidence favoring the X-ray origin is the time variability of the clumps, as was reported by \citet{Mu07} for the sub-structures of source 1. Also, the time variability of Sgr~B2, another 6.4~keV clump, was found by \citet{Ko08}. We further confirmed the time variability of source 1 from the Chandra (2002) to the Suzaku (2005) observations at the $\sim$5$\sigma$ confidence level. The real size of source 1 is a few light-years, and hence the 3-year time variability is possible only if the physical information travels across the source at nearly the speed of light, as X-ray irradiation does. Such a speed cannot be attained by electrons or any other massive particles. \bigskip The authors thank all of the Suzaku team members, especially H. Uchiyama, H. Nakajima, H. Yamaguchi, and H. Mori for their support and useful information on the XIS performance. This work is supported by Grant-in-Aids from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan, the 21st Century COE ``Center for Diversity and Universality in Physics'', Scientific Research A (KK), Priority Research Areas in Japan ``New Development in Black Hole Astronomy'' (TGT), and a Grant-in-Aid for Young Scientists B (HM). HM is also supported by the Sumitomo Foundation, Grant for Basic Science Research Projects, 071251, 2007. TI and YH are supported by the JSPS Research Fellowship for Young Scientists.
\section{Introduction} \label{intro} In \cite{M1,M2}, Milnor defined invariants of links, known as the Milnor $\overline \mu$ invariants. In fact, these invariants are not universally defined, \emph{i.e.,} if the lower order invariants do not vanish, they are either not defined, or at best, they have indeterminacies. In \cite{HL1}, the notion of string link was introduced, together with the philosophy that Milnor's invariants are actually invariants of string links. Indeterminacies are determined precisely by the indeterminacy of representing a link as the closure of a string link. This philosophy led to the classification of links up to homotopy, and to an algorithm constructed by Xiao-Song. (Here and throughout, we will often refer to Xiao-Song Lin by his first name.) More precisely, Xiao-Song and the first author constructed an orbit space structure for the set of links up to homotopy. The group action was `unipotent', meaning it acted trivially on the successive layers of the nilpotent homotopy string link group. This was the determining structural feature which underlay the successful construction of Xiao-Song's algorithm. In \cite{HL2}, an analogous orbit space structure for link concordance was obtained and a study of the algebraic part of link concordance, corresponding to the Milnor concordance invariants, was made. The theory also applies to more general `concordance-type' equivalence relations, in particular to those studied by Kent Orr \cite{O} and developed in Xiao-Song's thesis. With the advent of the physical interpretation of the Jones Polynomial \cite{J}, predicted by Atiyah \cite{A} and established by Witten \cite{W}, a whole new area, known as Quantum Topology, emerged. Its perturbative aspects are succinctly summarized in the Universal Finite Type Invariant known as the Kontsevich Integral \cite{K}. Recall that, in the seminal paper \cite{L}, Xiao-Song had shown that Milnor Invariants are finite type invariants of string links. 
We refer the reader to the paper of the first author and Gregor Masbaum \cite{HM}, where a formula is given which computes the Milnor Invariants directly from the Kontsevich Integral. No successful attempt has been made at applying the methods of \cite{HL1,HL2} to the Vassiliev invariants \cite{V}. (The Vassiliev invariants were shown by Xiao-Song and Joan Birman \cite{BL} to be those invariants which satisfy the properties of finite type invariants. Subsequently, Bar-Natan \cite{B} adopted those properties as axioms for finite type invariants.) This is so, because, as we show, the classification scheme does \emph{not} hold. Thus, in the philosophy of \cite{HL1,HL2}, the finite type invariants of links ought to be refined. In this paper, we make such a refinement and show that after refinement, the classification scheme applies. We also show it applies to $C_n$-equivalence and to Self-$C_n$-equivalence. \subsection*{Acknowledgments} The authors extend thanks to J\o rgen Andersen and Bob Penner for organizing the stimulating conference \emph{Finite Type Invariants, Fat Graphs and Torelli-Johnson-Morita Theory} at the CTQM in Aarhus, March 2008, during which this work was done. They also thank J\o rgen Andersen and Gw\'ena\"el Massuyeau for useful conversations. The first author thanks the University of Nantes for releasing his time for research. He also wishes to thank Zhenghan Wang for the invitation to speak at the conference in honor of Xiao-Song at the Chern Institute in Tianjin, July 2007. It was there that he was baptized by Xiao-Song's wife, Jean, during a short ceremony in which he received his Chinese name, Ju Li Wu Xiao-Song. \section{Preliminaries} Let $D^2$ be the standard two-dimensional disk, and let $I$ denote the unit interval. Recall from \cite{HL1} the notion of string link. \begin{defi} Let $l\ge 1$. 
An $l$-component string link is a proper embedding, \[ \sigma : \bigsqcup_{i=1}^l I_i \rightarrow D^2\times I, \] of the disjoint union $\bigsqcup_{i=1}^{l} I_i$ of $l$ copies of $I$ in $D^2\times I$, such that the $j=0,1$ levels are preserved and $\partial_j \s \subset D^2\times \{ j\}$ is the standard inclusion of $l$ points in $D^2$. By an abuse of notation, we will also denote by $\s \subset D^2\times I$ the image of the map $\s$. \end{defi} Note that we do not require that the $t$ levels, for $t\in I$, be preserved. A string link is a pure braid precisely when it preserves the $t$ levels for all $t\in I$. Note also that each string of an $l$-component string link is equipped with an (upward) orientation induced by the natural orientation of $I$. The set $SL(l)$ of isotopy classes of $l$-component string links (fixing the boundary) has a monoidal structure, with composition given by the \emph{stacking product} and with the trivial $l$-component string link $1_l$ as unit element. See Figure \ref{SL}. \begin{figure}[!h] \includegraphics{sl.eps} \caption{Multiplying two $2$-component string links.}\label{SL} \end{figure} \begin{rem} In the above, one may replace the disk $D^2$ with any surface $S$ to get the notion of a string link in $S\times I$. The set of $l$-component string links in $S\times I$, up to isotopy, again has a monoidal structure. \end{rem} We denote by $L(l)$ the set of isotopy classes of $l$-component links. By a link, we mean an embedding $\bigsqcup_{i=1}^{l} {\bf S}^1_i \rightarrow {\bf R}^3$. Thus the components are ordered and oriented. There is an obvious surjective \emph{closure} map \[ \hat\ : SL(l)\longrightarrow L(l) \] which closes an $l$-component string link $\s$ into an $l$-component link $\hat\s$. In \cite{HL1}, Xiao-Song and the first author introduced a certain left (resp. right) action of the monoid of isotopy classes of $2l$-component string links on $l$-component string links. See Figure \ref{leftright} for an illustration of these actions.
Thus given two $l$-component string links $\s$, $\s'$, and a $2l$-component string link $\Sm$, one has $l$-component string links $\Sm \s$, $\s\Sm $, and a closed link $\s\Sm \s'$. \begin{figure}[!h] \includegraphics{actions.eps} \caption{Schematical representations of the left and right actions of $\Sigma$ on $\s$, $\Sigma \s$ and $\s \Sigma$, and of the closed link $\s \Sigma \s'$.}\label{leftright} \end{figure} One may represent the closure $\hat\s$ of a string link $\s$ as $1_l 1_{2l}\s$, as well as $\s 1_{2l}1_l$, and also as $1_l (1_l\otimes\s) 1_l$, where $\s_1\otimes\s_2$ denotes the $2l$-component string link obtained by horizontal juxtaposition. (One orients all strands appropriately in the above, e.g., in $\Sigma$, one must reverse the parametrization of the first $l$ strands.) The following result on basing links was proven in \cite{HL2}. \begin{prop}\label{basing} Let $\s_1$, $\s_2$ be two $l$-component string links whose closures are isotopic. Then there is a $2l$-component string link $\Sm$, with $1_l\Sm $ isotopic to $\s_1$, and $\Sm 1_l$ isotopic to $\s_2$. \end{prop} \section{The Habegger-Lin Classification Scheme} In \cite{HL2} a structure theorem was proven for certain `concordance-type' equivalence relations on the set of links. Given here for the convenience of the reader, though stated slightly differently, the result is in fact implicit in the proof in \cite{HL2}. Consider an equivalence relation $E$ on string links and on links (for all $l$), which is implied by isotopy. We will denote by $E(x)$, the $E$ equivalence class of $x$. We denote by $ESL(l)$, resp. $EL(l)$, the set of $E$ equivalence classes of $l$-component string links, resp. links. We will also denote by $E$ the map which sends a link or string link to its equivalence class. 
Consider the following set of Axioms for an equivalence relation $E$: For $i=1,2$, let $\s_i$ be $l$-component string links with $E(\s_1)=E(\s_2)$, and let $\Sm_i$ be $2l$-component string links with $E(\Sm_1)=E(\Sm_2)$. \be \item[(1)] $E(\hat\s_1)=E(\hat\s_2)$ \item[(2)] $E(1_l\otimes\s_1)=E(1_l\otimes\s_2)$ \item[(3)] $E(\s_1 \Sm_1)=E(\s_2\Sm_2)$ \item[(4)] $E(\Sm_1 \s_1)=E(\Sm_2\s_2).$ \item[(5)] For all string links $\s$, there is a string link $\s_1$, such that $E(\s\s_1)=E(1_l)$. \item[(6)] If $E(L)=E(L')$, then there is an $m$ and a sequence of string links $\s_i$, for $i=1,\dots, m$, such that $L$ is isotopic to $\hat\s_1$, and $L'$ is isotopic to $\hat\s_m$, and for all $i$, $1\le i< m$, either $E(\s_i)=E(\s_{i+1})$, or $\hat\s_i$ is isotopic to $\hat\s_{i+1}$ (i.e., the equivalence relation $E$ on links is generated by the equivalence relation of isotopy on links and the equivalence relation $E$ on string links). \item[($5'$)] For all string links $\s$, $E(\s\overline\s)=E(1_l)$. Here the string link $\overline \s$ is defined by, $\overline \s= R_t\circ\s \circ R_s$, where $R_s$ and $R_t$ are the reflection mappings at the source and target. \ee \begin{defi} An equivalence relation satisfying Axioms $(1)-(4)$ is called local. \end{defi} We have the following result. \begin{prop}\label{2.1} Let $E$ be a local equivalence relation. For $i=1,2$, let $\s_i$ and $\s'_i$ be $l$-component string links with $E(\s_1)=E(\s_2)$ and $E(\s'_1)=E(\s'_2)$, and let $\Sm_i$ be $2l$-component string links with $E(\Sm_1)=E(\Sm_2)$. Then $E(\s_1\s'_1)=E(\s_2\s'_2)$ and $E(\s_1 \Sm_1 \s'_1)=E(\s_2\Sm_2\s'_2)$. The monoidal structures, the left (resp. right) action and the closure mapping all pass to maps of equivalence classes. Let $ES^R(l)$ (resp. $ES^L(l)$), denote the right (resp. left) stabilizer of the unit element of $ESL(l)$. Then $ES^R(l)$ (resp. $ES^L(l)$) is a submonoid of $ESL(2l)$. 
Furthermore, the closure mapping of $ESL(l)$ to $EL(l)$ passes to the set of orbits of the $ES^R(l)$ (resp. $ES^L(l)$) action, i.e., we have a map $$ {{ESL(l)}\over{ES^R(l)}} \longrightarrow EL(l).$$ If, in addition, Axiom $(5)$ holds, then the monoid $ESL(l)$ is a group, and $ES^R(l)$ (resp. $ES^L(l)$) is a subgroup of $ESL(2l)$. If Axiom $(5')$ holds, then $ES^R(l)=ES^L(l)$. \end{prop} \begin{proof} By Axiom $(2)$, $E(1_l\otimes\s'_1)=E(1_l\otimes\s'_2)$. Using Axiom $(3)$, we have that $E(\s_1\s'_1)=E(\s_1(1_l\otimes\s'_1))=E(\s_2(1_l\otimes\s'_2))=E(\s_2\s'_2)$. One defines $E(\s_1) E(\s_2)= E(\s_1\s_2)$. This is well defined by the above, and the element $E(1_l)$ is a unit. One also defines $E(\s) E(\Sm)= E(\s\Sm)$ and $E(\Sm) E(\s)= E(\Sm\s)$. These are well defined, by Axioms $(3)$ and $(4)$, and are monoidal actions (of sets). Suppose $\s$ and $\s'$ define the same element of $ {{ESL(l)} \over {ES^R(l)}}$, i.e., there is $E(\Sm)\in ES^R(l)$ such that $E(\Sm)E(\s)=E(\s')$. One has $E(\hat {\s}')= E(1_l\Sm\s)=E(1_l 1_{2l}\s)=E(\hat\s)$. If Axiom $(5)$ holds, each element $E(\s)$ has a right inverse. Hence $ESL(l)$ is a group, for all $l$. If $E(\Sm)$ belongs to $ES^R(l)$, then $E(\overline\Sm)$ belongs to $ES^L(l)$. But if Axiom $(5')$ holds, then $E(\Sm)$ is the inverse of $E(\overline\Sm)$, so also belongs to $ES^L(l)$. This proves one inclusion and the other is proven similarly. \end{proof} \begin{thm}[Structure Theorem for $E$-equivalence] \hfill \be \item Let $E$ be a local equivalence relation satisfying Axiom $(5)$. Then the quotient map \[SL(l)\longrightarrow {{ESL(l)} \over {ES^R(l)}} \] factors through the closure mapping, i.e., we have a link invariant \[ \widetilde{E}: L(l) \longrightarrow {{ESL(l)} \over {ES^R(l)}} \] such that the composite map to $EL(l)$ is $E$. \item Furthermore, if Axiom $(6)$ also holds, then we have a bijection $$ {{ESL(l)} \over {ES^R(l)}} = EL(l).$$ \ee \end{thm} \begin{proof} Suppose $\hat\s$ is isotopic to $\hat\s'$.
By Proposition \ref{basing}, one has that, for some $\Sm_0$, $\s'$ is isotopic to $\Sm_0 1_l$ and $\s$ is isotopic to $1_l\Sm_0 $. Set $\Sm= \Sm_0(1_l\otimes \s_1)$, where $E(\s_1)$ satisfies $E(\s\s_1)=E(1_l)$ (and hence also $E(\s_1\s)=E(1_l)$). One has that $E(1_l)E(\Sm)=E(1_l \Sm)= E(\s\s_1)=E(1_l)$, so $E(\Sm)\in ES^R(l)$. See Figure \ref{proof1}. \begin{figure}[!h] \includegraphics{proof1.eps} \caption{Proof that $\Sigma$ lies in $ES^R(l)$. } \label{proof1} \end{figure} \noindent Finally, one has that $E(\Sm)E(\s)=E(\Sm\s)=E(\Sm_0)E(\s_1\s)=E(\Sm_0)E(1_l)= E(\Sm_0 1_l)=E(\s')$. See Figure \ref{proof2}. \begin{figure}[!h] \includegraphics{proof2.eps} \caption{Proof that $\Sigma \sigma$ is equivalent to $\sigma'$.}\label{proof2} \end{figure} \noindent This completes the proof of (1). To see (2), note that we have already shown that if the closures of two string links are isotopic, then they define the same element of $ {{ESL(l)} \over {ES^R(l)}}$. Thus we have that for all $i$ in Axiom $(6)$, $E(\s_i)$ and $E(\s_{i+1})$ both agree in $ {{ESL(l)} \over {ES^R(l)}}$. Thus the surjective map from $ {{ESL(l)} \over {ES^R(l)}}$ to $ EL(l)$ is injective. \end{proof} \section{Structure Theorems for $C_n$-equivalence and for Self-$C_n$-equivalence.} We will denote by $FT_n$ the equivalence relation on tangles determined by finite type equivalence up to degree $n$, i.e., $FT_n$-equivalent tangles differ by an element in the $(n+1)$st term of the Vassiliev filtration. In \cite{H}, K. Habiro showed that, \emph{for knots}, $FT_n$-equivalence agrees with another equivalence relation, called $C_{n+1}$-equivalence. Habiro conjectured in \cite{H} that for string links, $FT_n$-equivalence is equivalent to $C_{n+1}$-equivalence. Habiro also showed that for links, the result does not hold. Note that, since the structure theorem holds for $C_{n+1}$-equivalence, if the equivalence relations were the same both for string links and for links, it would also hold for $FT_n$-equivalence.
However, for $FT_n$ equivalence, the structure theorem does not hold (see Theorem 5.1 and the Borromean ring example of Section 5). By definition, two tangles are said to be $C_n$-equivalent, if there is a finite sequence of tree clasper surgeries, of degree greater than or equal to $n$, taking one tangle to the other, up to isotopy. See \cite{H} for the definition. (Note that in \cite{H}, a tree clasper is called an admissible, strict tree clasper.) Here the leaves of the tree can be assumed to be trivial and intersect the tangle in a single point. It is known that $C_{n+1}$-equivalent tangles are $FT_n$-equivalent (see \cite[\S 6]{H}). By definition, two tangles are said to be Self-$C_n$-equivalent, if there is a finite sequence of tree clasper surgeries, of degree greater than or equal to $n$, taking one tangle to the other, up to isotopy, such that the leaves of each tree are restricted to all intersect the same tangle component. \begin{rem} Self-$C_n$-Equivalence, for $n=1$, is link-homotopy. For $n=2$ it is also known as Self-Delta equivalence. \end{rem} $C_n$-equivalence and Self-$C_n$-equivalence are obviously local, i.e., they satisfy Axioms $(1)-(4)$ of Section 3. Axiom 5 was shown in \cite[Theorem 5.4]{H} for $C_n$-equivalence. \begin{prop}\label{5.1} Self-$C_n$-equivalence satisfies Axiom $(5)$ of Section 3. \end{prop} \begin{prop}\label{5.2} $C_n$-equivalence and Self-$C_n$-equivalence satisfy Axiom $(6)$ of Section 3. \end{prop} Applying Theorem 3.2, one has the following result. \begin{thm}[Structure Theorem for $C_n$-equivalence and Self-$C_n$-equivalence] $$ {{C_n SL(l)} \over {C_nS^R(l)}} = C_n L(l).$$ $$ {{\textrm{\textit{Self-C}}_n SL(l)} \over {\textrm{\textit{Self-C}}_nS^R(l)}} = \textrm{\textit{Self-C}}_n L(l).$$ \end{thm} \begin{proof}[Proof of Proposition \ref{5.2}] Suppose that $L'$ is obtained from $L$ by surgery on a disjoint union $F$ of tree claspers of degree $\ge n$. Let $L$ be the closure of $\s$. 
Since the disk base for $L$ retracts onto a 1-complex, we may assume it is disjoint from $F$. Thus $L'$ is the closure of a string link $\s'$, obtained from $\s$ by surgery on a union of tree claspers of degree $\ge n$. This shows that $C_n$-equivalence (resp. Self-$C_n$-equivalence) for links is implied by $C_n$-equivalence (resp. Self-$C_n$-equivalence) for string links and isotopy. \end{proof} Proposition \ref{5.1} is a special case of the following result. \begin{prop}\label{5.3} Let $S$ be any surface. Self-$C_n$-equivalence (and consequently $C_n$-equivalence) classes of string links in $S\times I$ form a group. \end{prop} \begin{proof}[Proof of Proposition \ref{5.3}] The proof is by induction on the number $l$ of components. For $l=1$, Self-$C_n$-equivalence is $C_n$-equivalence, so we may invoke \cite[Theorem 5.4]{H}. Suppose the result is true for $l-1$. Removing the first component from $\s$, we have an $(l-1)$-component string link $\s_0$. By the induction hypothesis, $\s_0$ has an inverse $\s_1$, up to Self-$C_n$-equivalence. Let $\s'=\s(1_1\otimes\s_1)$. It suffices to find a right inverse for $\s'$, up to Self-$C_n$-equivalence. Note that the string link $\s'_0$, obtained from $\s'$ by removing the first component, is Self-$C_n$-equivalent to the trivial string link $1_{l-1}$. Thus $1_{l-1}$ is obtainable from $\s'_0$ by surgery on a disjoint union $F$ of trees of degree $n$ such that the leaves of each tree are restricted to intersect a single component. We may assume that $F$ is disjoint from the first component of $\s'$. Perform surgery on $F$ to obtain from $\s'$ a string link $\s''$. As $\s'$ is Self-$C_n$-equivalent to $\s''$, it remains to find a right inverse for $\s''$. Note that after removing the first component of $\s''$, we obtain the trivial $(l-1)$-component string link.
Thus if we remove from $\s''$ the last $l-1$ components, we have a one-component string link $\s''_0$ in $S'\times I$, where $S'$ is the surface obtained from $S$ by removing $l-1$ points. Since, by the result for $l=1$, the string link $\s''_0$ has a right inverse, up to Self-$C_n$-equivalence, so does $\s''$. \end{proof} \section{The Indeterminacy of Finite Type Invariants.} In this section we assume the reader is familiar with the notion of finite type invariant as well as the Kontsevich Integral, which is the Universal Finite Type Invariant. Recall from the last section that we have denoted by $FT_n$ the equivalence relation on tangles determined by finite type equivalence up to degree $n$, i.e., $FT_n$-equivalent tangles differ by an element in the $(n+1)$st term of the Vassiliev filtration. Let us begin with a disturbing fact about finite type invariants of links. The Borromean Rings are distinguished from the unlink by the triple Milnor Invariant. Unfortunately, this invariant, which is really only defined as an integer when the linking numbers of the 2-component sublinks vanish, dies in the space of trivalent Feynman diagrams (also known as Jacobi diagrams) on 3 circles. This is because, when passing from 3 intervals to 3 circles, invariants of linear combinations of string links which die upon closure must also die upon closure for other, equivalent linear combinations. This can be seen using the Kontsevich Integral. Specifically, in the space of Jacobi diagrams on $3$ intervals we have \[ \includegraphics{stu.eps} \] where the right-hand side is obviously mapped to zero when closing. (Recall that the coefficient of the $Y$-shaped diagram on the left-hand side corresponds to the triple Milnor Invariant.) To see how this comes about more geometrically, consider the free group on 2 generators as a subgroup of the 3-component pure braid group. The word $xyx^{-1}y^{-1}$ represents the Borromean rings, after closure.
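The degree count for this word can be made explicit via the Magnus expansion $x \mapsto 1+X$, $y \mapsto 1+Y$ into formal power series in the non-commuting variables $X, Y$ (a standard computation, reproduced here for the reader's convenience):

```latex
\begin{align*}
xyx^{-1}y^{-1} &\;\mapsto\; (1+X)(1+Y)(1+X)^{-1}(1+Y)^{-1} \\
               &\;=\; (1+X+Y+XY)\bigl(1-X-Y+XY+X^2+Y^2+\cdots\bigr) \\
               &\;=\; 1 + XY - YX + (\text{terms of degree} \ge 3),
\end{align*}
```

so that $xyx^{-1}y^{-1}-1 = XY-YX + (\text{degree} \ge 3)$ has degree exactly $2$, its degree-$2$ part being the commutator $[X,Y]$, reflecting the triple Milnor invariant of the Borromean rings.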
Since $xyx^{-1}$ and $y$ are conjugate, and thus agree after closure, we see that the quantity $xyx^{-1}y^{-1}-1$, which (say after applying the Magnus expansion) is in degree 2 before closure, lies in degree 3 after closure, since it agrees after closure with the quantity $(xyx^{-1}y^{-1}-1)(y-1)$, which is in degree 3. (The degree considerations here are valid in the Vassiliev filtration as well.) Thus we see that we can no longer distinguish the Borromean rings from the unlink! In summary, the indeterminacies of higher order invariants due to the non-vanishing of lower order ones propagate to destroy what should be invariants of links whose lower order invariants vanish. We are thus led to the problem of refining the indeterminacies in a less algebraic way. We are guided by the structure theorem of the last section. Rationally, it is known, see \cite{HM}, that the set of rational finite type equivalence classes of $l$-component string links is a finitely generated torsion-free nilpotent group. Over the integers, it follows from the last section, since $C_{n+1}SL(l)$ is a group and surjects to $FT_nSL(l)$, that $FT_nSL(l)$ is also a group. The set $FT_nSL(2l)$ acts on $FT_nSL(l)$ on the left and right. Let $FT_nS^R(l)$ denote the stabilizer of $FT_n(1_l)$ under the right action. $FT_nL(l)$ denotes the set of $FT_n$ equivalence classes of $l$-component links. The main result of this paper is the following. \begin{thm}[Structure Theorem for Finite Type Equivalence] \hfill \be \item The projection mapping of $SL(l)$ to the set $\frac{FT_nSL(l)}{FT_nS^R(l)}$ of left $FT_nS^R(l)$-orbits factors through $L(l)$ and thus gives a well-defined invariant of links \[ \widetilde{FT_n}: L(l) \longrightarrow {{FT_nSL(l)} \over {FT_nS^R(l)}}.
\] \item The above link invariant lifts the indeterminacies given by finite type invariants of links, i.e., if two links determine the same element of $\frac{FT_nSL(l)}{FT_nS^R(l)}$, then they have the same finite type invariants up to degree $n$. That is, the above map, $\widetilde{FT_n}$, factors through a (surjective, but not generally injective) map, \[ {{FT_nSL(l)} \over {FT_nS^R(l)}} \longrightarrow FT_n L(l), \] and the composite mapping is \[{FT_n}: L(l) \longrightarrow FT_n L(l). \] \ee \end{thm} \begin{proof} Axioms $(1)-(4)$ follow from the local definition of the Vassiliev filtration. Axiom $(5)$ follows from the remark above that $FT_nSL(l)$ is a group. \end{proof} \begin{rem} The analogous theorem also holds if one restricts to the equivalence relation $FT_n^Q$, given by rational invariants of finite type of degree up to $n$. One can use the local property of the Kontsevich Integral (and the result cited above from \cite{HM}) to give an alternative proof of the Axioms $(1)-(5)$ in this case. Let $A_{\le n}(l)$ denote the algebra of Jacobi diagrams on $l$ strands of degree up to $n$. The action of $FT_n^QSL(2l)$ on the set $FT_n^QSL(l)$ is induced, via the Kontsevich Integral, by an analogously defined action of $A_{\le n}(2l)$ on $A_{\le n}(l)$, given purely diagrammatically. (In the definition of the action of string links, just replace the string links with diagrams.) Let $A_{\le n}(2l)_1$ be the stabilizer of the unit element in $A_{\le n}(l)$. The stabilizer $A_{\le n}(2l)_1$ contains $FT_n^QS^R(l)$. It is easily seen that there are surjective maps of the space of covariants $A_{\le n}(l)/FT_n^QS^R(l)$ to the space of covariants $A_{\le n}(l)/A_{\le n}(2l)_1$, and from $A_{\le n}(l)/A_{\le n}(2l)_1$ to the space $A_{\le n}(\coprod_{i=1}^{l} {\bf S}^1_i )$ of diagrams on $l$ circles, up to degree $n$. Using the link invariance of our theorem, and the universal property of the Kontsevich Integral, one can check that these maps are both isomorphisms. 
(We do not have a diagrammatical proof of this fact.) It follows that one should not pass to covariants to try to refine finite type invariants of links! We conclude this section with several problems. \begin{Problem} Use the `unipotent' action to write an algorithm, analogous to Xiao-Song Lin's link-homotopy algorithm, `calculating' whether or not two (string) links determine the same element in the orbit space. \end{Problem} \begin{Problem} Does the full Kontsevich Integral for links (or integrally, modulo the intersection of the Vassiliev filtration) `recapture' the information lost at each finite level? (For example, the triple Milnor Invariant dies, but its cube does not. But of course the degree is now 6 and not 2.) \end{Problem}
0804.2259
\section{Introduction} A small fraction of Sun-like stars have giant planets with orbital periods smaller than about 10 days (Marcy et al.~2005, Udry \& Santos 2007). The existence of these planets was a surprise, because it was expected that giant planets would only be found beyond the ``snow line,'' with orbital distances greater than a few astronomical units. Other surprises have come from detailed studies of individual objects. Some are found on highly eccentric orbits (Johnson et al.~2006, Bakos et al.~2007, Maness et al.~2007, Johns-Krull et al.~2008). Some have mean densities that are quite small (Knutson et al.~2007, Mandushev et al.~2007) or large (Sato et al.~2005, Torres et al.~2007) in comparison with Jupiter. However, in at least one sense, the close-in giant planets have fulfilled prior expectations: they orbit their host stars in the prograde direction, relative to the sense of the stellar rotation. This is true, at least, of the 6 systems for which measurements of spin-orbit alignment have been reported (Queloz et al.~2000; Wolf et al.~2007; Narita et al.~2007a,b; Loeillet et al.~2007; Winn et al.~2005, 2006, 2007a). In all of these cases but one, the sky projections of the orbital axis and the stellar rotation axis are observed to be fairly well-aligned, with measurement precisions ranging from about 1.5 to 30 deg. The exception is HD~17156, for which the angle between those axes was found to be $62\pm 25$~deg (Narita et al.~2007b). In all of these cases, the measurement technique relies upon the Rossiter-McLaughlin (RM) effect, the anomalous Doppler shift that occurs during transits due to stellar rotation (see, e.g., Queloz et al.~2000, Ohta et al.~2005, Gim\'enez 2006, Gaudi \& Winn 2007, Winn 2007). A close alignment between the orbital and rotational axes seems natural because this pattern prevails in the Solar system, and because the angular momenta of the parent star and the planetary orbits presumably derive from the same protostellar disk. 
However, some theories of planetary migration---proposed to explain how giant planets attain short-period orbits---predict occasionally large misalignments (Chatterjee et al.~2007, Fabrycky \& Tremaine 2007, Wu et al.~2007, Nagasawa et al.~2008). These theories, as well as the general history of surprises in this field, provide motivation to continue measuring exoplanetary spin-orbit alignment. In this paper, we present a measurement of the RM effect for the transiting exoplanetary system TrES-2. This system was discovered by O'Donovan et al.~(2006). It consists of a planet with a mass of $1.2$~$M_{\rm Jup}$ and radius $1.2$~$R_{\rm Jup}$ orbiting a G0V star with a period of 2.5~d (O'Donovan et al.~2006, Holman et al.~2007, Sozzetti et al.~2007). It did not stand out as a promising RM target because the star is relatively faint ($V=11.4$) and is a slow rotator ($v\sin i_\star = 2.0\pm 1.5$~km~s$^{-1}$; O'Donovan et al.~2006). On the other hand, the transit occurs at a high impact parameter across the stellar disk ($b=0.8540\pm 0.0062$; Holman et al.~2007), a favorable circumstance for this type of measurement (Gaudi \& Winn 2007). Furthermore, in our continuing effort to measure the spin-orbit angles for a statistically meaningful number of systems, we do not want to ignore stars with small sky-projected rotation rates. This is because a small value of $v\sin i_\star$ might be caused by a small value of $\sin i_\star$, i.e., there might be a large spin-orbit misalignment. For these reasons, we pursued TrES-2. We describe the new data in \S~2, the model that we used to interpret the data in \S~3, and the results in \S~4. \section{Observations and Data Reduction} We observed a transit of TrES-2 on UT~2007~April~26 with the Keck~I 10m telescope and the High Resolution Echelle Spectrometer (HIRES; Vogt et al.~1994). We set up the instrument in the same manner that has been used consistently for the California-Carnegie planet search (Butler et al.~1996, 2006). 
In particular, we employed the red cross-disperser and used the I$_2$ absorption cell to calibrate the instrumental response and the wavelength scale. The slit width was $0\farcs85$ and the typical exposure time was 3--4~min, giving a resolution of about 70,000 and a signal-to-noise ratio (SNR) of approximately 200~pixel$^{-1}$. We observed the star for 4~hr bracketing the predicted transit midpoint and obtained a total of 56 spectra, of which 30 were taken during the transit. We also obtained two iodine-free spectra, with a higher SNR and higher resolution. We used the sum of these spectra as a template for the Doppler analysis, which was performed with the algorithm of Butler et al.~(1996). We estimated the measurement error in the Doppler shift derived from a given spectrum based on the scatter among the solutions for individual 2~\AA~sections of the spectrum. The typical error was 6~m~s$^{-1}$. The data are given in Table~1 and plotted in Figs.~1 and 2. Also shown in those figures are data obtained previously by O'Donovan et al.~(2006), consisting of 11 velocities measured with Keck/HIRES using a different setup\footnote{Table~3 of O'Donovan et al.~(2006) gives incorrect values for the heliocentric Julian dates of the velocity measurements. The corrected dates were provided to us by D.~Charbonneau (private communication, 2007).}, as well as the photometric data of Holman et al.~(2007). \begin{figure}[p] \epsscale{1.0} \plotone{f1.eps} \caption{ Radial velocity measurements of TrES-2, from this work and from O'Donovan et al.~(2006), as a function of orbital phase. The best-fitting values of the systemic velocity have been subtracted. The solid line is the best-fitting model. \label{fig:1}} \end{figure} \begin{figure}[p] \epsscale{0.75} \plotone{f2.eps} \caption{ {\it Top.} The $z$-band photometry of Holman et al.~(2007), averaged into 1.5~min bins. The solid line is the best-fitting model.
{\it Middle.} A close-up of the radial velocity data shown in Fig.~1, centered on the midtransit time. {\it Bottom.} Same, but the orbital velocity has been subtracted and the post-midtransit data ($t > 0$) have been inverted about the origin ($t\rightarrow -t$ and $\Delta v \rightarrow -\Delta v$), highlighting the Rossiter-McLaughlin anomaly. Filled symbols denote data from before midtransit, and open symbols denote data from after midtransit. \label{fig:2}} \end{figure} \section{The Model} To determine the projected spin-orbit angle and its uncertainty, we simultaneously fitted a parametric model to the radial-velocity data as well as the photometric data of Holman et al.~(2007). We included the photometric data as a convenient way to account for the uncertainties in the photometric parameters and their covariances with the spin-orbit parameters, although in practice the photometric uncertainties were irrelevant for this system. The model is based on a circular orbit of a star and planet. The photometric transit model was identical to the model used by Holman et al.~(2007). To calculate the anomalous Doppler shift as a function of the positions of the planet and star, we used the technique of Winn et al.~(2005): we simulated in-transit spectra, and determined the Doppler shifts using the same algorithm used on the actual data. The simulations rely on a template spectrum (described below) that is meant to mimic the emergent spectrum from a small portion of the photosphere. At a given moment of the transit, we denote by $\epsilon$ the fractional loss of stellar flux, and we denote by $v_p$ the line-of-sight velocity of the occulted portion of the stellar disk. To represent the occulted portion of the stellar spectrum, we scaled the template spectrum in flux by $\epsilon$ and shifted it in velocity by $v_p$. We subtracted the scaled and shifted spectrum from a rotationally-broadened template spectrum and then ``measured'' the anomalous Doppler shift $\Delta v$. 
This was repeated for a grid of $\{\epsilon, v_p\}$, and a polynomial function was fitted to the resulting grid. We used this polynomial to calculate the anomalous Doppler shift $\Delta v$ as a function of $\epsilon$ and $v_p$, which are themselves functions of time. Differential rotation was ignored, as its effects are expected to be negligible (Gaudi \& Winn 2007). The template spectrum should be similar to that of TrES-2 but with slightly narrower lines because of the lack of rotational broadening. We experimented with two different empirical templates based on observations of similar stars,\footnote{The two stars were HD~38858 ($T_{\rm eff} = 5726$~K, $\log g = 4.51\pm 0.08$, [Fe/H]~=~$-0.23\pm 0.04$, $v\sin i_\star = 0.3\pm 0.5$~km~s$^{-1}$) and HD~66428 ($T_{\rm eff} = 5752$~K, $\log g = 4.49\pm 0.08$, [Fe/H]~=~$+0.31\pm 0.04$, $v\sin i_\star = 0.0\pm 0.5$~km~s$^{-1}$). The stellar parameters are from the SPOCS catalog (Valenti \& Fischer 2005).} finding that both templates gave results consistent with the function $\Delta v = -\epsilon~v_p$. This function is consistent with the analytic expressions of Ohta et al.~(2006) and Gim\'enez~(2006), even though those analytic expressions do not attempt to account for the spectral deconvolution. It is simpler than the quadratic or cubic functions that we have derived for other systems (Winn et al.~2005, 2006, 2007a). We do not know the reason for the difference, but it is possibly related to the much slower projected rotation speed of TrES-2.
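The simple relation $\Delta v = -\epsilon\,v_p$ can be illustrated with a short numerical sketch (our own simplification: a uniform, non-limb-darkened stellar disk with the planet fully on the disk; the numbers are rough TrES-2-like values, not the fitted parameters):

```python
import numpy as np

def rm_anomaly(x_p, r_ratio, vsini):
    """Anomalous Doppler shift Delta v = -eps * v_p for a planet at
    sky-plane position x_p along the rotation equator (in stellar radii),
    assuming a uniform, non-limb-darkened stellar disk.

    x_p     : sub-planet position, between -1 and 1 (stellar radii)
    r_ratio : planet-to-star radius ratio Rp/Rstar
    vsini   : projected stellar rotation speed (km/s)
    """
    eps = r_ratio ** 2          # fractional flux blocked (planet fully on disk)
    v_p = vsini * x_p           # line-of-sight velocity of the occulted region
    return -eps * v_p           # km/s

# Rough TrES-2-like numbers: Rp/Rstar ~ 0.13, v sin i ~ 1 km/s.
dv = rm_anomaly(0.9, 0.13, 1.0)
print(dv * 1000.0)  # about -15 m/s near the limb
```

With $v\sin i_\star \approx 1$~km~s$^{-1}$ the anomaly is only $\sim$15~m~s$^{-1}$, a few times the 6~m~s$^{-1}$ per-point errors, which is why the detection relies on combining all the in-transit points.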
The fitting statistic was \begin{eqnarray} \chi^2 & = & \sum_{j=1}^{1033} \left[ \frac{f_j({\mathrm{obs}}) - f_j({\mathrm{calc}})}{\sigma_{f,j}} \right]^2 + \sum_{j=1}^{67} \left[ \frac{v_j({\mathrm{obs}}) - v_j({\mathrm{calc}})}{\sigma_{v,j}} \right]^2 , \end{eqnarray} where $f_j$(obs) and $\sigma_{f,j}$ are the flux measurements and uncertainties of Holman et al.~(2007), and $v_j$(obs) and $\sigma_{v,j}$ are the radial-velocity measurements and uncertainties from our new data and from O'Donovan et al.~(2006). The two model parameters relating to the RM effect are the line-of-sight stellar rotation velocity ($v \sin i_\star$), and the angle between the projected stellar spin axis and orbit normal ($\lambda$). The projected spin-orbit angle $\lambda$ ranges from $-180\arcdeg$ to $+180\arcdeg$, and is measured counterclockwise on the sky from the projected stellar rotational angular-momentum vector to the projected orbital angular-momentum vector (see Ohta et al.~2007 or Gaudi \& Winn 2007 for a diagram). If we define stellar ``north'' by the sky projection of the stellar angular-momentum vector, then when $\lambda=0\arcdeg$ the axes are aligned and the planet moves directly ``eastward'' across the face of the star, for $0\arcdeg < \lambda < 90\arcdeg$ the planet moves ``northeast,'' and so forth. The other model parameters were the planetary mass ($M_p$); the stellar and planetary radii ($R_\star$ and $R_p$); the orbital inclination ($i$); the mid-transit time ($T_c$); and an additive constant velocity for each of the two different velocity data sets ($\gamma_1$ and $\gamma_2$). We allowed our velocities to have a different additive constant from the velocities of O'Donovan et al.~(2006) in order to account for systematic differences in the spectrograph setup and reduction procedures. We fixed the orbital period to be $2.47063$~days (Holman et al.~2007). 
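Schematically, the fitting statistic above combines the two data sets as follows (a sketch with synthetic arrays standing in for the real measurements):

```python
import numpy as np

def chi2(f_obs, f_calc, sigma_f, v_obs, v_calc, sigma_v):
    """Joint chi-square over photometric fluxes and radial velocities,
    mirroring the two sums in the fitting statistic of the text."""
    chi2_phot = np.sum(((f_obs - f_calc) / sigma_f) ** 2)
    chi2_rv = np.sum(((v_obs - v_calc) / sigma_v) ** 2)
    return chi2_phot + chi2_rv

# Schematic usage with made-up data of the same sizes (1033 fluxes, 67 RVs):
rng = np.random.default_rng(0)
f_obs = 1.0 + 0.001 * rng.standard_normal(1033)
v_obs = 6.0 * rng.standard_normal(67)
total = chi2(f_obs, np.ones(1033), 0.001, v_obs, np.zeros(67), 6.0)
# For a correct model, total should be near N = 1033 + 67 = 1100.
```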
We used a Markov Chain Monte Carlo algorithm to solve for the model parameters and their confidence limits, with uniform priors on all parameters. This algorithm and our implementation of it are described in detail elsewhere (see, e.g., Winn et al.~2007b). The minimum $\chi^2$ is 1127.6, with 1091 degrees of freedom, giving $\chi^2/N_{\rm dof} = 1.034$ and indicating an acceptable fit. \section{Results} The RM effect is certainly not obvious in Fig.~1, which shows the entire spectroscopic orbit. It is not even very obvious in the middle panel of Fig.~2, which focuses on the velocity data around the time of transit. However, our analysis shows that the RM effect was indeed detected. As mentioned above, for the best-fitting model, $\chi^2_{\rm min} = 1127.6$. If the parameter $v\sin i_\star$ is set equal to zero, thereby neglecting the RM effect, then $\chi^2_{\rm min} = 1135.8$, with the increase of $\Delta\chi^2 = 8.2$ arising from the velocity data during the transit. We conclude that the RM effect was detected with a signal-to-noise ratio (SNR) of approximately $\sqrt{8.2} = 2.9$. Gaudi \& Winn (2007) have given analytic formulas for the signal-to-noise ratio of RM observations as a function of the system and telescope parameters, under the assumption of Gaussian velocity errors. Using their Eqn.~(26) for this case, the forecasted SNR is 2.9, in agreement with the actual SNR. One might wonder how much this result was influenced by the inclusion of the photometric data. To check on this, we tried setting aside the photometric data and fitting only the 67 radial-velocity data points. We fixed the photometric parameters ($M_p$, $R_p$, $R_\star$, $i$, $T_c$, and $P$) at the values determined previously. In this case we found $\chi^2_{\rm min} = 63.7$. If $v\sin i_\star$ is set equal to zero, then $\chi^2_{\rm min} = 71.9$, giving $\Delta\chi^2 = 8.2$, just as in the full model fit. 
This confirms that the lowered $\chi^2$ is an effect of a better fit to the transit velocities, and that the uncertainties in the photometric parameters are negligible in this instance. The best-fitting model parameters are also consistent with good alignment of the spin and the orbit. Specifically, we find $\lambda = -9\pm 12$~deg, and $v\sin i_\star = 1.0\pm 0.6$~km~s$^{-1}$, where the quoted values are the medians of the {\it a posteriori}\, distributions returned by the MCMC algorithm, and the error bars represent 68\% confidence limits. Table~2 gives these results, along with some other relevant system parameters of TrES-2, for convenience. Visually, the RM effect is more apparent in the bottom panel of Fig.~2, in which the orbital velocity has been subtracted from the data, and the sampling rate has been effectively doubled by inverting the data through the origin ($t \rightarrow -t$ and $\Delta v \rightarrow -\Delta v$). This works because for $\lambda\approx 0$, the RM waveform is antisymmetric about the origin. Figure~3 shows the {\it a posteriori}\, probability distribution for $\lambda$ and the joint distribution of $\lambda$ and $v\sin i_\star$. The distribution for $\lambda$ resembles a slightly asymmetric Gaussian function to which is added a low-level uniform probability distribution. Although only the region from $-90\arcdeg$ to $+90\arcdeg$ is shown in Fig.~3, this low-level uniform distribution extends all the way from $-180\arcdeg$ to $+180\arcdeg$. The uniform background corresponds to the very lowest allowed values of $v\sin i_\star$. This makes sense because when the rotation rate is zero, the Rossiter anomaly vanishes and $\lambda$ is irrelevant. Values of $\lambda$ between $-90\arcdeg$ and $+90\arcdeg$ correspond to prograde orbits, for which the stellar and orbital angular momenta are in the same half-plane. The integrated probability between $-90\arcdeg$ and $+90\arcdeg$ is 98\%.
We conclude that the TrES-2 orbit is prograde with 98\% confidence. As an illustration of the constraints provided by our analysis, Fig.~4 shows a drawing of the face of the star and the orbit of the transiting planet. Our result for $v\sin i_\star$ is in agreement with the value reported by O'Donovan et al.~(2006), $2.0\pm 1.5$~km~s$^{-1}$, which was based on an analysis of the line broadening in an out-of-transit spectrum. This finding is also supported by an analysis of our own out-of-transit, iodine-free spectra, using the {\it Spectroscopy Made Easy}\, (SME) software package of Valenti \& Piskunov~(1996). The automated analysis gave a formal result of $v\sin i_\star = 0.5\pm 0.5$~km~s$^{-1}$, although the true uncertainty may be larger, since with a disk-integrated spectrum of such a slow rotator it is difficult to disentangle the effects of rotation, macroturbulence, microturbulence, and the instrumental profile. In particular, the SME code assumes ``typical'' values for the turbulent broadening mechanisms that are of the same magnitude as the rotation speed of TrES-2 (see \S~4.2-4.4 of Valenti \& Fischer 2005).\footnote{We investigated the consequences of accepting the SME result at face value, by imposing a Gaussian prior constraint on the $v\sin i_\star$ parameter with mean 0.5~km~s$^{-1}$ and standard error 0.5~km~s$^{-1}$. In that case, the MCMC analysis gave 68\%-confidence ranges of $-31$ to $1$~deg for $\lambda$ and $0.3$ to $1.1$~km~s$^{-1}$ for $v\sin i_\star$, and showed that the orbit is prograde with 95\% confidence. The constraint on $\lambda$ was weakened because the SME result favors slower rotation rates, for which the sensitivity of the RM waveform to $\lambda$ is reduced.} \begin{figure}[p] \epsscale{0.75} \plotone{f3.eps} \caption{ {\it Top.}---The probability distribution for $\lambda$, the angle between the sky projections of the orbital axis and the stellar rotation axis. 
{\it Bottom.}---The joint probability distribution of $\lambda$ and $v\sin i_\star$. The solid dot shows the best-fitting values. The contours represent 68\% and 95\% confidence limits. \label{fig:3}} \end{figure} \begin{figure}[p] \epsscale{0.75} \plotone{f4.eps} \caption{ Scale drawing of the TrES-2 system. The relative radii of the bodies and the impact parameter of the transit are taken from our best-fitting model. The ``north pole'' of the star is drawn with an arrow, and the curved arc shows the 68\%-confidence region for its orientation. The angle $\lambda$ is measured clockwise from the projected orbit normal vector (vertical dashed line) to the projected stellar north pole (tilted dashed line). The best-fitting value of $\lambda$ is negative. \label{fig:4}} \end{figure} \section{Summary and Discussion} We have monitored the apparent Doppler shift of TrES-2 throughout a transit of its giant planet and we have detected the Rossiter-McLaughlin effect. Using the available photometric and spectroscopic data, we have found good evidence that the orbit is prograde, as are the other 6 systems that have been measured (with the possible exception of HD~17156), and as are the planets in the Solar system. In this sense, our results for TrES-2 are not surprising. However, as mentioned in \S~1, some theories of planet migration do predict occasionally large misalignments. For example, Nagasawa et al.~(2008) investigated a scenario in which a planet is scattered into an eccentric, inclined orbit with a small periastron distance (as envisioned earlier by Rasio \& Ford 1996 and Marzari \& Weidenschilling 2002), and subsequently a more distant planet forces Kozai oscillations in the inner planet's eccentricity and inclination. If the periastron distance is small enough during the high-eccentricity phases, the orbit may circularize at a small orbital distance with a substantial inclination. 
Nagasawa et al.~(2008) found that this migration mechanism produces a very broad range of final inclinations, including a significant fraction of retrograde orbits. Of course, prograde orbits are also permitted in this scenario, and our finding of a prograde orbit for TrES-2 cannot be taken as evidence against this mechanism. We raise the issue only to show that a prograde orbit was not a foregone conclusion. Furthermore, we have shown it is possible to glean this information and measure the projected spin-orbit angle to within $12\arcdeg$, even for an 11th magnitude star with a slow projected rotation rate. A potentially important application of the RM effect is the detection of planets that are too small to be readily detected using other types of ground-based data. For example, in many cases of terrestrial planets detected by the {\it Corot}\, or {\it Kepler}\, satellites, it will be easier to observe the RM effect than to observe the star's orbital Doppler shift (and thereby measure the planet's mass). The theory underlying this idea has been discussed by Welsh et al.~(2004) and Gaudi \& Winn~(2007). The present work serves to illustrate this point with actual data. If TrES-2 had a rotation rate of 5~km~s$^{-1}$ instead of 1~km~s$^{-1}$, but all other stellar and orbital parameters were the same, then the quantity and quality of data presented in this paper would permit a $\sim$3$\sigma$ detection of a planet with a radius $\sim$$\sqrt{5}$ times smaller than TrES-2, or $\sim$6 Earth radii. If the transit were equatorial instead of grazing (the best configuration for detecting the effect, although not for assessing spin-orbit alignment), the duration of the transit would be longer by a factor of $\sim$2 and the amplitude of the RM effect would be larger by a factor of $\sim$2, leading to another factor-of-2 improvement in the detectable planet radius ($\sim$3~$R_\oplus$). 
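These scalings can be checked with rough numbers (we assume $R_\star \approx 1\,R_\odot$ and $R_p \approx 1.2\,R_{\rm Jup}$; the approximate unit conversions are ours):

```python
import math

R_JUP_IN_EARTH = 11.2   # Jupiter radius in Earth radii (approx.)
R_SUN_IN_EARTH = 109.1  # solar radius in Earth radii (approx.)

rp_earth = 1.2 * R_JUP_IN_EARTH         # planet radius, ~13.4 Earth radii

# A 5x faster rotator gives the same RM amplitude for a planet
# sqrt(5) times smaller in radius:
r_detect = rp_earth / math.sqrt(5.0)    # ~6 Earth radii

# Per the text, an equatorial transit roughly doubles both the duration
# and the amplitude, buying another factor of 2 in detectable radius:
r_detect_eq = r_detect / 2.0            # ~3 Earth radii

# Photometric depth of such a planet around a solar-size star:
depth = (r_detect_eq / R_SUN_IN_EARTH) ** 2
print(round(r_detect, 1), round(r_detect_eq, 1), depth)  # ~6.0, ~3.0, ~8e-4
```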
Such a planet would produce a photometric transit depth of only $8\times 10^{-4}$, which is smaller than the transit depth of any known transiting planet. \acknowledgments We thank G.~Marcy for advice and encouragement, and D.~Charbonneau for helpful conversations. We are grateful for support from the NASA Keck PI Data Analysis Fund (JPL 1326712). We recognize and acknowledge the very significant cultural role and reverence that the summit of Mauna Kea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. Access to the Keck telescopes for this project was through the Telescope System Instrumentation Program, and was supported by AURA through the National Science Foundation under AURA Cooperative Agreement AST 0132798 as amended.
2110.04176
\section{Introduction} \label{sec:intro} \IEEEPARstart{R}{ecent} state-of-the-art convolutional models have achieved astonishing results in various fields of application by scaling up the overall number of parameters \cite{karras2020analyzing, dascoli2021convit, dosovitskiy2021image, Real2019ImgClass}. Simultaneously, applications of hypercomplex algebra are gaining increasing attention in diverse spheres of research such as signal processing \cite{NAVARROMORENO2021108022, NAVARROMORENO202010100, Sanei2018ICASSP, XIANG2018193} or deep learning \cite{Kamayashi2021TNNLS, Lin2021TNNLS, Liu2021TNNLS, Valle2018TNNLS, Liu2018TNNLS, VALLE2020136, DECASTRO202054, PaulTNNLS2015, Hirose2014SSTNNLS}. Indeed, hypercomplex and quaternion neural networks (QNNs) have been shown to significantly reduce the number of parameters while still obtaining comparable performance \cite{Muppidi2021ICASSP, ParcolletICLR2019, GrassucciQGAN2021, Tay2019QTRansformer, Cariow2021Oct, WU2020179, VALLE2021111}. These models exploit properties of hypercomplex algebras, including the Hamilton product, to carefully design the interactions among the imaginary units, thus requiring only $1/4$ or $1/8$ of the free parameters of real-valued models. Furthermore, thanks to these modelled interactions, hypercomplex networks capture internal latent relations within multidimensional inputs and preserve pre-existing correlations among input dimensions \cite{Chen2021QFM, GrassucciICASSP2021, Grassucci2021Entropy, Gai2021TCS, Vieira2020IJCNN}. Therefore, the quaternion domain is particularly appropriate for processing $3$D or $4$D data, such as color images or (up to) $4$-channel signals \cite{Took2019ICASSP}, while the octonion domain is suitable for $8$D inputs. Unfortunately, most common color image datasets contain RGB images, and some tricks are required to process this data type with QNNs.
Among them, the most common workarounds are padding the input with a zero channel so that the image fills the four quaternion components, or remodelling the QNN layer with the help of vector maps \cite{Gaudet2021RemDim}. Additionally, while quaternion neural operations are widespread and easy to integrate into pre-existing models, very few attempts have been made to extend models to algebras of different orders. Accordingly, developing hypercomplex convolutional models for higher-dimensional inputs, such as magnitudes and phases of multichannel audio signals or $16$-band satellite images, remains cumbersome. Moreover, despite their significantly lower number of parameters, these models are often slightly slower than real-valued baselines \cite{hoffmann2020algebranets}, and ad-hoc algorithms may be necessary to improve efficiency \cite{Cariow2021Quat, Cariow2021Oct}. Recently, a novel branch of the literature aims at compressing neural networks by leveraging Kronecker product decomposition \cite{Huang2020StochasticNN, Tang2021SKFAC}, gaining considerable results in terms of model efficiency \cite{Wang2021KroneckerCD}. Lately, a parameterization of hypercomplex multiplications has been proposed to generalize hypercomplex fully connected layers as sums of Kronecker products \cite{Zhang2021PHM}. The latter method obtains high performance in various natural language processing tasks while also reducing the overall number of parameters. Other works have extended this approach to graph neural networks \cite{le2021parameterized} and transfer learning \cite{mahabadi2021compacter}, proving the effectiveness of Kronecker product decomposition for hypercomplex operations. However, no such solution exists yet for convolutional layers, which remain the most employed layers when dealing with multidimensional inputs such as images and audio signals \cite{Wu2021CvTIC, Hersheyicassp2017}.
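For reference, the Hamilton product underlying the quaternion operations mentioned above can be sketched as follows (a minimal illustration of the standard quaternion multiplication rules; the function name and array convention are our own):

```python
import numpy as np

def hamilton_product(p, q):
    """Hamilton product of quaternions p = a + bi + cj + dk and
    q = e + fi + gj + hk, each given as a length-4 array (a, b, c, d)."""
    a, b, c, d = p
    e, f, g, h = q
    return np.array([
        a*e - b*f - c*g - d*h,   # real part
        a*f + b*e + c*h - d*g,   # i
        a*g - b*h + c*e + d*f,   # j
        a*h + b*g - c*f + d*e,   # k
    ])

i = np.array([0.0, 1.0, 0.0, 0.0])
j = np.array([0.0, 0.0, 1.0, 0.0])
print(hamilton_product(i, j))  # [0. 0. 0. 1.]  (i*j = k)
print(hamilton_product(j, i))  # [ 0.  0.  0. -1.]  (j*i = -k: non-commutative)
```

Quaternion layers organize their weights by this same rule, so each block of four output channels reuses the same four real weights in fixed sign patterns, which is the source of the $1/4$ parameter saving mentioned above.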
In this paper, we devise the family of parameterized hypercomplex neural networks (PHNNs), which are lightweight large-scale hypercomplex neural models admitting any multidimensional input, regardless of the number of dimensions. At the core of this novel set of models, we propose the parameterized hypercomplex convolutional (PHC) layer. Our method is flexible enough to operate in domains from $1$D to $n$D, where $n$ can be arbitrarily chosen by the user or tuned so that the model performance identifies the most appropriate domain for the given input data. Such malleability stems from the ability of the proposed approach to subsume the algebra rules governing the convolution, regardless of whether such rules are predefined or not. Thus, neural models endowed with our approach use $1/n$ of the free parameters of their real-valued counterparts, and the amount of parameter reduction is a user choice. This makes PHNNs suitable for a plethora of applications in which saving storage memory can be a crucial aspect. Additionally, the versatility of PHNNs allows processing multidimensional data in its natural domain by simply setting the dimensionality hyperparameter $n$. For instance, color images can be analyzed in their RGB domain by setting $n=3$ without adding any useless information, contrary to the standard processing of quaternion networks with a padded zero channel. Indeed, PHC layers are able to grasp the proper algebra from input data, while capturing internal correlations among the image channels and saving $66\%$ of the free parameters. In a thorough empirical evaluation on multiple benchmarks, we demonstrate the flexibility of our method, which can be adopted in different domains of application, from images to audio signals. We devise a set of PHNNs for large-scale image classification and sound event detection tasks, letting them operate in different hypercomplex domains and with various input dimensionalities, with $n$ ranging from $2$ to $16$.
The contribution of this paper is three-fold. \begin{itemize} \item We introduce a parameterized hypercomplex convolutional (PHC) layer which grasps the convolution rules directly from data via backpropagation by exploiting the properties of the Kronecker product, thus reducing the number of free parameters to $1/n$. \item We devise the family of parameterized hypercomplex neural networks (PHNNs), lightweight and more efficient large-scale hypercomplex models. Thanks to the proposed PHC layer and to the method in \cite{Zhang2021PHM} for fully connected layers, PHNNs can be employed with any kind of input and any pre-existing neural model. To show the latter, we redefine common ResNets, VGGs and Sound Event Detection networks (SEDnets), operating in any user-defined domain just by choosing the hyperparameter $n$, which also drives the number of convolutional filters. \item We show how the proposed approach can be employed with any kind of multidimensional data by easily changing the hyperparameter $n$. Indeed, by setting $n=3$ a PHNN can process RGB images in their natural domain, while leveraging the properties of hypercomplex algebras, allowing parameter sharing inside the layers and leading to a parameter reduction to $1/3$. To the best of our knowledge, this is the first approach that processes color images with hypercomplex-based neural models without adding any padding channel. Likewise, multichannel audio signals can be analysed by simply setting $n=4$ for a standard first-order Ambisonics signal (which has $4$ microphone capsules), $n=8$ for an array of two Ambisonics microphones, or even $n=16$ if we want to include the phase information of each channel. \end{itemize} The rest of the paper is organized as follows. In Section \ref{subsec:real_layers}, we introduce concepts of hypercomplex algebra and we recapitulate real and quaternion-valued convolutional layers. Section \ref{sec:phc} rigorously introduces the theoretical aspects of the proposed method.
Sections \ref{sec:phc_rgb} and \ref{sec:phc_audio} show how the approach can be adopted in different neural models and in two different domains, namely images and audio, expounding how to process RGB images with $n=3$ and multichannel audio with $n$ up to $8$. The experimental evaluation is presented in Section \ref{sec:img_class} for image classification and in Section \ref{sec:sed} for sound event detection. Finally, Section \ref{sec:abl} reports the ablation studies we conduct, and Section \ref{sec:conc} draws the conclusions. \section{Hypercomplex Neural Networks} \label{subsec:real_layers} \subsection{Hypercomplex Algebra} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Figures/octonions_mult_table.pdf} \caption{Example of a hypercomplex multiplication table for $n=2$, i.e., complex numbers, among others (green line), $n=4$, i.e., quaternions, tessarines, and so on (blue line), and $n=8$, i.e., octonions, bi-quaternions, and so on (red line). While algebra rules exist and are predefined for these domains, no rules are set for other domains such as $n=3,5,6,7$ (dashed grey lines). The parameterized hypercomplex approaches are able to learn these missing algebra rules from data, thus defining hypercomplex multiplication and convolution for any desired domain.} \label{fig:hprod} \end{figure} Hypercomplex neural networks rely on a hypercomplex number system based on the set of hypercomplex numbers $\bH$ and the corresponding algebra rules that shape additions and multiplications \cite{VALLE2021111}. These operations have to be carefully modelled due to the interactions among the imaginary units, which may not behave as real-valued numbers. For instance, Figure~\ref{fig:hprod} reports an example of a multiplication table for complex (green), quaternion (blue) and octonion (red) numbers. However, this is just a small subset of the hypercomplex domains that exist.
Indeed, for $n=4$ there exist quaternions, tessarines, among others, while for $n=8$ octonions, dual-quaternions, and so on. Each of these domains has different multiplication rules due to dissimilar interactions among the imaginary units. A generic hypercomplex number is defined as \begin{equation} h = h_0 + h_1 \ii_1 + \ldots + h_n \ii_n, \label{eq:hyp_num} \end{equation} \noindent where $h_0, \ldots, h_n \in \bR$ and $\ii_1, \ldots, \ii_n$ are the imaginary units. Different subsets of the hypercomplex domain exist, including the complex, quaternion, and octonion ones, among others. They are identified by the number of imaginary units they employ and by the properties of their vector multiplication. The quaternion domain is one of the most popular for neural networks thanks to the properties of the Hamilton product. This domain has its foundations in the quaternion number $q = q_0 + q_1 \ii + q_2 \ij + q_3 \ik$, in which $q_c, \; c \in \{0,1,2,3\}$ are real coefficients and $\ii, \ij, \ik$ the imaginary units. A quaternion whose real part $q_0$ is equal to $0$ is named \textit{pure quaternion}. The imaginary units comply with the property $\ii^2 = \ij^2 = \ik^2 = -1$ and with the non-commutative products $\ii \ij = - \ij \ii ; \; \ij \ik = - \ik \ij ; \; \ik \ii = - \ii \ik$. Due to the non-commutativity of vector multiplication, the Hamilton product has been introduced to properly model the multiplication between two quaternions. \subsection{Real and Quaternion-Valued Convolutional Layers} A generic convolutional layer can be described by \begin{equation} \y = \text{Conv}(\x) = \W * \x + \mathbf{b}, \label{eq:conv} \end{equation} where the input $\x \in \bR^{t \times s}$ is convolved ($*$) with the filter tensor $\W \in \bR^{s \times d \times k \times k}$ to produce the output $\y \in \bR^{d \times t}$, where $s$ is the number of input channels, $d$ the number of output ones, $k$ the filter size, and $t$ the input and output dimension.
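To make these rules concrete, the Hamilton product can be written out component-wise. The snippet below is an illustrative sketch (not part of the paper's implementation) that encodes a quaternion as a $4$-vector of real coefficients and checks the non-commutative identities above:

```python
import numpy as np

def hamilton(q, p):
    """Hamilton product of quaternions given as (q0, q1, q2, q3) coefficient arrays."""
    q0, q1, q2, q3 = q
    p0, p1, p2, p3 = p
    return np.array([
        q0*p0 - q1*p1 - q2*p2 - q3*p3,  # real part
        q0*p1 + q1*p0 + q2*p3 - q3*p2,  # i component
        q0*p2 - q1*p3 + q2*p0 + q3*p1,  # j component
        q0*p3 + q1*p2 - q2*p1 + q3*p0,  # k component
    ])

i = np.array([0., 1., 0., 0.])
j = np.array([0., 0., 1., 0.])
k = np.array([0., 0., 0., 1.])

assert np.allclose(hamilton(i, j), k)                       # ij = k
assert np.allclose(hamilton(j, i), -k)                      # ji = -k (non-commutative)
assert np.allclose(hamilton(i, i), [-1., 0., 0., 0.])       # i^2 = -1
```

The same coefficient bookkeeping is what the block-matrix form of the quaternion convolution reproduces at the level of filter tensors.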
The bias term $\mathbf{b}$ does not heavily influence the number of parameters, thus the degrees of freedom for this operation are essentially $\mathcal{O}(sdk^2)$. Quaternion convolutional layers, instead, build the weight tensor $\W \in \bR^{s \times d \times k \times k}$ by following the Hamilton product rule and organize the filters accordingly: \begin{equation} \W * \x = \begin{bmatrix} \W_0 & -\W_1 & -\W_2 & -\W_3 \\ \W_1 & \W_0 & -\W_3 & \W_2 \\ \W_2 & \W_3 & \W_0 & -\W_1 \\ \W_3 & -\W_2 & \W_1 & \W_0 \end{bmatrix} * \begin{bmatrix} \x_0 \\ \x_1 \\ \x_2 \\ \x_3 \end{bmatrix} \label{eq:qprod} \end{equation} where $\W_0, \W_1, \W_2, \W_3 \in \bR^{\frac{s}{4} \times \frac{d}{4} \times k \times k}$ are the real coefficients of the quaternion weight matrix $\W = \W_0 + \W_1 \ii + \W_2 \ij + \W_3 \ik$ and $\x_0, \x_1, \x_2, \x_3$ are the coefficients of the quaternion input $\x$ with the same structure. As done for real-valued layers, the bias can be ignored, and the degrees of freedom of the quaternion convolutional layer can be approximated as $\mathcal{O}(sdk^2/4)$. The lower number of parameters with respect to the real-valued operation is due to the reuse of filters performed by the Hamilton product in Eq.~\ref{eq:qprod}. Also, sharing the parameter submatrices forces the layer to consider and exploit the correlations between the input components \cite{ParcolletAIR2019, Tay2019QTRansformer, GaudetIJCNN2018}.
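The block arrangement of Eq.~\ref{eq:qprod} can be reproduced in a few lines. This sketch (with hypothetical channel sizes) builds the full weight tensor from the four sub-tensors and verifies that only $1/4$ of its entries are free parameters:

```python
import numpy as np

s, d, k = 8, 8, 3   # hypothetical channel and kernel sizes (s, d divisible by 4)
rng = np.random.default_rng(0)
# the four real coefficient sub-tensors of the quaternion weight
W0, W1, W2, W3 = (rng.standard_normal((s // 4, d // 4, k, k)) for _ in range(4))

# rows of the Hamilton-product block matrix (Eq. qprod)
blocks = [(W0, -W1, -W2, -W3),
          (W1,  W0, -W3,  W2),
          (W2,  W3,  W0, -W1),
          (W3, -W2,  W1,  W0)]
# stack blocks along the output-channel axis, then rows along the input-channel axis
W = np.concatenate([np.concatenate(row, axis=1) for row in blocks], axis=0)

assert W.shape == (s, d, k, k)
# only 4 sub-tensors are stored instead of 16 independent blocks
free_params = 4 * W0.size
assert free_params == s * d * k * k // 4
```

Every one of the $16$ channel blocks of `W` is a signed copy of one of the four stored sub-tensors, which is exactly where the $\mathcal{O}(sdk^2/4)$ count comes from.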
\section{Parameterizing Hypercomplex Convolutions} \label{sec:phc} \begin{figure*}[t] \begin{center} \includegraphics[width=0.9\textwidth]{Figures/Hamilton_prod.pdf} \end{center} \caption{The quaternion convolution rule can be expressed as a sum of Kronecker products between the matrices $\mathbf{A}_i$ that subsume the algebra rules and the matrices $\mathbf{F}_i$ that contain the convolution filters, with $i=1,2,3,4$. In this example, the parameters of $\mathbf{A}_i$ are fixed for visualization purposes, but in PHC layers they are learnable parameters.} \label{fig:ham_prod} \end{figure*} In the following, we delineate the formulation of the proposed parameterized hypercomplex convolutional (PHC) layer. We also show that this approach is capable of learning the Hamilton product rule when two quaternions are convolved. \subsection{Parameterized Hypercomplex Convolutional Layers} \label{subsec:phc_layers} The PHC layer is based on the construction, by a sum of Kronecker products, of the weight tensor $\mathbf{H}$ which encapsulates and organizes the filters of the convolution. The proposed method is formally defined as: \begin{equation} \y = \text{PHC}(\x) = \mathbf{H}*\x + \mathbf{b}, \end{equation} \noindent where $\mathbf{H} \in \bR^{s \times d \times k \times k}$ is built as a sum of Kronecker products between two learnable groups of matrices. Here, $s$ is the input dimensionality of the layer, $d$ is the output one, and $k$ is the filter size. More concretely, \begin{equation} \mathbf{H} = \sum_{i=1}^n \mathbf{A}_i \otimes \mathbf{F}_i, \end{equation} \noindent in which $\mathbf{A}_i \in \bR^{n \times n}$, with $i=1, \ldots, n$, are the matrices that describe the algebra rules and $\mathbf{F}_i \in \bR^{\frac{s}{n} \times \frac{d}{n} \times k \times k}$ represents the $i$-th batch of filters, which are arranged by following the algebra rules to compose the final weight matrix.
It is worth noting that $\frac{s}{n} \times \frac{d}{n} \times k \times k$ holds for square kernels, while $\frac{s}{n} \times \frac{d}{n} \times k$ should be considered instead for $1$D kernels. The core element of this approach is the Kronecker product \cite{KroneckerBook}, which is a generalization of the vector outer product and can be parameterized by $n$. The hyperparameter $n$ can be set by the user who wants to operate in a predefined real or hypercomplex domain (e.g., by setting $n=2$ the PHC layer is defined in the complex domain, or in the quaternion one if $n$ is set equal to $4$, as Figure \ref{fig:ham_prod} illustrates), or tuned to obtain the best performance from the model. The matrices $\mathbf{A}_i$ and $\mathbf{F}_i$ are learnt during training and their values are reused to build the definitive tensor $\mathbf{H}$. The degrees of freedom of $\mathbf{A}_i$ and $\mathbf{F}_i$ are $n^3$ and $sdk^2/n$, respectively. Usually, real-world applications employ a large number of filters in each layer ($s, d = 256, 512, \ldots$) and small values of $k$. Therefore, $sdk^2 \gg n^3$ frequently holds, and the degrees of freedom of the PHC weight matrix can be approximated as $\mathcal{O}(sdk^2/n)$. Hence, the PHC layer reduces the number of parameters to $1/n$ with respect to a standard convolutional layer in real-world problems. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{Figures/toy_examples.pdf} \end{center} \caption{Loss plots for the toy examples. The PHC layer is able to learn the matrix $\mathbf{A}$ describing the convolution rule for pure (left) and full quaternions (right).} \label{fig:toy} \end{figure} Moreover, when processing multidimensional data with correlated channels, such as color images, multichannel audio or multisensor signals, PHC layers bring benefits due to the weight sharing among different channels.
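The construction of $\mathbf{H}$ as a sum of Kronecker products over the channel axes can be sketched as follows (an illustrative NumPy snippet with hypothetical sizes; in the actual layer both $\mathbf{A}_i$ and $\mathbf{F}_i$ are learnt by backpropagation):

```python
import numpy as np

def phc_weight(A, F):
    """Build H = sum_i A_i kron F_i over the channel axes.

    A: (n, n, n) stacked algebra matrices; F: (n, s/n, d/n, k, k) filter batches.
    Returns H with shape (s, d, k, k).
    """
    n, sn, dn, k = A.shape[0], F.shape[1], F.shape[2], F.shape[3]
    # channel block (p, q) of H equals sum_i A[i, p, q] * F[i]
    H = np.einsum('ipq,iabkl->paqbkl', A, F)
    return H.reshape(n * sn, n * dn, k, k)

n, s, d, k = 4, 8, 8, 3                              # hypothetical layer configuration
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n, n))                   # learnable algebra matrices
F = rng.standard_normal((n, s // n, d // n, k, k))   # learnable filter batches

H = phc_weight(A, F)
assert H.shape == (s, d, k, k)
# free parameters: n^3 for A plus s*d*k^2/n for F, instead of s*d*k^2
assert A.size + F.size == n**3 + s * d * k * k // n
```

Note that `H` keeps the full $s \times d \times k \times k$ shape of a standard convolution weight, so it can be handed to any existing convolution routine; only the number of stored parameters shrinks.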
This allows capturing latent intra-channel relations that standard convolutional networks ignore because of the rigid structure of their weights \cite{GrassucciQGAN2021, ParcolletICASSP2019a}. The PHC layer is able to subsume hypercomplex convolution rules, and the desired domain is specified by the hyperparameter $n$. Interestingly, by setting $n=1$ a real-valued convolutional layer can be represented too. Indeed, standard real layers do not involve parameter sharing, therefore the algebra rules are solely described by the single scalar $\mathbf{A} \in \bR^{1\times1}$ and the complete set of filters is included in $\mathbf{F} \in \bR^{s \times d \times k \times k}$. Therefore, the PHC layer fills the gaps left by pre-existing hypercomplex algebras in Fig.~\ref{fig:hprod} and subsumes the missing algebra rules directly from data, i.e., the dashed grey lines in Fig.~\ref{fig:hprod}. Thus, a neural model equipped with PHC layers can grasp the filter organization also for $n=3,5,6,7$, and so on. Moreover, any convolutional model can be endowed with our approach, since PHC layers easily replace standard convolution/transposed convolution operations, and the hyperparameter $n$ gives high flexibility to adapt the layer to any kind of input, such as color images, multichannel audio or multisensor signals. \subsection{Learning Tests on Toy Examples} We test the learning capability of the PHC layer in two toy problems built on an artificial dataset. We highly encourage the reader to take a look at the \texttt{tutorials} section of the GitHub repository \url{https://github.com/eleGAN23/HyperNets} for more insights and results on the toy examples, including the learned matrices $\mathbf{A}_i$. The first task aims at learning the right matrix $\mathbf{A}$ to build a quaternion convolutional layer which properly follows the Hamilton rule in Eq.~\ref{eq:qprod}.
That is, we set $n=4$ and the objective is to learn the four matrices $\mathbf{A}_i$ as they appear in the quaternion product in Fig.~\ref{fig:ham_prod}. We build the dataset by performing a convolution between a matrix of filters $\W \in \bH$, arranged following the rule in Eq.~\ref{eq:qprod}, and a quaternion input $\x \in \bH$. The target is still a quaternion, named $\mathbf{y} \in \bH$. As shown in Fig. \ref{fig:toy} (right), the MSE loss of the PHC layer converges very fast, meaning that the layer properly learns the matrix $\mathbf{A}$ and the Hamilton convolution. The second toy example is a modification of the previous dataset target. Here, we want to learn the matrix $\mathbf{A}$ which describes the convolution between two pure quaternions. Therefore, when setting $n=4$, the matrix $\mathbf{A}_1$ of a pure quaternion should be completely null. Pure quaternions may represent, for instance, an RGB input image and the weights of a hypercomplex convolutional layer, since the zero-padded first channel of an encapsulated RGB image is null. Figure \ref{fig:toy} (left) displays the convergence of the PHC layer loss during training, proving that the proposed method is capable of subsuming hypercomplex convolution rules when dealing with pure quaternions too.
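A minimal version of this toy setting can be reproduced with plain gradient descent on NumPy arrays (an illustrative sketch, not the authors' released code): a target tensor is built from known algebra matrices with fixed filters, and the learnable $\mathbf{A}_i$ are recovered by minimizing the squared error, mirroring how the PHC layer subsumes a convolution rule from data:

```python
import numpy as np

n, s, d, k = 4, 8, 8, 3                              # hypothetical sizes
rng = np.random.default_rng(1)
F = rng.standard_normal((n, s // n, d // n, k, k))   # fixed, known filters
A_true = rng.standard_normal((n, n, n))              # ground-truth algebra rules

def build(A):
    """Blocked form of H = sum_i A_i kron F_i (reshape omitted, irrelevant to the loss)."""
    return np.einsum('ipq,iabkl->paqbkl', A, F)

H_true = build(A_true)

A = np.zeros((n, n, n))                              # learnable algebra matrices
lr = 0.002
for _ in range(5000):
    R = build(A) - H_true                            # residual
    grad = 2.0 * np.einsum('paqbkl,iabkl->ipq', R, F)  # dL/dA for L = ||R||^2
    A -= lr * grad

# the algebra rules are recovered from data
assert np.max(np.abs(A - A_true)) < 1e-3
```

Since the loss is quadratic in $\mathbf{A}$, gradient descent converges to the generating rule; replacing `A_true` with fixed Hamilton-product matrices reproduces the first toy task above.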
\begin{strip} \begin{gather} \label{eq:visualphc} \begin{array}{c} \mathop{\left[ \, A \, \right]}\limits_{(1 \times 1)} \otimes \mathop{\left[ \, \mathbf{F} \, \right]}\limits_{(s \times d \times k \times k)} = \mathop{\left[ \, \mathbf{H} \, \right]}\limits_{(s \times d \times k \times k)} \\[8pt] \mathop{\left[ \, \mathbf{A}_1 \, \right]}\limits_{(2 \times 2)} \otimes \mathop{\left[ \, \mathbf{F}_1 \, \right]}\limits_{\left( \frac{s}{2} \times \frac{d}{2} \times k \times k \right)} + \mathop{\left[ \, \mathbf{A}_2 \, \right]}\limits_{(2 \times 2)} \otimes \mathop{\left[ \, \mathbf{F}_2 \, \right]}\limits_{\left( \frac{s}{2} \times \frac{d}{2} \times k \times k \right)} = \mathop{\left[ \, \mathbf{H} \, \right]}\limits_{(s \times d \times k \times k)} \\[8pt] \vdots \\[8pt] \mathop{\left[ \, \mathbf{A}_1 \, \right]}\limits_{(n \times n)} \otimes \mathop{\left[ \, \mathbf{F}_1 \, \right]}\limits_{\left( \frac{s}{n} \times \frac{d}{n} \times k \times k \right)} + \ldots + \mathop{\left[ \, \mathbf{A}_n \, \right]}\limits_{(n \times n)} \otimes \mathop{\left[ \, \mathbf{F}_n \, \right]}\limits_{\left( \frac{s}{n} \times \frac{d}{n} \times k \times k \right)} = \mathop{\left[ \, \mathbf{H} \, \right]}\limits_{(s \times d \times k \times k)}. \end{array} \end{gather} \end{strip} \subsection{Demystifying Parameterized Hypercomplex Convolutional Layers} \label{subsec:dem_phc} We provide a formal explanation of the PHC layer to better understand the Kronecker product and how it organizes the convolution filters to reduce the overall number of parameters to $1/n$. In Eq.~\ref{eq:visualphc}, we show how the PHC layer generalizes from $1$D to $n$D domains. When subsuming real-valued convolutions in the first line of Eq.~\ref{eq:visualphc}, the Kronecker product is performed between a scalar $A$ and the filter matrix $\mathbf{F}$, whose dimension is the same as the final weight matrix $\mathbf{H}$, which is $s \times d \times k \times k$. Considering the complex case with $n=2$ in the second line of Eq.~\ref{eq:visualphc}, the algebra is defined in $\mathbf{A}_1$ and $\mathbf{A}_2$ while the filters are contained in $\mathbf{F}_1$ and $\mathbf{F}_2$, each of dimension $1/2$ that of the final matrix $\mathbf{H}$. Therefore, while the size of the weight matrix $\mathbf{H}$ remains unchanged, the parameter size is approximately $1/2$ the real one. In the last line of Eq.~\ref{eq:visualphc}, we can see the generalization of this process, in which the size of the matrices $\mathbf{F}_i$, $i=1, \ldots, n$, is reduced proportionally to $n$. It is worth noting that, while the parameter size is reduced with growing values of $n$, the dimension of $\mathbf{H}$ remains the same. \section{Parameterized Hypercomplex Neural Networks for Color Images} \label{sec:phc_rgb} In this section, we describe how PHNNs can be applied for processing color images in hypercomplex domains without appending any additional information to the input, and we propose examples of parameterized hypercomplex versions of common computer vision models such as VGGs and ResNets.
In order to be consistent with the literature, we perform each experiment with a real-valued baseline model, then we compare it with its complex and quaternion counterparts and with the proposed PHNN. Furthermore, we assess the malleability of the proposed approach by testing different values of the hyperparameter $n$, therefore defining parameterized hypercomplex models in multiple domains. \subsection{Process Color Images with PHC Layers} Different encodings exist for processing color images; however, the most common computer vision datasets are comprised of three-channel images in $\bR^3$. In the quaternion domain, RGB images are enclosed into a quaternion and processed as single elements \cite{ParcolletAIR2019}. The encapsulation is performed by considering the RGB channels as the real coefficients of the imaginary units and by padding a zero channel as the first real component of the quaternion. Here, we propose to leverage the high malleability of PHC layers to deal with RGB images in hypercomplex domains without embedding useless information into the input. Indeed, the PHC layer can directly operate in $\bR^3$ by simply setting $n=3$ and process RGB images in their natural domain while exploiting hypercomplex network properties such as parameter sharing. Indeed, the great flexibility of PHC layers allows the user to choose whether to process images in $\bR^4$ or $\bR^3$. On the one hand, by setting $n=4$, the zero channel is added to the input, yet the layer saves $75\%$ of the free parameters. On the other hand, by choosing $n=3$ the network does not handle any useless information, although it reduces the number of parameters by only $66\%$. This is a trade-off which may depend on the application or on the hardware available to the user. Furthermore, the domain in which images are processed can be tuned by letting the performance of the network indicate the best choice for $n$.
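The trade-off between the two encodings can be made concrete with a quick parameter count (hypothetical layer sizes, using the $\mathcal{O}(sdk^2/n)$ approximation from Section \ref{sec:phc}):

```python
# Approximate free parameters of a (PH)convolutional layer: s * d * k^2 / n
def phc_params(s, d, k, n):
    return s * d * k * k // n

s, d, k = 240, 240, 3          # hypothetical channel/kernel sizes divisible by 3 and 4
real = phc_params(s, d, k, 1)      # standard real-valued convolution
rgb_n3 = phc_params(s, d, k, 3)    # RGB processed natively, no padded channel
quat_n4 = phc_params(s, d, k, 4)   # quaternion-style, zero channel padded to the input

assert rgb_n3 * 3 == real      # parameters reduced to 1/3 (about 66% saving)
assert quat_n4 * 4 == real     # parameters reduced to 1/4 (75% saving)
```

The $n=4$ configuration stores fewer parameters, but every layer also carries the extra zero channel through the network; $n=3$ keeps the input untouched at a slightly smaller saving.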
\subsection{Parameterized Hypercomplex VGGs} A family of popular methods for image processing is based on the VGG networks \cite{VGG2015}, which stack several convolutional layers and a closing fully connected classifier. To completely define the models in the desired hypercomplex domain, we propose to endow the network with PHC layers as convolutional components and with parameterized hypercomplex multiplication (PHM) layers \cite{Zhang2021PHM} as the linear classifier. The backbone of our PHVGG is then \begin{equation} \begin{split} \mathbf{h}_t &= \text{ReLU} \left( \text{PHC}_t \left( \mathbf{h}_{t-1} \right) \right) \qquad t=1,...,j \\ \y &= \text{ReLU} \left( \text{PHM}(\mathbf{h}_j) \right). \end{split} \end{equation} \subsection{Parameterized Hypercomplex ResNets} In recent literature, a copious set of high-performance results in image classification is obtained with models having a residual structure. ResNets \cite{Resnet2016} stack multiple residual blocks composed of convolutional layers and identity mappings. A generic PHResNet residual block is defined by \begin{equation} \y = \mathcal{F}(\x, \{ \mathbf{H}_j \}) + \x, \end{equation} where $\mathbf{H}_j$ are the PHC weights of layer $j = 1, 2$ in the block, and $\mathcal{F}$ is \begin{equation} \mathcal{F}(\x, \{ \mathbf{H}_j \}) = \text{PHC} \left( \text{ReLU} \left( \text{PHC}(\x ) \right) \right), \end{equation} \noindent in which we omit batch normalization to simplify the notation. The backward phase of a PHNN reduces to a backpropagation similar to that of quaternion neural networks, which has already been developed in \cite{NittaQBack1995, ParcolletAIR2019, ParcolletICLR2019}. \section{Parameterized Hypercomplex Neural Networks for Multichannel Signals} \label{sec:phc_audio} In the following, we expound how PHNNs can be employed to deal with multichannel audio signals, and we introduce, as an example, the parameterized hypercomplex Sound Event Detection networks (PHSEDnets).
\subsection{Process Multichannel Audio with PHC Layers} A first-order Ambisonics (FOA) signal is composed of $4$ microphone capsules, whose magnitude representations can be enclosed in a quaternion \cite{ComminielloICASSP2019a, RicciardiMLSP2020}. However, the quaternion algebra may be restrictive if more than one microphone is employed for the recording, or whenever the phase information has to be included too. Indeed, quaternion neural networks are ill-suited to multidimensional inputs with more than $4$ channels \cite{Grassucci2022DualQ}. Conversely, the proposed method can be easily adapted to deal with these additional dimensions by simply setting the hyperparameter $n$, thus fully leveraging all the information in the $n$-dimensional input. \subsection{Parameterized Hypercomplex SEDnets} Sound Event Detection networks (SEDnets) \cite{Adavanne2019SoundEL} are comprised of a core convolutional component which extracts features from the input spectrogram. The information is then passed to a gated recurrent unit (GRU) module and to a stack of fully connected (FC) layers with a closing sigmoid $\sigma$, which outputs the probability that the sound is present in the audio frame. Formally, the PHSEDnet is described by \begin{equation} \begin{split} \mathbf{h}_t &= \text{PHC}_t(\mathbf{h}_{t-1}) \qquad t=1,...,j\\ \y &= \sigma \left( \text{FC} \left( \text{GRU} \left( \mathbf{h}_j \right) \right) \right). \end{split} \end{equation} After the GRU module, we employ standard fully connected layers, which can also be implemented as PHM layers with $n=1$, since the signal processed up to this point loses its original multidimensional structure. \section{Experimental Evaluation on Image Classification} \label{sec:img_class} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Figures/bubbleplot_cifar10_vgg_legend_v2.pdf} \caption{CIFAR10 accuracy against the number of network parameters for VGG and ResNet models. The larger the point, the higher the standard deviation over the runs.
PHC-based models obtain better accuracies in both families while greatly reducing the number of parameters. We do not display Complex VGGs as their accuracy is very low with respect to the other models.} \label{fig:bubble} \end{figure} To begin with, we test the PHC layer on RGB images and we show how, by exploiting the correlations among channels, the proposed method saves parameters while ensuring high performance. We perform each experiment with a real-valued baseline model, and then we compare it with its complex and quaternion counterparts and with the proposed PHNNs. Furthermore, we assess the malleability of the proposed approach by testing different values of the hyperparameter $n$, therefore defining parameterized hypercomplex models in multiple domains. \subsection{Experimental Setup} We perform the image classification task with five baseline models. We consider ResNet18, ResNet50 and ResNet152 from the ResNet family, and VGG16 and VGG19 from the VGG one. Each hyperparameter is set according to the original papers \cite{Resnet2016, VGG2015}. We investigate the performance on four color image datasets at different scales. We employ SVHN, CIFAR10, CIFAR100, and ImageNet, and no data augmentation is applied to these datasets in order to guarantee a fair comparison. We modify the number of filters of the ResNets so that it is divisible by $3$, thus allowing us to test a configuration with $n=3$. The modified versions of the ResNets are built with an initial convolutional layer of $60$ filters. Then, the subsequent blocks have $60, 120, 240, 516$ filters. The number of layers in the blocks depends on the ResNet chosen, whether 18, 50 or 152. Instead, the VGG19 convolutional component comprises two layers with $24$ filters, two with $72$, four with $216$, and eight with $648$, with batch normalization. The classifier is composed of three fully connected layers of $648$, $516$, and $10$, $100$ or $1000$ units, depending on the number of classes in the dataset.
The rest of the hyperparameters are set as suggested in the original papers. The batch size is fixed to $128$ and training is performed via the SGD optimizer with momentum equal to $0.9$, weight decay $5e^{-4}$ and a cosine annealing scheduler. For ResNets, the initial learning rate is set to $0.1$, while for VGGs it is equal to $0.01$. Models on CIFAR10 and CIFAR100 are trained for $200$ epochs, whereas on SVHN the networks run for $50$ epochs. For the ImageNet dataset, we follow the recipes in \cite{wightman2021resnet}, so we resize the images for training to $160\times160$ while keeping the standard size of $224\times224$ for validation and test. We employ a step learning rate decay every $30$ epochs with $\gamma = 0.1$, the SGD optimizer and an initial learning rate of $0.1$ with weight decay $0.0001$. The training is performed for $300$k iterations with a batch size of $256$, employing four Tesla V100 GPUs. \subsection{Experimental Results} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Figures/success_barplot_v2.pdf} \caption{Bar plot of the number of successes achieved by the models in Table~\ref{tab:img_class_full} in each of the runs. The PHC-based models with $n=3$ (red bar) far exceed the other configurations, being the best-performing choice for the RGB image classification task.} \label{fig:barplot} \end{figure} \begin{table*}[t] \caption{Image classification results for VGGs. The mean and standard deviation of the accuracy over three runs with different seeds are reported, together with the training (T) and inference (I) times required on CIFAR10. For the training time we report, in seconds per 100 iterations, the mean and the standard deviation over the iterations in one epoch, while the inference time is the time required to decode the test set. The PHNN with $n=4$ outperforms the quaternion counterpart both in terms of accuracy and time.
The PHVGG with $n=2$ far exceeds the real-valued baseline on the considered datasets, while both PHVGG19 versions with $n=2,4$ are more efficient than the real and quaternion-valued baselines at inference time. The $p$-value under the T-test is $0.0002$.} \label{tab:img_class} \begin{center} \begin{tabular}{llcccc} \multicolumn{1}{c}{\bf Model} &\multicolumn{1}{c}{\bf Params} &\multicolumn{1}{c}{\bf SVHN} &\multicolumn{1}{c}{\bf CIFAR10} &\multicolumn{1}{c}{\bf Time (T)} &\multicolumn{1}{c}{\bf Time (I)}\\ \hline \\ VGG16 & 15M & 94.364 $\pm$ 0.394 & 85.067 $\pm$ 0.765 & \textbf{2.2 $\pm$ 0.02} & \textbf{1.2}\\ Complex VGG16 & 7.6M (-50\%) & 93.555 $\pm$ 0.392 & 76.927 $\pm$ 0.511 & 5.2 $\pm$ 0.02 & 1.5\\ Quaternion VGG16 & 3.8M (-75\%) & 93.887 $\pm$ 0.292 & 83.997 $\pm$ 0.493 & 5.2 $\pm$ 0.02 & 2.2\\ PHVGG16 $n=2$ & 7.6M (-50\%) & \textbf{94.831 $\pm$ 0.257} & \textbf{86.510 $\pm$ 0.216} & \underline{3.2 $\pm$ 0.02} & \underline{1.4}\\ PHVGG16 $n=4$ & 3.8M (-75\%) & \underline{94.639 $\pm$ 0.121} & \underline{85.640 $\pm$ 0.205} & \underline{3.2 $\pm$ 0.02} & \underline{1.4}\\ \hline VGG19 & 29.8M & 94.140 $\pm$ 0.129 & \underline{85.624 $\pm$ 0.257} & \textbf{3.2 $\pm$ 0.02} & 16.0 \\ Complex VGG19 & 14.8M (-50\%) & 90.469 $\pm$ 0.222 & 76.979 $\pm$ 0.345 & 5.2 $\pm$ 0.02 & 16.2\\ Quaternion VGG19 & 7.5M (-75\%) & 93.983 $\pm$ 0.190 & 83.914 $\pm$ 0.129 & 6.2 $\pm$ 0.02 & 16.3 \\ PHVGG19 $n=2$ & 14.9M (-50\%) & \textbf{94.553 $\pm$ 0.229} & \textbf{85.750 $\pm$ 0.286} & \underline{4.0 $\pm$ 0.02} & \textbf{15.4} \\ PHVGG19 $n=4$ & 7.4M (-75\%) & \underline{94.169 $\pm$ 0.296} & 84.830 $\pm$ 0.733 & 4.2 $\pm$ 0.02 & \underline{15.5} \\ \end{tabular} \end{center} \end{table*} \begin{table*}[t] \caption{Image classification results with ResNet models. Each experiment is run three times with different seeds and the mean with standard deviation is reported. The proposed models far exceed the real-valued and quaternion baselines in almost every experiment we conduct.
Interestingly, PHNNs outperform the real-valued counterpart by more than $4$ percentage points in the largest-scale experiment on CIFAR100. Timing results are similar to those reported in Table \ref{tab:img_class}, so we do not repeat them here to avoid redundancy.} \label{tab:img_class_full} \begin{center} \begin{tabular}{lllccc} \multicolumn{1}{c}{\bf Model} &\multicolumn{1}{c}{\bf Params} &\multicolumn{1}{c}{\bf Storage Memory} &\multicolumn{1}{c}{\bf SVHN} &\multicolumn{1}{c}{\bf CIFAR10} &\multicolumn{1}{c}{\bf CIFAR100} \\ \hline \\ ResNet18 & 10.1M & 39MB & 93.992 $\pm$ 1.317 & \underline{89.543 $\pm$ 0.340} & \underline{62.634 $\pm$ 0.600} \\ Complex ResNet18 & 5.2M (-50\%) & 20MB (-50\%) & 89.902 $\pm$ 0.322 & 89.541 $\pm$ 0.412 & 60.417 $\pm$ 0.811 \\ Quaternion ResNet18 & 2.8M (-75\%) & 10MB (-75\%) & 93.661 $\pm$ 0.413 & 88.240 $\pm$ 0.377 & 59.850 $\pm$ 0.607 \\ PHResNet18 $n=2$ & 5.4M (-50\%) & 20MB (-50\%) & \textbf{94.359 $\pm$ 0.187} & 89.260 $\pm$ 0.625 & 60.320 $\pm$ 2.249 \\ PHResNet18 $n=3$ & 3.6M (-66\%) & 13MB (-66\%) & \underline{94.303 $\pm$ 1.234} & \textbf{89.603 $\pm$ 0.563} & \textbf{62.660 $\pm$ 1.067} \\ PHResNet18 $n=4$ & 2.7M (-75\%) & 10MB (-75\%) & 94.234 $\pm$ 0.161 & 88.847 $\pm$ 0.874 & 61.780 $\pm$ 0.689 \\ \hline ResNet50 & 22.5M & 86MB & \underline{94.546 $\pm$ 0.269} & 89.630 $\pm$ 0.305 & 65.514 $\pm$ 0.569 \\ Complex ResNet50 & 11.1M (-50\%) & 43MB (-50\%) & 89.004 $\pm$ 0.215 & 89.699 $\pm$ 0.485 & 65.104 $\pm$ 0.598 \\ Quaternion ResNet50 & 5.7M (-75\%) & 22MB (-75\%) & 93.685 $\pm$ 0.389 & 89.670 $\pm$ 0.383 & 63.760 $\pm$ 0.717 \\ PHResNet50 $n=2$ & 11.1M (-50\%) & 43MB (-50\%) & 93.849 $\pm$ 0.249 & \underline{89.750 $\pm$ 0.386} & 65.884 $\pm$ 0.333 \\ PHResNet50 $n=3$ & 7.6M (-66\%) & 29MB (-65\%) & 93.617 $\pm$ 0.497 & \textbf{90.423 $\pm$ 0.145} & \textbf{66.497 $\pm$ 1.256} \\ PHResNet50 $n=4$ & 5.7M (-75\%) & 23MB (-74\%) & \textbf{94.558 $\pm$ 0.754} & 88.897 $\pm$ 0.645 & \underline{66.240 $\pm$ 1.165} \\ \hline ResNet152 & 52.6M & 201MB &
\textbf{94.625 $\pm$ 0.355} & 89.580 $\pm$ 0.173 & 62.053 $\pm$ 0.385 \\ Complex ResNet152 & 26.3M (-50\%) & 101MB (-50\%) & 90.332 $\pm$ 0.129 & 89.792 $\pm$ 0.427 & 63.125 $\pm$ 0.681 \\ Quaternion ResNet152 & 13.2M (-75\%) & 51MB (-75\%) & 93.638 $\pm$ 0.098 & 89.227 $\pm$ 0.287 & 61.267 $\pm$ 0.784 \\ PHResNet152 $n=2$ & 26.6M (-50\%) & 103MB (-49\%) & 93.915 $\pm$ 0.512 & \textbf{90.540 $\pm$ 0.401} & 65.817 $\pm$ 0.327 \\ PHResNet152 $n=3$ & 17.8M (-66\%) & 70MB (-65\%) & 93.955 $\pm$ 0.152 & \underline{90.077 $\pm$ 0.436} & \underline{66.347 $\pm$ 0.567} \\ PHResNet152 $n=4$ & 13.4M (-75\%) & 53MB (-74\%) & \underline{94.290 $\pm$ 0.237} & 89.897 $\pm$ 0.097 & \textbf{66.437 $\pm$ 0.064} \\ \end{tabular} \end{center} \end{table*} We first run experiments with VGGs against Quaternion VGGs and two versions of PHVGGs, with $n$ equal to $2$ and $4$. The average and standard deviation of the accuracy over three runs are reported for the SVHN and CIFAR10 datasets in Table \ref{tab:img_class}. We also perform additional runs, but no significant difference emerges, as the randomness only affects the network initialization. Both the PHVGG16 and PHVGG19 versions clearly outperform their real, complex and quaternion counterparts while being built with at most half the parameters of the baseline. Additionally, PH-based models markedly reduce the training and inference time (computed on an NVIDIA Tesla V100) with respect to the quaternion model, which operates in a hypercomplex domain as well. Furthermore, when scaling up the experiment with VGG19, the proposed methods are also more efficient at inference time than the real-valued VGG19. Therefore, PHNNs can be easily adopted in applications with disk memory limitations, due to the reduction of parameters, and in fast-inference problems thanks to their efficiency at testing time.
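To clarify where the parameter savings come from, the PHC weight construction can be sketched in a few lines of NumPy: the full convolution kernel is assembled as a sum of $n$ Kronecker products between learned $n\times n$ algebra matrices $\mathbf{A}_i$ and reduced-size filter blocks $\mathbf{F}_i$. The function and variable names below are our own illustrative sketch, not the released implementation.

```python
import numpy as np

def phc_kernel(A, filters):
    """Assemble a PHC convolution kernel as a sum of Kronecker products.

    A:       (n, n, n) learned algebra matrices A_i
    filters: (n, out//n, in//n, k, k) learned filter blocks F_i
    Returns the full (out, in, k, k) kernel W = sum_i kron(A_i, F_i),
    so only ~1/n of the real-valued parameters are actually stored.
    """
    W = np.einsum('nab,noikl->aobikl', A, filters)  # sum over the n products
    a, o, b, i, k, _ = W.shape
    return W.reshape(a * o, b * i, k, k)

rng = np.random.default_rng(0)
n, c_out, c_in, k = 4, 8, 8, 3
A = rng.standard_normal((n, n, n))
F = rng.standard_normal((n, c_out // n, c_in // n, k, k))
W = phc_kernel(A, F)            # full kernel of shape (8, 8, 3, 3)
stored = A.size + F.size        # 64 + 144 stored parameters
real = c_out * c_in * k * k     # 576 for the real-valued conv kernel
```

For this toy layer only $208$ parameters are stored against $576$ for the real-valued kernel; for large layers the $n^3$ algebra parameters are negligible, so the stored count is roughly $1/n$ of the real-valued one, consistent with the Params column of the tables.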
Although the sum of Kronecker products in PHC layers requires additional computations, the increase is negligible with respect to the FLOPs computed for the whole network, so the overall FLOP count remains almost unchanged by our method. Our approach is also highly malleable: when dealing with color images, we can choose the domain in which to operate through the hyperparameter $n$. Therefore, we test PHNNs in the complex ($n=2$), quaternion ($n=4$) or $\bH^3$ ($n=3$) domain; in the latter case we do not concatenate any zero padding and process the RGB channels of the image in their natural domain. Table \ref{tab:img_class_full} presents the average and standard deviation of the accuracy over three runs with different seeds for ResNet-based models. Across these extensive experiments, the PH models with $n=4$ always outperform the quaternion counterpart, reaching higher accuracy and being more robust. This underlines the effectiveness of the architectural flexibility of PHC over the predefined, rigid structure of quaternion layers. Furthermore, our method clearly exceeds the corresponding real-valued baselines across the experiments while saving from $50\%$ to $75\%$ of the parameters. Focusing on the latter result, the PHResNets with $n=3$ turn out to be the most suitable choice in many cases, proving the validity of processing RGB images in their natural domain by leveraging hypercomplex algebra. However, the performances with $n=3$ and $n=4$ are comparable, thus the choice of this hyperparameter may depend on the application or on the hardware employed. On the one hand, $n=4$ may sometimes lead to lower performance; nevertheless, it allows saving disk memory, as shown in the third column of Table \ref{tab:img_class_full}, and may thus be more appropriate for edge applications. On the other hand, processing color images with $n=3$ may bring higher accuracy even though it requires more parameters.
Therefore, such flexibility makes PHNNs adaptable to a wide range of applications. Likewise, PHResNets with $n=2$ achieve considerable accuracy with respect to the corresponding real-valued models and, due to their larger number of parameters with respect to the PH model with $n=3$, sometimes outperform it too. Finally, the PHResNet with $n=4$ obtains the overall best accuracy in the largest experiment of this set: considering a ResNet152 backbone on CIFAR100, our method exceeds the real-valued baseline by more than $4$ percentage points. This is empirical proof that PHNNs scale well to large real-world problems while notably reducing the overall number of parameters. These results are summarized for ResNet and VGG models on CIFAR10 in Fig.~\ref{fig:bubble}, which displays model accuracy against the number of parameters. The PH-based models, whether ResNets or VGGs, exceed their real and quaternion-valued baselines while consistently reducing the number of parameters. What is more, in Table \ref{tab:img_class_full} we also report the memory required to store model checkpoints for inference: our method substantially reduces the disk memory demand with respect to the heavier real-valued models. Further, we address the image classification task on the ImageNet dataset. First, we compute the number of successes achieved by the ResNet-based models in each of the runs whose average accuracies are reported in Table~\ref{tab:img_class_full}. As Fig.~\ref{fig:barplot} shows, the largest share of successes is reached by the PHResNet with $n=3$, which has been demonstrated to be the most valuable choice of $n$ when dealing with RGB images. Therefore, we test the PHResNet with $n=3$ against the real-valued counterpart. Table \ref{tab:img_img} shows that the proposed method achieves comparable, and even slightly superior, performance with respect to the real-valued baseline, while involving $66\%$ fewer parameters.
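The nominal reductions in the Params column follow directly from the $1/n$ scaling of the PHC filters. A back-of-the-envelope check makes this concrete; the layer count below is an illustrative assumption, and batch normalization and the classifier, which are not reduced, explain the small deviations from the figures in the tables.

```python
def phc_conv_params(real_conv_params, n, n_layers):
    """Approximate PHC parameter count: filters shrink by a factor n,
    while each layer adds only n**3 algebra parameters."""
    return real_conv_params // n + n_layers * n ** 3

real = 25_700_000   # ResNet50 parameter count from the ImageNet table
layers = 53         # assumed number of conv layers in ResNet50
for n in (2, 3, 4):
    ph = phc_conv_params(real, n, layers)
    print(n, round(1 - ph / real, 2))  # fraction of parameters saved
```

This yields savings of roughly $50\%$, $67\%$ and $75\%$ for $n=2,3,4$, in line with the reductions reported in the Params columns.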
Additionally, in Fig.~\ref{fig:gradcam} we provide Grad-CAM visualizations \cite{GradCAM2017ICCV} for a sample of predictions by our method on the ImageNet dataset, further confirming the correct behavior of the PHResNet50 $n=3$ in this scenario. This proves the robustness of the proposed approach, which can be adopted and implemented in models at different scales. \begin{figure} \centering \includegraphics[width=\linewidth]{Figures/phnn_gradcam.pdf} \caption{Grad-CAM visualization for the PHResNet50 $n=3$ on the ImageNet dataset.} \label{fig:gradcam} \end{figure} \begin{table}[t] \caption{ImageNet classification with the real-valued baseline against our best model, PH $n=3$. Our approach outperforms the baseline while saving $66\%$ of the parameters.} \label{tab:img_img} \begin{center} \begin{tabular}{llc} \multicolumn{1}{c}{\bf Model} & \multicolumn{1}{c}{\bf Params} & \multicolumn{1}{c}{\bf ImageNet}\\ \hline \\ ResNet50 & 25.7M & 67.990 \\ PHResNet50 $n=3$ & 9.6M (-66\%) & \textbf{68.584} \\ \end{tabular} \end{center} \end{table} \section{Experimental Evaluation on Sound Event Detection} \label{sec:sed} Sound event detection (SED) is the task of recognizing the sound classes present in an audio signal and the temporal instants at which these sounds are active \cite{SED2021Mesaros}. We prove that the PHC layer is adaptable to $n$-dimensional input signals and, due to parameter reduction and hypercomplex algebra, performs better in terms of efficiency and evaluation scores. \subsection{Experimental Setup} For sound event detection models, we consider the augmented version of the SELDnet \cite{Adavanne2019SoundEL, ComminielloICASSP2019a} which was proposed as the baseline for Task 2 of the L3DAS21 Challenge \cite{guizzo2021l3das21}, and we perform our experiments with the corresponding released dataset\footnote{The L3DAS21 dataset and code are available at: \url{https://github.com/l3das/L3DAS21}.}.
We consider as our baselines the SEDnet (without the localization part) and its quaternion counterpart. The L3DAS21 Task 2 dataset contains 15 hours of MSMP B-format Ambisonics audio recordings, divided into 900 1-minute-long data points sampled at $32$ kHz, where up to 3 acoustic events may overlap. The 14 sound classes have been selected from the FSD50K dataset and are representative of office sounds: \textit{computer keyboard, drawer open/close, cupboard open/close, finger snapping, keys jangling, knock, laughter, scissors, telephone, writing, chink and clink, printer, female speech, male speech}. In this dataset, the volume difference between the sounds lies in the range of $0$ to $20$ dB full scale (dBFS). Considering an array of two microphones $1$ and $2$, the channel order is [W1, Z1, Y1, X1, W2, Z2, Y2, X2], where W, X, Y, Z are the B-format Ambisonics channels, if the phase (p) information is not considered. If we also include this information, the order becomes [W1, Z1, Y1, X1, W1p, Z1p, Y1p, X1p, W2, Z2, Y2, X2, W2p, Z2p, Y2p, X2p], for a total of $16$ channels. In Fig.~\ref{fig:l3das_dataset}, we show the $8$-channel input when considering one microphone and the phase information. Magnitudes and phases are normalized to be centered at $0$ with standard deviation $1$. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Figures/l3das_dataset_norm2res.pdf} \caption{Sample spectrograms from the L3DAS21 dataset recorded by one microphone with four capsules. The first four figures represent the magnitudes, while the last four contain the corresponding phase information. The black sections represent silent instants.} \label{fig:l3das_dataset} \end{figure} We perform experiments with multiple configurations of this dataset. We first test the recordings from one microphone considering the magnitudes only ($4$-channel input), then we test the networks with the signals recorded by two microphones, again with magnitudes only ($8$-channel input).
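The channel layouts above can be made concrete with a small preprocessing sketch that stacks the per-microphone magnitude (and, optionally, phase) spectrograms in the stated order and standardizes each channel. The function name and array shapes are our own assumptions, not the released L3DAS21 code.

```python
import numpy as np

def build_input(mags, phases=None):
    """Stack B-format spectrograms into the SEDnet input.

    mags:   dict mic_id -> (4, T, F) array ordered [W, Z, Y, X]
    phases: optional dict with the same layout, interleaved per mic
            as [W1, Z1, Y1, X1, W1p, Z1p, Y1p, X1p, W2, ...]
    Each channel is standardized to zero mean and unit std.
    """
    stacks = []
    for mic in sorted(mags):
        stacks.append(mags[mic])
        if phases is not None:
            stacks.append(phases[mic])
    x = np.concatenate(stacks, axis=0)       # (C, T, F), C = 4, 8 or 16
    mean = x.mean(axis=(1, 2), keepdims=True)
    std = x.std(axis=(1, 2), keepdims=True) + 1e-8
    return (x - mean) / std
```

With two microphones the result has $8$ channels, and $16$ once the phases are interleaved, matching the orderings listed above.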
The features extracted by the preprocessing are fed to a four-layer convolutional stack with $64,128,256,512$ filters, with batch normalization, ReLU activation, max pooling and dropout (probability $0.3$), with pooling sizes $(8,2),(8,2),(2,2),(1,1)$. The bidirectional GRU module has three layers, each with a hidden size of $256$. The tail is a four-layer fully connected classifier with $1024$ units, alternated with ReLUs and followed by a final dropout and a sigmoid activation function. The initial learning rate is set to $0.00001$. To be consistent with the metrics in the pre-existing literature, we define True Positives as TP, False Positives as FP and False Negatives as FN. These are computed according to the detection metric \cite{guizzo2021l3das21}. Moreover, in order to compute the Error Rate (ER), we consider $\text{S} = \min(\text{FN}, \text{FP})$, $\text{D} = \max(0, \text{FN}-\text{FP})$ and $\text{I} = \max(0, \text{FP}-\text{FN})$, as in \cite{Adavanne2019SoundEL, SED2021Mesaros}. Then, we compute: \begin{equation*} \text{F\textsubscript{score}} = \frac{2 \text{TP}}{2\text{TP} + \text{FP} + \text{FN}}, \end{equation*} \begin{equation*} \text{ER} = \frac{\text{S} + \text{D} + \text{I}}{\text{N}}, \end{equation*} \noindent where $N$ is the total number of active sound event classes in the reference. The SED\textsubscript{score} is defined by: \begin{equation*} \text{SED\textsubscript{score}} = \frac{\text{ER} + 1 - \text{F\textsubscript{score}}}{2}. \end{equation*} For the ER and the SED\textsubscript{score}, lower values mean better performance, while for the F\textsubscript{score} higher values indicate better accuracy. \subsection{Experimental Results} \begin{table*}[t] \caption{SEDnet results with one microphone ($4$-channel input). Scores are computed over three runs with different seeds and we report the mean.
The proposed method with $n=2$ far exceeds the baselines in each metric considered.} \label{tab:sed_4c} \begin{center} \begin{tabular}{llccccc} \multicolumn{1}{c}{\bf Model} &\multicolumn{1}{c}{\bf Conv Params} &\multicolumn{1}{c}{\bf \text{F\textsubscript{score}} $\uparrow$} &\multicolumn{1}{c}{\bf ER $\downarrow$} &\multicolumn{1}{c}{\bf \text{SED\textsubscript{score}} $\downarrow$} &\multicolumn{1}{c}{\bf P $\uparrow$} &\multicolumn{1}{c}{\bf R $\uparrow$} \\ \hline \\ SEDnet & 1.6M & 0.637 & \underline{0.450} & \underline{0.406} & 0.756 & \underline{0.5505} \\ Quaternion SEDnet & 0.4M (-75\%) & 0.580 & 0.516 & 0.468 & 0.724 & 0.484 \\ PHSEDnet $n=2$ & 0.8M (-50\%) & \textbf{0.680} & \textbf{0.389} & \textbf{0.355} & \textbf{0.767} & \textbf{0.611} \\ PHSEDnet $n=4$ & 0.4M (-75\%) & \underline{0.638} & 0.453 & 0.407 & \underline{0.765} & 0.547\\ \end{tabular} \end{center} \end{table*} \begin{table*}[t] \caption{SEDnet results with two microphones ($8$-channel input). Scores are computed over three runs with different seeds and we report the mean. The PHSEDnet $n=2$ outperforms the baselines. For the training time (seconds/iteration) we report the mean and standard deviation over one epoch, while for the inference time we report the time required to perform an iteration on the validation set.
PH-based models far exceed the baselines in both training and inference time.} \label{tab:sed_8c} \begin{center} \begin{tabular}{llccccccc} \multicolumn{1}{c}{\bf Model} &\multicolumn{1}{c}{\bf Conv Params} &\multicolumn{1}{c}{\bf \text{F\textsubscript{score}} $\uparrow$} &\multicolumn{1}{c}{\bf ER $\downarrow$} &\multicolumn{1}{c}{\bf \text{SED\textsubscript{score}} $\downarrow$} &\multicolumn{1}{c}{\bf P $\uparrow$} &\multicolumn{1}{c}{\bf R $\uparrow$} &\multicolumn{1}{c}{\bf Time (T)} &\multicolumn{1}{c}{\bf Time (I)} \\ \hline \\ SEDnet & 1.6M & \underline{0.663} & \underline{0.428} & \underline{0.383} & \textbf{0.788} & \underline{0.572} & 1.242 $\pm$ 0.088 & 1.198 \\ Quaternion SEDnet & 0.4M (-75\%) & 0.559 & 0.556 & 0.499 & 0.754 & 0.444 & 1.308 $\pm$ 0.088 & 1.298 \\ PHSEDnet $n=2$ & 0.8M (-50\%) & \textbf{0.669} & \textbf{0.406} & \textbf{0.368} & \underline{0.767} & \textbf{0.594} & \textbf{1.091 $\pm$ 0.074} & \underline{1.085} \\ PHSEDnet $n=4$ & 0.4M (-75\%) & 0.638 & 0.433 & 0.397 & 0.729 & 0.567 & \textbf{1.091 $\pm$ 0.032} & \textbf{1.077} \\ PHSEDnet $n=8$ & 0.2M (-87\%) & 0.553 & 0.560 & 0.503 & 0.747 & 0.439 & \underline{1.142 $\pm$ 0.042} & 1.173 \\ \end{tabular} \end{center} \end{table*} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{Figures/radar_sed.pdf} \caption{Radar plot of the SEDnet results on the L3DAS21 dataset with two microphones. The larger the area, the better the results. With the same computational time, PHC $n=2$ achieves better scores than PHC $n=4$ at the cost of more parameters. The real-valued SEDnet, although achieving decent SED scores, has a high computational time demand as well as the largest number of parameters.} \label{fig:radar} \end{figure} We investigate PHSEDnets in the complex, quaternion and octonion domains with $n=2,4,8$ and train each network for $1000$ epochs with a batch size of $16$.
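The scores reported in the tables follow directly from the definitions in the setup; a minimal sketch, assuming the segment-level counts TP, FP and FN are already given by the detection metric:

```python
def sed_scores(tp, fp, fn, n_ref):
    """F-score, error rate and SED score from segment-level counts.

    n_ref is N, the total number of active sound events in the reference.
    """
    f = 2 * tp / (2 * tp + fp + fn)
    s = min(fn, fp)          # substitutions
    d = max(0, fn - fp)      # deletions
    i = max(0, fp - fn)      # insertions
    er = (s + d + i) / n_ref
    return f, er, (er + 1 - f) / 2
```

For instance, `sed_scores(8, 2, 2, 10)` gives F\textsubscript{score} $=0.8$, ER $=0.2$ and SED\textsubscript{score} $=0.2$.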
The proposed parameterized hypercomplex SEDnets distinctly outperform the real and quaternion-valued baselines, as reported in Tables \ref{tab:sed_4c} and \ref{tab:sed_8c}. Indeed, the PHSEDnet with $n=2$ obtains the best results for every score on both the one- and two-microphone datasets, proving that the weight sharing due to the hypercomplex parameterization is able to capture more information despite the lower number of parameters. It is interesting to note that the PHSEDnet with $n=4$, which operates in the quaternion domain, achieves better scores than the Quaternion SEDnet, which follows rigid, predefined algebra rules. Further, the malleability of PHC layers allows gaining comparable performance with respect to the quaternion baseline while reducing the convolutional parameters by $87\%$, simply by setting $n=8$. In Section \ref{ssubsec:sedresults}, we show additional experimental results of PH models able to save $94\%$ of the convolutional parameters while operating in the sedenion domain by setting $n=16$. Furthermore, PHSEDnets are more efficient in terms of the time required for training and inference. Table \ref{tab:sed_8c} also shows that each tested version of the proposed method is faster than both the real SEDnet and the quaternion one, at training as well as at inference time. Time efficiency is crucial in audio applications, where networks are usually trained for thousands of epochs and datasets are very large and require protracted computations. Figure~\ref{fig:radar} summarizes the number of parameters, the metric scores and the computational time in a radar plot, from which it is clear that the PHSEDnet with $n=2$ obtains the best scores and a large time saving, at the cost of more parameters than every other version but the real one. A good trade-off is provided by the PH model with $n=4$, which further reduces the number of parameters at the cost of a slightly worse SED$_\text{score}$ and ER.
Moreover, the real-valued SEDnet is capable of obtaining fair scores while having the largest number of parameters and a high computational time demand. \section{Ablation Studies} \label{sec:abl} \subsection{Fewer parameters do not lead to higher generalization} \label{ssubsec:imageresults} \begin{table}[] \caption{Experiments on the SVHN dataset with the smallest network from each family, ResNet20 and VGG11, the latter with the number of filters and of units in the closing FC classifier modified so as to be divisible by each value of $n$. We also test the PHNN with $n=1$, which replicates the real domain and outperforms the real-valued ResNet20.} \label{tab:img_class_app2} \begin{center} \begin{tabular}{llc} \multicolumn{1}{c}{\bf Model} &\multicolumn{1}{c}{\bf Params} &\multicolumn{1}{c}{\bf SVHN} \\ \hline \\ ResNet20 & 0.27M & 90.463 \\ Quaternion ResNet20 & 0.07M (-75\%) & 93.535 \\ PHResNet20 $n=1$ & 0.27M & \textbf{93.796} \\ PHResNet20 $n=2$ & 0.14M (-50\%) & \underline{93.708} \\ PHResNet20 $n=4$ & 0.07M (-75\%) & 93.669 \\ \hline VGG11 & 13.8M & 93.488 \\ Quaternion VGG11 & 3.9M (-71\%) & 92.888 \\ PHVGG11 $n=2$ & 7.2M (-48\%) & \textbf{93.958} \\ PHVGG11 $n=3$ & 5.0M (-64\%) & 93.804 \\ PHVGG11 $n=4$ & 3.9M (-71\%) & \underline{93.919} \\ \end{tabular} \end{center} \end{table} \begin{table}[] \caption{The first lines report VGG16 results with a real-valued classifier for the quaternion and PH networks, extending Table \ref{tab:img_class}. We then report additional experiments with ResNet56 and ResNet110, the latter with the number of filters modified so as to be divisible by each value of $n$.
The accuracy score is the mean over three runs with different seeds.} \label{tab:img_class_app3} \begin{center} \begin{tabular}{llcc} \multicolumn{1}{c}{\bf Model} &\multicolumn{1}{c}{\bf Params} &\multicolumn{1}{c}{\bf SVHN} &\multicolumn{1}{c}{\bf CIFAR10} \\ \hline \\ Quaternion VGG16 & 4.2M (-72\%) & 94.086 & 84.126 \\ PHVGG16 $n=2$ & 7.9M (-62\%) & \textbf{94.885} & \textbf{86.147} \\ PHVGG16 $n=4$ & 4.2M (-72\%) & \underline{94.562} & \underline{85.710} \\ \hline ResNet56 & 0.9M & \underline{94.116} & \textbf{83.700} \\ Quaternion ResNet56 & 0.2M (-75\%) & 93.664 & 81.687 \\ PHResNet56 $n=2$ & 0.4M (-50\%) & 93.722 & \underline{83.413} \\ PHResNet56 $n=4$ & 0.2M (-75\%) & \textbf{94.122} & 82.720 \\ \hline ResNet110 & 16.7M & 93.461 & 84.810 \\ Quaternion ResNet110 & 4.2M (-75\%) & 92.788 & 83.920 \\ PHResNet110 $n=2$ & 8.4M (-50\%) & 93.746 & 83.220 \\ PHResNet110 $n=3$ & 5.6M (-66\%) & \underline{94.712} & \underline{85.200} \\ PHResNet110 $n=4$ & 4.2M (-75\%) & \textbf{94.885} & \textbf{85.280} \\ \end{tabular} \end{center} \end{table} In the following, we demonstrate that the higher accuracies achieved by our method are not simply caused by the parameter reduction, which might otherwise act as a regularizer and improve generalization. To this end, we perform multiple experiments. First, we test lighter ResNets that were originally built for the CIFAR10 dataset \cite{Resnet2016}: ResNet20, ResNet56 and ResNet110. Second, we also consider the smallest VGG network, namely VGG11, which has $14$M parameters. Finally, we perform experiments on SVHN, CIFAR10 and CIFAR100 with the larger ResNet18, ResNet50 and ResNet152, reducing the number of filters so as to have the same number of parameters as the quaternion and PHNN with $n=4$ counterparts ($75\%$ fewer). Table \ref{tab:img_class_app2} reports experiments with ResNet20, where we also test $n=1$ to replicate the real-valued model, outperforming it.
Experiments with VGG11, with the number of filters modified so as to be divisible by each value of $n$, are also reported in the same table. Finally, in Table \ref{tab:img_class_app3} we report experiments on SVHN and CIFAR10 with ResNet56 and ResNet110, the latter with a modified number of filters. PH models achieve good performance in every test we conduct while reducing the number of free parameters. Indeed, the PHResNet20s reach almost $94\%$ accuracy on the SVHN dataset with just $70$k parameters. \begin{table}[t] \caption{Real-valued ResNets with convolutional parameters reduced by $75\%$, denoted by (s). Full models exceed the reduced versions in each experiment, proving that a smaller number of parameters does not lead to higher generalization capabilities.} \label{tab:red_real} \begin{center} \begin{tabular}{llccc} \multicolumn{1}{c}{\bf Model} &\multicolumn{1}{c}{\bf Params} &\multicolumn{1}{c}{\bf SVHN} &\multicolumn{1}{c}{\bf CIFAR10} &\multicolumn{1}{c}{\bf CIFAR100} \\ \hline \\ ResNet18 & 10.1M & \textbf{93.992} & \textbf{89.543} & \textbf{62.634} \\ ResNet18 (s) & 2.7M (-75\%) & 93.842 & 88.310 & 59.590 \\ \hline ResNet50 & 22.5M & \textbf{94.546} & \textbf{89.630} & \textbf{65.514} \\ ResNet50 (s) & 5.7M (-75\%) & 93.915 & 89.370 & 62.450 \\ \hline ResNet152 & 52.6M & \textbf{94.625} & \textbf{89.580} & \textbf{62.053} \\ ResNet152 (s) & 13.2M (-75\%) & 94.400 & 89.001 & 60.850 \\ \end{tabular} \end{center} \end{table} Finally, in order to further rule out the hypothesis that a smaller number of neural parameters leads to higher generalization capabilities, we perform experiments with real-valued baselines whose number of parameters is reduced by $75\%$. Table \ref{tab:red_real} shows that reducing the number of filters degrades the performance: shrinking a model is thus not sufficient to improve its generalization capabilities.
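One reason the reduced baselines lose accuracy is that conv parameters scale with the product of the input and output channel counts, so a $75\%$ parameter cut roughly halves every channel width in the uniform case, shrinking the representation, whereas a PHC layer with $n=4$ reaches the same budget at full width through weight sharing. A toy calculation with assumed layer sizes:

```python
def conv_params(c_in, c_out, k):
    """Parameter count of a k x k convolution (no bias)."""
    return c_in * c_out * k * k

full = conv_params(256, 256, 3)        # real-valued layer, full width
halved = conv_params(128, 128, 3)      # widths halved: -75% parameters
phc_n4 = conv_params(256, 256, 3) // 4 # PHC n=4: same -75%, full width

assert halved == phc_n4 == full // 4
```

Both reduced variants store a quarter of the parameters, but only the PHC layer preserves the $256$-channel representation.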
We do not include standard deviations for the values in the ablation studies, as they are similar to those of the previous experiments, in order to favor readability. \subsection{Pushing the hyperparameter $n$ up to $16$} \label{ssubsec:sedresults} In the following, we perform additional experiments for the sound event detection task. We conduct a test considering two microphones and the phase information, so as to have an input with $16$ channels. For this purpose, we consider the quaternion model as baseline, together with PHNNs with $n=4,8,16$, so as to test higher-order domains. The Quaternion SEDnet and the PHSEDnet with $n=4$ manage the $16$ channels by grouping them into four components: one component containing the magnitudes of the first microphone, one the phases of the same microphone, and so on. Therefore, the details coming from the magnitudes, which are the most important for sound event detection, are grouped together without properly exploiting this information. On the contrary, employing PHC layers allows the model to process the information without coarsely grouping channels, instead leveraging every channel by simply setting $n$ equal to the number of channels, in this case $16$. From Table~\ref{tab:sed_app}, it is clear that employing a $4$-component model such as the quaternion one or the PHC with $n=4$ does not lead to higher performance, despite the higher number of parameters. Indeed, the best scores are obtained with the PHC models with $n=8$ and $n=16$, which are able to grasp information from each channel. \begin{table*}[t] \caption{SED results with two microphones: magnitudes and phases ($16$-channel input). We test higher-order hypercomplex domains up to sedenions by setting $n=16$. Despite the remarkable reduction in the number of parameters with respect to the real-valued baseline in Table \ref{tab:sed_8c}, the PHNN with $n=16$ still achieves comparable performance with the other models.
Furthermore, the PHSEDnet with $n=8$ also outperforms the quaternion baseline, which has more degrees of freedom.} \label{tab:sed_app} \begin{center} \begin{tabular}{lcccccc} \multicolumn{1}{c}{\bf Model} &\multicolumn{1}{c}{\bf Conv Params} &\multicolumn{1}{c}{\bf \text{F\textsubscript{score}} $\uparrow$} &\multicolumn{1}{c}{\bf ER $\downarrow$} &\multicolumn{1}{c}{\bf \text{SED\textsubscript{score}} $\downarrow$} &\multicolumn{1}{c}{\bf P $\uparrow$} &\multicolumn{1}{c}{\bf R $\uparrow$} \\ \hline \\ Quaternion SEDnet & 0.4M (-75\%) & 0.580 & 0.480 & 0.450 & 0.655 & 0.520 \\ PHSEDnet $n=4$ & 0.4M (-75\%) & 0.585 & \underline{0.470} & \underline{0.443} & 0.653 & \underline{0.530} \\ PHSEDnet $n=8$ & 0.2M (-87\%) & \textbf{0.607} & \textbf{0.466} & \textbf{0.430} & \underline{0.702} & \textbf{0.534} \\ PHSEDnet $n=16$ & 0.1M (-94\%) & \underline{0.588} & 0.509 & 0.461 & \textbf{0.734} & 0.491 \\ \end{tabular} \end{center} \end{table*} \section{Conclusion} \label{sec:conc} In this paper, we introduce a parameterized hypercomplex convolutional (PHC) layer which grasps the convolution rules directly from data and can operate in any domain, from $1$D to $n$D, without preset algebra rules. The proposed approach reduces the convolutional parameters to $1/n$ with respect to real-valued counterparts and allows capturing internal latent relations thanks to parameter sharing among input dimensions. Employing this method, jointly with the one in \cite{Zhang2021PHM}, we devise the family of parameterized hypercomplex neural networks (PHNNs), a set of lightweight and efficient neural models exploiting hypercomplex algebra properties for increased performance and high flexibility. We show that our method is flexible enough to operate in different fields of application by performing experiments with images and audio signals.
We also prove the malleability and robustness of our approach in learning convolution rules in any domain by setting the hyperparameter $n$ to values from $2$ to $16$. \subsection*{CO2 Emission Related to Experiments} Experiments were conducted using a private infrastructure, which has a carbon efficiency of 0.445 kgCO$_2$eq/kWh. A cumulative of 2000 hours of computation was performed on hardware of type Tesla V100-SXM2-32GB (TDP of 300W). Total emissions are estimated to be 267 kgCO$_2$eq, of which 0 percent was directly offset. Estimations were conducted using the \href{https://mlco2.github.io/impact#compute}{Machine Learning Impact calculator} presented in \cite{lacoste2019quantifying}. In more detail, considering an experiment for the sound event detection (SED) task, according to Table \ref{tab:sed_8c} the real-valued baseline requires approximately 20 hours for training and validation, with corresponding carbon emissions of $2.71$ kgCO$_2$eq. Conversely, the proposed PH model takes approximately $17$ hours, corresponding to $2.28$ kgCO$_2$eq, a $16\%$ reduction in carbon emissions. In conclusion, we believe that the improved efficiency of our method with respect to standard models may be a small step towards reducing carbon emissions. \bibliographystyle{IEEEtran}